The federal government has gone back to the drawing board after receiving overwhelmingly negative public feedback to its proposed legislative and regulatory blueprint for tackling online “harmful content.”
During the federal election last fall, the Liberals pledged to introduce legislation to regulate online harms within 100 days of forming government.
At the end of that period, on Feb. 3, 2022, Canadian Heritage Minister Pablo Rodriguez, joined by Justice Minister David Lametti and Public Safety Minister Marco Mendicino, instead announced that the government is applying the brakes and “will engage a group of experts whose mandate will be to collaborate with stakeholders and Canadians, in order to provide the government with advice on how to adjust the proposal.”
Pledging to work “in a transparent and expedited manner,” the government said it will propose a revised framework “as soon as possible” and “as quickly as possible.”
“We are committed to ensuring that online platforms provide safe and respectful experiences for Canadians to engage and share information with one another,” Rodriguez said in a Feb. 3 statement. “This is a very important and complex issue. ... We will continue to engage stakeholders and Canadians in order to get this right.”
From July 29, 2021, to Sept. 25, 2021, the government conducted a public consultation on a detailed technical discussion paper setting out its proposed approach to regulating online platforms such as Facebook, Instagram, Twitter, YouTube, TikTok and Pornhub, and to combating five kinds of “harmful content online”: terrorist content; content that incites violence; hate speech; non-consensual sharing of intimate images; and content that sexually exploits children.
As the government acknowledged in its summary report of 8,796 submissions, including 422 unique responses from individuals, civil society, industry, academics and government-type organizations, most respondents agreed the federal government should in some way regulate harmful content online, but “only a small number of submissions from those stakeholders were supportive, or mostly supportive, of the framework as a whole.”
“Respondents identified a number of overarching concerns, including concerns related to the freedom of expression, privacy rights, the impact of the proposal on certain marginalized groups, and compliance with the Canadian Charter of Rights and Freedoms more generally,” the government summed up Feb. 3 in “What We Heard: The Government’s Proposed Approach to Address Harmful Content Online.”
A key overarching message was to “proceed with caution,” the government’s summary consultation report says.
Many respondents emphasized that the approach Canada adopts to the diverse cutting-edge legal, social, political, practical and business issues implicated in regulating online harms “would serve as a benchmark” for other governments which are also working on the same problems (the U.K., Australia, France and Germany are also taking action) “and would contribute significantly to international norm-setting.”
A legal authority on Internet regulation, Michael Geist, Canada Research Chair in Internet and E-commerce Law at the University of Ottawa’s common law faculty, credited the government for being “remarkably candid” in acknowledging “the near-universal criticism that its plans sparked.”
Geist called Rodriguez’s plan to convene a new expert panel to develop revised recommendations for a legislative proposal “a good start,” but advised the government to start its task with the suggestions it received from its consultation.
“They provide not only criticism of the government’s initial plan, but also feature a genuine attempt to offer constructive reforms that better balance the widely held desire to address online harms with the need to safeguard freedom of expression, privacy, and fundamental rights and freedoms,” Geist told The Lawyer’s Daily.
OpenMedia, a civil society group that advocates keeping the Internet “open, affordable and surveillance-free,” urged the government not to rush its creation of a legislative and regulatory blueprint as “it will take a lot of time to work out fixes to everything that was wrong in last year’s proposal. Any proposal that impacts our online freedom of expression and privacy is incredibly sensitive, and getting it right via thorough and sustained consultation is far more important than moving it down the government’s fast track.”
Matt Hatfield, OpenMedia’s campaigns director, told The Lawyer’s Daily the federal government should “be much more thoughtful about the impact of its plan in practice, not merely their intent, for the next round of draft legislation development. The government’s first stab at this proposal would have strongly influenced online platforms to remove far more legal speech than illegal speech, much of it by automated algorithms, and report huge quantities of lawful speech, or relatively low-level illegal speech, directly to law enforcement. That in no way replicates our balance of rights and freedoms from offline spaces on the Internet — it creates a new status quo biased towards censorship and surveillance.”
Hatfield pointed, for example, to aspects of the government’s proposal that his group sees as counterproductive, suggesting they could hurt the very vulnerable and marginalized members of society the regulation of online harms is supposed to help protect. “Excessive mandatory speed requirements [e.g. to take down content] with inflexible required response creates a weapon for bad actors to target and attack marginalized communities with, not to protect them,” said Hatfield, whose group saw more than 5,000 of its members e-mail Rodriguez to demand “a thorough reworking of the harmful content proposal.”
“No one learns the rules around speech like hateful groups, and no one is better at staying within those rules, while policing their victims for technical violation,” Hatfield remarked. “Preserving some flexibility for platforms to respond to the apparent intent of mass reporting, not a strict and narrow rule, is an important piece of the puzzle on moderating content.”
The government provides a fairly detailed summary of the issues raised by the public, without identifying the respondents (the government did not make the submissions themselves public, although Geist has posted the known ones on his website).
Among the “prominent” criticisms the government highlighted were:
- The scope of regulated entities, i.e. “online communication service providers” (which captures major platforms but exempts private communications), was “too broad,” and it was unclear which services would be included and what the threshold would be. Would services like personal websites, blogs or message boards, for example, fall within the scope of regulation?
- The definition of a regulated entity overlooked the significant disparity in capacity between larger and smaller companies, possibly putting an unnecessary burden on smaller companies to comply with the extensive associated requirements. “There was a strong desire to have smaller platforms be exempt from the framework.”
- Most respondents criticized the proposed regulatory regime for introducing types of content that are too diverse to be treated within the same regime. The “Recourse Council” would not be able to judge such a variety of content, regardless of how well qualified its members might be. Many suggested that each type of content requires a specialized approach or maybe an entirely separate regulatory regime.
- Many respondents criticized the lack of definitional detail for the harms to be regulated. While the proposal said the definitions would be based on corresponding Criminal Code offences, it did not specify which offences would be included. Many questioned how platforms could assess whether content falls within one of the five categories: a “content moderation” professional with no legal training could quickly resort to bias in deciding which content to remove, leading to a chilling effect with disparate impact on marginalized communities and “over-censorship” of lawful expression writ large.
- Many cautioned against opening up the categories of harmful content to include speech that is harmful, but nevertheless legal (aka “lawful awful” speech). Requiring the removal of such speech would raise the risk of undermining access to information, limiting Charter-protected free expression, and restricting the ideas and viewpoints that are necessary in a free and democratic society.
- Most stakeholders flagged as “extremely problematic” the proposed proactive monitoring obligation on platforms to take all reasonable measures to identify harmful content on their platforms, and make it inaccessible to persons in Canada — a de facto system of prior restraint and inconsistent with the right to privacy. “By forcing platforms to proactively monitor their websites, as opposed to only moderating content by reactively responding to user flags, the regime would effectively force platforms to censor expression, in some instances prior to that expression being viewed by others.”
- Many respondents called for the removal of the 24-hour takedown period included in the proposed framework. It would incentivize platforms to be overvigilant and over-remove content simply to avoid non-compliance with the removal window. Nearly all said 24 hours is not enough time to thoughtfully respond to flagged content, nor would it allow for “judicious, thoughtful analysis that balances” the Charter right to freedom of expression against the pressing policy objective of countering online harms. Some said the 24-hour requirement would see illegal content removed so quickly that police or CSIS would be prevented from identifying or investigating crimes or threats to public safety.
Under the government’s proposed blueprint last year, regulated entities would be required to do whatever was reasonable and within their power to monitor for the regulated categories of harmful content on their services, including through the use of automated systems based on algorithms.
Once platform users flagged content, regulated entities would be required to respond to the flagged content by assessing whether it should be made inaccessible in Canada, according to the definitions outlined in legislation. If the content met the legislated definitions, the regulated entity would be required to make the content inaccessible from their service in Canada within 24 hours of being flagged. Regulated entities would also be required to establish robust flagging, notice and appeal systems for both authors of content, and those who flag content. Once a regulated entity made a determination on whether to make content inaccessible in Canada, they would be required to notify both the author of that content and the flagger of their decision, and give each party an opportunity to appeal that decision to the regulated entity.
The government’s proposal would create a new Digital Safety Commission of Canada to support three bodies that would operationalize, oversee, and enforce the new regime: the Digital Safety Commissioner of Canada, the Digital Recourse Council of Canada and an advisory board.
The commissioner’s powers would include: taking user complaints; proactively inspecting for compliance; issuing compliance orders; recommending to the proposed Personal Information and Data Protection Tribunal, in specific instances of non-compliance with legislative and regulatory obligations, administrative monetary penalties of up to $10 million or three per cent of an entity’s gross global revenue, whichever is higher; and referring offences for non-compliance with certain statutory obligations to prosecutors, with fines of up to $25 million or five per cent of an entity’s gross global revenue, whichever is higher.
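For illustration only (this is not drawn from the proposal’s text, and the function name and revenue figure below are hypothetical), the “whichever is higher” ceilings described above amount to taking the larger of a flat cap and a percentage of gross global revenue. A minimal sketch in Python:

```python
# Illustrative sketch only: the dollar caps and percentages come from the summary
# above; the function name and the sample revenue figure are hypothetical.

def penalty_ceiling(gross_global_revenue: float, flat_cap: float, revenue_share: float) -> float:
    """Return the maximum possible penalty: a flat cap or a share of revenue, whichever is higher."""
    return max(flat_cap, revenue_share * gross_global_revenue)

# Example: a platform with $1 billion in gross global revenue.
revenue = 1_000_000_000

amp_ceiling = penalty_ceiling(revenue, 10_000_000, 0.03)   # administrative monetary penalty ceiling: $30 million
fine_ceiling = penalty_ceiling(revenue, 25_000_000, 0.05)  # ceiling on fines upon prosecution: $50 million

print(f"AMP ceiling: ${amp_ceiling:,.0f}; fine ceiling: ${fine_ceiling:,.0f}")
```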
As an exceptional recourse, the commissioner could also apply to the Federal Court for an order requiring telecommunications service providers to implement a blocking or filtering mechanism to prevent access, in Canada, to all or part of a service that has repeatedly refused to remove child sexual exploitation and/or terrorist content. The commissioner could also collect and share information with other government departments and agencies for the purposes of administering the Act and other federal legislation.
Notably, the government’s proposal would make it easier for CSIS to obtain authorization from a Federal Court judge to get basic subscriber information, instead of using the current warrant application option, which the government said takes four to six months to develop and to obtain the Federal Court’s approval (Canadian police, by contrast, can obtain basic subscriber information in eight to 10 days, the government said). “It would not replace or eliminate CSIS’s requirement to obtain full warrant powers from the Federal Court should further investigation into the threat be necessary,” the government said in its discussion guide to its proposal.
If you have any information, story ideas or news tips for The Lawyer’s Daily, please contact Cristin Schmitz at Cristin.schmitz@lexisnexis.ca or call 613 820-2794.