Feds eye new powers to block platforms that fail to combat harmful content online

OTTAWA -- The federal government is proposing the creation of new powers to block online platforms that repeatedly refuse to take down harmful content, and is looking at involving CSIS when it comes to combating online threats to national security and child exploitation content.

In launching a new proposal for how to tackle harmful online content, Canada's justice, public safety, and heritage ministers announced Thursday that they want to bring in new laws and regulations to force social media companies to be more accountable for five kinds of harmful content on their platforms: hate speech, child exploitation, the sharing of non-consensual images, incitement to violence, and terrorism.

And, they're looking at what role federal security and intelligence agencies could play in enforcing these new rules, as well as the potential to completely block access within Canada to platforms that fail to act on content on their services that is deemed harmful.

Departmental officials outlined the proposal and issued a technical discussion paper detailing their legislative aims on Thursday.

Specifically, the government is proposing to create new rules and a compliance regime for online communication services that would force these companies to address harmful content posted on their platforms, including a requirement to review and remove, if necessary, problematic content within 24 hours of the post being flagged.

The sites may also be obliged, in certain circumstances, to preserve content and identifying information for potential future legal action. They could also have new options to alert authorities to potentially illegal content and content of national security concern if an imminent risk of harm is suspected.

The proposal would also compel platforms to provide data on their algorithms and other systems that scour for and flag potentially harmful content, provide a rationale when action is taken on flagged posts, and install a new system for Canadians to appeal platforms' content-moderation decisions.

The new regime comes with a series of proposed, severe sanctions for companies deemed to be repeatedly non-compliant. These consequences would include fines of up to five per cent of the company's annual global revenue or $25 million, whichever is higher.

And, as a last resort, if an online platform repeatedly failed to remove child sexual exploitation material or terrorist content, the government would seek the legal ability to block Canadians from accessing that service at all, through a court injunction forcing telecommunications service providers to restrict access to that site or service.