UK Activists Sound Alarm on Meta’s AI Risk Assessment Plans

Meta is facing backlash over plans to automate crucial risk assessments using AI. UK internet safety groups slammed the move, telling watchdog Ofcom it’s a “retrograde and highly alarming step.”

The row began after a report claimed that up to 90% of Meta’s risk assessments on Facebook, Instagram, and WhatsApp could soon be carried out by AI tools. Those assessments are central to compliance with the UK’s Online Safety Act, which requires platforms to identify and manage the risk of harm, particularly to children.

Campaigners including the Molly Rose Foundation and the NSPCC wrote to Ofcom chief executive Melanie Dawes, urging the regulator not to accept AI-driven risk assessments as “suitable and sufficient” under the law.

They also warned against platforms watering down risk checks.

Ofcom responded that it is “considering the concerns” and expects platforms to make clear who completed and approved their risk assessments.

Meta pushed back, saying the letter “deliberately misstated” its approach.

A Meta spokesperson said:

“We are not using AI to make decisions about risk.”

“Rather, our experts built a tool that helps teams identify when legal and policy requirements apply to specific products. We use technology, overseen by humans, to improve our ability to manage harmful content and our technological advancements have significantly improved safety outcomes.”

The Molly Rose Foundation raised its concerns after NPR reported that the AI-driven setup would let Meta roll out product updates faster, but with “higher risks” because of reduced human review.

Meta is also reportedly considering automating reviews of youth risk and misinformation.
