Meta Aims to Automate Numerous Product Risk Assessments

The Meta AI app is displayed on a mobile phone with the Meta AI logo visible on a tablet in this photo illustration.

Meta is rolling out an AI-driven system to evaluate potential harms and privacy risks for updates across its apps, including Instagram and WhatsApp. Internal documents viewed by NPR reveal this could impact up to 90% of updates.

The requirement stems from a 2012 agreement between Facebook (now Meta) and the Federal Trade Commission mandating thorough privacy reviews of product updates. Until now, those evaluations were conducted by humans.

Under the new system, product teams fill out a questionnaire about a planned update and receive an “instant decision” from an AI, which lists the identified risks and the compliance measures required before launch.

Critics are concerned. A former executive told NPR the shift creates “higher risks,” suggesting that negative consequences of product changes may go unchecked before they cause problems in the world.

In a statement, a Meta spokesperson said the company has invested over $8 billion in its privacy program:

“As risks evolve and our program matures, we enhance our processes to better identify risks, streamline decision-making, and improve people’s experience. We leverage technology to add consistency and predictability to low-risk decisions and rely on human expertise for rigorous assessments and oversight of novel or complex issues.”

The AI-driven approach promises faster product rollouts for Meta, but critics warn it may come at the cost of weaker scrutiny.

This post has been updated with additional quotes from Meta’s statement.
