Facebook is facing fresh backlash over deepfake videos surfacing on its platform. The controversy began when users noticed AI-generated clips that manipulated public figures’ speeches, and concerns quickly escalated over misinformation spreading unchecked.
The backlash follows reports that Facebook’s moderation tools failed to flag the deepfakes quickly. Critics say the company is not doing enough to police harmful synthetic content, and users are demanding faster action on AI-generated material.
Facebook responded with a statement promising "improved detection systems" and increased investment in AI moderation.
The company’s response comes amid broader worries about AI’s role in fueling disinformation online.
A Facebook spokesperson stated:
"We recognize the challenge posed by synthetic media and are committed to enhancing our detection capabilities. Our teams are actively working to identify and remove harmful content faster to protect our community."