CUTS International flags deepfake regulation challenges amid rising risks from AI-generated synthetic media
Trust in public information is eroding as generative AI floods the information ecosystem with fabricated media. CUTS International, a Jaipur-based technology policy think tank, warns that synthetic media, including deepfakes, is becoming harder to detect and causes real harm that goes beyond misinformation.
The problem stems from AI-generated fakes spreading faster than they can be verified. India is especially exposed given low digital literacy and declining trust in traditional media. Regulation is emerging globally: the EU's AI Act requires transparency disclosures for deepfakes; the US has passed the Take It Down Act targeting non-consensual explicit AI content; the UK's Online Safety Act criminalizes intimate deepfakes; and India's MeitY has called for clear labeling of deepfakes.
The difficulty is that labeling alone is insufficient. A disclaimer attached to fabricated content does not shield victims from the harm it causes, and detection technology is far from foolproof: it can lend false legitimacy to undetected fakes or miss manipulated media entirely.
Operational hurdles compound the problem. Platforms must first identify AI-generated content and then assess whether it is harmful, which muddies enforcement and risks over-censorship. The case of a BJP MP whose genuine video was dismissed as a deepfake underscores how fragile trust has become.
Privacy is also at stake. Watermarking and metadata tracking threaten anonymity, which is vital for abuse survivors and LGBTQ+ users. Broad surveillance of content provenance risks treating every user as a suspect.
CUTS calls for context-driven regulation that scales obligations with the level of risk. Compliance should be backed by independent oversight involving civil society, and developers should face clear liability if they bypass safeguards.
The proposed Indian AI Safety Institute should lead open, data-driven rule-making and coordinate globally to catch emerging threats early.
Trust, privacy, accountability, and context matter most in synthetic media regulation.
Labels alone will not remedy harms or stop fake content from spreading.
Independent oversight and shared liability can rebuild trust in AI media.