Google DeepMind is facing backlash after launching Veo 3, an AI tool that can create hyper-realistic eight-second videos from text prompts. The tool is widely accessible and generates visuals and sound so lifelike they’re nearly impossible to distinguish from reality.
Concerns escalated after GeoConfirmed, an open-source verification group, reported a surge in AI-generated misinformation, including fake videos of missile strikes in Tehran and Tel Aviv. Similar fabrications circulated during recent protests in Los Angeles.
Soon after the launch, Al Jazeera was able to produce fake protest footage and missile strike clips with Veo 3, bypassing its supposed blocks on “harmful content”. The videos were convincing enough to fool security experts.
Ben Colman, CEO of deepfake detection firm Reality Defender, warned about the scale of the threat.
“I recently created a completely synthetic video of myself speaking at Web Summit using nothing but a single photograph and a few dollars. It fooled my own team, trusted colleagues, and security experts,” Colman said.
“If I can do this in minutes, imagine what motivated bad actors are already doing with unlimited time and resources.”
“We’re not preparing for a future threat. We’re already behind in a race that started the moment Veo 3 launched. Robust solutions do exist and work — just not the ones the model makers are offering as the be-all, end-all.”
Google says it takes safety seriously, pointing to invisible SynthID watermarks embedded in all of its AI-generated content and visible watermarks on Veo videos. But experts say those protections were not fully in place at launch and have called the rollout reckless.
Joshua McKenty, CEO of deepfake detection startup Polyguard, accused Google of rushing.
“Google’s trying to win an argument that their AI matters when they’ve been losing dramatically,” McKenty said.
“They’re like the third horse in a two-horse race. They don’t care about customers. They care about their own shiny tech.”
Sukrit Venkatagiri of Swarthmore College criticized the AI industry as a whole.
“Companies are in a weird bind. If you don’t develop generative AI, you’re seen as falling behind and your stock takes a hit,” Venkatagiri said.
“But they also have a responsibility to make these products safe when deployed in the real world. I don’t think anyone cares about that right now. All of these companies are putting profit — or the promise of profit — over safety.”
Google’s own 2023 research flagged generative AI as a major misinformation risk, yet the company pushed ahead with Veo 3’s release.
The fallout is already visible. Within days of Veo 3’s release, fake videos mimicking news broadcasts of home break-ins, fabricated celebrity incidents, and staged protests flooded social media.
Alejandra Caraballo of Harvard Law’s Cyberlaw Clinic, who tested Veo 3, warned the tool makes creating fake news trivial and fast.
“What’s worrying is how easy it is to repeat. Within ten minutes, I had multiple versions. This makes it harder to detect and easier to spread,” Caraballo wrote.
“The lack of a chyron [banner on a news broadcast] makes it trivial to add one after the fact to make it look like any particular news channel.”
A Penn State University study found that 48 percent of people were fooled by fake videos shared on social media. Younger adults are, counterintuitively, more vulnerable because they rely on social networks with weak editorial controls. A recent UNESCO survey found that 62 percent of news influencers do not fact-check information before sharing it.
Alternatives to Veo 3, such as Deepbrain and Synthesia, offer AI avatars and dubbing but lack Veo’s ability to generate full scenes from scratch.
Ben Colman summed up the risk:
“By the time fake content spreads across platforms that don’t check these markers [which is most of them], through channels that strip them out, or via bad actors who’ve learned to falsify them, the damage is done.”
Google declined an interview request from Al Jazeera. Deepbrain and CBS News Texas also didn’t respond to queries about related incidents.
Veo 3’s launch spotlights how quickly AI video synthesis is outpacing safety measures, and the wave of misinformation may only be getting started.