Stuart Russell slams EU AI Act as too weak to stop AI-driven extinction risk.
The Berkeley AI expert says the current EU rules won't cut it. He joined an open letter with Nobel laureates Geoffrey Hinton and Daron Acemoglu pushing back against industry lobbying to water down the EU's general-purpose AI (GPAI) rules. The signatories want mandatory third-party audits to prove safety, instead of trusting companies' own claims.
Russell warned that the EU AI Act lets even extremely dangerous AI systems hit the market. In his view, big tech firms care only about avoiding regulation.
"To industry, it doesn’t matter what the document says. The companies want to have no regulation at all."
"Once you have systems that can take control of our civilization and planet, then fining a one-digit percentage of the maximum of global revenues is ridiculous."
He called for safety standards like those governing nuclear plants, only stricter. Yet companies barely understand how their own AI systems work, making real safety guarantees impossible.
Even EU Commission President Ursula von der Leyen has mentioned AI extinction risks and warned AI might rival human reasoning by next year. Yet strict rules are nowhere in sight.
Russell fears we’ll only see real regulation after a "Chernobyl-sized disaster." For now, he hopes the upcoming EU Code of Practice will at least force external safety tests.
"It wouldn’t be enough… but it would help considerably."