AI Companies Ill-Equipped to Handle Risks of Developing Human-Level Systems, Report Warns

The Future of Life Institute says AI companies are "fundamentally unprepared" for building human-level AI. None scored above a D on existential safety planning in its new AI safety index.

The index reviewed seven major AI players: Google DeepMind, OpenAI, Anthropic, Meta, xAI, plus China’s Zhipu AI and DeepSeek. Anthropic topped safety with a C+, followed by OpenAI at C and DeepMind at C-.

One of the report’s reviewers criticized the firms for lacking “coherent, actionable plans” to keep powerful AI safe and controllable. The group warned that although these companies are racing toward artificial general intelligence (AGI), none of them is prepared for the risks it would pose.

Max Tegmark, FLI co-founder and MIT professor, highlighted the urgency:

“It’s as if someone is building a gigantic nuclear power plant in New York City and it is going to open next week – but there is no plan to prevent it having a meltdown.”

The report points to AI’s rapid progress, with models such as xAI’s Grok 4 and Google’s Gemini 2.5 pushing the limits since the global AI summit in Paris. But safety measures are not keeping pace.

Google DeepMind pushed back, saying the report missed “all of Google DeepMind’s AI safety efforts” and promised their approach “extends well beyond what’s captured.”

OpenAI, Anthropic, Meta, xAI, Zhipu AI, and DeepSeek were also contacted for comment.

Another group, SaferAI, released a similarly harsh report Thursday, calling AI firms’ risk management practices “weak to very weak” and “unacceptable.” The clock is ticking, but don’t expect a serious safety blueprint from today’s leading AI makers anytime soon.
