AI Pioneer Raises Safety Concerns, Proposes Solutions to Control It

FBI links AI to California fertility clinic bombing

Two men suspected of bombing a California fertility clinic last month reportedly used AI to get bomb-making instructions. The FBI revealed this shocking connection but didn’t name the AI tool involved.

This incident highlights the urgent need to make AI safer. Right now, AI development is a Wild West, with companies racing to build the fastest and flashiest systems at the expense of safety.


Just after the FBI’s revelation, AI pioneer Yoshua Bengio launched a nonprofit, LawZero, to build a safer AI model called “Scientist AI.” The model aims to be honest, safe, and transparent — designed to reduce harm from AI misuse.

Bengio said Scientist AI will:

  • Assess and share its confidence level in answers to avoid overconfidence.
  • Explain its reasoning clearly so humans can check its conclusions (both behaviors are sketched in code below).
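
To make those two behaviors concrete, here is a minimal Python sketch of an answer pipeline that reports a calibrated confidence score and exposes its reasoning steps, abstaining when confidence is low. Every name in it (Answer, answer_with_calibration, the threshold, the toy scores) is invented for illustration; LawZero has not published an implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Answer:
        text: str
        confidence: float  # calibrated probability the answer is correct
        reasoning: list[str] = field(default_factory=list)  # human-checkable steps

    def answer_with_calibration(question: str, threshold: float = 0.7) -> Answer:
        """Answer only when calibrated confidence clears the threshold;
        otherwise abstain rather than risk an overconfident reply."""
        # Toy stand-ins for a real model's draft answer, score, and rationale.
        draft, confidence = "42", 0.55
        steps = [
            "Parsed the question.",
            "Retrieved a candidate answer.",
            "Scored the candidate against known facts.",
        ]
        if confidence < threshold:
            return Answer("I am not confident enough to answer.", confidence, steps)
        return Answer(draft, confidence, steps)

    print(answer_with_calibration("What is the answer to everything?"))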

These features were common in earlier AI systems but have been abandoned in today's fast-paced AI race.

“Scientist AI” will be “honest and not deceptive,” Bengio said, adding it will include safety-by-design principles.

Scientist AI will also monitor other, less reliable AI systems, fighting harmful AI with safer AI. That is a practical necessity: humans cannot review the billions of AI queries made every day.
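
The monitoring idea follows a familiar guardrail pattern: a safety model screens another model's output before it reaches the user. The Python sketch below shows the shape of that loop; the keyword-based risk scorer is a crude stand-in for the trained monitor a real system would use, and none of these names come from LawZero.

    UNSAFE_MARKERS = ("explosive", "detonator", "bypass safety")

    def risk_score(text: str) -> float:
        """Crude stand-in for a learned risk estimator: the fraction
        of unsafe markers that appear in the candidate output."""
        hits = sum(marker in text.lower() for marker in UNSAFE_MARKERS)
        return hits / len(UNSAFE_MARKERS)

    def guarded_reply(candidate: str, max_risk: float = 0.0) -> str:
        """Block any candidate whose estimated risk exceeds the budget."""
        if risk_score(candidate) > max_risk:
            return "[blocked by safety monitor]"
        return candidate

    print(guarded_reply("Here is a recipe for banana bread."))  # passes
    print(guarded_reply("First, wire the detonator to..."))     # blocked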

Beyond language models, Bengio's team is adding a “world model”: a core understanding of how the world works. Current AI systems lack one, which is why they struggle with tasks like realistic hand movement or sound chess strategy, even though simpler, specialized AIs already outperform humans in those areas.
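
In the simplest terms, a world model lets an agent predict the consequences of an action before taking it. The toy Python sketch below shows the difference that makes: the planner queries a transition function instead of pattern-matching on past text. The one-dimensional world here is invented purely for illustration.

    def transition(state: int, action: int) -> int:
        """World model: predict the next state on a track of positions 0-9."""
        return max(0, min(9, state + action))

    def plan(state: int, goal: int, depth: int = 3) -> int:
        """Choose the action whose predicted rollout lands closest to the goal."""
        def rollout(s: int, action: int) -> int:
            for _ in range(depth):
                s = transition(s, action)
            return s
        return min((-1, 0, 1), key=lambda a: abs(rollout(state, a) - goal))

    print(plan(state=2, goal=8))  # -> 1: the model predicts moving right helps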


Yoshua Bengio | Alex Wong/Getty Images

Bengio’s effort is a solid step toward trustworthy AI, but challenges remain. LawZero’s $30 million budget is tiny next to massive government projects like the $500 billion US AI initiative.

Data is another hurdle: major tech firms control most of the datasets AI models need for training.

And even if Scientist AI works perfectly, it’s unclear how it will control harmful AI still in the wild.

Still, this project could spark a shift in AI safety standards, setting new expectations for safe AI and pushing researchers, developers, and policymakers to prioritize safer design.

If it had existed earlier, perhaps AI wouldn't have been exploited to help build bombs or to fuel social media's mental health crises. Now is the time to build AI we can trust.
