The AI Forecast Sam Altman Admits Was Not Entirely Accurate

OpenAI CEO Sam Altman says his predictions about AI's technical progress were right, but society isn't reacting the way he expected.

Speaking on the YouTube show Uncapped with Jack Altman, Altman said OpenAI's o3 language model can match a human with a Ph.D. in many fields, yet the public response has felt underwhelming to him.

"I feel like we’ve been very right on the technical predictions, and then I somehow thought society would feel more different if we actually delivered on them than it does so far," Altman said.

"But I don’t even — it’s not even obvious that that’s a bad thing."

Altman noted that AI can now reason at the level of a top competitor in fields such as programming and math, yet people remain largely unimpressed. He had expected society to look markedly different once AI reached this level.

"The models can now do the kind of reasoning in a particular domain you’d expect a Ph.D. in that field to be able to do," he added.
"In some sense we’re like, ‘Oh okay, the AIs are like a top competitive programmer in the world now,’ or ‘AIs can get like a top score on the world’s hardest math competitions,’ or ‘AIs can like, you know, do problems that I’d expect an expert Ph.D. in my field to do,’ and we’re like not that impressed. It’s crazy."

AI use is growing, especially as a copilot for everyday tasks. Altman expects major change if AI gains autonomy, particularly in accelerating scientific discovery. Scientists are already faster with AI tools, he says, though fully autonomous AI-driven science isn't here yet.

While others, including Anthropic's Dario Amodei and DeepMind's Demis Hassabis, have sounded alarms about AI risk, Altman is less worried. He acknowledged serious risks such as bioweapons or attacks on a country's infrastructure, but said physical AI raises more mundane safety concerns, like whether a humanoid robot can be trusted around the home.

"You already hear scientists who say they’re faster with AI," Altman said.
"Like, we don’t have AI maybe autonomously doing science, but if a human scientist is three times as productive using o3, that’s still a pretty big deal. And then, as that keeps going and the AI can like autonomously do some science, figure out novel physics … "

"I don’t know about way riskier. I think like, the ability to make a bioweapon or like, take down a country’s whole grid — you can do quite damaging things without physical stuff," he said.
"It gets riskier in like sillier ways. Like, I would be afraid to have a humanoid robot walking around my house that might fall on my baby, unless I like really, really trusted it."

Altman admits he's uncertain what society will look like if AI reaches extreme capability levels. He remains most interested in the capabilities questions himself, but urges broader discussion of how society can capture the value.

"I think we will get to extremely smart and capable models — capable of discovering important new ideas, capable of automating huge amounts of work," he said.
"But then I feel totally confused about what society looks like if that happens. So I’m like most interested in the capabilities questions, but I feel like maybe at this point more people should be talking about, like, how do we make sure society gets the value out of this?"
