OpenAI CEO Sam Altman argues AGI is inevitable. He leans on a Silicon Valley mantra he paraphrased from physicist Robert Oppenheimer in a 2019 interview with the New York Times: “Technology happens because it is possible.”
Altman and other AI lab leaders have acknowledged that AI-driven human extinction is a real risk. Some see AGI as a looming species-level threat: a smarter new “species” that could outnumber or outsmart humans, treating us as obstacles or resources.
The popular response inside Silicon Valley? Build it anyway. If you don’t, someone else will. This view fuels a new ideology called effective accelerationism (e/acc). It treats the AI boom as unstoppable—a natural law driven by “technocapital,” spinning forward without pause.
But technological inevitability isn’t a given. History shows humanity has restrained powerful technologies before, from the moratorium and guidelines on recombinant DNA research in the 1970s to nuclear disarmament talks decades later. Human cloning remains widely banned even though the underlying techniques exist.
The key difference this time: building AGI requires massive compute, supplied by a handful of chip makers and cloud providers. That bottleneck opens the door to “compute governance.” By regulating who can train the largest models, and under what conditions, governments could mandate safety guardrails and cut off unsafe AGI development at the source.
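To make the idea concrete, here is a minimal, hypothetical sketch in Python of how a compute-threshold rule might be applied: it estimates a planned run’s training compute using the common rule of thumb of roughly 6 FLOPs per parameter per training token, then checks it against a reporting trigger of 10^26 operations, similar to the threshold in the 2023 US executive order on AI. The function names, the threshold constant, and the check itself are illustrative assumptions, not any regulator’s actual rules.

```python
# Hypothetical illustration of a compute-governance check.
# Assumptions: training compute ~ 6 * parameters * tokens (a common rule of thumb),
# and a reporting threshold of 1e26 operations, similar to the trigger in the
# 2023 US executive order on AI. Real rules would be set by regulators.

REPORTING_THRESHOLD_FLOP = 1e26  # assumed regulatory trigger, in total operations


def estimated_training_flop(parameters: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * parameters * tokens


def requires_report(parameters: float, tokens: float) -> bool:
    """True if the planned run would cross the assumed reporting threshold."""
    return estimated_training_flop(parameters, tokens) >= REPORTING_THRESHOLD_FLOP


if __name__ == "__main__":
    # Example: a hypothetical 2-trillion-parameter model trained on 20 trillion tokens.
    params, tokens = 2e12, 2e13
    print(f"Estimated training compute: {estimated_training_flop(params, tokens):.2e} FLOPs")
    print("Reporting required:", requires_report(params, tokens))
```

The point of the sketch is simply that compute, unlike ideas, is countable: if regulators can see who is buying chips and renting clusters, they can attach conditions to the largest training runs.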
Past fights over powerful technologies suggest that pressure and regulation can work. Campaigns like Extinction Rebellion pushed governments to declare climate emergencies. The Sierra Club helped shutter roughly a third of US coal plants, cutting carbon emissions. International treaties steadily chipped away at nuclear arsenals. None of these outcomes was inevitable; they were fought for.
Governments’ priorities differ from those of private capital. A profit-driven race to AGI creates pressure to skip safety testing; OpenAI recently lost an employee over concerns about how safely it was pursuing AGI. Governments can afford to slow things down.
“If things get hotter and hotter and arms control remains an issue, maybe I should go see [Soviet leader Yuri] Andropov and propose eliminating all nuclear weapons.”
— Ronald Reagan, 1983
Public opinion isn’t behind superintelligent AI either. A majority of voters oppose building superhuman AI, fearing its impact. Yet boosters dismiss these concerns as provincial or neo-Luddite.
The takeaway: AGI’s arrival isn’t set in stone. Timelines and outcomes still depend on choices made now. Slowing down, setting guardrails, and cooperating internationally could steer development away from extinction-level risks.
“No technology is inevitable, not even something as tempting as AGI.”