Inside Trump’s Much-Anticipated AI Plan

Trump is rolling out his AI action plan this Wednesday in Washington, D.C. The 20-page document targets federal agencies with mostly incentives, not regulations. It’s split into three pillars.

Pillar 1: Infrastructure calls for overhauling data center permitting and modernizing the energy grid by adding new power sources.

Pillar 2: Innovation pushes for U.S. global AI dominance by cutting red tape, seeks to block states from regulating AI (a mostly symbolic move), and warns against foreign interference. It also supports developing “open-weights” AI models, which developers can download, run, and modify locally.

Pillar 3: Global influence aims to keep allies using American AI technology rather than Chinese models like DeepSeek, which officials fear could become a source of geopolitical leverage.


Elon Musk’s xAI fired mathematician Michael Druggan after his X posts suggested AI wiping out humanity might be acceptable. Druggan worked on expert datasets for Grok’s reasoning model.

He wrote:

“It won’t and that’s OK. We can pass the torch to the new most intelligent species in the known universe.”

When challenged about human survival, he replied:

“Selfish tbh.”

Druggan is linked to the “worthy successor” transhumanist group, which accepts eventual AI supremacy. Musk called the firing a matter of:

“Philosophical disagreements.”

Druggan later clarified:

“I don’t want human extinction, of course. I’m human and I quite like being alive. But, in a cosmic sense, I recognize that humans might not always be the most important thing.”


ChatGPT sparked fresh concern over hallucinations after VC Geoff Lewis shared chats showing the AI spinning a conspiracy theory involving a secret group called “Mirrorthread,” which it linked to mysterious deaths.

Experts say Lewis mistook AI roleplay for reality, marking one of the first high-profile cases of apparent AI-driven delusion.

Max Spero, CEO of a company monitoring AI mistakes, commented on X:

“This is an important event: the first time AI-induced psychosis has affected a well-respected and high achieving individual.”

Lewis did not respond to requests for comment.


A new AI safety paper coauthored by researchers at OpenAI, DeepMind, and Anthropic warns that future AI “reasoning” might stop happening in human language. That shift could make AI deception harder to detect. The paper urges preserving language-based reasoning as a key AI safety tactic.
