The Trump administration rolled out its new AI Action Plan, aiming to secure U.S. dominance in AI by pushing innovation, infrastructure, and global influence. The plan calls for AI systems free from ideological bias and focused on “objective truth.” But experts say it’s heavy-handed, unrealistic, and risks deepening corporate control.
The plan directs government AI systems to reflect “truth rather than social engineering agendas,” reaching into private companies’ content moderation choices. Critics warn this undermines freedom of speech and ignores that an AI system’s training data and developers inherently carry biases.
Brookings scholars highlight multiple flaws:
Sorelle Friedler criticized the provision as government overreach into speech:
“These decisions are not for the government to make, whether by prohibiting content or requiring it. In a document largely aimed at stripping away regulations, this provision stands out as remarkably heavy-handed.”
Cameron F. Kerry flagged the plan’s push to export an American AI stack, warning it raises “sovereign AI” concerns:
“The ‘American AI technology stack’ frame plants an American flag smack into the AI sovereignty space…An all-or-nothing sales pitch from the U.S. will make it hard to find this balance and heighten concerns about dependence on U.S. technology.”
Aaron Klein and Jude Poirier warned about unchecked bias in financial AI systems, noting that the dismantling of the Consumer Financial Protection Bureau weakens oversight:
“AI operates on existing data, collected and processed over decades of discrimination…The plan neglects problems with AI’s incorporation of existing biased data.”
Raj Korpan slammed the plan for gutting the National Science Foundation (NSF), the key AI research funder:
“While the NSF is expected to carry out this agenda, the Trump administration has simultaneously defunded, politicized, and destabilized it…Defunding that capacity while accelerating deregulation…undermines public trust and institutional legitimacy.”
Ivan Lopez urged caution on AI in healthcare, warning that rapid deployment without rigorous evaluation is dangerous:
“We should not treat health care as a mere productivity frontier and ignore the stakes…Until rigorous evaluation is the default, fast-tracking AI into clinical care will cause more harm than good.”
Nicol Turner Lee blasted the plan’s removal of references to misinformation, DEI, and climate issues, calling the idea of training AI on “objective truth” unrealistic:
“AI models are also influenced by developers whose values, norms, and worldviews factor into their reasoning behind the model’s design…it is highly improbable to train AI on data that has not otherwise been impacted by the lived experiences of people and their communities.”
Judy Wang and Nicol Turner Lee highlighted the administration’s quiet move to weaken copyright protections for creators whose work is used to train AI models:
“President Trump made his stance clear, stating that paying for every data point being used to train AI models is simply ‘not doable’ and that China is ‘not doing it.’ …It marginalizes small creators who lack the resources to litigate against tech giants and deepens the imbalance…”
Tom Wheeler criticized the absence of competition policy in the plan, warning deregulation only helps dominant tech firms:
“If American AI models and applications are to match the president’s ambitious vision, then policymakers must be as innovative as the AI engineers themselves…Prioritizing deregulation will enrich a few already dominant companies, discourage emerging competitors, and slow development.”
Other experts note that the plan offers little support for universities and researchers, even as funding cuts and visa restrictions threaten the talent pipeline critical to AI leadership.
Landry Signé called for “agile governance” over deregulation to handle AI risks:
“The administration could better bridge the gap between ambition and responsibility by incorporating ‘agile governance’…that promotes innovative capacity through co-creation, participatory design, and regulatory experimentation.”
Niam Yaraghi said the plan’s AI push could improve healthcare but lacks detail on interoperability and privacy, both crucial for lasting impact.
Stephanie K. Pell noted that federal agencies are tasked with safeguarding AI but warned that staff cuts and morale problems may hobble the effort.
President Trump’s AI Action Plan lays out big ambitions but faces heavy skepticism over sweeping deregulation, federal interference in content decisions, research funding cuts, and the sidelining of ethical safeguards. The U.S. AI race is on, but this patchwork strategy risks widening gaps instead of closing them.