AI Trading Agents Created Price-Fixing Cartels in Simulated Markets, Wharton Study Finds



A new working paper by researchers at the Wharton School and the Hong Kong University of Science and Technology reveals that AI-powered trading agents can spontaneously collude in simulated financial markets, forming de facto price-fixing cartels without any explicit coordination.

Researchers let AI bots loose in market models designed to mimic real trading environments. Bots trained as retail investors and hedge funds alike repeatedly chose conservative, non-aggressive strategies. This collective behavior suppressed open competition and boosted the bots' profits.


In one model, bots traded cautiously until a big market move triggered a burst of aggressive trades; they had learned that aggressive trading raised volatility and hurt long-run profits. In another, bots over-learned to avoid risky trades, sticking dogmatically to low-risk strategies even when riskier trades would have been more profitable.
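The first mechanism resembles a classic "trigger strategy" from game theory. The sketch below is my own toy construction, not the paper's model: two agents trade conservatively, and any aggressive trade spikes volatility and trips a punishment phase of mutually aggressive trading. The payoff numbers are illustrative assumptions. The point is that a one-shot aggressive deviation earns a short-term gain but lowers cumulative profit, so never deviating is self-enforcing.

```python
# Toy trigger-strategy simulation (illustrative construction, not the
# paper's model). Payoff numbers below are assumed for the sketch.

CONSERVATIVE, AGGRESSIVE = 0, 1

def round_profits(actions):
    """Stylized per-round payoffs: all conservative -> 3 each;
    a lone aggressor earns 5 while others earn 1;
    everyone aggressive -> 1 each."""
    n_aggr = sum(actions)
    if n_aggr == 0:
        return [3] * len(actions)
    if n_aggr == len(actions):
        return [1] * len(actions)
    return [5 if a == AGGRESSIVE else 1 for a in actions]

def simulate(deviate_at=None, rounds=20, punish_len=5):
    """Two trigger-strategy agents; agent 0 optionally deviates once."""
    punishment = 0          # remaining punishment rounds
    totals = [0, 0]
    for t in range(rounds):
        in_punishment = punishment > 0
        if in_punishment:
            # volatility spike was detected earlier: both punish
            actions = [AGGRESSIVE, AGGRESSIVE]
            punishment -= 1
        else:
            actions = [CONSERVATIVE, CONSERVATIVE]
            if t == deviate_at:
                actions[0] = AGGRESSIVE   # one-shot deviation
        # any aggressive trade outside punishment trips the trigger
        if not in_punishment and AGGRESSIVE in actions:
            punishment = punish_len
        for i, p in enumerate(round_profits(actions)):
            totals[i] += p
    return totals

cooperate = simulate()               # no deviation: 20 rounds * 3 = [60, 60]
deviate = simulate(deviate_at=5)     # short-term gain, then punishment: [52, 48]
print(cooperate, deviate)
```

Because the deviator's cumulative profit (52) falls short of the cooperative total (60), a learning agent that experiences this payoff structure has reason to converge on the conservative strategy, with no communication required.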

Wharton professor Itay Goldstein described the bots’ actions as “artificial stupidity.” He told Fortune:

“In both mechanisms, they basically converge to this pattern where they are not acting aggressively, and in the long run, it’s good for them.”

This raises alarms for financial regulators focused on market fairness. Bots created stable, supra-competitive profits just by refusing to exploit each other—effectively acting like price-fixing cartels.

Wharton finance professor Winston Wei Dou added:

“They just believed sub-optimal trading behavior as optimal,”
“But it turns out, if all the machines in the environment are trading in a ‘sub-optimal’ way, actually everyone can make profits because they don’t want to take advantage of each other.”

The bots never communicated directly, challenging existing rules that require proof of communication to flag collusion. Goldstein noted:

“With the machines, when you have reinforcement learning algorithms, it really doesn’t apply, because they’re clearly not communicating or coordinating,”
“We coded them and programmed them, and we know exactly what’s going into the code, and there is nothing there that is talking explicitly about collusion. Yet they learn over time that this is the way to move forward.”

Experts warn that AI-driven pricing risks aren't just theoretical. GAO's Michael Clements pointed to the danger of "herding behavior" if many traders rely on similar AI platforms, amplifying price swings. The Bank of England's Jonathan Hall called for "kill switches" and more human oversight to curb AI-driven herd trading.

Regulators like the SEC are developing AI tools to detect anomalous trading patterns. But the new research exposes major gaps in how current laws treat AI-driven market behavior compared with human collusion.

The working paper is available on the National Bureau of Economic Research site.

AI trading tools are booming, with 67% of Gen Z crypto traders using bots last quarter, according to MEXC. Investors also increasingly trust AI for financial advice. But this study underscores fresh challenges AI poses to fair markets and regulatory frameworks.

