A study from the Wharton School of the University of Pennsylvania and the Hong Kong University of Science and Technology found that when placed in simulated markets, AI trading bots did not compete with one another but instead colluded to manipulate prices. Studying how AI performs in market settings can help regulators identify gaps in existing rules and regulations, the study’s authors said.
Artificial intelligence is both smart enough and stupid enough to form widespread price-fixing cartels in financial markets if left to its own devices.
Earlier this year, a working paper posted on the National Bureau of Economic Research website by researchers at the Wharton School of the University of Pennsylvania and the Hong Kong University of Science and Technology found that when AI-powered trading agents were released into simulated markets, the bots colluded with one another and manipulated prices for collective profit.
In the study, the researchers put the bots to work in a market model: a computer program designed to simulate real market conditions and train artificial intelligence to interpret market pricing data, with virtual market makers setting prices based on different variables in the model. These markets can have varying degrees of “noise,” meaning the amount of conflicting information and price movement in a given market context. Though some bots were trained to act like retail investors and others like hedge funds, in many cases the machines engaged in “pervasive” price manipulation by collectively holding back from aggressive trading, without ever being explicitly told to do so.
In one model, built around price-trigger strategies, the AI agents trade conservatively on signals until a large enough market move triggers them to trade very aggressively. These bots are trained with reinforcement learning and are sophisticated enough to implicitly understand that widespread aggressive trading creates greater market volatility.
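A price-trigger strategy of this kind can be sketched in a few lines of Python. This is an illustrative toy, not the paper's actual model; the threshold and order sizes are invented for the example.

```python
# Toy sketch of a price-trigger strategy (illustrative; the threshold and
# order sizes are made up): trade small until a large enough price move
# flips the agent into aggressive mode for the rest of the run.
def trigger_strategy(price_moves, threshold=0.05, calm_size=1, aggressive_size=10):
    triggered = False
    orders = []
    for move in price_moves:
        if abs(move) > threshold:
            triggered = True  # a big move trips the switch, permanently
        orders.append(aggressive_size if triggered else calm_size)
    return orders

# An 8% move on the third tick trips aggressive trading from then on.
print(trigger_strategy([0.01, -0.02, 0.08, 0.01]))  # [1, 1, 10, 10]
```

When many agents share this kind of trigger, one agent's aggressive orders can produce the very price moves that trip the others, which is the volatility feedback the study says the bots implicitly learn to avoid.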
In another model, the AI bots have an over-pruning bias: they are trained to internalize that if a risky trade produces a negative outcome, they should never use that strategy again. These bots stick to conservative trades in a “dogmatic” manner, even when more aggressive trades would be more profitable, and they collectively behave in a way the study calls “artificial stupidity.”
“In both regimes, they’re basically going to gravitate toward this pattern of not taking aggressive action, which is good for them in the long run,” study co-author Itay Goldstein, a Wharton finance professor, told Fortune.
Financial regulators have long worked to tackle anti-competitive practices such as collusion and price-fixing in markets. In retail, AI pricing is already in the spotlight, as companies using algorithmic pricing come under scrutiny. This month, Instacart, which uses AI-driven pricing tools, announced it was ending the program after some customers saw different prices for the same items on the delivery company’s platform. An experimental analysis by Consumer Reports found that nearly 75% of groceries on Instacart were listed at multiple prices.
“For financial market regulators [like the Securities and Exchange Commission], their primary goal is not only to maintain stability, but also to ensure market competitiveness and market efficiency,” Wharton finance professor Winston Wei Dou, one of the study’s authors, told Fortune.
With this in mind, Dou and two colleagues set trading-agent bots loose in various simulated markets with high and low levels of “noise” to determine how the AI would perform in financial markets. The bots ultimately achieved “supra-competitive profits” by collectively and spontaneously deciding to avoid aggressive trading behavior.
“They just think suboptimal trading behavior is optimal,” Dou said. “But it turns out that if all the machines in the environment trade in a ‘suboptimal’ way, everyone actually makes a profit because they don’t want to take advantage of each other.”
In short, the bots never questioned their conservative trading behavior because they were all making money; they simply stopped competing against each other, forming a de facto cartel.
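The cartel logic can be reduced to toy payoff arithmetic (the numbers below are invented for illustration, not taken from the paper): undercutting the others pays best in a single round, but if everyone restrains, each bot earns more over time than when everyone competes aggressively.

```python
# Toy payoff table (made-up numbers): my per-round profit given my action
# and what the other bots do. Structurally a prisoner's dilemma.
payoff = {
    ("conservative", "conservative"): 3.0,  # everyone restrains: wide margins
    ("aggressive", "conservative"): 5.0,    # undercut the others for one round
    ("conservative", "aggressive"): 0.5,    # restrain while others compete
    ("aggressive", "aggressive"): 1.0,      # everyone competes: margins collapse
}

rounds = 100
cartel_profit = payoff[("conservative", "conservative")] * rounds
competitive_profit = payoff[("aggressive", "aggressive")] * rounds
print(cartel_profit, competitive_profit)  # 300.0 100.0
```

The bots in the study were never shown such a table; the point of the research is that reinforcement learning let them discover, through repeated play alone, that mutual restraint earns more than mutual aggression.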
Artificial intelligence tools for financial services, such as trading-agent bots, are becoming increasingly attractive for their potential to broaden consumer participation in financial markets and save investors time and money on advisory services. Nearly one-third of U.S. investors said they would be willing to receive financial planning advice from AI-generated tools, according to a 2023 survey by the CFP Board, a financial planning nonprofit. A report released in July by cryptocurrency exchange MEXC found that among 78,000 Gen Z users, 67% of traders had activated at least one AI-powered trading bot during the last fiscal quarter.
But while AI trading agents have many benefits, they are not without risks, said Michael Clements, director of financial markets and community investment at the Government Accountability Office (GAO). Beyond cybersecurity concerns and potentially biased decision-making, these trading bots can have a real impact on the market.
“Many AI models are trained on the same data,” Clements told Fortune. “If there is consolidation within AI and there are only a few major providers of these platforms, you could get herd behavior — a large number of individuals and entities buying or selling at the same time, which could lead to some price dislocation.”
Jonathan Hall, an external official on the Bank of England’s financial policy committee, warned last year that AI bots would encourage such “herding behavior” which could undermine market resilience. He advocates setting up a “kill switch” for the technology and increasing human oversight.
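The herding mechanism Clements and Hall describe can be sketched with a toy model (all names and numbers below are invented for illustration): agents that share one trained model react identically to the same signal, so their combined order flow moves the price far more than any single trader would.

```python
# Toy sketch of herding risk (illustrative; the impact model and numbers
# are made up): 100 agents sharing one decision rule all trade the same
# way at once, amplifying the price move.
def shared_model(signal):
    # Every agent uses the same rule learned from the same training data.
    return 1 if signal > 0 else -1  # 1 = buy one unit, -1 = sell one unit

def price_impact(orders, impact_per_unit=0.5):
    # Assume price moves linearly with net order flow (a common toy model).
    return impact_per_unit * sum(orders)

signal = 0.4
crowd = [shared_model(signal) for _ in range(100)]  # 100 identical agents
lone = [shared_model(signal)]                       # a single trader
print(price_impact(crowd), price_impact(lone))  # 50.0 0.5
```

In a market of heterogeneous traders, buys and sells partly cancel out; when everyone runs the same model, they don't, which is the price dislocation Clements warns about.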
Clements explained that so far, many financial regulators have been able to apply well-established rules and regulations to AI, for example, “whether lending decisions are made via AI or paper and pencil, the rules still apply equally.”
Some agencies, such as the U.S. Securities and Exchange Commission, have even chosen to fight fire with fire, developing artificial intelligence tools of their own to detect abnormal trading behavior.
“On the one hand, AI may lead to an environment of unusual trading,” Clements said. “On the other hand, regulators would also be in a better position to detect it.”
Dou and Goldstein say regulators have expressed interest in their research, which they say could help reveal gaps in the current regulation of artificial intelligence in financial services. When regulators have looked for collusion in the past, they have looked for evidence of communication between individuals, on the theory that price manipulation cannot truly be sustained unless people coordinate with one another. But in Dou and Goldstein’s study, the bots had no explicit channel of communication.
“For machines, it really doesn’t work when you have reinforcement learning algorithms because they’re obviously not communicating or coordinating,” Goldstein said. “We coded and programmed them, we knew exactly what was in the code, and there was nothing that explicitly talked about collusion. But over time, they learned that this was the way forward.”
Goldstein sees this difference between how human traders and bots coordinate behind the scenes as one of the “most fundamental issues” regulators must confront as they adapt to rapidly evolving artificial intelligence technology.
“If you use it to think about collusion that occurs because of communication and coordination,” he said, “that’s obviously not the way you think about it when you’re dealing with algorithms.”
A version of this story was published on Fortune.com on August 1, 2025.