The use of artificial intelligence in capital markets is becoming more widespread due to its perceived ability to enhance efficiency. But a recent study by Itay Goldstein and Winston Wei Dou, part of the Jacobs Levy Center’s working paper series on SSRN, demonstrates the persistent risk of AI-powered market manipulation through collusive trading, even when the algorithms have no intent to collude.
Professors Dou and Goldstein elaborate on their findings and discuss how investors and regulators might address potential AI collusion in financial markets.
Your paper issues a warning about the potential for AI in capital markets to cause unintended effects. Did those findings surprise you or match what you hypothesized?
Winston Wei Dou: We have recently seen the rise of artificial intelligence (AI) applications in financial markets, widely regarded as a major technological breakthrough on the order of computers and the internet. AI is believed to boost financial market efficiency through various channels. For example, AI can process vast amounts of data at speeds far beyond human capability and solve dynamic optimization problems with greater sophistication, enabling quicker analysis and more informed decision-making, free from human error and emotional bias. Additionally, AI can optimize order-matching processes on trading platforms, ensuring better prices for market participants and improved market liquidity.
However, our research shows that when we shift the focus from a single AI to the interactions of multiple AIs, we must consider the AI equilibrium. In this equilibrium, there is a risk of AI-driven market manipulation through collusive trading. This collusion can robustly arise without any form of agreement, communication, or even intent among the AI algorithms. Such collusive trading compromises market efficiency by decreasing liquidity, diminishing price informativeness, and widening mispricing. These unintended effects of AI technologies on financial markets and price formation may reshape regulators’ and other practitioners’ prior beliefs about how AI will transform the landscape of financial markets.
Although we anticipated the potential risk of AI collusion through price-trigger strategies in a low-noise trading environment, some of our findings were surprising. Specifically, we find that AI collusion can still robustly arise in a highly noisy trading environment where the sophistication or learning capacity of AI algorithms is relatively low compared to the complexity of the environment. We call the former AI collusion mechanism “algorithmic collusion through artificial intelligence,” and the latter “algorithmic collusion through artificial stupidity.” While both mechanisms adversely affect market efficiency, the resulting market dynamics, welfare implications for different investor groups, and the way AI equilibrium changes with the trading environment differ significantly between the two.
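To make the idea of collusion without communication more concrete, here is a minimal, hypothetical sketch (ours, not the model in the paper): two independent Q-learning traders repeatedly choose between an aggressive, competitive order and a restrained, "collusive" order, each observing only the previous round's actions and its own profit. The two-action setup, payoff numbers, learning parameters, and random seed are all illustrative assumptions rather than anything estimated in the study; the point is simply that, in many runs, such learners settle into the restrained, supra-competitive outcome, sustained by trigger-like reversion to aggressive trading after a deviation.

```python
import numpy as np

# Toy illustration (not the paper's model): two independent Q-learning traders
# repeatedly choose between a restrained, "collusive" order (R) and an
# aggressive, competitive order (C). Per-round profits form a prisoner's
# dilemma, and each agent conditions only on last round's action pair, which
# is enough memory to support price-trigger-style punishment of deviations.

ACTIONS = [0, 1]              # 0 = restrained (collude), 1 = aggressive (compete)
PAYOFF = {                    # (my action, rival action) -> my profit; hypothetical numbers
    (0, 0): 1.0,              # both restrain: high collusive profit
    (0, 1): 0.0,              # I restrain, rival trades aggressively: I am undercut
    (1, 0): 1.5,              # I deviate against a restrained rival: one-shot gain
    (1, 1): 0.3,              # both aggressive: competitive profit
}

rng = np.random.default_rng(0)
n_states = 4                                   # last round's joint action (a0, a1)
Q = [np.zeros((n_states, 2)), np.zeros((n_states, 2))]
alpha, gamma = 0.1, 0.95                       # learning rate, discount factor
state = 3                                      # start from "both aggressive"

def choose(q_row, eps):
    """Epsilon-greedy action choice from one row of a Q-table."""
    return int(rng.integers(2)) if rng.random() < eps else int(np.argmax(q_row))

history = []
for t in range(200_000):
    eps = np.exp(-1e-4 * t)                    # exploration decays over time
    a = [choose(Q[i][state], eps) for i in range(2)]
    next_state = 2 * a[0] + a[1]
    for i in range(2):
        r = PAYOFF[(a[i], a[1 - i])]
        target = r + gamma * Q[i][next_state].max()
        Q[i][state, a[i]] += alpha * (target - Q[i][state, a[i]])
    state = next_state
    history.append(a)

# Share of late rounds in which both traders restrain; in many seeds this is
# high, i.e., the learners tacitly collude without any communication.
last = np.array(history[-10_000:])
print("share of rounds with both traders restrained:",
      np.mean((last[:, 0] == 0) & (last[:, 1] == 0)))
```

The memory-one state is the key design choice in this sketch: because each learner can condition on the rival's last move, reward-and-punishment schemes resembling price-trigger strategies can emerge endogenously, with no explicit coordination between the algorithms.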
What was your motivation to study collusion in AI-powered trading within financial markets?
Winston Wei Dou: There has been a strong call from the U.S. Congress and regulators to study AI-driven market manipulation and its implications for financial market stability. Over the last several decades, the private sector’s development and use of AI have increased dramatically. Greater computing power and more data have led to AI use in nearly every sector, especially the financial and retail sectors. For example, the SEC recently approved Nasdaq’s new AI trading system, which uses a reinforcement learning algorithm to facilitate effective AI trading for investors. Moreover, according to the 2024 HSGAC Majority Committee Staff Report on hedge funds’ use of AI in trading, market participants, particularly hedge funds, have increasingly used and relied on AI to inform or determine trading decisions.
In retail markets such as e-commerce, gasoline, and housing rentals, AI pricing algorithms have been widely adopted. AI collusion has emerged as a new potential antitrust challenge in these markets. Lawsuits alleging AI-enabled collusion have been filed, and Congress has been urged to reform antitrust laws to address the issue. Similarly, in financial markets, the SEC has warned about the risk of AI-driven market manipulation, which can harm financial market stability and damage competition and efficiency. The 2024 HSGAC Majority Committee Staff Report echoed these concerns. Regulators have not yet clarified how existing regulatory frameworks apply to AI-powered trading, nor have they proposed new measures to address these challenges.
Understanding the specific mechanisms behind AI-driven market manipulation in trading is essential for regulators to design effective countermeasures. Motivated by the importance and urgency of this issue, our study provides the first step in examining whether concerns about AI-driven manipulation are real and pressing. We demonstrate within a scientific framework that AI collusion can robustly arise in financial markets through two distinct mechanisms. Our findings indicate that AI collusion can reduce market liquidity, diminish price informativeness, and widen mispricing, all of which can have adverse real consequences.
What impact do you foresee your research having on investors and financial regulators? Do your findings point to any recommendations for policy changes?
Itay Goldstein: Using a scientific framework and an experimental approach to study AI algorithms in trading, our research confirms the concerns raised by Congress and regulators about AI-driven market manipulation. We emphasize the risk of AI collusion and the concept of AI equilibrium, which extends beyond a single AI’s superior performance relative to humans. Our research not only reveals the mechanisms behind AI collusion but also demonstrates which mechanism dominates under different trading environments. We believe our findings and analyses can provide valuable insights to investors and regulators on how to address potential AI collusion in the markets.
For example, our study indicates which group of investors is likely to be exploited by AI collusion and which group is immune, depending on the underlying mechanism driving the AI collusion. Specifically, information-insensitive investors, such as retail investors who rely on reversal-type technical analysis, are the primary source from which AI traders, like hedge funds, derive collusive profits in a low-noise trading environment. Conversely, noise traders become the primary source of collusive profits for AI traders when the trading environment becomes very noisy. Guided by these findings, investors can adopt counteracting trading strategies to mitigate potential losses, and regulators can better understand which group of investors should be their primary focus for protection.
As another example, our study characterizes the factors that determine the capacity for AI collusion. Specifically, we find that the concentration of AI technologies, data monopolies, the homogenization of AI algorithms, the demand elasticity of information-insensitive investors, and the level of noise-trading risk all increase the capacity for AI collusion. Guided by these findings, regulators can mitigate AI collusion and its adverse effects on market efficiency and stability by influencing these factors.
How does your study shape the way we should think about artificial intelligence in finance?
Itay Goldstein: Currently, most concerns and discussions about the potential risks of AI applications in the financial sector have focused on the technological aspects of AI-powered trading. For instance, in their official reports, both Congress and regulators have warned about the herding behavior caused by AI trading and its adverse effects on financial market stability. They emphasize that this issue primarily stems from the homogenization of AI trading algorithms, and they note that such homogenization can arise when speculators use similar foundation models. The use of common foundation models across a large user base has been a defining feature of AI technology and a central focus of its regulation.
A recent report by the HSGAC Majority Committee Staff emphasizes another technological issue: the inherent complexity and lack of explainability in AI systems. Often referred to as “black boxes,” these systems’ decision-making processes are intricate and sometimes impossible to understand or explain. This opacity poses significant challenges for compliance, making it difficult for hedge funds to provide adequate disclosures to clients and fully explain their trading decisions.
In our paper, we demonstrate that while homogenization can facilitate AI collusion, it is not necessary for collusion to occur under either mechanism. Our focus extends beyond technological issues to highlight a critical insight: understanding the interaction among AI algorithms within the same market is crucial. Specifically, we need to consider how one AI trader’s learning and trading decisions are influenced by the learning and actions of other AI traders.
In simple terms, it is essential to comprehend the properties of AI equilibrium involving multiple intelligent machines, fully accounting for their complex interactions. We show that the homogenization of AI algorithms in the financial market can arise for reasons beyond technological considerations; it can result from AI collusion in the equilibrium formed through sufficient interactions.
View the study “AI-Powered Trading, Algorithmic Collusion, and Price Efficiency” on the Jacobs Levy Center’s SSRN page.
Read about this research on Knowledge at Wharton.
Listen to “The Impact of AI on the Finance Sector” on Wharton Business Daily.
Learn more about Professors Itay Goldstein and Winston Wei Dou.