The Price of Innovation: Are Your Systems Putting You at Risk?
The Antitrust Risks of AI: Navigating Algorithmic Decision-Making and Compliance
April 04, 2025
In recent years, the use of artificial intelligence and algorithms in business operations has surged, transforming industries from consumer finance to retail goods. Among the most significant developments is the widespread adoption of pricing algorithms, which enable companies to adjust prices dynamically based on market conditions, consumer behavior and competitor strategies.1
While these technologies promote efficiency and competitive advantages, they also raise critical concerns. Data privacy risks emerge as algorithms rely on vast amounts of consumer information, often with limited transparency. Consumer protection issues arise when pricing strategies lead to discrimination or exploit behavioral biases. Equally concerning is the potential for algorithmic collusion—where independent companies’ pricing algorithms, without explicit agreement, leverage non-public competitively sensitive information or otherwise learn to coordinate prices in ways that harm competition. As government enforcers and civil plaintiffs raise these challenges in the court system, the need for greater oversight and accountability in algorithmic decision-making is becoming an increasingly important risk management priority for businesses.
AI and Algorithms
The rise of Artificial Intelligence (“AI”) has unlocked tremendous opportunities for innovation, and its impact has been well-documented. Today, terms like AI, Machine Learning (“ML”), Natural Language Processing (“NLP”) and Deep Learning (“DL”) are commonplace, discussed by mathematicians and grade school children alike. As AI capabilities become central to corporate strategy, algorithmic decision-making has taken on new prominence across industries.
It’s easy to understand why. AI-driven capabilities – such as demand forecasting, gaining deeper insights into price elasticity and applying dynamic pricing – have the potential to fundamentally reshape how a company operates. However, innovation is not without risk. It challenges established norms and explores uncharted territory in pursuit of new and improved methods and solutions. As companies continue to adopt AI-driven strategies, it’s imperative that they understand the associated risks – the blind spots and subtle biases – including anticompetitive behavior.
A Framework for Addressing Algorithmic Risk
To comprehensively understand the risks rooted in an algorithmic decision-making process, companies should adopt an interdisciplinary approach – one that integrates business, technical and legal perspectives. This holistic approach ensures that business impacts, legal risks and technical underpinnings are understood, paving the way for informed decision-making during initial deployment and continuous monitoring once these tools are in production. It also provides organizations with a structured framework for remediation in instances where an algorithmic decision-making process is challenged by regulators, civil plaintiffs or internal stakeholders.
What Are Some of the Risks?
Misalignment of Technology and Outcome
Stakeholders: Business, Technical, Legal
The potential for algorithmic decision-making processes to optimize for unintended objectives, leading to decisions that deviate from business goals or legal safeguards. Algorithms are built on ingested data, predefined inputs and specific objectives. If these do not accurately reflect the problem statement, the suggested outcomes will miss the mark, with unintended consequences ranging from poor performance to reputational damage and regulatory scrutiny.
Biases and Inefficiencies
Stakeholders: Technical
The potential for algorithmic decision-making processes to be trained on incomplete or historically biased data, or to rely on an inflexible model design that misallocates resources in single-minded pursuit of a narrowly defined goal. If not addressed, the “solution” not only proves inaccurate; it wastes time and money and can perpetuate or amplify existing biases in ways that undermine business credibility.
Anticompetitive Considerations
Stakeholders: Business, Technical, Legal
The potential for algorithmic decision-making processes to be designed, or to inadvertently function, in a way that restricts competition or promotes collusion. This can include algorithms that rely on competitively sensitive information to formulate recommendations or software that consistently recommends higher prices. Businesses need to maintain independent control of these new solutions or risk blindly following a system they do not understand into regulatory scrutiny.
At a minimum, an effective algorithmic risk management framework includes:
- Stakeholder Interviews: Conversations with business leaders, compliance officers and data scientists offer qualitative insights, the chance to validate technical findings and practical perspectives on how each algorithm operates in the real world.
- Technical Documentation: Careful examination provides critical context about system design, underlying assumptions, intended functionality and limitations of the algorithm.
- Code Review: This involves using software tools that facilitate analysis of the algorithm's structure and logic to help uncover potential flaws, biases or inefficiencies in the code.
- Data Lineage Assessment: Tracing the flow of data from its source to the point of decision-making ensures transparency in how data transforms through various systems.
- Exploratory Data Analysis: Identifies issues with data quality and informs teams of the range of values for each input. This should be done in a dedicated analytics environment.
- Statistical Techniques: Help evaluate the fairness, accuracy and performance of the decision-making process. Statistical analysis of the outputs, combined with the tools above, creates a comprehensive approach in which each step contributes to a robust understanding of the algorithm’s impact (a brief illustrative sketch follows this list).
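To make the exploratory and statistical steps above concrete, the sketch below profiles a hypothetical table of pricing-algorithm outputs and measures how often humans override the tool’s recommendations. The pandas-based approach and the column names (recommended_price, accepted_price) are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch of exploratory analysis on pricing-algorithm outputs.
# Assumes a pandas DataFrame with hypothetical columns such as
# "recommended_price" (price suggested by the algorithm) and
# "accepted_price" (price actually charged).
import pandas as pd


def profile_pricing_outputs(df: pd.DataFrame) -> pd.DataFrame:
    """Basic data-quality and distribution statistics for each column."""
    return pd.DataFrame({
        "missing_pct": df.isna().mean() * 100,   # completeness of each input
        "min": df.min(numeric_only=True),        # range of values per input
        "max": df.max(numeric_only=True),
        "mean": df.mean(numeric_only=True),
        "std": df.std(numeric_only=True),
    })


def override_rate(df: pd.DataFrame) -> float:
    """Share of cases where the charged price departs from the recommendation,
    a rough indicator of how much decision-making is delegated to the tool."""
    return float((df["accepted_price"] != df["recommended_price"]).mean())
```

In practice, this kind of profiling would run in a dedicated analytics environment against the algorithm’s actual inputs and outputs, with the override rate serving as one rough measure of how much decision-making authority has been handed to the software.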
Applying the Framework to Algorithmic Anticompetitive Risks
As businesses more actively integrate AI into their operations, enforcement agencies and private plaintiffs have begun to scrutinize the impact of these technologies on competition. When evaluating the antitrust risks associated with algorithmic decision-making and other advanced technologies, this framework helps determine whether the software reflects legitimate, unilateral business decisions or, as argued by civil plaintiffs and the DOJ in certain industries,2 serves as a proxy for coordinated activity among competitors. These arguments and related court decisions have focused on the combining of sensitive non-public pricing and supply information, the extent to which companies delegate decision-making authority to the algorithm and the impact of the algorithm’s use on prices to consumers. A particular focus has been algorithms embedded in third-party pricing software.
Proper antitrust risk mitigation would therefore involve an understanding of the data inputs and outputs to the pricing optimization software, the extent to which the software has access to competitively sensitive information and whether that information plays a role in the pricing recommendations made.
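One way to make the question of whether sensitive information plays a role concrete is an ablation check: compare the software’s recommendations with and without the suspect input. The sketch below assumes a fitted model that exposes a scikit-learn-style predict() method and a hypothetical feature named competitor_nonpublic_price; it is an illustrative starting point under those assumptions, not a definitive audit procedure.

```python
# Minimal sketch of an ablation check: does a competitively sensitive input
# materially move the algorithm's price recommendations?
# Assumes a fitted model exposing .predict() (scikit-learn convention) and a
# hypothetical feature column named "competitor_nonpublic_price".
import numpy as np
import pandas as pd


def sensitive_input_impact(model, features: pd.DataFrame,
                           sensitive_col: str = "competitor_nonpublic_price") -> float:
    """Average absolute change in recommended price when the sensitive input
    is neutralized (replaced by its overall mean)."""
    baseline = model.predict(features)

    neutralized = features.copy()
    neutralized[sensitive_col] = features[sensitive_col].mean()  # remove cross-competitor signal
    counterfactual = model.predict(neutralized)

    return float(np.mean(np.abs(baseline - counterfactual)))
```

A material gap between the baseline and neutralized recommendations would suggest the sensitive field is influencing prices and warrants closer legal review of how that data is sourced and used.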
Lastly, in evaluating the performance of the software, companies should be mindful of any trends in pricing, margin or supply that do not align with current market dynamics. Such trends may be indicative of unintended alignment with other software users.
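A lightweight way to watch for such unintended alignment is to track recommended prices against an independent market benchmark and flag sustained divergence from market dynamics. The benchmark series, the 30-day window and the 5% threshold in the sketch below are illustrative assumptions that each company would calibrate to its own market.

```python
# Minimal sketch of trend monitoring: flag periods where recommended prices
# drift away from an independent market benchmark for a sustained stretch.
# Assumes two daily pandas Series sharing a date index.
import pandas as pd


def flag_divergence(recommended: pd.Series, benchmark: pd.Series,
                    window: int = 30, threshold: float = 0.05) -> pd.Series:
    """Boolean Series marking days where the rolling average relative gap
    between recommendations and the benchmark exceeds the threshold."""
    rel_gap = (recommended - benchmark) / benchmark                  # relative divergence per day
    rolling_gap = rel_gap.rolling(window, min_periods=window).mean()
    return rolling_gap.abs() > threshold                             # sustained misalignment flag
```

Flagged periods are not evidence of coordination on their own, but they are a sensible trigger for the stakeholder and data-lineage reviews described above.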
Conclusion
We are at a fundamental inflection point where the opportunities for innovation offered by AI are immense. But so are the risks. A comprehensive compliance framework allows companies to obtain a complete, fact-based understanding of the risks associated with their algorithmic decision-making processes – including the risk of algorithmic anticompetitive behavior. It enables companies to embrace and invest in algorithmic decision-making capabilities knowing the associated risks are being properly managed.
Footnotes:
1: See Fed. Trade Comm’n, “Issue Spotlight: The Rise of Surveillance Pricing” (Jan. 17, 2025).
2: See Press Release, Fed. Trade Comm’n, “FTC and DOJ File Statement of Interest in Hotel Room Algorithmic Price-Fixing Case” (Mar. 28, 2024).