AI Pricing Engines Force Regulators To Rewrite Merger And Cartel Playbooks

Pricing algorithms have moved from the server room to the courtroom. What competition authorities once treated as back-end infrastructure now sits at the center of merger reviews and cartel prosecutions. Regulators increasingly frame market power in terms of who controls the software that sets prices, ranks offers, and steers demand.

Algorithms Amplify Coordination Risk

Competition authorities in the United States, European Union, United Kingdom, Canada, and Asia-Pacific are treating pricing and recommendation algorithms both as potential evidence of coordination and as assets that can entrench market power. The OECD’s 2023 algorithmic competition roundtable, the CMA’s 2021 algorithms paper, and the Competition Bureau of Canada’s 2025 discussion paper on algorithmic pricing set out a common message: algorithms can sharpen competition, but they can also make collusion easier to sustain and dominance harder to dislodge.

Most modern pricing engines ingest streams of competitor data, customer behavior, and demand forecasts, then generate price updates at machine speed. The risk that multiple firms will outsource their pricing to the same vendor, or train models on the same underlying data, underpins many of the algorithmic collusion concerns summarized in OECD background notes and law review surveys.
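The coordination risk from shared vendor logic is easy to illustrate. The sketch below is hypothetical, not any vendor's actual rule: a pricing function that follows rivals upward but never undercuts them. When competing firms run the same rule on the same data feed, one firm's price increase is matched rather than punished, and supra-competitive prices become self-sustaining without any human agreement.

```python
def recommend_price(own_price, competitor_prices):
    """Hypothetical vendor rule: hold the current price or follow
    rivals upward; never undercut the competitor average."""
    if not competitor_prices:
        return own_price
    avg = sum(competitor_prices) / len(competitor_prices)
    return max(own_price, avg)

# Two firms on the same rule: B experiments with an increase,
# A follows, and neither ever moves back down.
a, b = 10.0, 11.0             # firm B tries a higher price
a = recommend_price(a, [b])   # A follows B up to 11.0
b = recommend_price(b, [a])   # B holds at 11.0
```

The point of the toy model is that no message ever passes between the firms; the alignment comes entirely from common logic and a common data feed, which is the fact pattern behind the tacit-coordination theories described above.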

Competition authorities focus on two distinct mechanisms: explicit collusion, where algorithms implement a prior human agreement to fix prices, and tacit coordination, where competitors independently adopt algorithmic pricing that produces anticompetitive outcomes through shared data and common vendor logic.

Early enforcement established basic principles. In 2015, the DOJ brought its first criminal prosecution targeting algorithmic pricing in United States v. Topkins, where the defendant and competitors fixed prices for posters sold through Amazon Marketplace using coordinated pricing algorithms. The defendant pleaded guilty to one count of price fixing and agreed to pay a criminal fine. Assistant Attorney General Bill Baer stated that the Division would “not tolerate anticompetitive conduct, whether it occurs in a smoke-filled room or over the Internet using complex pricing algorithms.”

Recommendation engines and ranking systems raise a different set of questions. When a platform uses algorithms to preference its own products or steer users toward certain offers, those design choices go to the heart of dominance and foreclosure analysis. The EU Digital Markets Act already treats self-preferencing and ranking practices as core gatekeeper obligations, and competition commentators now treat ranking logic as a live theory of harm rather than an implementation detail.

Allocation algorithms sit alongside pricing and recommendation tools as part of the same enforcement story. Matching drivers to riders, assigning inventory across warehouses, or routing orders across marketplaces can all be legitimate optimization. They can also be vehicles for exclusionary conduct or subtle coordination when multiple firms rely on the same optimization service.

Merger Reviews Target Data Access

Merger control has absorbed these concerns in two main ways. First, agencies ask whether combining firms will gain control of pricing or recommendation engines that competitors also rely on. Second, they look at how mergers change access to the data and compute that make those engines work.

Across recent guidance and public speeches, agencies have flagged deals where an acquirer would own a key pricing vendor used by rivals, or where a platform’s marketplace data is a critical input to AI models that shape competition. Theories of harm that once focused narrowly on market share now incorporate access to granular transaction data, logs of user behavior, and the ability to run large-scale simulations on that data.

In practice, that shifts the evidentiary record in merger review. Authorities increasingly seek model documentation, data schemas, and internal testing reports that show how algorithms react to competitor behavior. The Competition Bureau’s consultation on algorithmic pricing outlines the kinds of information it expects from firms using pricing engines, and similar expectations appear in OECD working documents and European policy briefs.

Counsel preparing merger filings that involve large datasets, optimization tools, or recommendation engines now have to decide how much to disclose voluntarily about model objectives, governance, and change management. Under-disclosure can erode credibility if agencies later uncover internal documents showing an intent to soften competition through algorithmic design.

RealPage Litigation Tests Cartel Theory

The most concrete test of algorithmic cartel theories so far comes from rental housing. In August 2024, the U.S. Department of Justice and multiple state attorneys general filed a civil antitrust suit against RealPage, alleging that its revenue management software facilitated an unlawful price-fixing scheme among landlords. The complaint argues that landlords shared granular leasing data and relied on common algorithmic recommendations in ways that suppressed competition.


According to a December 2024 analysis by the White House Council of Economic Advisers, anticompetitive pricing costs renters in algorithm-utilizing buildings an average of $70 per month, with total costs to renters in 2023 estimated at $3.8 billion. The analysis found that at least 10% of all rental units use RealPage’s products to help determine rent prices.

Coverage in outlets such as Wired and subsequent analyses by law firms and academics frame the case as a turning point. Instead of treating software as a passive tool, the DOJ alleges that RealPage’s algorithms, combined with landlord adoption and an auto-accept mechanism, functioned as the hub of a hub-and-spoke cartel. That theory has influenced commentary on other sectors where competitors share data with a common pricing vendor.

In November 2025, the DOJ announced a proposed settlement with RealPage, accompanied by detailed commentary from competition practitioners. Client alerts, including analyses by firms such as Wilson Sonsini and Mintz, emphasize that the settlement imposes limits on RealPage’s use of real-time data, logging obligations, and restrictions designed to reduce the risk of coordinated pricing through the platform.

Elsewhere, authorities have cited hotel pricing algorithms, online travel platforms, and e-commerce dynamic pricing as areas of concern. The CMA’s algorithms report, the EU’s note on algorithmic competition, and case law surveys published in European competition journals all treat algorithmic price alignment as a practical enforcement problem rather than a hypothetical one.

State Laws Ban Coordination Software

Legislators have begun to write algorithmic coordination directly into statute. In New York, Senate Bill S.7882 makes it unlawful to provide software or algorithmic devices with a coordinating function to residential landlords, effectively banning rent-setting tools that facilitate price coordination. Law firm alerts, including an analysis by Morgan Lewis, describe the law as one of the first state-level bans on certain forms of algorithmic pricing. Governor Kathy Hochul signed the measure into law on October 16, 2025, with the law taking effect on December 15, 2025.

The statute is already the subject of constitutional and preemption challenges. RealPage has sued to block the law, arguing in public filings and press statements that the ban overreaches and chills lawful software. The lawsuit alleges that New York’s law is unconstitutional and imposes an invalid prior restraint on speech.

At the same time, municipal governments are passing their own restrictions. Local bans on AI-enabled rent-setting tools, including ordinances in cities such as San Francisco, Philadelphia, Berkeley, San Diego, Minneapolis, Providence, Jersey City, Seattle, and Hoboken, reflect a broader willingness to treat pricing engines as regulatory targets. For counsel advising landlords and property managers, that combination of federal enforcement and state or municipal bans has turned algorithmic pricing into a compliance priority in real estate.

Settlements Restrict Data Sharing

As cases move from theory to settlement, remedy design has become the quiet center of the AI-competition story. In the RealPage context, the proposed DOJ settlement limits the use of certain types of nonpublic data, imposes transparency obligations, and requires documentation of algorithmic changes.

OECD and national reports outline a menu of potential remedies for algorithmic collusion concerns. These include prohibitions on sharing sensitive data through common vendors, requirements to introduce randomness into pricing outputs, restrictions on certain optimization objectives, and obligations to maintain logs and version histories of key algorithmic systems. Some authorities also contemplate structural remedies when control of a pricing platform or recommendation engine is itself a source of market power.
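Two of the remedies on that menu, output randomness and decision logging, are simple enough to sketch. The code below is illustrative only; function names and the jitter level are assumptions, not terms from any actual settlement.

```python
import json
import random
import time

def jittered_price(recommended, jitter=0.03, rng=random.Random()):
    """Apply +/- `jitter` proportional noise to a recommended price,
    one remedy floated in OECD reports to make algorithmic price
    alignment harder to sustain."""
    factor = 1 + rng.uniform(-jitter, jitter)
    return round(recommended * factor, 2)

def log_decision(logfile, model_version, inputs, output):
    """Append-only record of each pricing decision, supporting the
    kind of logging and version-history obligations described in
    recent settlements and OECD reports."""
    record = {"ts": time.time(), "model": model_version,
              "inputs": inputs, "output": output}
    with open(logfile, "a") as f:
        f.write(json.dumps(record) + "\n")
```

The design point is that both remedies are behavioral, not structural: the pricing engine keeps running, but its outputs are harder to align on and its history is auditable after the fact.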

In digital markets, regulatory frameworks such as the EU Digital Markets Act and the UK’s Digital Markets, Competition and Consumers Act incorporate algorithmic transparency and self-preferencing bans as ex ante obligations. That means some of the remedy tools for gatekeeper platforms, such as ranking transparency and non-discrimination duties, are now baked into statute rather than negotiated case by case.

Global Agencies Coordinate Enforcement

Beyond individual cases, competition agencies are coordinating on AI strategy. In July 2024, U.S., EU, and UK agencies signed a joint statement on effective AI competition, committing to closer scrutiny of AI-related mergers and partnerships and to sharing analytical tools. In October 2024, a G7 competition summit in Italy led to a joint declaration on AI risks, including algorithmic collusion and market concentration.

Other regulators are building their own frameworks. New Zealand’s competition and consumer regulator has published guidance on AI and pricing algorithms, while academic and policy papers in Australia and Europe explore how existing antitrust rules apply to autonomous pricing agents. The 2023 OECD roundtable on algorithmic competition, along with national notes submitted to that roundtable, has become a shared reference point.

For multinational clients, this convergence does not mean identical rules in every jurisdiction. It does mean that arguments about AI and competition law rarely stay local. A theory of harm or remedy discussed in one jurisdiction can appear in consultations, market studies, or working papers in another within a year.

Counsel Must Inventory Algorithm Use

For transactional and antitrust teams, the first task is to locate where algorithms sit in the business. That means building inventories of pricing engines, recommendation systems, and allocation tools, identifying which ones are supplied by third parties, and mapping what data they ingest. Agencies are already asking for these details in merger control and market studies.

Documentation comes next. Competition authorities that once asked mainly for price lists and internal emails now expect firms to explain model objectives, governance processes, and change-control procedures. Submissions to the Canadian Competition Bureau’s consultation and OECD roundtables stress the importance of being able to demonstrate that algorithms are designed and monitored to avoid anti-competitive outcomes.

Vendor contracts are another pressure point. The RealPage litigation shows how quickly a common pricing vendor can become central to a cartel theory. Counsel are revisiting agreements with software providers to address data sharing, exclusivity, audit rights, and the ability to switch tools if risk profiles change. In some sectors, using a widely adopted provider may now require more robust internal safeguards or alternative pricing strategies.

Finally, remedies planning has become part of deal strategy. When a transaction involves a platform that hosts pricing or recommendation tools, or a merger would concentrate critical datasets, parties should anticipate questions about both structural and behavioral remedies. That includes how the firm would log and govern future model updates, not just how it will behave at closing.

Sources

This article was prepared for educational and informational purposes only. It does not constitute legal advice and should not be relied upon as such. All cases, regulations, and sources cited are publicly available through court filings and reputable media outlets. Readers should consult professional counsel for specific legal or compliance questions related to AI use.

See also: Recalibrating Competence: Updating Model Rule 1.1 for the Machine Era
