AI and Competition Law



Introduction

The rapid advancement into the digital age has resulted in the emergence of transformative artificial intelligence (AI) tools that are capable of revolutionising industries. While the use of AI is increasingly commonplace in the automation of business processes, the development of cognitive insight and the delivery of enhanced customer engagement, the exponential growth of new branches of AI, particularly generative AI (which creates new content rather than merely manipulating existing content), has given us a glimpse of the true power and potential of AI as a resource. While efforts are being made by the EU to regulate the market placement, utilisation and liability of AI systems through the AI Act[1] and AI Liability Directive[2], more recent initiatives taken by some competition authorities, including the UK Competition and Markets Authority (CMA) and the US Federal Trade Commission (FTC), focus on understanding the interplay between AI and competition law.

On 4 May 2023, the CMA launched a review[3] of artificial intelligence models to establish guiding principles for their future utilisation. Similarly, in a blog post of 29 June 2023[4], the FTC highlighted the competition issues associated with generative AI and subsequently opened an investigation in this space under consumer protection laws. These initiatives highlight the complex challenges that AI tools present for competition regulators. This article sets out how competition law considerations may be relevant to the development and utilisation of AI systems by market participants. It also examines how competition authorities are anticipating and preparing to address the new challenges posed by the widespread adoption of AI systems.

I. AI Building Blocks and Competition Law

The essential building blocks of AI systems include data, computational resources and engineering talent. With this in mind, authorities are likely to keep a close eye on established or incumbent companies that control any of these AI “inputs”, as those firms would have the power to significantly impact the development and deployment of AI. Broadly speaking, regulators have articulated theories of harm centred on foreclosure, self-preferencing and discriminatory behaviour.

Data is fed into AI systems so that they can learn. The volume and quality of the data play a direct role in improving the learning process. Companies that have gathered large and diverse datasets over a number of years may therefore have an advantage in developing and deploying AI products, with vast datasets readily available for AI training. While this may aid innovation efforts and create superior AI offerings for customers, competition enforcers will seek to ensure that such data advantages are not used to erect barriers to entry that hinder the development of new AI systems. After all, smaller AI developers, too, need access to data to develop their product offerings. There is also the issue of legitimate data collection. In addition to the IP and copyright considerations that may arise when data is mined, firms gathering and processing data need to ensure that their data collection does not infringe competition law. As recently confirmed by the EU Court of Justice, an infringement of data protection rules - such as those requiring justification or consent for the processing of personal data - may be an abuse of dominance if carried out by a dominant firm.[5]

Computational resources are an equally important input to AI. The development and running of AI systems rely heavily on processing power, which is in short supply. One element exacerbating the scarcity is the increasing demand for server chips, which continues to outstrip available supply. While some AI firms are looking to workarounds such as different types of chips or older technology, the spotlight is on the expansion of existing capacities and on potential new entrants to remedy the shortfall.

The other element picked up by regulators, and closely linked to computational resources, is cloud computing, where there is a concern that an inability to access the cloud will hinder the development of AI and the emergence of new AI players. Authorities around the world[6] have shown their interest in understanding the challenges companies face in accessing cloud services, migrating workloads to the cloud or switching between cloud providers. One approach being taken at EU level to address these challenges is the adoption of ex-ante regulation, as proposed by the European Commission. The European Commission’s draft Data Act, for example, if adopted, could change how cloud providers operate and price their services in the EU, ultimately requiring providers to remove fees associated with porting data out of the cloud.[7] Coupled with the EU Digital Markets Act[8] that has just come into force, one can expect that the activities of hyperscalers will be closely monitored and regulated for some time to come.

Competition authorities have already expressed interest in the labour market, in particular as regards skilled workers.[9] One concern expressed by the authorities is that the shortage of engineering talent can be compounded by non-compete clauses that may have the effect of discouraging workers from freely moving to actual or potential competitors. In the battleground for human resources, companies with large talent pools will need to be mindful of restricting workers’ freedom of choice, or otherwise risk falling foul of, amongst other things, competition laws.

Not only will competition law impact the building blocks of AI, it will also impact the building process of AI itself. While open-source models may lead the charge in broadening AI accessibility, authorities have expressed scepticism, with the US FTC in particular calling out the potential for misuse through “open-first, closed later” tactics, where open-source principles are initially embraced to attract businesses but are then closed off through commercial and technological practices.

Competition law is also relevant in the context of agreements under which a single set of technological practices is adopted and standards are set. While standard-setting is not in itself problematic, the creation of a standard needs to comply with competition law rules ensuring that standard-setting activity does not result in collusion between competitors or in foreclosure from the market. Softer initiatives such as the adoption of codes of conduct or best practices are also likely to be carefully scrutinised by the authorities to ensure fair competition.

II. AI Utilisation and Competition

i. AI and collusive conduct

The outlook for AI and competition enforcement against collusive practices presents a mixed landscape. On the one hand, the proliferation of algorithmic pricing tools is not new. Already in 2017, the European Commission’s E-commerce Sector Inquiry revealed that about two-thirds of EU e-commerce retailers were employing software that automatically adjusts prices in response to those of competitors.[10] In the wake of Covid-19, it is likely that this number has considerably increased as many companies moved to, and then thrived on, online retailing.
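To make the mechanics concrete, the following is a minimal sketch, based on our own assumptions rather than any particular vendor's product, of the kind of repricing logic such tools automate: observe rivals' prices and match or slightly undercut them, subject to a cost-based floor. The function name and parameters are hypothetical and purely illustrative; commercial tools are far more sophisticated, but even this simple rule shows how prices can react to competitors' moves without human intervention.

    # Hypothetical, illustrative repricing rule (not any vendor's actual tool):
    # undercut the cheapest observed rival price, but never fall below a cost floor.

    def reprice(own_cost: float, competitor_prices: list[float],
                undercut: float = 0.01, min_margin: float = 0.05) -> float:
        """Return a new price reacting to competitors' observed prices."""
        floor = own_cost * (1 + min_margin)            # never price below cost plus a minimum margin
        if not competitor_prices:
            return floor                               # no rivals observed: fall back to the floor
        target = min(competitor_prices) * (1 - undercut)  # undercut the cheapest rival by 1%
        return max(target, floor)

    # Example: rivals at 10.00 and 10.40, own cost 8.00 -> new price 9.90
    print(reprice(8.00, [10.00, 10.40]))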

On several occasions, regulators have established that pricing collusion facilitated through algorithms can be caught under competition law. In 2016, the CMA fined several undertakings for agreeing not to undercut each other’s prices for posters and frames sold online, an agreement implemented by using automated pricing software.[11] In 2018, the European Commission also fined certain consumer electronics companies for imposing fixed or minimum resale prices on their online retailers, compliance with which was monitored through sophisticated tools that allowed the manufacturers to intervene swiftly in case of price decreases.[12] Where algorithms are involved in antitrust practices, the European Commission has already signalled its intent to hold the companies employing that technology liable. Several years ago,[13] Margrethe Vestager, European Commission Executive Vice-President and Commissioner for Competition, cautioned that “[…] when [businesses] decide to use an automated system, they will be held responsible for what it does […] So they had better know how that system works.”

Collusion that is not merely facilitated by AI but decided on and implemented by AI presents a more complex puzzle for regulators. Simply put, there may come a point in time where AI not only executes the agenda but determines it. AI-driven conduct may sit on the blurred line between tacit and express collusion. Whilst in previous cases algorithms were facilitators of the illegal conduct in question, with the advent of new and more powerful AI technology incorporating machine learning and autonomous decision-making, commentators have turned to the question of whether parallel pricing movements implemented by AI would be caught under the rules prohibiting anti-competitive agreements.

A distinction would need to be drawn between genuinely independent AI conduct - say, two separate AI systems employed by different firms that produce parallel market behaviour - and parallel market behaviour that results from some form of communication or signalling between those two separate AI systems. As to the former, in the absence of concertation and the knowing substitution of cooperation for the risks of competition, actions taken by autonomous AI systems would not meet the conditions required under the current EU legal framework to be prohibited. Genuine AI-driven cases hallmarked by the absence of an underpinning anti-competitive agreement or concerted practice would therefore represent an enforcement gap for competition authorities, which have until now relied on a body of case law permitting independently determined parallel behaviour. In the case of the latter, where parallel behaviour is the result of some form of communication or signalling between separate AI systems, i.e. the AI systems are engaged in a form of collusion, the law prohibiting anti-competitive agreements would arguably apply: an agreement, albeit one made between bots, exists. The challenge for the authorities, of course, would be in distinguishing between these two scenarios in the first place. The deployment of black box AI - systems so opaque that their modus operandi and decision-making logic are impenetrable to humans - may make it impossible to ascertain whether parallel market behaviour by AI is the result of intelligent and independent price adaptation, or of some form of understanding between the AI systems. Where only the outcome (parallel pricing patterns) is visible to the authority, identifying and proving the existence of an infringement of the competition rules will be a significant enforcement challenge.

Regulators will also grapple with the question of determining imputability, and ultimately liability. Some argue that it would be harsh to hold firms liable for actions of which they had no awareness and - perhaps more importantly - over which they had no possible control, although others note that this would not be so different from the approach under some competition regimes, where liability can be imputed to a firm even where the conduct was carried out by a single (even rogue) employee. Outside the world of competition law, in the realm of damages actions in civil liability cases involving AI, the European Commission has acknowledged that bringing liability claims against AI companies under national legal systems may be difficult where causation needs to be established. The proposals under the EU’s draft AI Liability Directive seek to make this simpler for claimants by introducing, in certain circumstances, rebuttable presumptions of a causal link between the defendant’s fault and the output produced (or not produced) by the AI system that gave rise to the damage.

ii. AI and abusive conduct

Some of the competition concerns raised by regulators relate to the fact that AI may be utilised by dominant firms to implement anti-competitive strategies. For example, regulators have warned that AI can be deployed to implement predatory pricing strategies, with AI being used to quickly analyse pricing data and determine a competitor’s response to changes in the market. Dominant firms may also use AI integrated into consumer-facing products to exclude competitors or to nudge customers in a certain direction, for example toward their own offerings, without those customers’ knowledge. The fact that AI can also be used to collect information on consumers - such as preferences, brand loyalty and purchasing patterns - and provide individually tailored prices based on estimates of a consumer’s willingness to pay has not escaped scrutiny. Aware that in the hands of dominant firms AI could make anti-competitive discrimination easier to achieve, regulators are likely to keep an eye on such technologies.

Absent some form of monitoring, AI may also result in abuses occurring without harm being intended. AI systems employed, say, to maximise profits may not have the sensitivity to appreciate the nuances of competition law or the special responsibility bestowed on dominant firms. Non-discrimination under EU competition law, for example, is not as simple as requiring identical treatment. Dominant firms employing AI will be expected by regulators to exercise close oversight over how their AI is used and to bear the consequences of their AI tools’ effects on the market.

iii. AI and merger control

While regulators all over the world have acknowledged that AI has the potential to change the competitive landscape of today and bring about new challengers to the existing incumbents in many industries, they have also expressed reservations that this potential may never be realised because AI firms may be bought out by the existing big players, either for integration or as a means of thwarting an upcoming threat. Regulators are therefore keen to ensure that M&A activity in the AI sector is not instrumentalised as a tool to reinforce the existing market power of large incumbents. One can expect that authorities will scrutinise potential “killer acquisitions” involving AI firms in nascent spaces, with close examination of both horizontal and non-horizontal theories of harm.

Jurisdictional questions have also arisen and are likely to continue to be debated. Absent a change in legislation, a large number of, if not most, merger regimes rely on local revenues or local assets to establish jurisdiction over a transaction. This can pose a challenge for authorities when dealing with mergers involving nascent technologies. Hence, in some jurisdictions the legal test has been revised to enable authorities to review deals based on their transaction value, the idea being that low or non-existent revenues or assets in a particular jurisdiction do not adequately reflect the significance of the business to competition on the market. At EU level, the outright revision of the legal test has been rejected by the European Commission in favour of a far-reaching interpretation of the existing law: the European Commission asserts, and this has been supported by the General Court in Luxembourg (pending appeal before the Court of Justice), that it has the ability to review transactions that fall below its jurisdictional thresholds where those transactions are referred to it by national EU Member State authorities, even where the national merger thresholds themselves are not met. Deals that involve AI will need to factor in the possibility of delays if certainty over jurisdiction cannot be secured at the outset.

III. AI and Competition Law Detection and Enforcement

AI tools are also being developed and deployed for competition compliance and monitoring.[14] Firms can already use AI to identify high compliance risk teams within their organisations, detect abusive behaviour by suppliers, or automate cartel screening.

Regulators are also adapting to change: authorities have recently been gearing up on human resources and knowledge capital, arming themselves with data scientists, software engineers and computer programmers to work alongside traditional case teams. This is a huge shift from traditional staffing models once monopolised by legal staff. Authorities are also leaning towards AI-driven detection and enforcement tools. These initiatives include, for example, the development of data-mining and statistical tests to detect suspicious bidding patterns in public procurement markets, the deployment of price-monitoring software and the use of AI-powered trackers for merger activity. AI will therefore also revolutionise how enforcers work.
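By way of illustration, a minimal sketch of one such statistical screen follows, under our own assumptions rather than any authority's actual tooling: tenders in which the submitted bids cluster abnormally tightly (a low coefficient of variation) are flagged for manual review, since genuinely independent bidding tends to produce more price dispersion. The data structure, threshold and function name are hypothetical.

    # Illustrative bid-rigging screen (hypothetical, not any authority's actual tool):
    # flag tenders whose bids are unusually close together for manual review.

    import statistics

    def flag_suspicious_tenders(tenders: dict[str, list[float]],
                                cv_threshold: float = 0.02) -> list[str]:
        """Return tender IDs whose bids cluster abnormally tightly."""
        flagged = []
        for tender_id, bids in tenders.items():
            if len(bids) < 3:
                continue  # too few bids for the statistic to be meaningful
            cv = statistics.stdev(bids) / statistics.mean(bids)  # coefficient of variation
            if cv < cv_threshold:
                flagged.append(tender_id)
        return flagged

    # Hypothetical example: the bids in tender T2 sit within half a percent of each other
    tenders = {"T1": [100.0, 112.0, 95.0], "T2": [100.0, 100.5, 101.0]}
    print(flag_suspicious_tenders(tenders))  # -> ['T2']

A screen of this kind is only a first filter: a flagged tender may have an innocent explanation, so the output prompts investigation rather than establishing an infringement.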

This, too, can raise questions. Taken further, AI could also be used to predict market failures and allow for forward-looking intervention. In the extreme, competition enforcement could move closer to the science-fiction concept of precrime, i.e. the idea that offences can be predicted, and corrected, before they occur. Not only would important questions about discrimination arise - predictive AI may lead to specific industries or firms being targeted because the underlying data may be skewed toward or against certain sectors - but wider concerns about competition enforcement policy, transparency and fairness would also need to be addressed. The use of AI by authorities may also result in confirmation biases, for example if exculpatory facts are overlooked by systems trained to identify incriminating evidence, leading to questions about due process and administrative propriety. Here, even the strongest proponents of AI would recognise the need for caution.

 

Authored by May Lyn Yuen and Victoria Cuidin.

 

 

References

1 Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts, COM/2021/206 final, available here.

2 Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), COM/2022/496 final, available here.

3 CMA press release “CMA launches initial review of artificial intelligence models”, 4 May 2023, available here.

4 Blog post by Staff in the Bureau of Competition & Office of Technology on “Generative AI Raises Competition Concerns”, 29 June 2023, available here.

5 Case C-252/21, Conditions générales d’utilisation d’un réseau social, available here.

6 Netherlands Authority for Consumers and Markets, “Market Study Cloud services”, Case no. ACM/21/050317 / Document no. ACM/INT/440323, 5 August 2022, available here; FTC, Solicitation for Public Comments on the Business Practices of Cloud Computing Providers, 22 March 2023, available here; CMA, “Consultation: Cloud services market study - Interim report”, 5 April 2023, available here.

7 Proposal for a Regulation of the European Parliament and of the Council on harmonised rules on fair access to and use of data (Data Act), COM/2022/68 final, available here. Through its proposed Data Act, the European Commission intends to require cloud providers to, amongst other things, remove obstacles to customer switching between cloud providers, including through the phasing out of egress fees that are associated with switching.

8 Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act), OJ L 265, 12.10.2022, available here.

9 Romanian Competition Authority press release “The Competition Council has opened an investigation on the labour force market”, January 2022, available here.

10 Report from the Commission to the Council and the European Parliament, Final report on the E-commerce Sector Inquiry, COM/2017/0229 final, available here.

11 CMA press release “CMA issues final decision in online cartel case”, available here.

12 Commission press release, “Antitrust: Commission fines four consumer electronics manufacturers for fixing online resale prices”, IP/18/4601, available here.

13 Speech by Commissioner Margrethe Vestager at the Bundeskartellamt 18th Conference on Competition, Berlin, 16 March 2017.

14 OECD, Data Screening Tools for Competition Investigations, OECD Competition Policy Roundtable Background Note, 2022, available here.
