2024-2025 Global AI Trends Guide
The U.S. Department of Justice’s Antitrust Division (DOJ) and Stanford University’s Institute for Economic Policy Research recently hosted a public workshop on the state of competition in industries building or using artificial intelligence (AI). The workshop featured an array of researchers, industry leaders, and government officials from the U.S. and abroad. Speakers highlighted the swiftly changing competitive dynamics across the entire AI value chain, from chips to end-users, and shared the latest insights from industry and government on this rapidly evolving technology. The panels and keynotes were wide-ranging but focused largely on four overarching themes: (1) competition in AI inputs, (2) IP rights and AI training, (3) consumer protection and data privacy, and (4) trust, provenance, safety, and security.
Panelists discussed potential competition concerns about the perceived consolidation of control over AI inputs, particularly training data and chips, among a small number of firms. Echoing earlier regulator statements, one speaker remarked that access to data is particularly important for AI model development. Relatedly, panelists noted the importance of access to chips and other advanced hardware—another area heavily impacted by AI’s growth. Addressing the technological and economic challenges of creating the robust networking architectures needed to develop advanced AI models, some panelists called for an ecosystem with multiple players in the hardware and chips markets to ensure interoperability between hardware and software.
AI is also being leveraged to improve efficiencies, drive technological developments, and solve complex problems. One industry panelist highlighted how a platform approach could be used to rapidly integrate new AI services and models. Another panelist discussed efforts to use engineering vision systems to monitor food and water production, citing AI’s ability to generate quantifiable insights that can enhance decisionmaking and operational efficiency. A third panelist noted that vertical integration could also result in more seamless, efficient, and cost-effective systems.
Federal Trade Commission (FTC) Deputy Chief Technology Officer Alex Gaynor raised questions about the competitive dynamics within the AI supply chain, particularly with respect to vertical integration. The FTC is conducting a 6(b) study to understand the competitive dynamics between AI developers and cloud service providers.
Senator Amy Klobuchar also spoke about competition issues. She identified another dimension of consolidation: cloud computing providers preferencing their own proprietary AI models. She recommended monitoring the AI supply chain for anticompetitive behavior and discussed the proposed American Innovation and Choice Online Act, which would seek to restrict purported self-preferencing practices by large digital platforms.
Karen Croxson, Chief Data, Technology and Insight Officer of the UK Competition and Markets Authority, discussed her view of three interconnected risks to fair competition: that incumbents may (1) constrain access to data, (2) engage in anticompetitive conduct in downstream markets essential for AI model deployment, and (3) form partnerships involving key suppliers. Ms. Croxson argued that these risks underscore the importance of ensuring access to critical inputs to maintain a competitive AI marketplace.
As AI technology continues to evolve, so too does the debate over what constitutes fair use of public data for training models. AI developers often use publicly available works to build and train these models. Media and entertainment industry representatives at the workshop voiced concerns about their lack of bargaining power over the use of copyright-protected works as AI training inputs. They emphasized the continued need for artists to generate content and raised concerns about how AI-generated content could compete with human-generated content for revenue.
Panelists discussed how various licensing frameworks could be used for training AI models. Some panelists noted that, while the collective licensing model used in the music industry creates efficiencies when negotiating contracts with many small businesses, it does not match the structure of negotiations with the larger corporations developing or deploying AI. Some entertainment industry leaders suggested ways the DOJ could protect creators’ rights, including ensuring that competition is not consolidated among a few companies, clarifying that copyrighted but publicly available works should not fall under the fair use doctrine, and creating an avenue for rights holders to negotiate with large companies.
Kathi Vidal, Director of the U.S. Patent and Trademark Office (USPTO), stated that one of the USPTO’s primary goals is to ensure accessible entry into the AI marketplace. Ms. Vidal noted that the USPTO and the Biden Administration have focused on human-centered approaches in their AI policy considerations. For example, the USPTO recently proposed inventorship guidance for AI-assisted innovations, assessing patentability based on the degree of human contribution. Ms. Vidal stated that further guidance on patent eligibility is forthcoming, but that copyright issues are more complicated.
Consumer protection and data privacy are also hot-button issues for AI, particularly for AI systems integrated into sensitive sectors like healthcare and financial services. One panelist pointed out that, while privacy is critical, de-identifying data can limit the potential benefits of data insights, such as improving diagnostics, expediting drug research and development, and streamlining administrative processes. Panelists agreed that striking a balance is essential to promoting competition and consumer welfare.
Standards and frameworks can also help guide the responsible use of AI. The National Institute of Standards and Technology (NIST) is developing voluntary standards that companies can implement for the responsible development and deployment of AI. In light of these privacy and consumer protection concerns, NIST is working to establish high-level policies to ensure that AI systems are reliable, accurate, and non-discriminatory. One speaker argued that the responsibility of AI developers and deployers should not end with their immediate role, suggesting that transparency and reporting requirements throughout the AI value chain are needed to ensure accountability at every juncture. Another panelist added that these obligations are particularly complex in the context of large language models, where ongoing maintenance and updates can lead to inconsistent system behavior.
A representative of the Consumer Financial Protection Bureau (CFPB) commented that there should be no exceptions for AI or other complex technologies in enforcing federal consumer financial protection laws. For example, businesses making credit decisions must provide clear and specific reasons for any adverse action taken against a consumer, regardless of the complexity of the algorithm employed. This enforcement approach ensures that consumers remain protected even when decisions are made by “black box” AI systems.
Speakers also discussed trust and provenance in AI and their implications for national security. Věra Jourová, European Commission Vice President for Values and Transparency, discussed the EU’s goals of consumer security and a competitive market. EU leaders reached an agreement with G7 leaders on guiding principles for further security measures in the EU AI Act. Ms. Jourová also flagged concerns about the use of digital tools by Russia, China, and other actors to spread disinformation and conduct foreign interference campaigns. One speaker disagreed with the EU’s regulatory approach, arguing that attempts to regulate evolving industries preemptively could stall innovation and hamper the development of a strategically important domestic AI sector.
The AI and competition workshop highlighted the dynamism of this rapidly evolving field. As AI technologies continue to transform industries and redefine competitive landscapes, it is crucial for stakeholders—ranging from tech developers to policymakers—to engage in ongoing dialogue.
Hogan Lovells continues to monitor global regulatory and policy developments across the AI ecosystem. Our cross-sector AI team is dedicated to helping clients stay ahead of the curve and informed about these conversations.
Authored by Edith Ramirez, Mark Brennan, Chris Fitzpatrick, and Ryan Thompson.
Ellen Jin and Aaniyah Hicks, summer associates in our Washington, D.C. office, contributed to this article.