On Friday 12 July 2024, the European Union published the final text of its long-anticipated AI Act in the Official Journal, marking a key milestone in the implementation of this transformative piece of digital regulation.
This date is significant for two reasons. Firstly, while the overall framework and substantive requirements of the AI Act (the “Act”) have been known for some months, the release of the final text confirms the precise wording of each provision. This will be vital for organisations in interpreting their obligations under the regulation in the coming months and years.
Secondly, and most importantly, the official version confirms when the various parts of the Act will take effect.
The regulation enters into force 20 days after publication, on 1 August 2024. From that point, organisations have a transition period in which to prepare for the Act’s application.
The Act will be implemented in four phases over the course of the next three years (see Art. 113 of the Act), as follows; a short illustrative encoding of these dates appears after the list:
Prohibited AI practices (2 February 2025) – prohibitions on a range of AI practices that are considered by the EU to pose an unacceptable level of risk will take effect.
General-purpose AI models (2 August 2025) – specific requirements on the providers of general-purpose AI models (GPAIs) will take effect, which will impact many of the most sophisticated LLMs and foundation models being placed on the market.
General application date of the Act (2 August 2026) – this is the official date on which all remaining provisions of the Act will apply (except those mentioned below). This includes all of the obligations relating to high-risk AI systems referred to in Annex III of the regulation, such as systems used in biometrics, education, employment, insurance, financial services and critical infrastructure.
High-risk AI system requirements for products and safety components (2 August 2027) – all remaining obligations relating to high-risk AI systems that are products or safety components already governed by existing EU harmonisation legislation, such as medical devices, machinery and radio equipment, will take effect.
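For organisations that want to track these milestones programmatically, the Python sketch below encodes the dates listed above and reports which phases have taken effect on a given day. The structure and names are our own illustration, not anything prescribed by the Act.

```python
from datetime import date

# Application milestones under Art. 113 of the AI Act (dates as published
# in the Official Journal; variable and function names are illustrative).
AI_ACT_MILESTONES = {
    date(2024, 8, 1): "entry into force",
    date(2025, 2, 2): "prohibitions on unacceptable-risk AI practices",
    date(2025, 8, 2): "obligations for providers of GPAIs",
    date(2026, 8, 2): "general application, incl. Annex III high-risk systems",
    date(2027, 8, 2): "high-risk rules for products/safety components under "
                      "existing EU harmonisation legislation",
}

def milestones_in_effect(as_of: date) -> list[str]:
    """List the milestones that have already taken effect on a given date."""
    return [label for d, label in sorted(AI_ACT_MILESTONES.items()) if d <= as_of]

# Example: which parts of the Act apply on 1 March 2026?
print(milestones_in_effect(date(2026, 3, 1)))
# ['entry into force', 'prohibitions on unacceptable-risk AI practices',
#  'obligations for providers of GPAIs']
```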
For the majority of organisations that are not involved in the development of GPAIs, the most immediate key dates will be 2 February 2025 and 2 August 2026, meaning that most companies have approximately two years to ensure that their AI compliance programs are fully implemented. For certain AI systems and general-purpose AI models already placed on the market before specified dates, the compliance deadlines are more generous.
The Act introduces a comprehensive cross-sector framework for the development, deployment and distribution of AI systems.
The extent to which an organisation that falls within the territorial scope of the Act is subject to its requirements is predominantly determined by reference to two key factors: the nature and purpose of the relevant AI system, and the role that the organisation plays in the supply chain.
Not all AI systems are subject to comprehensive regulation. Instead, the primary focus of the Act is on imposing obligations on companies that are responsible for the development, deployment and distribution of ‘high-risk’ AI systems. These are AI systems intended to be used for a purpose that is expressly listed in the regulation and that the EU considers likely to result in a high level of risk.
In addition to the prohibitions on certain specific AI practices, a separate framework also exists for the providers of GPAIs, who will generally be responsible for developing upstream AI models that can be configured and deployed for a wide variety of purposes. Equally, there are some more limited transparency and AI literacy requirements that apply to many providers and deployers of AI systems in general, irrespective of whether the system is classified as ‘high-risk’.
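One way to make this classification structure concrete is the rough triage sketch below. It is purely illustrative: the function, its parameters and the category labels are assumptions, and the Act’s actual tests turn on the detailed wording of Art. 5, Annex III and the GPAI provisions rather than a simple lookup.

```python
from enum import Enum

class RiskCategory(Enum):
    PROHIBITED = "prohibited practice"
    HIGH_RISK = "high-risk AI system"
    GPAI = "general-purpose AI model"
    MINIMAL = "general transparency and AI literacy duties only"

def triage(intended_purpose: str,
           is_general_purpose: bool,
           prohibited_practices: set[str],
           annex_iii_purposes: set[str]) -> RiskCategory:
    """Rough first-pass triage; the Act's real tests are far more granular."""
    if intended_purpose in prohibited_practices:
        return RiskCategory.PROHIBITED
    if is_general_purpose:
        return RiskCategory.GPAI
    if intended_purpose in annex_iii_purposes:
        return RiskCategory.HIGH_RISK
    return RiskCategory.MINIMAL

# Example: a CV-screening tool falls within the Annex III "employment" category.
print(triage("employment", False, {"social scoring"}, {"employment", "biometrics"}))
# RiskCategory.HIGH_RISK
```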
Consistent with other EU digital regulation, the potential penalties for infringing the Act are significant, with organisations facing fines of up to €35m or 7% of annual worldwide turnover (whichever is the higher) for the most egregious offences. The specific sanctioning regime for high-risk systems will be laid down by each EU Member State, so some degree of divergence is expected.
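As a worked illustration of that fine ceiling, the short sketch below computes the applicable maximum for a given turnover; the function name is hypothetical, and the figures are those stated above.

```python
def max_fine_eur(annual_worldwide_turnover_eur: float) -> float:
    """Ceiling for the most egregious offences under the Act:
    EUR 35m or 7% of annual worldwide turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * annual_worldwide_turnover_eur)

# A company with EUR 2bn turnover faces a ceiling of EUR 140m,
# because 7% of turnover exceeds the EUR 35m floor.
print(f"{max_fine_eur(2_000_000_000):,.0f}")  # 140,000,000
```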
EU authorities that are competent under the regulation are also being granted considerable additional powers, including the right to access source code, documentation and training data sets. Additionally, such authorities can perform evaluations of AI systems, require rectifications to be made and ultimately require that AI systems are recalled or withdrawn from being sold or made available within the EU.
In order to prepare for the Act’s application, organisations should take the following four steps as a priority:
Create an AI inventory – including the details of all AI systems that the organisation currently develops, deploys and distributes (a simple record structure for such an inventory is sketched after this list).
Perform an applicability assessment for the Act – which assesses the potential impact of the Act on the organisation (as a provider, deployer or distributor/importer) and the obligations that will likely apply as a result.
Undertake a gap analysis – which compares the current governance measures that the organisation already has in place with those that are required under the Act, with a list of recommended remediations.
Implement a compliance program – the program should take into account the findings of the AI inventory, applicability assessment and gap analysis and ensure that appropriate measures are in place by the relevant implementation date under the Act.
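As a starting point for the inventory, applicability assessment and gap analysis described above, the sketch below shows one possible record structure. All field names and the example entry are assumptions for illustration only, not terms defined by the Act.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry in an AI inventory; field names are illustrative assumptions."""
    name: str
    role: str                # "provider", "deployer", "distributor" or "importer"
    intended_purpose: str
    risk_category: str       # e.g. "prohibited", "high-risk", "GPAI", "minimal"
    compliance_deadline: date
    remediation_gaps: list[str] = field(default_factory=list)

# Hypothetical entry combining the inventory, applicability assessment
# and gap analysis for a single system.
inventory = [
    AISystemRecord(
        name="CV screening tool",
        role="deployer",
        intended_purpose="employment",
        risk_category="high-risk",
        compliance_deadline=date(2026, 8, 2),
        remediation_gaps=["human oversight procedure", "logging of outputs"],
    ),
]
```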
Authored by Eduardo Ustaran, Dan Whitehead, Michael Thiesen, and Juan Ramon Robles.