2024-2025 Global AI Trends Guide
Artificial intelligence (“AI”) is now all around us. In part 1 of this article, we looked at the approaches being proposed to regulate AI at a pan-EU level and in the UK.
In this part 2 article, we look at the mandatory requirements that businesses involved in the supply of AI products can expect in France, the Netherlands and the US. We also share our thoughts on what businesses can do, sooner rather than later, to prepare for the regulatory frameworks that will inevitably surround AI.
While there is currently no national regulation of AI in France, it is noteworthy that the French Government is taking active steps to make the country a world leader in these technologies.
As in the UK, a National Strategy for AI (otherwise known as the “SNIA”) was launched in March 2018, covering the period up to 2025. The first phase of the SNIA, launched in 2018, was aimed at creating the conditions for the development of the AI sector. The second phase, underway since 8 November 2022, is intended to expand the pool of talent trained in this field and to turn AI research and development efforts into economic successes.
This work has prompted a number of institutions to call on the French government to keep some form of AI regulatory framework in mind.
By way of example, on 31 March 2022 the French Administrative Supreme Court (“Conseil d'Etat”) issued a study, commissioned by the Prime Minister, on the use of AI by public administrations in respect of general and public services.
While this study was not specifically targeted at private companies, the French Administrative Supreme Court used it to call on the French government to anticipate a form of AI-specific regulation ahead of the European-wide AI legislation expected in the near future. In particular, the study warns against any "brutal application" of a new law in this area, given that this would not only entail significant economic cost but also run the risk of increased non-compliance and, ultimately, undesirable competition between French and European private operators. Instead, according to the French Administrative Supreme Court, the aim for now should be to draw up guidelines (which can be updated and amended over time) that can in turn be used to inform the drafting of any future national AI-related legislation.
Another French public institution, the National Consultative Commission on Human Rights (“CNCDH”), is also calling for the development of a legal framework for the use of AI aimed at both the private and public sectors. The CNCDH has in fact issued recommendations for a binding AI-specific legal framework capable of guaranteeing respect for individuals’ fundamental rights.
With a view to reconciling freedom and fundamental rights on the one hand, and innovation and public performance on the other, the French Administrative Supreme Court is thus proposing a far-reaching transformation of the role of the French data protection authority (“CNIL”), which would become the regulatory authority for AI systems.
That said, the regulation of AI is already one of the CNIL's main priorities, with its oversight guided by four main objectives:
Understanding the functioning of AI systems and their impact on people;
Enabling and guiding the development of AI that respects personal data;
Federating and supporting innovative players in the AI ecosystem in France and Europe; and
Auditing and controlling AI systems and protecting people.
To this end, the CNIL announced at the beginning of this year that it had set up a department specifically dedicated to AI. Five complaints have already been received, and the CNIL has launched a control procedure by submitting an initial questionnaire to the company concerned in order to assess its compliance with personal data protection regulations.
The CNIL has made clear that it intends to introduce express rules to protect the personal data of European citizens, in order to contribute to the development of AI systems that respect privacy. Businesses involved in the supply of AI products to be used in France will therefore have to ensure that their products comply with any mandatory requirements introduced by the CNIL in this regard.
Similarly, in the Netherlands there is currently no law or regulation in place which specifically regulates AI. If, and to the extent that, AI is already being used in products, services and processes placed on the Dutch market, the AI system is regulated by the same framework that applies to the main product, service or process it forms part of.
That said, back in 2019 the Dutch Government, like the other markets discussed in part 1 of this article and in this part 2 article, issued a Strategic Action Plan for AI (known as “SAPAI”). The plan is based on the understanding that if the Netherlands wants to participate at the forefront of a globally competitive economy, it has to accelerate the development and use of AI. To that end, the SAPAI presents a wide range of policy initiatives and action plans to strengthen the competitiveness of the Netherlands in AI on the global market. In particular, these initiatives are aimed at fostering AI in the economy via policies related to education, R&D and innovation, networking, regulation and infrastructure.
The SAPAI is based on three strategic pillars. The first is focused on capitalising on the economic and societal opportunities of AI; to achieve this goal, intensive cooperation between the public and private sectors is required to make a necessary and crucial difference on the European and global playing field. The second is aimed at creating the right conditions for the development and use of AI. The third relates to the protection of fundamental rights (such as trust, human rights, consumer protection and the safety of citizens) and the creation of appropriate legal and ethical frameworks.
The annual Dutch governmental budget for AI innovation and research has steadily increased over the last few years. In addition, in April 2021 an investment program was approved, aimed at maximising the possibilities of AI for Dutch society and the economy. To do so, the program will invest up to an additional EUR 276 million in the development of AI over the coming years.
At the date of writing, the United States has not passed any national legislation to regulate AI. Instead, federal agencies have developed sector-specific - and often voluntary - guidance regarding the development, deployment, and use of AI. At state level, things are a little different, with approximately ten individual states having passed laws governing the use of AI within their borders.
Beginning in late 2022, however, both the White House and Congress expressed their intent to develop a unified regulatory approach to AI that prioritizes trustworthy innovation and protects users’ privacy and civil rights. Federal agencies have subsequently issued new or updated guidance to reflect these priorities, and states have continued to consider AI-specific legislation.
In October 2022, the White House Office of Science and Technology Policy published a Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People (the “Blueprint”). Although non-binding, this white paper provides insight into the federal government’s goals for the future of AI in the United States. Similar to the approach taken by the UK Government, the Blueprint identifies five principles that will guide the development of AI policies in a manner that honors democratic values and protects civil rights, civil liberties, and privacy. The principles themselves are as follows: (1) safe and effective systems; (2) algorithmic discrimination protections; (3) data privacy; (4) notice and explanation; and (5) human alternatives, consideration, and fallback. The Blueprint contains further details and explanations of each of these principles, and a Technical Companion to the Blueprint suggests implementation techniques and clarifies how the principles support federal agencies’ existing, sector-specific AI guidance.
In April 2023, Senate leaders publicly proposed a four-part legislative framework for the regulation of AI but have not yet introduced formal legislation based on that framework. Though still vague, this fledgling framework aligns with the White House’s emphasis on ethical, safe, and trustworthy AI.
Although no formal rules or regulations specifically govern the use of AI, numerous federal agencies have clarified their interpretation of AI regulation under existing statutes. Like the White House’s Blueprint, these agencies balance efforts to encourage innovative, AI-powered products with attempts to honor individuals’ privacy and civil liberties. Examples include the following:
FTC – The FTC has issued guidance regarding its regulation of AI under the Equal Credit Opportunity Act, the Fair Credit Reporting Act, and Section 5 of the Federal Trade Commission Act. In February 2023, the FTC updated its guidance in a blog post titled “Keep Your AI Claims in Check.” This updated guidance clarifies the FTC’s interpretation of deceptive claims. Specifically, the guidance emphasizes that organizations’ claims about AI products are deceptive when the claims lack scientific support or apply only to certain users in specific circumstances. Pursuant to the guidance, organizations should provide adequate proof before claiming that an AI product performs a given task better than a product that does not use AI. Similarly, organizations should not claim that a product is AI-powered merely because an AI tool was used during its development. Finally, organizations should be aware of the reasonably foreseeable risks of their AI products and cannot escape responsibility for such risks by blaming third-party developers.
FDA – In September 2022, the FDA published its Guidance for Clinical Decision Support Software, which offers the FDA’s interpretation of how the Federal Food, Drug, and Cosmetic Act (“FDCA”) applies to medical products that use clinical decision support software. This guidance does not specifically reference AI, but may apply where clinical decision support software uses AI to recommend decisions. In April 2023, the FDA published Draft Guidance establishing its tentative approach to regulating AI-enabled medical devices in line with the White House’s Blueprint. Although the Draft Guidance is non-binding and remains open for public comment until August 30, 2023, it proposes an approach for AI or machine learning-based medical devices that a manufacturer intends to modify, whether through automated software updates or manual changes. In accordance with the principles set out in the Blueprint, the Draft Guidance emphasizes the need for clearer communications to users about an AI product’s performance across race, ethnicity, disease severity, gender, age, and geographical considerations. It also proposes pre-determined change control plans to guide the modification of AI-enabled software.
DOC, NAIAC – In April 2022, the Department of Commerce created its National Artificial Intelligence Advisory Committee (“NAIAC”) to advise the President on issues related to AI. In January 2023, the Department’s National Institute of Standards and Technology (“NIST”) issued its Artificial Intelligence Risk Management Framework (the “Framework”). This voluntary, sector-agnostic resource seeks to help organizations design, develop, deploy, and use trustworthy and responsible AI systems. The Framework offers four functions - Govern, Map, Measure, and Manage - collectively designed to promote the development of AI that is trustworthy, valid, reliable, secure, accountable, and responsibly designed. The Framework then divides each function into subsections and suggests best practices that organizations can adopt at each stage.
As of June 2023, 33 states, the District of Columbia, and Puerto Rico have considered bills related to AI, and 10 states have successfully passed laws governing AI. California has emerged as a leader in AI regulation by passing legislation to monitor the current uses of AI and accordingly regulate its future development. California’s efforts echo the federal government’s efforts to regulate the development of AI and offer insight into how states are implementing their own regulations. See, e.g., California AB 302 (a law requiring the Department of Technology to identify and report to the California Legislature all “high-risk automated decision systems” in use by state agencies); and California AB 331 (which, beginning in 2025, would require organizations deploying AI to conduct annual impact assessments of their AI tools).
While comprehensive AI regulation is yet to arrive, it is clearly fast approaching. Companies involved in the supply of AI products and/or deploying AI systems in the EU, UK and the US should therefore:
Keep an eye out for any sector-specific guidance that may be published by regulators in e.g. France and the UK ahead of any mandatory legislation.
Familiarise themselves with any legislative proposals to regulate AI (including e.g. the EU’s AI Act, as well as those that may be introduced at both federal and state level in the US) and begin to consider how their current uses may be classified and the obligations that would in turn fall to them.
Ensure awareness of other AI-related proposals, including e.g. the EU’s draft AI Liability Directive, which includes provisions on the responsibilities and obligations of actors in the AI supply chain.
Our Global Products Law practice is fully across all aspects of AI regulation, product safety, compliance and potential liability risks, and we have both industry-sector knowledge and a commercially focused approach to support you with your AI legal needs. We are actively monitoring developments in this area and encourage businesses to get in touch with any questions.
Authored by Valerie Kenyon, Magdalena Bakowska, Vicki Kooner, Christine Gateau, Lauren Colton, Cléa Dessault, Manon Cordewener, and Julie R. Schindel.