The UK AI Summit and the global race to regulate AI


This week’s UK AI Safety Summit is once again bringing the topic of AI regulation to the forefront of public debate. Amongst the hype and steady stream of news stories, something very important is happening. The AI Safety Summit represents an ambitious initial attempt by governments, industry, academics and experts to establish a broad consensus on rules that could govern the future of our economy and society in the age of AI.

The Summit comes at a time when policymakers across the globe are racing to introduce proposals for the regulation of AI technology. It also provides a useful opportunity to assess the current regulatory landscape and how it is likely to evolve.

The regulatory landscape

Arguably the most significant and ambitious proposal that has been made to date is the EU AI Act. This regulation was first introduced in 2021, before large language models went mainstream.

At the time, policymakers envisaged that the law would establish a comprehensive set of prescriptive product safety requirements. These were, and continue to be, particularly focused on discrete AI use cases that are perceived to pose high or unacceptable risks to individuals. Examples include facial recognition applications, medical devices, semi-autonomous vehicles and recruitment tools.

However, the recent rise of popular generative AI tools has already forced adjustments to the EU’s original plans. In June, the European Parliament proposed an additional framework of rules that would apply specifically to developers of ‘foundation models’ that can be used for a range of general purposes.

Concerns about legislators being unable to keep up with the rapid pace of change in AI are arguably one of the main reasons why the UK government has taken a more cautious approach to introducing new regulation. Rather than proposing a new AI law, the government has instead announced plans for a non-statutory set of core high-level principles that will apply to organisations on a cross-sector basis.

These principles are focused on key issues such as fairness, accountability and liability, and are intended to form the basis for existing UK regulators to develop their own detailed guidelines. Such guidelines are expected to address how organisations should develop and deploy AI systems in a manner that complies with existing laws in fields such as privacy, competition, consumer protection and financial regulation.

Meanwhile, in the United States, the Biden administration has made headlines with the publication of a sweeping new Executive Order (EO) on AI safety. The EO directs various US federal agencies to set new standards for AI safety and security, safeguard Americans’ privacy, advance equity and civil rights, support consumers and workers, and promote innovation and competition. This makes it increasingly likely that the US will introduce enforceable rules on AI safety before the EU or UK.

Similar initiatives are under way in many other jurisdictions, including China, where the government has moved ahead to implement specific rules relating to generative AI technology.

 

Reaching a global consensus

Yet while jurisdictions such as the EU, UK, US and China look to progress their own policy initiatives in AI, there is also a growing need for countries across the globe to reach a broad consensus on how to regulate the technology.

AI systems operate without boundaries. A chatbot or image generation tool can be developed and then launched around the world instantaneously, and the outputs from these tools can be rapidly disseminated through social media and other online channels.

If a jurisdiction acts independently in introducing restrictions on the development of particular types of potentially harmful AI, then this may merely incentivise the development of that same technology elsewhere.

Efforts to create an international standard for the safe development and deployment of AI are to some extent being hampered by concerns about competitiveness, with countries such as the UK looking to avoid creating an overly burdensome set of rules that may stifle innovation and create barriers to market entry.

Equally, the accelerated development of separate AI safety frameworks across major economies in parallel inevitably means that policymakers will establish differing approaches to addressing the same risks and challenges.

This race to regulate AI can be contrasted with the rise of global privacy laws over the past decade. In the case of privacy, the EU benefited from first-mover advantage through its implementation of the General Data Protection Regulation (GDPR). The GDPR is now seen as the gold standard, and many of its principles have since been adopted across other jurisdictions. By comparison, AI is currently the subject of a range of competing regulatory proposals, and it remains to be seen which of these may prevail.

However, there are also positive signs of international consensus. Analysis performed by Hogan Lovells earlier this year of nine different AI proposals found that policymakers are broadly in agreement about the immediate risks that need to be addressed through regulation and the broad principles that should underpin new frameworks. International cooperation has also been demonstrated through the G7’s Hiroshima Process, which has resulted in the publication of seven guiding principles relating to AI safety.

Yet some fundamental areas of divergence remain. These range from determining which AI systems should be subject to the toughest safety standards, to identifying how the regulatory burden should be split between the complex web of actors involved in a typical AI supply chain, and even to establishing a common definition of what constitutes an ‘AI system’.

 

Frontier AI

Beyond the most immediate debates concerning the first wave of AI regulation, it is also vital to consider what comes next. The laws that are currently emerging across the globe are predominantly focused on mitigating the risks that policymakers have identified in the AI systems that are commonly available today.

Equally, much recent public debate has centred on the potentially systemic risks posed by Artificial General Intelligence and the rise of sentient machines. But what is often missed is the progress that comes before this, through the next wave of AI. This is what the UK government has referred to as ‘Frontier AI’, and it forms the focus of this week’s AI Safety Summit.

In a new book, DeepMind co-founder Mustafa Suleyman refers to this next phase as ‘Artificial Capable Intelligence’ (ACI). ACI represents a future set of models that can not only respond to user prompts with computer-generated content, as existing chatbots do, but also perform a complex set of sequential tasks in a physical or digital environment based on those instructions. Imagine a single robot able to perform all of your day-to-day domestic housework and personal admin in accordance with your needs.

ACI and other future progress in this field will bring new challenges for policymakers. The EU’s struggle to ensure that the AI Act remains relevant in a world of generative AI highlights why current proposals will need to evolve continuously over the coming years to address the future capabilities of the technology. The hope is that this week’s AI Summit marks the beginning of this wider conversation.

 

Authored by Dan Whitehead.
