2024-2025 Global AI Trends Guide
On 29 March 2023, the UK government published its long-awaited white paper on its intended approach to regulating AI. The proposal seeks to strike a balance between the primary policy objective of creating a ‘pro-innovation’ environment for business and developing trustworthy AI that addresses the most significant risks to individuals and society.
Since the UK first published its policy outline on this topic in July 2022, interest in the potential of AI technologies, particularly generative AI, has grown considerably. Nonetheless, the approach now being put forward is consistent with the original government position and remains broadly the same.
The regulatory framework is intended to be underpinned by five core principles, which are meant to govern the responsible development and use of AI. These are:
Safety, security and robustness – AI systems should function in a robust, secure and safe way throughout the AI lifecycle, with risks being continually identified, assessed and managed.
Appropriate transparency and explainability – AI systems should be appropriately transparent and explainable to allow for outputs from such applications to be interpretable and understandable.
Fairness – AI systems should not undermine the legal rights of individuals or organisations, result in discrimination or create unfair market outcomes.
Accountability and governance – Governance measures should be in place to ensure effective oversight of the supply and use of AI systems, with clear lines of accountability established across the AI lifecycle.
Contestability and redress – Where appropriate, users and other impacted third parties should be able to contest an AI decision or outcome that is harmful or creates material risk of harm.
It is currently envisaged that these principles will initially be placed on a non-statutory footing, meaning that the UK government does not intend to introduce new legislation at the current time. Instead, existing sector and domain-specific regulators such as the UK Information Commissioner’s Office (ICO), the Financial Conduct Authority (FCA), the Competition and Markets Authority (CMA) and the Medicines and Healthcare products Regulatory Agency (MHRA) will be required to adopt their own guidelines on how the principles should be implemented by relevant organisations in practice. Notably, the white paper confirms that there is no immediate intention to create a new AI regulator.
The principles and sector-specific guidance will also be supplemented with voluntary ‘AI assurance’ standards and toolkits, which are intended to assist with the responsible adoption of AI.
The UK’s approach to AI regulation can be clearly contrasted with the European Union’s proposed AI Act. While the UK looks to adopt a flexible and ‘proportionate’ approach to AI regulation, the EU has instead sought to implement a more prescriptive, product-safety-style law, which imposes a detailed set of technical and organisational requirements on developers and users of ‘high-risk’ AI systems.
Equally, the EU AI Act seeks to introduce a consistent set of governance standards that are intended to apply across industry sectors in a broadly static manner. Conversely, the UK is looking to rely on regulators in particular sectors and domains to develop their own approaches to compliance, with reference to a set of high-level centralised objectives, which are expected to be modified and updated on an iterative basis as technology develops and risks evolve.
For many organisations that develop and use AI across multiple jurisdictions, the challenge will be in developing a consistent and sustainable global approach to AI governance and compliance which satisfies these diverging regulatory standards. At a practical level, the UK approach is likely to be regarded as a baseline level of regulatory obligations, which may be sufficiently broad to have global relevance, while the EU approach is likely to be seen as requiring significantly higher standards of compliance.
The UK government has indicated that it intends to move quickly in the adoption of the framework across relevant sectors and domains. Over the coming months, regulators will be provided with guidance on how they should be implementing the principles, and this will be undertaken in parallel to a public consultation which is due to run until 21 June 2023.
It is therefore possible that different regulators will start publishing guidelines in the next 6-12 months. In fact, some authorities, such as the UK ICO, have already started to introduce extensive guidance in this area, with their most recent publication from earlier this month focusing on how the GDPR principle of fairness applies in the context of AI models.
For organisations, this rapid adoption of new standards means that immediate steps should be taken to assess the potential impact of the regulatory framework on their business. It will be particularly important to consider the extent to which AI technologies are currently being developed and used in-house, and whether the proposed principles are being suitably addressed through existing governance measures.
Authored by Dan Whitehead and Eduardo Ustaran.