In February 2020, the European Commission announced its strategy for shaping the digital future of the bloc. This included the publication of its long-awaited white paper on the future of artificial intelligence, setting out proposals for a regulatory framework to govern the adoption and application of AI in both the commercial and public realms.
The reforms come in response to growing public and media concern about the potential harms that may be caused by autonomous machines, and follow on from work undertaken by the Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG).
Developing AI that EU citizens consider trustworthy is the foundation of the regulatory proposals, the objective being to build an 'ecosystem of trust.' As digital technology becomes an ever more central part of people's lives, the Commission argues that citizens must be able to trust it, and that such trust is a prerequisite to its uptake. The proposed solution is a proportionate, consistently applied regulatory framework fit for adoption across Europe.
The white paper identifies three primary categories of risk that its regulatory framework needs to address in order to prevent both material and immaterial harm to individuals: protecting the fundamental rights of individuals as laid down in the EU Charter (e.g., privacy and non-discrimination); ensuring the safety of AI applications; and addressing the allocation of liability and responsibility for the effects and consequences of autonomous machines.
Proposing new regulation in this field presents various challenges. One is how existing laws that already govern areas such as data protection, product liability, and anti-discrimination will align with any new regulatory framework.
The Commission proposes to address this through a two-pronged approach: existing EU laws would be reviewed and, where necessary, modified to address issues specific to AI, and then supplemented by a new, dedicated law.
Perhaps surprisingly, the proposed new regulation is relatively limited in scope, particularly when compared with the more expansive ambitions put forward by the AI HLEG in a paper published in April 2019. The white paper advocates a risk-based approach, whereby each AI application is assessed individually to determine the potential risks it poses to individuals and society. Only those applications deemed 'high risk,' taking into account their potential safety implications and threats to individuals' fundamental rights, would become subject to the new regulation.
Where the new regulation applies, the Commission proposes imposing additional mandatory requirements split across six fields: training data; data and record-keeping; the information to be provided to users; robustness and accuracy; human oversight; and specific requirements for particular applications, such as remote biometric identification.
The white paper is subject to open consultation until 31 May 2020, following which it is likely that the Commission will put forward revised proposals.
Authored by Dan Whitehead