AI regulation in financial services in the EU and the UK: Governance and risk management


Artificial intelligence (AI) and machine learning (ML) technologies have started to transform financial services, bringing new business opportunities but also new risks. Financial institutions should adopt AI governance to comply with existing and upcoming AI-related laws and regulations in the EU and the UK, as well as with the principles, values and ethics that promote the responsible use of AI.

The adoption of AI in financial services is accelerating, with a growing number of financial institutions integrating or embedding AI and machine learning technology into their product offerings. The availability of Big Data, cloud-based hosting services, open-source AI software and enhanced infrastructure, such as graphics processing units (GPUs), to train and develop more sophisticated AI systems have all contributed to the rising adoption of AI. Financial institutions and FinTechs are either developing their own AI technology or relying on third-party vendors for AI solutions.

AI has started to transform, and will likely continue to transform, the business models of financial institutions. Service providers now offer AI as a Service (AIaaS), a cloud-based model for outsourcing AI, and financial institutions are integrating AI and machine learning solutions into the supply chains behind their product offerings. More financial institutions are structuring their business models not simply as B2B or B2C but as B2B2C or B2B2B, frequently acting as intermediaries that procure AI solutions from third parties and offer them to clients as part of a bundled product package.

Common uses of AI/ML technology by financial institutions include chatbots, robo-advisors, fraud and money laundering detection for AML and KYC checks, assessing creditworthiness and affordability, and evaluating insurance risk. AI and machine learning allow financial institutions to offer tailored and diverse products to their customers in a cost-efficient manner. The growing adoption and prevalence of AI, however, comes with rising concern about the potential harm it could cause to consumers and financial institutions. Data bias, model risk, discriminatory results, risks to privacy and human dignity, lack of transparency and insufficient oversight are only a few examples of potential problems that may pose significant financial and reputational risks to financial institutions. To illustrate the severity, the maximum fine that could be imposed under the proposed EU AI Act is EUR 30 million or, in the case of companies, 6 percent of worldwide annual turnover, whichever is higher.
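
To make that cap concrete, the short sketch below works through the "whichever is higher" rule. The function name and the example turnover figure are purely illustrative.

```python
# Illustrative arithmetic for the proposed EU AI Act's maximum fine:
# the higher of EUR 30 million or 6% of worldwide annual turnover.

def max_ai_act_fine(worldwide_annual_turnover_eur: float) -> float:
    """Return the theoretical maximum fine in EUR for a company."""
    flat_cap_eur = 30_000_000   # EUR 30 million floor
    turnover_rate = 0.06        # 6 percent of worldwide annual turnover
    return max(flat_cap_eur, turnover_rate * worldwide_annual_turnover_eur)

# A firm with EUR 1 billion in turnover faces a cap of EUR 60 million,
# since 6% of EUR 1 billion exceeds the EUR 30 million floor.
print(max_ai_act_fine(1_000_000_000))  # 60000000.0
```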

 

Governance and Risk Management

As highly regulated entities, financial institutions need to be cognisant of the regulatory framework governing the use of AI in their systems and products. This applies not only to how they use AI themselves but also to how their vendors employ the technology. It is important to understand that the legal requirements for the use of AI extend beyond AI-specific laws and regulations: they also encompass existing regulations and regulatory guidance applicable to financial institutions, their vendors and the AI supply chain. These legal requirements often apply to financial institutions irrespective of whether they use, develop or procure AI systems. However, the legal landscape governing AI may not be well understood by financial institutions and third-party service providers. For instance, the Bank of England’s DP5/22 on Artificial Intelligence and Machine Learning clarifies that the approach of UK supervisory authorities primarily revolves around interpreting how the existing UK regulatory framework relates to AI. The UK approach aims to address any identified gaps in the regulatory framework while considering the overlaps within the existing sectoral rules, policies and principles of the UK financial services regulatory regime that apply to AI.

Financial institutions can leverage their existing regulatory framework to ensure compliance with any existing or upcoming AI regulatory framework, with governance being one crucial area. Many have already established robust corporate and data governance frameworks that enable effective oversight and adherence to existing regulations, and they can address any gaps by implementing supplementary measures as necessary. Various AI governance tools are available to support this process (a simplified code sketch of a few of these controls follows the list), including:

  • securing accurate and reliable data;

  • conducting tests prior to implementation of AI models or any model updates;

  • undertaking a human review of outputs produced by AI models;

  • establishing internal governance committees or units;

  • making senior managers ultimately responsible for AI oversight;

  • conducting AI impact assessments prior to the development or deployment of AI systems;

  • drafting AI ethics and values statements or frameworks;

  • training employees with respect to AI ethics, laws and risks of using AI systems and machine learning;

  • record keeping; and

  • making efforts to uphold transparency.
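
As a purely hypothetical illustration of how a few of these tools (pre-deployment testing, human review and record-keeping) might be operationalised, the sketch below wires them into a simple model-release gate. All names, fields and thresholds are invented for illustration and do not come from any regulator's guidance.

```python
# Simplified, hypothetical sketch of an AI model-release gate combining
# pre-deployment testing, human sign-off and record-keeping (audit logs).
# Every name and threshold here is illustrative, not a real framework.

import logging
from dataclasses import dataclass
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_governance")  # record-keeping via audit logs

@dataclass
class ModelRelease:
    name: str
    version: str
    validation_accuracy: float         # outcome of pre-deployment testing
    bias_checks_passed: bool           # outcome of data/bias review
    human_reviewer: str | None = None  # named reviewer for accountability
    approved_at: str | None = None

def approve_release(release: ModelRelease, reviewer: str,
                    min_accuracy: float = 0.90) -> bool:
    """Approve a release only if tests, bias checks and human review pass."""
    if release.validation_accuracy < min_accuracy:
        log.warning("%s v%s rejected: accuracy %.2f below threshold",
                    release.name, release.version, release.validation_accuracy)
        return False
    if not release.bias_checks_passed:
        log.warning("%s v%s rejected: bias checks failed",
                    release.name, release.version)
        return False
    release.human_reviewer = reviewer
    release.approved_at = datetime.now(timezone.utc).isoformat()
    log.info("%s v%s approved by %s at %s", release.name, release.version,
             reviewer, release.approved_at)  # audit-trail entry
    return True
```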

If financial institutions have no such governance framework, they should take steps to consider, incorporate and adhere to AI-related principles (such as fairness, safety, sustainability, accountability and transparency) and introduce the measures necessary for the effective implementation of those principles at all stages of the AI life cycle. Financial institutions can adopt both compliance-based and ethics-based approaches as guiding principles. The two are closely related: while the former primarily centres on developing systems and functions that comply with relevant laws and regulations, the latter focuses on preventing harm, respecting rights and assuring safety. These approaches complement each other and collectively contribute to the responsible use of AI.

Despite the challenges financial institutions may encounter when establishing and implementing AI governance measures, we expect more guidance to be made available by regulatory authorities in the near future, whether as voluntary or mandatory requirements. We have also outlined below why it may be particularly important for AI providers and users, whether located in or outside the EU, to adopt AI governance and to assess whether the upcoming EU AI Act and the UK AI regulatory framework are relevant to them, on top of any existing laws and regulations that may apply.

 

AI Legal Landscape in the EU

On 21 April 2021, the European Commission published its comprehensive proposal for the AI Act, a regulation governing the development, deployment and use of AI systems across the EU. The proposed legislation is currently under consideration by the EU's legislative bodies and is expected to come into force towards the beginning of 2024. The regulatory framework adopts a risk-based approach to AI regulation and targets general-purpose AI, including generative AI with its multitude of more specialised downstream applications.

AI governance requirements under the AI Act are designed to apply across industry sectors and involve categorising AI applications into four risk categories, each with distinct constraints, obligations and regulatory expectations. Particularly relevant is the list of AI applications labelled “high risk”, which includes several from the financial services industry, such as credit scoring and creditworthiness evaluations. The governance requirements for these and other high-risk applications cover several aspects, including ongoing risk management, data governance, knowledge and competence, accuracy, robustness and cybersecurity, transparency and the provision of information to users, and record-keeping.
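
As a rough illustration of this risk-based structure, the mapping below sketches the four tiers in the Commission's proposal and the broad consequence attached to each. The tier names follow the proposal, but the summaries are heavily paraphrased and the mapping itself is illustrative only.

```python
# Simplified illustration of the AI Act's four-tier, risk-based approach.
# Tier names follow the Commission proposal; summaries are paraphrased.

RISK_TIERS = {
    "unacceptable": "prohibited outright",
    "high": ("strict obligations: risk management, data governance, "
             "accuracy, robustness and cybersecurity, transparency, "
             "record-keeping"),
    "limited": "transparency obligations (e.g. disclosing that a chatbot is AI)",
    "minimal": "no additional obligations beyond existing law",
}

# Credit scoring / creditworthiness evaluation falls in the high-risk tier.
print(RISK_TIERS["high"])
```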

Financial institutions that use, develop or procure AI systems should evaluate the potential applicability of the AI Act regardless of where they are located or established, due to its extra-territorial effect. The AI Act can apply to providers that place AI systems on the market or put them into service within the EU, as well as to providers and users of AI systems that are physically present or established in a third country where the output produced by the system is used in the EU. The scope of the AI Act therefore extends beyond the EU, and financial institutions established outside the EU will still need to take appropriate precautions to comply with the forthcoming legislation. In practice, they may need to consider adopting a common set of AI systems that comply with the AI Act or using different AI systems in different jurisdictions.
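
The jurisdictional hooks just described can be reduced, very roughly, to the boolean sketch below. Whether the Act actually applies is of course a legal question for counsel; the function and its parameters are illustrative only.

```python
# Heavily simplified sketch of the AI Act's territorial scope test as
# described above. Actual applicability is a legal judgment, not a boolean.

def ai_act_may_apply(placed_on_eu_market: bool,
                     put_into_service_in_eu: bool,
                     in_third_country: bool,
                     output_used_in_eu: bool) -> bool:
    """Return True if any of the stated jurisdictional hooks is met."""
    return (placed_on_eu_market
            or put_into_service_in_eu
            or (in_third_country and output_used_in_eu))

# A non-EU institution whose system's output is used in the EU is in scope.
print(ai_act_may_apply(False, False, True, True))  # True
```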

 

AI Legal Landscape in the UK

Unlike the EU, the UK government is not planning to introduce AI-specific legislation or put AI principles on a statutory footing, at least in the near future. In its recent white paper, the government confirmed that existing laws may apply to AI. Instead of new legislation, the UK will adopt a cross-cutting, context-specific and principles-based regulatory framework, implemented by existing regulators on a non-statutory basis. The framework focuses on the use of AI rather than the technology itself and, to uphold the principle of proportionality, will regulate AI based on the outcomes it is likely to generate in particular applications rather than assigning rules or risk levels to entire sectors or technologies. The framework is underpinned by the following five principles to promote and guide the responsible development and use of AI in all sectors of the economy:

  • Safety, security and robustness;

  • Appropriate transparency and explainability;

  • Fairness;

  • Accountability and governance; and

  • Contestability and redress.

Financial institutions that plan to develop, provide, procure or use AI systems need to consider ways to manage and control AI risks. From a compliance perspective, they should assess whether existing laws in the UK, such as the UK GDPR, the PRA's rules on outsourcing and third-party risk management, the Equality Act 2010, consumer rights law, tort law, MiFID II, and regulatory guidance published by the FCA or PRA, might be applicable, and devise ways to comply with the relevant obligations. It is also important to ensure that their governance framework and risk-mitigation measures align with the UK's new AI framework and any upcoming sector-specific guidance.

Where existing UK laws apply to a party in the AI value chain, that party will need to comply with the relevant guidance, laws and regulations.

 

Next Steps

Many financial institutions already possess a robust governance structure guided by existing regulations, such as the GDPR, and guidance issued by supervisory authorities. Considering that AI governance may rest on similar underlying principles, values and ethics, it is prudent for these institutions and their vendors to create a comprehensive inventory of current and forthcoming laws and regulations and their corresponding requirements. A gap analysis then enables them to identify which parts of their existing governance structure must be adapted to effectively address the new AI-related requirements.

As previously discussed, it is important for financial institutions to remain informed about legal developments across various jurisdictions and any sector-specific guidance applicable to their operations. This ensures they stay up to date with new regulations and can proactively incorporate necessary adjustments into their governance framework.

 

 

Authored by John Salmon, Leopold von Gerlach, and Daniel Lee.
