AI and ESG – friends or foes?

AI[1] and ESG[2] have both been climbing the agenda in the UK financial services industry in recent years.  As firms – driven by customer, investor and regulatory expectations – prioritise sustainable and ethical practices and products, they are likely to turn to the efficiencies that AI offers.  AI can help to analyse vast datasets, optimise investment decisions, and assess risks more accurately, all while striving to uphold ESG standards.  However, AI might raise risks for firms’ ESG priorities when it comes to environmental impact, bias, data privacy, ethical use of technology, and transparency.

This article helps firms to navigate some of the benefits and challenges arising from the interplay between AI and ESG.

AI: The benefits for ESG

Meeting ESG goals and expectations, and doing so efficiently and consistently, is a key priority for firms.  Their customers, their investors and their regulators expect it.  It is almost inevitable that firms will turn to AI to help achieve this – and according to a recent FCA/Bank of England survey, some firms already are[3].

Enhancing ESG performance: AI models can analyse vast amounts of data to help inform better investment decisions, monitor the performance of green financing products against targets, and assess firms' activities for sustainability reporting.  They can help to achieve social goals such as improving financial inclusion by providing alternative credit checks, or improving DEI strategies by screening candidates fairly and objectively.  And they can assist firms to monitor and comply with regulatory obligations, by scanning approved sources for updates and preparing summaries for human review.

Managing ESG risks: AI solutions can also help firms to identify and mitigate risks.  They can enhance transaction monitoring systems and controls to detect unusual activity, helping to prevent financial crime and improve governance.  And they can simulate different climate scenarios and their potential impact, enabling firms to predict market trends and adjust their strategies accordingly.

AI: The risks for ESG

Despite the undoubted positive impacts of AI, its use can present challenges that could hinder firms' ESG agendas.

Environmental impact: AI has the potential to create significant environmental impacts and hinder companies' efforts to reach net zero.  It relies on large data centres that consume considerable electricity and, perhaps less well known, substantial volumes of water for cooling.  Any firm that uses AI needs to consider the effect on its reportable carbon emissions.  Most firms must report emissions from in-house AI models and data centres (i.e. scope 1 or 2 emissions), and the FCA has encouraged firms voluntarily to report emissions from third-party or outsourced AI tools (i.e. scope 3 emissions).

Greenwashing risks: If AI systems are flawed, or work from poor-quality or out-of-date data, this could lead to inaccurate conclusions about a firm’s environmental credentials.  If relied upon and published, such conclusions could expose a firm to regulatory, litigation and reputational risks[4].

Harmful bias: Flawed or mismanaged AI systems have the potential to create harmful biases, unintentionally favouring or discriminating against specific individuals or categories of individuals in recruitment processes or customer approval checks.  So rather than advancing social goals by improving financial inclusion and DEI strategies through AI, firms may inadvertently do the opposite.  This could lead to complaints, discrimination claims and, in more serious cases, regulatory enforcement action, particularly where consumers are adversely impacted by biased decision-making[5].

Mishandling of data: Training AI models for use in customer contexts often requires extensive personal data, which can increase the risk of data privacy breaches and related claims by individuals or action by the ICO.

Transparency challenges: Transparency is one of the cornerstones of good governance, and AI can pose difficulties in this area.  Firms are well used to explaining their systems and controls, processes and decision-making to regulators, but the complexity of AI algorithms and the limited visibility into many AI training datasets can make it difficult for firms to understand and explain AI's processes and outcomes.

How can firms balance the risks and rewards of AI and ESG?

The intersection of AI and ESG considerations presents a unique set of risks and rewards for financial institutions.  In addition to implementing robust AI and ESG governance frameworks, policies and procedures, firms should ensure that these operate together rather than independently, and that those responsible for their development and implementation work collaboratively within the firm's risk profile.  In particular, firms should:

  • Carefully balance their commitment to sustainability with goals of achieving innovation and gaining a competitive advantage through AI.
  • Assess the environmental impact of their in-house and outsourced AI usage and ensure this is properly reported.
  • Train their AI tools on diverse and representative data, to reduce the risks of inaccuracy, undesired outcomes and harmful bias.
  • Maintain appropriate human involvement with and oversight of AI at all stages, to increase transparency and explainability where possible and better identify any flawed AI output.
  • Ensure that their data privacy protocols are sufficiently robust to address the risks posed by AI.

Through this holistic approach, firms can best ensure that they benefit from the enhancements to their ESG strategy that AI undoubtedly offers, while also addressing the challenges that it brings.

If you have any questions about this article, or any of the issues raised, please get in touch with one of the contacts listed.

Authored by Hannah Piper, Jennifer Dickey, Georgina Denton, and Sonali Patani.

References

  1. In this article we apply a broad definition of AI, consistent with that used in the new AI Act (on which see our updates here and here) – being any machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that infers, from the input it receives, how to generate outputs. 
  2. We expect firms are now familiar with the concept of ESG, which refers to various aspects of sustainability. The "E" captures issues such as climate change, pollution, deforestation and access to clean water; the "S" matters such as employment standards, diversity and inclusion, and data protection; and the "G" risk management, values, identifying and preventing corruption, and reporting standards and transparency.
  3. See “Artificial intelligence in UK financial services – 2024”, published on 21 November 2024, which indicates that approximately 5% of firms which responded to the FCA/Bank of England survey are currently using AI to optimise their carbon footprint estimation, with 15% more planning to use AI for this purpose over the next three years.
  4. See our article on the FCA’s new anti-greenwashing rule here.
  5. For example, the FCA might be able to bring enforcement action for breach of Principle 6 (which requires a firm to pay due regard to the interests of its customers and treat them fairly) or the Consumer Duty.
