In the last couple of years, generative artificial intelligence (AI) has emerged as a transformative tool for the advertising industry, promising unprecedented efficiency, cost savings and innovative ways of creating content. However, alongside its potential benefits, businesses are confronted with a host of new legal challenges. Key areas of concern include compliance with advertising regulations and consumer protection law, as well as intellectual property issues. This article considers some of the key legal pitfalls businesses operating in the UK should bear in mind when integrating AI into their marketing strategies, and provides insight on how to mitigate these risks effectively.

What is generative AI?

Generative AI refers to a type of AI that is designed to produce new content or data by learning from existing information. Generative AI models analyse patterns in the vast datasets they are trained on and generate output that resembles the training data without necessarily duplicating it.

How is generative AI used in advertising?

There are multiple ways in which generative AI is or can be used in advertising, including:

  • Brainstorming and ideation: Generative AI can suggest ideas and concepts for marketing campaigns and creative content.
  • Content creation and design: Generative AI can produce copy for advertisements and social media posts, scripts and storyboards for video content, and visuals and graphics.
  • Personalisation and customer segmentation: By analysing vast amounts of consumer data, generative AI can help tailor advertisements to individual preferences and behaviours so that they resonate more with viewers.
  • Chatbots and virtual assistants: Businesses can provide information by way of an AI chatbot, making it possible for customers to ask questions about the business’s products or services and receive instant responses.

In each case, it is likely that human teams will continue to be heavily involved in the process and use generative AI as a tool rather than relinquish control completely. Indeed, human oversight is necessary to ensure compliance with the law.

How is generative AI regulated in the UK?

In March 2023, the Conservative government announced, through the publication of its AI White Paper, that it did not intend to introduce new AI-specific legislation. Instead, it presented a framework consisting of five non-binding, cross-sector principles, signifying its “pro-innovation” approach to AI regulation. The government also confirmed that it would not establish a dedicated AI regulator; instead, existing regulators such as the Advertising Standards Authority (ASA) and the Competition and Markets Authority (CMA) would be required to interpret and apply the principles within their respective fields.

Ahead of its victory in the general election in July 2024, the Labour Party stated that it would introduce new regulation focusing on companies developing “the most powerful AI models” and prohibiting the creation of sexually explicit deepfakes. So far, no such steps have been taken. AI is therefore currently not subject to any specific legislation in the UK, and the extent to which it will be specifically regulated going forward remains uncertain, although the Labour Party’s manifesto suggests that any forthcoming legislation will focus primarily on developers of AI models rather than users. That said, this does not mean that there are no legal issues to consider when using AI. On the contrary, the use of AI gives rise to new considerations under existing laws, some of which we consider in this article.

Legal risks to consider

Consider whether the use of AI in content creation has to be disclosed

Given that there are no AI-specific rules, there is no express requirement to disclose when content is created by AI in the UK. Likewise, there is no broader requirement to disclose the source or creator of marketing content.

However, there is a general prohibition on unfair commercial practices, including practices that omit information the consumer needs in order to take an informed transactional decision (e.g. to make a purchase), and practices that contravene the standard of care a trader is expected to exercise towards consumers in accordance with honest market practice and good faith. In both cases, it must be shown that the practice causes, or is likely to cause, the average consumer to take a transactional decision they would not otherwise have taken.

It may be argued that information confirming that marketing content has been produced by an AI tool is material information, and/or that honest market practice requires businesses to disclose this. However, if an AI tool is used to create a script for a TV advert, it is likely to be difficult to prove that the average consumer’s decision to purchase the advertised product or service is affected by a failure to disclose that the script was AI-generated. By contrast, where AI-generated images or videos are used to promote, for example, a skincare product and the use of AI is not disclosed, consumers may assume that the content reflects real-life results, which risks misleading them. Similarly, guidance issued by the Committee of Advertising Practice (CAP) provides that ads using AI-generated images to make efficacy claims may mislead if they do not accurately reflect the efficacy of the product (much like a photo that has been photoshopped or subjected to a social media filter). Marketers should consider whether consumers need to be made aware of the use of AI in advertising in order to make an informed purchasing decision. If the answer is yes, it should be disclosed.

Marketers should also consider any contractual requirements to disclose the use of AI, e.g. those imposed by a social media platform or by the developer of the tool used to create the advertisement. Further, advertising agencies may choose to incorporate into their contracts the principles for the use of generative AI in advertising issued by the Incorporated Society of British Advertisers (ISBA) and the Institute of Practitioners in Advertising (IPA). These provide that advertisers and agencies should ensure that the use of AI is transparent where it features prominently in an ad and is unlikely to be obvious to consumers, and that AI should not be used in a manner likely to undermine public trust in advertising (for example, through the use of undisclosed deepfakes).

Make sure that AI-generated content complies with advertising regulations and consumer protection law 

Marketers should bear in mind that AI-generated content is subject to the same advertising rules as any other content, and marketers will be responsible for any marketing communications and materials they publish. Guidance from CAP emphasises that even if marketing campaigns are entirely generated using automated methods, marketers retain primary responsibility for ensuring that their advertisements comply with the CAP and BCAP Codes and consumer protection law. Among other things, this is relevant for companies that make chatbots available to customers to assist with product queries. Even if the information a chatbot provides in response to customers’ questions is based on data that was not supplied by the company making the chatbot available, the company will be responsible for any misleading or otherwise non-compliant claims the chatbot makes. An example would be a chatbot stating that a product is the best-selling of its type, a claim which is generally only permitted under the CAP Code where it is supported by up-to-date, comparative evidence relating to market share and/or unit sales and consumers are given enough information about the basis of the claim to verify its accuracy for themselves.

CAP has also highlighted the risk that some AI models amplify biases present in the data they are trained on, which could lead to socially irresponsible ads. Examples include AI tools portraying idealised body standards or depicting higher-paying occupations as held by men or by people with lighter skin. Marketers should be aware of this and sense-check any AI-generated materials to ensure that they do not inadvertently portray stereotypes or biases that may be offensive, harmful or irresponsible.

Similarly, marketers must make sure that any claims are properly substantiated, and should not rely on AI tools to provide accurate information.

Consider what intellectual property risks might arise when using generative AI in marketing campaigns

Currently, where marketers use AI tools to generate content for marketing campaigns, there is no certainty that the marketer will own the copyright in the resulting content; indeed, such content may not be protected by copyright at all. While the UK is one of a handful of jurisdictions that have specific copyright rules for computer-generated works (i.e. works with no human author), those provisions are untested and may be inconsistent with the EU test for “original” copyright works, which requires human creative input for certain works to be protected. This means that, until the provisions are either tested in court or the government decides to legislate in this area, there is a significant degree of uncertainty around whether generative AI output (in particular text, music and images) is protectable.

Marketers will have a better chance of arguing that a work is protectable if they can show that there was sufficient human creative input in the process, such that the output was driven by the marketer rather than being unpredictable (e.g. by using more sophisticated prompts, or a series of prompts, and refining the output). The position is less complex in relation to video content because films are not required to pass the threshold for “originality”. However, this is an evolving area, and it remains untested whether and how the usual rules apply, including in relation to AI-generated video content. There is therefore no guarantee that a particular output will be protectable. If a work is not protectable, it can be used freely by third parties, such as competitors, resulting in a dilution of unique, brand-related content.

Even if a work were found to be protectable by copyright under the UK rules for computer-generated works, or as a film, the owner would be the “person who made the necessary arrangements”. Who made the arrangements may not always be clear and will depend on the circumstances of the particular project and AI tool: it could be, for example, the developer of the algorithm, the person who chose the relevant input(s) and parameters for the AI model, the person who chose the parameters for the input prompts, or the person who funded the project. Further, if the AI tool has been licensed from a third party, the developer’s terms and conditions will apply. While some terms state that, as between the developer and the user, the user will own the IP, this is not always the case. As a result, there is a significant risk that, even if a work is protectable, it will be owned by the developer unless the terms and conditions state that it belongs to the marketer.

Businesses should also be aware that the use of generative AI can give rise to potential infringement claims if the output produced by the tool is sufficiently objectively similar to third-party IP. Some AI developers will indemnify users of their tools in relation to IP claims by third parties, but others will not; instead, they may put the onus squarely on the business to clear the rights in any output before use. Businesses will therefore need to check terms and conditions carefully and carry out a copyright clearance exercise in relation to any AI-generated content they want to include in an advertisement before using it.

Similarly, if a business uses an agency to create its advertising, the business should be aware that, even if the agency terms state that any copyright will belong to the business, there may be no copyright in content created using AI, so any such assignment of rights will not transfer any IP. Businesses should ensure that agencies are contractually required to disclose their use of generative AI in creating content, so that the business can decide whether to proceed. If the business does allow the agency to use AI-generated content, it should ensure that the contract includes indemnities covering any claims of infringement by third parties.

Authored by Penelope Thornton, Micaela Bostrom, and Matthieu Bouharmont.

Next steps

The use of AI by businesses in their marketing and advertising activities has introduced a range of new legal issues under the existing legal frameworks. As these frameworks were not designed to address the complexities arising from AI-generated content, legal compliance and effective risk management have become more challenging for marketers looking to stay at the forefront of technological developments.

If you would like to know more about how to address and mitigate the risks of using generative AI in your marketing activities, please do contact us.
