2024-2025 Global AI Trends Guide
Being an innovative business, especially one working with artificial intelligence (AI), comes with challenges and legal risks. As AI applications become more ubiquitous, continue to advance, and are integrated into more business models and industries, businesses’ exposure to litigation is likely to increase. These disputes can disrupt business and lead to significant financial and reputational damage, both locally and globally.
AI disputes arise from issues such as algorithmic bias, data privacy concerns, intellectual property rights, and liability for AI-driven decisions. Understanding and navigating these disputes is crucial to ensuring that AI is developed and deployed responsibly, balancing innovation with accountability.
Companies and organizations can safeguard against and mitigate such risk by (i) ensuring that the AI application was programmed correctly; (ii) maintaining documentation to show that the AI input was correct, appropriate, and not corrupted; (iii) sufficiently supervising the AI application and its output; and (iv) establishing guardrails against users misusing the AI application.
This chapter will explore the key areas of contention and the evolving legal landscape surrounding AI applications.
We anticipate three categories of allegations by claimants with respect to AI applications:
Allegations of incorrect information created by generative AI, such as chatbots or image and voice generators
Allegations of interference with other IT systems resulting in downtime or financial loss, e.g. if an AI application allegedly made adverse investment decisions
Real-world accidents allegedly caused by AI applications, e.g. by autonomous vehicles, robots or AI-controlled industrial facilities
Potential parties against whom such allegations might be made include, of course, the developer of the AI system, its operator (if different from the developer) and its user. In many cases, there may also be a distributor involved who may face such allegations. Finally, many of the above will have insurance coverage, which professional claimant firms typically also target. It has also been suggested that AI systems be given separate legal personality – as an "e-person"1 – which could make the AI system itself a target as well. However, the concept of an e-person has so far found few supporters, rendering a lawsuit against an AI application itself unlikely in the foreseeable future.
From a legal perspective, potential claimants may seek to rely on various areas of law to support their alleged claims regarding the use of AI:
Contractual basis: Claims may be based on contract law, as there are usually contractual relationships between the developer, operator and users of AI systems. Especially in civil law jurisdictions, statutory law (e.g. regarding accessory obligations) may impose duties beyond the letter of the implemented contracts and terms of use. However, this is also the area where the contracting parties can protect their interests in the simplest and most nuanced way, by using sensible contractual clauses.1
Product liability basis: Users and third parties may try to raise claims on the basis of product liability laws. A prerequisite would be that AI systems are considered ‘products’ and contain a ‘defect’. In the EU, for example, the consolidated text of the Directive of the European Parliament and of the Council on liability for defective products was approved by the European Council in January 2024. The proposal widens the scope of the existing EU Product Liability Directive and confirms that software and products containing AI are “products”. However, mere information most likely does not qualify as a product. Furthermore, not every presumed 'defect' of an AI system also constitutes a 'defect in the properties of the product' as defined by product liability law. Product liability laws typically do not require a product to be "flawless"; rather, it must comply with a "justified expectation of safety". On 13 March 2024, the European Parliament adopted the consolidated EU AI Act, which prohibits AI systems that pose an “unacceptable risk” and increases requirements for “high-risk” AI systems. However, it does not contain specific liability provisions. It is likely to become law in mid-2024, with various prohibitions and provisions coming into force at different intervals from the end of 2024 to 2027. This runs in parallel with the proposed EU AI Liability Directive on adapting non-contractual civil liability rules to artificial intelligence.2
Tort law basis: Most legal systems provide for fault-based claims for damages (i.e. tort law), which claimants may try to apply to AI systems as well. However, there are also proposals for a specific liability regime for AI systems.3 In addition, claimants may allege that purported issues with an AI system fall under specific rules, e.g. rules governing the liability of a person performing a dangerous activity (e.g. Art. 2050 Italian Civil Code, Art. 1242 French Civil Code), the liability of parents/guardians etc. for minors/disabled persons (Art. 2047, 2048 Italian Civil Code), the liability of animal owners (Art. 833 German Civil Code), the liability of vehicle owners (Art. 7 German Road Traffic Act), or the operation of hazardous installations (sec. 25 et seq. German Nuclear Energy Act). However, these approaches are criticized on the grounds that such analogies fail to take into account that the above-mentioned rules are clear exceptions to the general rules of civil liability, which makes them unsuitable for analogous application.3
Regulatory basis:4 Claims may also be based on specific regulations, such as data protection or intellectual property (IP) law. For example, once personal data is processed by AI systems, data protection regulations (such as the GDPR in Europe) impose requirements that must be complied with. According to Art. 82 GDPR, any person who suffers pecuniary or non-pecuniary damage as a result of a breach of data protection requirements may bring non-contractual claims for such damage. In addition, regulations such as the GDPR provide for fines in the event of a breach. Moreover, claims brought under specific liability regimes, such as anti-discrimination laws (e.g. the German anti-discrimination law "AGG") or professional liability laws (e.g. liability of doctors or lawyers, but also D&O liability) are conceivable in connection with AI applications.
Insurance:5 The increased use of AI systems opens the door to insurance products for AI. Just as many countries require car owners to take out insurance, there are voices calling for manufacturers and professional operators of AI systems to be required to take out specialized liability insurance. Such insurance would go beyond the typical coverage of cyber insurance, which generally does not cover bodily harm, brand damage or damage to physical property. In addition to reducing costs, specialized AI insurance could further encourage the development of best practices for companies using AI, as insurers often impose corresponding requirements.
Cryptocurrency:6 There is an increasing incidence of cryptocurrency-related disputes across the globe, for example involving a cryptocurrency exchange in relation to a threatened joint venture dispute. These disputes are inherently cross-border in nature, and the complex and potentially clashing laws of different jurisdictions may apply.
Online marketplaces:7 Online marketplaces are also facing increased actions against fake-review businesses. Fake online reviews influence billions of dollars of e-commerce spending annually in the US and the UK.
Civil claims for damages typically require (1) some sort of breach of the law or of a contract, (2) an element of fault, and (3) a causal link between the breach and the damage. The implementation and use of AI systems raises several open legal questions for potential claimants in this regard:
Burden of proof:9 One of the main challenges for alleged damage caused by AI applications is the burden of proof. In general, the injured party bears the burden of proof. However, regulators and legal commentators take the view that the injured party often lacks the necessary insight into, or information about, the AI system. Against this background, the EU, for example, is working on an AI liability directive which aims to make it easier for the injured party to prove that an AI system has breached the law, by providing rights to information and establishing a rebuttable presumption of a causal link between the fault, the functioning of the AI system and the damage. Similar to such regulatory rights to information, some courts may also shift the burden of proof, to some extent, to the party to the dispute that possesses further information, such as the developer of the AI application in question.
Attribution of fault:10 When claims are asserted in relation to AI systems, it is not always possible to attribute fault to a specific entity due to autonomous decision making, lack of knowledge of the potentially liable parties, and lack of subjective fault on the part of the AI application itself. Fault itself usually involves negligent or intentional behaviour, a concept that is not transferable to algorithm-based AI applications. As a solution, some propose either opening up the possibility of direct liability of the AI system by granting it legal personality (‘e-person’) or attributing the "fault" of the AI system to either the operator, the developer or the user of the AI system.
Standard of care:11 Finally, there is a lively legal debate about the applicable standard of care when operating AI systems. In general, a different standard of care is proposed depending on the risk affinity and functionality of a particular AI system, e.g. whether it is used for private or business purposes. It is also debated whether the standard should be that of an imaginary human being (“human-machine comparison”) or a “machine-specific” standard of care. Furthermore, some argue that developers of AI systems should be required to update their products according to the current state of science and technology, resulting in a relatively high standard of care.
In defending against claims arising out of the use of AI applications, businesses should consider the following:
First, businesses should be able to show that the input (i.e. the training material) was correct, appropriate for its intended purpose and not corrupted. This allows the AI application to apply the correct principles to new input. If not, the situation can be compared to a maths student who is trained only in addition and subtraction, not in multiplication, and is then asked to solve a problem that requires multiplication.
Second, the AI application needs to be programmed correctly. While this of course applies to the "core" of the AI system, it is also highly relevant at the interfaces between different AI systems, for example between a natural language processing AI application, like a chatbot, and an AI application that solves the actual problems based on specific input data. If the interface is implemented correctly, the problem-solving AI application can "understand" the question (like two people speaking the same language). Otherwise, the problem-solving AI application may not understand or may even misinterpret the question asked and deliver wrong results (like two people speaking different languages).
Third, the AI application needs to be sufficiently supervised. Once correct input is provided and the programming is correct, the AI application has to be supervised to make sure that there are no grounds for allegations that the system is drawing incorrect or biased conclusions from statistical data.
Fourth, it is also important that users of AI systems act ethically and according to the instructions. It is difficult to foresee any and all ways in which users may misuse AI systems. An example is drivers tricking cars equipped with AI features (such as steering assistants that help them stay in the lane) into driving fully autonomously by taping a beer can to the steering wheel (thereby tricking the AI application into thinking that the driver’s hand is on the wheel), even though this is a clear violation of the car's instructions. It is hence important to establish guardrails against users misusing the AI application. A minimal code sketch of these four measures follows below.
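For illustration only, the following minimal Python sketch shows what these four measures might look like in practice. Every name, field, pattern and threshold in it (record_training_provenance, REQUIRED_FIELDS, BLOCKED_PATTERNS, the 0.8 confidence threshold) is a hypothetical assumption rather than a prescribed or standard implementation, and would need to be adapted to the specific AI application and the applicable legal requirements.

```python
# Illustrative sketch only; all names, patterns and thresholds are hypothetical.
import hashlib
from datetime import datetime, timezone

def record_training_provenance(path: str) -> dict:
    """First measure: document that the training input was correct and not
    corrupted by recording a checksum and timestamp for each training file."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return {"file": path, "sha256": digest,
            "recorded_at": datetime.now(timezone.utc).isoformat()}

REQUIRED_FIELDS = {"question", "user_id"}  # assumed interface contract

def validate_interface_input(payload: dict) -> None:
    """Second measure: enforce the contract at the interface between two AI
    systems, so the problem-solving component can "understand" the request."""
    missing = REQUIRED_FIELDS - payload.keys()
    if missing:
        raise ValueError(f"Malformed input, missing fields: {sorted(missing)}")

def supervise_output(output: str, confidence: float,
                     threshold: float = 0.8) -> str:
    """Third measure: route low-confidence output to human review instead of
    releasing it unsupervised (the threshold is an illustrative choice)."""
    return output if confidence >= threshold else "ESCALATED_TO_HUMAN_REVIEW"

BLOCKED_PATTERNS = ["ignore previous instructions"]  # assumed misuse pattern

def apply_guardrails(user_input: str) -> None:
    """Fourth measure: a deliberately simplistic guardrail against misuse."""
    if any(p in user_input.lower() for p in BLOCKED_PATTERNS):
        raise PermissionError("Input rejected by usage policy")
```

The point of the sketch is not the individual checks but that each defensive measure leaves a verifiable record or an enforceable rule, which is what matters when the burden of proof is in dispute.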
Disputes concerning AI systems can arise on the basis of a variety of legal concepts, and – as with most things in life – all aspects of AI systems, i.e. their development, operation and use, can be subject to legal claims. To avoid disputes and be fully prepared if they arise, organizations should:
Be informed: It is important to understand and always be aware that the development, operation and use of AI applications does not happen in a space free from legal boundaries. In addition to regulatory law, civil obligations in particular – as discussed in this chapter – should be kept in mind.
Prepare for all scenarios: Developers and operators of AI systems should contractually bind their customers to certain terms of use and clearly explain these rules in order to maximize their safeguards. Exclusions or limitations of liability can be a further element to navigate and reduce risks in contractual relationships.10
Risk mitigation starts at the beginning: When developing and training AI systems, sufficient testing and careful selection of the training material are not only critical to the success of the AI application, they are also key to risk mitigation.
Plan ahead: Where possible, the work and output of AI systems should be logged so that these log files can be used in one's defence if a dispute arises (see the sketch after this list).
Stay alert: In any case, all individuals and organizations involved should closely monitor and evaluate the performance of AI systems at all times, keeping in mind that – by definition – it will not be possible to monitor every part of the process. Often, monitoring will be limited to the output of an AI system.
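As one hypothetical way of implementing the logging recommended under "Plan ahead", the following Python sketch appends each input/output pair of an AI system to a hash-chained log file, so that later tampering with individual entries becomes detectable. The file format and field names are assumptions made for illustration, not an established standard.

```python
# Hypothetical tamper-evident logging for AI outputs: each entry stores the
# hash of the previous entry, so any later alteration breaks the chain.
import hashlib
import json
from datetime import datetime, timezone

def append_log_entry(logfile: str, event: dict) -> str:
    """Append an event to a hash-chained JSON-lines log; return its hash."""
    prev_hash = "0" * 64  # genesis value for an empty or missing log
    try:
        with open(logfile, "r", encoding="utf-8") as f:
            lines = f.readlines()
        if lines:
            prev_hash = json.loads(lines[-1])["entry_hash"]
    except FileNotFoundError:
        pass
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "prev_hash": prev_hash,
    }
    # Hash the entry content (without its own hash) and store the result.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["entry_hash"]

# Example: record a model decision so it can be produced in a later dispute.
append_log_entry("ai_decisions.log", {"input": "order #123", "output": "approved"})
```

Chaining each entry to the hash of its predecessor means an unaltered log can be demonstrated end to end; deleting or editing any single entry breaks the chain from that point onwards.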
Authored by Olaf Gaertner, Vassi Iliadis, Martin Strauch, Valerie Kenyon and Karishma Paroha.
1 Ebers, StichwortKommentar Legal Tech, 1. Aufl. 2023, Künstliche Intelligenz (KI), Rn. 49.
5 Ebers, StichwortKommentar Legal Tech, 1. Aufl. 2023, Künstliche Intelligenz (KI), Rn. 49.
10 Ebers, StichwortKommentar Legal Tech, 1. Aufl. 2023, Künstliche Intelligenz (KI), Rn. 49.