
Criminal law implications and compliance strategies for AI use


The use of artificial intelligence (“AI”) offers companies a wide range of opportunities for increasing efficiency and innovation, and it has become an integral part of everyday business operations. The rapid pace of technological development in the AI sector requires companies to stay abreast of the latest legal requirements in order to recognize criminal liability risks early and take timely countermeasures. With the AI Act, the EU defines clear standards for the use of AI applications in companies. In addition, regulations are continuously emerging in other jurisdictions, particularly in the United States, that are becoming increasingly relevant for companies.

Criminal Law Risks

AI-supported processes and AI-generated content and products give rise to new liability risks for companies. In addition to data protection law and copyright law, these primarily concern criminal law. Criminal law violations can lead to corporate liability; under German law, this arises in particular from Sections 30 and 130 of the German Administrative Offenses Act (“OWiG”). Consequences can include fines of up to EUR 10 million and, often particularly relevant in practice, the disgorgement of profits generated by the offense.

In the following, risks arising from AI washing, AI as a black box, and high-risk AI are examined in more detail, followed by concrete pathways for risk mitigation. Central to this are the due diligence obligations specified by the European AI Act and the regulations governing AI use in other jurisdictions.

AI Washing

It may come as a surprise how many products and services, according to company statements, currently use AI. Companies that offer innovative products often advertise the use of AI as a buzzword and may thereby expose themselves to the risk of “AI washing”. AI washing has parallels with greenwashing: while in greenwashing companies present themselves as more environmentally friendly than they actually are, in AI washing they make exaggerated claims about the use of AI in their product or service. Both strategies aim to improve the company's public image and gain a competitive advantage.

AI washing can have serious legal consequences. The U.S. Securities and Exchange Commission (“SEC”) has already imposed fines on several companies for misleading statements about their purported use of AI. In two cases, for example, the SEC sanctioned investment advisers who falsely claimed to use AI to facilitate promising investments.

Under German criminal law, company employees could commit various criminal offenses through AI washing, in particular fraud under Section 263 of the German Criminal Code (“StGB”), capital investment fraud under Section 264a StGB, misrepresentation under Section 331 of the German Commercial Code (“HGB”), or misleading advertising under Section 16 (1) of the German Unfair Competition Act (“UWG”).

Whether, for example, fraud has been committed depends on whether the conduct creates a misconception among the company's customers about the use of AI in the product and whether customers are thereby influenced in their purchasing decision. If AI is not actually used, advertising statements to that effect may constitute fraudulent misrepresentation. Offenses committed by company employees in this regard can be attributed to the company and may result in liability of the company and its officers and directors.

In each case, a technical analysis by AI experts should clarify whether an application genuinely qualifies as an AI system or merely constitutes a complex software tool. According to the definition in the AI Act, an AI system is a machine-based system that operates with varying levels of autonomy and may adapt after deployment. It infers from its inputs how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Accordingly, optimization and decision-making algorithms and machine learning algorithms, for example, fall under this definition, whereas software systems that operate exclusively on the basis of predefined rules do not.
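To make this distinction concrete, the following minimal Python sketch contrasts a purely rule-based decision, which would fall outside the AI Act definition, with a simple machine learning model that infers its decision rule from data. The credit-check scenario and all names and figures are illustrative assumptions, not examples taken from the AI Act.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Purely rule-based logic: fixed, human-authored rules, no inference from data.
# A system like this would not be covered by the AI Act's definition.
def rule_based_credit_check(income: float, debt: float) -> bool:
    return income > 30 and debt / income < 0.4  # figures in kEUR, thresholds invented

# Machine learning model: infers its decision rule from training data and
# generates predictions that influence decisions, which falls under the definition.
X_train = np.array([[50, 5], [20, 15], [80, 10], [25, 20]])  # income, debt in kEUR
y_train = np.array([1, 0, 1, 0])                             # historical approve/reject labels
model = LogisticRegression().fit(X_train, y_train)

print(rule_based_credit_check(40, 10))  # deterministic output of a fixed rule
print(model.predict([[40, 10]]))        # prediction learned from data
```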

AI as a Black Box

It is often not transparent how AI systems make their decisions. In this sense, they may be seen as a "black box". If an AI system makes a decision that violates existing criminal laws, the question arises as to who is responsible for this.

Accidents involving autonomous vehicles provide one example. If incorrectly labeled training data was used in a deep learning-based camera system, it may be unclear how this specifically affected the automated decision and whether a subsequent accident can be traced back to it. The central issue is the proof of causality and objective attribution required under German criminal law. When AI is used, it may need to be proven in the event of damage that the AI's decision was actually the cause of the damage that occurred. If an AI system leads to a harmful result, the question may also arise whether the action of the AI can be attributed to a company employee and thus ultimately to the company.
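A deliberately simplified sketch can illustrate why this proof is difficult: the same model, trained once on clean labels and once on partially mislabeled data, may decide a borderline case differently, yet nothing in either model's output reveals which training examples drove the decision. The data and model below are invented for illustration and bear no relation to any real camera system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # the "true" decision rule

clean_model = LogisticRegression().fit(X, y)

y_noisy = y.copy()
y_noisy[:20] = 1 - y_noisy[:20]           # mislabel 10% of the training data
noisy_model = LogisticRegression().fit(X, y_noisy)

x_borderline = np.array([[0.05, -0.02]])  # an input close to the decision boundary
print(clean_model.predict(x_borderline))  # the two models may disagree here, but
print(noisy_model.predict(x_borderline))  # neither reveals *why* it decided as it did
```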

AI in High-Risk Areas

AI in high-risk areas refers to AI systems that are used in critical infrastructures, healthcare or the financial sector, for example. In these areas, sensitive legal interests such as human life and health, as well as the protection of sensitive data, depend on the flawless functioning of the systems, meaning that errors can have serious consequences.

The risks posed by AI in these areas are manifold, ranging from technical errors with potentially significant impact to discrimination and the misuse of data. In the healthcare sector, a faulty AI diagnostic system could result in incorrect treatment; in critical infrastructure and financial services, data-related issues are likely to be at the center of potential shortcomings.

Risk Mitigation for the Use of AI

Overall, the use of AI requires careful legal analysis and proactive risk mitigation measures in order to avoid consequences under criminal law. If AI-related risk management and corresponding control systems are lacking, this can be decisive for attributing a criminal offense to the company as a breach of supervisory duties by its officers and directors.

The EU AI Act sets out a large number of requirements that companies can use as a guideline when developing their control systems. These include documentation, transparency, and information obligations, as well as the obligation to monitor and continuously adapt AI systems and responsibility for the quality of the training data.

The AI Act follows a risk-based approach: the higher the risk classification of an AI system, the stricter the due diligence obligations the company must fulfill. High-risk AI systems, which are particularly relevant here, must meet strict requirements. Among other things, a risk management system must be set up for them, and they require effective human oversight. Before they are placed on the market for the first time, they must also undergo a conformity assessment to ensure that they meet the requirements of the AI Act. Furthermore, processes and events in the system must be logged automatically, and data quality must be ensured.
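What automatic logging might look like in practice is sketched below in minimal form: a wrapper that records each model decision with a timestamp and model version, producing the kind of audit trail the AI Act's logging obligation points toward. The logged_inference helper and its field names are illustrative assumptions; the AI Act does not prescribe a specific format.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def logged_inference(model, features):
    """Run a model prediction and record an audit trail entry for it."""
    prediction = model.predict(features)  # 'model' is any object exposing predict()
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": getattr(model, "version", "unknown"),  # assumed attribute
        "input": str(features),    # stringified so arbitrary types stay serializable
        "output": str(prediction),
    }))
    return prediction
```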

For companies with global operations in particular, it makes sense to also consider AI regulations in other jurisdictions. The establishment of appropriate control systems protects companies and their executive bodies from the criminal consequences of the risks of AI washing and the AI black box described above.

While UN Secretary-General António Guterres has called for a global regulatory authority for AI and the G7 agreed at a summit in Japan in 2023 to pursue a joint strategy for regulating AI, other jurisdictions alongside the EU are already developing national regulations for the use of AI, which companies must take into account in their international business activities. Japan, for example, is working on its own national rules for AI compliance that may come into force in 2025; developments in the United States are outlined below.

In September 2024, the US Department of Justice (“DOJ”) updated its guidelines for evaluating corporate compliance programs with regard to AI-related risks. The DOJ reviews whether companies are sufficiently protected against AI-supported crime, which includes, in particular, risks such as incorrect authorizations and AI-generated documentation. In addition, the guidelines provide a basis for reviewing the effectiveness of companies' compliance systems in dealing with AI, specifically whether the use of AI could impair a company's ability to comply with applicable laws. The guidelines also make it possible to check, for example, whether a company has carried out a risk assessment in order to mitigate risks when using AI. By updating its guidelines, the DOJ wants to encourage companies to recognize and correct errors caused by AI at an early stage. Companies whose compliance programs reflect these new developments can expect more lenient treatment in the event of AI-related compliance incidents.

Initial AI-related regulations are also in place in various US states. At the end of September 2024, the Governor of California signed a law that obliges providers of generative AI to provide transparent documentation of their models' training data as of January 1, 2026. This disclosure obligation applies to companies that provide generative AI to Californian users and thus also extends to European providers with customers in California. The requirement affects both developers of new AI technologies and those who significantly modify existing models. The law is part of comprehensive AI legislation in California that focuses on transparency, security, and combating disinformation.

Conclusion

The use of AI in companies entails criminal law risks, which under German law can lead to liability on the part of the company, including the disgorgement of profits. Companies and their executive bodies can protect themselves by setting up specific control mechanisms aligned with the applicable AI regulations in the relevant jurisdictions. Companies should take timely measures to adapt their AI-related processes to the evolving regulatory environment. This includes effective risk management in which guidelines, processes, and monitoring solutions are established.
