15 November 2024
The EU Artificial Intelligence Regulation will create a framework for the use of artificial intelligence systems. Discriminatory practices and discriminatory bias will be prohibited and subject to fines, and other AI systems will be subject to strong regulatory obligations. However, the Artificial Intelligence Regulation is still a draft, and its prohibitions will not be enforceable for years. Does this mean that AI discrimination is not subject to fines or compensation obligations today? Not in Spain where, as this publication highlights, discrimination (including discrimination arising from the use of AI systems) is already subject to specific prohibitions backed by high fines and strong legal presumptions in favour of monetary compensation.
Like most countries, Spain has a general anti-discrimination provision in its Constitution in the form of a fundamental right. This fundamental right has recently been further developed by the Integral Law for Equal Treatment and Non-Discrimination (the “Non-Discrimination Act”), which includes detailed rules, legal presumptions available in legal actions, and sanctions against discrimination.
The Non-Discrimination Act may therefore apply to discrimination arising from the use of artificial intelligence (“AI”) and from large-scale data processing. AI is not the main focus of the law, but it is one of the legislator's areas of concern.
The Non-Discrimination Act aims to guarantee and promote the right to equal treatment and non-discrimination and to respect the equal dignity of persons.
It has both a subjective and an objective scope of application. Although most obligations under the Act apply to the public sector, some also apply to private natural or legal persons residing, located or acting in Spanish territory, whatever their nationality, domicile or residence.
The main impact of this Act is the general prohibition of any provision, conduct, act, criterion or practice that violates the right to equality.
Discrimination is construed very widely: it includes (i) direct or indirect discrimination, (ii) discrimination by association and by mistake (e.g., a company believes that a person has a disease when they do not), (iii) incitement, order or instruction to discriminate, (iv) retaliation, (v) failure to comply with affirmative action measures arising from statutory or treaty obligations, and (vi) inaction, neglect of duty or failure to perform duties.
Differentiated treatment is not forbidden as such. However, when a person is subject to “differentiated treatment”, the company taking the decision must be in a position to demonstrate that the criteria for differentiation are objective and reasonable, and that the measures adopted are proportionate to a legitimate aim.
Differentiated treatment is also acceptable when authorized by law or in the context of positive discrimination pursuant to public policies.
When a person alleges discrimination and provides well-founded indicia of its existence, the defendant (or the party to whom the discriminatory situation is imputed) must prove that there has been no discrimination by providing an objective and reasonable justification for the measures adopted and for their proportionality. In effect, the burden of proof generally shifts to the “potentially” discriminating entity.
This is one more reason for companies that use AI systems to have in place a robust AI governance policy evidencing that the AI system is free from bias and was trained on accurate and representative data.
In line with the latest version of the draft AI Regulation, where an AI system differentiates on the basis of personal attributes, companies should consider conducting a fundamental rights impact assessment before making use of the AI system. The assessment gives companies a way to demonstrate that the AI system does not breach the non-discrimination principle and that any differentiation is lawful.
Similarly, implementing a data governance policy to ensure that the training and validation of the AI system are as free of biases and errors as possible, and that the data used are accurate and sufficient, would allow companies to demonstrate that the non-discrimination principle has not been breached.
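As a purely illustrative sketch of the kind of evidence such a data governance policy could produce, the following Python snippet computes a simple disparate-impact ratio across groups of decision outcomes. The metric, the four-fifths screening threshold and the synthetic data are assumptions chosen for illustration; they are not requirements of the Non-Discrimination Act or the AI Regulation, and a real audit would use metrics appropriate to the system and its context.

```python
# Illustrative bias screening sketch (assumptions: hypothetical groups,
# synthetic decisions, and the commonly used "four-fifths" threshold).

def selection_rates(outcomes):
    """Favourable-outcome rate per group.

    `outcomes` maps each group label to a list of 0/1 decisions
    (1 = favourable outcome, e.g. loan approved).
    """
    return {group: sum(d) / len(d) for group, d in outcomes.items()}

def disparate_impact_ratio(outcomes):
    """Ratio of the lowest group selection rate to the highest.

    A ratio well below 1.0 flags that the system treats groups
    differently, so the differentiation criteria would need an
    objective, reasonable and proportionate justification.
    """
    rates = selection_rates(outcomes)
    return min(rates.values()) / max(rates.values())

# Synthetic example data for two hypothetical groups.
decisions = {
    "group_a": [1, 1, 1, 0, 1, 1, 0, 1],  # 6/8 = 0.75 approval rate
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],  # 3/8 = 0.375 approval rate
}

ratio = disparate_impact_ratio(decisions)
print(f"disparate impact ratio: {ratio:.2f}")  # 0.375 / 0.75 = 0.50
if ratio < 0.8:  # "four-fifths" screening threshold (assumption)
    print("potential adverse impact: document the justification")
```

Logging such metrics for each model release, alongside the justification for any differentiated treatment, is one practical way to build the documentary record that the reversed burden of proof makes valuable.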
In addition, under the General Data Protection Regulation (“GDPR”), if an AI system is susceptible to anti-discrimination claims, data protection obligations may apply to the processing of personal data. For instance, carrying out a data protection impact assessment may be required before the system processes personal data at scale.
The Non-Discrimination Act applies to any sort of discrimination in a number of contexts, including artificial intelligence and large-scale data management. However, it does not distinguish between categories of AI systems. Therefore, the rules on the burden of proof, the possibility of treating people differently and the sanctioning regime apply regardless of whether the AI system qualifies as a high-risk AI system or a foundation model under the AI Regulation.
In other words, AI systems that do not qualify as high-risk must still comply with the Non-Discrimination Act.
Another notable difference between the scope of the AI Regulation and that of the Non-Discrimination Act is that the Non-Discrimination Act applies to situations of actual discrimination (or incitement to it). It does not apply directly to the training, validation or data governance of AI systems. However, implementing a proper governance system and conducting a fundamental rights impact assessment are suitable measures to prove that an AI system does not discriminate.
The Non-Discrimination Act establishes a regime of infringements and penalties for non-compliance, with fines ranging from EUR 300 to EUR 500,000 (fines for discrimination may not be lower than EUR 10,001). In very serious cases, non-compliance may result in the closure of the establishment in which the discrimination occurred or the cessation of the offender's economic or professional activity for up to five years.
Note, however, that the Act specifically foresees that this regime may be further developed and classified by regional legislation within the scope of regional competences, in which case the regional rules will prevail.
In addition to the above, a breach of the Non-Discrimination Act may also trigger further consequences, including the nullity of discriminatory provisions and an obligation to pay monetary compensation, supported by the strong legal presumptions noted above.
Authored by Gonzalo Gallego, Juan Ramon Robles, and Clara Lazaro.