Human intelligence draws on experience, emotion and intuition. Artificial intelligence (AI), by contrast, processes vast amounts of data in fractions of a second. Human intelligence thinks ahead, draws conclusions and weighs up legal and moral consequences; AI acts exactly as it has been programmed and takes legal stumbling blocks into account only if a human has anticipated them. Human intelligence can adapt flexibly to unforeseen situations, while AI stubbornly reproduces existing patterns, even where these violate laws or ethical standards. This is what makes AI compliance so important.
In addition to the AI Act, which came into force in 2024, the use of AI systems can violate numerous other legal provisions in various laws. The density of regulation is constantly increasing and with it the risk of compliance violations, sanctions and lawsuits.
The General Data Protection Regulation (GDPR) applies to the processing of personal data. It requires a legal basis for such processing, i.e. either a statutory permission or the consent of the data subject. Additional special requirements apply to transfers of data to third countries.
If information is entered into large language models such as ChatGPT, Copilot or Google Gemini, it is transmitted to the operators' servers and processed there to create the requested texts. The operators also use and store the information for training purposes. The servers are often located in the USA, meaning that the data leaves the EU. Do users consent to this data processing simply by entering the information into the chatbot? Probably not: users can hardly understand what happens to their data once entered, and they do not know whether the processing complies with the provisions of the GDPR. Consent is clearly lacking where a person enters not their own data but that of others.
The AI therefore regularly processes data without the required legal basis. This is a problem for both the operator of the AI system and the users.
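One practical safeguard is to redact obvious personal data before a prompt ever leaves the company network. The following Python sketch is purely illustrative: the regexes and the `redact` helper are assumptions for demonstration, and a production setup would rely on dedicated PII-detection tooling, since simple patterns miss names and many other identifiers.

```python
import re

# Rough patterns for two common categories of personal data.
# Illustrative only: real deployments need dedicated PII-detection
# tools, since regexes like these miss names and many identifiers.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\+?\d[\d /().-]{6,}\d")

def redact(prompt: str) -> str:
    """Replace obvious personal data with placeholders before the
    prompt is sent to an external chatbot."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    prompt = PHONE.sub("[PHONE]", prompt)
    return prompt

raw = "Draft a reply to Max Mustermann, max@example.com, tel. +49 170 1234567."
print(redact(raw))
# -> "Draft a reply to Max Mustermann, [EMAIL], tel. [PHONE]."
```

Redaction reduces, but does not eliminate, the exposure: any personal data that survives the filter still needs a legal basis for processing.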
Generative AI, which creates texts, images or videos, uses both the materials entered by users and content from the internet. The tools are currently unable to differentiate between protected and non-protected content. The AI processes the content into new works. However, the copyrights of the original authors may continue to exist in individual cases even after processing. This is the case with pure reproductions and translations of copyrighted works. It is also critical from a copyright perspective if an AI reproduces a song lyric that is not yet in the public domain or rewrites a scene from an existing screenplay. If the output is closely based on the original work, the adaptation cannot be used freely, especially if recognizable characters with their own copyright protection are involved. The creation of specialist texts by generative AI is less problematic, as according to the ECJ, specialist texts must meet high requirements in order to enjoy copyright protection.
The question of who is the author of AI-generated works may also need to be clarified in individual cases, in particular whether copyright can arise in the output of an AI at all. Under German copyright law, only a human being can be the creator of a work and thus the author. The ECJ likewise requires a free creative decision for copyright to arise. Works created solely by AI are unlikely to meet this requirement.
AI is also increasingly taking over HR activities, for example in recruiting. If it is used to select applicants, it may be guided by the characteristics of candidates hired in the past. If the majority of these were white men, then in the absence of corrective programming the tool may favor white male candidates. In Germany, this would be a clear violation of the General Equal Treatment Act (AGG). Artificial intelligence generally carries a high potential for discrimination. It is therefore essential that companies anticipate such violations and design their AI systems accordingly.
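To make the discrimination risk tangible, the sketch below computes selection rates per applicant group and flags disparities using the "four-fifths rule", a heuristic from US anti-discrimination practice rather than a test defined by the AGG. The data, function names and 80-percent threshold are illustrative assumptions, not a legally sufficient audit.

```python
from collections import Counter

def selection_rates(applicants):
    """applicants: iterable of (group, was_selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in applicants:
        totals[group] += 1
        selected[group] += was_selected  # True counts as 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_flags(applicants, threshold=0.8):
    """Flag every group whose selection rate falls below `threshold`
    times the rate of the most favored group."""
    rates = selection_rates(applicants)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Toy data: 100 male and 100 female applicants with unequal outcomes.
data = ([("m", True)] * 40 + [("m", False)] * 60
        + [("f", True)] * 20 + [("f", False)] * 80)
print(selection_rates(data))    # {'m': 0.4, 'f': 0.2}
print(four_fifths_flags(data))  # {'m': False, 'f': True} -> review needed
```

Such a check does not replace careful system design, but it surfaces patterns that would otherwise remain hidden inside the model.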
The works council also has a say in the use of AI in the company, at least where the employer prescribes its use or provides its own AI systems.
The biggest regulatory challenge at the moment is probably the AI Act, which came into force on August 1, 2024, together with the proposed AI Liability Directive. The provisions of the AI Act take effect in stages; the first requirements apply as early as February 2025. Fines can reach up to 35 million euros or 7 percent of total annual global turnover, whichever is higher, exceeding even the maximum fines under the GDPR.
The AI Act affects almost all companies that market, offer or use AI systems in the EU. Suppliers, importers, distributors and users of AI products and services in the EU are obliged to comply.
The regulation takes a risk-based approach and divides AI systems into risk classes, with the type of application as the decisive factor. Systems with unacceptable risk are banned, while high-risk systems must meet strict requirements. Special requirements apply to general-purpose AI (GPAI) models. These models, which can fulfill a variety of tasks, are divided into ordinary GPAI models and GPAI models with systemic risk.
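Schematically, the tiered logic can be pictured as a simple mapping. The use-case assignments below are illustrative examples (social scoring is prohibited under the Act, recruitment screening is listed as high-risk); the Act additionally imposes transparency duties on certain limited-risk systems and leaves minimal-risk systems largely unregulated. Classifying a real system always requires legal analysis against the Act and its annexes.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "strict requirements (conformity assessment, documentation)"
    LIMITED = "transparency duties"
    MINIMAL = "largely unregulated"

# Illustrative examples only; real classification is a legal question.
EXAMPLES = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "CV screening in recruiting": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLES.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```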
Anyone acquiring a company in the age of AI is essentially buying a black box and will only find out later whether it contains a treasure or a ticking time bomb, unless buyers and investors pay sufficient attention to AI compliance during due diligence. Before making a purchase decision, they should ensure that the AI technologies used in the target comply with applicable law.
AI systems usually do not act on the basis of individual human decisions, but on the basis of complex algorithms that are not always fully comprehensible. So who is liable for incorrect AI decisions?
Existing national liability rules do not seem to fit here. In particular, the provisions on fault-based liability are not suited to assessing claims for damage caused by AI. The new Product Liability Directive does not change this much either, as AI systems are often not transparent enough for errors in programming or development to be proven. A new EU directive could make it easier to provide evidence: the planned AI Liability Directive is intended to regulate non-contractual liability for damage caused by the use of artificial intelligence across Europe. Art. 4 of the proposal governs the burden of proof; under certain circumstances, a causal link between the defendant's fault and the result produced by the AI system is to be presumed.
However, the further course of the legislative process is uncertain. It is currently not foreseeable whether, or within what timeframe, such a directive will be adopted, so it is impossible to predict when a binding liability regime for artificial intelligence will actually exist at EU level.
In addition to legal risks, AI tools often raise ethical questions. Unethical behavior, in turn, results in reputational damage.
Companies should develop AI strategies and corresponding governance structures at an early stage in order to adequately assess all legal and ethical aspects when using AI systems and thus avoid risks. To this end, lawyers should ideally work hand in hand with compliance and IT experts, data scientists and cyber security specialists.