18.02.2025 | KPMG Law Insights

AI compliance: important legal aspects at a glance

Human intelligence draws on experience, emotion and intuition. Artificial intelligence (AI), on the other hand, processes vast amounts of data in fractions of a second. Human intelligence thinks ahead, draws conclusions and weighs up the legal and moral consequences of its actions. Artificial intelligence acts exactly as it has been programmed and takes legal pitfalls into account only if a human has anticipated them. Human intelligence can adapt flexibly to unforeseen situations, while AI stubbornly reproduces existing patterns – even when these violate laws or ethical standards. This contrast shows why AI compliance is so important.
In addition to the AI Act, which came into force in 2024, the use of AI systems can violate numerous other legal provisions. The density of regulation is constantly increasing, and with it the risk of compliance violations, sanctions and lawsuits.

Data protection: be careful when entering information

The General Data Protection Regulation (GDPR) applies to the processing of personal data. It requires a legal basis for any processing of personal data – either a statutory provision or the consent of the data subject. Additional special requirements apply to transfers of data to third countries.
If information is entered into large language models such as ChatGPT, Copilot or Google Gemini, it is transmitted to the operators’ servers and processed there to generate the requested texts. The AI may also use and store the information for training purposes. The servers are often located in the USA, meaning that the data leaves the EU. Does entering information into a chatbot constitute consent to this data processing? Probably not, because users can hardly understand what happens to their data once they enter it, and they do not know whether the processing complies with the provisions of the GDPR. Consent is obviously lacking where a person enters not their own data into the chatbot, but that of others.
The AI therefore regularly processes data without the required legal basis. This is a problem for both the operator of the AI system and the users.

Intellectual property: AI also makes use of protected works

Generative AI, which creates texts, images or videos, uses both the materials entered by users and content from the internet. The tools are currently unable to differentiate between protected and non-protected content. The AI processes the content into new works. However, the copyrights of the original authors may continue to exist in individual cases even after processing. This is the case with pure reproductions and translations of copyrighted works. It is also critical from a copyright perspective if an AI reproduces a song lyric that is not yet in the public domain or rewrites a scene from an existing screenplay. If the output is closely based on the original work, the adaptation cannot be used freely, especially if recognizable characters with their own copyright protection are involved. The creation of specialist texts by generative AI is less problematic, as according to the ECJ, specialist texts must meet high requirements in order to enjoy copyright protection.
The question of who is the author of AI-generated works may also need to be clarified in individual cases – in particular, whether copyright can arise in the output of an AI at all. Under German copyright law, only a human being can be the creator of a work and thus its author. The ECJ likewise requires a free creative decision for copyright to arise. Works created solely by AI are unlikely to meet this requirement.

Employment law: When AI acts in a racist manner

AI is also increasingly taking over HR activities, for example in recruiting. If it is used to select applicants, it may be guided by the characteristics of candidates hired in the past. If the majority of these were white men, the AI tool may – absent corrective programming – favor white male candidates. In Germany, this would be a clear violation of the General Equal Treatment Act (AGG). Artificial intelligence generally has a high potential for discrimination. It is therefore essential that companies anticipate such violations and design the AI accordingly.
The works council also has a say in the use of AI in the company, at least where the employer mandates its use or provides its own AI systems.

The AI Act is a major legal challenge

The biggest regulatory challenge at the moment is probably the AI Act, which came into force on August 1, 2024, together with the associated proposal for an AI Liability Directive. The provisions of the AI Act take effect in stages; the first requirements apply from February 2025. Fines can reach up to 35 million euros or 7 percent of total annual global turnover – more than under the GDPR.
The AI Act affects almost all companies that market, offer or use AI systems in the EU. Providers, importers, distributors and deployers of AI products and services in the EU are obliged to comply.
The regulation takes a risk-based approach and divides AI systems into risk classes. The decisive factor here is the type of application. Systems with unacceptable risk are banned, while high-risk systems must meet strict requirements. Special requirements apply to general purpose AI (GPAI) models. These models, which can perform a wide variety of tasks, are divided into ordinary GPAI models and those posing systemic risk.

AI compliance belongs in due diligence

Anyone acquiring a company in the age of AI is essentially buying a black box and may only discover later whether it contains a treasure or a ticking time bomb – unless buyers and investors pay sufficient attention to AI compliance during due diligence. Before making a purchase decision, they should ensure that the AI technologies used in the target comply with applicable law.

Who is liable for incorrect AI decisions?

AI systems usually do not act on the basis of individual human decisions, but on the basis of complex algorithms that are not always fully comprehensible. So who is liable for incorrect AI decisions?
Existing national liability rules do not seem to fit here. In particular, the provisions on fault-based liability are ill-suited to assessing claims for damage caused by AI. The new Product Liability Directive does not change this much either, as AI systems are often not transparent enough for errors in programming or development to be proven. A new EU directive could make it easier to provide evidence: the planned AI Liability Directive is intended to harmonize non-contractual liability for damage caused by the use of artificial intelligence across Europe. Art. 4 of the proposal governs the burden of proof; under certain circumstances, a causal link between the defendant’s fault and the output produced by the AI system is to be presumed.
However, the further course of the legislative process is uncertain. It is not foreseeable whether, or within what timeframe, such a directive will be adopted, so it cannot be predicted when a binding liability regime for artificial intelligence will actually exist at EU level.

Ethical aspects also play a role

In addition to legal risks, AI tools often raise ethical questions. Unethical behavior can result in lasting reputational damage.
Companies should develop AI strategies and corresponding governance structures at an early stage in order to adequately assess all legal and ethical aspects when using AI systems and thus avoid risks. To this end, lawyers should ideally work hand in hand with compliance and IT experts, data scientists and cyber security specialists.

 


Contact

Francois Heynike, LL.M. (Stellenbosch)

Partner
Head of Technology Law

THE SQUAIRE Am Flughafen
60549 Frankfurt am Main

Tel.: +49-69-951195770
fheynike@kpmg-law.com

Dr. Daniel Taraz

Senior Manager

Fuhlentwiete 5
20355 Hamburg

Tel.: +49 40 360994-5483
danieltaraz@kpmg-law.com

© 2024 KPMG Law Rechtsanwaltsgesellschaft mbH, associated with KPMG AG Wirtschaftsprüfungsgesellschaft, a public limited company under German law and a member of the global KPMG organisation of independent member firms affiliated with KPMG International Limited, a Private English Company Limited by Guarantee. All rights reserved. For more details on the structure of KPMG’s global organisation, please visit https://home.kpmg/governance.

 KPMG International does not provide services to clients. No member firm is authorised to bind or contract KPMG International or any other member firm to any third party, just as KPMG International is not authorised to bind or contract any other member firm.
