20.03.2025 | KPMG Law Insights

AI Act: This applies to AI in universities and research

Artificial intelligence (AI) offers numerous opportunities for research, teaching and administration, but also raises complex legal issues. The European Union's AI Regulation (AI Act) has a significant impact on universities and the academic sector. The first provisions of the AI Act have been in force since February 2, 2025.

Universities use AI in these areas

AI is particularly relevant for universities and research institutions in four areas:

Use in university administration: Universities can use AI-supported systems to increase efficiency in administration, for example for the automated analysis of study progression or in HR.

Use by students: Students now use chatbots, language models or image generation systems when preparing written assignments, and in some cases also for written exams and other examinations.

Use in research: In the scientific field, AI is often used for data analysis, pattern recognition or modeling scientific theories.

Development or further development of AI: Universities and research institutions are also involved in the development of new AI systems, AI models and algorithms.

What the AI Act means for universities and scientific institutions

Under the AI Act, universities and scientific institutions must ensure that they meet the legal requirements for transparency, data protection and security. At the same time, scientific freedom and innovative research should not be excessively restricted.

Universities and scientific institutions are obliged to develop internal guidelines on the use of AI, train employees and set up interdisciplinary expert committees to deal with the responsible implementation of AI technologies.

The AI Regulation follows a risk-based approach. Applications are regulated differently depending on their risk level; a simplified sketch of this triage follows the list:

  • Prohibited AI: Certain AI technologies that are associated with high risks to fundamental rights (e.g. real-time biometric identification in public spaces) are prohibited.
  • High-risk AI: Systems that are used in sensitive areas such as critical infrastructure, education or employment are subject to strict requirements in terms of transparency, data protection and security.
  • Low risk: AI applications with low risk only have to fulfill general requirements.
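To make the tiering concrete, the triage can be modeled as a simple lookup. The following is a minimal, purely illustrative Python sketch: the use-case strings and the `classify` helper are assumptions made for this example, and the authoritative lists are Article 5 (prohibited practices) and Annex III (high-risk systems) of the AI Act.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LOW_RISK = "low-risk"

# Illustrative examples per tier; the legally binding lists are in
# Art. 5 and Annex III of the AI Act, not in this sketch.
PROHIBITED_USES = {"real-time biometric identification in public spaces"}
HIGH_RISK_USES = {"admission procedure", "examination assessment"}

def classify(use_case: str) -> RiskTier:
    """Map a described use case to an AI Act risk tier (simplified)."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH_RISK
    return RiskTier.LOW_RISK

print(classify("admission procedure"))  # RiskTier.HIGH_RISK
```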

Universities must expect that AI-supported assessment, selection and decision-making systems, for example in admission procedures, will be classified as "high-risk" and will therefore be subject to the comprehensive compliance requirements of the AI Act.

Privileging of research institutions

The AI Act expressly does not apply to AI systems or AI models that are developed for the sole purpose of scientific research and development, as long as they are not placed on the market or put into service (Article 2 (8) of the AI Act). The background to this exception is that the EU wishes to promote innovation, respect the freedom of science and not undermine research and development activities. Recital 25 of the AI Act emphasizes that AI systems developed in the context of basic research, experimental development or scientific testing are not subject to the regular requirements of the Regulation. Even where the requirements of the AI Act do not apply, ethical and scientific integrity standards must be maintained, and it must be ensured that researchers use AI responsibly.

The research privilege only applies for the period in which an AI system or AI model is used exclusively for research, testing or development purposes. As soon as one of these conditions is no longer met, the regular provisions of the AI Act apply. An AI system or model is deemed to be “placed on the market” as soon as it is made available on the market for the first time (see also the definition in Art. 3 para. 9 AI Act). Commercially usable AI products or services that are sold, licensed or passed on to external users fall under the regular requirements of the AI Act. Research institutions that pass on or publish an AI model as a finished product must then ensure that it complies with the requirements of the regulation.

An AI system or model is "put into service" when it is actually used for purposes other than pure research and testing (see also the definition in Art. 3 para. 10 AI Act). An AI system or model that is tested or used in a real environment with real user data is therefore no longer covered by the research privilege.

Here is an example: A university develops an AI for the automated assessment of examinations. As long as this AI is tested in a test environment, the research privilege applies. However, if it is used in real examination grading, it is considered "put into service" and is subject to the AI Act.
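The decision logic behind this example can be summarized in a short sketch. This is a simplified illustration rather than a legal test; the `AISystemStatus` fields and the helper function are assumptions made for this sketch.

```python
from dataclasses import dataclass

@dataclass
class AISystemStatus:
    research_only: bool     # used exclusively for research, testing, development
    placed_on_market: bool  # made available on the market for the first time
    put_into_service: bool  # actually used beyond pure research and testing

def research_privilege_applies(s: AISystemStatus) -> bool:
    """Simplified check: the privilege lapses once any trigger is met."""
    return s.research_only and not (s.placed_on_market or s.put_into_service)

# The grading AI from the example: still in a test environment...
sandbox = AISystemStatus(research_only=True, placed_on_market=False, put_into_service=False)
# ...versus deployed for real examination grading.
live = AISystemStatus(research_only=False, placed_on_market=False, put_into_service=True)

print(research_privilege_applies(sandbox))  # True  -> AI Act does not yet apply
print(research_privilege_applies(live))     # False -> regular AI Act provisions apply
```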

Requirements for transparency, security and data protection

The AI Act sets binding regulations for the use of AI applications and defines specific requirements with regard to transparency, security and data protection. These requirements primarily concern AI systems, not all AI models in general.

  • Transparency: Universities and scientific institutions must ensure that users are clearly informed about the use of AI. This applies to AI systems that are integrated into decision-making processes, for example automatic applicant selection and AI-supported examination assessments. AI models as such, for example a trained model that is used internally for research, are not directly subject to these transparency obligations unless they are part of an AI system.
  • Security: AI applications must be developed and operated in such a way that they pose no risks to people or data. This requires regular security checks and risk analyses. It applies to AI systems that are actively used, especially high-risk systems such as medical diagnostic tools. Security requirements apply to AI models only if they are general-purpose AI models (GPAI) with systemic risk; in these cases, risk assessment obligations apply.
  • Data protection: The GDPR applies to all AI applications that process personal data, i.e. both AI systems and AI models, if they are trained or used with such data. Universities must take measures to anonymize or pseudonymize data when AI models or AI systems work with sensitive data; a minimal pseudonymization sketch follows this list.
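As one illustration of such measures, direct identifiers can be replaced with a keyed hash before records enter an AI pipeline. This is a minimal sketch under stated assumptions: the student-ID field and the key handling are hypothetical, and whether pseudonymization suffices in a given case remains a GDPR assessment.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice it must be stored separately from
# the data (e.g. in a key management system), otherwise the mapping could
# be reversed by whoever handles the pseudonymized records.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymize(student_id: str) -> str:
    """Keyed hash of an identifier: stable enough to link records across
    datasets, but not reversible without the key. Note this is
    pseudonymization, not anonymization."""
    return hmac.new(SECRET_KEY, student_id.encode(), hashlib.sha256).hexdigest()

record = {"student_id": pseudonymize("s1234567"), "grade": 1.7}
```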


Requirements for high-risk AI

High-risk AI systems are subject to particularly strict requirements, including:

  • Risk management: Systematic risk management must be established in order to identify and minimize potential risks at an early stage.
  • Data quality and fairness: Universities and research institutions must ensure that training and test data are of high quality and do not promote bias or discrimination.
  • Human oversight: It must be ensured that critical decisions are not fully automated and that people can intervene in the decision-making process.
  • Robustness and security: AI systems must be protected against external attacks and manipulation, and regular security checks are required.
  • Documentation requirements: Universities and scientific institutions must keep detailed records of the functioning and decisions of AI systems in order to demonstrate transparency in the event of regulatory audits; a minimal logging sketch follows this list.
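Such record-keeping could start with a simple audit-log structure. This is a minimal sketch: the `DecisionRecord` fields and the "admission-scoring" system name are illustrative assumptions, not the AI Act's prescribed logging format.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    """One auditable entry: what the system decided, on which inputs,
    and whether a human intervened."""
    system_name: str
    model_version: str
    input_summary: str
    output: str
    human_override: bool
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

audit_log: list[DecisionRecord] = []
audit_log.append(DecisionRecord(
    system_name="admission-scoring",  # hypothetical system name
    model_version="2025.03",
    input_summary="application #4711, pseudonymized feature vector",
    output="rank 12 of 480",
    human_override=False,
))
```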


Universities should adapt examination law

Another important point, one that primarily affects universities, is the adaptation of examination law. In view of the transparency requirement, it is essential that universities clearly define where and under what conditions the use of AI is permitted. This applies in particular to examinations in which the use of AI technologies may not always be verifiable. Universities should issue regulations that are clearly comprehensible for students and allow examiners to apply them consistently.

Existing examination regulations should be revised to safeguard the integrity of examinations. One possibility is the introduction of oral explanations or disputations for unsupervised examination formats, particularly for final theses.

These persons are responsible

University management, specialist departments and users of AI applications have a joint duty to ensure compliance with legal requirements. This includes:

  • Institutional responsibility: Universities must ensure that AI applications comply with the applicable legal framework and are regularly reviewed.
  • Individual responsibility: Employees, researchers and students who use or develop AI applications should have a sufficient level of knowledge and competence to recognize and minimize potential risks.
  • Liability issues: In the event of incorrect decisions by AI systems, responsibilities must be legally analyzed and clearly regulated, especially with regard to data protection violations or discriminatory decisions.


Recommendations for universities and scientific institutions

  • Universities and scientific institutions should check which AI applications fall under the provisions of the AI Act and take appropriate compliance measures.
  • It is important to be actively involved in the discussion on regulatory framework conditions in order to ensure a balance between protective mechanisms and freedom of research.
  • Legal, ethical and technical experts should work together on sustainable AI solutions for the university and research sector.
  • Teachers, researchers and administrative staff should be informed about the legal implications of AI.


Contact

Dr. Jannike Ehlers

Senior Associate

Fuhlentwiete 5
20355 Hamburg

Tel.: +49 (0)40 360994-5021
jannikeluiseehlers@kpmg-law.com
