20.03.2025 | KPMG Law Insights

AI Act: This applies to AI in universities and research

Artificial intelligence (AI) offers numerous opportunities for research, teaching and administration, but it also raises complex legal issues. The European Union’s AI Regulation (AI Act) also has a significant impact on universities and the academic sector. The first provisions of the AI Act have been in force since February 2, 2025.

Universities use AI in these areas

AI is particularly relevant for universities and research institutions in four areas:

Use in university administration: Universities can use AI-supported systems to increase efficiency in administration, for example for the automated analysis of study progression or in HR.

Use by students: Students now use chatbots, language models or image generation systems when preparing written assignments and, in some cases, in written exams and other assessments.

Use in research: In the scientific field, AI is often used for data analysis, pattern recognition or modeling scientific theories.

Development or further development of AI: Universities and research institutions are also involved in the development of new AI systems, AI models and algorithms.

What the AI Act means for universities and scientific institutions

Under the AI Act, universities and scientific institutions must ensure that they meet the legal requirements for transparency, data protection and security. At the same time, scientific freedom and innovative research should not be excessively restricted.

Universities and scientific institutions are obliged to develop internal guidelines on the use of AI, train employees and set up interdisciplinary expert committees to deal with the responsible implementation of AI technologies.

The AI Regulation follows a risk-based approach. Applications are regulated differently depending on the risk level:

  • Prohibited AI: Certain AI practices that pose unacceptable risks to fundamental rights (e.g. real-time biometric identification in public spaces) are prohibited.
  • High-risk AI: Systems that are used in sensitive areas such as critical infrastructure, education or employment are subject to strict requirements in terms of transparency, data protection and security.
  • Low risk: AI applications with low risk only have to fulfill general requirements.

Universities must expect that AI-supported assessment and selection procedures, or systems used to support decisions such as admission procedures, will be classified as “high-risk” and will therefore be subject to the comprehensive compliance requirements of the AI Act. A simplified illustration of how such a classification might be recorded follows below.
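
As a first practical step, many institutions maintain an internal inventory of their AI applications together with a provisional risk classification. The following minimal Python sketch illustrates the idea; the risk tiers, trigger lists, system names and classification logic are simplified assumptions for illustration and do not replace a legal assessment of each individual system.

from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LOW_RISK = "low-risk"

# Simplified triggers; the authoritative list is in the AI Act itself (e.g. Annex III).
HIGH_RISK_AREAS = {"education", "employment", "critical infrastructure"}
PROHIBITED_PRACTICES = {"real-time biometric identification in public spaces"}

@dataclass
class AISystem:
    name: str
    purpose: str        # e.g. "admission scoring"
    area: str           # e.g. "education"
    practice: str = ""  # flagged practice, if any

def classify(system: AISystem) -> RiskTier:
    """Rough first-pass triage; every result still needs legal review."""
    if system.practice in PROHIBITED_PRACTICES:
        return RiskTier.PROHIBITED
    if system.area in HIGH_RISK_AREAS:
        return RiskTier.HIGH_RISK
    return RiskTier.LOW_RISK

inventory = [
    AISystem("AdmissionRank", "admission scoring", "education"),
    AISystem("CampusChat", "FAQ chatbot", "administration"),
]
for s in inventory:
    print(f"{s.name}: {classify(s).value}")

In this toy triage, an admission-scoring tool lands in the high-risk tier while an FAQ chatbot does not, mirroring the expectation described above.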

Privileging of research institutions

The AI Act expressly does not apply to AI systems or AI models that are developed for the sole purpose of scientific research and development, as long as they are not placed on the market or put into service (Article 2 (8) of the AI Act). The background to this exception is that the EU wants to promote innovation, respect the freedom of science and not undermine research and development activities. Recital 25 of the AI Act emphasizes that AI systems developed in the context of basic research, experimental development or scientific testing are not subject to the regular requirements of the Regulation. Even where the requirements of the AI Act do not apply, ethical and scientific integrity standards must be maintained, and it must be ensured that researchers use AI responsibly.

The research privilege applies only for the period in which an AI system or AI model is used exclusively for research, testing or development purposes. As soon as one of these conditions is no longer met, the regular provisions of the AI Act apply. An AI system or model is deemed to be “placed on the market” as soon as it is made available on the market for the first time (see also the definition in Art. 3 para. 9 AI Act). Commercially usable AI products or services that are sold, licensed or passed on to external users fall under the regular requirements of the AI Act. Research institutions that pass on or publish an AI model as a finished product must therefore ensure that it complies with the requirements of the Regulation.

An AI system or model is “put into service” when it is actually used for purposes other than pure research and testing (see also the definition in Art. 3 para. 11 AI Act). An AI system or model that is tested or used in a real environment with real user data is therefore no longer covered by the research privilege.

An example: a university develops an AI system for the automated assessment of examinations. As long as this AI is tested in a test environment, the research privilege applies. However, if it is used to assess real examinations, it is considered “put into service” and is subject to the AI Act.

Requirements for transparency, security and data protection

The AI Act lays down binding rules for the use of AI applications and defines specific requirements with regard to transparency, security and data protection. These requirements primarily concern AI systems, not AI models in general.

  • Transparency: Universities and scientific institutions must ensure that users are clearly informed about the use of AI. This applies to AI systems that are integrated into decision-making processes, for example automatic applicant selection and AI-supported examination assessment. AI models as such, for example a trained model used internally for research, are not directly subject to these transparency obligations unless they form part of an AI system.
  • Security: AI applications must be developed and operated in such a way that they do not pose risks to people or data. This requires regular security checks and risk analyses. It applies to AI systems in active use, especially high-risk systems such as medical diagnostic tools. AI models are subject to security requirements only if they are general-purpose AI models (GPAI) with systemic risk; in those cases, risk assessment obligations apply.
  • Data protection: The GDPR applies to all AI applications that process personal data, i.e. to both AI systems and AI models if they are trained or used with such data. Universities must take anonymization or pseudonymization measures whenever AI models or AI systems work with sensitive data (see the sketch after this list).
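
To make the data protection point concrete, here is a minimal Python sketch of pseudonymization using a keyed hash before records reach an AI pipeline. The key, field names and record structure are illustrative assumptions; which technique satisfies the GDPR in a given case, and how the key is managed, must be assessed separately.

import hmac
import hashlib

# Illustrative key; in practice it must come from a securely managed secret store.
SECRET_KEY = b"replace-with-a-securely-managed-key"

def pseudonymize(identifier: str) -> str:
    """Keyed hash (HMAC-SHA256): a stable pseudonym that cannot be
    reversed without the key, which is kept separate from the data."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"student_id": "s1234567", "grades": [1.7, 2.0, 1.3]}
safe_record = {**record, "student_id": pseudonymize(record["student_id"])}
print(safe_record)  # the AI pipeline only ever sees the pseudonym

Because the hash is keyed, the same student always receives the same pseudonym, so analyses remain consistent, but the mapping cannot be reversed without the separately stored key.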

 

Requirements for high-risk AI

High-risk AI systems are subject to particularly strict requirements, including

  • Risk management: Systematic risk management must be established in order to identify and minimize potential risks at an early stage.
  • Data quality and fairness: Universities and research institutions must ensure that training and test data are of high quality and do not promote bias or discrimination.
  • Human oversight: It must be ensured that critical decisions are not fully automated and that people can intervene in the decision-making process.
  • Robustness and security: AI systems must be protected against external attacks and manipulation, and regular security checks are required.
  • Documentation requirements: Universities and scientific institutions must keep detailed records of the functioning and decisions of AI systems in order to be able to demonstrate transparency in the event of regulatory audits (see the sketch after this list).
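
The human oversight and documentation points can be combined in practice: the AI output is treated as a recommendation, a person takes the final decision, and every case is logged for later audits. The following Python sketch is a hypothetical illustration; the function, fields and JSONL log format are assumptions, not a prescribed implementation.

import json
from datetime import datetime, timezone

AUDIT_LOG = "ai_admission_audit.jsonl"

def record_decision(applicant_id: str, model_version: str,
                    model_score: float, reviewer: str,
                    human_decision: str) -> dict:
    """The AI output is only a recommendation; the human decision is final
    and every case is written to an append-only audit log."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,      # ideally a pseudonym, see above
        "model_version": model_version,
        "model_score": model_score,        # recommendation, not a decision
        "reviewer": reviewer,
        "human_decision": human_decision,  # may overrule the model
    }
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

record_decision("a1b2c3", "admission-model-0.4", 0.87,
                reviewer="j.doe", human_decision="admit")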

 

Universities should adapt examination law

Another important point, which primarily affects universities, is the adaptation of examination law. In view of the transparency requirement, it is essential that universities clearly define where and under what conditions the use of AI is permitted. This applies in particular to examinations in which the use of AI technologies may not always be verifiable. Universities should issue rules that students can readily understand and that give examiners clear guidance.

Existing examination regulations should be revised to safeguard the integrity of examinations. One option is to introduce oral explanations or disputations for unsupervised examination formats, particularly final theses.

Who is responsible

University management, specialist departments and users of AI applications have a joint duty to ensure compliance with legal requirements. This includes

  • Institutional responsibility: Universities must ensure that AI applications comply with the applicable legal framework and are regularly reviewed.
  • Individual responsibility: Employees, researchers and students who use or develop AI applications should have a sufficient level of knowledge and competence to recognize and minimize potential risks.
  • Liability issues: In the event of incorrect decisions by AI systems, responsibilities must be legally analyzed and clearly regulated, especially with regard to data protection violations or discriminatory decisions.

 

Recommendations for universities and scientific institutions

  • Universities and scientific institutions should check which AI applications fall under the provisions of the AI Act and take appropriate compliance measures.
  • It is important to be actively involved in the discussion on the regulatory framework in order to ensure a balance between protective mechanisms and freedom of research.
  • Legal, ethical and technical experts should work together on sustainable AI solutions for the university and research sector.
  • Teachers, researchers and administrative staff should be informed about the legal implications of AI.

