Artificial intelligence (AI) offers numerous opportunities for research, teaching and administration, but it also raises complex legal issues. The European Union's AI Regulation (AI Act) has a significant impact on universities and the academic sector. The first provisions of the AI Act have been in force since February 2, 2025.
AI is particularly relevant for universities and research institutions in four areas:
Use in university administration: Universities can use AI-supported systems to increase efficiency in administration, for example for the automated analysis of study progression or in HR.
Use by students: Students now use chatbots, language models or image generation systems when preparing written assignments, and in some cases also for written exams and other examinations.
Use in research: In the scientific field, AI is often used for data analysis, pattern recognition or modeling scientific theories.
Development or further development of AI: Universities and research institutions are also involved in the development of new AI systems, AI models and algorithms.
Under the AI Act, universities and scientific institutions must ensure that they meet the legal requirements for transparency, data protection and security. At the same time, scientific freedom and innovative research should not be excessively restricted.
Universities and scientific institutions are obliged to develop internal guidelines on the use of AI, train employees and set up interdisciplinary expert committees to deal with the responsible implementation of AI technologies.
The AI Regulation follows a risk-based approach: applications are regulated differently depending on their risk level:
Prohibited practices: AI applications posing an unacceptable risk, such as social scoring, are banned outright (Art. 5 AI Act).
High-risk systems: AI systems with significant implications for health, safety or fundamental rights are subject to comprehensive compliance requirements (Art. 6 et seq. AI Act).
Limited risk: certain systems, such as chatbots, are subject to transparency obligations (Art. 50 AI Act).
Minimal risk: all other applications remain largely unregulated.
Universities must expect that AI-supported assessment and selection procedures, or systems used for decision-making such as admission procedures, will be classified as “high-risk” and will therefore be subject to the comprehensive compliance requirements of the AI Act.
The AI Act expressly does not apply to AI systems or AI models that are developed for the sole purpose of scientific research and development, as long as they are not placed on the market or put into service (Article 2(8) AI Act). The background to this exception is that the EU wishes to promote innovation, respect the freedom of science and not undermine research and development activities. Recital 25 of the AI Act emphasizes that AI systems developed in the context of basic research, experimental development or scientific testing are not subject to the regular requirements of the Regulation. Even where the requirements of the AI Act do not apply, ethical and scientific integrity standards must be maintained, and it must be ensured that researchers use AI responsibly.
The research privilege applies only for as long as an AI system or AI model is used exclusively for research, testing or development purposes. As soon as one of these conditions is no longer met, the regular provisions of the AI Act apply. An AI system or model is deemed to be “placed on the market” as soon as it is made available on the Union market for the first time (see the definition in Art. 3(9) AI Act). Commercially usable AI products or services that are sold, licensed or passed on to external users fall under the regular requirements of the AI Act. Research institutions that pass on or publish an AI model as a finished product must therefore ensure that it complies with the requirements of the Regulation.
An AI system or model is “put into service” when it is actually used for purposes other than pure research and testing (see the definition in Art. 3(11) AI Act). An AI system or model that is tested or used in a real environment with real user data is therefore no longer covered by the research privilege.
An example: a university develops an AI system for the automated assessment of examinations. As long as this system is tested in a test environment, the research privilege applies. However, once it is used for real examination assessment, it is considered to be “put into service” and is subject to the AI Act.
The AI Act sets binding regulations for the use of AI applications and defines specific requirements with regard to transparency, security and data protection. These requirements primarily concern AI systems, not all AI models in general.
High-risk AI systems are subject to particularly strict requirements, including:
a risk management system covering the entire life cycle of the system (Art. 9 AI Act),
data governance and quality requirements for training, validation and test data (Art. 10 AI Act),
technical documentation and automatic record-keeping (Art. 11, 12 AI Act),
transparency and information obligations towards deployers (Art. 13 AI Act),
human oversight (Art. 14 AI Act), and
accuracy, robustness and cybersecurity (Art. 15 AI Act).
Another important point that primarily affects universities is the adaptation of examination law. In view of the transparency requirement, it is essential that universities clearly define where and under what conditions the use of AI is permitted. This applies in particular to examinations in which the use of AI technologies cannot always be verified. Universities should issue regulations that are clearly comprehensible to students and give examiners clear guidance.
Existing examination regulations should be revised to ensure the integrity of examinations. One possibility is the introduction of oral explanations or disputations for unsupervised examination formats, particularly for final theses.
University management, specialist departments and users of AI applications have a joint duty to ensure compliance with legal requirements. This includes:
developing internal guidelines on the permissible use of AI,
training employees and building AI literacy,
classifying the AI systems in use according to their risk level,
adapting examination regulations and administrative processes, and
setting up interdisciplinary expert committees to oversee responsible implementation.