20.03.2025 | KPMG Law Insights

AI Act: What applies to AI in universities and research

Artificial intelligence (AI) offers numerous opportunities for research, teaching and administration, but also raises complex legal issues. The European Union’s AI Regulation (AI Act) also has a significant impact on universities and the academic sector. The first provisions of the AI Act have been in force since February 2, 2025.

Universities use AI in these areas

AI is particularly relevant for universities and research institutions in four areas:

Use in university administration: Universities can use AI-supported systems to increase efficiency in administration, for example for the automated analysis of study progression or in HR.

Use by students: Students now use chatbots, language models or image generation systems when preparing written assignments, and in some cases also for written exams and other examinations.

Use in research: In the scientific field, AI is often used for data analysis, pattern recognition or modeling scientific theories.

Development or further development of AI: Universities and research institutions are also involved in the development of new AI systems, AI models and algorithms.

What the AI Act means for universities and scientific institutions

Under the AI Act, universities and scientific institutions must ensure that they meet the legal requirements for transparency, data protection and security. At the same time, scientific freedom and innovative research should not be excessively restricted.

Universities and scientific institutions are obliged to develop internal guidelines on the use of AI, train employees and set up interdisciplinary expert committees to deal with the responsible implementation of AI technologies.

The AI Regulation follows a risk-based approach. Applications are regulated differently depending on the risk level:

  • Prohibited AI: Certain AI practices that pose unacceptable risks to fundamental rights (e.g. real-time biometric identification in public spaces) are prohibited.
  • High-risk AI: Systems that are used in sensitive areas such as critical infrastructure, education or employment are subject to strict requirements in terms of transparency, data protection and security.
  • Low-risk AI: Applications with low risk only have to fulfill general requirements.

Universities must expect that AI-supported assessment and selection procedures or decision-making systems, for example in admission procedures, will be classified as “high-risk” and will therefore be subject to the comprehensive compliance requirements of the AI Act.
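To illustrate the risk-based approach, the following minimal Python sketch triages a university’s AI inventory into the tiers above. The use cases, tier assignments and names are hypothetical assumptions chosen for illustration; they are not classifications taken from the AI Act itself.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LOW_RISK = "low risk"

# Hypothetical mapping of campus AI use cases to the tiers described
# above; the concrete assignments are illustrative assumptions.
USE_CASE_TIERS = {
    "real-time biometric identification on campus": RiskTier.PROHIBITED,
    "automated admission scoring": RiskTier.HIGH_RISK,
    "AI-supported examination assessment": RiskTier.HIGH_RISK,
    "library chatbot for opening hours": RiskTier.LOW_RISK,
}

def triage(use_case: str) -> RiskTier:
    """Return the assumed risk tier; unknown systems need legal review."""
    if use_case not in USE_CASE_TIERS:
        raise ValueError(f"Unclassified use case: {use_case!r}")
    return USE_CASE_TIERS[use_case]

print(triage("automated admission scoring"))  # RiskTier.HIGH_RISK
```

Keeping such an inventory explicit makes it easier to see which systems carry compliance duties and which ones need an individual legal assessment.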

Privileged treatment of research institutions

The AI Act expressly does not apply to AI systems or AI models that are developed for the sole purpose of scientific research and development, as long as they are not placed on the market or put into service (Article 2(8) AI Act). The background to this exception is that the EU wishes to promote innovation, respect the freedom of science and not undermine research and development activities. Recital 25 of the AI Act emphasizes that AI systems developed in the context of basic research, experimental development or scientific testing are not subject to the regular requirements of the Regulation. Even where the requirements of the AI Act do not apply, ethical and scientific integrity standards must be maintained, and it must be ensured that researchers use AI responsibly.

The research privilege applies only for the period in which an AI system or AI model is used exclusively for research, testing or development purposes. As soon as this condition is no longer met, the regular provisions of the AI Act apply. An AI system or model is deemed to be “placed on the market” as soon as it is made available on the market for the first time (see also the definition in Art. 3(9) AI Act). Commercially usable AI products or services that are sold, licensed or passed on to external users therefore fall under the regular requirements of the AI Act. Research institutions that pass on or publish an AI model as a finished product must ensure that it complies with the requirements of the Regulation.

An AI system or model is “put into service” when it is actually used for purposes other than pure research and testing (see also the definition in Art. 3(11) AI Act). An AI system or model that is tested or used in a real environment with real user data is therefore no longer covered by the research privilege.

Here is an example: A university develops an AI system for the automated assessment of examinations. As long as this system is tested in a test environment, the research privilege applies. However, if it is used for real examination assessment, it is deemed to be “put into service” and is subject to the AI Act.
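The two triggers described above can be read as a simple decision rule. The following sketch is a deliberately simplified illustration of Art. 2(8) AI Act, not a substitute for a legal assessment of the individual case:

```python
def research_privilege_applies(placed_on_market: bool,
                               put_into_service: bool) -> bool:
    """Simplified reading of Art. 2(8) AI Act: the privilege holds only
    while the system is neither placed on the market nor put into
    service for purposes beyond research, testing or development."""
    return not (placed_on_market or put_into_service)

# The examination-assessment example from the text:
print(research_privilege_applies(False, False))  # test environment -> True
print(research_privilege_applies(False, True))   # real exam grading -> False
```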

Requirements for transparency, security and data protection

The AI Act sets binding rules for the use of AI applications and defines specific requirements with regard to transparency, security and data protection. These requirements primarily concern AI systems, not AI models in general.

  • Transparency: Universities and scientific institutions must ensure that users are clearly informed about the use of AI. This applies to AI systems that are integrated into decision-making processes, for example automatic applicant selection and AI-supported examination assessments. AI models as such, for example a trained model that is used internally for research, are not directly subject to these transparency obligations unless they are part of an AI system.
  • Security: AI applications must be developed and operated in such a way that they do not pose risks to people or data. This requires regular security checks and risk analyses. It applies to AI systems that are actively used, especially high-risk systems such as medical diagnostic tools. Security requirements apply to AI models only if they are general-purpose AI models (GPAI) with systemic risk; in these cases, risk assessment obligations apply.
  • Data protection: The GDPR applies to all AI applications that process personal data, i.e. both AI systems and AI models, if they are trained or used with such data. Universities must take measures to anonymize or pseudonymize data when AI models or AI systems work with sensitive data (see the sketch below).
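As a concrete illustration of the pseudonymization point, the following minimal Python sketch replaces a direct identifier with a keyed hash before a record is used for model training or evaluation. The key name and record layout are hypothetical. Note that keyed hashing achieves pseudonymization, not anonymization, because the key holder could re-identify the records.

```python
import hashlib
import hmac

# Assumption: in practice the key comes from a secrets manager and is
# stored separately from the pseudonymized data, under access control.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Map a direct identifier to a stable pseudonym via HMAC-SHA256."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"student_id": "s1234567", "grade": 1.7}
record["student_id"] = pseudonymize(record["student_id"])
print(record)  # the direct identifier is replaced by a pseudonym
```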

 

Requirements for high-risk AI

High-risk AI systems are subject to particularly strict requirements, including:

  • Risk management: Systematic risk management must be established in order to identify and minimize potential risks at an early stage.
  • Data quality and fairness: Universities and research institutions must ensure that training and test data are of high quality and do not promote bias or discrimination.
  • Human oversight: It must be ensured that critical decisions are not fully automated and that people can intervene in the decision-making process (see the sketch after this list).
  • Robustness and security: AI systems must be protected against external attacks and manipulation, and regular security checks are required.
  • Documentation requirements: Universities and scientific institutions must keep detailed records of the functioning and decisions of AI systems in order to be able to demonstrate transparency in the event of regulatory audits.
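The oversight and documentation duties can be made concrete in code. The following sketch, with hypothetical names and an assumed confidence threshold, routes low-confidence AI recommendations to a human reviewer and writes every decision to an audit log:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

REVIEW_THRESHOLD = 0.85  # assumption: scores below this go to a human

def admission_recommendation(application_id: str, model_score: float) -> str:
    """Keep a human in the loop and an audit trail: the model only ever
    recommends; low-confidence cases are escalated for human review."""
    outcome = ("auto-recommend" if model_score >= REVIEW_THRESHOLD
               else "route-to-human-review")
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "application_id": application_id,
        "model_score": model_score,
        "outcome": outcome,
    }))
    return outcome

admission_recommendation("app-001", 0.92)  # logged, auto recommendation
admission_recommendation("app-002", 0.41)  # logged, escalated to a human
```

Even the “auto-recommend” path should feed into a human decision; the structured log entries are what make the system’s behavior demonstrable in a regulatory audit.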

 

Universities should adapt their examination regulations

Another important point that primarily affects universities is the adaptation of examination regulations. In view of the transparency requirement, it is essential that universities clearly define where and under what conditions the use of AI is permitted. This applies in particular to examinations in which the use of AI technologies cannot always be verified. Universities should issue rules that students can clearly understand and that give examiners a clear basis for their work.

Existing examination regulations should be revised to safeguard the integrity of examinations. One option is to introduce oral explanations or disputations for unsupervised examination formats, particularly final theses.

Who is responsible

University management, specialist departments and users of AI applications have a joint duty to ensure compliance with legal requirements. This includes:

  • Institutional responsibility: Universities must ensure that AI applications comply with the applicable legal framework and are regularly reviewed.
  • Individual responsibility: Employees, researchers and students who use or develop AI applications should have a sufficient level of knowledge and competence to recognize and minimize potential risks.
  • Liability issues: In the event of incorrect decisions by AI systems, responsibilities must be legally analyzed and clearly regulated, especially with regard to data protection violations or discriminatory decisions.

 

Recommendations for universities and scientific institutions

  • Universities and scientific institutions should check which AI applications fall under the provisions of the AI Act and take appropriate compliance measures.
  • They should be actively involved in the discussion on the regulatory framework in order to help strike a balance between protective mechanisms and the freedom of research.
  • Legal, ethical and technical experts should work together on sustainable AI solutions for the university and research sector.
  • Teachers, researchers and administrative staff should be informed about the legal implications of AI.

