20.03.2025 | KPMG Law Insights

AI Act: What applies to AI in universities and research

Artificial intelligence (AI) offers numerous opportunities for research, teaching and administration, but also raises complex legal issues. The European Union’s AI Regulation (AI Act) also has a significant impact on universities and the academic sector. The first provisions of the AI Act have been in force since February 2, 2025.

Universities use AI in these areas

AI is particularly relevant for universities and research institutions in four areas:

Use in university administration: Universities can use AI-supported systems to increase efficiency in administration, for example for the automated analysis of study progression or in HR.

Use by students: Students now use chatbots, language models or image generation systems when preparing written assignments, and in some cases also for written exams and other examinations.

Use in research: In the scientific field, AI is often used for data analysis, pattern recognition or modeling scientific theories.

Development or further development of AI: Universities and research institutions are also involved in the development of new AI systems, AI models and algorithms.

What the AI Act means for universities and scientific institutions

Under the AI Act, universities and scientific institutions must ensure that they meet the legal requirements for transparency, data protection and security. At the same time, scientific freedom and innovative research should not be excessively restricted.

Universities and scientific institutions are obliged to develop internal guidelines on the use of AI, train employees and set up interdisciplinary expert committees to deal with the responsible implementation of AI technologies.

The AI Regulation follows a risk-based approach. Applications are regulated differently depending on the risk level:

  • Prohibited AI: Certain AI practices that pose unacceptable risks to fundamental rights (e.g. real-time biometric identification in public spaces) are banned outright.
  • High-risk AI: Systems that are used in sensitive areas such as critical infrastructure, education or employment are subject to strict requirements in terms of transparency, data protection and security.
  • Low risk: AI applications that pose only a low risk merely have to fulfill general requirements.

Universities must expect that AI-supported assessment, selection and decision-making systems, for example in admission procedures, will be classified as “high-risk” and will therefore be subject to the comprehensive compliance requirements of the AI Act.
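To make this triage tangible, here is a minimal sketch in Python of how a university might map its AI use cases to the risk tiers described above. The categories and the mapping are illustrative assumptions for this article, not an official or complete classification under the AI Act:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LOW_RISK = "low-risk"

# Hypothetical examples. Education is among the sensitive areas named
# for high-risk systems, which is why assessment and admission systems
# are placed in the high-risk tier in this sketch.
USE_CASE_TIERS = {
    "real_time_biometric_id_public": RiskTier.PROHIBITED,
    "exam_assessment": RiskTier.HIGH_RISK,
    "admission_decision": RiskTier.HIGH_RISK,
    "literature_search_chatbot": RiskTier.LOW_RISK,
}

def classify(use_case: str) -> RiskTier:
    """Return the assumed AI Act risk tier for a known use case."""
    try:
        return USE_CASE_TIERS[use_case]
    except KeyError:
        raise ValueError(f"Unknown use case {use_case!r}: assess individually")

print(classify("admission_decision").value)  # high-risk
```

Any use case not covered by such an inventory should be assessed individually rather than defaulted to a tier.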

Privileged treatment of research institutions

The AI Act expressly does not apply to AI systems or AI models that are developed for the sole purpose of scientific research and development, as long as they are not placed on the market or put into service (Article 2 (8) of the AI Act). The background to this exception is that the EU wishes to promote innovation, respect the freedom of science and not undermine research and development activities. Recital 25 of the AI Act emphasizes that AI systems developed in the context of basic research, experimental development or scientific testing are not subject to the regular requirements of the Regulation. Even where the requirements of the AI Act do not apply, ethical and scientific integrity standards must be maintained and it must be ensured that researchers use AI responsibly.

The research privilege only applies for the period in which an AI system or AI model is used exclusively for research, testing or development purposes. As soon as this is no longer the case, the regular provisions of the AI Act apply. An AI system or model is deemed to be “placed on the market” as soon as it is made available on the market for the first time (see the definition in Art. 3 para. 9 AI Act). Commercially usable AI products or services that are sold, licensed or passed on to external users therefore fall under the regular requirements of the AI Act. Research institutions that pass on or publish an AI model as a finished product must then ensure that it complies with the requirements of the Regulation.

An AI system or model is “put into service” when it is actually used for purposes other than pure research and testing (see the definition of “putting into service” in Art. 3 para. 11 AI Act). An AI system or model that is tested or used in a real environment with real user data is therefore no longer covered by the research privilege.

Here is an example: a university develops an AI system for the automated assessment of examinations. As long as this AI is tested in a test environment, the research privilege applies. However, once it is used to assess real examinations, it is considered to be “put into service” and is subject to the AI Act.
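The distinction in this example can be reduced to a simple decision rule. The following Python sketch encodes a simplified reading of the research privilege described above; the field names and the three-condition test are illustrative assumptions, not a substitute for a legal assessment:

```python
from dataclasses import dataclass

@dataclass
class AISystemStatus:
    research_and_development_only: bool  # used solely for research/testing/development
    placed_on_market: bool               # made available on the market (Art. 3(9))
    put_into_service: bool               # used for its intended purpose (Art. 3(11))

def research_privilege_applies(status: AISystemStatus) -> bool:
    """The privilege holds only while all three conditions are met."""
    return (status.research_and_development_only
            and not status.placed_on_market
            and not status.put_into_service)

# The exam-assessment example: sandbox testing vs. real use.
sandbox = AISystemStatus(True, False, False)
production = AISystemStatus(False, False, True)
print(research_privilege_applies(sandbox))     # True
print(research_privilege_applies(production))  # False
```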

Requirements for transparency, security and data protection

The AI Act sets binding regulations for the use of AI applications and defines specific requirements with regard to transparency, security and data protection. These requirements primarily concern AI systems, not all AI models in general.

  • Transparency: Universities and scientific institutions must ensure that users are clearly informed about the use of AI. This applies to AI systems that are integrated into decision-making processes, for example automatic applicant selection and AI-supported examination assessments. AI models as such, for example a trained model that is used internally for research, are not directly subject to these transparency obligations unless they are part of an AI system.
  • Safety: AI applications must be developed and operated in such a way that they do not pose any risks to people or data. This requires regular safety checks and risk analyses. It applies to AI systems that are actively used, especially high-risk systems such as medical diagnostic tools. AI models are only subject to safety requirements if they are general-purpose AI models (GPAI) with systemic risk; in these cases, risk assessment obligations apply.
  • Data protection: The GDPR applies to all AI applications that process personal data, i.e. both AI systems and AI models if they are trained or used with such data. Universities must take measures to anonymize or pseudonymize data when AI models or AI systems work with sensitive data (see the sketch after this list).
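As a minimal sketch of the pseudonymization step mentioned in the data protection item, the following Python example derives stable pseudonyms from student identifiers with a keyed hash. The key handling shown is a simplifying assumption, and keyed pseudonymization alone does not make data anonymous under the GDPR; it only pseudonymizes it:

```python
import hmac
import hashlib

# Hypothetical key: in practice it must be generated securely, stored
# separately from the data and access-controlled.
SECRET_KEY = b"replace-with-a-securely-managed-key"

def pseudonymize(student_id: str) -> str:
    """Derive a stable pseudonym from a student ID using HMAC-SHA256."""
    return hmac.new(SECRET_KEY, student_id.encode("utf-8"),
                    hashlib.sha256).hexdigest()

record = {"student_id": "s1234567", "grade": 1.7}
safe_record = {"pseudonym": pseudonymize(record["student_id"]),
               "grade": record["grade"]}
print(safe_record)  # the raw student ID never reaches the AI system
```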

 

Requirements for high-risk AI

High-risk AI systems are subject to particularly strict requirements, including:

  • Risk management: Systematic risk management must be established in order to identify and minimize potential risks at an early stage.
  • Data quality and fairness: Universities and research institutions must ensure that training and test data are of high quality and do not promote bias or discrimination.
  • Human oversight: It must be ensured that critical decisions are not fully automated and that humans can intervene in the decision-making process.
  • Robustness and security: AI systems must be protected against external attacks and manipulation, and regular security checks are required.
  • Documentation requirements: Universities and scientific institutions must keep detailed records of the functioning and decisions of AI systems in order to be able to demonstrate transparency in the event of regulatory audits (a minimal logging sketch follows this list).
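Documentation and human oversight can be supported technically, for example with an append-only log that records each AI-assisted decision together with the human reviewer. The following Python sketch uses a hypothetical record structure; the fields are illustrative assumptions, not a prescribed format:

```python
import json
import time

def log_decision(path: str, case_id: str, model_output: str,
                 reviewer: str, final_decision: str, overridden: bool) -> None:
    """Append one AI-assisted decision to a JSON-lines audit log."""
    entry = {
        "timestamp": time.time(),
        "case_id": case_id,
        "model_output": model_output,      # what the AI system proposed
        "reviewer": reviewer,              # the human in the loop
        "final_decision": final_decision,  # what was actually decided
        "overridden": overridden,          # True if the human changed the AI result
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

log_decision("decisions.log", "ADM-2025-001",
             model_output="reject", reviewer="j.doe",
             final_decision="admit", overridden=True)
```

A log of this kind makes individual decisions reconstructible in an audit and, via the override flag, documents that human intervention was possible and actually took place.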

 

Universities should adapt their examination regulations

Another important point that primarily affects universities is the adaptation of examination regulations. In view of the transparency requirement, it is essential that universities clearly define where and under what conditions the use of AI is permitted. This applies in particular to examinations in which the use of AI technologies may not always be verifiable. Universities should issue rules that are clearly comprehensible for students and give examiners clear guidance.

Existing examination regulations should be revised to ensure the integrity of examinations. One possibility is the introduction of oral explanations or disputations for unsupervised examination formats, particularly for final theses.

Who is responsible

University management, specialist departments and the users of AI applications share a duty to ensure compliance with the legal requirements. This includes:

  • Institutional responsibility: Universities must ensure that AI applications comply with the applicable legal framework and are regularly reviewed.
  • Individual responsibility: Employees, researchers and students who use or develop AI applications should have a sufficient level of knowledge and competence to recognize and minimize potential risks.
  • Liability issues: In the event of incorrect decisions by AI systems, responsibilities must be legally analyzed and clearly regulated, especially with regard to data protection violations or discriminatory decisions.

 

Recommendations for universities and scientific institutions

  • Universities and scientific institutions should check which AI applications fall under the provisions of the AI Act and take appropriate compliance measures.
  • It is important to be actively involved in the discussion on the regulatory framework in order to strike a balance between protective mechanisms and the freedom of research.
  • Legal, ethical and technical experts should work together on sustainable AI solutions for the university and research sector.
  • Teachers, researchers and administrative staff should be informed about the legal implications of AI.


