3.3 Risk levels of AI by the EU AI Act

Explore how the EU AI Act’s risk-based approach shapes leadership responsibilities in higher education. This unit examines how universities can distinguish between prohibited practices, high-risk systems, and lower-risk uses of AI, particularly in areas such as admissions, assessment, student progression, and monitoring. It also considers how governance, procurement, and institutional oversight must prioritise compliance where the potential impact on students’ rights and opportunities is greatest.

👉 Start with the video for a quick overview.

👉 Now, read the document to explore the topic in more depth.

Download PDF

👉 Finish with the task to reflect and apply what you’ve learned.

Choose one AI-enabled use that is already deployed at your institution or is currently being discussed for adoption. Describe what the system is intended to do and where it touches the student journey, for example admissions, assessment, academic progress monitoring, or student services. Then classify the use at a high level using the AI Act’s risk logic by answering three practical questions. First, could the system influence access, evaluation, or test conditions in a way that affects a student’s opportunities? Second, could it fall into an area that the regulation treats as high-risk in education and vocational training? Third, if the system is not high-risk, what transparency expectations should apply so that users understand when they are interacting with AI and what the system is responsible for? Conclude with one leadership action that would improve governance, such as clarifying institutional roles, requiring evidence from suppliers, or strengthening user communication and escalation pathways. Write 200 to 300 words.

Please note: Your responses are not stored on the platform. You can save your reflections locally by clicking the “Download text” button below.