Why It Matters
The high-risk category is where the EU AI Act has the most practical impact on businesses. Unlike prohibited AI (which is simply banned) or minimal-risk AI (which carries few obligations), high-risk AI requires a comprehensive compliance framework — quality management, documentation, testing, monitoring, and often third-party conformity assessments. Getting the classification wrong can mean either unnecessary compliance costs or, worse, non-compliance with penalties of up to €15 million or 3% of worldwide annual turnover.
What Makes an AI System High-Risk
The AI Act defines high-risk AI in two ways (a classification sketch follows the Annex III list below):
Annex I — Safety Components
AI systems that are safety components of regulated products (e.g., medical devices, vehicles, aviation, machinery, toys, lifts). These follow existing sector-specific conformity assessment procedures.
Annex III — Standalone High-Risk Areas
AI systems used in these eight areas:
- Biometric identification and categorization — remote biometric identification, emotion recognition
- Critical infrastructure — AI managing safety in road traffic, in the supply of water, gas, heating, and electricity, and in critical digital infrastructure
- Education and vocational training — admissions, assessments, proctoring, learning analytics
- Employment and worker management — recruitment, CV screening, promotion, termination, task allocation, performance monitoring
- Essential services — credit scoring, risk assessment and pricing in life and health insurance, government benefit eligibility
- Law enforcement — risk assessment of individuals, polygraphs, evidence analysis
- Migration, asylum, border control — risk assessments, document verification, visa processing
- Justice and democratic processes — assisting judges in fact-finding and applying the law, influencing election outcomes or voting behavior
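Taken together, the two paths reduce to a triage question: is the system a safety component of an Annex I product, and if not, does its intended purpose fall within an Annex III area? The Python sketch below models that triage. The names (AnnexIIIArea, AISystem, is_high_risk) are illustrative assumptions, not terms from the Act, and real classification also has to account for the Article 6(3) carve-outs for narrow procedural or preparatory tasks, so treat this as a first-pass screen, not legal advice.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class AnnexIIIArea(Enum):
    """Illustrative labels for the eight Annex III areas."""
    BIOMETRICS = auto()
    CRITICAL_INFRASTRUCTURE = auto()
    EDUCATION = auto()
    EMPLOYMENT = auto()
    ESSENTIAL_SERVICES = auto()
    LAW_ENFORCEMENT = auto()
    MIGRATION_BORDER_CONTROL = auto()
    JUSTICE_DEMOCRACY = auto()

@dataclass
class AISystem:
    name: str
    safety_component_of_regulated_product: bool = False  # Annex I path
    annex_iii_area: Optional[AnnexIIIArea] = None         # Annex III path

def is_high_risk(system: AISystem) -> bool:
    """Rough triage only; Article 6(3) carve-outs and legal review
    can change the answer for borderline Annex III uses."""
    if system.safety_component_of_regulated_product:
        return True  # Annex I: sector-specific conformity procedures apply
    return system.annex_iii_area is not None

# A CV-screening tool falls under the employment area, so it is high-risk.
screener = AISystem("cv-screener", annex_iii_area=AnnexIIIArea.EMPLOYMENT)
assert is_high_risk(screener)
```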
Compliance Requirements for High-Risk AI
Providers (developers) must implement:
- Risk management system — identify and mitigate risks throughout the AI lifecycle
- Data governance — training, validation, and testing data must be relevant, sufficiently representative, and, to the best extent possible, free of errors and complete
- Technical documentation — detailed records before the system is placed on the market
- Record-keeping — automatic logging of system operations for traceability (see the logging sketch after this list)
- Transparency — clear instructions for deployers, including capabilities and limitations
- Human oversight — design the system so humans can effectively oversee it
- Accuracy, robustness, cybersecurity — appropriate levels for the intended purpose
- Quality management system — documented processes for compliance
- Conformity assessment — self-assessment or third-party (depending on the area)
- EU Declaration of Conformity and CE marking
- Registration in the EU database
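Of the list above, record-keeping translates most directly into engineering practice. The sketch below shows one minimal way to emit a structured, timestamped record per system operation; everything here (the field names, log_inference_event, the log file path) is an illustrative assumption rather than anything prescribed by the Act, and a production system would add retention rules and tamper-evident storage.

```python
import json
import logging
from datetime import datetime, timezone

# Minimal record-keeping sketch: one structured log line per operation.
# Field names and log destination are illustrative assumptions.
logging.basicConfig(filename="ai_system_events.log", level=logging.INFO,
                    format="%(message)s")

def log_inference_event(system_id: str, input_ref: str,
                        output_summary: str, operator: str) -> None:
    """Append one timestamped, machine-readable record for traceability."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,        # which high-risk system acted
        "input_ref": input_ref,        # pointer to the input, not raw PII
        "output_summary": output_summary,
        "operator": operator,          # who was overseeing at the time
    }
    logging.info(json.dumps(record))

log_inference_event("cv-screener-v2", "application:4711",
                    "ranked: shortlist", "hr-reviewer-01")
```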
Deployer (User) Obligations
Organizations deploying high-risk AI must:
- Use the system according to the provider's instructions
- Ensure human oversight by qualified personnel
- Monitor performance and report serious incidents and malfunctions (a monitoring sketch follows this list)
- Conduct a fundamental rights impact assessment (for public bodies and certain private entities)
- Inform affected individuals about AI use
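Part of the monitoring duty can be automated as a guardrail around the performance figures documented in the provider's instructions. The sketch below flags a suspected malfunction for human review; the baseline value, threshold, and report_incident hook are hypothetical assumptions, and the actual reporting route (to the provider and, for serious incidents, the market surveillance authority) depends on the deployer's own procedures.

```python
# Hypothetical deployer-side performance monitor. The baseline,
# threshold, and reporting hook below are illustrative assumptions.
BASELINE_ACCURACY = 0.92      # accuracy stated in provider's instructions
DEGRADATION_THRESHOLD = 0.05  # tolerated drop before escalation

def report_incident(message: str) -> None:
    # Placeholder: route to the provider and, where required,
    # the market surveillance authority per internal procedures.
    print(f"[INCIDENT] {message}")

def check_performance(observed_accuracy: float) -> None:
    """Flag a suspected malfunction for human review and reporting."""
    drop = BASELINE_ACCURACY - observed_accuracy
    if drop > DEGRADATION_THRESHOLD:
        report_incident(
            f"Accuracy fell to {observed_accuracy:.2f} "
            f"(baseline {BASELINE_ACCURACY:.2f}); review and notify provider."
        )

check_performance(observed_accuracy=0.84)
```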
Key Regulation
- EU AI Act Articles 6–15 — high-risk classification and requirements
- Annex III — list of high-risk AI areas
- High-risk obligations apply from: August 2, 2026 for Annex III systems; August 2, 2027 for Annex I safety-component systems