The European Union's Artificial Intelligence Act entered into force on August 1, 2024. Companies face fines of up to €35 million or 7% of global annual turnover for violations. Prohibited AI practices have been banned since February 2, 2025, and high-risk AI requirements become mandatory on August 2, 2026.
Yet many organizations still don't know which of their AI systems are in scope. This guide breaks down what the EU AI Act is, who it applies to, and what you need to do before the enforcement deadlines hit. You'll learn the risk categories, the penalties, and the practical steps to achieve compliance.
Reading time: 8 minutes
Unlike voluntary frameworks (such as the NIST AI RMF in the US), the EU AI Act carries legal force. Non-compliance triggers substantial financial penalties and market access restrictions.
Who Does the EU AI Act Apply To?
The Act covers all entities in the AI value chain operating in the EU market:
Providers
Organizations that develop or commission AI systems for market placement. This includes:
- AI developers creating systems for commercial use
- Companies customizing AI for business deployment
- Open-source contributors (under certain conditions)
Deployers
Organizations using AI systems in the EU, even if developed elsewhere. This includes:
- Businesses using AI for hiring, customer service, or operations
- Healthcare facilities using diagnostic AI
- Financial institutions using credit scoring algorithms
Importers and Distributors
Entities bringing non-EU AI systems into European markets or making them available to EU users.
Geographic Scope
The Act applies if:
- Your AI system's output is used in the EU (regardless of where you're based)
- You provide AI services to EU customers
- You deploy AI affecting people in the EU
Example: A US company using AI recruitment tools for its European office must comply. A Canadian provider selling AI chatbots to EU businesses must comply.
The Four Risk Categories Explained
The EU AI Act classifies AI systems into four risk levels. Your obligations depend on classification.
1. Unacceptable Risk (Prohibited AI)
Status: Banned from February 2, 2025
These AI practices are illegal in the EU:
- Behavioral manipulation: AI using subliminal techniques to materially distort behavior and cause harm
- Vulnerability exploitation: AI exploiting age, disability, or socio-economic vulnerabilities
- Social scoring: Systems (public or private) that evaluate people based on social behavior or personal characteristics, leading to unjustified or disproportionate detrimental treatment
- Emotion recognition: AI inferring emotions in workplaces or schools (except medical/safety uses)
- Biometric categorization: Systems that infer sensitive characteristics (race, political opinions, sexual orientation) from biometric data
- Real-time remote biometric identification: Law enforcement using live facial recognition in public (limited exceptions)
- Facial recognition database expansion: Untargeted scraping of facial images from the internet or CCTV footage to build or expand facial recognition databases
Penalty: €35 million or 7% of global annual turnover (whichever is higher)
Action required: Audit all AI systems immediately. The ban has applied since February 2, 2025, so discontinue any prohibited uses now.
2. High-Risk AI Systems
Status: Full requirements from August 2, 2026
High-risk AI systems must meet strict compliance standards before deployment.
Which AI systems are high-risk?
The Act lists specific use cases (Annex III):
Employment & HR:
- Recruitment and hiring algorithms
- Performance evaluation systems
- Task allocation and monitoring tools
- Promotion and termination decision support
Education & Training:
- Student assessment and evaluation
- Educational institution admission decisions
- Exam monitoring and proctoring systems
Essential Services:
- Credit scoring and creditworthiness evaluation
- Insurance pricing and risk assessment
- Emergency response dispatching
- Access to essential services (water, energy, healthcare)
Law Enforcement:
- Predictive policing systems
- Crime risk assessment tools
- Lie detection systems
- Evidence evaluation support
Critical Infrastructure:
- AI managing roads, water, gas, electricity
- AI used as safety components of such infrastructure
Healthcare:
- Medical diagnosis support systems
- Patient triage and prioritization
- Medical device AI components
Requirements for high-risk AI:
Risk Management System
- Continuous risk identification and mitigation
- Testing throughout AI lifecycle
- Post-market monitoring
Data Governance
- Training data must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete
- Bias detection and mitigation
- Data quality verification processes
Technical Documentation
- Detailed system description
- Development methodology
- Testing and validation results
- Intended purpose and limitations
Record-Keeping and Logging
- Automatic logging of events
- Traceability of AI decisions
- Audit trail maintenance (see the logging sketch after this list)
Transparency and User Information
- Clear instructions for use
- System capabilities and limitations
- Human oversight requirements
Human Oversight
- Humans can intervene in AI operations
- Override or stop AI decisions
- Monitor for anomalies
Accuracy, Robustness, and Cybersecurity
- Achieve appropriate accuracy levels
- Resilience against errors or attacks
- Cybersecurity measures
Quality Management System
- Compliance monitoring processes
- Change management procedures
- Incident reporting systems
Conformity Assessment
- Self-assessment or third-party audit (depending on system type)
- CE marking after compliance demonstration
- Registration in EU database
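As one illustration of the record-keeping and logging requirement above, here is a minimal sketch of an append-only decision log in Python. The field names and file format are our own assumptions, not anything the Act prescribes; the point is that every AI-assisted decision gets a timestamped, traceable record.

```python
import json
import uuid
from datetime import datetime, timezone

LOG_PATH = "ai_decision_log.jsonl"  # hypothetical location for the audit trail

def log_ai_decision(system_id: str, input_summary: str, output: str,
                    model_version: str, human_reviewer: str | None = None) -> str:
    """Append one traceable record per AI-assisted decision (fields are illustrative)."""
    record = {
        "event_id": str(uuid.uuid4()),            # unique ID for traceability
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,                   # which AI system produced the output
        "model_version": model_version,           # needed to reproduce the decision later
        "input_summary": input_summary,           # avoid logging raw personal data (GDPR)
        "output": output,
        "human_reviewer": human_reviewer,         # records the oversight step, if any
    }
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record["event_id"]

# Usage: record a screening recommendation that a human reviewed
log_ai_decision(
    system_id="cv-screener-v2",
    input_summary="candidate 4821, role: data analyst",
    output="shortlisted",
    model_version="2.3.1",
    human_reviewer="recruiter_17",
)
```

An append-only log like this also feeds the post-market monitoring and incident-reporting obligations, since past decisions can be replayed and audited.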
Penalty for non-compliance: €15 million or 3% of global annual turnover
3. Limited-Risk AI (Transparency Requirements)
Status: Transparency obligations from August 2, 2026
These AI systems require transparency but face fewer restrictions.
Examples:
- Chatbots and conversational AI
- Content generation systems (text, images, video)
- Emotion recognition systems (outside workplace/education)
- Biometric categorization systems
- Deepfakes and synthetic media
Requirements:
- Disclosure: Users must know they're interacting with AI
- Labeling: AI-generated content must be clearly marked
- Machine-readable marking: Deepfakes and other synthetic content must be marked so they are detectable as AI-generated
Example: Your customer service chatbot must inform users: "You're chatting with an AI assistant."
4. Minimal Risk (No Specific Requirements)
Most AI systems fall here: spam filters, AI-enabled games, recommendation algorithms.
These systems face no EU AI Act obligations beyond general product safety and consumer protection laws.
Organizations may voluntarily adopt codes of conduct to demonstrate responsible AI practices.
General-Purpose AI Models (GPAI)
Status: Requirements from August 2, 2025
GPAI systems (like ChatGPT, Claude, Gemini) have separate rules:
All GPAI providers must:
- Provide technical documentation
- Publish training data summaries
- Implement copyright compliance measures
- Maintain quality management systems
GPAI with systemic risk (presumed when cumulative training compute exceeds 10^25 FLOPs) must additionally:
- Conduct model evaluations and adversarial testing
- Assess and mitigate systemic risks
- Report serious incidents
- Ensure cybersecurity protections
Penalty: €15 million or 3% of global annual turnover (for SMEs and startups, the lower of the two amounts applies)
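The 10^25 FLOPs threshold can be sanity-checked with the common scaling-laws heuristic that training compute ≈ 6 × parameters × training tokens. To be clear, the 6ND rule is an estimation convention from the research literature, not something the Act prescribes. A back-of-the-envelope sketch:

```python
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold for GPAI systemic risk

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate via the 6*N*D heuristic (an assumption)."""
    return 6 * n_params * n_tokens

# Hypothetical model: 70B parameters trained on 15T tokens
flops = estimated_training_flops(70e9, 15e12)
print(f"Estimated compute: {flops:.1e} FLOPs")                           # ~6.3e24
print("Presumed systemic risk:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```

A model of that hypothetical size would land just under the threshold; today's largest frontier models are generally understood to exceed it.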
Enforcement Timeline: Key Dates
The EU AI Act phases in over three years:
| Date | What Becomes Enforceable |
|---|---|
| August 1, 2024 | Act entered into force |
| February 2, 2025 | Prohibited AI practices banned ⚠️ |
| August 2, 2025 | GPAI model obligations, governance provisions, and penalties |
| August 2, 2026 | High-risk AI system requirements (majority of obligations) |
| August 2, 2027 | Extended transition ends: rules for high-risk AI embedded in regulated products (Annex I) |
Already in effect: the ban on prohibited AI practices and the AI literacy obligation (both since February 2, 2025).
Main compliance deadline: August 2, 2026 (high-risk AI requirements) – less than 2 years away.
Penalties for Non-Compliance
The EU AI Act imposes tiered penalties based on violation severity:
| Violation Type | Maximum Fine |
|---|---|
| Prohibited AI practices | €35 million OR 7% of global annual turnover |
| High-risk AI non-compliance | €15 million OR 3% of global annual turnover |
| Supplying incorrect information | €7.5 million OR 1% of global annual turnover |
Fine calculation: Whichever amount is higher applies.
For SMEs and startups: The fine is capped at whichever of the fixed amount or the percentage of turnover is lower.
Example: A company with €2 billion annual revenue using prohibited emotion recognition in offices could face a €140 million fine (7% of turnover).
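The "whichever is higher" rule, and the lower-of rule for SMEs noted above, are easy to encode. A minimal sketch reproducing the example calculation:

```python
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct_cap: float,
                 is_sme: bool = False) -> float:
    """Maximum EU AI Act fine: the higher of the two caps; the lower of the two for SMEs."""
    pct_based = pct_cap * turnover_eur
    return min(fixed_cap_eur, pct_based) if is_sme else max(fixed_cap_eur, pct_based)

# Prohibited-practice tier (EUR 35M or 7%) for EUR 2B annual turnover
print(fine_ceiling(2_000_000_000, 35_000_000, 0.07))  # 140000000.0, matching the example
```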
6 Steps to EU AI Act Compliance
Organizations should take these actions before August 2, 2026:
Step 1: Conduct an AI System Inventory
What to do:
- List every AI system your organization develops, deploys, or uses
- Include third-party AI tools (Salesforce AI, Microsoft Copilot, recruiting software)
- Document AI embedded in products or services
- Identify AI in supply chain operations
Output: Complete AI inventory with system names, purposes, and vendors
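The Act doesn't mandate an inventory format; one structured record per system is enough to start. A sketch using a Python dataclass, with field names that are our suggestion rather than anything prescribed:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row of the AI inventory (illustrative fields, not a mandated schema)."""
    name: str
    purpose: str                      # what the system is used for
    vendor: str                       # "internal" for in-house systems
    role: str                         # your role under the Act: "provider" or "deployer"
    uses_personal_data: bool          # flags the GDPR overlap early
    risk_class: str = "unclassified"  # filled in during Step 2
    notes: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("cv-screener-v2", "CV pre-screening for recruitment",
                   "internal", "provider", uses_personal_data=True),
    AISystemRecord("support-chatbot", "Customer service assistant",
                   "Example Vendor Inc.", "deployer", uses_personal_data=True),
]
```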
Step 2: Classify Systems by Risk Level
What to do:
- Compare each AI system against EU AI Act definitions
- Classify as: prohibited, high-risk, limited-risk, or minimal-risk
- Prioritize prohibited and high-risk systems
- Document classification rationale
Output: Risk classification matrix
Common mistakes:
- Assuming HR software isn't AI (many recruiting tools use AI)
- Overlooking AI in purchased platforms
- Misclassifying high-risk systems as low-risk
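Classification is ultimately a legal judgment, but encoding the decision rule keeps the rationale documented and repeatable. A deliberately simplified sketch; the keyword sets below are placeholders, and a real classification must follow Article 5 and Annex III rather than string matching:

```python
from enum import Enum

class RiskLevel(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk"
    MINIMAL = "minimal-risk"

# Toy stand-ins for the Act's actual legal criteria
PROHIBITED_USES = {"workplace emotion recognition", "social scoring"}
HIGH_RISK_USES = {"recruitment", "credit scoring", "exam proctoring"}
LIMITED_RISK_USES = {"chatbot", "content generation"}

def classify(use_case: str) -> RiskLevel:
    """Map a documented use case to a risk level (simplified illustration)."""
    if use_case in PROHIBITED_USES:
        return RiskLevel.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskLevel.HIGH
    if use_case in LIMITED_RISK_USES:
        return RiskLevel.LIMITED
    return RiskLevel.MINIMAL

print(classify("recruitment"))  # RiskLevel.HIGH -> prioritize for Step 4
```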
Step 3: Eliminate Prohibited AI Immediately
What to do:
- Stop using any prohibited AI systems
- Notify affected employees or customers
- Remove data collected through prohibited means
- Document discontinuation
Deadline: February 2, 2025 (already in effect)
Example: If you use emotion recognition software to monitor employee engagement, discontinue it now.
Step 4: Implement High-Risk AI Requirements
What to do:
- Establish risk management systems for each high-risk AI
- Implement data governance processes
- Create technical documentation
- Set up logging and record-keeping
- Design human oversight mechanisms
- Conduct conformity assessments
Deadline: August 2, 2026
Start now: Full implementation takes 12-18 months for complex organizations.
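For the human oversight requirement in particular, a common pattern is a gate that routes AI recommendations to a human before they take effect. A minimal sketch; the threshold, names, and routing logic are assumptions, and for high-risk decisions oversight may need to be stricter than confidence-based routing (GDPR Article 22 can also apply):

```python
def apply_recommendation(recommendation: str, confidence: float,
                         review_queue: list[str], auto_threshold: float = 0.95) -> str:
    """Hold AI recommendations for human review unless confidence is very high."""
    if confidence < auto_threshold:
        review_queue.append(recommendation)  # a human can approve, override, or stop it
        return "pending_human_review"
    return recommendation  # still logged; an operator can reverse it afterwards

queue: list[str] = []
status = apply_recommendation("reject_application", confidence=0.80, review_queue=queue)
print(status, queue)  # pending_human_review ['reject_application']
```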
Step 5: Ensure Transparency for Limited-Risk AI
What to do:
- Add disclosure notices to chatbots and AI assistants
- Label AI-generated content (images, text, videos)
- Implement deepfake detection measures
- Update terms of service and privacy policies
Deadline: August 2, 2026 (when the Article 50 transparency obligations apply)
Example: Add "This conversation is with an AI" to chatbot interfaces.
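At its simplest, the disclosure can be enforced in code so that every new session starts with the notice. A sketch; the wording and function name are our own:

```python
AI_DISCLOSURE = "You're chatting with an AI assistant."

def start_chat_session(first_bot_message: str) -> list[dict]:
    """Prepend the AI disclosure so users are informed before any interaction."""
    return [
        {"role": "system_notice", "text": AI_DISCLOSURE},  # rendered in the chat UI
        {"role": "assistant", "text": first_bot_message},
    ]

for message in start_chat_session("Hi! How can I help you today?"):
    print(f"[{message['role']}] {message['text']}")
```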
Step 6: Train Staff on AI Literacy
What to do:
- Develop AI awareness training for all employees
- Provide specialized training for teams deploying AI
- Cover AI risks, limitations, and ethical considerations
- Document training completion
Requirement: Applies to all AI providers and deployers (from February 2, 2025)
Content areas:
- What is AI and how it works
- AI capabilities and limitations
- Potential risks and biases
- Organization's AI governance policies
- Compliance obligations under EU AI Act
EU AI Act vs GDPR: How They Work Together
If you handle personal data in AI systems, both regulations apply:
| EU AI Act | GDPR |
|---|---|
| Regulates AI systems and their safety | Regulates personal data processing |
| Risk-based approach | Rights-based approach |
| Applies to all AI (personal data or not) | Applies only when processing personal data |
| Focus: AI system safety and trustworthiness | Focus: Individual privacy and data protection |
Overlapping requirements:
- Data quality and accuracy (both require this)
- Transparency and disclosure (both require this)
- Automated decision-making (Article 22 GDPR + AI Act high-risk)
- Record-keeping and documentation
Best practice: Integrate AI Act and GDPR compliance into unified governance framework.
Common Questions About the EU AI Act
Do I need to comply if I'm not based in the EU?
Yes, if:
- Your AI system's output is used by people in the EU
- You provide AI services to EU organizations
- EU customers access your AI products
The Act has extraterritorial reach, like the GDPR. Your company's location doesn't matter; what matters is where the AI is used.
What about open-source AI models?
Open-source AI developers have reduced obligations unless they:
- Place the model on the market as a product
- Provide ongoing support or monetization
- Claim compliance with EU AI Act
If you deploy open-source AI in a high-risk use case, you take on the deployer's compliance obligations; if you place a modified system on the market under your own name, you may take on the provider's obligations as well.
Can I use AI for recruitment in the EU?
Yes, but recruitment AI is high-risk. You must:
- Implement all high-risk AI requirements (risk management, documentation, human oversight, etc.)
- Conduct conformity assessment
- Register in EU database
- Provide candidates transparency about AI use
Some companies have paused AI-assisted recruitment until they can demonstrate full compliance.
How do I know if my AI system uses personal data (GDPR) vs falls under AI Act?
GDPR applies if AI processes personal data (names, emails, IP addresses, behavioral data, etc.)
EU AI Act applies regardless of data type, based on AI system risk level and use case
Most business AI triggers both: Customer service chatbots, HR tools, marketing automation all likely process personal data AND constitute AI systems.
What is a conformity assessment?
For high-risk AI, you must prove compliance through:
Self-assessment (most high-risk AI):
- Internal quality checks
- Testing and validation
- Documentation review
- Declaration of conformity
Third-party assessment (specific high-risk AI like biometrics):
- Notified body audit
- Independent testing
- Certification
After passing assessment, you affix CE marking and register in EU database.
Want expert guidance on conformity assessment? The process requires detailed technical documentation and, for certain high-risk AI systems, a notified body audit. Contact our compliance experts for a consultation.
Real-World Impact: What Companies Are Doing Now
Organizations are taking different approaches to EU AI Act compliance:
Approach 1: AI Freeze
Some companies temporarily stopped deploying new AI in the EU until compliance frameworks are ready.
Approach 2: Third-Party Audits
Organizations are hiring external consultants to assess their AI systems and perform gap analyses.
Approach 3: Vendor Pressure
Businesses requiring AI vendors (Salesforce, Microsoft, Google) to provide compliance documentation and guarantees.
Approach 4: In-House Compliance Teams
Larger enterprises building dedicated AI governance teams combining legal, technical, and compliance expertise.
Most effective approach: Start now with AI inventory and risk classification. Don't wait until 2026 deadlines.
Resources for EU AI Act Compliance
Official EU Resources:
- Full text of EU AI Act (EUR-Lex)
- European AI Office
- EU AI Act FAQs and guidance documents are available on the European Commission website
Standards and Frameworks:
- ISO/IEC 42001:2023 (AI Management System)
- CEN-CENELEC standards for AI systems
- NIST AI Risk Management Framework (complementary)
Compliance Tools:
Organizations typically develop their own AI system inventory templates, risk classification matrices, and technical documentation frameworks based on EU AI Act requirements.
Key Takeaways
The EU AI Act is in force now. Prohibited AI practices have been banned since February 2, 2025.
High-risk AI requirements become mandatory August 2, 2026. You have less than 2 years to implement compliance measures.
Penalties are substantial. Up to €35 million or 7% of global revenue for serious violations.
Risk classification determines obligations. Not all AI requires the same level of compliance—prioritize high-risk systems.
Extraterritorial application. Non-EU companies must comply if AI affects EU users.
Start now with AI inventory. You can't achieve compliance if you don't know what AI systems you're using.
Conclusion: Prepare for EU AI Act Compliance Now
The EU AI Act is the world's first comprehensive AI regulation. Prohibited AI practices have been banned since February 2, 2025, and high-risk AI requirements become mandatory on August 2, 2026.
Key deadlines:
- February 2, 2025: Prohibited AI practices banned (in effect now)
- August 2, 2025: GPAI model requirements
- August 2, 2026: High-risk AI compliance and transparency obligations
Most important actions:
- Conduct AI system inventory immediately
- Classify systems by risk level (prohibited, high-risk, limited-risk, minimal-risk)
- Eliminate any prohibited AI uses
- Start implementing high-risk AI requirements (takes 12-18 months)
- Train staff on AI literacy and governance
Penalties are substantial: up to €35 million or 7% of global revenue. Organizations that start compliance work now will have a competitive advantage while others scramble to meet deadlines.
Don't wait until 2026. Conduct your AI inventory this month, classify your systems, and develop your compliance roadmap.
Ready to implement EU AI Act compliance? Start with your AI system inventory this month. Classify each system by risk level, eliminate any prohibited uses immediately, and develop your compliance roadmap for high-risk AI systems before the August 2026 deadline.
Need help? Browse our compliance courses or contact our team for expert guidance.