EU AI Act Compliance Checklist: What Every Business Needs to Know

TL;DR:

  • The EU AI Act applies to any organization whose AI systems interact with people in the EU, regardless of where the organization is based
  • Compliance requirements scale with risk: prohibited practices are already banned, high-risk system obligations phase in through August 2027, and limited-risk transparency requirements take effect August 2026
  • High-risk AI systems require conformity assessments, technical documentation, quality management systems, human oversight provisions, and post-market monitoring
  • This checklist covers what you need to assess, document, and implement for each risk tier

The EU AI Act is the most comprehensive AI regulation in the world. It entered into force in August 2024 and is being implemented in phases through 2027. Unlike sector-specific regulations, the AI Act applies horizontally across industries, classifying AI systems by risk level and imposing obligations that scale with that classification.

The Act’s territorial scope is broad. It applies to providers of AI systems placed on the EU market, deployers of AI systems within the EU, and providers and deployers located outside the EU whose AI system outputs are used within the EU. If your AI system affects people in the EU, the Act likely applies to you regardless of where your organization is headquartered.

This checklist translates the Act’s requirements into actionable compliance steps. For the broader governance context, see our AI governance readiness guide. For a comprehensive approach to identifying and mitigating AI-specific risks (which the Act requires for high-risk systems), see our AI risk assessment framework.

Step 1: Classify Your AI Systems by Risk

The first compliance action is determining which of the Act’s risk tiers applies to each AI system your organization develops or deploys. This classification determines your obligations.

Prohibited AI Practices (in force since February 2025)

Verify that none of your AI systems fall into the prohibited category. These practices are banned entirely:

  • Social scoring systems that evaluate individuals based on social behavior or personality characteristics, leading to detrimental treatment
  • AI that exploits vulnerabilities of specific groups (age, disability, social or economic situation) to materially distort behavior in a way that causes significant harm
  • Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
  • Emotion recognition in workplace or educational settings (with limited exceptions)
  • Biometric categorization systems that categorize individuals based on sensitive characteristics (race, political opinions, sexual orientation, religious beliefs)
  • Real-time remote biometric identification in public spaces for law enforcement (with narrow exceptions for serious crime)

If any of your AI systems fall into these categories, they must be discontinued. There is no compliance pathway for prohibited practices.

High-Risk AI Systems

High-risk classification applies to AI systems used in specified domains where errors could significantly affect individuals. Review each AI system against these categories:

Annex I (product safety legislation): AI systems that are safety components of products regulated under existing EU product safety legislation (medical devices, civil aviation, automotive, machinery, toys, marine equipment, rail).

Annex III (standalone high-risk systems):

  • Biometric identification and categorization of natural persons
  • Management and operation of critical infrastructure (energy, water, transport, digital)
  • Education and vocational training (access, assessment, monitoring)
  • Employment and worker management (recruitment, promotion, termination, task allocation, monitoring)
  • Access to essential services (credit scoring, insurance pricing, emergency services dispatch)
  • Law enforcement (risk assessment, evidence reliability, crime prediction)
  • Migration, asylum, and border control
  • Administration of justice and democratic processes

If an AI system falls into any Annex III category, it is presumptively high-risk. A limited exception exists: if the AI system does not pose a significant risk of harm to health, safety, or fundamental rights, the provider may argue it is not high-risk, but this requires documented justification.

Limited-Risk AI Systems

AI systems that interact directly with natural persons and are not classified as high-risk fall under limited-risk transparency obligations:

  • Chatbots and conversational AI (must disclose to users that they are interacting with an AI system)
  • Emotion recognition systems that are not prohibited (must inform the people exposed to them)
  • AI-generated content (deepfakes, synthetic text, and generated images must be labeled as AI-generated)
  • Biometric categorization systems that are not prohibited (must inform the people exposed to them)

Minimal-Risk AI Systems

AI systems that don’t fall into any of the above categories (spam filters, AI-enabled video games, inventory management AI, internal productivity tools) have no specific obligations under the Act, though voluntary codes of practice are encouraged.
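The classification order above can be sketched as a simple decision helper. This is a minimal illustration only: the tier names and use-case category strings are condensed from this checklist, not an official taxonomy, and a real classification requires legal review of each system.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Hypothetical category sets condensed from the checklist above.
PROHIBITED_USES = {"social_scoring", "untargeted_face_scraping",
                   "workplace_emotion_recognition"}
ANNEX_III_USES = {"biometric_identification", "critical_infrastructure",
                  "education", "employment", "essential_services",
                  "law_enforcement", "migration", "justice"}

def classify(use_case: str, interacts_with_people: bool) -> RiskTier:
    """Evaluate tiers in the order the Act does: prohibited first,
    then high-risk (Annex III), then limited-risk transparency, else minimal."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in ANNEX_III_USES:
        return RiskTier.HIGH
    if interacts_with_people:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

For example, a recruitment screening tool (`classify("employment", True)`) lands in the high-risk tier, while a back-office inventory forecaster with no Annex III use case and no direct human interaction lands in the minimal tier.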

Step 2: Comply with High-Risk System Requirements

If any AI system is classified as high-risk, the following requirements apply. This is the most demanding compliance tier and the one where most organizations will need to invest.

Risk Management System (Article 9)

  • Establish a risk management system that operates throughout the AI system’s lifecycle (design, development, deployment, post-market)
  • Identify and analyze known and foreseeable risks
  • Estimate and evaluate risks that may emerge during intended use and reasonably foreseeable misuse
  • Adopt risk mitigation measures
  • Test the system to identify the most appropriate risk management measures

Seampoint’s four governance constraints (consequence of error, verification cost, accountability requirements, physical reality) provide a practical lens for this risk analysis. The AI readiness assessment framework covers how to apply these constraints at the task level.

Data Governance (Article 10)

  • Document training, validation, and testing data sets
  • Implement data governance practices covering data collection, preparation, and labeling
  • Assess data for relevance, representativeness, accuracy, and completeness
  • Evaluate and address potential biases in datasets
  • Consider characteristics specific to the geographic, contextual, behavioral, or functional setting of the AI system

Technical Documentation (Article 11)

  • Prepare technical documentation demonstrating compliance with high-risk requirements
  • Include: system description, development methodology, design specifications, monitoring and functioning details, risk management documentation, and applicable standards applied
  • Keep documentation current throughout the AI system’s lifecycle
  • Make documentation available to national competent authorities upon request

Record-Keeping and Logging (Article 12)

  • Implement automatic logging of events throughout the AI system’s operational lifetime
  • Ensure logs enable traceability of the system’s functioning
  • Retain logs for an appropriate period consistent with the system’s intended purpose and applicable legal obligations
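A minimal sketch of the kind of traceable event logging Article 12 calls for, assuming a JSON-lines log file. The field names are illustrative, not prescribed by the Act; hashing inputs and outputs is one way to support traceability without retaining raw personal data in the log itself.

```python
import hashlib
import json
from datetime import datetime, timezone

def log_event(log_path: str, system_id: str, event_type: str,
              input_data: str, output_data: str) -> dict:
    """Append one traceable event record as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event_type": event_type,  # e.g. "inference", "override", "error"
        # Hashes let auditors match a log entry to a specific input/output
        # pair without the log storing the data itself.
        "input_sha256": hashlib.sha256(input_data.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output_data.encode()).hexdigest(),
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An append-only structured format like this keeps the log machine-parseable for authorities and for your own post-market monitoring analysis.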

Transparency and Provision of Information (Article 13)

  • Design the AI system so its operation is sufficiently transparent for deployers to interpret outputs and use them appropriately
  • Provide deployers with instructions for use that include: system capabilities and limitations, intended purpose, performance levels (accuracy, robustness, cybersecurity), known or foreseeable circumstances that may lead to risks, and human oversight measures

Human Oversight (Article 14)

  • Design the AI system to allow effective human oversight during use
  • Enable the human overseer to fully understand the AI system’s capabilities and limitations
  • Enable the human overseer to correctly interpret the system’s output
  • Enable the human overseer to decide not to use, override, or reverse the AI system’s output
  • Enable the human overseer to intervene in or halt the system’s operation
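One way to make these oversight points concrete in application code is a review gate that records whether a human accepted, overrode, or rejected each AI output. This is a sketch under assumed names; the Act does not prescribe any particular mechanism, only that effective oversight be possible.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewedDecision:
    ai_output: str
    final_output: str
    action: str      # "accepted", "overridden", or "rejected"
    reviewer: str

def human_review(ai_output: str, reviewer: str,
                 override: Optional[str] = None,
                 reject: bool = False) -> ReviewedDecision:
    """Gate an AI output behind a human decision: accept it as-is,
    substitute the reviewer's own output, or reject it entirely."""
    if reject:
        return ReviewedDecision(ai_output, "", "rejected", reviewer)
    if override is not None:
        return ReviewedDecision(ai_output, override, "overridden", reviewer)
    return ReviewedDecision(ai_output, ai_output, "accepted", reviewer)
```

Recording both the AI output and the human action also feeds the Article 12 logging and Article 72 monitoring obligations: override and rejection rates are direct evidence of whether oversight is effective in practice.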

Accuracy, Robustness, and Cybersecurity (Article 15)

  • Achieve appropriate levels of accuracy, robustness, and cybersecurity throughout the lifecycle
  • Declare accuracy metrics in technical documentation and instructions for use
  • Implement measures to address errors, faults, and inconsistencies
  • Implement resilience against unauthorized third-party attempts to alter system use or performance (including adversarial inputs)

Conformity Assessment (Articles 40-49)

  • Determine whether each AI system requires third-party conformity assessment or can be self-assessed
  • For biometric systems (Annex III, point 1): assessment by a notified body, unless harmonized standards or common specifications are applied in full
  • For the other Annex III categories: self-assessment via internal control
  • Affix CE marking upon successful conformity assessment
  • Register the AI system in the EU database for high-risk AI systems before placing it on the market

Post-Market Monitoring (Article 72)

  • Establish a post-market monitoring system proportionate to the nature of the AI technology and risks
  • Actively collect and analyze data on performance throughout the system’s lifetime
  • Report serious incidents and malfunctions to national market surveillance authorities

Step 3: Meet Transparency Obligations for Limited-Risk Systems

These requirements take effect August 2026:

  • AI systems interacting with natural persons: ensure users are informed they are interacting with an AI system (unless this is obvious from the circumstances)
  • Emotion recognition or biometric categorization: inform the persons exposed
  • AI-generated or manipulated content (deepfakes): clearly label the content as AI-generated, in a machine-readable format where technically feasible
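For generated content, the labeling obligation can be met by attaching a machine-readable provenance marker. The sketch below is an assumption for illustration: the marker format and keys are invented here, not a standardized format mandated by the Act, and real deployments would use an established content-provenance standard where one is available.

```python
import json

def label_ai_content(content: str, generator: str) -> str:
    """Prefix generated text with a machine-readable provenance marker
    (hypothetical format; shown here only to illustrate the principle)."""
    marker = {"ai_generated": True, "generator": generator}
    return f"<!--ai-label:{json.dumps(marker)}-->\n{content}"

def is_ai_labeled(text: str) -> bool:
    """Check for the marker so downstream systems can detect labeled content."""
    return text.startswith("<!--ai-label:")
```

The point of machine-readability is that platforms and downstream tools can detect the label programmatically, not just human readers.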

Step 4: Establish Organizational Infrastructure

Regardless of specific system classifications, organizations deploying AI systems should establish:

AI Literacy (Article 4, applicable since February 2025)

  • Ensure staff and other persons dealing with AI systems on your behalf have a sufficient level of AI literacy
  • Tailor AI literacy measures to the technical knowledge, experience, education, and training of the relevant staff and the context in which the AI systems are to be used

Quality Management System (for high-risk system providers)

  • Implement a quality management system that ensures compliance throughout the AI system’s lifecycle
  • Include: strategy for regulatory compliance, techniques for design and development, quality control procedures, data management procedures, record-keeping systems, corrective action processes, and post-market monitoring plans

Timeline Summary

  • August 2024: AI Act enters into force
  • February 2025: Prohibited AI practices banned; AI literacy obligations apply
  • August 2025: Governance rules and obligations for general-purpose AI models apply
  • August 2026: Transparency obligations for limited-risk systems; high-risk requirements for standalone Annex III systems
  • August 2027: High-risk requirements for AI systems covered by existing product safety legislation (Annex I)

What to Do Now

The phased timeline creates urgency that isn’t always visible. Prohibited practice compliance was due in February 2025. General-purpose AI model obligations applied from August 2025. Transparency requirements and the Annex III high-risk requirements take effect in August 2026; high-risk requirements for Annex I systems embedded in regulated products follow in August 2027. These obligations require significant organizational investment in documentation, quality management, and monitoring systems that take months to implement.

Immediate actions (if not already completed):

  1. Inventory all AI systems in use across the organization, including vendor-provided AI features embedded in existing software
  2. Classify each system against the risk tiers above
  3. Verify that no prohibited practices are in use
  4. Begin technical documentation for any system classified as high-risk

Near-term actions (before August 2026):

  1. Implement transparency disclosures for limited-risk AI systems
  2. Develop the quality management system infrastructure for high-risk systems
  3. Assess conformity assessment requirements and timeline for each high-risk system
  4. Launch AI literacy programs for staff involved in AI system deployment and operation

Medium-term actions (before the relevant application date: August 2026 for Annex III systems, August 2027 for Annex I systems):

  1. Complete conformity assessments for all high-risk systems
  2. Register high-risk systems in the EU database
  3. Establish post-market monitoring systems
  4. Implement full record-keeping and logging capabilities

For organizations assessing their overall governance readiness, the AI governance readiness guide provides the broader framework. EU AI Act compliance is one component of governance readiness, not the entirety of it. The AI readiness assessment covers how governance (including regulatory compliance) integrates with data, workforce, infrastructure, and strategic readiness.

Frequently Asked Questions

Does the EU AI Act apply to us if we’re based outside the EU?

Yes, if your AI system’s outputs are used within the EU or if the AI system interacts with people located in the EU. The territorial scope mirrors GDPR: it follows the affected individuals, not the organization’s headquarters. A U.S. company whose AI-powered hiring tool evaluates candidates in EU member states is subject to the Act’s requirements.

Are AI features embedded in vendor software (ERP, CRM) our compliance responsibility?

It depends on your role. The vendor is the “provider” and bears primary compliance obligations (conformity assessment, technical documentation, CE marking). You are the “deployer” and bear obligations to use the system in accordance with its instructions, maintain human oversight, monitor performance, and report incidents. You cannot outsource your deployer obligations to the vendor. Verify that your vendors are EU AI Act compliant, and document your own deployer compliance.

What’s the penalty for non-compliance?

Fines scale with violation severity. Using prohibited AI practices: up to €35 million or 7% of global annual turnover (whichever is higher). Violating high-risk system obligations: up to €15 million or 3% of turnover. Providing incorrect, incomplete, or misleading information to authorities: up to €7.5 million or 1% of turnover. For SMEs and startups, the Act provides proportionate penalty caps.

How does the EU AI Act interact with GDPR?

They’re complementary. GDPR regulates personal data processing, including processing by AI systems. The AI Act regulates the AI systems themselves. An AI system that processes personal data must comply with both: GDPR for data handling, consent, and individual rights, and the AI Act for system design, oversight, and transparency. The AI Act explicitly references GDPR requirements and does not override them.

Can we use AI in hiring under the EU AI Act?

AI systems used in recruitment, screening, evaluation, or employment decisions are classified as high-risk under Annex III. They can be used, but they must comply with all high-risk requirements: risk management, data governance, technical documentation, human oversight, accuracy standards, and conformity assessment. This is one of the most compliance-intensive use cases for AI under the Act.

Assess readiness before you deploy

Seampoint maps AI opportunity and governance constraints at the task level so you invest where deployment is both capable and accountable.