Less than 5 months. That is how long you have until August 2, 2026 — the date when the EU AI Act's full requirements for high-risk AI systems take effect. Penalties reach €35 million or 7% of global annual turnover for the most serious violations. This guide covers everything you need: which AI systems are affected, what compliance requires article by article, how to prove compliance, and a realistic month-by-month roadmap.
What the EU AI Act Actually Regulates
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Published in the EU Official Journal on July 12, 2024, it entered into force on August 1, 2024.
Unlike the GDPR, which regulates what you do with personal data, the AI Act regulates AI systems themselves — how they are designed, trained, documented, monitored, and governed throughout their lifecycle.
Who it applies to:
- Any provider (developer, manufacturer) placing an AI system on the EU market or putting it into service in the EU
- Any deployer (operator) using an AI system within the EU in a professional context
- Providers and deployers established outside the EU when the output of their AI systems is used in the EU
A US, UK, or Canadian company building AI for European customers is subject to the Act. Market presence is not required — only market impact.
The Implementation Timeline: What Is Already in Effect
- August 1, 2024: Regulation enters into force
- February 2, 2025: Prohibited AI practices (Article 5) — already in effect and enforceable
- August 2, 2025: General-purpose AI (GPAI) model obligations — already in effect
- August 2, 2026: Full regulation applies — high-risk AI systems (Annex III), limited-risk transparency (Article 50), all remaining provisions
- August 2, 2027: High-risk AI systems that are safety components in products already regulated by EU harmonisation law (Annex I)
The prohibited AI provisions are not a future concern. They are current law. If your AI system engages in any practice listed under Article 5, you are already in violation.
Risk Classification: Where Does Your AI Sit?
Unacceptable Risk — Prohibited Since February 2, 2025
These practices are banned outright under Article 5 and carry the highest penalty tier:
- Social scoring — evaluating or classifying people based on social behaviour or personal characteristics in ways that lead to detrimental or unfavourable treatment in unrelated contexts (the prohibition is not limited to public authorities)
- Real-time remote biometric identification in publicly accessible spaces by law enforcement (with narrow exceptions for investigating specific serious crimes with prior judicial authorisation)
- Biometric categorisation inferring sensitive attributes — race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation — with limited law enforcement exceptions
- Subliminal or manipulative techniques that exploit psychological weaknesses or vulnerabilities to distort behaviour against a person's interests
- Emotion recognition in workplace or educational institution settings, except for specific safety or medical reasons
- Predictive policing based solely on profiling of individuals or assessment of personality traits
- Facial recognition databases created or expanded through untargeted scraping of facial images from the internet or CCTV footage
If your AI does any of these: stop immediately. These are not approaching deadlines — they are current violations.
High-Risk — Full Compliance Required by August 2, 2026
High-risk AI systems are enumerated in Annex III of the Act. They require a conformity assessment, full technical documentation, CE marking, and registration in the EU AI database before being placed on the market.
Limited Risk — Transparency Requirements (August 2, 2026)
Under Article 50, AI systems that interact with humans or generate content must disclose their nature:
- Conversational AI and chatbots must clearly inform users they are interacting with an AI system, unless this is obvious from context
- AI-generated synthetic media — deepfakes, synthetic voice, machine-generated text — must be labelled as artificially generated or manipulated
- Emotion recognition and biometric categorisation systems must inform the persons exposed that such a system is being used
Minimal Risk — No Mandatory Requirements
AI used for spam filtering, recommendation engines, internal analytics, predictive maintenance, and most internal tools faces no specific obligations. Voluntary codes of practice are encouraged.
The Complete Annex III: All Eight High-Risk Categories
If your AI system falls into any of these categories, full compliance obligations apply from August 2, 2026.
1. Biometric Systems
Remote biometric identification, biometric categorisation of natural persons, and emotion recognition systems (unless falling under the prohibited category). Exception: biometric verification confirming that an individual is who they claim to be (one-to-one matching) is excluded.
2. Critical Infrastructure
AI used as safety components in management of critical infrastructure networks and systems: roads, rail, aviation, shipping, energy grids, water distribution, gas, heating systems, and digital infrastructure. A "safety component" is one whose failure could endanger lives or essential services.
3. Education and Vocational Training
- Determining access to or admission into educational institutions
- Evaluating learning outcomes, including automated grading and proctoring
- Assessing the level of educational attainment, skills, or qualifications
- Monitoring prohibited student behaviour during exams
4. Employment, Workers Management, and Self-Employment
This is the broadest category for most enterprises:
- CV screening, and the filtering, sorting, and ranking of candidates in recruitment processes
- Decisions on hiring, promotion, and termination
- Task allocation based on individual behaviour, personal traits, or characteristics
- Monitoring and evaluating the performance and behaviour of workers
5. Essential Private and Public Services
- Credit scoring and creditworthiness assessment for individuals
- Life insurance and health insurance risk assessment based on individual profiling
- Emergency service dispatch and triage prioritisation
- Access to public services, benefits, and essential private services
6. Law Enforcement (Law Enforcement Authorities Only)
- Risk assessment tools evaluating individuals as victims or offenders of criminal offences
- Polygraphs and similar tools testing for truthfulness
- Assessment of the reliability of evidence in criminal proceedings
- Prediction of criminal or reoffending behaviour based on profiling
- Profiling of individuals in relation to criminal investigations
7. Migration, Asylum, and Border Control
- Risk assessments of persons for irregular migration or security risks at borders
- Verification of authenticity of travel documents
- Examination of asylum applications and determination of refugee status
- Visa and residence permit decisions
8. Administration of Justice and Democratic Processes
- AI assisting courts in researching, interpreting, or applying law to factual situations
- AI intended to influence the outcome of an election or referendum, or the voting behaviour of natural persons
What High-Risk Compliance Requires: Articles 9–15
For every high-risk AI system, all of the following must be implemented, documented, and maintained:
Article 9 — Risk Management System
A continuous, documented risk management process covering the entire system lifecycle from design to decommissioning. This is not a one-time assessment. Required activities:
- Identify and analyse known and reasonably foreseeable risks to health, safety, and fundamental rights
- Estimate and evaluate risks that may arise in real-world conditions of use and in reasonably foreseeable misuse
- Adopt appropriate risk mitigation measures — prioritising design-based measures over operational ones
- Inform deployers of any residual risks after mitigation
Article 10 — Data and Data Governance
Training, validation, and testing datasets must be:
- Relevant — appropriate for the intended purpose
- Representative — reflecting the population and conditions in which the system will be used
- Sufficiently complete and free from errors to the extent technically feasible
- Subject to appropriate data governance practices covering data collection methodology, provenance, data preparation processes, and quality checks
You must examine datasets for possible biases that could affect the system's outputs and harm persons. Where personal data is used, appropriate data protection measures under the GDPR apply in addition.
Article 11 — Technical Documentation
Comprehensive technical documentation must be prepared before the system is placed on the market and kept up to date throughout its lifecycle. The documentation requirements are specified in Annex IV and include:
- General description, intended purpose, and the version history
- Design and development processes — architecture, design choices, key design decisions
- Training methodology, datasets used, validation and testing procedures and results
- Risk management documentation (Article 9 records)
- Instructions for use for deployers
Expect this to take several weeks per system if starting from scratch.
Article 12 — Logging and Record-Keeping
High-risk AI systems must automatically log operations during their use to enable post-hoc auditing. Providers must retain technical documentation for 10 years after the system is placed on the market. Deployers must retain automatically generated logs for at least 6 months, unless other applicable law requires longer retention.
Article 13 — Transparency and Instructions for Use
Providers must supply deployers with clear, adequate instructions covering:
- Provider identity and contact details
- Capabilities and limitations of the system, including known biases and foreseeable misuse
- Conditions of expected performance, including accuracy levels and their validation
- Human oversight measures and how deployers should implement them
- Expected lifetime and any necessary maintenance
Article 14 — Human Oversight
High-risk AI systems must be designed and developed to enable effective human oversight during their use. The deployer must assign qualified, trained persons responsible for oversight who have the capacity and authority to:
- Fully understand the system's capabilities, limitations, and potential biases
- Monitor and detect anomalies, malfunctions, or unexpected behaviour
- Override, interrupt, or disregard the system's output when human judgement requires it
- Decide not to use the AI output in a specific case
This is not a rubber-stamp process. Meaningful oversight that can genuinely affect outcomes is required.
Article 15 — Accuracy, Robustness, and Cybersecurity
High-risk AI systems must achieve an appropriate level of:
- Accuracy: Declared in the instructions for use; accuracy metrics must be based on validated test results, not claimed performance (a minimal validation sketch follows this list)
- Robustness: Resilience to errors, faults, inconsistencies in inputs, and adversarial manipulation attempts
- Cybersecurity: Protection against unauthorised third-party access, tampering with training data, model poisoning, and adversarial examples
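To make the accuracy requirement concrete, one pattern is a release gate that compares measured performance on a held-out test set against the figure declared in the instructions for use. A minimal Python sketch, assuming a fitted scikit-learn-style model; the declared figure and dataset names are illustrative:

```python
# A minimal release-gate sketch; the declared accuracy figure and the
# held-out dataset are assumptions for illustration.
from sklearn.metrics import accuracy_score

DECLARED_ACCURACY = 0.92  # the figure stated in the instructions for use

def accuracy_release_gate(model, X_test, y_test) -> bool:
    """Block release if validated accuracy falls below the declared level."""
    measured = accuracy_score(y_test, model.predict(X_test))
    # Article 15: declared metrics must rest on validated test results.
    return measured >= DECLARED_ACCURACY
```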
Conformity Assessment: Proving You Comply
Before placing a high-risk AI system on the EU market, providers must complete a conformity assessment to verify compliance with all applicable requirements.
Two routes exist:
Self-assessment (most high-risk systems): For the majority of Annex III categories, the provider conducts the conformity assessment internally. You verify compliance against each article, prepare the full technical documentation, sign a declaration of conformity, and affix the CE marking.
Third-party notified body assessment (specific systems): Required for AI systems used as safety components in products already subject to Union harmonisation legislation (Annex I — machinery, medical devices, etc.) that undergo mandatory third-party conformity assessment under that existing law, and where the AI component materially affects safety. Notified bodies are independent organisations designated by EU member states.
After the assessment, providers must:
- Sign the EU declaration of conformity (Article 47)
- Affix CE marking to the system and its documentation (Article 48)
- Register the system in the EU AI database before market placement
EU AI Database Registration
The EU AI Act creates a public EU-wide database of high-risk AI systems, maintained by the European Commission. Providers must register before placing a high-risk AI system on the market.
Registration covers: provider identity, system name, intended purpose, geographical scope, conformity assessment pathway, and the reference to the declaration of conformity.
For certain high-risk AI systems deployed by public bodies in areas such as social benefits, law enforcement, or border control, the deployer must also register their use of the system in the database.
Post-Market Monitoring and Serious Incident Reporting
Compliance does not end at market placement.
Post-market monitoring (Article 72): Providers must establish a post-market monitoring system proportionate to the risk level of the system. This involves actively collecting and analysing performance data throughout the system's operational lifetime to identify any issues that did not emerge during conformity assessment.
Serious incident reporting (Article 73): When a serious incident occurs — an incident that directly or indirectly caused or could have caused death, serious health harm, or a serious breach of fundamental rights — providers must report to national market surveillance authorities. Deployers who become aware of a serious incident must report to the provider and, where applicable, to authorities directly.
GPAI Models: Already Under Obligation Since August 2025
General-purpose AI (GPAI) models — foundation models trained on broad data at scale for a wide range of tasks — have been subject to obligations since August 2, 2025.
All GPAI model providers must currently:
- Maintain technical documentation as specified in Annex XI
- Comply with EU copyright law for training data — including honouring rights reservation under Article 4(3) of the DSM Directive
- Publish a sufficiently detailed summary of the content used for training, according to a template provided by the AI Office
GPAI models with systemic risk — presumed when the cumulative compute used for training exceeds 10^25 floating-point operations (FLOPs) — carry additional obligations (a rough compute estimate follows this list):
- Conduct model evaluations including adversarial testing (red-teaming) before and after market release
- Report serious incidents and possible corrective measures to the AI Office
- Implement appropriate cybersecurity measures
- Document the model's known or estimated energy consumption (an Annex XI documentation item)
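For a rough sense of where the 10^25 FLOP threshold bites, a widely used community convention estimates training compute for dense transformers as roughly 6 × parameters × training tokens. A sketch under that assumption — the approximation is not defined in the Act, and the model figures are illustrative:

```python
# Rough training-compute estimate using the common ~6 * N * D approximation
# for dense transformers (a community convention, not a method in the Act).
def training_flops(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

# Illustrative: a 70-billion-parameter model trained on 15 trillion tokens.
flops = training_flops(70e9, 15e12)            # ~6.3e24 FLOPs
print(f"{flops:.1e}", "-> systemic-risk presumption:", flops > 1e25)  # False
```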
If you build applications on top of GPAI models: You benefit from the upstream model provider's GPAI compliance. However, you remain fully responsible for compliance at the application level — if your application is a high-risk AI system under Annex III, you must complete the full high-risk compliance process for your system.
The Three-Tier Penalty Structure
The AI Act's penalty regime has three distinct tiers:
- Prohibited practices (Article 5): Up to €35 million or 7% of total worldwide annual turnover for the preceding financial year — whichever is higher
- Other violations (high-risk requirements, GPAI obligations, transparency obligations): Up to €15 million or 3% of total worldwide annual turnover — whichever is higher
- Incorrect, incomplete, or misleading information provided to notified bodies or national authorities: Up to €7.5 million or 1% of total worldwide annual turnover — whichever is higher
For SMEs and startups, the Act provides that national authorities must consider financial capacity and economic viability. The fine is capped at the lower of the absolute amount or the turnover percentage.
These are maximum amounts. Actual enforcement will depend on the infringement's severity, duration, and the level of cooperation with authorities.
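The cap logic itself is mechanical: the applicable maximum is the higher of the absolute amount and the turnover percentage, flipped to the lower of the two for SMEs. A worked sketch, using illustrative turnover figures:

```python
# Worked example of the penalty cap logic; turnover figures are illustrative.
def max_fine(turnover_eur: float, abs_cap_eur: float, pct: float,
             sme: bool = False) -> float:
    candidates = (abs_cap_eur, pct * turnover_eur)
    # "Whichever is higher" -- but "whichever is lower" for SMEs.
    return min(candidates) if sme else max(candidates)

print(max_fine(2e9, 35e6, 0.07))              # 140,000,000 (7% of EUR 2bn)
print(max_fine(10e6, 35e6, 0.07, sme=True))   # 700,000 (SME cap applies)
```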
Who Enforces the EU AI Act?
The AI Office (established within the European Commission, Directorate-General for Communications Networks, Content and Technology) is responsible for:
- Direct oversight and enforcement of GPAI model provider obligations
- Coordination of national enforcement across member states
- Maintaining the EU AI database
- Developing guidelines, standards, and delegated acts
- Issuing warnings, requesting access to model documentation, and ordering corrective action
National competent authorities: Each EU member state designates one or more national competent authorities to supervise enforcement of the Act within their jurisdiction. These are typically existing sector-specific regulators depending on the domain of the AI system.
National market surveillance authorities: Responsible for product-level enforcement for AI in physical products subject to existing EU product safety law.
Your 5-Month Compliance Roadmap: March to August 2026
March 2026 — Inventory and Triage
Conduct a comprehensive AI system inventory across your entire organisation:
- What AI systems are currently in use or development?
- Who owns each system? Who uses it? Who is affected by its outputs?
- What decisions does each system influence, directly or indirectly?
- What data does each system process?
Apply the Annex III checklist to every system identified. Most will be clearly minimal risk. Flag those in the eight high-risk categories and all borderline cases for deeper review.
Most organisations are surprised by this exercise. AI has been adopted function by function, team by team. Shadow AI deployments — tools adopted without central oversight — are common.
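One way to make the triage repeatable is a simple per-system record capturing ownership and Annex III flags. A minimal Python sketch; the field names and the example system are illustrative, not a prescribed format:

```python
# Minimal per-system triage record; category names mirror the eight Annex III
# headings above, and all identifiers here are illustrative.
from dataclasses import dataclass, field

ANNEX_III = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border",
    "justice_democracy",
}

@dataclass
class AISystemRecord:
    name: str
    owner: str
    decisions_influenced: str
    annex_iii_categories: set[str] = field(default_factory=set)

    @property
    def risk_tier(self) -> str:
        assert self.annex_iii_categories <= ANNEX_III
        return "high" if self.annex_iii_categories else "minimal_or_limited"

# Example: an internal CV screener falls under the employment category.
cv_tool = AISystemRecord("cv-screener", "HR", "candidate shortlisting",
                         {"employment"})
assert cv_tool.risk_tier == "high"
```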
April 2026 — Gap Analysis for Each High-Risk System
For every confirmed high-risk system, run a structured gap assessment against Articles 9–15:
- Is a risk management process documented and ongoing?
- Is there a data governance record covering provenance, bias testing, and quality?
- Does technical documentation exist at the level required by Annex IV?
- Are operational logs captured and retained appropriately?
- Is there a genuine human oversight mechanism — not just a review step, but the ability to override?
- Are accuracy and robustness metrics declared and validated?
Document every gap. Prioritise by compliance criticality and remediation effort.
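To keep the assessment auditable, the questions above can be tracked as a per-system checklist. A minimal sketch; the keys paraphrase the questions and are not official terminology:

```python
# Per-system gap checklist against Articles 9-15; keys paraphrase the
# questions above and are illustrative, not official terminology.
ARTICLE_CHECKS = {
    "art9_risk_management_ongoing": False,
    "art10_data_governance_and_bias_record": False,
    "art11_annex_iv_documentation": False,
    "art12_logs_captured_and_retained": False,
    "art14_genuine_human_override": False,
    "art15_accuracy_robustness_validated": False,
}

def open_gaps(checks: dict[str, bool]) -> list[str]:
    """List the unmet checks to prioritise for remediation."""
    return [name for name, done in checks.items() if not done]

print(open_gaps(ARTICLE_CHECKS))  # everything is open at the start
```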
May 2026 — Technical Remediation
Implement the technical measures to close gaps:
Bias testing: Integrate tools such as Fairlearn, IBM AI Fairness 360 (AIF360), or Responsible AI Toolbox into your model validation pipeline. Test for demographic parity, equalised odds, and use-case-specific fairness metrics before deployment and on a scheduled basis post-deployment.
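A minimal sketch of such a pre-deployment gate using Fairlearn; the 0.10 disparity threshold and variable names are illustrative choices, not values from the Act:

```python
# Pre-deployment fairness gate sketch using Fairlearn; the threshold is an
# illustrative assumption -- the Act does not prescribe numeric limits.
from fairlearn.metrics import (
    MetricFrame,
    demographic_parity_difference,
    equalized_odds_difference,
)
from sklearn.metrics import accuracy_score

def fairness_gate(y_true, y_pred, sensitive, max_disparity=0.10):
    """Fail the release if group disparity exceeds the chosen threshold."""
    dpd = demographic_parity_difference(
        y_true, y_pred, sensitive_features=sensitive
    )
    eod = equalized_odds_difference(
        y_true, y_pred, sensitive_features=sensitive
    )
    # Per-group accuracy documents where errors concentrate (Article 10).
    by_group = MetricFrame(
        metrics=accuracy_score,
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=sensitive,
    ).by_group
    report = {
        "demographic_parity_diff": dpd,
        "equalized_odds_diff": eod,
        "accuracy_by_group": by_group.to_dict(),
    }
    passed = dpd <= max_disparity and eod <= max_disparity
    return passed, report
```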
Explainability: Add SHAP (SHapley Additive exPlanations) or LIME to generate explanations for individual model outputs. This supports both Article 13 transparency and Article 14 human oversight.
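A minimal sketch of attaching a per-decision explanation with SHAP, assuming a fitted scikit-learn-style model; the helper name is illustrative and output shapes vary by model type:

```python
# Per-decision explanation sketch with SHAP; model, data, and helper name
# are illustrative. Output shapes vary by model type (e.g. per-class values
# for classifiers), so treat this as a starting point.
import shap

def explain_decision(model, X_background, x_row, feature_names):
    """Return the top feature contributions for one model output."""
    explainer = shap.Explainer(model, X_background)
    explanation = explainer(x_row)          # x_row: single-row 2D array
    contributions = sorted(
        zip(feature_names, explanation.values[0]),
        key=lambda kv: abs(kv[1]),
        reverse=True,
    )
    # Store the top contributions with the decision log (Article 12) and
    # surface them to the human reviewer (Article 14).
    return contributions[:5]
```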
Audit logging: Implement structured logging of model inputs, outputs, confidence scores, model version, and decision timestamps. Ensure logs are immutable, timestamped, and retained for the required periods.
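A minimal sketch of an append-only, hash-chained JSON log along these lines; the file sink and field names are illustrative assumptions — production systems would typically ship records to write-once storage:

```python
# Append-only decision log sketch; the file sink and field names are
# illustrative. Each record hashes the previous record so tampering with
# history is detectable.
import datetime
import hashlib
import json

def log_decision(inputs: dict, output, confidence: float,
                 model_version: str, prev_hash: str,
                 log_path: str = "decisions.log") -> str:
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
        "prev_hash": prev_hash,
    }
    entry["record_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["record_hash"]  # feed into the next call to chain records
```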
Human oversight workflows: Define and implement review and escalation processes for automated decisions with significant impact. Ensure oversight personnel receive adequate training on the system's capabilities and limitations.
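A minimal sketch of one common pattern — routing low-confidence or high-impact outputs to a human queue with override authority. The confidence threshold, impact labels, and in-memory queue are illustrative stand-ins for a real workflow:

```python
# Oversight routing sketch; threshold, impact labels, and the in-memory
# queue are illustrative stand-ins for a real review workflow.
from dataclasses import dataclass

HIGH_IMPACT = {"hiring", "credit", "termination"}
REVIEW_QUEUE: list["Decision"] = []

@dataclass
class Decision:
    output: str
    confidence: float
    impact: str  # e.g. "hiring"

def route(decision: Decision, min_confidence: float = 0.85) -> str:
    """Article 14: the reviewer must be able to override or disregard output."""
    if decision.impact in HIGH_IMPACT or decision.confidence < min_confidence:
        REVIEW_QUEUE.append(decision)
        return "pending_human_review"
    return "auto_approved"

print(route(Decision("reject", 0.91, "hiring")))  # pending_human_review
```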
June 2026 — Documentation and Conformity Assessment
- Prepare technical documentation to Annex IV standard for each high-risk system
- Conduct the internal conformity assessment — verify compliance against each Article, document findings
- Draft and sign the EU declaration of conformity
- Initiate EU AI database registration
Allow at least four weeks for this phase per high-risk system. Documentation is the most time-intensive step for teams that have not built a compliance-first workflow.
July–August 2026 — Validation and Go-Live Readiness
- Independent internal review of all documentation completeness
- Final bias testing and accuracy validation against declared metrics
- Incident response and serious incident reporting procedures in place and tested
- Post-market monitoring system activated
- Staff responsible for human oversight trained and briefed
- CE marking applied to systems and documentation
Frequently Asked Questions
Does the EU AI Act apply to companies outside the EU? Yes. If your AI system is placed on the EU market, or if the output is used within the EU, the Act applies — regardless of where your company is headquartered. A US company providing AI-powered HR screening to European employers is subject to the Act.
Are large language models subject to the EU AI Act? Yes. LLMs meeting the GPAI definition have been under obligation since August 2025. If used in a high-risk application such as CV screening or credit assessment, the full high-risk compliance regime applies to the application layer regardless of the underlying model's compliance status.
What counts as a serious incident under Article 73? An incident that directly or indirectly leads to death, serious harm to the health of persons, serious damage to property or the environment, or a serious infringement of fundamental rights. Providers must have a reporting process in place before their system goes live.
Can we self-certify compliance for all high-risk systems? For most Annex III categories, yes — self-assessment by the provider is the default conformity route. Third-party notified body involvement is required only for high-risk AI that are safety components in products subject to mandatory third-party certification under existing EU product safety legislation (Annex I).
What if our AI system is built on a third-party foundation model? You as the downstream provider are responsible for compliance at the system level. The foundation model provider's GPAI compliance covers the model layer. Your conformity assessment must cover data governance for fine-tuning data, the risk management system, logging, human oversight, and the technical documentation specific to your application.
How should we classify borderline cases? When in doubt, classify conservatively as high-risk. A system later found to be high-risk when treated as minimal-risk exposes you to the €15 million/3% penalty tier. Reclassifying from high-risk to minimal-risk after a thorough documented assessment carries no penalty.
How Hyperion Consulting Can Help
EU AI Act compliance is primarily an engineering and organisational challenge. The legal requirements translate into specific technical obligations — bias testing pipelines, explainability tooling, audit logging architectures, conformity assessment processes, and technical documentation workflows.
AI Strategy Sprint: 2–4 week engagement delivering a complete AI system inventory, Annex III risk classification, Articles 9–15 gap analysis, and a prioritised compliance roadmap.
Production AI Systems: Technical implementation of compliance requirements — bias testing pipelines, explainability integration, audit logging, human oversight architectures, and technical documentation to Annex IV standard.
The August 2026 deadline is fixed in law. Fines scale with revenue, not organisation size. Every week of preparation now is worth multiple weeks of remediation in July 2026.
This guide reflects Regulation (EU) 2024/1689 as published in the Official Journal of the European Union on July 12, 2024, and obligations in effect as of March 2026. It is provided for informational purposes only and does not constitute legal advice. For advice specific to your jurisdiction, consult qualified legal counsel.
