August 2, 2026. That's when EU AI Act enforcement begins for high-risk AI systems. Fines: up to €35 million or 7% of global turnover, whichever is higher. Prohibited AI practices have been banned since February 2025. Do you even know which of your AI systems are high-risk? Most organizations don't. That's Compliance Blindness — and it's the most expensive risk you're ignoring.
You have no inventory of AI systems. Biometric access control, hiring screening tools, credit scoring models, critical infrastructure AI — any of these makes you high-risk under EU AI Act Article 6. And you haven't checked.
Five documentation requirements are now law: technical documentation, conformity assessments, human oversight procedures, data governance records, and incident reporting protocols. Nobody in your organization owns any of them.
Legal says it's a Tech problem. Tech says it's a Legal problem. Meanwhile, EU AI Act enforcement begins August 2, 2026, and you haven't assigned a single responsible person.
Your AI vendors claim compliance. You can't verify it. Under the EU AI Act, deployer obligations exist independently of provider obligations. You are liable for what you deploy — regardless of what your vendor tells you.
EU AI Act compliance consulting that moves from assessment to audit-ready in weeks, not quarters. The same framework I used to build Aegis AI — a purpose-built compliance engine for EU AI Act and GDPR. Practical governance that integrates with your existing processes, not binder-filling exercises.
Complete inventory of every AI system — internal builds, vendor tools, embedded models. Classify each under EU AI Act Article 6: high-risk (biometric systems, hiring AI, credit scoring, critical infrastructure AI), limited-risk, or minimal-risk. You get clarity in 2 weeks.
Prioritized action plan mapping each high-risk system to its 5 documentation requirements: technical documentation, conformity assessment, human oversight, data governance, and incident reporting. Every requirement gets an owner and a deadline.
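The mapping above can be sketched as a simple data structure — one action item per requirement, per high-risk system, each with an owner and a deadline. The requirement names follow the text; the field layout and example values are illustrative, not the actual Aegis AI schema.

```python
# Sketch: action plan mapping each high-risk system to its five
# documentation requirements, each with an owner and a deadline.
# Structure and example values are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

REQUIREMENTS = [
    "technical_documentation",
    "conformity_assessment",
    "human_oversight",
    "data_governance",
    "incident_reporting",
]

@dataclass
class ActionItem:
    system: str
    requirement: str
    owner: str
    deadline: date

def action_plan(system: str, owner: str, deadline: date) -> list[ActionItem]:
    """One action item per documentation requirement for a high-risk system."""
    return [ActionItem(system, r, owner, deadline) for r in REQUIREMENTS]

# Hypothetical example: a hiring-screening system, owned by a compliance lead,
# due before the enforcement date.
plan = action_plan("hiring_screening_ai", "compliance_lead", date(2026, 8, 2))
```

The point of the structure: no requirement exists without a named owner and a date, which is what makes the plan auditable.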
Build the documentation, configure the monitoring, train the teams. Conformity assessments for each high-risk system. Human oversight procedures that work in practice. Data governance that satisfies both EU AI Act and GDPR.
Governance framework locked in: policies, processes, roles, continuous monitoring. Your organization can demonstrate compliance to any auditor, regulator, or board member on demand.
Aligned with ISO 42001 AI Management System standards and EU data sovereignty requirements. GOVERN treats EU AI Act compliance as an engineering problem, not a legal checkbox exercise. Every deliverable maps directly to a regulatory obligation under Article 6 high-risk classification. Mohammed built Aegis AI — a full compliance engine for EU AI Act risk classification, obligation extraction, and audit-ready reporting — using this exact methodology.
You deploy AI in the EU market or serve EU citizens. You have multiple AI systems and no centralized inventory. Your board is asking about EU AI Act compliance and nobody has answers. You use AI in hiring, credit scoring, biometrics, or critical infrastructure — any of which triggers high-risk classification. You want audit-ready compliance before August 2, 2026, not theoretical frameworks gathering dust.
Yes. The EU AI Act has extraterritorial reach. If your AI system's output affects anyone in the EU — even if your company is headquartered in New York, Singapore, or Dubai — you fall under its jurisdiction. Same extraterritorial principle as GDPR. Mohammed Cherifi, an EU AI Act compliance consultant based in Paris, advises organizations across Europe, North America, and Asia-Pacific on meeting these cross-border obligations.
High-risk classification under EU AI Act Article 6 depends on what your AI does, not how it works. Biometric identification systems, AI used in hiring decisions, credit scoring models, AI in education, healthcare diagnostics, and critical infrastructure management — all high-risk. The test is use case, not technology. Mohammed conducts systematic AI inventories to identify exactly which of your systems require conformity assessment and which get lighter governance.
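The use-case test above can be sketched as a lookup: classification depends on what the system is used for, not on the model architecture. The category names and the tier mapping below are illustrative shorthand, not a legal taxonomy.

```python
# Minimal sketch: EU AI Act risk classification by use case, not technology.
# Category names and tier assignments are illustrative assumptions.

HIGH_RISK_USE_CASES = {
    "biometric_identification",
    "hiring_screening",
    "credit_scoring",
    "education_assessment",
    "healthcare_diagnostics",
    "critical_infrastructure",
}

def classify(use_case: str) -> str:
    """Return an illustrative risk tier for an AI system's use case."""
    if use_case in HIGH_RISK_USE_CASES:
        return "high-risk"       # triggers conformity assessment (Article 6)
    if use_case in {"chatbot", "content_generation"}:
        return "limited-risk"    # transparency obligations
    return "minimal-risk"        # lighter governance

print(classify("hiring_screening"))  # high-risk
```

Note that the same underlying model lands in different tiers depending on deployment: a language model used for internal drafting is minimal-risk; the same model screening job applicants is high-risk.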
Three dates matter. February 2, 2025: prohibited AI practices already banned (social scoring, real-time biometric surveillance with limited exceptions). August 2, 2025: general-purpose AI model obligations apply. August 2, 2026: high-risk system requirements fully enforceable. You have months, not years. Starting now gives you time for inventory, classification, and documentation. Starting in June 2026 means emergency scrambling and gaps.
No. The EU AI Act separates provider obligations from deployer obligations. Your vendor must ensure their system meets technical standards. You must ensure proper use, human oversight, monitoring, and incident reporting. If your vendor's hiring AI discriminates and you deployed it without oversight procedures, you are liable. Mohammed helps deployers map their specific obligations independently of vendor claims.
Three tiers. Up to €35 million or 7% of global annual turnover for prohibited AI practices. Up to €15 million or 3% for high-risk system violations. Up to €7.5 million or 1.5% for providing incorrect information to authorities. The regulation uses whichever amount is higher. For SMEs, fines are proportionally reduced but still material. These penalties are designed to make non-compliance more expensive than compliance.
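The "whichever is higher" rule means the effective cap scales with company size. A minimal sketch, using the tier caps stated above; the turnover figures are illustrative:

```python
# Sketch of the "whichever is higher" penalty rule. Tier caps follow the
# Act's penalty structure; turnover figures are illustrative assumptions.

def max_fine(tier_cap_eur: float, turnover_pct: float,
             annual_turnover_eur: float) -> float:
    """Higher of the fixed cap or the percentage of global annual turnover."""
    return max(tier_cap_eur, turnover_pct * annual_turnover_eur)

# Prohibited-practice tier for a hypothetical company with €2bn turnover:
fine = max_fine(35_000_000, 0.07, 2_000_000_000)
# 7% of €2bn is €140m, which exceeds the €35m fixed cap
```

For a small company, the fixed cap dominates; for a large one, the percentage does — which is why the exposure grows with the organization, not with the size of the AI system.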
Two variables: how many AI systems you operate and whether you have existing governance. Organizations with ISO 42001 or similar frameworks typically reach audit-ready status in 3-4 months. Organizations starting from zero need 6-9 months. The biggest time sinks are AI system inventory (most companies undercount by 40-60%), conformity assessments for each high-risk system, and building human oversight procedures that work in practice. Hyperion Consulting completes the assessment sprint in 2 weeks.
ISO 42001 is the international standard for AI Management Systems. It covers risk management, governance, and responsible AI development. While it's voluntary and the EU AI Act is law, organizations with ISO 42001 certification have 60-70% of the governance infrastructure already in place. The GOVERN Framework aligns with ISO 42001 so you can pursue certification alongside regulatory compliance — two outcomes from one investment.
If your AI models are trained on EU citizen data but hosted on US or Chinese cloud infrastructure, you face overlapping regulatory exposure under both EU AI Act and GDPR. Sovereign AI deployment — on-premise, EU-hosted cloud, or hybrid architectures — reduces this risk. Mohammed evaluates sovereign AI options that keep data and AI processing within jurisdictional boundaries without sacrificing inference speed or cost efficiency.
Explore other services that complement this offering
Let's discuss how this service can address your specific challenges and drive real results.