Part of the DEPLOY Method — Yield phase
The EU AI Act is now enforceable, and the penalties are not symbolic. Deploying a prohibited AI system carries fines up to €35 million or 7% of global annual turnover, whichever is higher. Non-compliance on high-risk systems reaches €15 million or 3% of turnover. The legal text runs 113 articles and 13 annexes, and every enterprise I speak to has underestimated one of three things — which of their systems are actually in scope, how much technical documentation Annex IV actually requires, or how long the conformity assessment takes when the notified body sends its first list of clarifications. This is the Yield phase of the DEPLOY Method applied to the most consequential AI regulation of our decade. I serve as French Government AI Ambassador for Finance and Business Digital Transformation, which means I have spent the last several years inside the policy side of exactly this regulation — not reading the summaries, reading the text, with the people who wrote it. That is the perspective this engagement brings to your compliance program.
You do not know which of your systems are in scope. The AI Act defines prohibited systems, high-risk systems, limited-risk systems, and minimal-risk systems, and the classification turns on the use case — not the technology. A recommendation model in marketing is probably limited-risk; the same model class used for credit scoring is high-risk. A biometric categorization system used in retail may be prohibited outright. Most organizations have no written inventory of their AI systems keyed to the AI Act's risk classes, which means every enforcement question starts with discovery work that should already have been done.
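To make the point concrete, here is a deliberately simplified sketch. The use-case labels and the classify function are illustrative placeholders of mine, not a legal determination and not the regulation's own terminology; the real classification is made against the article text and documented with references. The point it shows is that the same model class lands in different risk classes depending on where it is deployed:

```python
# Illustrative sketch only: a simplified mapping from use case to AI Act risk class.
# The use-case labels and the classify() logic are placeholders, not legal advice.

ANNEX_III_USE_CASES = {          # abbreviated, hypothetical subset of Annex III areas
    "credit_scoring",
    "employment_screening",
    "education_admission",
    "essential_services_access",
}

PROHIBITED_USE_CASES = {         # abbreviated, hypothetical subset of Article 5 practices
    "social_scoring",
    "biometric_categorisation_sensitive_attributes",
}

def classify(use_case: str) -> str:
    """Return the risk class for a deployment context, not for a model architecture."""
    if use_case in PROHIBITED_USE_CASES:
        return "prohibited"
    if use_case in ANNEX_III_USE_CASES:
        return "high-risk"
    if use_case in {"chatbot", "content_recommendation"}:
        return "limited-risk"   # transparency obligations
    return "minimal-risk"

# The same recommendation model, two very different classifications:
print(classify("content_recommendation"))  # limited-risk when used in marketing
print(classify("credit_scoring"))          # high-risk when used for creditworthiness
```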
Your conformity assessment timeline is wrong. Teams scope it as an audit — a few weeks of paperwork. In practice, a Chapter III conformity assessment for a high-risk system is a months-long engagement involving a full risk management system, data governance documentation, technical documentation per Annex IV, post-market monitoring plans, human oversight design, a notified body where the assessment route requires one, and remediation of every gap surfaced during review. The first formal submission is rarely the last. Teams that planned for eight weeks find themselves eight months in and still negotiating remediations.
The Annex IV technical documentation is genuinely harder than it looks. It requires a general description of the system, detailed design, training methodology, data governance, evaluation metrics, known limitations, risk management measures, human oversight measures, accuracy specifications, and the changes log. Most internal engineering documentation does not survive contact with Annex IV — it answers different questions, at a different level of rigor, for a different audience. Rewriting to the Annex IV standard is not a formatting exercise. It is an engineering documentation exercise your team has not done before.
Post-market monitoring is almost always the gap. The Act requires a written post-market monitoring plan, active monitoring in production, and reporting of serious incidents within tight deadlines. Most organizations have reactive incident response, not a proactive AI-specific monitoring program. The first serious incident after enforcement begins exposes the absence, and the regulatory response is structurally unforgiving of organizations that did not plan for this. Post-market monitoring is the difference between a compliant program and a program on paper.
The engagement scope depends on the size of your AI footprint and the number of high-risk systems in scope. Twelve weeks covers a single high-risk system; twenty-four weeks covers a portfolio of three to five systems with shared governance infrastructure. I work embedded with your legal, compliance, and engineering teams — your teams do the work, I bring the regulatory reading and the pattern recognition from the policy side.
We build a written inventory of every AI system in your organization — production, pilot, prototype — with each one classified against the AI Act's risk categories. Prohibited, high-risk per Annex III, limited-risk with transparency obligations, or minimal-risk. The classification is documented with the reasoning and the article references. Systems that were deployed without classification get retroactive review. By end of week four you have the single document that every subsequent compliance decision references.
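As a minimal sketch of what one inventory entry captures — the field names below are my own illustration, not a schema the regulation mandates — each record keeps the classification, the reasoning, and the article references together in one place:

```python
# Illustrative inventory record for one AI system; field names are assumptions,
# not a format prescribed by the AI Act.
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    owner: str
    lifecycle_stage: str           # "production" | "pilot" | "prototype"
    use_case: str                  # the deployment context that drives classification
    risk_class: str                # "prohibited" | "high-risk" | "limited-risk" | "minimal-risk"
    classification_rationale: str  # the written reasoning, kept with the decision
    article_references: list[str] = field(default_factory=list)
    classified_on: str = ""        # date of the classification decision
    reviewed_retroactively: bool = False  # True for systems deployed before classification

# Hypothetical example entry for a credit-scoring system.
example = AISystemRecord(
    name="applicant-scoring-v2",
    owner="consumer-credit",
    lifecycle_stage="production",
    use_case="credit_scoring",
    risk_class="high-risk",
    classification_rationale="Creditworthiness evaluation of natural persons.",
    article_references=["Annex III, point 5(b)", "Article 6(2)"],
    classified_on="2025-03-12",
    reviewed_retroactively=True,
)
```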
For each high-risk system we build the compliance artifacts in parallel: the risk management system, the data governance documentation, the Annex IV technical documentation, the human oversight design, the accuracy and robustness specifications. This is the heaviest phase and where most programs underestimate the effort. I work with your engineering teams to rewrite internal documentation to the Annex IV standard, not to invent it from nothing. For systems requiring a notified body, we prepare the submission package and the response playbook for the clarifications that will follow.
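One way I make the documentation rewrite tractable is to track each Annex IV section as an explicit gap list against what your internal docs already answer. The section labels below paraphrase the Annex IV headings and the status convention is my own, not part of the regulation:

```python
# Illustrative Annex IV gap tracker; the section labels paraphrase Annex IV headings
# and the status values are our own working convention, not regulatory language.

ANNEX_IV_SECTIONS = [
    "general description of the system",
    "detailed design and development process",
    "training methodology and training data",
    "data governance measures",
    "evaluation metrics and results",
    "known limitations and foreseeable misuse",
    "risk management measures",
    "human oversight measures",
    "accuracy, robustness and cybersecurity specifications",
    "log of changes over the system lifecycle",
]

def gap_report(existing_docs: dict[str, str]) -> dict[str, str]:
    """Map each Annex IV section to 'covered', 'partial', or 'missing'
    based on what internal engineering documentation already answers."""
    return {
        section: existing_docs.get(section, "missing")
        for section in ANNEX_IV_SECTIONS
    }

# Typical starting point: design and metrics are documented, oversight and changes are not.
print(gap_report({
    "detailed design and development process": "covered",
    "evaluation metrics and results": "partial",
}))
```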
We stand up the post-market monitoring program — the written plan, the active monitoring in production, the incident classification criteria, the reporting workflows with the timeline compliance the regulation requires. The monitoring integrates with the observability stack you already have, extended to capture the AI-specific signals that matter: accuracy drift, demographic performance drift, adverse impact monitoring, user harm signals. Incident response runbooks are written to match the regulation's notification deadlines, not guessed at when the first incident happens.
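To give a sense of what the AI-specific extension to your observability stack looks like, here is a minimal sketch of one evaluation-window check. The thresholds, metric names, and escalation wording are assumptions for illustration; the real values come from your accuracy specifications and the incident classification criteria we write:

```python
# Illustrative post-market monitoring check; thresholds and metric names are
# placeholders, not values taken from the regulation or from your specifications.

BASELINE_ACCURACY = 0.91          # hypothetical value from the technical documentation
ACCURACY_DRIFT_THRESHOLD = 0.03   # hypothetical tolerance before escalation
DEMOGRAPHIC_GAP_THRESHOLD = 0.05  # hypothetical tolerance for gaps between groups

def evaluate_window(accuracy: float, group_accuracies: dict[str, float]) -> list[str]:
    """Return the monitoring findings for one evaluation window in production."""
    findings = []
    if BASELINE_ACCURACY - accuracy > ACCURACY_DRIFT_THRESHOLD:
        findings.append("accuracy drift beyond tolerance: review and classify")
    gap = max(group_accuracies.values()) - min(group_accuracies.values())
    if gap > DEMOGRAPHIC_GAP_THRESHOLD:
        findings.append("demographic performance gap beyond tolerance: adverse impact review")
    return findings

# Example window: overall accuracy has slipped and one group lags the others.
print(evaluate_window(0.86, {"group_a": 0.90, "group_b": 0.82}))
# Findings feed the incident classification criteria, which decide whether the
# regulation's serious-incident reporting deadlines apply.
```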
We build the governance infrastructure for the long term — the AI governance committee charter, the intake process for new AI systems, the recurring review cycle, the training program for staff interacting with high-risk systems, the vendor management approach for third-party AI components. The program needs to run without me once the engagement ends. I produce the playbooks, the templates, and the decision log so your next high-risk system runs through a repeatable process rather than repeating the discovery work from week one.
Enterprises operating in or into the EU with high-risk AI systems as defined by Annex III — credit scoring, employment, education, essential services, law enforcement, migration, administration of justice, and the other categories the regulation names specifically. Organizations with an AI footprint large enough that classification alone is a multi-week discovery exercise. Public sector buyers and deployers who will be scrutinized by their own oversight bodies in addition to the regulator. This is not for organizations whose AI usage is entirely outside Annex III scope — a limited-risk transparency obligation is a much smaller engagement than a full conformity assessment. It is also not a substitute for external legal counsel on regulatory strategy; I engineer the compliance program, and your general counsel or outside firm handles the legal positioning.
Your law firm tells you what the regulation requires; I build the program that implements it. Those are complementary, not competing. Most organizations find the legal advice is clear and the implementation gap is enormous — the law firm cannot write your Annex IV technical documentation, design your human oversight controls, or stand up your post-market monitoring pipeline. That is engineering and program management, which is what this engagement delivers. I work alongside external counsel; they own the legal strategy, I own the operational program.
The staged enforcement timeline is legislated. Prohibitions have been enforceable since February 2025, general-purpose AI rules since August 2025, and the high-risk obligations land in August 2026 with the remaining provisions following in 2027. Specific guidance from notified bodies and the European AI Office continues to evolve, which affects the details of how conformity is demonstrated — not whether it is required. Any program scoped for the 2026 high-risk deadline needs to be in implementation now, not in planning, because the conformity assessment timelines themselves run into months once a notified body is involved.
Document the classification reasoning at the time the decision is made, grounded in the article references, and be prepared to defend it. A defensible limited-risk classification with a written rationale is a much stronger posture than a system classified informally or not at all. Where the classification is genuinely ambiguous, we classify conservatively — the cost of preparing a high-risk system that turns out to be limited-risk is small; the cost of deploying a limited-risk classification on a system that is high-risk is the penalties named at the top of this page. I make that trade-off explicit in the inventory document, not hidden in a spreadsheet.
If your AI system's output is used in the EU, yes, the regulation applies to you as a provider or deployer regardless of where you are incorporated. This catches most US SaaS companies with European customers, which is often an unwelcome discovery. The practical answer is to scope the program to the systems with EU exposure and run the compliance work against those — a footprint that is often smaller than your full AI portfolio, which reduces the timeline and cost significantly compared to a full-program rollout.
GDPR covers personal data processing; the AI Act covers AI system safety, transparency, and risk management. They overlap in the data governance section — your AI Act data governance documentation will reference and extend your GDPR DPIAs where personal data is involved. In practice, the two programs should share the same data inventory and the same governance committee, because the same engineering systems are in scope under both regulations. A compliance program that runs GDPR and AI Act as parallel silos duplicates work and produces inconsistencies that auditors notice.
30 minutes. I diagnose your situation, tell you honestly whether this service fits — and if it doesn't, what does.