Lifecycle stage — Discover
You are somewhere in the gap between the last strategy document and the next one, and the last one has not aged well. The framework the previous consultant left behind reads like a 2023 artifact — a lot about 'transformation' and very little about the models, margins, and operational realities your 2026 plan has to account for. This is the Discover stage of the Hyperion Lifecycle at full enterprise scope: four weeks, embedded with your leadership team, producing a single reconciled document that covers strategy, business case, ROI, and 12-month execution. SMEs get a 2-week light version of the same engagement — same deliverable, smaller surface area, same rigor. I've written 11 articles on AI strategy frameworks for the Forbes Technology Council, shipped 8 AI ventures to production, and serve as French Government AI Ambassador for Finance & Business Digital Transformation. I bring a strategy that survives a board conversation because I've had those board conversations.
Every vendor has a framework; none of them hold up. Accenture's pyramid, McKinsey's horizons, BCG's diamond — each one makes the slides look coherent and falls apart the moment the board asks 'what does this commit us to in Q2?' The frameworks are built to sell the next engagement, not to anchor a commitment. Your strategy needs a spine specific enough that it constrains the next vendor pitch, not a framework generic enough that every vendor can claim alignment.
The ROI model fell apart in quarter one. The last model assumed LLM API pricing held, assumed adoption at 70% by month six, assumed the use cases identified in the workshop would all ship. Pricing moved, adoption landed at 23%, and two of the three use cases were killed by legal. The problem was not the math — it was that the model had no ranges, no sensitivities, and no kill criteria. A real ROI model tells you when to stop, not just what to hope for.
Strategy and roadmap are two documents that do not reconcile. The strategy was written by the CEO's office with the external consultant. The roadmap was written by the head of engineering with the AI lead. They use different taxonomies, they ladder up differently, and when the CFO asks how the strategy connects to next quarter's spend, nobody has a clean answer. Every board cycle someone spends a week trying to reconcile them, and the reconciliation never lasts.
The strategy tries to please everyone and commits to nothing. The board wants defensibility. The CFO wants a number. Engineering wants scope discipline. The AI team wants room to experiment. The strategy as currently written gestures at all of these and commits to none of them — which is how you end up with a document everyone signs off on and nobody uses. A useful strategy says no to things specifically, in writing, with the reasoning. Your current one does not.
The engagement runs in four one-week phases, each ending with a draft you can react to. I work on site for the first and last weeks and remain embedded remotely in between. The output is a single document with four views — board view, CFO view, execution view, technical view — all reconciled to the same underlying model.
Structured interviews with the CEO, CFO, CAIO, head of engineering, and two to three business unit leaders whose P&L will carry the AI commitment. I read every AI-related board deck from the past 18 months, every relevant vendor contract, and the current roadmap in whatever form it exists. By end of week one I produce a written diagnostic — where you are, what the previous strategy got right and wrong, and the three or four strategic questions the next document actually has to answer.
I draft the strategy spine — the commitments that hold regardless of which model ships next quarter. This includes the domains where AI is a competitive lever versus the domains where it is operational efficiency, the build-versus-buy-versus-partner boundary, the data and IP positions you will defend, and the explicit list of things this strategy says no to. End of week two: a 15-page strategy draft your leadership team reviews and redlines.
A quantified business case for each strategic commitment, with base case, downside case, and a set of sensitivities that actually matter — LLM pricing, adoption rate, integration cost, regulatory timing. Every number has a source. The model has explicit kill criteria: the conditions under which each initiative gets stopped rather than extended. This is what makes the ROI defensible twelve months from now when half the assumptions have moved.
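To make the shape of such a model concrete, here is a minimal sketch of a business case with a base case, a downside case, sourced assumptions, and explicit kill criteria. All names and figures below are illustrative placeholders, not the actual engagement model:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    name: str
    base: float
    downside: float
    source: str  # every number has a named source

# Illustrative assumptions for one initiative (hypothetical values)
assumptions = {
    "adoption_rate":     Assumption("adoption_rate", 0.45, 0.30, "pilot telemetry, Q3"),
    "llm_cost_per_user": Assumption("llm_cost_per_user", 120.0, 180.0, "vendor price sheet + buffer"),
    "integration_cost":  Assumption("integration_cost", 400_000, 450_000, "engineering estimate"),
}

USERS = 2_000
VALUE_PER_USER = 1_000.0  # finance-validated productivity value (hypothetical)

def annual_value(adoption_rate, llm_cost_per_user, integration_cost):
    # Net annual value = adoption-weighted benefit minus variable and fixed cost
    revenue = adoption_rate * USERS * VALUE_PER_USER
    cost = adoption_rate * USERS * llm_cost_per_user + integration_cost
    return revenue - cost

def case(which):
    a = {k: getattr(v, which) for k, v in assumptions.items()}
    return annual_value(a["adoption_rate"], a["llm_cost_per_user"], a["integration_cost"])

base, downside = case("base"), case("downside")

# Explicit kill criteria: stop the initiative rather than extend it if
# even the downside case is value-destroying, or if observed adoption
# misses a hard floor at month six.
KILL_IF_DOWNSIDE_BELOW = 0.0
ADOPTION_FLOOR_M6 = 0.15

def should_kill(observed_adoption_m6):
    return downside < KILL_IF_DOWNSIDE_BELOW or observed_adoption_m6 < ADOPTION_FLOOR_M6
```

The point of the structure is that the stop condition is written down before the first euro is spent, so the decision twelve months later is a lookup, not a negotiation.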
A 12-month execution plan that reconciles line-by-line to the strategy and the business case. Specific initiatives, owners, quarterly milestones, success metrics, and dependencies. I then run a full reconciliation pass — board view, CFO view, execution view, technical view — so the CFO's number matches the execution plan's scope matches the strategy's commitments. Week four ends with a 3-hour board-level readout where I walk the document with your leadership and take the hard questions.
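One way to make that reconciliation pass mechanical rather than a week of manual effort is to key every view to one underlying model and assert consistency. A hypothetical sketch, with illustrative initiative names and figures:

```python
# Single underlying model: each execution-plan initiative is keyed to
# a strategy commitment, with its budget and owner. (Illustrative data.)
initiatives = [
    {"commitment": "support-automation", "budget": 1_200_000, "owner": "Head of CX"},
    {"commitment": "support-automation", "budget": 300_000,   "owner": "Engineering"},
    {"commitment": "pricing-models",     "budget": 800_000,   "owner": "Head of Revenue"},
]

strategy_commitments = {"support-automation", "pricing-models"}
cfo_budget_line = 2_300_000  # the single number in the CFO view

def reconcile():
    """Return a list of reconciliation failures; empty means the views agree."""
    errors = []
    # Every initiative must ladder up to a strategy commitment
    for i in initiatives:
        if i["commitment"] not in strategy_commitments:
            errors.append(f"orphan initiative: {i['commitment']}")
    # Every commitment must have funded work behind it
    funded = {i["commitment"] for i in initiatives}
    for c in strategy_commitments - funded:
        errors.append(f"unfunded commitment: {c}")
    # The CFO view must equal the execution plan's total scope
    total = sum(i["budget"] for i in initiatives)
    if total != cfo_budget_line:
        errors.append(f"CFO view {cfo_budget_line} != execution total {total}")
    return errors
```

When the strategy, the budget, and the roadmap share one source of truth, the check above is the difference between a reconciliation that lasts and one that decays by the next board cycle.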
CEOs and CAIOs at companies with more than €50M in revenue preparing a Q+2 AI commitment that will be debated at the board. CFOs who need a defensible ROI model for an AI budget line above €2M. Public sector leaders who need a strategy document that survives procurement review and political scrutiny. SMEs can commission the 2-week light version — same methodology, smaller surface area, same deliverable quality. This is not for startups without revenue or without a current AI footprint to strategize around — those organizations need the Readiness Audit first, which is a precondition for a useful strategy.
Usually you wouldn't, but the fact that you're asking suggests the last one didn't hold. The big-four AI strategy is typically produced by analysts who have not shipped AI to production; the frameworks look rigorous and fall apart under operational pressure. If your existing strategy is surviving board review and reconciling to your actual spend, you do not need this engagement. If it is being rewritten every quarter and nobody is sure what it commits you to, that is the problem this engagement is built to fix.
Same methodology, smaller surface area. For an SME, there are typically 2-3 strategic questions that matter rather than 8-10. The business case has fewer initiatives. The reconciliation across views is simpler because there are fewer stakeholders. The deliverable format is identical — strategy, business case, ROI model, execution plan — but compressed because the scope is smaller. The one thing I do not compress is the rigor of the ROI model; it takes what it takes to be defensible.
Then we start from it rather than from a blank page, and the engagement becomes a rigorous pressure-test and rewrite rather than a greenfield draft. That usually shortens week two by a few days, which we use on deeper business-case work. Bring the draft to the first interview; I will tell you honestly on day two whether it's a starting point worth keeping or whether we're better off starting fresh.
Yes, and I often do. My engagement has a specific scope — strategy, business case, ROI, execution plan — and your existing firm may be running transformation, change management, or vendor selection work that continues in parallel. I will not try to expand into their scope, and I expect them not to expand into mine. We meet once a week during the engagement so the work products reconcile. This has worked well with the big four and with specialist AI firms.
Explore other services that complement this offering
30 minutes. I diagnose your situation, tell you honestly whether this service fits — and if it doesn't, what does.