A structured facilitation worksheet for leadership teams planning their first — or next — AI initiative. The 6Ws framework (Who, What, When, Where, Why, hoW) provides a complete scaffold for strategic alignment, use-case selection, and execution planning in a single workshop session.
The 6Ws is a structured strategy worksheet designed to help executive teams and AI programme leads move from vague ambition to executable roadmap in a single facilitated session. It borrows from the classic journalistic framework (Who, What, When, Where, Why) and adds a critical sixth dimension: hoW — the technical and organisational implementation approach.
Most AI strategy failures are not technical failures. They are alignment failures: the right people were not consulted, the wrong use cases were selected, the regulatory environment was not mapped, or success was never defined. The 6Ws forces every one of these questions onto the table before a single line of code is written or a vendor contract is signed.
```mermaid
flowchart TD
    WHY["🎯 WHY\nStrategic Rationale\n& Business Case"]
    WHO["👥 WHO\nStakeholders\nGovernance"]
    WHAT["💡 WHAT\nUse Cases\nCapabilities"]
    WHEN["📅 WHEN\nTimeline\nMilestones"]
    WHERE["🌍 WHERE\nDeployment\nRegulatory Zones"]
    HOW["⚙️ HOW\nTech Approach\nVendors"]

    WHY --> WHO
    WHY --> WHAT
    WHO --> WHAT
    WHO --> HOW
    WHAT --> WHEN
    WHAT --> WHERE
    WHAT --> HOW
    WHEN --> HOW
    WHERE --> HOW
    HOW -->|"Iteration &\nValidation"| WHY

    style WHY fill:#6366f1,stroke:#4f46e5,color:#fff
    style WHO fill:#0ea5e9,stroke:#0284c7,color:#fff
    style WHAT fill:#10b981,stroke:#059669,color:#fff
    style WHEN fill:#f59e0b,stroke:#d97706,color:#fff
    style WHERE fill:#ef4444,stroke:#dc2626,color:#fff
    style HOW fill:#8b5cf6,stroke:#7c3aed,color:#fff
```
The WHY dimension anchors everything: it determines WHO should be involved and WHAT should be built. The HOW closes the loop, feeding technical constraints and newly discovered capabilities back into a refined WHY.
Every AI initiative succeeds or fails on people, not technology. The WHO dimension maps the complete stakeholder landscape: those who will champion the initiative, those who might resist it, and the governance structures needed to sustain it.
- Who is the named executive sponsor, and what is their stake in this initiative succeeding?
- Which business unit will be most affected by this AI system in the first 12 months?
- Who controls the data assets this initiative requires? Have they been consulted?
- List three stakeholders who are likely to resist this initiative, and note the primary concern of each.
- What governance body will own ongoing decisions about model updates, data access, and risk thresholds?
- Who will communicate about this initiative externally (customers, regulators, press)?
The WHAT dimension is where strategic intent becomes concrete. It forces teams to move from “we want to use AI” to “we will automate X process using Y capability, and it requires Z data.” This is where most teams underinvest — and where the most value is created.
| Use Case | AI Capability Needed | Data Required | Priority Score |
|---|---|---|---|
| Customer support deflection | Conversational AI / RAG | Past tickets, product docs | High (9/10) |
| Contract clause extraction | Document AI / NLP | Historical contracts (labeled) | High (8/10) |
| Demand forecasting | Time-series prediction | Sales history, market signals | Medium (6/10) |
| Code review assistance | Code LLM / static analysis | Codebase, style guide | Medium (5/10) |
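One way to make the Priority Score column reproducible rather than a gut call is to score each candidate on value and feasibility and rank by a combined score. A minimal sketch, assuming a 1–10 scale on each axis (the candidate names mirror the table above; the scores themselves are illustrative):

```python
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    value: int        # estimated business value, 1-10
    feasibility: int  # data readiness + team capability, 1-10

    @property
    def score(self) -> float:
        # Geometric mean penalises candidates that are strong on only
        # one axis (high value but infeasible, or vice versa).
        return (self.value * self.feasibility) ** 0.5


candidates = [
    UseCase("Customer support deflection", value=9, feasibility=9),
    UseCase("Contract clause extraction", value=8, feasibility=8),
    UseCase("Demand forecasting", value=7, feasibility=5),
    UseCase("Code review assistance", value=5, feasibility=6),
]

for uc in sorted(candidates, key=lambda u: u.score, reverse=True):
    print(f"{uc.name}: {uc.score:.1f}")
```

The geometric mean is one defensible choice among several; a weighted sum works too, as long as the scoring rule is agreed before the workshop so vendor demos cannot skew it afterwards.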
- List the top 5 business processes that are manual, repetitive, or data-intensive in your organisation.
- For each use case candidate, what is the estimated annual cost of doing it manually?
- What structured or unstructured data exists today that could train or ground an AI system?
- What is the minimum viable output quality needed for business adoption? How will you measure it?
- Which use cases require human-in-the-loop oversight, and which can be fully automated?
- What AI capabilities does your team currently have internally, and what must be sourced externally?
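The quality-bar question above is easiest to answer if the metric is defined before the pilot starts. A minimal sketch computing per-field exact-match accuracy against a hand-labelled sample — the field schema and values here are illustrative, not prescribed by the worksheet:

```python
def field_accuracy(predictions: list[dict], labels: list[dict]) -> dict:
    """Per-field exact-match accuracy against a hand-labelled sample.
    The adoption threshold (e.g. 95% on critical fields) should be
    agreed with the business owner before the pilot begins."""
    fields = labels[0].keys()
    correct = {f: 0 for f in fields}
    for pred, gold in zip(predictions, labels):
        for f in fields:
            correct[f] += pred.get(f) == gold[f]
    return {f: correct[f] / len(labels) for f in fields}


# Illustrative extraction output vs hand-labelled ground truth.
preds = [{"weight": "120kg", "hs_code": "8471.30"},
         {"weight": "45kg",  "hs_code": "9403.20"}]
golds = [{"weight": "120kg", "hs_code": "8471.30"},
         {"weight": "45kg",  "hs_code": "9403.10"}]

print(field_accuracy(preds, golds))  # weight: 1.0, hs_code: 0.5
```

Exact match is the simplest possible metric; fuzzier tasks (summaries, conversations) need human-rated rubrics instead, but the principle — baseline first, threshold agreed up front — is the same.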
AI timelines are notoriously optimistic. The WHEN dimension forces realistic phasing that accounts for data readiness, infrastructure preparation, change management, and the fact that AI systems improve iteratively — not linearly.
- What is the board-level deadline (if any) driving this initiative? Is it realistic?
- What data preparation work must be complete before model training can begin?
- What other transformation programmes is this competing with for team bandwidth?
- What is your rollback plan if the pilot fails to meet quality thresholds?
- When do you need to see ROI to maintain organisational support for the programme?
WHERE encompasses deployment environments, geographic footprint, and regulatory zones. For European organisations especially, this dimension can rewrite the HOW entirely: EU AI Act compliance, GDPR data residency requirements, and sector-specific regulations (finance, health, critical infrastructure) all constrain technology choices before they are made.
| Dimension | Questions to Answer | Impact on HOW |
|---|---|---|
| Data Residency | Where must training and inference data reside? | May exclude hyperscaler APIs; requires on-prem or EU-hosted infra |
| EU AI Act Risk Tier | Is this a high-risk AI system under Annex III? | Requires conformity assessment, human oversight, audit logging |
| User Geography | In which countries will end users access this system? | Localisation, language model selection, latency requirements |
| Sector Regulation | Finance, health, critical infra? Which body governs? | Model explainability, bias auditing, approval gates |
- In which countries will this AI system operate, and which data protection frameworks apply?
- Does personal data used in training or inference need to remain within a specific jurisdiction?
- Has legal reviewed the EU AI Act risk classification for this use case?
- What deployment environment is required: public cloud, private cloud, or on-premises?
- Are there sector-specific regulators (FCA, EBA, EMA) who must be consulted or notified?
The WHY is the anchor for the entire strategy. It must be sharp enough to survive the question: “Why AI, and why now?” Vague answers (“to stay competitive”, “because the CEO read an article”) will collapse under cost pressure. Specific answers — with baselines, targets, and accountable owners — will sustain a multi-year programme.
- What specific business outcome will improve, and by how much, if this initiative succeeds?
- What is the cost of inaction — what happens if you do not pursue this in the next 12 months?
- What is the current baseline for your primary success metric? (If you cannot answer this, that is the first problem to solve.)
- Who is accountable for achieving the stated business outcome — not the AI system, but the outcome?
- What is the minimum ROI threshold the board will accept to continue investment past Phase 1?
The HOW dimension translates strategic intent into technical and organisational choices. The central decision is build vs buy vs partner — and it should be driven by where you have, or intend to build, a genuine competitive differentiator.
| Approach | Best When | Risk | Speed to Value |
|---|---|---|---|
| Buy (SaaS AI product) | Non-differentiating process, proven category | Vendor lock-in, data sharing | Fast (weeks) |
| API (foundation model) | Custom logic needed, no model training capacity | Cost at scale, API dependency | Medium (months) |
| Fine-tune open model | Domain-specific quality required, data available | Ops burden, retraining cadence | Medium (months) |
| Build (train from scratch) | Core IP, unique data advantage, scale justifies cost | Very high cost, talent scarcity | Slow (12–24 months) |
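The Risk and Speed to Value columns only become comparable once each option is costed over the same horizon. A minimal three-year TCO sketch — every figure below is illustrative and should be replaced with real vendor quotes and internal cost estimates:

```python
def three_year_tco(upfront: float, annual_run: float,
                   annual_growth: float = 0.0) -> float:
    """Total cost of ownership over 3 years: one-off build/setup cost
    plus run costs, optionally growing year on year (e.g. API usage
    scaling with document volume)."""
    run = sum(annual_run * (1 + annual_growth) ** year for year in range(3))
    return upfront + run


# Illustrative figures in EUR -- replace with your own estimates.
options = {
    "Buy (SaaS)":       three_year_tco(upfront=20_000, annual_run=120_000),
    "API (foundation)": three_year_tco(upfront=80_000, annual_run=60_000,
                                       annual_growth=0.3),
    "Fine-tune open":   three_year_tco(upfront=250_000, annual_run=90_000),
}

for name, cost in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"{name}: EUR {cost:,.0f}")
```

Note how the growth parameter matters: an API option that looks cheap in year one can overtake a SaaS licence once inference volume scales, which is exactly the "cost at scale" risk flagged in the table.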
- Is AI a strategic differentiator in this use case, or a commodity capability? (This determines build vs buy.)
- What is your current ML engineering and data science capacity — can your team own a model in production?
- Which vendors have you evaluated? Have you run a structured proof-of-concept with real data?
- What does your MLOps and model monitoring infrastructure look like today?
- How will you manage model drift, retraining, and version control over the product lifecycle?
- What is the total cost of ownership (not just licensing) over 3 years for your preferred approach?
The following shows how a mid-size European logistics company (1,200 employees, operating in 8 EU countries) applied the 6Ws framework to launch an AI-powered freight document processing system.
**WHY:** Reduce freight document processing time from 45 min/shipment to under 10 min. Current cost: €1.2M/year in manual processing labour. Target: 70% reduction in 12 months. Owner: CFO.

**WHO:** Sponsor: CFO. Champions: Head of Operations, IT Director. Resistors: Document processing team (25 FTEs, consulted via works council). Governance: AI Steering Committee (CEO, CFO, COO, DPO).

**WHAT:** Extract structured data (shipper, receiver, weight, HS code) from CMR waybills, bills of lading, and customs declarations. Capability: Document AI with OCR + LLM extraction. Data: 3 years of historical documents (180K+ records).

**WHEN:** Phase 1 (Q1): Data audit, GDPR assessment, vendor shortlist. Phase 2 (Q2): Pilot with 3 trade lanes, 500 documents/week. Phase 3 (Q3–Q4): Full rollout, 8,000 documents/week. Board review at end of each phase.

**WHERE:** All data processed within the EU (GDPR). Vendor must be EU-based or offer EU data residency. EU AI Act: limited-risk system (document extraction with human review of errors). Deployed on the company's Azure EU-West instance.

**HOW:** Decision: Buy + API (not build). Selected Mistral API for extraction plus a custom validation layer. No internal ML training capacity. Integration via REST API to the existing TMS. Human review queue for <90% confidence extractions.
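The confidence-gated routing in that HOW decision can be sketched as follows. Only the 90% threshold comes from the case study; the field schema, confidence values, and function names are illustrative:

```python
REVIEW_THRESHOLD = 0.90  # case study: <90% confidence goes to humans


def route(extraction: dict) -> str:
    """Send a document to automatic processing or the human review
    queue based on the minimum field-level confidence, so a single
    uncertain field flags the whole document."""
    min_conf = min(f["confidence"] for f in extraction["fields"].values())
    return "auto" if min_conf >= REVIEW_THRESHOLD else "human_review"


doc = {
    "fields": {
        "shipper": {"value": "ACME GmbH", "confidence": 0.98},
        "hs_code": {"value": "8471.30",  "confidence": 0.86},
    }
}
print(route(doc))  # prints "human_review": one low-confidence field
```

Gating on the minimum field confidence is a conservative design choice: it trades some review-queue volume for the guarantee that no partially uncertain document is auto-processed, which matches the limited-risk posture described in the WHERE dimension.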
**Results:** Processing time reduced to 8 minutes per shipment (82% reduction). €840K annual savings. Team of 25 redeployed to exception handling and customer relations. System processes 95% of documents automatically; 5% routed to human review.
| Dimension | Without Framework | With 6Ws Framework |
|---|---|---|
| Stakeholder Alignment | Discovered late — often after pilot failure | Mapped at the start; champions and resistors identified |
| Use Case Selection | Driven by vendor demos or executive preference | Systematically scored on value × feasibility |
| Timeline Expectations | Unrealistic deadlines causing trust erosion | Phased milestones with clear dependencies |
| Regulatory Risk | Surfaced during deployment — costly to remediate | Addressed in the WHERE dimension before build |
| Build vs Buy | Decided based on vendor relationships | Decided based on strategic differentiators and TCO |
| Success Metrics | Vague or absent; team cannot declare victory | Defined in the WHY dimension with baselines |