Is the AI company you're evaluating genuinely defensible? Or just a UI on top of someone else's model? AI startups raised €131.5B in 2025. Most won't survive. The 'wrapper era' is collapsing as foundation models absorb the features startups pitched as unique. From 30+ assessments: 40% showed significant wrapper risk, 30% had moderate defensibility, 20% had strong moats, and only 10% were exceptional. This service goes to the molecular level: reverse-engineering AI architectures, evaluating data provenance, testing model performance claims, and determining whether the moat is real, artificial, or nonexistent.
70% of 'AI companies' are thin layers on top of foundation models. When OpenAI or Anthropic ships the startup's headline feature as a built-in capability, its entire value proposition evaporates overnight. I've watched it happen to 4 companies I assessed in the past 12 months.
AI demos are seductive. A carefully curated demo can make a $50/month API call look like a $50M breakthrough. You need someone who can read the infrastructure bill and know the difference in minutes, not weeks.
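Reading the infrastructure bill can be as simple as back-of-envelope arithmetic. The sketch below is illustrative only: the euro amounts, request counts, and the 10x ratio threshold are hypothetical assumptions, not figures from any real assessment.

```python
# Hypothetical numbers: does the claimed "proprietary model"
# match the company's actual infrastructure spend?
monthly_api_bill_eur = 4_200    # line item paid to an external model provider
monthly_gpu_bill_eur = 300      # cloud GPU spend for self-hosted inference
monthly_requests = 1_500_000    # from the company's own usage metrics

# Cost per request routed to the external provider.
cost_per_request = monthly_api_bill_eur / monthly_requests
print(f"~€{cost_per_request:.4f} per externally routed request")

# A company truly serving its own model would show the inverse pattern:
# substantial GPU spend, minimal external API spend. The 10x ratio is an
# arbitrary illustrative threshold.
if monthly_api_bill_eur > 10 * monthly_gpu_bill_eur:
    print("Infrastructure footprint suggests a wrapper, not a proprietary model")
```

The point is not the exact numbers but the shape of the spend: a demo cannot hide where the inference actually runs.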
Data provenance is the new IP — but most investors can't evaluate whether a company's data advantage is real, sustainable, and legally defensible under the GDPR and the EU AI Act. The data moat is the moat. Everything else is temporary.
The EU AI Act creates a new dimension of risk. A high-risk AI classification can add €200K-€2M+ in compliance costs that nobody has budgeted for. And a company built on someone else's model has zero control over compliance when the provider changes its terms.
A systematic investigation that goes from surface-level claims to molecular-level reality. Each layer builds on the previous, creating a complete picture of what the AI actually is, how defensible it is, and how long the moat holds.
Reverse-engineer the AI pipeline: data ingestion, feature engineering, model architecture, inference pipeline, feedback loops. Is this a fine-tuned model, a RAG system, a prompt chain, or a genuinely novel architecture?
Evaluate data provenance, rarity, network effects, and defensibility. Can a well-funded competitor replicate this data advantage in 12 months? Is the data legally obtained and GDPR-compliant?
Assess training methodology, architectural innovation, and domain-specific optimization. Is this reproducible by a competent ML team in 3 months? Are there genuine trade secrets or just prompt engineering?
Map the technical moat against foundation model roadmaps (OpenAI, Anthropic, Google, Meta). Where exactly is the moat, and how many months does it hold before commoditization?
A forensic AI assessment methodology grounded in building 31 production AI models and evaluating 30+ AI companies. It goes beyond surface-level technical review to molecular-level architecture forensics.
VCs evaluating AI-first startups from Seed through Series C. PE firms acquiring AI-enabled companies where the AI is the value thesis. Corporate venture arms assessing AI technology partnerships. You need someone who has built production AI — not just reviewed it — to tell you if this AI is real.
Standard tech DD covers the full technology stack: infrastructure, team, security, scalability. AI Moat Forensics is laser-focused on the AI specifically: Is the AI real? Is it defensible? How long before foundation models commoditize it? Think of it as the specialist examination after the general checkup.
Multiple signals: API call patterns to external providers, response latency profiles, error message patterns, model behavior consistency tests, infrastructure cost analysis vs. claimed capabilities, and direct architectural investigation. A company running proprietary models has a fundamentally different infrastructure footprint than one calling OpenAI's API.
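One of these signals, the latency profile, can be sketched as a simple heuristic: proxied calls to an external provider tend to show a higher latency floor (network round trip plus provider queueing) and a heavier tail than a locally served model. Everything in this sketch is an assumption for illustration — the sample timings, the thresholds, and the function names are hypothetical, not the actual assessment tooling.

```python
import statistics

def latency_fingerprint(samples_ms):
    """Summarize the latency profile of one prompt repeated many times."""
    ordered = sorted(samples_ms)
    return {
        "floor_ms": ordered[0],                                  # fastest observed response
        "p95_ms": ordered[int(0.95 * (len(ordered) - 1))],       # tail latency
        "cv": round(statistics.stdev(samples_ms)
                    / statistics.mean(samples_ms), 2),           # relative jitter
    }

def flag_possible_proxy(profile, floor_threshold_ms=400, cv_threshold=0.5):
    # Placeholder thresholds: in practice these would be calibrated
    # against known baselines for self-hosted vs. provider-routed models.
    return profile["floor_ms"] > floor_threshold_ms or profile["cv"] > cv_threshold

# Hypothetical timings (ms) from 10 identical prompts to a model
# the company claims to serve on its own GPUs:
claimed_onprem = [820, 1450, 790, 2100, 805, 980, 1700, 815, 1300, 860]
profile = latency_fingerprint(claimed_onprem)
print(profile, flag_possible_proxy(profile))
```

A latency floor this high for a supposedly co-located model is exactly the kind of inconsistency that triggers deeper architectural investigation; no single signal is conclusive on its own.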
Deep expertise in production AI across automotive (Renault-Nissan), enterprise platforms (Cisco), industrial IoT (ABB), and general SaaS. For highly specialized domains like drug discovery or climate modeling, I assess the AI architecture and defensibility while partnering with domain experts for application-level evaluation.
From 30+ assessments: roughly 40% show significant wrapper risk (thin layer on foundation models), 30% have moderate defensibility (proprietary data or fine-tuning, but reproducible), 20% have strong moats (genuine architectural innovation or irreplaceable data assets), and 10% are exceptional (true breakthrough that would be very difficult to replicate).
Every assessment includes EU AI Act classification: Unacceptable (banned), High-risk (heavy compliance requirements), Limited risk (transparency obligations), or Minimal risk (self-regulation). For high-risk classifications, I estimate compliance costs (€200K-€2M+), timeline, and impact on the business model. This is increasingly critical for any AI investment in Europe.
Explore other services that complement this offering
Let's discuss how this service can address your specific challenges and drive real results.