That impressive AI demo in the pitch deck? It might be a thin wrapper around GPT-4 with a custom prompt. I've evaluated 30+ AI companies for investors. 40% had wrapper risk — thin layers on foundation models dressed up as proprietary innovation. You need someone who has built production AI at Cisco (100M+ users) and AuraLinkOS (319 microservices, ~20 AI agents) to tell you what's actually running behind the demo. Mohammed Cherifi, an AI tech due diligence advisor for VC and PE firms, evaluates AI architecture at the molecular level — reverse-engineering pipelines, stress-testing scalability claims, and classifying EU AI Act risk before you write the check.
The target claims proprietary AI. Their infrastructure bill tells a different story: $3K/month in OpenAI API calls and a React frontend. You can't distinguish a genuine moat from a prompt chain without someone who has built what they claim to have.
Your technical advisors are generalist CTOs. None have shipped multi-agent systems, built physics-informed neural networks, or deployed 319 microservices. They check boxes. They don't reverse-engineer AI pipelines.
Post-acquisition, you discover the 'AI platform' is held together with duct tape, manual processes, and one engineer who wrote everything. Key-person risk you didn't price in because DD didn't look deep enough.
EU AI Act enforcement starts August 2026. A high-risk AI classification adds €500K-€2M in compliance costs nobody budgeted for. Your DD team doesn't understand AI regulation because they've never built compliant systems.
You get a deep technical investigation by someone who has built the exact systems targets claim to have. Not a generalist with a template. Not a junior analyst with a scorecard. Someone who can read the infrastructure bill and know immediately whether the AI is real.
Reverse-engineer the AI stack: model architecture, training pipeline, data infrastructure, deployment topology, and scalability limits. Is this a fine-tuned model, a RAG system, a prompt chain, or genuine novel architecture? The answer determines everything.
Evaluate defensibility across five dimensions: proprietary data advantages, model differentiation, integration depth, switching costs, and competitive positioning against foundation model roadmaps. Score moat durability at 6, 12, and 24 months.
Traffic-light scoring across 8 dimensions: technical debt, team capability, scalability, security posture, EU AI Act exposure, data quality, vendor dependency, and IP ownership. Each dimension benchmarked against 30+ previous assessments.
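To make the scorecard concrete, here is a minimal sketch of how an 8-dimension traffic-light roll-up can be structured. The dimension names come from the methodology above; the 0-10 scale, the RED/AMBER/GREEN cutoffs, and the sample scores are purely illustrative assumptions, not the actual benchmarks.

```python
# Illustrative sketch only: dimension names are from the methodology;
# the 0-10 scale and the 4.0 / 7.0 cutoffs are hypothetical.
DIMENSIONS = [
    "technical_debt", "team_capability", "scalability", "security_posture",
    "eu_ai_act_exposure", "data_quality", "vendor_dependency", "ip_ownership",
]

def traffic_light(score: float) -> str:
    """Map a 0-10 dimension score to a traffic-light rating."""
    if score < 4.0:
        return "RED"
    if score < 7.0:
        return "AMBER"
    return "GREEN"

def scorecard(scores: dict) -> dict:
    """Rate every dimension; refuse incomplete assessments."""
    missing = set(DIMENSIONS) - set(scores)
    if missing:
        raise ValueError(f"unscored dimensions: {sorted(missing)}")
    return {dim: traffic_light(scores[dim]) for dim in DIMENSIONS}

# Hypothetical target company with heavy technical debt and
# unaddressed EU AI Act exposure.
example = scorecard({
    "technical_debt": 3.5, "team_capability": 8.0, "scalability": 6.0,
    "security_posture": 7.5, "eu_ai_act_exposure": 2.0, "data_quality": 6.5,
    "vendor_dependency": 5.0, "ip_ownership": 9.0,
})
print(example["technical_debt"])      # RED
print(example["eu_ai_act_exposure"])  # RED
```

The point of the structure, not the numbers: every dimension must be scored before a verdict exists, which is what distinguishes a methodology from a checklist.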
Clear GO/NO-GO/CONDITIONAL recommendation with risk-adjusted valuation input, estimated remediation costs, and a 90-day post-close action plan. A decision, not a document.
A structured AI technical due diligence methodology developed from evaluating 30+ AI companies across Series A through growth equity. Goes beyond code review to assess the entire AI value chain.
VC and PE firms evaluating AI-native companies from Seed to growth equity. Fund-of-funds needing portfolio-wide AI risk assessment. Corporate M&A teams acquiring AI capabilities. You need more than a generalist consultant — you need someone who has built and shipped the exact systems you're evaluating.
Rapid Scan takes 1-2 weeks and covers architecture, team, and top-line risk assessment. Full Due Diligence takes 2-4 weeks and includes deep code review, scalability testing, EU AI Act classification, and complete risk scoring. Timeline can flex based on target company cooperation and scope.
Two tiers are available. Rapid Scan includes architecture review, AI moat assessment, team evaluation, top-line risk scoring, and a 15-page executive report. Full DD adds deep code review, scalability stress testing, a security audit, EU AI Act compliance assessment, IP analysis, and a 50-80 page detailed report with an investment committee presentation. Contact us for pricing based on deal complexity.
Yes. The Portfolio Retainer provides quarterly technical health checks across your AI portfolio, on-call technical advisory for new deals, and early warning on emerging risks like EU AI Act enforcement changes or competitive moat erosion. Contact us for retainer pricing based on portfolio size.
Standard practice. I sign your fund's NDA before any engagement begins. I maintain strict information barriers between portfolio companies and competing deals. All reports and work products are delivered under your ownership. I bring 15+ years of enterprise confidentiality practice from Cisco and comparable environments.
Deep expertise in automotive AI (Renault-Nissan), enterprise SaaS, industrial AI, cybersecurity, and healthtech. For specialized verticals like biotech or fintech, I partner with domain-specific advisors while leading the core AI technical assessment.
Common scenario. I work with whatever access is available—architecture diagrams, API documentation, demo environments, team interviews, published papers, and public artifacts. Even without code access, an experienced AI builder can identify red flags from system behavior, team responses, and architectural descriptions. Limited access is noted as a risk factor in the report.
Five dimensions: (1) Data moat—proprietary data that's expensive or impossible to replicate, (2) Model moat—genuine architectural innovation vs. fine-tuned open-source, (3) Integration moat—deep embedding in customer workflows, (4) Talent moat—key researchers or engineers, (5) Regulatory moat—compliance advantages. Each is scored and benchmarked against the competitive landscape.
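As a rough sketch of how such a five-dimension assessment could be aggregated into a single composite, the snippet below weights each moat dimension and rolls it up. The weights, the 0-10 scale, and the sample target are hypothetical assumptions for illustration; the actual benchmarking model is not shown here.

```python
from dataclasses import dataclass

# Hypothetical weights summing to 1.0; the real model is not public.
MOAT_WEIGHTS = {
    "data": 0.30, "model": 0.25, "integration": 0.20,
    "talent": 0.15, "regulatory": 0.10,
}

@dataclass
class MoatScores:
    data: float         # cost to replicate the proprietary data
    model: float        # genuine innovation vs fine-tuned open-source
    integration: float  # depth of embedding in customer workflows
    talent: float       # dependence on key researchers/engineers
    regulatory: float   # compliance advantages over competitors

    def composite(self) -> float:
        """Weighted 0-10 composite of the five dimension scores."""
        return sum(getattr(self, dim) * w for dim, w in MOAT_WEIGHTS.items())

# Hypothetical target: strong data moat, weak model moat.
target = MoatScores(data=8, model=3, integration=7, talent=6, regulatory=4)
print(round(target.composite(), 2))  # 5.85
```

A single composite hides as much as it reveals, which is why the report scores each dimension separately and benchmarks it against the competitive landscape rather than reporting one number.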
I provide technical input to valuation, not financial valuation itself. This includes: risk-adjusted technology scoring, estimated remediation costs for identified issues, scalability ceiling analysis, and comparable technology benchmarking. Your financial team combines this with market and financial analysis for final valuation.
Let's discuss how this service can de-risk your next AI deal.