Curated AI developments with expert analysis. What matters, what doesn't, and what to do about it.
Weekly Summary
Get the AI Radar Weekly
Curated AI news with expert commentary, delivered every Monday. No spam, no filler, just what matters for enterprise AI.
8 items
High Impact · Regulation · 1 Feb 2026 · European Commission
EU AI Act High-Risk Requirements Enforcement Begins August 2026
The EU AI Act's high-risk AI system requirements officially take effect in August 2026, requiring organizations deploying AI in healthcare, finance, HR, and critical infrastructure to implement risk management, data governance, transparency, and human oversight measures.
Expert Opinion
This is the single most important deadline for European companies deploying AI. If you have AI making decisions about people — hiring, lending, medical diagnoses — you need to be compliant by August. The companies who started 6 months ago are in good shape. The companies who haven't started yet are in trouble. My recommendation: begin with an AI system inventory. You can't comply with what you can't see.
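To make that inventory advice concrete, here is a minimal sketch of what one inventory entry might capture; the field names and the simplified risk tiers are illustrative assumptions, not the AI Act's formal taxonomy.

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskTier(Enum):
    # Simplified tiers loosely following the AI Act's risk-based approach
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    PROHIBITED = "prohibited"

@dataclass
class AISystemRecord:
    """One row in an AI system inventory (illustrative fields only)."""
    name: str
    owner: str                      # accountable business owner
    purpose: str                    # what decisions the system informs
    affects_individuals: bool       # hiring, lending, diagnosis, etc.
    risk_tier: RiskTier
    human_oversight: str            # how a human can review or override
    data_sources: list[str] = field(default_factory=list)

# Example entry: an HR screening model would land in the high-risk tier.
inventory = [
    AISystemRecord(
        name="cv-screening-v2",
        owner="HR Analytics",
        purpose="Rank incoming job applications",
        affects_individuals=True,
        risk_tier=RiskTier.HIGH,
        human_oversight="Recruiter reviews every rejection",
        data_sources=["ATS exports"],
    )
]
```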
Mistral Releases Large 2: European LLM Competitive with GPT-4
Mistral AI's latest model, Large 2, demonstrates performance competitive with GPT-4o across benchmarks while maintaining EU data sovereignty. The model is available both via API and as open-weight for self-hosted deployments.
Expert Opinion
This matters for European companies who need data sovereignty. Until now, the 'use the best model' and 'keep data in Europe' goals conflicted. Mistral Large 2 closes that gap. For regulated industries — banking, healthcare, government — this changes the calculus. You can now get GPT-4-class performance while keeping every byte of data under EU jurisdiction.
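As a hedged illustration of the self-hosted option: many open-weight deployments are served behind an OpenAI-compatible endpoint (vLLM and similar servers expose one), which the standard client can talk to. The base URL, API key, and model name below are placeholders for your own deployment, not Mistral's documented API.

```python
from openai import OpenAI

# A common pattern for self-hosted open-weight models: serve them behind an
# OpenAI-compatible endpoint and reuse the standard client. The base_url and
# model name are placeholders for your own internal deployment.
client = OpenAI(
    base_url="https://llm.internal.example.eu/v1",  # stays inside EU infrastructure
    api_key="not-needed-for-internal-endpoint",
)

response = client.chat.completions.create(
    model="mistral-large-2",  # whatever name your serving layer registers
    messages=[{"role": "user", "content": "Summarise this contract clause ..."}],
)
print(response.choices[0].message.content)
```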
OpenAI o3: Chain-of-Thought Reasoning Reaches New Heights
OpenAI's o3 model family pushes chain-of-thought reasoning further, achieving near-human performance on complex mathematical and scientific reasoning tasks. The model 'thinks' before answering, trading latency for accuracy.
Expert Opinion
o3 is a paradigm shift — not faster, but smarter. For enterprise use cases like legal analysis, financial modeling, or engineering design, the extra reasoning time is a worthwhile trade-off. My advice: don't default to o3 for everything. Use fast models for simple tasks, reasoning models for complex ones. The real innovation is knowing when to use which.
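A minimal sketch of that routing idea, assuming you can tag a task as needing deep reasoning; the model names and the heuristic are placeholders rather than any vendor's API.

```python
def pick_model(task: str, needs_deep_reasoning: bool) -> str:
    """Route simple tasks to a fast model and complex ones to a reasoning model.

    The model names are placeholders; the flag would normally come from a
    classifier or from the task type (e.g. extraction vs. legal analysis).
    """
    if needs_deep_reasoning:
        return "reasoning-model"   # slower, higher accuracy (o3-class)
    return "fast-model"            # low latency, fine for routine tasks

# Routine extraction stays cheap; a multi-step legal question pays for reasoning.
print(pick_model("Extract the invoice date", needs_deep_reasoning=False))
print(pick_model("Assess liability exposure across these contracts", needs_deep_reasoning=True))
```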
High Impact · Tools & Infra · 15 Jan 2026 · Industry Analysis
AI Agent Frameworks Mature: Production-Ready Autonomous Systems
LangGraph, CrewAI, and AutoGen have matured significantly, enabling production-grade AI agent deployments. Key improvements include better error recovery, human-in-the-loop workflows, and observability tooling.
Expert Opinion
2025 was the year of AI agent hype. 2026 is the year they actually work in production. The key difference? Guardrails. The frameworks that won aren't the most autonomous — they're the ones with the best human-in-the-loop patterns. If you're building AI agents, invest in evaluation and monitoring before you invest in capability. An agent that's 80% accurate but you can't monitor is worse than one that's 70% accurate with full observability.
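As a framework-agnostic sketch of the human-in-the-loop pattern: the agent proposes an action, and a guardrail policy decides whether it runs immediately or pauses for review. The tool names and the policy are illustrative assumptions, not the API of LangGraph, CrewAI, or AutoGen.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    tool: str
    arguments: dict
    rationale: str

def requires_approval(action: ProposedAction) -> bool:
    # Guardrail policy: anything that mutates external state pauses for a human.
    # The tool names are illustrative.
    return action.tool in {"send_email", "issue_refund", "update_crm"}

def execute_with_guardrails(action: ProposedAction) -> str:
    if requires_approval(action):
        # In a real system this would enqueue the action for review
        # and resume the agent once a human approves or rejects it.
        return f"PAUSED for human review: {action.tool} ({action.rationale})"
    return f"Executed read-only action: {action.tool}"

print(execute_with_guardrails(
    ProposedAction(tool="issue_refund", arguments={"order": "A-123"}, rationale="duplicate charge")
))
```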
European Manufacturers Double AI Investment in 2026
A McKinsey survey of 500 European manufacturers reveals that AI investment budgets doubled year-over-year, with predictive maintenance, quality inspection, and supply chain optimization as the top use cases. However, 65% report at least one 'stuck' AI pilot.
Expert Opinion
The money is flowing, but the execution gap is widening. 65% with stuck pilots tells you everything — the bottleneck isn't budget, it's production ML engineering. Companies hire data scientists but not MLOps engineers. They build great models that never leave the notebook. If you're in that 65%, the fix isn't more R&D spend — it's an engineer who's shipped AI to production before.
Medium Impact · Research · 5 Jan 2026 · Research Community
RAG Evaluation Frameworks Get Serious About Hallucination Detection
New evaluation frameworks (RAGAS 2.0, DeepEval, and TruLens) provide production-grade hallucination detection for RAG systems, enabling automated testing of retrieval quality, faithfulness, and answer relevance.
Expert Opinion
Finally. The biggest risk with RAG systems isn't bad retrieval — it's confidently wrong answers. These evaluation frameworks let you catch hallucinations before users do. If you're running RAG in production, integrate automated eval into your CI/CD pipeline. Test every prompt template change against a golden dataset. This is the RAG equivalent of unit testing — nobody does it until something breaks in production.
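A minimal sketch of such a golden-dataset check as a pytest test that CI can run on every prompt change; `rag_pipeline` and `score_faithfulness` are placeholders for your own chain and for whichever framework metric (RAGAS, DeepEval, TruLens) you wire in, and the toy scorer is only a stand-in so the example runs.

```python
import pytest

# Golden dataset: questions paired with the source passage an answer must stay faithful to.
GOLDEN_CASES = [
    {
        "question": "What is the notice period in the standard contract?",
        "context": "The standard contract specifies a notice period of 30 days.",
        "expected_keyword": "30 days",
    },
]

def rag_pipeline(question: str) -> dict:
    """Stand-in for your real RAG chain; replace with retrieval + generation."""
    return {
        "answer": "The notice period is 30 days.",
        "context": GOLDEN_CASES[0]["context"],
    }

def score_faithfulness(answer: str, context: str) -> float:
    """Toy stand-in for a framework metric: share of answer words found in the context."""
    answer_words = set(answer.lower().strip(".").split())
    context_words = set(context.lower().strip(".").split())
    return len(answer_words & context_words) / max(len(answer_words), 1)

@pytest.mark.parametrize("case", GOLDEN_CASES)
def test_rag_answer_stays_faithful(case):
    result = rag_pipeline(case["question"])
    assert case["expected_keyword"] in result["answer"]
    # Fail the CI build if faithfulness drops below a threshold you tune over time.
    assert score_faithfulness(result["answer"], result["context"]) >= 0.7
```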
NVIDIA Blackwell GPUs Enable On-Premise Enterprise AI at Scale
NVIDIA's Blackwell architecture GPUs are now widely available for enterprise on-premise deployments, enabling companies to run large language models locally with performance previously only available in the cloud.
Expert Opinion
This is a game-changer for data sovereignty. Running Llama 3 or Mistral on-premise with Blackwell means you get cloud-class inference without cloud-class data risk. For European companies under GDPR and the AI Act, on-premise AI just became viable. The cost is high upfront but the TCO calculation works for companies processing sensitive data at volume. If you're spending over €50K/month on API calls, do the math on self-hosting.
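Doing that math could look like the back-of-envelope sketch below; every figure is a placeholder assumption rather than a quoted price, so substitute your own numbers.

```python
# Back-of-envelope self-hosting maths; all figures are placeholder assumptions.
api_spend_per_month = 50_000      # EUR, current managed-API bill
gpu_server_capex = 400_000        # EUR, assumed on-prem GPU server
ops_per_month = 10_000            # EUR, power, hosting, MLOps time
amortisation_months = 36          # write the hardware off over three years

self_hosted_per_month = gpu_server_capex / amortisation_months + ops_per_month
payback_months = gpu_server_capex / (api_spend_per_month - ops_per_month)

print(f"Amortised self-hosted cost: EUR {self_hosted_per_month:,.0f}/month "
      f"vs EUR {api_spend_per_month:,.0f}/month on APIs")
print(f"Hardware pays for itself in roughly {payback_months:.0f} months")
```

Under these assumed numbers the server amortises to about EUR 21,000 per month against a EUR 50,000 API bill, which is the kind of gap that makes the exercise worth running with your real figures.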
European AI Startup Funding Hits €12B in 2025, Led by Mistral and Aleph Alpha
European AI startup funding reached a record €12 billion in 2025, with Mistral AI, Aleph Alpha, and a wave of vertical AI companies leading the charge. Enterprise AI tools and AI compliance platforms saw the highest growth.
Expert Opinion
Europe is finally building its AI ecosystem, not just consuming American models. The rise of compliance-focused AI startups (Credo AI, Holistic AI) is uniquely European — we're turning regulation into innovation. For enterprise buyers: this means more choice and more European options. For founders: vertical AI with built-in compliance is the European advantage.