The definitive real-time reference for EU AI Act compliance status, key deadlines, and obligations by risk tier. We are currently about five months from the most critical deadline: 2 August 2026, when the core high-risk obligations begin to apply.
Key dates from entry into force through full application
The EU AI Act was published in the Official Journal on 12 July 2024 and entered into force on 1 August 2024.
On 2 February 2025, Chapter I (General Provisions) and Chapter II (Prohibited AI Practices) became applicable. AI practices that pose unacceptable risks to fundamental rights, including social scoring, subliminal manipulation, and real-time remote biometric identification in publicly accessible spaces, are now banned.
On 2 August 2025, General Purpose AI (GPAI) model obligations and the EU AI Office governance structure became applicable. Providers of GPAI models must comply with transparency and copyright requirements, with additional systemic-risk obligations for the largest (frontier) models.
2 August 2026 is the core compliance deadline for high-risk AI in critical sectors: healthcare, education, employment, law enforcement, border control, administration of justice, and democratic processes. Conformity assessments, technical documentation, and human oversight mechanisms are required.
From 2 August 2027, high-risk AI systems embedded in regulated products (Annex I), including machinery, medical devices, civil aviation, vehicles, and rail systems, must comply. Because these systems are already subject to existing EU product safety legislation, they get an extra year.
The EU AI Act uses a risk-based approach: your obligations depend entirely on how your AI system is classified into one of the four tiers below (a minimal classification sketch follows the tier list).
AI practices that pose unacceptable risks to fundamental rights
AI systems in critical sectors requiring mandatory conformity assessment
AI systems with specific transparency obligations, such as chatbots and AI-generated content that must be disclosed as such
AI systems posing little or no risk — the vast majority of AI applications
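To make the tier logic concrete, here is a minimal classification sketch in Python. The attribute names (is_prohibited_practice, is_annex_iii_use_case, and so on) are illustrative assumptions rather than terms defined in the Act, and a real determination requires legal analysis of Annexes I and III.

```python
# Minimal sketch of the four-tier logic described above.
# Attribute names are illustrative assumptions, not AI Act terminology.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"        # Chapter II practices, banned outright
    HIGH = "high-risk"                 # Annex III use cases / Annex I products
    LIMITED = "transparency"           # disclosure duties (chatbots, synthetic content)
    MINIMAL = "minimal"                # everything else


@dataclass
class AISystemProfile:
    is_prohibited_practice: bool       # e.g. social scoring, subliminal manipulation
    is_annex_iii_use_case: bool        # e.g. employment, education, law enforcement
    is_annex_i_safety_component: bool  # e.g. medical device, machinery, aviation
    interacts_with_humans: bool        # triggers transparency duties
    generates_synthetic_content: bool  # triggers AI-content labelling duties


def classify(system: AISystemProfile) -> RiskTier:
    """Return the applicable tier, checked from strictest to most permissive."""
    if system.is_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if system.is_annex_iii_use_case or system.is_annex_i_safety_component:
        return RiskTier.HIGH
    if system.interacts_with_humans or system.generates_synthetic_content:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```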
General Purpose AI models, including large language models, have specific obligations under Chapter V of the AI Act, applicable since 2 August 2025.
Maintain up-to-date technical documentation covering training methodology, data sources, compute used, and known limitations.
Implement and maintain a copyright compliance policy, including respect for text-and-data-mining opt-outs, and publish a sufficiently detailed summary of the content used to train the model.
Publish model cards describing capabilities, limitations, intended uses and foreseeable misuse, and evaluation results.
For frontier models presumed to have systemic risk (≥10²⁵ FLOPs of training compute), conduct adversarial testing, report serious incidents, and implement cybersecurity measures; see the sketch below for how the compute threshold is typically estimated.
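To show how the 10²⁵ FLOP threshold is usually reasoned about in practice, here is a minimal sketch. The 6 × parameters × training-tokens estimate is a common rule of thumb for dense transformer training compute, not a method prescribed by the Act, and the model sizes in the example are hypothetical.

```python
# Rough check against the 1e25 FLOP systemic-risk presumption mentioned above.
# The 6 * parameters * tokens estimate is a common rule of thumb for dense
# transformers, not an AI Act formula; the example model sizes are hypothetical.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Approximate cumulative training compute for a dense transformer."""
    return 6.0 * parameters * training_tokens


def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 1e25 FLOP presumption."""
    return estimated_training_flops(parameters, training_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


if __name__ == "__main__":
    # Hypothetical 70B-parameter model on 15T tokens:
    # 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs -> below the threshold.
    print(presumed_systemic_risk(7e10, 1.5e13))   # False
    # Hypothetical 400B-parameter model on 15T tokens:
    # 6 * 4e11 * 1.5e13 = 3.6e25 FLOPs -> above the threshold.
    print(presumed_systemic_risk(4e11, 1.5e13))   # True
```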
The conformity assessment process for high-risk AI systems takes 3 to 12 months depending on complexity. Organizations that start now have the best chance of meeting the August 2026 deadline without disrupting operations.