Assess your EU AI Act exposure across 5 dimensions. All 15 questions on one page — scroll, review and change answers freely before submitting.
This diagnostic applies to traditional ML models, LLM-based deployments, automated pipelines and agentic workflows. Answer each question for your AI systems as a whole.
We have a complete, up-to-date inventory of every AI and ML system running in production — including third-party tools, embedded AI features, and vendor AI systems.
Each AI system has been assessed against EU AI Act risk tiers (Unacceptable / High-Risk / Limited / Minimal) and we know which Annex III categories apply and what our obligations are.
There is a named, accountable owner for each AI system — a person who answers for that system if it causes harm, breaches compliance, or triggers a regulatory finding.
We have tested our production AI models for calibration — we know whether confidence scores are reliable, and we monitor for overconfidence or systematic miscalibration (e.g. Expected Calibration Error testing).
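For illustration, here is a minimal sketch of the kind of check this question is probing: an Expected Calibration Error (ECE) computation over held-out predictions. The data is synthetic and the bin count is arbitrary; treat it as a sketch, not a prescribed test.

```python
# Minimal sketch of Expected Calibration Error (ECE) on held-out predictions.
# The probabilities and labels here are synthetic; swap in your model's outputs.
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Binned gap between predicted confidence and observed accuracy."""
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels, dtype=int)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs > lo) & (probs <= hi)
        if not mask.any():
            continue
        confidence = probs[mask].mean()      # average predicted probability in the bin
        accuracy = labels[mask].mean()       # observed positive rate in the bin
        ece += mask.mean() * abs(accuracy - confidence)
    return ece

rng = np.random.default_rng(0)
probs = rng.uniform(size=5_000)              # stand-in for model confidence scores
labels = rng.uniform(size=5_000) < probs     # well-calibrated by construction
print(f"ECE: {expected_calibration_error(probs, labels):.4f}")  # close to 0 here
```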
We monitor for distribution shift (e.g. using the Population Stability Index, PSI) — we have a process to detect when the data our models receive in production is meaningfully different from training data, and we act on it.
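As a sketch of what PSI monitoring can look like in practice, the example below compares a training-time score distribution with a production batch over quantile bins. The bin count, threshold and data are illustrative only.

```python
# Minimal sketch of a Population Stability Index (PSI) check between a
# training-time distribution and what the model sees in production.
import numpy as np

def population_stability_index(expected, actual, n_bins=10):
    """PSI over quantile bins of the reference (training) distribution."""
    expected = np.asarray(expected, dtype=float)
    actual = np.asarray(actual, dtype=float)
    edges = np.quantile(expected, np.linspace(0.0, 1.0, n_bins + 1))
    actual = np.clip(actual, edges[0], edges[-1])   # fold out-of-range values into outer bins
    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    act_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    exp_frac = np.clip(exp_frac, 1e-6, None)        # avoid log(0)
    act_frac = np.clip(act_frac, 1e-6, None)
    return float(np.sum((act_frac - exp_frac) * np.log(act_frac / exp_frac)))

rng = np.random.default_rng(0)
training_scores = rng.normal(0.0, 1.0, 50_000)
production_scores = rng.normal(0.3, 1.1, 5_000)     # deliberately shifted
psi = population_stability_index(training_scores, production_scores)
print(f"PSI: {psi:.3f}")   # a common rule of thumb treats > 0.2 as "investigate"
```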
Our models have been tested with edge cases and adversarial inputs — we know how they behave when inputs are unusual, ambiguous, or deliberately manipulated to probe robustness.
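One simple way to probe robustness is a perturbation sweep: nudge inputs slightly and measure how often predictions flip. The model, noise scale and data below are placeholders, shown only to illustrate the idea.

```python
# Illustrative robustness probe: perturb inputs slightly and measure how often
# predictions change. Model, data and the noise scale are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

noise_scale = 0.05                                   # illustrative perturbation size
X_perturbed = X + rng.normal(scale=noise_scale, size=X.shape)
flip_rate = (model.predict(X) != model.predict(X_perturbed)).mean()
print(f"Prediction flip rate under noise {noise_scale}: {flip_rate:.1%}")
```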
We have documented data lineage for every production AI system — we can show where training data came from, how it was processed, and whether it met quality and representativeness requirements.
We have assessed whether our training data was representative of the real-world population our models make decisions about — including bias assessment across protected characteristics.
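A basic representativeness and bias check might compare group shares in the training data against an external benchmark and compare outcome rates across groups, as in the illustrative sketch below (all figures and group labels are made up).

```python
# Illustrative representativeness/bias check: training-data group shares vs a
# reference population, plus outcome rates by group. Figures are invented.
import pandas as pd

training = pd.DataFrame({
    "group":    ["A", "A", "B", "B", "B", "A", "A", "A"],
    "approved": [1,   0,   0,   0,   1,   1,   1,   0],
})
reference_population_share = {"A": 0.5, "B": 0.5}    # assumed external benchmark

training_share = training["group"].value_counts(normalize=True)
approval_rate_by_group = training.groupby("group")["approved"].mean()

print("Training share vs reference population:")
print(pd.DataFrame({"training": training_share,
                    "reference": pd.Series(reference_population_share)}))
print("\nApproval rate by group:")
print(approval_rate_by_group)
```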
If an upstream data source changes, we would promptly detect the impact on our AI systems — through automated pipeline monitoring or alerting.
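A lightweight version of this kind of detection is a batch validation step that checks schema, dtypes and volume before data reaches a model. The column names, thresholds and alerting hook below are hypothetical placeholders.

```python
# Illustrative pipeline check: compare an incoming batch against an expected
# schema and a basic volume floor before it feeds a model.
import pandas as pd

EXPECTED_COLUMNS = {"customer_id": "int64", "income": "float64", "signup_date": "datetime64[ns]"}
MIN_ROWS = 1_000                                     # illustrative volume floor

def validate_batch(batch: pd.DataFrame) -> list[str]:
    issues = []
    for column, dtype in EXPECTED_COLUMNS.items():
        if column not in batch.columns:
            issues.append(f"missing column: {column}")
        elif str(batch[column].dtype) != dtype:
            issues.append(f"dtype drift in {column}: {batch[column].dtype} != {dtype}")
    if len(batch) < MIN_ROWS:
        issues.append(f"low volume: {len(batch)} rows < {MIN_ROWS}")
    return issues

# In a real pipeline these issues would be routed to alerting, not printed.
batch = pd.DataFrame({"customer_id": [1, 2], "income": ["oops", "also oops"]})
for issue in validate_batch(batch):
    print("ALERT:", issue)
```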
There is a formal deployment gate for AI systems — a defined process that must be completed before any AI system goes into production, including technical evaluation, risk sign-off, and documentation review.
We have a model version registry — every production model has a documented version history, including who approved each version, what changed, and when it was deployed.
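The sketch below shows one possible shape for a registry entry. In practice this would live in a model registry tool or database rather than in code, and the field names are assumptions, not a standard.

```python
# Illustrative shape of a model version registry entry; field names are placeholders.
from dataclasses import dataclass, asdict
from datetime import date
from typing import Optional

@dataclass
class ModelVersion:
    model_name: str
    version: str
    approved_by: str              # accountable approver for this version
    approved_on: date
    deployed_on: Optional[date]
    change_summary: str           # what changed relative to the previous version
    training_data_ref: str        # pointer to the exact dataset snapshot used
    evaluation_report: str        # pointer to the evaluation evidence

entry = ModelVersion(
    model_name="credit-risk-scorer",
    version="2.3.0",
    approved_by="jane.doe@example.com",
    approved_on=date(2025, 3, 14),
    deployed_on=date(2025, 3, 20),
    change_summary="Retrained on Q1 data; added income stability feature.",
    training_data_ref="s3://example-bucket/credit/train/2025-03-01/",
    evaluation_report="https://example.internal/reports/credit-risk-2.3.0",
)
print(asdict(entry))
```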
There is a documented incident response process for AI systems — if a model produces a harmful or incorrect output, or causes an operational failure, we have a defined response protocol.
Our AI systems can produce explanations for their outputs that a non-technical decision-maker can act on — specific reasons why a particular decision was made (SHAP/LIME or equivalent).
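To illustrate, the sketch below uses the shap package to turn a single decision into ranked feature contributions. The model, feature names and data are invented, and exact shap output shapes can vary by version.

```python
# Illustrative per-decision explanation using SHAP on a toy credit-style model.
# Assumes the `shap` package is installed; model, features and data are invented.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "age", "prior_defaults"]   # hypothetical
X = rng.normal(size=(500, 4))
y = (X[:, 1] - X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier(random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])           # attributions for one decision

# Rank features by how strongly they pushed this particular decision.
# Positive values push the model's output towards the positive class.
contributions = sorted(zip(feature_names, np.ravel(shap_values)),
                       key=lambda item: -abs(item[1]))
for name, value in contributions:
    print(f"{name}: {value:+.3f}")
```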
We have technical documentation for each AI system that meets Article 11 requirements — system purpose, capabilities, limitations, training data description, performance metrics, and known risks.
Humans can meaningfully override our AI systems — there is a real human-in-the-loop for high-stakes decisions, not just a theoretical override that nobody uses in practice.
Answer all 15 questions above, then calculate your score. Results appear instantly — no email required. Scroll up to review or change any answer before submitting.
Leave your work email and I will send you a PDF of your ACAI score — dimension breakdown, findings, and recommended next steps. Takes 10 seconds. No obligation.
The ACAI audit runs real engineering tests on your production systems and delivers a findings register mapped to the exact EU AI Act articles — with a 90-day remediation roadmap your team can execute.