ACAI — AI Compliance Audit & Inspection Framework

Free AI Compliance Diagnostic

Assess your EU AI Act exposure across 5 dimensions. All 15 questions on one page — scroll, review and change answers freely before submitting.

15 questions · 5 dimensions · ~10 minutes · No email required · Instant results
Covers all AI system types

This diagnostic applies to traditional ML models, LLM-based deployments, automated pipelines and agentic workflows. Answer each question for your AI systems as a whole.

D1
System Inventory & Risk Classification
EU AI Act · Annex III · Article 6 · Article 26
D1 · Question 1 of 15

We have a complete, up-to-date inventory of every AI and ML system running in production — including third-party tools, embedded AI features, and vendor AI systems.

1 Not at all
2 Partially
3 Mostly
4 Yes
5 Complete
No inventory → Fully documented
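
If you are starting from zero, the inventory need not be elaborate. Below is a minimal sketch of what one record might capture, as a Python dataclass; the field names and example values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One row in an AI system inventory (illustrative fields, not a standard)."""
    system_id: str                   # internal identifier
    description: str                 # what the system does and for whom
    vendor: str | None               # None for in-house systems
    risk_tier: str                   # "unacceptable" | "high" | "limited" | "minimal"
    annex_iii_category: str | None   # e.g. "creditworthiness", if high-risk
    owner: str                       # named accountable person (see Question 3)
    last_reviewed: date

inventory = [
    AISystemRecord(
        system_id="credit-scoring-v2",
        description="Consumer credit approval model",
        vendor=None,
        risk_tier="high",
        annex_iii_category="creditworthiness",
        owner="jane.doe@example.com",
        last_reviewed=date(2025, 1, 15),
    ),
]
```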
D1 · Question 2 of 15

Each AI system has been assessed against EU AI Act risk tiers (Unacceptable / High-Risk / Limited / Minimal) and we know which Annex III categories apply and what our obligations are.

1 Not done
2 Started
3 Partial
4 Mostly
5 Complete
No classification → Fully classified
D1 · Question 3 of 15

There is a named, accountable owner for each AI system — a person who can be held responsible if that system causes harm, a compliance breach, or a regulatory finding.

1 No owners
2 Some
3 Most
4 All
5 Enforced
No ownership → Formally enforced
D2
Behavioural Consistency & Reliability
EU AI Act · Article 15 · Article 61
D2 · Question 4 of 15

We have tested our production AI models for calibration — we know whether confidence scores are reliable, and we monitor for overconfidence or systematic miscalibration, for example via Expected Calibration Error (ECE) testing.

1 Never
2 Once
3 Sometimes
4 Regularly
5 Continuous
Never tested → Automated monitoring
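
If you have never run this test, a binned Expected Calibration Error check is a common starting point. A minimal NumPy sketch follows; the bin count and equal-width binning are conventional choices, and a production check would run on held-out or live traffic.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: weighted mean |accuracy - confidence| across confidence bins."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap   # weight each bin by its share of samples
    return ece

# The model claims 90% confidence but is right only 60% of the time:
print(expected_calibration_error([0.9] * 5, [1, 1, 1, 0, 0]))  # 0.3 -> overconfident
```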
D2 · Question 5 of 15

We monitor for distribution shift, for example via the Population Stability Index (PSI) — we have a process to detect when the data our models receive in production is meaningfully different from the training data, and we act on it.

1 No
2 Manual
3 Partial
4 Automated
5 Full MLOps
No monitoring → MLOps integrated
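
For reference, PSI compares the binned distribution of a feature (or model score) at training time against production. A minimal sketch follows; the bin count, quantile binning, and the 0.25 alert threshold are common conventions rather than requirements.

```python
import numpy as np

def population_stability_index(expected, actual, n_bins=10):
    """PSI = sum((p_act - p_exp) * ln(p_act / p_exp)) over quantile bins
    taken from the training-time (expected) sample."""
    cuts = np.quantile(expected, np.linspace(0, 1, n_bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf          # catch values outside training range
    p_exp = np.histogram(expected, bins=cuts)[0] / len(expected)
    p_act = np.histogram(actual, bins=cuts)[0] / len(actual)
    p_exp = np.clip(p_exp, 1e-6, None)           # avoid log(0) on empty bins
    p_act = np.clip(p_act, 1e-6, None)
    return float(np.sum((p_act - p_exp) * np.log(p_act / p_exp)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 10_000)
prod = rng.normal(1.0, 1.0, 10_000)              # production data shifted by 1 sd
print(population_stability_index(train, prod))   # far above the ~0.25 rule of thumb
```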
D2 · Question 6 of 15

Our models have been tested with edge cases and adversarial inputs — we know how they behave when inputs are unusual, ambiguous, or deliberately manipulated to probe robustness.

1 Never
2 Ad hoc
3 Partial
4 Systematic
5 Full suite
Never tested → Full test suite
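
Short of a full adversarial suite, a crude perturbation probe can show how often predictions flip under small input noise. In the sketch below, `predict`, the noise scale, and the trial count are placeholders; dedicated adversarial tooling goes much further.

```python
import numpy as np

def flip_rate(predict, X, noise_scale=0.01, n_trials=20, seed=0):
    """Fraction of rows whose predicted label changes under small input noise."""
    rng = np.random.default_rng(seed)
    base = predict(X)
    flipped = np.zeros(len(X), dtype=bool)
    for _ in range(n_trials):
        noisy = X + rng.normal(0.0, noise_scale, X.shape) * X.std(axis=0)
        flipped |= predict(noisy) != base
    return flipped.mean()

# Toy stand-in for a production model: a simple threshold classifier
X = np.random.default_rng(1).normal(size=(100, 3))
predict = lambda A: (A.sum(axis=1) > 0).astype(int)
print(flip_rate(predict, X))   # rows near the decision boundary will flip
```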
D3
Data Governance & Lineage
EU AI Act · Article 10 · Article 17
D3 · Question 7 of 15

We have documented data lineage for every production AI system — we can show where training data came from, how it was processed, and whether it met quality and representativeness requirements.

1 No docs
2 Partial
3 Most
4 Complete
5 Auditable
No documentation → Fully auditable
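
One lightweight way to make lineage checkable later is to fingerprint each training dataset together with its source and processing steps. The field names and example values in this sketch are illustrative.

```python
import hashlib, json
from datetime import datetime, timezone

def lineage_record(data_bytes, dataset_name, source, transforms):
    """Minimal provenance record; the content hash detects silent changes later."""
    return {
        "dataset": dataset_name,
        "sha256": hashlib.sha256(data_bytes).hexdigest(),
        "source": source,                 # where the data came from
        "transforms": transforms,         # ordered processing steps applied
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

record = lineage_record(
    b"...raw training file bytes...",     # in practice, read from the actual file
    dataset_name="train_2024q4.parquet",
    source="CRM export, 2024-Q4",
    transforms=["dedupe", "impute_income_median", "one_hot_region"],
)
print(json.dumps(record, indent=2))
```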
D3 · Question 8 of 15

We have assessed whether our training data was representative of the real-world population our models make decisions about — including bias assessment across protected characteristics.

1 Never
2 Assumed
3 Partial
4 Assessed
5 Documented
Never assessed → Formally documented
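
A simple representativeness check compares positive-outcome rates across protected groups. The sketch below uses selection-rate ratios; the four-fifths (0.8) threshold it mentions is an informal convention from employment-testing practice, not an EU AI Act requirement.

```python
import numpy as np

def selection_rates(y_pred, groups):
    """Positive-outcome rate per group, plus the worst-case ratio between groups."""
    y_pred, groups = np.asarray(y_pred), np.asarray(groups)
    rates = {g: float(y_pred[groups == g].mean()) for g in np.unique(groups)}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio

y_pred = [1, 0, 1, 1, 0, 0, 1, 0]                 # model decisions
group = ["a", "a", "a", "a", "b", "b", "b", "b"]  # protected attribute
rates, ratio = selection_rates(y_pred, group)
print(rates, ratio)  # {'a': 0.75, 'b': 0.25} 0.33 -> well under the 0.8 convention
```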
D3 · Question 9 of 15

If an upstream data source changes, we would promptly detect the impact on our AI systems — through automated pipeline monitoring or alerting.

1 We wouldn't
2 Eventually
3 Manual check
4 Alerting
5 Automated
No detection → Automated alerting
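
A minimal detection layer validates each incoming batch against the schema and value ranges observed at training time. The column names, dtypes, and thresholds in this sketch are assumptions.

```python
import pandas as pd

def check_upstream_batch(batch, expected_dtypes, expected_ranges):
    """Return alerts for a new upstream batch; an empty list means it looks OK."""
    alerts = []
    for col, dtype in expected_dtypes.items():
        if col not in batch.columns:
            alerts.append(f"missing column: {col}")
        elif str(batch[col].dtype) != dtype:
            alerts.append(f"dtype change on {col}: {batch[col].dtype} != {dtype}")
    for col, (lo, hi) in expected_ranges.items():
        if col in batch.columns and not batch[col].between(lo, hi).all():
            alerts.append(f"out-of-range values in {col}")
    return alerts

batch = pd.DataFrame({"age": [34, 29, 210]})   # 210 is a bad upstream value
print(check_upstream_batch(batch, {"age": "int64"}, {"age": (0, 120)}))
```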
D4
Governance Process & Change Control
EU AI Act · Article 9 · Article 11 · Article 17
D4 · Question 10 of 15

There is a formal deployment gate for AI systems — a defined process that must be completed before any AI system goes into production, including technical evaluation, risk sign-off, and documentation review.

1 None
2 Informal
3 Partial
4 Formal
5 Enforced
No gate exists → Formally enforced
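
A gate can be as simple as a pipeline step that refuses to deploy while required sign-offs are missing. The gate items in this sketch mirror the ones this question lists and are illustrative.

```python
REQUIRED_GATE_ITEMS = ("technical_evaluation", "risk_signoff", "documentation_review")

def deployment_gate(artifacts):
    """Raise if any required sign-off is missing; call this from the deploy pipeline."""
    missing = [item for item in REQUIRED_GATE_ITEMS if not artifacts.get(item)]
    if missing:
        raise RuntimeError(f"Deployment blocked; missing gate items: {missing}")

# Blocks the deploy: risk sign-off has not been recorded yet
deployment_gate({"technical_evaluation": "eval-2025-01.pdf", "risk_signoff": None})
```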
D4 · Question 11 of 15

We have a model version registry — every production model has a documented version history, including who approved each version, what changed, and when it was deployed.

1 No registry
2 Ad hoc
3 Partial
4 MLflow or equivalent
5 Full history
No version control → Complete history
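
If you use MLflow, its model registry can carry the approval trail this question asks about. In the sketch below, the run URI, model name, and tag values are placeholders.

```python
import mlflow
from mlflow.tracking import MlflowClient

# Register the model version produced by a training run (URI is a placeholder)
result = mlflow.register_model("runs:/<run_id>/model", "credit-scoring")

client = MlflowClient()
# Record what changed and who approved it, against this specific version
client.update_model_version(
    name="credit-scoring",
    version=result.version,
    description="Retrained on 2024-Q4 data; approved by risk board 2025-01-20",
)
client.set_model_version_tag("credit-scoring", result.version, "approved_by", "jane.doe")
```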
D4 · Question 12 of 15

There is a documented incident response process for AI systems — if a model produces a harmful or incorrect output, or causes an operational failure, we have a defined response protocol.

1 None
2 Ad hoc
3 Partial
4 Documented
5 Tested
No process → Regularly tested
D5
Transparency & Explainability
EU AI Act · Article 13 · Article 14
D5 · Question 13 of 15

Our AI systems can produce explanations for their outputs that a non-technical decision-maker can act on — specific reasons why a particular decision was made (SHAP/LIME or equivalent).

1 Black box
2 Some
3 SHAP/LIME
4 Business terms
5 Actionable
No explanations → Actionable explanations
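
As a reference point for scale point 3, this is roughly what producing SHAP attributions looks like; a toy model stands in for a production system, and output formats vary across shap versions. Turning raw attributions into business-language reasons (points 4 and 5) is the harder organisational step.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Toy model standing in for a production system
X = np.random.default_rng(0).normal(size=(200, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Per-feature contributions to a single decision; feature names are illustrative
feature_names = ["income", "tenure_months", "utilisation", "age"]
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
```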
D5 · Question 14 of 15

We have technical documentation for each AI system that meets Article 11 requirements — system purpose, capabilities, limitations, training data description, performance metrics, and known risks.

1 None
2 Partial
3 Some systems
4 Most
5 All, Article 11 compliant
No documentation → Article 11 compliant
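
A documentation skeleton can mirror the items this question lists (Annex IV of the Act spells out the full required content). The keys in this illustrative template are assumptions about how you might structure it.

```python
# Illustrative skeleton; Annex IV defines the full required content
TECHNICAL_DOCUMENTATION = {
    "system_purpose": "",
    "capabilities": [],
    "limitations": [],
    "training_data_description": "",
    "performance_metrics": {},        # e.g. {"AUC": 0.91, "ECE": 0.04}
    "known_risks": [],
}
```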
D5 · Question 15 of 15

Humans can meaningfully override our AI systems — there is a real human-in-the-loop for high-stakes decisions, not just a theoretical override that nobody uses in practice.

1 No override
2 Theoretical
3 Exists
4 Used
5 Enforced
No human oversight → Enforced oversight
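
One way to demonstrate that the override is real rather than theoretical is to log every human override and review the volume. The field names and file path in this sketch are illustrative.

```python
import json
from datetime import datetime, timezone

def record_override(system_id, decision_id, model_output, human_decision, reviewer, reason):
    """Append-only override log; its volume is evidence the loop is used in practice."""
    entry = {
        "system_id": system_id,
        "decision_id": decision_id,
        "model_output": model_output,
        "human_decision": human_decision,
        "reviewer": reviewer,
        "reason": reason,
        "at": datetime.now(timezone.utc).isoformat(),
    }
    with open("override_log.jsonl", "a") as f:   # path is illustrative
        f.write(json.dumps(entry) + "\n")
```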

See your compliance score

Answer all 15 questions above, then calculate your score. Results appear instantly — an overall score out of 100, a score by dimension with specific findings, and your priority findings — no email required. Scroll up to review or change any answer before submitting.


Ready to close these gaps?

The ACAI audit runs real engineering tests on your production systems and delivers a findings register mapped to the exact EU AI Act articles — with a 90-day remediation roadmap your team can execute.