
EU AI Act Implementation Sprint: A 90-Day Playbook for Enterprise Compliance

The EU AI Act is no longer a regulation on the horizon. Prohibited AI practices have been enforceable since February 2025. General-purpose AI obligations have applied since August 2025. And on 2 August 2026 — five months from now — the full weight of high-risk AI system requirements under Annex III comes into force, bringing with it a penalty structure that exceeds even the GDPR: up to €35 million or 7% of global annual turnover for the most serious violations, and up to €15 million or 3% for non-compliance with high-risk obligations.

Most enterprises understand the regulation conceptually. Far fewer have translated that understanding into an operational compliance programme. An appliedAI study of 106 enterprise AI systems found that 40% had unclear risk classifications. EY's global survey found that a majority of C-suite leaders now cite regulatory non-compliance as their primary AI risk. Over half of organisations still lack a basic inventory of the AI systems they have in production. For those organisations, the gap between where they are and where they need to be by August is not a documentation problem — it is an operational one.

This guide provides a structured 90-day sprint to close that gap. It covers AI system inventory, EU AI Act risk classification, impact assessment, technical controls, governance policy, and the compliance infrastructure needed to produce auditable evidence that your AI deployment meets the regulation's requirements.

The Regulatory Context You're Operating In

The AI Act's staggered timeline was designed to give organisations time to prepare. That transition period is now functionally over for the most consequential provisions. From 2 August 2026, organisations deploying high-risk AI systems in the Annex III categories — hiring and recruitment algorithms, credit scoring, biometric identification, critical infrastructure management, educational assessment, law enforcement tools, migration and border control, and AI affecting access to essential services — must demonstrate full compliance with Articles 9 through 49.

This means documented, ongoing risk management systems under Article 9. Training data governance under Article 10. Complete technical documentation under Article 11 and Annex IV. Automatic logging of system events under Article 12. Transparency and provision of information to deployers under Article 13. Human oversight mechanisms under Article 14. Accuracy, robustness, and cybersecurity controls under Article 15. A quality management system under Article 17. Conformity assessment before market entry. EU database registration under Article 49.

The regulation applies extraterritorially, mirroring the GDPR's scope. Any organisation providing or deploying AI systems that produce outputs affecting EU residents must comply, regardless of where it is headquartered. Finland became the first member state with fully operational AI Act enforcement powers in January 2026; other national competent authorities are activating throughout the first half of the year.

One additional complication for organisations already managing GDPR obligations: the two frameworks overlap substantially. High-risk AI systems processing personal data trigger both a Fundamental Rights Impact Assessment under AI Act Article 27 and a data protection impact assessment under GDPR Article 35. The most efficient compliance path integrates these into a unified assessment process rather than running parallel exercises.

Step 1: Build Your AI System Inventory

You cannot classify, assess, or govern what you cannot find. The most common structural failure in enterprise AI compliance is not poor documentation — it is that significant AI deployments are entirely invisible to the compliance function.

The inventory problem takes three forms. First, internal models developed by product or data science teams that were shipped without any compliance review process. Second, third-party AI tools and APIs integrated as software dependencies — hiring platforms with algorithmic screening, credit tools with embedded scoring models, analytics platforms with automated segmentation — that are treated as vendor services rather than AI systems requiring classification. Third, shadow AI: employees using external generative AI tools that interact with production data, customer data, or sensitive business processes, creating compliance exposure that no engineering team is currently measuring.

A complete AI system registry must document every AI system across all three categories. For each system, you need to capture the use case and intended purpose, the data types and personal data categories it processes, the system owner and accountable executive, the vendor or provider if third-party, the deployment context and affected population, and a preliminary risk classification. The registry is not a one-time project. The Annex III list of high-risk applications is not static — the European Commission has authority to update it based on technological developments — so the inventory process must be embedded as an ongoing operational practice rather than a sprint deliverable.
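To make this concrete, here is a minimal sketch of one registry entry as a data structure, assuming Python 3.10+ for the type syntax. Every field name is an illustrative assumption rather than a prescribed schema; the point is that each attribute listed above becomes a recorded, queryable field rather than a row in a slide deck.

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"
    UNCLASSIFIED = "unclassified"

@dataclass
class AISystemRecord:
    """One registry entry, covering internal, third-party, and shadow-AI systems alike."""
    system_id: str
    name: str
    intended_purpose: str
    origin: str                     # "internal" | "third_party" | "shadow"
    data_categories: list[str]      # personal data categories processed
    system_owner: str               # the accountable executive
    deployment_context: str         # affected population and decision context
    vendor: str | None = None       # provider, if third-party
    risk_tier: RiskTier = RiskTier.UNCLASSIFIED
    classification_rationale: str = ""
    last_reviewed: date | None = None
```

Whether the registry lives in a GRC platform or a plain database, the same principle holds: the classification status and its rationale travel with the system record, not in a separate document.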

Step 2: Classify Your AI Systems Under the Risk Tiers

The AI Act's four-tier risk structure determines the entire scope of your compliance obligations. Getting classification wrong in either direction is costly: over-classifying wastes compliance resources on unnecessary controls, while under-classifying means deploying a regulated system without the required safeguards and carrying the legal and reputational exposure that follows.

Prohibited systems are those the regulation bans outright. Since February 2025, AI systems that use subliminal or manipulative techniques to distort behaviour, exploit vulnerabilities of specific groups, perform real-time remote biometric identification in public spaces outside the narrow law enforcement exceptions, conduct untargeted scraping of facial images for recognition databases, infer emotions in workplace or educational settings without safety justification, and perform social scoring of individuals by public or private actors are all prohibited. Penalties reach €35 million or 7% of global turnover. If your AI inventory surfaces systems in these categories, discontinuation is the only compliant path.

High-risk systems under Annex III carry the full burden of Articles 9 through 49. The categories include AI used in the management and operation of critical infrastructure; AI systems deployed as safety components in road traffic, water, gas, electricity, or digital infrastructure management; educational and vocational training assessment; employment, workforce management, and access to self-employment; access to essential private and public services including credit scoring, insurance risk assessment, and emergency services dispatch; law enforcement purposes including risk assessment of individuals and crime analytics; migration, asylum, and border control; and administration of justice and democratic processes. If your organisation operates hiring algorithms, credit scoring models, biometric verification, or content moderation systems affecting access to services, there is a high probability you have Annex III systems in your portfolio.

Limited-risk systems — chatbots, recommendation engines, generative AI with no significant rights impacts — face primarily transparency obligations under Article 50. Users interacting with chatbots must be informed they are dealing with an AI system. AI-generated content, particularly deepfakes, must carry machine-readable watermarks. These requirements take effect 2 August 2026. Minimal-risk systems — spam filters, inventory management tools, AI-enabled games — face no specific obligations, though voluntary codes of conduct are encouraged.

The practical classification challenge is the middle ground. The appliedAI study found 18% of enterprise systems were clearly high-risk, but 40% had ambiguous classifications — primarily in critical infrastructure, employment, law enforcement, and product safety areas. For borderline cases, the European Commission published implementation guidelines in February 2026, and organisations should document their classification rationale thoroughly regardless of the conclusion reached. Ambiguity about classification is not a defence against enforcement, and an undocumented classification decision only compounds the exposure.
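Whatever tooling you use, that rationale should be captured as a first-class record rather than buried in email threads. A hypothetical sketch in Python; the ClassificationDecision type and its fields are invented for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ClassificationDecision:
    system_id: str
    risk_tier: str                    # "prohibited" | "high" | "limited" | "minimal"
    annex_iii_category: str | None    # e.g. an Annex III point, or None if not high-risk
    rationale: str                    # why the tier applies, or why it does not
    assessed_by: str
    assessed_at: str

def classify(system_id: str, risk_tier: str, rationale: str,
             assessed_by: str, annex_iii_category: str | None = None) -> ClassificationDecision:
    """Record a classification together with its reasoning. The documented
    rationale is the defensible artefact, even when the conclusion is
    'not high-risk'."""
    if not rationale.strip():
        raise ValueError("A classification without a documented rationale is unusable as evidence.")
    return ClassificationDecision(
        system_id=system_id,
        risk_tier=risk_tier,
        annex_iii_category=annex_iii_category,
        rationale=rationale,
        assessed_by=assessed_by,
        assessed_at=datetime.now(timezone.utc).isoformat(),
    )
```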

Step 3: Conduct an AI DPIA for Each High-Risk System

An AI Data Protection Impact Assessment combines the familiar structure of a GDPR DPIA with the expanded scope that AI systems demand. The GDPR DPIA addresses risks to personal data subjects. The AI Act's Fundamental Rights Impact Assessment under Article 27 addresses broader systemic risks: algorithmic bias and discrimination, impacts on groups that may not be individually identifiable as data subjects, explainability and transparency of automated decisions, safety and reliability of outputs, and whether human oversight mechanisms can actually intercept harmful decisions before they take effect.

Where a high-risk AI system processes personal data — which in practice is almost always — organisations should conduct a unified assessment addressing both frameworks simultaneously. The AI Act explicitly anticipates this, stating that a FRIA may complement a DPIA. The recommended sequencing is DPIA first, then extended to cover the broader FRIA dimensions: algorithmic risk, bias testing results, explainability documentation, human override mechanisms, and the system's post-market monitoring plan.

The core components of an AI DPIA are as follows. The system description section documents the AI system's purpose, intended use case, affected population, data inputs and outputs, decision logic at a functional level, and how its outputs feed into consequential decisions. The data governance section documents training, validation, and testing dataset provenance — sources, collection methodology, labelling procedures, data augmentation — and records bias assessment results showing statistical model performance across demographic segments. The risk assessment section identifies foreseeable risks to individual rights (including discrimination, privacy violation, and denial of services), estimates their severity and likelihood, and documents the mitigation measures applied. The human oversight section describes the mechanism by which a human can review, override, or halt the system's decisions, who has authority to exercise it, and under what conditions it is triggered. The residual risk section documents what risks remain after controls are applied and why they are acceptable. The review schedule sets out when the assessment will be updated, particularly after any significant modification to the system.
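One way to keep those six sections consistent across systems is to start every assessment from a shared skeleton. The structure below is an illustrative Python rendering of the sections just described, not an official template; the keys and fields are assumptions:

```python
# Skeleton of a unified AI DPIA / FRIA document, mirroring the six sections above.
AI_DPIA_TEMPLATE: dict[str, dict] = {
    "system_description": {
        "purpose": "", "intended_use": "", "affected_population": "",
        "data_inputs": [], "data_outputs": [], "decision_logic_summary": "",
    },
    "data_governance": {
        "dataset_provenance": [],        # sources, collection method, labelling
        "bias_assessment_results": {},   # metrics per demographic segment
    },
    "risk_assessment": {
        "identified_risks": [],          # each with severity, likelihood, mitigations
    },
    "human_oversight": {
        "mechanism": "", "authority_holder": "", "trigger_conditions": [],
    },
    "residual_risk": {"remaining_risks": [], "acceptance_rationale": ""},
    "review_schedule": {"next_review": "", "update_on_retraining": True},
}
```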

This documentation is not produced once and filed. Article 9's risk management obligation is explicitly continuous — the system must be reviewed and updated throughout its lifecycle. For organisations running continuous training pipelines, where model versions in production may differ materially from the version that was originally assessed, governance hooks in the ML pipeline that trigger documentation updates on retraining are a technical necessity, not an optional enhancement. The intersection of AI systems and GDPR compliance creates particular complications here: if personal data was used in training and a data subject exercises their erasure right, the model's statistical patterns cannot be surgically removed after the fact — prevention through automated DPIA processes before training data enters the pipeline is the only viable approach.
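What such a governance hook might look like, as a minimal sketch: it assumes a Python training pipeline, stands in for your documentation store with a plain dict, and the name requires_dpia_refresh is invented for illustration.

```python
import functools
from datetime import datetime, timezone

def requires_dpia_refresh(doc_registry: dict):
    """Pipeline hook: wrap a training entry point so every retraining run
    marks the system's DPIA and technical documentation as stale until a
    human re-reviews them. Downstream release gates should refuse to
    deploy a model whose documentation status is 'stale'."""
    def decorator(train_fn):
        @functools.wraps(train_fn)
        def wrapper(system_id: str, *args, **kwargs):
            model = train_fn(system_id, *args, **kwargs)
            doc_registry[system_id] = {
                "documentation_status": "stale",
                "reason": "model retrained",
                "retrained_at": datetime.now(timezone.utc).isoformat(),
            }
            return model
        return wrapper
    return decorator

# Usage (illustrative):
# @requires_dpia_refresh(doc_registry)
# def train_credit_model(system_id: str, dataset): ...
```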

Step 4: Implement Technical and Governance Controls

For high-risk systems, the AI Act's technical requirements are specific and demanding. The risk management system under Article 9 must be a continuous process identifying and evaluating known and foreseeable risks to health, safety, and fundamental rights throughout the system's lifecycle. It must document residual risks, specify testing procedures, and be updated when material changes occur. This is not a risk register in a spreadsheet — it is a governed, documented, continuously maintained process.

Data governance under Article 10 requires that training, validation, and testing datasets undergo quality management ensuring they are relevant, sufficiently representative, and as free as possible of errors and biases. Where sensitive personal data must be processed to detect and correct bias — a tension the Act explicitly acknowledges with the GDPR's data minimisation principle — the regulation provides a specific legal basis for doing so. Dataset documentation must record sources, collection methodologies, labelling procedures, and the results of bias assessments across the demographic dimensions relevant to the system's use case.
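As an illustration of the kind of bias evidence Article 10 expects to see documented, the sketch below computes positive-outcome rates per demographic segment and a simple disparity ratio. The four-fifths threshold referenced in the comments is a heuristic borrowed from employment-selection practice, not a figure the AI Act defines; the metrics that matter are the ones relevant to your system's use case.

```python
def selection_rates(decisions: list[dict]) -> dict[str, float]:
    """Positive-outcome rate per demographic segment. Each decision record
    is assumed to look like {"segment": "...", "outcome": 0 or 1}."""
    by_segment: dict[str, list[int]] = {}
    for d in decisions:
        by_segment.setdefault(d["segment"], []).append(d["outcome"])
    return {seg: sum(o) / len(o) for seg, o in by_segment.items()}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest segment rate divided by the highest. The common 'four-fifths'
    heuristic treats values below 0.8 as warranting investigation."""
    return min(rates.values()) / max(rates.values())

# e.g. rates = selection_rates(hiring_decisions)   # {"segment_a": 0.42, "segment_b": 0.31}
#      disparate_impact_ratio(rates)               # 0.738 -> investigate
```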

Technical documentation under Article 11 and Annex IV is one of the most underestimated compliance burdens. It requires the system's general description, intended purpose, and interaction with hardware; a detailed description of the development process including design specifications and data requirements; performance metrics, accuracy benchmarking, and known limitations; a description of the risk management system; a lifecycle change log; a list of harmonised standards applied; the EU declaration of conformity; and the post-market monitoring plan. Organisations that practise agile development without structured documentation find this requirement particularly difficult to satisfy retroactively for systems already in production.

Automatic logging under Article 12 requires that high-risk AI systems technically allow recording of events over the system's lifetime sufficient to trace operation, identify situations presenting risk, and support post-market monitoring. The logging capability is not optional and cannot be added later as a documentation layer — it must be built into the system's architecture. Human oversight under Article 14 requires that systems be designed to allow natural persons to understand their capabilities and limitations, monitor operation, override or halt outputs, and intervene when the system fails to perform as intended. This is a design requirement, not a policy statement — the override mechanism must actually exist and be exercisable in practice.
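A minimal sketch of what Article 12-style event recording could look like, using Python's standard logging module. The event names and fields are assumptions, chosen to show inference events and Article 14 human overrides landing in the same traceable, append-only record:

```python
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("ai_audit")

def log_event(system_id: str, event_type: str, **details) -> None:
    """Emit one structured audit record: timestamp, system, event type, and
    enough detail (model version, input reference, output, reviewer) that an
    individual decision can be traced after the fact."""
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event_type,   # e.g. "inference", "human_override", "risk_flag"
        **details,
    }))

# e.g. log_event("credit-scoring-v3", "inference", model_version="3.2.1",
#                input_ref="app-88411", output="decline", confidence=0.71)
#      log_event("credit-scoring-v3", "human_override", decision_ref="app-88411",
#                reviewer="j.smith", new_outcome="approve", reason="income verified")
```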

Conformity assessment before market entry varies by system type. Most Annex III high-risk systems allow internal self-assessment under Annex VI, meaning the provider themselves conducts and documents the conformity assessment against the Act's requirements. Certain categories — biometric identification, emotion recognition systems, remote biometric categorisation — require third-party assessment by a notified body. The assessment must be completed, the EU declaration of conformity signed, CE marking affixed, and the system registered in the EU database under Article 49 before the system can be placed on the market or put into service.

Step 5: Establish AI Governance Policies and Documentation

Technical controls without governance infrastructure create compliance evidence that cannot be maintained, updated, or produced on demand. The AI Act's accountability model requires that the compliance programme exist as an operational system, not a set of documents assembled for a one-time audit.

The governance structure requires a designated accountable executive for each AI system — a single person responsible for ensuring the system maintains compliance throughout its lifecycle. Cross-functional ownership involving legal, privacy, data science, engineering, and business units must be formalised, not assumed. An AI governance framework that operates as a standing process — not a project that closes when the documentation is complete — is the structural foundation that makes the rest sustainable.

Required documentation includes an AI governance policy setting out the organisation's principles and operational rules for AI development and deployment; a risk management framework specifying how systems are assessed, what triggers re-assessment, and who owns remediation; a vendor AI risk assessment process covering third-party AI tools and APIs with contractual provisions allocating compliance responsibilities; an incident reporting procedure specifying how serious incidents are detected, investigated, and reported to national authorities under Article 73; and a lifecycle monitoring plan specifying how system performance, bias, and drift are continuously tracked after deployment.

The AI Act and GDPR governance infrastructure share significant overlap — Records of Processing Activities, DPIAs, data subject rights workflows, and consent management are all relevant to AI systems processing personal data. Organisations that have already built mature GDPR compliance programmes have a governance foundation to build on. The most efficient approach integrates AI Act obligations into that existing structure rather than building a parallel programme from scratch.

The 90-Day Implementation Sprint

With the August 2026 deadline approximately five months away, and conformity assessment alone typically taking six to twelve months for complex systems, the time pressure is significant. The following phased approach prioritises the activities with the highest compliance risk and the longest lead times.

Days 1 through 30 are the assessment phase. The primary output is a complete AI system registry covering internal models, third-party AI integrations, and generative AI tool usage. Every system should be mapped to a preliminary risk classification with documented rationale. Vendor contracts for AI tools should be reviewed to determine where your organisation sits in the provider-deployer-importer structure, because that determines which obligations fall on you and which on the vendor. Any systems that appear to fall in prohibited categories should be escalated immediately for legal review and discontinuation planning. By end of day 30, you should have full visibility into your AI portfolio and a prioritised list of systems requiring high-risk compliance work.

Days 31 through 60 are the risk and governance phase. For each system classified as high-risk, a unified AI DPIA and Fundamental Rights Impact Assessment should be initiated. The data governance documentation for training datasets should be compiled or commissioned. Human oversight mechanisms should be audited — not just policy-stated but technically verified. The AI governance policy, risk management framework, and vendor assessment process should be drafted and approved. Where technical documentation gaps exist for systems already in production, documentation reconstruction should be resourced as a dedicated project.

Days 61 through 90 are the operational readiness phase. Conformity assessments should be completed for systems not yet assessed. EU database registrations should be filed for systems requiring them. Logging infrastructure should be verified against Article 12's requirements. The post-market monitoring plan should be operational, not drafted. Training should be delivered to the teams responsible for human oversight. A documentation repository should be established — not a shared drive with unversioned files, but a governed system that maintains audit trails and produces evidence packages on demand. Internal audit should conduct a readiness review against the compliance checklist before the August deadline.
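As a sketch of what "evidence packages on demand" can mean in practice: the function below, with an assumed directory layout and an invented name, walks a folder of compliance artefacts for one system and emits a manifest with content hashes, so a package handed to an auditor can later be verified as unaltered.

```python
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def build_evidence_manifest(doc_dir: str, system_id: str) -> dict:
    """Manifest of every compliance artefact for one system, with SHA-256
    content hashes so the evidence package is tamper-evident."""
    manifest = {
        "system_id": system_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "artefacts": [],
    }
    for path in sorted(Path(doc_dir).rglob("*")):
        if path.is_file():
            manifest["artefacts"].append({
                "file": str(path.relative_to(doc_dir)),
                "sha256": hashlib.sha256(path.read_bytes()).hexdigest(),
            })
    return manifest

# e.g. build_evidence_manifest("evidence/credit-scoring-v3", "credit-scoring-v3")
```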


Common Implementation Mistakes

The compliance failures that will appear in 2026 enforcement actions will not generally be organisations that tried and got the details wrong. They will be organisations that never classified their systems, never built the logging infrastructure, and shipped AI without governance gates. Several specific failures are worth naming.

Treating AI compliance as purely a legal exercise — producing a risk assessment document without changing any engineering or governance practice — creates paper compliance that does not survive regulatory scrutiny. The Act's requirements are technical and operational, not just documentary. The documentation must reflect actual system state; documentation assembled manually after the fact is stale from the moment it is written.

Ignoring third-party AI is the most pervasive gap. Organisations that have conducted thorough internal assessments are often surprised to discover that their most significant high-risk AI exposure is a vendor platform they use for hiring, credit decisioning, or customer service routing. The deployer obligations under the AI Act apply regardless of whether the AI system is built in-house. Vendor contracts must allocate compliance responsibilities, and vendor AI assessments must be part of the governance programme.

Skipping impact assessments for systems not clearly in Annex III categories creates exposure as the Commission updates the list and as national authorities develop enforcement priorities. Undocumented classification decisions — even decisions that correctly conclude a system is not high-risk — leave organisations unable to demonstrate the assessment was conducted. Document the reasoning, not just the conclusion.

Failing to integrate AI governance with the broader privacy programme creates duplication, gaps, and inconsistent evidence. The DPIA obligations, RoPA entries, data subject rights workflows, and consent management infrastructure that support GDPR compliance are directly relevant to AI systems processing personal data. A unified approach across both frameworks is more defensible and more efficient than running them in parallel.

EU AI Act Compliance Checklist

The following checklist covers the core operational requirements for high-risk AI system compliance ahead of the August 2026 enforcement date.

- AI system inventory completed and maintained as a living registry.
- Risk classification documented for every AI system, including third-party tools and vendor integrations, with rationale recorded.
- Prohibited AI practices confirmed absent from the portfolio, or discontinuation in progress.
- AI DPIAs and Fundamental Rights Impact Assessments conducted for all Annex III high-risk systems and filed in a governed documentation repository.
- Training and validation dataset documentation complete, including bias assessment results.
- Technical documentation under Annex IV complete for each high-risk system and updated after any significant modification.
- Automatic logging infrastructure built and verified against Article 12 requirements.
- Human oversight mechanisms technically implemented and tested — not just described in policy.
- Conformity assessment completed and EU database registration filed for applicable systems.
- Quality management system under Article 17 documented and operational.
- Post-market monitoring plan active, with performance tracking, drift detection, and incident escalation procedures defined.
- Vendor AI contracts reviewed and updated to allocate compliance responsibilities and include Article 28-equivalent processor obligations where relevant.
- AI governance policy, risk management framework, and incident reporting procedure approved and communicated.
- Internal audit readiness review conducted before August 2026.


FAQ

What is a high-risk AI system under the EU AI Act? A high-risk AI system is one that falls within the Annex III categories — covering AI used in critical infrastructure, employment, essential services, law enforcement, education, migration, and democratic processes — or that is a safety component of a regulated product under Annex I. These systems face the full compliance burden of Articles 9 through 49.

Do companies need an AI DPIA? High-risk AI systems processing personal data trigger both a GDPR DPIA under Article 35 and a Fundamental Rights Impact Assessment under AI Act Article 27. Organisations should conduct a unified assessment addressing both frameworks simultaneously, with the DPIA conducted first and then extended to cover the AI Act's broader fundamental rights dimensions.

When does the EU AI Act take effect? Key provisions are already in force. Prohibited AI practices have been enforceable since February 2025. GPAI obligations have applied since August 2025. High-risk AI system requirements under Annex III take effect 2 August 2026. Full application to all risk categories completes in August 2027.

What documentation is required for compliance? Providers of high-risk AI systems must maintain technical documentation under Annex IV, a continuous risk management system under Article 9, training data governance records under Article 10, automatic logs under Article 12, a quality management system under Article 17, a conformity assessment under Article 43, and an EU database registration under Article 49. Documentation must reflect actual current system state, not a historical snapshot.

Does the EU AI Act apply to non-EU companies? Yes. Any organisation providing or deploying AI systems whose outputs affect EU residents must comply, regardless of where it is headquartered. The extraterritorial scope mirrors the GDPR's approach.

What penalties apply for non-compliance? Engaging in prohibited AI practices carries fines up to €35 million or 7% of global annual turnover, whichever is higher. Non-compliance with high-risk system obligations carries fines up to €15 million or 3% of turnover. Providing incorrect or misleading information to authorities carries fines up to €7.5 million or 1% of turnover. These levels exceed the GDPR's maximum penalties.


Conclusion

The August 2026 deadline is not a theoretical future compliance event. National enforcement authorities are operational. The penalty framework is calibrated to create board-level attention at any company size. And the compliance work required for high-risk AI systems — inventory, classification, impact assessment, technical documentation, conformity assessment, database registration, post-market monitoring — cannot be compressed into a final month of activity. Organisations starting today are already late for the most demanding requirements.

What they are not is out of time. A disciplined 90-day sprint, focused on the highest-risk systems first and building governance infrastructure that makes ongoing compliance operational rather than episodic, can achieve meaningful readiness before August. The organisations that will face enforcement actions are those that still haven't started.
