The EU AI Act is now in force — and for many technology teams, the real work of compliance is just getting started. With the first set of obligations already active and the bulk of enforcement deadlines arriving throughout 2026 and 2027, this is no longer a future concern. It is a present one.
This guide breaks down the EU AI Act’s risk-tier framework, explains which systems your organization likely needs to evaluate, and outlines the concrete steps engineering and compliance teams should take right now.
What the EU AI Act Actually Requires
The EU AI Act (Regulation (EU) 2024/1689) is a comprehensive regulatory framework that classifies AI systems by risk level and attaches corresponding obligations. It is not a sector-specific rule: it applies across industries to any organization placing AI systems on the EU market, or whose systems' outputs are used to affect people in the EU, regardless of where the organization is headquartered.
Unlike the GDPR, which primarily governs data, the AI Act governs AI systems themselves: how they are developed, placed on the market, and used. That means a U.S. company running an AI-powered hiring tool that filters resumes of EU applicants is within scope, even if the company has no EU office.
The Risk Tiers: Prohibited, High-Risk, and General Purpose
The Act sorts AI systems into four broad risk categories, with obligations scaling upward based on potential harm: prohibited practices at the top, high-risk systems below them, then limited-risk systems with lighter transparency duties and minimal-risk systems with essentially none. General purpose AI models are governed by a separate, parallel set of rules. The sections below cover the three areas where most compliance work concentrates.
Prohibited AI Practices
Certain uses are banned outright, and the prohibitions have applied since February 2025. These include social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions), AI that exploits vulnerabilities related to age, disability, or social and economic situation, and systems that infer sensitive attributes like political views or sexual orientation from biometric data. Organizations still operating systems in these categories must cease doing so immediately.
High-Risk AI Systems
High-risk AI is where most enterprise compliance work concentrates. The Act defines high-risk systems as those used in sectors including critical infrastructure, education and vocational training, employment and worker management, access to essential services, law enforcement, migration and border control, and the administration of justice. If your AI system makes or influences decisions in any of these areas, it likely qualifies.
High-risk obligations are substantial. They include conducting a conformity assessment before deployment, maintaining technical documentation, implementing a risk management system, ensuring human oversight capabilities, logging and audit trail requirements, and registering the system in the EU’s forthcoming AI database. These are not lightweight checkbox exercises — they require dedicated engineering and governance effort.
General Purpose AI (GPAI) Models
The GPAI provisions are particularly relevant to organizations building on top of foundation models like GPT-4, Claude, Gemini, or Mistral. Any organization that develops or fine-tunes a GPAI model for distribution must comply with transparency and documentation requirements. Models deemed to pose “systemic risk” (broadly: models trained with over 10^25 FLOPs) face additional obligations including adversarial testing and incident reporting.
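That 10^25 figure refers to cumulative training compute, so a rough estimate is often enough to tell whether a model you train or heavily fine-tune is anywhere near the threshold. The sketch below uses the widely cited approximation of roughly 6 × parameters × training tokens for dense transformer training FLOPs; the model sizes are illustrative assumptions, not figures drawn from the Act.

```python
# Rough check against the Act's 10^25 FLOPs systemic-risk presumption.
# Uses the common approximation for dense transformers:
#   training FLOPs ~= 6 * parameters * training tokens.
# The example model sizes below are illustrative assumptions, not official figures.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Back-of-the-envelope training compute for a dense transformer."""
    return 6.0 * n_parameters * n_training_tokens

candidates = {
    "7B parameters, 2T tokens": (7e9, 2e12),
    "70B parameters, 15T tokens": (70e9, 15e12),
}

for name, (params, tokens) in candidates.items():
    flops = estimated_training_flops(params, tokens)
    flag = "meets" if flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS else "is below"
    print(f"{name}: ~{flops:.1e} FLOPs, {flag} the 1e25 presumption threshold")
```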
Even organizations that only consume GPAI APIs face downstream documentation obligations if they deploy those capabilities in high-risk contexts. The compliance chain runs all the way from provider to deployer.
Key Enforcement Deadlines to Know
The Act’s timeline is phased, and the earliest deadlines have already passed. Here is where things stand as of early 2026:
- February 2025: Prohibited AI practices provisions became enforceable. Organizations should already have audited for these.
- August 2025: GPAI model obligations became applicable. Providers of general purpose AI models must now comply with transparency and documentation rules.
- August 2026: High-risk AI obligations for most sectors become enforceable. This is the dominant near-term deadline for enterprise AI teams.
- August 2027: High-risk AI that functions as a safety component of products covered by existing EU product legislation gets an extended deadline expiring here.
The August 2026 deadline is now under six months away. Organizations that have not begun their compliance programs are running out of runway.
Building a Practical Compliance Program
Compliance with the AI Act is fundamentally an engineering and governance problem, not just a legal one. The teams building and operating AI systems need to be actively involved from the start. Here is a practical framework for getting organized.
Step 1: Build an AI System Inventory
You cannot manage what you have not catalogued. Start with a comprehensive inventory of all AI systems in use or development: the vendor or model, the use case, the decision types the system influences, and the populations affected. Include third-party SaaS tools with AI features — these are frequently overlooked and can still create compliance exposure for the deployer.
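A lightweight structured record per system is usually enough to get started. The sketch below shows one possible shape for such a record in Python; the field names and defaults are assumptions chosen for illustration, not terminology required by the Act.

```python
# One possible shape for an AI system inventory record.
# Field names and defaults are illustrative assumptions, not terms from the Act.
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    name: str
    owner_team: str                  # who is accountable internally
    vendor_or_model: str             # in-house model, SaaS feature, GPAI API, etc.
    use_case: str                    # what the system actually does
    decision_types: list[str]        # decisions it makes or influences
    affected_populations: list[str]  # e.g. job applicants, customers in the EU
    is_third_party_saas: bool = False
    risk_tier: str = "unclassified"  # filled in during Step 2
    last_reviewed: date | None = None

inventory: list[AISystemRecord] = [
    AISystemRecord(
        name="resume-screening-assistant",
        owner_team="Talent Acquisition",
        vendor_or_model="third-party SaaS with embedded ranking model",
        use_case="pre-screening of inbound job applications",
        decision_types=["shortlisting candidates for interview"],
        affected_populations=["job applicants, including EU residents"],
        is_third_party_saas=True,
    ),
]
```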
Many organizations are surprised by how many AI systems turn up in this exercise. Shadow AI adoption — employees using AI tools without formal IT approval — is widespread and must be addressed as part of the governance picture.
Step 2: Classify Each System by Risk Tier
Once inventoried, each system should be classified against the Act’s risk taxonomy. This is not always straightforward — the annexes defining high-risk applications are detailed, and reasonable legal and technical professionals may disagree about borderline cases. Engage legal counsel with AI Act expertise early, particularly for use cases in employment, education, or financial services.
Document your classification rationale. Regulators will scrutinize how organizations assessed their systems, and a well-documented good-faith analysis will matter if a classification decision is later challenged.
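One way to keep the classification auditable is to record the decision, the Annex III area it was matched against, and the rationale in the same place. The following sketch is a simplified illustration: the area list is an abridged paraphrase of Annex III, and the helper function is an assumption for illustration, not a substitute for legal analysis.

```python
# Simplified, illustrative risk-tier classification with a recorded rationale.
# The area list is an abridged paraphrase of Annex III; borderline cases still
# require legal review, and this helper is not a substitute for it.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    HIGH_RISK = "high_risk"
    LIMITED_OR_MINIMAL = "limited_or_minimal"
    # Prohibited practices are handled separately and should already have
    # been caught in the audit described earlier.

HIGH_RISK_AREAS = {
    "critical_infrastructure",
    "education_and_vocational_training",
    "employment_and_worker_management",
    "access_to_essential_services",
    "law_enforcement",
    "migration_and_border_control",
    "administration_of_justice",
}

@dataclass
class Classification:
    system_name: str
    tier: RiskTier
    matched_area: str | None
    rationale: str       # keep this: regulators will ask how you decided
    reviewed_by: str     # legal and technical sign-off

def classify(system_name: str, area: str, rationale: str, reviewed_by: str) -> Classification:
    is_high_risk = area in HIGH_RISK_AREAS
    return Classification(
        system_name=system_name,
        tier=RiskTier.HIGH_RISK if is_high_risk else RiskTier.LIMITED_OR_MINIMAL,
        matched_area=area if is_high_risk else None,
        rationale=rationale,
        reviewed_by=reviewed_by,
    )

record = classify(
    "resume-screening-assistant",
    "employment_and_worker_management",
    rationale="Ranks and shortlists applicants, directly influencing hiring decisions.",
    reviewed_by="Legal: J. Doe / Engineering: A. Example",
)
```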
Step 3: Address High-Risk Systems First
For any system classified as high-risk, the compliance checklist is substantial. You will need to implement or verify:

- A risk management system that is continuous rather than one-time
- Data governance practices covering training and validation data quality
- Technical documentation sufficient for a conformity assessment
- Automatic logging with audit trail capabilities
- Accuracy and robustness testing
- Mechanisms for meaningful human oversight that cannot be bypassed in operation
The human oversight requirement deserves special attention. The Act requires that high-risk AI systems be designed so that the humans overseeing them can “understand the capacities and limitations” of the system, detect and address failures, and intervene or override when needed. Bolting on a human-in-the-loop checkbox is not sufficient — the oversight must be genuine and effective.
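In practice, that means the decision path needs a point where a reviewer can see what the model recommended, act on or override it, and have both events land in the audit log under the same identifier. The sketch below illustrates that pattern with standard-library logging; the function names, fields, and example system are assumptions for illustration, not an API defined by the Act or any vendor.

```python
# Minimal illustration of automatic decision logging with a human override point.
# Function and field names are assumptions for illustration, not a standard API.
import json
import logging
from datetime import datetime, timezone
from uuid import uuid4

audit_log = logging.getLogger("ai_audit")
audit_log.setLevel(logging.INFO)
audit_log.addHandler(logging.FileHandler("ai_decisions.log"))

def record_decision(system: str, inputs_summary: dict, model_output: dict) -> str:
    """Log the model's recommendation before any action is taken on it."""
    decision_id = str(uuid4())
    audit_log.info(json.dumps({
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,
        "inputs_summary": inputs_summary,   # avoid logging raw personal data
        "model_output": model_output,
        "status": "pending_human_review",
    }))
    return decision_id

def record_human_review(decision_id: str, reviewer: str, accepted: bool, reason: str) -> None:
    """Log the human decision, including overrides, against the same decision_id."""
    audit_log.info(json.dumps({
        "decision_id": decision_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "reviewer": reviewer,
        "status": "accepted" if accepted else "overridden",
        "reason": reason,
    }))

# Example: the model recommends rejection and a reviewer overrides it.
d_id = record_decision(
    system="resume-screening-assistant",
    inputs_summary={"application_id": "A-1042", "features_used": 37},
    model_output={"recommendation": "reject", "score": 0.31},
)
record_human_review(d_id, reviewer="hiring-manager-07", accepted=False,
                    reason="Relevant experience not captured by the model's features.")
```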
Step 4: Review Your AI Vendor Contracts
The AI Act creates shared obligations across the supply chain. If you deploy AI capabilities built on a third-party model or platform, you need to understand what documentation and compliance support your vendor provides, whether your use case is within the vendor’s stated intended use, and what audit and transparency rights your contract grants you.
Many current AI vendor contracts were written before the AI Act’s obligations were clear. This is a good moment to review and update them, especially for any system you plan to classify as high-risk or any GPAI model deployment.
Step 5: Establish Ongoing Governance
The AI Act is not a one-time audit exercise. It requires continuous monitoring, incident reporting, and documentation maintenance for the life of a system’s deployment. Organizations should establish an AI governance function — whether a dedicated team, a center of excellence, or a cross-functional committee — with clear ownership of compliance obligations.
This function should own the AI system inventory, track regulatory updates (the Act will be supplemented by implementing acts and technical standards over time), coordinate with legal and engineering on new deployments, and manage the EU AI database registration process when it becomes required.
What Happens If You Are Not Compliant
The AI Act’s enforcement teeth are real. Fines for prohibited AI practices can reach €35 million or 7% of global annual turnover, whichever is higher. Violations of high-risk obligations carry fines of up to €15 million or 3% of global turnover, again whichever is higher. Supplying incorrect, incomplete, or misleading information to authorities can cost up to €7.5 million or 1% of global turnover.
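Because each tier is the higher of a fixed cap and a percentage of turnover, exposure scales with revenue. A small worked example, using the tiers above and a hypothetical turnover figure:

```python
# Worked example of the "whichever is higher" penalty structure.
# The turnover figure is hypothetical; the fine tiers are those cited above.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "high_risk_violation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(violation: str, global_annual_turnover_eur: float) -> float:
    fixed_cap, pct = FINE_TIERS[violation]
    return max(fixed_cap, pct * global_annual_turnover_eur)

# Hypothetical company with EUR 2 billion in global annual turnover.
for violation in FINE_TIERS:
    print(f"{violation}: up to EUR {max_fine(violation, 2_000_000_000):,.0f}")
```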
Each EU member state will designate national competent authorities for enforcement. The European AI Office, established in 2024, holds oversight authority for GPAI models and cross-border cases. Enforcement coordination across member states means that organizations cannot assume a low-profile presence in a smaller market will keep them below the radar.
The Bottom Line for Engineering Teams
The EU AI Act is the most consequential AI regulatory framework yet enacted, and it has real teeth for organizations operating at scale. The window for preparation before the August 2026 enforcement deadline is narrow.
The organizations best positioned for compliance are those that treat it as an engineering problem from the start: building inventory and documentation into development workflows, designing for auditability and human oversight rather than retrofitting it, and establishing governance structures before they are urgently needed.
Waiting for perfect regulatory guidance is not a viable strategy — the Act is law, the deadlines are set, and regulators will expect good-faith compliance efforts from organizations that had ample notice. Start the inventory, classify your systems, and engage your legal and engineering teams now.
