Does the EU AI Act Apply to Your Product?
Your team shipped an AI feature. Maybe it scores credit applications. Maybe it screens resumes. Maybe it recommends treatment plans or flags students for intervention. Someone on your legal or compliance team just asked whether the EU AI Act applies. Here's how to find out.
The Act classifies AI systems by risk. The category that matters for most product and engineering teams is "high-risk," which covers systems that make or substantially influence decisions about people in these areas:
- Employment: recruiting, CV screening, interview evaluation, performance review, promotion, task allocation
- Financial services: credit scoring, creditworthiness evaluation, insurance pricing
- Healthcare: eligibility for health services, risk assessment for life and health insurance
- Education: admissions, assessment, learning pathway determination
- Essential services: public benefit eligibility, emergency dispatch prioritization
If your product touches any of these domains and uses AI in the decision path, your system is likely high-risk. That includes internal tools. An AI-powered HR system that influences promotion decisions carries the same obligations as a customer-facing credit scoring product.
There's an exception for AI that only performs narrow procedural tasks, improves previously completed human work, or handles purely preparatory steps. But if your system profiles people, it's high-risk regardless.
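As a first-pass triage, the criteria above can be sketched as a checklist. Everything here (the `SystemProfile` fields, the domain labels, the return strings) is illustrative; the output is a rough screen, not a legal determination:

```python
from dataclasses import dataclass

# Domains drawn from the high-risk areas listed above (labels are ours)
HIGH_RISK_DOMAINS = {
    "employment", "financial_services", "healthcare",
    "education", "essential_services",
}

@dataclass
class SystemProfile:
    domain: str             # business area the system operates in
    in_decision_path: bool  # does AI output make or influence the decision?
    profiles_people: bool   # profiling removes the narrow-task exception
    narrow_task_only: bool  # purely procedural or preparatory work

def classify(system: SystemProfile) -> str:
    """Rough first-pass triage. Not legal advice."""
    if system.domain in HIGH_RISK_DOMAINS and system.in_decision_path:
        # Profiling overrides the narrow-task exception
        if system.profiles_people or not system.narrow_task_only:
            return "likely high-risk"
        return "possible exception: narrow procedural task"
    return "not high-risk on these criteria alone"

print(classify(SystemProfile("employment", True, False, False)))
# prints: likely high-risk
```

The point of writing it down, even this crudely, is that the answer becomes reviewable: legal can argue with a checklist in a way they can't argue with a hunch.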
Customer service chatbots are generally not high-risk. They fall under "limited risk" with a transparency requirement: disclose that the user is interacting with AI. But if your chatbot makes or influences decisions about service eligibility, claims, or benefits, the classification shifts. Air Canada learned this when a tribunal held them liable for a chatbot's false promises about refund policies.
"We don't operate in the EU." If your AI system's output affects people in the EU, the Act applies. Full stop. Extraterritorial by design, modeled on GDPR's approach. And the EU isn't alone. Colorado's AI Act takes effect June 2026. Illinois made AI employment discrimination a civil rights violation. NYC requires independent bias audits for automated hiring tools. Canada's OSFI Guideline E-23 will require AI model risk management for every federally regulated financial institution by May 2027. Governance infrastructure that works across jurisdictions is cheaper to build once than to retrofit for each deadline.
What You'd Actually Need to Build
If your system is high-risk, the Act requires four things at the system level: traceability, human oversight, monitoring, and documentation. Here's what each means in engineering terms.
Traceability
Structured, automated event logging with metadata. What input was received, what output was generated, which model version ran, when it happened, and enough context to reconstruct the decision later. Minimum retention: six months.
If your system processes a credit application and a regulator asks "why was this applicant denied," you need to produce the specific data, prompt, model version, and output that led to that decision. Not a description of how the system generally works. The specific decision, retrievable.
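A minimal sketch of what that retrievability requires: every decision writes one structured record capturing the model version, input, output, and context. The field names, the JSONL file, and the `log_decision` helper are all illustrative, not prescribed by the Act:

```python
import json
import uuid
from datetime import datetime, timezone

def log_decision(model_version, input_payload, output, context):
    """Append one decision record. All field names are illustrative."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,  # exact version that ran
        "input": input_payload,          # what was received
        "output": output,                # what was generated
        "context": context,              # enough to reconstruct the decision
    }
    with open("decision_log.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

rec = log_decision(
    model_version="credit-scorer-2.3.1",
    input_payload={"applicant_id": "A-1042", "features_hash": "sha256:..."},
    output={"decision": "denied", "score": 412},
    context={"prompt_template": "v7", "feature_pipeline": "2024-11"},
)
```

The key design choice is logging at decision time, keyed by an event ID, rather than hoping application logs can be stitched together six months later.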
Human Oversight
The Act defines three oversight models:
- Human-in-the-loop: Active human participation in each decision cycle
- Human-on-the-loop: Real-time monitoring with the ability to intervene
- Human-in-command: Governance-level supervision with authority to override or shut down
Which model applies depends on the risk level and context. All three require that oversight mechanisms are usable by someone who isn't an ML engineer. A monitoring dashboard only your data science team can interpret doesn't satisfy the requirement.
The engineering implication: build intervention mechanisms, not just monitoring. Dashboards, alert systems, and override controls that someone in compliance or product management can actually operate.
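One way to make that concrete is a human-in-the-loop queue where no AI recommendation takes effect until a named reviewer approves or overrides it. The class and method names here are illustrative:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewableDecision:
    """A decision held for human sign-off. Field names are illustrative."""
    decision_id: str
    ai_recommendation: str
    final_decision: Optional[str] = None
    reviewed_by: Optional[str] = None

class OversightQueue:
    """Human-in-the-loop: nothing takes effect without a reviewer."""

    def __init__(self):
        self.pending: dict[str, ReviewableDecision] = {}

    def submit(self, decision: ReviewableDecision):
        self.pending[decision.decision_id] = decision

    def approve(self, decision_id: str, reviewer: str):
        d = self.pending.pop(decision_id)
        d.final_decision, d.reviewed_by = d.ai_recommendation, reviewer
        return d

    def override(self, decision_id: str, reviewer: str, new_decision: str):
        d = self.pending.pop(decision_id)
        d.final_decision, d.reviewed_by = new_decision, reviewer
        return d

queue = OversightQueue()
queue.submit(ReviewableDecision("D-1", ai_recommendation="deny"))
result = queue.override("D-1", reviewer="compliance@example.com",
                        new_decision="approve")
print(result.final_decision)  # prints: approve
```

The structural point: the override is a first-class operation with an attributed reviewer, not an engineer editing a database row after the fact.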
Monitoring
Beyond logging individual decisions, the Act requires ongoing monitoring of system behavior in production: tracking accuracy, detecting drift, and identifying patterns that might indicate bias or degraded performance.
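A minimal sketch of one piece of this, drift detection: keep a rolling window of prediction outcomes and alert when accuracy falls below a baseline. The window size, baseline, and tolerance here are illustrative choices, not values from the Act:

```python
from collections import deque

class DriftMonitor:
    """Rolls a window of outcomes and flags degraded accuracy.
    Thresholds and window size are illustrative."""

    def __init__(self, window=500, baseline_accuracy=0.90, tolerance=0.05):
        self.outcomes = deque(maxlen=window)  # oldest results fall off
        self.baseline = baseline_accuracy
        self.tolerance = tolerance

    def record(self, prediction, actual):
        self.outcomes.append(prediction == actual)

    def current_accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else None

    def drifted(self):
        acc = self.current_accuracy()
        return acc is not None and acc < self.baseline - self.tolerance

monitor = DriftMonitor(window=100, baseline_accuracy=0.90, tolerance=0.05)
for i in range(100):
    monitor.record(prediction=1, actual=1 if i < 80 else 0)  # 80% accuracy
print(monitor.drifted())  # prints: True
```

In practice the alert would page a human or feed the oversight dashboard; the sketch only shows the detection step.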
This intersects with the shadow AI problem. Over half of department-level AI initiatives operate without formal oversight. If you don't have a complete inventory of what AI systems are running in your organization and what data flows into them, compliance monitoring is impossible. The inventory is the prerequisite.
Documentation
Technical documentation for high-risk systems covers nine areas:
- System description and intended purpose
- Development methodology
- Design choices and rationale
- Data requirements and governance
- Performance metrics
- Risk management measures
- System changes and lifecycle
- Applicable standards
- Post-market monitoring procedures
Industry estimates: 40 to 80 hours per complex system, assuming your team documented design choices as they went. For teams that didn't, the cost is significantly higher because you're reconstructing decisions from memory.
The practical lesson: documentation is dramatically cheaper as a byproduct of your development process rather than a separate workstream under deadline pressure.
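One way to make documentation a byproduct is to capture each design decision as a small structured record at the moment it's made, then check coverage against the nine areas. The record shape and area labels below are illustrative:

```python
from dataclasses import dataclass
from datetime import date

# The nine documentation areas named above, as machine-checkable slots
AREAS = [
    "system_description", "development_methodology", "design_choices",
    "data_governance", "performance_metrics", "risk_management",
    "lifecycle_changes", "applicable_standards", "post_market_monitoring",
]

@dataclass
class DesignRecord:
    """One decision captured while it's being made. Fields illustrative."""
    area: str
    summary: str
    rationale: str
    recorded_on: str

def coverage(records):
    """Return the areas that still have no record at all."""
    covered = {r.area for r in records}
    return [a for a in AREAS if a not in covered]

records = [
    DesignRecord("design_choices",
                 "Chose gradient boosting over a deep network",
                 "Explainability requirements for credit decisions",
                 date.today().isoformat()),
]
print(len(coverage(records)))  # prints: 8
```

Running a coverage check in CI, the same way you'd check test coverage, turns the 40-to-80-hour documentation exercise into assembling records that already exist.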
The Deployer Trap
There's a detail worth understanding before your team scopes this work. The Act distinguishes between providers (who build or substantially modify AI systems) and deployers (who use systems built by others). Deployers have lighter obligations. Providers carry the heavy burden: full risk management, comprehensive documentation, conformity assessment, EU database registration, post-market monitoring.
The trap: fine-tuning a model, adding retrieval-augmented generation that enables high-risk use cases, or rebranding a third-party system can silently upgrade you from deployer to provider. Provider obligations are estimated at 5 to 10 times the resource intensity. If your engineering team has customized an LLM beyond basic prompt engineering, you need to evaluate which side of that line you're on.
A compliant foundation model does not make your system compliant. The compliance obligation is on your system architecture, not on the model provider.
What Happens When You Get It Wrong
The fines are well-publicized: up to 7% of global turnover for prohibited practices, 3% for high-risk non-compliance. But the enforcement actions that should concern engineering leaders target the systems themselves:
- Rite Aid: Facial recognition generated disproportionate false positives in predominantly Black and Asian communities. The FTC ordered the entire system deleted: all collected photos, videos, and the AI models. Years of development, destroyed.
- Everalbum: The FTC's first "algorithmic disgorgement" order forced destruction of not just improperly collected data, but the models trained on that data. If your model was built on data you shouldn't have used, you can lose the model entirely.
- UnitedHealth: The nH Predict algorithm denied healthcare claims, overriding treating physicians. Over 90% of denials were overturned on appeal. Deploying AI for consequential decisions without adequate oversight or accuracy validation is exactly what regulators are targeting.
These cases predate the EU AI Act. They demonstrate that regulators and courts are already holding companies accountable for how AI systems are designed and operated.
How Long You Have
The full requirements for high-risk AI systems take effect August 2, 2026. Parts of the Act are already in effect: prohibited practices became enforceable in February 2025, and obligations for general-purpose AI models in August 2025.
Over 50% of organizations still lack a basic inventory of their AI systems. 40% can't classify their systems under the Act's risk tiers. If that sounds familiar, you have company, but the deadline doesn't move.
The work itself is familiar engineering: logging, versioning, oversight workflows, monitoring, and documentation, built into your system architecture. Teams that start now have time to do it well. Teams that wait until Q3 2026 will be retrofitting under pressure.
Next Steps
We put together a short guide to help you assess where you stand. It covers the regulatory landscape beyond the EU, a self-assessment across the four compliance pillars, and practical next steps.
Get the AI Act Readiness Guide →
If you'd rather talk it through, we offer a 30-minute governance review where we walk through your current architecture and identify gaps. No pitch. If you walk away with a clear picture and never call again, that's a good outcome.

