The EU AI Act

Keywords: ai act,regulation,eu

The EU AI Act is the world's first comprehensive AI regulation, enacted by the European Union in 2024. It establishes a risk-based framework that classifies AI systems by potential harm and imposes obligations proportionate to that risk, ranging from outright bans on the most dangerous applications to transparency requirements for foundation models. Because it applies to any organization deploying AI systems to EU residents, regardless of where that organization is headquartered, the Act sets a de facto global regulatory standard.

What Is the EU AI Act?

- Definition: Regulation (EU) 2024/1689 — the European Union's landmark AI legislation that classifies AI systems into four risk tiers, assigns compliance obligations proportionate to risk level, establishes governance bodies (AI Office, AI Board), and creates enforcement mechanisms with substantial fines.
- Publication: Entered into force August 1, 2024. Phased implementation: prohibited AI bans (February 2025), general provisions and GPAI rules (August 2025), high-risk obligations fully applicable (August 2026-2027).
- Jurisdictional Scope: Applies to providers and deployers of AI systems affecting people in the EU — regardless of where the organization is established. A U.S. company deploying AI to EU customers must comply.
- Brussels Effect: EU regulatory standards frequently become global de facto standards — the AI Act is expected to influence AI regulation worldwide, similar to how GDPR became the global privacy standard.

The Four Risk Categories

1. Unacceptable Risk (Prohibited):
Banned outright, with only narrowly drawn exceptions:
- Social scoring: Government or private AI systems evaluating individuals based on social behavior across unrelated contexts (China-style social credit systems).
- Real-time biometric surveillance: Remote biometric identification in public spaces by law enforcement (narrow exceptions for terrorism, serious crime, missing children).
- Subliminal manipulation: AI exploiting psychological vulnerabilities or subconscious biases to influence behavior harmfully.
- Exploitation of vulnerabilities: AI targeting children, the elderly, or people with disabilities by exploiting their vulnerabilities.
- Emotion inference in workplaces/education: Using AI to infer emotions from biometric data in professional or educational settings.
- Biometric categorization for sensitive characteristics: Inferring race, political opinions, religion, sexual orientation from biometric data.

2. High Risk (Strict Obligations):
Permitted but requires pre-market conformity assessment, registration, and ongoing compliance:
- Critical infrastructure: AI managing power grids, water systems, transport.
- Education: AI determining access to education, scoring exams.
- Employment: AI for recruitment, CV screening, promotion, termination decisions.
- Essential services: Credit scoring, insurance pricing, benefits eligibility.
- Law enforcement: Predictive policing, lie detection, evidence evaluation.
- Migration and border control: Risk assessment of asylum seekers, border surveillance.
- Administration of justice: AI assisting judicial decisions.

Obligations for High-Risk AI:
- Technical documentation and conformity assessment.
- Data governance and quality management.
- Transparency and logging of operations.
- Human oversight design requirements.
- Accuracy, robustness, and cybersecurity specifications.
- Registration in EU database before deployment.

3. Limited Risk (Transparency Obligations):
- Chatbots: Users must be informed they are interacting with AI.
- Deepfakes: AI-generated synthetic media must be disclosed as AI-generated.
- Emotion recognition systems: Users must be informed when their emotions are being analyzed.

4. Minimal Risk (No Obligations):
- AI-enabled spam filters, video games, translation tools — minimal or no regulation.
- Voluntary adherence to codes of conduct encouraged.
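The four-tier hierarchy above can be pictured as a simple lookup from use case to obligation level. This is an illustrative sketch only: the tier names follow the Act, but mapping a real system to a tier is a legal assessment, not a dictionary lookup, and the example use cases are drawn from the lists above.

```python
# Hypothetical sketch of the Act's four risk tiers as a lookup table.
# Tier names follow the Act; example use cases are illustrative, and a
# real classification requires case-by-case legal analysis.
RISK_TIERS = {
    "unacceptable": {"obligation": "prohibited",
                     "examples": ["social scoring", "subliminal manipulation"]},
    "high": {"obligation": "conformity assessment + registration",
             "examples": ["CV screening", "credit scoring"]},
    "limited": {"obligation": "transparency disclosure",
                "examples": ["chatbots", "deepfakes"]},
    "minimal": {"obligation": "none (voluntary codes of conduct)",
                "examples": ["spam filters", "video games"]},
}

def obligations_for(use_case: str) -> str:
    """Return the obligation tier for a known example use case."""
    for tier, info in RISK_TIERS.items():
        if use_case in info["examples"]:
            return f"{tier}: {info['obligation']}"
    return "unclassified: requires case-by-case legal assessment"

print(obligations_for("credit scoring"))
# high: conformity assessment + registration
```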

General Purpose AI (GPAI) Model Rules

Foundation models (GPT-4, Gemini, Llama, Claude) face specific obligations:
- All GPAI Models: Technical documentation; compliance with EU copyright law; training data summaries.
- High-Impact GPAI (>10²⁵ training FLOPs or significant systemic risk): Adversarial testing (red-teaming), incident reporting to AI Office, cybersecurity protections, energy efficiency reporting.
- Open-Source Exception: Free and open-source GPAI models released with open weights have reduced compliance obligations (copyright and documentation requirements remain).
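The 10²⁵ FLOP threshold can be checked with a back-of-the-envelope estimate. A common rule of thumb for dense transformers puts training compute at roughly 6 × parameters × training tokens; this is a community heuristic, not the Act's own measurement method, and the model sizes below are made up for illustration.

```python
# Rough check against the Act's 10^25 FLOP systemic-risk threshold,
# using the common ~6 * params * tokens estimate for dense transformer
# training compute (a rule of thumb, not the Act's measurement method).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return estimated_training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Illustrative (hypothetical) model sizes:
print(presumed_systemic_risk(70e9, 15e12))   # 6.3e24 FLOPs -> False
print(presumed_systemic_risk(400e9, 15e12))  # 3.6e25 FLOPs -> True
```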

Governance Structure

- AI Office: European Commission body responsible for enforcing GPAI rules, scientific research, and international cooperation.
- AI Board: Representatives from all 27 EU member states; coordinates national enforcement.
- National Competent Authorities: Each member state designates authority for enforcement in their jurisdiction.
- Scientific Panel: Independent AI experts advising on systemic risk classification.

Penalties

| Violation | Maximum Fine |
|-----------|-------------|
| Prohibited AI violations | €35 million or 7% of global annual turnover, whichever is higher |
| High-risk AI non-compliance | €15 million or 3% of global annual turnover, whichever is higher |
| Providing incorrect information to authorities | €7.5 million or 1.5% of global annual turnover, whichever is higher |
| SME/startup cap | Whichever of the percentage or absolute amount is lower |

The EU AI Act is the regulatory architecture that defines the governance terms for AI's integration into European society. By establishing a clear risk hierarchy with proportionate obligations, it creates legal certainty for compliant AI deployment while banning the most harmful applications. As the global consensus on responsible AI governance crystallizes, other jurisdictions are likely to adopt its approach as a baseline.
