The EU AI Act is the European Union’s risk-based regulation of artificial intelligence systems, in force since August 2024 with phased application through 2027. For legal teams operating in or serving the EU, it creates two layers of obligation: (a) compliance for AI systems the legal team itself uses (their own AI policy), and (b) advisory work for clients deploying AI in higher-risk domains. The Act sits alongside (not replacing) GDPR, which continues to govern personal-data processing including AI training and inference.
The four risk tiers
| Tier | Examples relevant to legal practice | Obligations |
|---|---|---|
| Unacceptable | Social scoring, emotion recognition in the workplace, manipulative AI | Prohibited entirely |
| High-risk | AI used in administration of justice (judicial decision-support), employment (CV screening, performance evaluation), critical infrastructure access | Conformity assessment, registration, ongoing monitoring, human oversight |
| Limited-risk | Most legal-AI tools (drafting assistants, contract review, research) when interacting with people | Transparency: disclose AI use to interacting parties |
| Minimal-risk | Spam filters, AI-enabled video games | No specific obligations |
For most in-house legal teams using AI for internal contract review and drafting, the relevant tier is limited-risk: transparency obligations apply, but conformity assessment does not.
When the legal team’s own AI use becomes high-risk
A legal team’s own use of AI rarely crosses into the high-risk tier. It can when:
- The AI is used in employment decisions (performance reviews, hiring decisions, terminations) about firm staff or client employees
- The AI is used to support judicial decision-making (e.g., AI tools used by courts that the firm supplies or operates)
- The AI is used to evaluate creditworthiness or insurance eligibility for individuals
For most contract-review, research, and drafting work, the AI is limited-risk and the obligations are mostly transparency.
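To make the tiering logic concrete, here is a minimal triage sketch in Python. It encodes only the examples named in the tier table and the high-risk triggers above; the `AIUseCase` fields and the `triage` function are hypothetical illustration, not a legal determination, and classification in practice is per-deployment (see Common pitfalls below).

```python
# Illustrative triage aid, not a legal determination. Tier names and
# triggers come from the tables above; field names are hypothetical.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited entirely"
    HIGH = "conformity assessment, registration, monitoring, human oversight"
    LIMITED = "transparency: disclose AI use to interacting parties"
    MINIMAL = "no specific obligations"

@dataclass
class AIUseCase:
    social_scoring: bool = False                 # prohibited practice
    workplace_emotion_recognition: bool = False  # prohibited practice
    employment_decisions: bool = False           # hiring, performance, termination
    judicial_decision_support: bool = False
    credit_or_insurance_scoring: bool = False
    interacts_with_people: bool = False          # chatbots, AI-drafted communications

def triage(use: AIUseCase) -> RiskTier:
    """Order matters: check prohibitions first, then high-risk triggers."""
    if use.social_scoring or use.workplace_emotion_recognition:
        return RiskTier.UNACCEPTABLE
    if (use.employment_decisions or use.judicial_decision_support
            or use.credit_or_insurance_scoring):
        return RiskTier.HIGH
    if use.interacts_with_people:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# A client-facing contract-review assistant lands in the limited tier:
print(triage(AIUseCase(interacts_with_people=True)))  # RiskTier.LIMITED
```

The ordering is the point: prohibition checks come before high-risk checks, and the same tool re-run with different flags (a different deployment context) can land in a different tier.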
Transparency obligations
Limited-risk AI systems must:
- Disclose to interacting persons. When AI is interacting with humans (chatbot, AI-drafted communication), the system must disclose its AI nature unless that’s clearly evident from context.
- Watermark AI-generated content. Some categories of AI-generated content (deepfakes, synthetic media) require labeling.
- Document for downstream users. Providers of general-purpose AI models must publish documentation enabling downstream deployers to comply with their own obligations.
For legal practice: AI-drafted client communications, AI-generated briefs, and AI-created summaries should be disclosed to the recipient when material.
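As one sketch of how that disclosure duty might be operationalized, the helper below appends a notice to outgoing AI-assisted text. The `with_disclosure` function, the notice wording, and the `material` threshold are all assumptions; the Act prescribes disclosure, not this mechanism.

```python
# Hypothetical helper: appends an AI-involvement notice to outgoing
# AI-assisted text when disclosure is material. The wording and the
# `material` flag are assumptions, not prescribed by the Act.
AI_NOTICE = "Note: portions of this document were prepared with AI assistance."

def with_disclosure(text: str, ai_assisted: bool, material: bool = True) -> str:
    """Return text with a disclosure footer when AI involvement is material."""
    if ai_assisted and material:
        return f"{text}\n\n{AI_NOTICE}"
    return text

print(with_disclosure("Summary of the indemnity clause ...", ai_assisted=True))
```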
Phased application
| Date | What applies |
|---|---|
| Feb 2025 | Prohibited practices banned; AI literacy obligations begin |
| Aug 2025 | Governance rules; general-purpose AI model obligations |
| Aug 2026 | High-risk system requirements (most categories) |
| Aug 2027 | High-risk system requirements (remaining categories) |
Legal teams advising clients on AI compliance should track the phased dates carefully; different obligations take effect at different times.
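One way to track those dates is to encode the table above as data, as in the sketch below. The day-of-month values follow the Act’s schedule (entry into force on 1 August 2024, with application at 6, 12, 24, and 36 months); `obligations_in_force` is a hypothetical helper, and any real deadline tracking should be verified against the Official Journal text.

```python
# Sketch of a deadline tracker encoding the phased-application table above.
from datetime import date

# (application date, what applies) pairs from the table
PHASES = [
    (date(2025, 2, 2), "Prohibited practices banned; AI literacy obligations begin"),
    (date(2025, 8, 2), "Governance rules; general-purpose AI model obligations"),
    (date(2026, 8, 2), "High-risk system requirements (most categories)"),
    (date(2027, 8, 2), "High-risk system requirements (remaining categories)"),
]

def obligations_in_force(on: date) -> list[str]:
    """Return every obligation set whose application date has passed."""
    return [what for when, what in PHASES if when <= on]

print(obligations_in_force(date(2026, 1, 1)))
# ['Prohibited practices banned; AI literacy obligations begin',
#  'Governance rules; general-purpose AI model obligations']
```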
GDPR interaction
The EU AI Act doesn’t replace GDPR. Both apply when AI processes personal data:
- GDPR governs the processing of personal data: lawful basis, data minimization, data-subject rights, and DPA requirements with AI vendors.
- The EU AI Act governs the AI system itself: transparency, risk classification, and conformity assessment for high-risk systems.
For an AI vendor processing personal data, both regulations apply. Legal teams reviewing AI vendor contracts must verify both compliance regimes.
How to operationalize for in-house legal teams
- Inventory AI tools used. Every AI system in use by the legal team and adjacent functions, with risk-tier classification under the AI Act (a minimal record sketch follows this list).
- Build compliance into vendor due diligence. AI vendors get an AI Act addendum during diligence: confirmation of risk tier, transparency disclosures, GDPR compliance, training data sourcing.
- Update AI policy for transparency. Internal use of AI for client work should disclose AI involvement as appropriate; external use (chatbots, AI-drafted communications) requires explicit disclosure.
- Train on AI literacy. The Act creates a literacy obligation — staff using AI must have appropriate training. Build into existing AI-policy training cycles.
- Track regulatory developments. EU AI Office is publishing guidance through 2027; risk classifications and high-risk lists will evolve.
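The record sketch below ties the inventory and vendor-diligence steps together. It is a minimal illustration: the `AIToolRecord` fields and the example vendor are hypothetical, and a real inventory would live in your matter-management or GRC system.

```python
# Minimal sketch of an AI-tool inventory record combining the AI Act and
# GDPR checkpoints listed above. Field names are hypothetical.
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str
    vendor: str
    use_context: str               # tiering is per-deployment, not per-vendor
    risk_tier: str                 # "unacceptable" / "high" / "limited" / "minimal"
    transparency_disclosure: bool  # AI use disclosed to interacting parties?
    dpa_in_place: bool             # GDPR data-processing agreement signed?
    training_data_reviewed: bool   # training-data sourcing confirmed in diligence
    notes: str = ""

inventory = [
    AIToolRecord(
        name="Contract-review assistant",
        vendor="ExampleVendor",    # hypothetical
        use_context="internal NDA triage",
        risk_tier="limited",
        transparency_disclosure=True,
        dpa_in_place=True,
        training_data_reviewed=True,
    ),
]

# Surface records with an open compliance gap:
gaps = [r.name for r in inventory
        if not (r.transparency_disclosure and r.dpa_in_place)]
print(gaps or "no open gaps")
```

Keeping `use_context` as a first-class field is deliberate: the same vendor tool deployed in a new context gets a new record and a fresh tier assessment, which is the per-deployment point raised under Common pitfalls.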
Common pitfalls
- Assuming legal AI is high-risk by default. Most legal-AI use is limited-risk; over-classifying creates unnecessary compliance overhead.
- Treating the AI Act and GDPR as one regime. They’re distinct; complying with one doesn’t satisfy the other.
- Ignoring extraterritorial reach. US firms with EU clients or EU operations may be subject to AI Act requirements even without EU establishment.
- Static compliance assessment. Risk tiering is per-deployment, not per-vendor; the same AI tool used in different contexts may have different obligations.
Related
- AI policy for legal teams — the broader internal policy framework
- GDPR for legal teams — the data-protection layer that intersects with AI
- DPA checklist — vendor data-protection terms relevant for AI vendors
- What is Legal Ops? — function coordinating AI Act compliance internally