An AI policy for legal teams is the documented set of rules governing how attorneys and Legal Ops staff use generative AI tools — what tools are authorized, for what use cases, with what data, with what attorney oversight, and with what client disclosure. The state bars in California, New York, Florida, and most other US jurisdictions have issued formal opinions on AI use; the EU AI Act creates additional obligations for legal practice in Europe. A working policy translates these requirements into concrete operational rules.
What an AI policy must cover
Six required elements:
- Authorized tools. Which AI vendors are approved for which use cases. Default-deny for unapproved tools.
- Authorized use cases. What the AI may do (research assistance, drafting, summarization) vs what only humans may do (legal advice rendered to client, sign-off on filings, ethical determinations).
- Data handling. What data may be sent to which tool. Confidential client info → only enterprise-tier vendors with no-training contractual guarantees. Privileged communications → typically prohibited from non-enterprise tools entirely.
- Attorney oversight. AI output is reviewed by a competent attorney before it leaves the firm. Verification standards for citations, factual claims, legal conclusions.
- Client disclosure. When and how to disclose AI use to clients. Some jurisdictions require disclosure for material AI involvement; others don't, but best practice favors transparency.
- Training and monitoring. Required training for attorneys before AI tool access; monitoring of usage patterns to detect off-policy use.
Authorized tools by tier
A working tier model:
- Tier A — Enterprise approved. Specifically licensed for the firm with enterprise data terms (no-training, audit logs, SSO). Examples: Claude Enterprise, Harvey, Thomson Reuters CoCounsel, Spellbook Business. Authorized for confidential and (with restrictions) privileged content.
- Tier B — Personal-account permitted. Free or personal-tier tools without enterprise data terms. Authorized only for non-confidential work — research on public matters, learning, analysis of public documents.
- Tier C — Prohibited. Any tool without acceptable data terms or with known security concerns. Default state for any unevaluated tool.
The policy explicitly enumerates Tier A and Tier B; everything else defaults to Tier C.
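The default-deny rule can be enforced mechanically with a simple lookup. A minimal sketch — the tool keys are illustrative labels based on the tier examples above, not actual product identifiers:

```python
from enum import Enum

class Tier(Enum):
    A = "enterprise approved"
    B = "personal-account permitted"
    C = "prohibited"

# Only Tiers A and B are explicitly enumerated; keys are illustrative.
TOOL_TIERS = {
    "claude-enterprise": Tier.A,
    "harvey": Tier.A,
    "cocounsel": Tier.A,
    "spellbook": Tier.A,
    "chatgpt-free": Tier.B,
}

def tool_tier(tool: str) -> Tier:
    # Anything not on the enumerated list defaults to Tier C (prohibited).
    return TOOL_TIERS.get(tool.lower(), Tier.C)
```

The key design choice is in the last line: an unevaluated tool never gets an authorization by omission — it falls through to Tier C until Legal Ops formally evaluates it.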
Authorized use cases
Six categories, with typical authorization:
| Use case | Tier A | Tier B |
|---|---|---|
| Legal research (public sources) | Authorized | Authorized |
| Document drafting (with attorney review) | Authorized | Restricted to non-confidential |
| Contract review and redlining | Authorized | Prohibited |
| Document summarization | Authorized | Restricted to non-confidential |
| Generating client communications | Authorized with review | Prohibited |
| Court filing draft | Authorized with attorney verification | Prohibited |
| Legal advice to client | Prohibited (always a human attorney) | Prohibited |
The “always a human attorney” line is the bright rule across every jurisdiction’s AI ethics opinion — AI assists; attorneys advise.
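The use-case table can likewise be encoded as data, so an intake form or review checklist applies it consistently rather than from memory. A sketch under assumptions — the use-case keys and three-state result strings are placeholders, not from any specific product or bar opinion:

```python
# Authorization per (use case, tier); values mirror the table above.
MATRIX = {
    "legal_research_public": {"A": "authorized", "B": "authorized"},
    "document_drafting":     {"A": "authorized", "B": "non-confidential only"},
    "contract_review":       {"A": "authorized", "B": "prohibited"},
    "summarization":         {"A": "authorized", "B": "non-confidential only"},
    "client_communications": {"A": "authorized with review", "B": "prohibited"},
    "court_filing_draft":    {"A": "authorized with attorney verification", "B": "prohibited"},
    "legal_advice":          {"A": "prohibited", "B": "prohibited"},  # always a human attorney
}

def authorization(use_case: str, tier: str) -> str:
    # Unlisted use cases and Tier C tools default to prohibited.
    return MATRIX.get(use_case, {}).get(tier, "prohibited")
```

Note that the default answer for anything not in the matrix — including every Tier C tool — is "prohibited", matching the policy's default-deny posture.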
Citation verification — the hallmark of professional-grade AI use
Every cited authority in AI-generated work product is verified by an attorney before the document leaves the firm. The professional-discipline cases arising from AI hallucinations (the Avianca line of cases, 2023 onward) share the same fact pattern: the AI generated fictitious cases, the attorney didn't verify, the filing went out, and sanctions followed.
The verification standard is concrete: pull the case from Westlaw or LEXIS, confirm the citation is accurate, and confirm the case stands for the cited proposition. Filing AI-generated citations without independent verification breaches the duty of competence in every US jurisdiction and invites sanctions and malpractice exposure.
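The three-part standard lends itself to a per-citation verification record, which also produces the audit trail the sample-audit step below relies on. A minimal sketch — the record fields are hypothetical, not drawn from any bar opinion or vendor product:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class CitationCheck:
    citation: str                # full reporter citation as it appears in the draft
    source_confirmed: bool       # pulled from Westlaw/LEXIS and confirmed to exist
    cite_accurate: bool          # reporter, volume, page, and year all match
    supports_proposition: bool   # the case actually stands for the cited point
    verified_by: str             # attorney who performed the verification
    verified_on: date

    def passes(self) -> bool:
        # A citation clears review only if all three checks pass.
        return self.source_confirmed and self.cite_accurate and self.supports_proposition
```

A document leaves the firm only when every `CitationCheck` in its work product passes; any failing record blocks filing until the citation is corrected or removed.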
EU AI Act implications
For legal practice in the EU, the EU AI Act adds:
- Most legal-AI tools are limited-risk. Transparency obligations (disclose AI use to interacting parties) apply.
- Some legal-AI tools may be high-risk. AI used in administration of justice (judicial decision-support) is high-risk and subject to conformity assessment, registration, and ongoing monitoring.
- Workplace AI rules. AI tools that monitor or assess employees (including attorneys’ work patterns) trigger workplace-AI obligations including consultation with works councils.
US firms practicing in the EU should align their AI policies with EU AI Act requirements wherever data flows or operations cross jurisdictions.
How to operationalize
- Document the policy explicitly. A written, versioned policy that all attorneys acknowledge in writing. Verbal “norms” produce inconsistency and don’t defend against ethics complaints.
- Tier-A tool list maintained centrally. Legal Ops owns the approved-tool list; new tools require formal evaluation before authorization.
- Mandatory training before access. No attorney gets Tier A tool access without completing AI policy training. Annual refresher.
- Sample audits of AI-generated work. Spot-check a sample of AI-augmented work product per quarter for citation verification, attorney review, and disclosure compliance.
- Incident-response playbook. When AI output produces an error in client work product, defined process: notify client, correct, document, learn. Don’t hide.
- Update against bar opinions. State and jurisdictional opinions evolve; the policy needs a defined review cadence to keep pace.
Related
- What is Legal Ops? — function that owns AI policy in coordination with GC
- EU AI Act for legal teams — regulatory layer the policy must align with
- Contract review SOP — operational discipline AI policy intersects with
- Claude — Tier A enterprise option for legal-team AI