Claude Is Now in Microsoft Word via Copilot. What Legal Teams Should Know.
What happened
Anthropic's Claude models — including Claude 3.5 Sonnet and Claude 4 Opus — are now available inside Microsoft 365 Copilot. This means lawyers using Word, Excel, PowerPoint, and Outlook can access Claude's reasoning capabilities directly inside the applications they already use.
Microsoft rolled this out as part of its Wave 3 Copilot update. Organizations that enable the Anthropic connector get Claude as an alternative to GPT-4o in Copilot. A notable feature called Researcher Council uses a dual-model approach: GPT drafts a response, then Claude reviews it for accuracy, completeness, and citation integrity.
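The draft-then-review pattern described above can be sketched in a few lines. This is a hedged illustration only: the function names, the stub model calls, and the review checks are assumptions for the sketch, not the actual Copilot or Researcher Council implementation, which would call the Copilot-managed GPT and Claude endpoints.

```python
# Minimal sketch of a dual-model "draft, then review" pipeline,
# in the spirit of the Researcher Council pattern. Model calls are
# stubbed; nothing here reflects Microsoft's real implementation.

def draft_with_gpt(prompt: str) -> str:
    # Stub standing in for the drafting model's response.
    return f"DRAFT: {prompt}"

def review_with_claude(draft: str) -> dict:
    # Stub standing in for the reviewing model, which would check
    # accuracy, completeness, and citation integrity.
    issues = []
    if not draft.strip():
        issues.append("empty draft")
    return {"approved": not issues, "issues": issues}

def researcher_council(prompt: str) -> str:
    # Draft with one model, gate the output on the other's review.
    draft = draft_with_gpt(prompt)
    verdict = review_with_claude(draft)
    if not verdict["approved"]:
        raise ValueError(f"review failed: {verdict['issues']}")
    return draft
```

The value of the pattern is the gate, not the stubs: the drafting model's output never reaches the user until the reviewing model signs off.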
This is a significant move. Claude is one of the strongest reasoning models available, and putting it inside Word removes a major friction point for anyone who previously had to copy-paste between a browser tab and their document.
What this means for legal teams
For lawyers who already pay for Microsoft 365 Copilot, this is a meaningful upgrade. Claude's long-context reasoning is particularly well-suited for document analysis, contract review, and extended research tasks. The model can process entire documents without hitting the context limits that frustrate users of smaller models.
The dual-model Researcher Council feature is interesting for legal work. Having one model draft and another review for accuracy mirrors how lawyers already work — a junior drafts, a senior reviews. Whether the implementation is reliable enough for legal-grade output is the open question.
But there are important limitations that legal teams should understand before relying on this for client work.
What Claude in Copilot does not do
Claude in Copilot is a general-purpose AI assistant that happens to be inside Word. It is not a legal AI tool. That distinction matters.
No citation grounding. Claude does not verify that the cases it references actually exist. It can and does hallucinate legal citations. Courts have imposed sanctions, in some cases exceeding $100,000, on lawyers who filed briefs containing AI-generated fake citations. Claude in Copilot provides no safeguard against this.
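To make the missing safeguard concrete, here is a minimal sketch of the first step any citation-grounding workflow needs: pulling the reporter-style citation strings out of a draft so each one can be checked against a real caselaw database before filing. The regex and reporter list are illustrative assumptions covering only a few common U.S. reporters; this is not a Copilot feature or a complete citator.

```python
import re

# Illustrative pattern for a handful of common U.S. reporter citations
# (e.g. "410 U.S. 113"). A real citator covers hundreds of reporter
# formats; this sketch only shows the extract-then-verify shape.
CITATION_RE = re.compile(
    r"\b(\d{1,4})\s+(U\.S\.|S\. Ct\.|F\.2d|F\.3d|F\.4th)\s+(\d{1,4})\b"
)

def extract_citations(text: str) -> list[str]:
    # Return every matched citation string, ready to be looked up
    # against an authoritative database before the draft is trusted.
    return [" ".join(m.groups()) for m in CITATION_RE.finditer(text)]
```

Extraction is the easy half; the safeguard only exists once each extracted string is resolved against an authoritative source and unmatched citations are flagged for a human.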
No matter continuity. When you close the document or start a new Copilot session, the context resets. Claude does not remember your case across sessions. It does not know what was negotiated in the last round, what documents are in your matter folder, or what your client's specific position is. Every session starts from zero.
No caselaw database. Claude reasons well, but it does not have access to a real-time legal research database. It cannot search across federal and state court opinions the way a dedicated legal research tool can.
No audit trail. There is no matter-level tracking of what Claude produced, when, for which client, or from what inputs. For firms with compliance or malpractice risk management requirements, this is a gap.
Data flows through Microsoft and Anthropic. Your document content is processed by Microsoft's infrastructure and routed to Anthropic's models. For firms handling privileged client communications, this means your data traverses multiple third-party processors. Microsoft's enterprise data protection policies apply, but the data still moves outside your controlled environment.
The broader context
This is part of a larger trend: foundation model companies are moving from standalone chatbots to embedded integrations inside the tools people already use. OpenAI did it with ChatGPT Enterprise. Google did it with Gemini in Workspace. Now Anthropic has done it with Claude in Microsoft 365.
For legal teams, this means AI is getting harder to avoid. It is showing up inside Word whether you planned for it or not. The question is shifting from "should we use AI" to "which AI, under what controls, with what safeguards."
That is a governance question as much as a technology question. ABA Model Rule 1.1 requires technological competence, and Rule 1.6 requires reasonable efforts to safeguard confidential client information. A general-purpose AI inside Word does not inherently satisfy either standard.
How Irys approaches this differently
Irys also has a Word Add-In — but it takes a fundamentally different approach. Instead of being a general-purpose AI that happens to be in Word, Irys brings a purpose-built legal workspace into the Word sidebar.
The Irys Word Add-In has two modes. Chat mode lets you ask questions about your document, your matter, or your full case history — with the AI maintaining context across sessions via matter folders and a knowledge graph. Work mode applies AI revisions as tracked changes directly in the document, so you review them the same way you review any edit.
Citation grounding is built in. Irys searches across over 50 million federal and state court opinions via CourtListener and verifies every citation before it reaches your draft. Claude in Copilot does not do this.
Matter continuity is the core differentiator. When you return to a case next week, Irys still knows the documents, the facts, the issues, and the prior analysis. Claude in Copilot starts fresh every time.
On data handling, Irys operates as a data processor with zero data retention. Approximately 80% of AI processing runs on Irys-controlled infrastructure. LLM calls use ephemeral encrypted tokens, and nothing is retained after the session. This is a materially different data architecture from routing your client documents through Microsoft's and Anthropic's servers.
The Irys Word Add-In is included with every plan at no additional cost. Microsoft 365 Copilot requires a separate per-user subscription on top of your existing Microsoft 365 license.
What to consider
- If your firm already pays for Copilot, Claude is a meaningful upgrade over GPT-4o alone. Use it for general drafting and document analysis where privilege risk is low and citation accuracy is not critical.
- For privileged client work, understand the data flow. Your document content routes through Microsoft and Anthropic infrastructure. Ask your compliance team whether this meets your firm's data handling requirements under Rule 1.6.
- For anything involving legal citations, do not rely on Claude in Copilot without independent verification. The model does not have access to a caselaw database and will hallucinate citations. This is not a theoretical risk — it has resulted in six-figure sanctions.
- If you need matter continuity — an AI that remembers your case across sessions, connects to your document library, and maintains context over weeks of work — a general-purpose Copilot integration does not provide this. Purpose-built legal AI platforms are designed for this use case.
See how Irys compares
