Beyond Prompts: Why Legal AI Must Be Built Around Matters, Not Conversations

Why legal AI platforms for law firms must be built around matters, not blank chat conversations

Legal work is not a sequence of questions. It’s a file.

Whether you’re at a law firm or in-house, you don’t wake up thinking, “Let me open a blank chat.” You wake up thinking:

  • “Where are we on the motion?”

  • “What did we tell the client last week?”

  • “What did the other side concede?”

  • “Which version is the clean one?”

  • “What changed after that last email?”

  • “What’s our position if the judge asks X?”

That’s not a prompt. That’s a matter.

Legal reasoning is cumulative. It depends on what was established yesterday, what changed today, and what has to remain consistent across the life of a case, transaction, investigation, or advisory project. The unit of legal work is continuity.

Yet many legal AI tools are built around conversations instead of continuity.

They begin with a blank chat. You upload documents. You explain background. You get an answer. Then you open a new thread and the system forgets the file unless you rebuild it. That is not how lawyers work.

Lawyers work in matters.

What a “matter” actually is - in plain terms

A matter is the living container of legal work. It’s the one place where the record, the context, and the decisions stay coherent over time.

At a law firm, a matter typically includes:

  • the pleadings, motions, briefs, and orders (plus the current posture)

  • discovery: requests, responses, exhibits, transcripts, privilege logs

  • correspondence with opposing counsel and the client

  • key facts, timeline, and witness or deponent profiles

  • strategy decisions and constraints (“we’re not conceding X,” “we must preserve Y”)

  • internal work product: outlines, drafts, issue lists, research memos

  • deadlines, tasks, and who owns what

In-house, a matter includes:

  • the contract set, redlines, fallback positions, and deal history

  • business context: what the team is trying to achieve and what risks they’ll accept

  • approvals and constraints (“legal-approved fallback only,” “no MFN,” “cap at X”)

  • prior negotiation positions and why they were taken

  • email threads, addenda, policy docs, and playbooks

  • related matters (same counterparty, same clause, same risk issue)

In both environments, the matter is the source of truth. It’s where legal work stays accountable. It’s what allows you to move fast without drifting.

The core design mistake: blank chat is not a matter

Conversation-first AI is fine for one-off drafting and generic questions. It breaks down the moment you’re doing real legal work across time.

Because when a system starts from a blank chat, lawyers end up doing the most expensive part manually:

  • re-uploading the record

  • re-explaining the posture

  • re-stating constraints

  • re-confirming what changed since last time

  • and re-checking whether the AI is reasoning from the same facts you are

That’s not leverage. It’s repetition.

And repetition creates risk: inconsistent facts, inconsistent positions, missed constraints, and “drift” across drafts. The output may look polished, but the underlying understanding isn’t stable.

Matters are not a UI choice - they are a reliability requirement

A legal AI platform should not rely on the user to reconstruct context every session. It should operate inside a matter the way a legal team does.

That means: when you open the matter, the system already knows what the file is, what the current posture is, what the standing instructions are, and what the relevant record is - without you rebuilding it every time.

The AI should act like it’s working in the same file you’re working in.

How Irys approaches this

At Irys, the matter is the organizing unit of legal work - not the chat.

A matter in Irys is a structured workspace tied to a specific case, deal, investigation, or advisory project. Documents, drafts, research, analysis, and instructions live together and persist over time.

That changes the experience immediately:

  • The system doesn’t reset the world every session.

  • Matter-level instructions persist (jurisdiction, representation, posture, constraints, style).

  • Work product stays connected to the underlying record.

  • Context compounds instead of being retyped.

Why this creates better outcomes

When legal AI is matter-native, three things happen:

First, the work becomes consistent. The AI can reason from the same evolving record across days and weeks.

Second, it becomes defensible. Outputs can stay tied to the record, which makes review faster and reduces “where did this come from?” moments.

Third, it becomes safer. You don’t need to keep pushing entire files upstream just to recreate context. You can operate on the smallest relevant slice needed for a task, which reduces exposure and keeps workflows cleaner.

The takeaway

Legal work is built on continuity. A tool that forgets the file forces lawyers to become the memory, and that is exactly where risk enters.

Prompt-first AI can be helpful. But serious legal AI has to operate the way legal teams operate: inside matters, with a stable source of truth, across time.

Legal AI that forgets context isn’t infrastructure. It’s assistance.

If legal AI is going to become core infrastructure for legal professionals, it has to be built around matters - not conversations.