
Courts Aren’t Regulating AI. They’re Defining Its Conditions.
This order doesn’t regulate AI. It defines the conditions under which AI can be used in legal work.
A federal court in Morgan v. V2X didn’t ask whether AI is useful. It didn’t question whether lawyers should use it.
It did something more consequential.
It defined what acceptable AI use looks like inside litigation. Not in theory. In practice.
The shift most people will miss
This will be read as another warning about AI risk.
It isn’t.
The court didn’t focus on hallucinations. It didn’t question output quality. It didn’t suggest lawyers should avoid AI.
It focused on conditions: whether inputs are stored, whether data is used for training, whether information is exposed to third parties, whether the system allows deletion and control.
That’s not a critique of intelligence. It’s a standard.
AI is no longer judged by output
For the last two years, legal AI has been evaluated on what it produces. Is the research correct? Is the draft usable? Does it sound like a lawyer?
This order changes the standard.
AI is now judged by how it handles data. Not what it says — but what it does with what you give it.
Tool choice is now part of litigation
The court didn’t stop at restricting use. It required disclosure.
The plaintiff was ordered to identify the AI system used to process confidential information so the opposing party could assess risk.
That’s new.
AI is no longer a private drafting layer. It’s part of the litigation surface. Something opposing counsel can question. Something the court can constrain. Something that carries consequences beyond the output.
“Mainstream AI” doesn’t meet the standard
The court’s requirements were straightforward:
- No training on inputs
- No uncontrolled storage
- No third-party exposure without safeguards
- Deletion on demand
- Contractual guarantees around all of it
In practice, that excludes most widely used tools. Not because they aren’t powerful. Because they weren’t built for this environment.
ChatGPT. Claude. Gemini. Harvey. All named. All restricted under this standard.
Consider what that means in practice. A litigator who used ChatGPT to process deposition transcripts designated as confidential under a protective order could now face a court-ordered disclosure of that fact — giving opposing counsel standing to assess whether confidential information was compromised and to seek further relief. The output of that AI session isn’t the issue. The act of uploading confidential material to a non-compliant system is.
The risk is not the output. It’s the system the data passes through.
This isn’t about better prompts
There’s a tendency to treat this as a usage problem. Be more careful. Write better prompts. Avoid sensitive inputs.
That’s not what the court is saying.
The issue isn’t how the tool is used. It’s what the system is.
You can’t prompt your way into compliance with requirements that are architectural.
Most systems can’t adapt to this
Most legal AI tools are built as interfaces on top of general-purpose models. They generate outputs from inputs provided in a session. That structure creates the exact risks the court is trying to control: uncontrolled data exposure, unclear retention boundaries, and no governed environment for the matter.
To meet this standard, those systems don’t just need safeguards. They need to change how they work.
The divide this creates
The court acknowledged the consequence. Systems that meet these requirements exist, but they are acquired through procurement, with contracts, cost, and control. Everything else sits outside the standard.
That’s not a temporary gap. It’s structural.
Where Irys fits
This order doesn’t introduce a new idea. It formalizes one.
Legal AI has to operate inside a controlled environment where data is scoped, exposure is limited, retention is governed, and reasoning is traceable.
Irys is built on that assumption. Not as safeguards layered onto a chat tool, but as a matter-based system where the architecture directly satisfies each of the court’s requirements.
No training on inputs. Irys does not train on customer data — across all accounts, not just enterprise tiers.
Ephemeral inference. The model retains nothing after a session ends. Matter information stays within the matter boundary.
Matter isolation. Each client’s data is fully containerized. Nothing crosses matter boundaries by design.
Deletion on demand. Delete it and it’s gone — not archived in ways that require legal process to surface.
For Enterprise clients, Irys also provides a Data Processing Agreement, SOC 2 Type II certification, and written documentation of subprocessor obligations — the contractual record the court specifically requires firms to retain.
The difference isn’t the interface. It’s the structure underneath it.
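
For readers who want that structural difference made concrete, here is a minimal sketch of matter-scoped storage with hard deletion. It is a hypothetical illustration only, assuming invented names (MatterStore, add, get, delete_matter) and an in-memory store; it is not Irys’s actual API or implementation.

```python
# Hypothetical, illustrative sketch only -- not Irys's actual API or implementation.
# It shows the structural point in the article: writes and reads are scoped to a
# single matter, nothing crosses matter boundaries, and deletion removes the data
# outright rather than archiving it.

from dataclasses import dataclass, field


@dataclass
class MatterStore:
    """Stores documents per matter; no data is shared across matters."""
    _matters: dict[str, dict[str, str]] = field(default_factory=dict)

    def add(self, matter_id: str, doc_id: str, text: str) -> None:
        # Every write lands inside exactly one matter's container.
        self._matters.setdefault(matter_id, {})[doc_id] = text

    def get(self, matter_id: str, doc_id: str) -> str | None:
        # Reads can only see the requesting matter's container.
        return self._matters.get(matter_id, {}).get(doc_id)

    def delete_matter(self, matter_id: str) -> None:
        # Deletion on demand: the whole container is removed,
        # not flagged or retained for later retrieval.
        self._matters.pop(matter_id, None)


store = MatterStore()
store.add("matter-001", "depo-smith", "CONFIDENTIAL deposition transcript ...")

# A different matter cannot see matter-001's data, even with a matching doc_id.
assert store.get("matter-002", "depo-smith") is None

# After deletion, the transcript is gone from the store entirely.
store.delete_matter("matter-001")
assert store.get("matter-001", "depo-smith") is None
```

The contrast the sketch is meant to draw: a session-based chat tool holds whatever you paste into it under the provider’s retention rules, while a matter-scoped design makes isolation and deletion properties of the system rather than promises about usage.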
A note on scope — and why it still matters
This is a single district court order. But the reasoning isn’t novel — it’s grounded in existing doctrine around confidentiality, work product, and data control. Other courts will face the same questions. This provides the template.
Not an outlier. An early articulation.
The standard has changed
Courts aren’t rejecting AI. They’re defining the conditions under which it can be used.
And those conditions don’t evaluate intelligence. They evaluate control.
Legal AI isn’t being limited. It’s being specified.
And systems that weren’t built for that specification will have to adapt — or remain outside of it.


