Architecture
Why Irys is priced the way it is.
Most legal AI is a wrapper. We're not. Here's what that means for your margin — and for ours.
The market today
Here's the market, priced honestly.
Legal AI has split into five rough categories, priced very differently:
| Category | Typical price | Architecture |
|---|---|---|
| General consumer AI | $20 / month | Not built for legal work |
| Legal AI point tools | $150 – 200 / month | Single workflow, Word-only |
| Legal AI wrappers | $200 – 250 / month | GPT wrapper plus legal prompts |
| Irys One | $299 / month | Full platform, proprietary architecture |
| Enterprise legal AI | $1,000 – 1,500 / month | GPT wrapper plus enterprise sales overhead |
Every tool in this table except ours calls a frontier lab — Anthropic, OpenAI, or similar — on every query. Their cost structure is straightforward: input tokens go in, the lab bills them, output tokens come out, they bill you. That cost scales linearly with usage, attorney by attorney, matter by matter, query by query. Their margins depend entirely on the labs continuing to cut prices.
We built Irys to be architecturally different.
How Irys is built
Three proprietary layers, and an LLM only at the end.
Irys runs three proprietary systems before any frontier model is called: a structured matter representation, targeted retrieval and compression, and persistent matter memory.
Every matter in Irys produces a structured representation of facts, parties, timeline, issues, and authorities. Built once, cached per matter, updated as new documents are ingested. A question about the Smith case doesn't re-read every document in the matter — it retrieves from the graph.
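The page doesn't publish the schema, but a per-matter structure along these lines can be sketched in Python. Everything here (the `MatterGraph` name, its fields, the dict-shaped documents) is illustrative, not Irys's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class MatterGraph:
    """Hypothetical per-matter structure: facts, parties, timeline, issues, authorities."""
    matter_id: str
    parties: list = field(default_factory=list)
    facts: list = field(default_factory=list)
    timeline: list = field(default_factory=list)   # (date, event) pairs
    issues: list = field(default_factory=list)
    authorities: list = field(default_factory=list)

    def ingest(self, doc: dict) -> None:
        # Built once, updated incrementally as new documents arrive,
        # so a later question never re-reads every document.
        self.parties += [p for p in doc.get("parties", []) if p not in self.parties]
        self.facts += doc.get("facts", [])
        self.timeline += doc.get("events", [])

    def retrieve(self, topic: str) -> list:
        # Answer from the cached graph, not from the raw documents.
        return [f for f in self.facts if topic.lower() in f.lower()]

graph = MatterGraph("smith-v-jones")
graph.ingest({"parties": ["Smith", "Jones"],
              "facts": ["Contract signed 2021", "Breach alleged in delivery clause"]})
print(graph.retrieve("breach"))  # → ['Breach alleged in delivery clause']
```

The point of the cache is in `retrieve`: a question about the Smith case is a lookup against the graph, not a re-read of the matter.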
When a query comes in, our system reasons about what's actually needed, pulls only the relevant material from the matter, and compresses context before any LLM call. Many queries are answered entirely inside our infrastructure without touching a general-purpose model.
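That routing step can be sketched as a try-local-first function. This is a minimal sketch under stated assumptions (keyword overlap as a stand-in for real relevance scoring, `llm_call` as a stand-in for a frontier-model client), not Irys's implementation:

```python
def answer_query(query: str, graph_facts: list, llm_call) -> str:
    """Hypothetical router: answer locally when possible; otherwise
    call an LLM with compressed context, never the whole matter."""
    words = query.lower().split()
    relevant = [f for f in graph_facts if any(w in f.lower() for w in words)]
    if relevant:
        # Resolved entirely inside local infrastructure: no model call at all.
        return "; ".join(relevant)
    # Fall back to a frontier model, but pass a handful of facts, not the matter.
    context = "\n".join(graph_facts[:5])
    return llm_call(f"Context:\n{context}\n\nQuestion: {query}")
```

Either branch keeps the eventual prompt small: the local branch never pays for tokens, and the fallback branch pays only for the compressed slice.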
The platform remembers what your matter is about, what you've asked, what you've drafted, and what's been decided. That memory is stored in our infrastructure and retrieved in milliseconds — not passed into an LLM prompt on every query.
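The contrast with a wrapper is that this memory lives in a local store rather than being replayed into every prompt. A toy version, with hypothetical names and a plain dict standing in for whatever store Irys actually uses:

```python
class MatterMemory:
    """Illustrative session memory: stored locally, retrieved by key,
    never re-serialized into an LLM prompt on every query."""
    def __init__(self):
        self._store = {}

    def remember(self, matter_id: str, key: str, value: str) -> None:
        self._store.setdefault(matter_id, {})[key] = value

    def recall(self, matter_id: str, key: str):
        # A dictionary lookup, not a token-metered model call.
        return self._store.get(matter_id, {}).get(key)

mem = MatterMemory()
mem.remember("smith-v-jones", "last_draft", "motion to dismiss, v2")
print(mem.recall("smith-v-jones", "last_draft"))  # → motion to dismiss, v2
```

A wrapper pays for those tokens on every turn; a lookup like this costs effectively nothing, which is the cost asymmetry the next section quantifies.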
Only the final synthesis step calls a frontier model. By the time it does, the prompt is small, the context is tight, and the token cost is a fraction of what a wrapper pays for the same task.
Model-agnostic by design. When Claude improves, Irys improves. When GPT improves, Irys improves. We're not locked to any single lab.
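"Model-agnostic" usually means the synthesis step sits behind one interface with swappable providers. A sketch of that pattern, with stub functions standing in for real lab API calls (the registry, names, and signatures are assumptions, not Irys's code):

```python
from typing import Callable, Dict

# Hypothetical provider registry: one interface, many labs behind it.
PROVIDERS: Dict[str, Callable[[str], str]] = {}

def register(name: str):
    def deco(fn):
        PROVIDERS[name] = fn
        return fn
    return deco

@register("claude")
def _claude(prompt: str) -> str:
    return f"[claude] {prompt[:40]}"   # stand-in for a real Anthropic call

@register("gpt")
def _gpt(prompt: str) -> str:
    return f"[gpt] {prompt[:40]}"      # stand-in for a real OpenAI call

def synthesize(prompt: str, model: str = "claude") -> str:
    # Swapping labs is a config change, not a rewrite.
    return PROVIDERS[model](prompt)
```

Because only this final step touches a lab, an improvement at any provider flows through by changing one registry entry.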
What this actually costs us
Service-like margins at software-like prices.
A heavy Irys power user — 25 matters, hundreds of documents, daily usage — costs us under $20 a month in infrastructure and compute. For the same usage pattern, a wrapper-based competitor pays the frontier lab many times that amount, because every query involves loading large context into the model.
That cost difference is structural. It's not a promotion, it's not a pricing strategy, it's not a race to the bottom. It's what happens when the proprietary infrastructure does the work before the LLM ever sees the query.
The structural flip
When lab prices fall, wrappers get squeezed. We don't.
Here's what most investors and buyers haven't fully worked through. When Anthropic or OpenAI cut their prices — which they will, repeatedly, because the labs are in an efficiency race — the wrappers don't win. They get squeezed. Everyone else's costs drop by the same amount, so competitive pricing pressure absorbs the savings. The wrappers end up racing each other to lower subscription prices while their primary cost line erodes underneath them.
Our costs don't fall much with lab price cuts, because 80% of our compute is in our own infrastructure. But our pricing doesn't face the same pressure either, because our unit economics were never dependent on the labs. When the labs cut prices, our margin expands. When the labs raise prices, we're insulated.
That's the structural difference between running on someone else's infrastructure and running on your own.
| When frontier prices drop 50% | Wrapper AI | Irys |
|---|---|---|
| Change in COGS | −50% | −10% |
| Savings captured by | Customers (competitive pressure) | Irys (retained as margin) |
| Directional outcome | Margin compression | Margin expansion |
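The table's numbers follow from a share-of-COGS calculation. Assuming, as stated above, that roughly 80% of Irys compute runs on its own infrastructure (versus ~100% lab-dependent for a wrapper), the arithmetic is:

```python
def cogs_change(lab_share: float, lab_price_cut: float) -> float:
    """Fractional change in COGS given the share of COGS paid to the lab.
    Negative means COGS fell."""
    own_share = 1.0 - lab_share            # own-infrastructure cost is unchanged
    new_cogs = own_share + lab_share * (1.0 - lab_price_cut)
    return new_cogs - 1.0

print(f"Wrapper (100% lab): {cogs_change(1.0, 0.5):+.0%}")  # -50%
print(f"Irys (20% lab):     {cogs_change(0.2, 0.5):+.0%}")  # -10%
```

A 50% lab price cut only touches the lab-exposed slice of cost, which is why the same cut moves a wrapper's COGS five times as much as Irys's.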
What this means for you
We're a different kind of company, doing a different kind of work.
We priced Irys One at $299 a month because that's where our costs are. We didn't price at the $1,200 enterprise level — where Harvey and similar tools sit — because we don't need to. We have service-like margins at software-like prices, which means we can pass efficiency on to customers instead of extracting it. We didn't price at Claude's $20 because we're not a chatbot — we're a platform with a proprietary backend that took years to build and continues to deepen.
The market is telling you the price of legal AI is somewhere between $200 and $1,500 per lawyer per month. We're telling you that's what it costs if you're running on someone else's infrastructure.
We're not.
See the architecture run.
Seven-day free trial. Full platform, no credit card. Or book a call and we'll walk through the stack and your unit economics side-by-side.
Prices above are representative of publicly available information as of 2026 and may vary by customer, contract, and deployment. Irys architecture claims are validated continuously through internal testing and third-party evaluation. For the detailed technical breakdown, contact us.
