

Architecture

Why Irys is priced the way it is.

Most legal AI is a wrapper. We're not. Here's what that means for your margin — and for ours.

The market today

Here's the market, priced honestly.

Legal AI has split into five rough categories, priced very differently:

Legal AI market price categories
Category                 Typical price            Architecture
General consumer AI      $20 / month              Not built for legal work
Legal AI point tools     $150 – 200 / month       Single workflow, Word-only
Legal AI wrappers        $200 – 250 / month       GPT wrapper plus legal prompts
Irys One                 $299 / month             Full platform, proprietary architecture
Enterprise legal AI      $1,000 – 1,500 / month   GPT wrapper plus enterprise sales overhead

Every tool in this table except ours calls a frontier lab — Anthropic, OpenAI, or similar — on every query. Their cost structure is straightforward: input tokens go in, the lab bills them, output tokens come out, they bill you. That cost scales linearly with usage, attorney by attorney, matter by matter, query by query. Their margins depend entirely on the labs continuing to cut prices.
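The linear scaling described above can be sketched with a few lines of arithmetic. All figures below are hypothetical illustrations, not any lab's actual pricing or any vendor's actual usage:

```python
# Illustrative sketch of wrapper unit economics: every query pays the lab
# for the full context it loads. All rates and token counts are hypothetical.

def wrapper_monthly_cost(queries, input_tokens, output_tokens,
                         in_price_per_m=3.00, out_price_per_m=15.00):
    """Cost scales linearly with queries: tokens go in, the lab bills."""
    per_query = (input_tokens / 1e6) * in_price_per_m \
              + (output_tokens / 1e6) * out_price_per_m
    return queries * per_query

# One attorney, ~20 queries a day over ~22 working days, each query
# loading a large matter context into the model.
cost = wrapper_monthly_cost(queries=440, input_tokens=50_000, output_tokens=2_000)
print(f"${cost:.2f}/month per attorney")
```

Double the attorneys or the queries and the lab bill doubles with them, which is the "attorney by attorney, matter by matter, query by query" scaling described above.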

We built Irys to be architecturally different.

How Irys is built

Three proprietary layers, and an LLM only at the end.

Irys has three systems that run before any frontier model is called. By the time a frontier model is reached, most of the work is already done inside our own infrastructure: retrieval, compression, and context assembly.

LAYER 1: Knowledge Graph (Proprietary)

Every matter in Irys produces a structured representation of facts, parties, timeline, issues, and authorities. Built once, cached per matter, updated as new documents are ingested. A question about the Smith case doesn't re-read every document in the matter — it retrieves from the graph.

LAYER 2: Agentic Retrieval (Proprietary)

When a query comes in, our system reasons about what's actually needed, pulls only the relevant material from the matter, and compresses context before any LLM call. Many queries are answered entirely inside our infrastructure without touching a general-purpose model.

LAYER 3: Matter Context Memory (Proprietary)

The platform remembers what your matter is about, what you've asked, what you've drafted, and what's been decided. That memory is stored in our infrastructure and retrieved in milliseconds — not passed into an LLM prompt on every query.

LAYER 4: Frontier Model — the last 20% (External)

Only the final synthesis step calls a frontier model. By the time it does, the prompt is small, the context is tight, and the token cost is a fraction of what a wrapper pays for the same task.

Model-agnostic by design. When Claude improves, Irys improves. When GPT improves, Irys improves. We're not locked to any single lab.

What this actually costs us

Service-like margins at software-like prices.

A heavy Irys power user — 25 matters, hundreds of documents, daily usage — costs us under $20 a month in infrastructure and compute. For the same usage pattern, a wrapper-based competitor pays the frontier lab many times that amount, because every query involves loading large context into the model.

That cost difference is structural. It's not a promotion, it's not a pricing strategy, it's not a race to the bottom. It's what happens when the proprietary infrastructure does the work before the LLM ever sees the query.

The structural flip

When lab prices fall, wrappers get squeezed. We don't.

Here's what most investors and buyers haven't fully worked through. When Anthropic or OpenAI cut their prices — which they will, repeatedly, because the labs are in an efficiency race — the wrappers don't win. They get squeezed. Everyone else's costs drop by the same amount, so competitive pricing pressure absorbs the savings. The wrappers end up racing each other to lower subscription prices while their primary cost line erodes underneath them.

Our costs don't fall much with lab price cuts, because 80% of our compute is in our own infrastructure. But our pricing doesn't face the same pressure either, because our unit economics were never dependent on the labs. When the labs cut prices, our margin expands. When the labs raise prices, we're insulated.

That's the structural difference between running on someone else's infrastructure and running on your own.

When frontier prices drop 50%   Wrapper AI                          Irys
COGS change                     −50%                                −10%
Savings captured by             Customers (competitive pressure)    Irys (retained as margin)
Directional outcome             Margin compression                  Margin expansion
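The figures above follow from a simple decomposition, assuming (as stated earlier) that roughly 20% of Irys's compute cost is frontier-model spend versus effectively 100% for a wrapper:

```python
# Sketch of how a frontier-lab price cut propagates to COGS, depending on
# what share of COGS is lab spend. The shares are this page's stated
# assumptions: wrapper ~100% lab spend, Irys ~20%.

def cogs_change(lab_share, lab_price_change):
    """Overall COGS change when only the lab-spend share of cost moves."""
    return lab_share * lab_price_change

wrapper = cogs_change(lab_share=1.0, lab_price_change=-0.50)  # -0.50, i.e. COGS -50%
irys    = cogs_change(lab_share=0.2, lab_price_change=-0.50)  # -0.10, i.e. COGS -10%
print(f"wrapper COGS: {wrapper:+.0%}, Irys COGS: {irys:+.0%}")
```

The same decomposition runs in reverse: a lab price increase hits a wrapper's full cost base but only a fifth of ours.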

What this means for you

We're a different kind of company, doing a different kind of work.

We priced Irys One at $299 a month because that's where our costs are. We didn't price at the $1,200 enterprise level — where Harvey and similar tools sit — because we don't need to. We have service-like margins at software-like prices, which means we can pass efficiency on to customers instead of extracting it. We didn't price at Claude's $20 because we're not a chatbot — we're a platform with a proprietary backend that took years to build and continues to deepen.

The market is telling you the price of legal AI is somewhere between $200 and $1,500 per lawyer per month. We're telling you that's what it costs if you're running on someone else's infrastructure.

We're not.

See the architecture run.

Seven-day free trial. Full platform, no credit card. Or book a call and we'll walk through the stack and your unit economics side-by-side.

Prices above are representative of publicly available information as of 2026 and may vary by customer, contract, and deployment. Irys architecture claims are validated continuously through internal testing and third-party evaluation. For the detailed technical breakdown, contact us.