

Legal AI, Honestly

The legal AI landscape, decoded for lawyers.

Two halves. First: plain-language definitions of the AI terminology vendors use (LLMs, tokens, wrappers) so the marketing no longer obscures what you're actually buying. Second: the structural problems with how most legal AI is built, and how Irys addresses each one.

Read top to bottom for the full picture, or jump to the section you need.

AI basics

AI, decoded for lawyers.

LLMs. Tokens. Wrappers. Vendors use these terms freely. Here is what they actually mean, and why they matter for your practice.


01

What is an LLM?

GPT-4, Claude, Gemini: what they actually are

LLM stands for Large Language Model. It's the technology behind ChatGPT, Claude, and Gemini. These models are trained on hundreds of billions of words scraped from the internet, books, and code. That training lets them predict the next word in a sequence with uncanny accuracy, which turns out to produce surprisingly useful writing, reasoning, and summarization.

They are genuinely powerful. But they were trained to produce plausible text, not accurate law. They have no access to a verified case law database. They don't know what was true as of yesterday. And left unchecked, they confidently fabricate citations that don't exist. This is called hallucination.

Using a raw LLM for legal work is like hiring a paralegal who has read every book in the library but has never verified a single citation and cannot tell you if a case was overruled.

Irys

Irys uses frontier LLMs (GPT, Claude, and others) as the final synthesis layer in a multi-stage pipeline. By the time a prompt reaches the model, the retrieval, verification, and legal reasoning have already happened on Irys-controlled infrastructure. The LLM generates the output; Irys supplies the ground truth.

02

What is a token?

The unit AI thinks in, and why it matters for legal work

LLMs don't read words. They read tokens. A token is roughly three-quarters of a word. "attorney" is one token; "attorney-client" is two or three. Code and unusual words can cost more. Numbers, punctuation, spaces: all tokens.

Why does this matter? Because everything about AI performance and cost is measured in tokens. The "context window" (how much the model can see at once) is measured in tokens. A 50-page contract is roughly 25,000 to 35,000 tokens. A full deposition transcript might be 60,000 tokens. Some models have 128,000-token windows; others, 200,000.

When you hit the context limit, the model can no longer see the beginning of the document while it reads the end. For legal work, where a defined term on page 3 governs a clause on page 47, that is not a theoretical problem. It is a real one.
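The rules of thumb above can be turned into a rough back-of-the-envelope check. This is a heuristic sketch only (real tokenizers such as OpenAI's tiktoken give exact counts, and the per-page figure here is an assumption drawn from the 25,000–35,000 range for a 50-page contract):

```python
def estimate_tokens(text: str) -> int:
    """Rough estimate: ~4 characters (~0.75 words) per token.
    A heuristic, not a real tokenizer."""
    return max(1, round(len(text) / 4))

def fits_in_context(doc_tokens: int, window: int = 128_000) -> bool:
    """Does the document fit inside a model's context window?"""
    return doc_tokens <= window

# A 50-page contract at an assumed ~600 tokens/page:
contract = 50 * 600  # 30,000 tokens, inside the 25k-35k range above
print(fits_in_context(contract))  # a single contract fits a 128k window

# Three deposition transcripts (~60,000 tokens each) plus the contract:
matter = 3 * 60_000 + contract
print(fits_in_context(matter))  # 210,000 tokens overflows even a 200k window
```

The second check is the failure mode described above: once the matter outgrows the window, the model literally cannot see page 3 while reading page 47.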

Irys

Irys compresses document context using proprietary retrieval and chunking pipelines before anything reaches a model. Our architecture is roughly 67x more token-efficient than sending raw documents to an LLM. You can upload hundreds of documents to a matter and query across all of them without worrying about context limits.

03

What do tokens cost?

Why AI pricing is more complicated than a monthly subscription

Every call to an LLM API costs money per token, charged separately for input (what you send) and output (what you receive). OpenAI and Anthropic publish their rates; as of 2026, frontier models cost between $3 and $75 per million tokens depending on the model.

A single complex legal research query, where you upload a contract and ask for a risk analysis, might consume 40,000 to 100,000 tokens. That is $0.12 to $7.50 in raw API cost per query, before any infrastructure, engineering, or margin.
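The arithmetic behind those bounds is easy to verify. A minimal sketch, using the $3 and $75 per-million rates quoted above as the two extremes (real pricing splits input and output rates, with output tokens typically several times more expensive):

```python
def query_cost(total_tokens: int, rate_per_million: float) -> float:
    """Raw API cost in dollars for one query.
    Simplified: applies one blended rate to all tokens."""
    return total_tokens * rate_per_million / 1_000_000

low = query_cost(40_000, 3.0)     # small query, cheapest frontier model
high = query_cost(100_000, 75.0)  # large query, most expensive model
print(f"${low:.2f} to ${high:.2f} per query")  # $0.12 to $7.50
```

Multiply the high end by a few hundred research queries a month and the economics of "unlimited" plans become the question to ask.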

This is why "unlimited AI" claims from wrapper products deserve scrutiny. Someone is paying for those tokens. Either usage is quietly throttled, or you are, at a steep markup.

Irys

Because roughly 80% of Irys's processing runs on our own infrastructure before a frontier model is called, the prompt we send to the model is small and precise. For the same research task, we spend a fraction of what a wrapper spends on tokens. That is why we can offer genuinely unlimited usage at $299/month without hidden throttling.

04

What is a wrapper?

How most 'legal AI' companies are actually built

A wrapper is a product built by taking an existing LLM API (OpenAI's GPT, Anthropic's Claude), adding a user interface, writing a system prompt that says something like "You are a helpful legal assistant," and selling the result as a legal AI product.

There is no proprietary technology. No legal-specific training. No citation verification infrastructure. No independent data architecture. The wrapper company is a middleman between you and OpenAI's API, typically charging 5 to 10 times the underlying API cost.
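Stripped to its essentials, the wrapper pattern looks like this. A schematic only, with the vendor API call stubbed out; it is not any specific product's code, and the system prompt is the illustrative one quoted above:

```python
SYSTEM_PROMPT = "You are a helpful legal assistant."  # the entire "legal" layer

def call_frontier_model(messages: list[dict]) -> str:
    """Stand-in for a third-party API call (OpenAI, Anthropic, etc.).
    In a real wrapper, this step sends your documents off-platform."""
    return "[model output]"

def wrapper_query(user_question: str, document_text: str) -> str:
    """The whole product: a prompt template plus an API passthrough.
    No retrieval, no citation verification, no privilege controls."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": f"{document_text}\n\n{user_question}"},
    ]
    return call_frontier_model(messages)
```

Everything the model says, fabricated citations included, passes straight back to the user, because there is no stage in the pipeline where anything could be checked.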

The risks are real: your documents flow through OpenAI's or Anthropic's servers, and those providers have been subpoenaed for user data. The model has no awareness of attorney-client privilege. Hallucinated citations get returned with the same confidence as real ones. And when the underlying model improves or changes, the wrapper has no control over the behavior of its own product.

Several well-known legal AI tools, including some with significant venture backing and aggressive marketing, are wrappers. You can often identify them because they cannot explain their citation verification methodology, all of their processing happens on third-party servers, and their pricing increases significantly for features that a purpose-built platform includes by default.

Irys

Irys is not a wrapper. We run proprietary retrieval, verification, and reasoning pipelines on our own infrastructure. Approximately 80% of AI processing happens before a frontier model is called. That means your data stays on Irys-controlled servers, citations are verified against real databases, and our architecture does not change when OpenAI releases a new model.

05

What is Irys?

How we're different from LLMs, wrappers, and everything in between

Irys is a purpose-built legal platform. Not a chatbot with a law firm logo, and not a wrapper around someone else's API.

The architecture has three stages. In the first stage, your documents and matter context are processed by Irys's own retrieval and analysis pipelines: text is extracted, structured, indexed, and made queryable. In the second stage, an agentic reasoning layer (built by Irys, not rented from a lab) plans the research task, queries the case law database, validates citations against primary sources, and resolves conflicts. Only in the third stage does a frontier model receive a prompt, and by that point, the prompt is small, precise, and grounded in verified facts.
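The retrieve-then-verify-then-synthesize shape described above can be illustrated with a toy sketch. This is a generic illustration of the pattern, not Irys's actual implementation: the case name, the trusted-source set, and every function here are hypothetical stand-ins:

```python
def retrieve(documents: list[str], query: str) -> list[str]:
    """Stage 1 (illustrative): pull only the passages relevant to the query."""
    return [d for d in documents if query.lower() in d.lower()]

def verify_citations(passages: list[str]) -> list[str]:
    """Stage 2 (illustrative): keep only passages backed by a trusted source."""
    trusted = {"Smith v. Jones"}  # hypothetical verified-citation database
    return [p for p in passages if any(c in p for c in trusted)]

def synthesize(verified: list[str], query: str) -> str:
    """Stage 3: only now would a frontier model see a small, grounded prompt."""
    return f"Answer '{query}' using only these verified sources: {verified}"

docs = [
    "Smith v. Jones governs indemnity caps in this jurisdiction.",
    "An unverified blog post about indemnity clauses.",
]
prompt = synthesize(verify_citations(retrieve(docs, "indemnity")), "indemnity risk")
print(prompt)  # the unverified passage never reaches the model
```

The point of the shape: by the time stage 3 runs, the model's prompt contains only material that survived verification, so it is synthesizing rather than inventing.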

The result: fewer hallucinations (because the model isn't guessing at case law; it's synthesizing retrieved and verified results), better performance on complex multi-document tasks, genuine privilege protection (data never touches third-party model servers), and pricing that reflects actual infrastructure costs rather than API markup.

The practical difference for you: research that cites real cases, drafts that reflect your actual document set, and a platform that handles the full legal workflow (research, drafting, redlining, matter management, document comparison) in one place.

Irys

Start with a 7-day free trial at core.iqidis.ai. Full platform access, no credit card. Or book a 30-minute demo and we'll walk through exactly how the architecture handles your specific workflow.

The problem

The legal tech industry has been selling you short.

Lock-in contracts. Gated features. Upsells disguised as innovation. AI built by people who have never practiced law. We built Irys because we lived it.

Your Data Is Exposed

The Problem

In United States v. Heppner (SDNY, 2026), Judge Rakoff ruled that AI-generated documents are not protected by attorney-client privilege. OpenAI has been successfully subpoenaed for full chat histories and uploaded documents. Norton Rose Fulbright and Proskauer have both warned that consumer AI tools create privilege and confidentiality risks that most firms are not addressing.

The Irys Way

End-to-end encryption, SOC 2 certification, and data that is never stored or used for training. Built around attorney-client privilege from day one.

AI Hallucinations Are Real

The Problem

In the UK, the High Court's Hamid jurisdiction cases (Ayinde and Al-Haroun, 2025) found 18 non-existent cases cited in a single filing. In the US, the Sixth Circuit levied $30,000 in sanctions for AI-fabricated citations in 2026. Every LLM can fabricate authorities. The question is whether anything catches it before it reaches a filing.

The Irys Way

Cite Check validates against 50M+ real cases. Built into the workflow, not bolted on. You see the source before it leaves your desk.

Models Change Every Month

The Problem

GPT-5, Claude 4, Gemini: the landscape shifts constantly. Picking the wrong one costs you accuracy, speed, or both.

The Irys Way

Irys routes each task to the optimal model. You get the intelligence without managing the chaos.

Wrappers Sold At Markup

The Problem

Most legal AI tools are a UI layer over ChatGPT sold at 5–10x the API cost. No proprietary technology. No legal training. Just a prompt template and a subscription.

The Irys Way

Proprietary pipelines for research, drafting, and document analysis. Not a wrapper — a platform engineered for how lawyers actually work.

Lock-In Contracts

The Problem

Multi-year commitments. Features gated behind enterprise pricing. Upsells at renewal. The industry profits from making it hard to leave.

The Irys Way

No usage caps, no gated features, no token limits. Published pricing. Cancel anytime. The same AI whether you are solo or enterprise.

Built By Outsiders

The Problem

Most legal tech is designed by people who have never drafted a brief, managed a matter, or sat through a deposition. They build for demos.

The Irys Way

Our team has drafted briefs, managed matters, and sat through depositions. We build for practice, not for demos.