What is a language model?
A Large Language Model (LLM) is an AI system trained on enormous quantities of text: books, articles, web pages, legal documents. It doesn't "think" or "understand" in the human sense: it generates the most probable next word given the previous context, using statistical patterns learned during training.
How it works (simplified)
- Receives your text (the "prompt"): "What is the deadline to claim severance for unfair dismissal?"
- Finds patterns: in the texts it was trained on, it has seen patterns like "the deadline for... is 20 business days according to article..." thousands of times.
- Generates word by word: chooses the most probable next word, then the next, and so on.
This explains why LLMs are brilliant for language tasks (summarizing, rephrasing, classifying) but are not databases or search engines.
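The word-by-word generation described above can be sketched in a few lines of Python. This is a deliberately toy model (a lookup table of invented probabilities, not a neural network), but the loop is the same idea: given the recent context, pick the most probable next word, append it, repeat.

```python
# Toy sketch of next-word generation. Real LLMs score every possible next
# token with a neural network; the probability tables below are invented
# purely for illustration.
next_word_probs = {
    ("the", "deadline"): {"for": 0.6, "is": 0.3, "to": 0.1},
    ("deadline", "for"): {"claiming": 0.5, "filing": 0.4, "appeal": 0.1},
}

def pick_next(context, probs):
    """Choose the most probable next word given the last two words."""
    candidates = probs.get(tuple(context[-2:]))
    if candidates is None:
        return None  # no pattern learned for this context
    return max(candidates, key=candidates.get)

text = ["the", "deadline"]
while True:
    word = pick_next(text, next_word_probs)
    if word is None:
        break
    text.append(word)

print(" ".join(text))  # the deadline for claiming
```

Note what is missing: at no point does the loop check whether the sentence is *true*. It only checks whether it is *probable*, which is exactly why hallucinations happen.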
What an LLM does well
- Summarize long texts: can condense a 40-page ruling into a 2-paragraph executive summary.
- Generate draft briefs: produces a first draft of a complaint, answer, or appeal you can review and refine.
- Answer questions about documents: if you provide an 80-page contract, it can locate relevant clauses.
- Find patterns in case law: identifies trends in rulings on a specific topic.
- Translate and adapt: converts legal texts between languages or complexity levels.
What it doesn't do well
- Cite reliably: it can fabricate references with case numbers and dates that don't exist (this is called a "hallucination").
- Guarantee accuracy: an LLM always generates its "best prediction", not verified truth.
- Replace professional judgment: AI is a tool, not a lawyer.
- Apply current law if trained on old data: models have a training "cutoff date".
What is a hallucination?
A hallucination occurs when an LLM generates information that looks real but is fabricated. In the legal context, this is especially dangerous:
| Type of hallucination | Example |
|---|---|
| Fabricated ruling | "STS 3456/2022, Chamber 1, March 15": the number exists but for different subject matter |
| Altered article | "Art. 56 Workers' Statute provides 45 days/year compensation": that was before 2012 reform |
| Fabricated doctrine | "According to Díez-Picazo in his Treatise on...": the citation doesn't exist in that work |
| Incorrect deadlines | "The statute of limitations is 5 years under the Civil Code": may be 1 year for tort claims |
The Mata v. Avianca case (2023)
In June 2023, a lawyer in New York was sanctioned for filing a brief citing 6 fabricated rulings generated by ChatGPT:
- The lawyer used ChatGPT to find relevant case law.
- ChatGPT generated 6 rulings with case numbers, parties, courts, and dates.
- None of the 6 rulings existed.
- The lawyer asked ChatGPT to "confirm" the citations. ChatGPT confirmed its own hallucination.
- The court discovered the fabrication and fined the lawyer $5,000.
- The case became a worldwide reference on the risks of AI without verification.
Key lesson: Asking the model to "confirm" a citation doesn't work. It can confirm with total confidence something it fabricated itself.
Source verification: the minimum standard
Golden rule: Every AI-generated citation must be verified against the original source before use in any professional document.
4-step verification protocol
- Identify the cited source: Is it a ruling, a statute, a report?
- Search the official source: CENDOJ for rulings, BOE for legislation, official databases.
- Check that the content matches: not just that the ruling number exists, but that it says what the AI claims.
- Document the verification: note where you checked each citation.
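The 4-step protocol above lends itself to a simple verification log. The sketch below is hypothetical (the field names and the source list are assumptions for illustration; CENDOJ and BOE are consulted manually here, not through any API): each AI-generated citation gets one record, and it only passes if it was found in an official source *and* its content matches what the AI claimed.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative set of official sources (an assumption for this sketch).
OFFICIAL_SOURCES = {"CENDOJ", "BOE"}

@dataclass
class CitationCheck:
    citation: str            # the ruling or article exactly as the AI cited it
    source_type: str         # step 1: "ruling", "statute", "report", ...
    checked_in: str          # step 2: where you searched (CENDOJ, BOE, ...)
    content_matches: bool    # step 3: does it actually say what the AI claims?
    checked_on: date = field(default_factory=date.today)  # step 4: documented

    def is_verified(self) -> bool:
        """Passes only if found in an official source AND the content
        matches the AI's claim; an existing case number is not enough."""
        return self.checked_in in OFFICIAL_SOURCES and self.content_matches

check = CitationCheck(
    citation="STS 3456/2022",
    source_type="ruling",
    checked_in="CENDOJ",
    content_matches=False,  # the number exists, but for different subject matter
)
print(check.is_verified())  # False: do not cite it
```

The design choice worth noting is step 3: `content_matches` is a separate field from `checked_in`, because a citation whose number exists but whose content differs is still a hallucination.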
AI as a tool, not a substitute
Generative AI is a productivity tool, not a substitute for the lawyer. Think of it as a very fast intern who sometimes makes things up: it needs supervision.
Recommended workflow
- Use AI for the first draft: let it generate an outline or initial draft.
- Review critically: read everything as if it came from an intern and question every statement.
- Verify citations: check ALL references against official sources.
- Apply your judgment: AI doesn't know the specific circumstances of your case or your procedural strategy.
- Sign as yours: if your name is on the brief, the responsibility is yours.
Key statistics
- 64.7% of Spanish lawyers have not received AI training (CGAE 2025 Survey).
- 78% of firms adopting AI report time savings on routine tasks.
- 12% of lawyers using generic AI have included unverified information in briefs.
- Only 3% of firms have a formal AI usage protocol.