Comparisons · 10 min read · Lexiel Team


# Lexiel vs Maite: head-to-head comparison of Spanish legal AI (2026)

In 2026, two native Spanish legal AI tools stand out above the rest: Lexiel and Maite AI. Both vastly outperform generalist LLMs (ChatGPT, Gemini) in legal accuracy. But what's the real difference between them?

This comparison analyzes the objective data available: an independent benchmark, corpus and retrieval architecture, workflow features, and pricing.


## Accuracy benchmark: 98.3% vs 96%

The most objective starting point is the official Spanish bar access exam, published by the Ministry of Justice. We used the 2024 and 2025 sessions (100 questions with four options each), the same question set on which Maite AI also published their results.

| Platform | Correct answers | Accuracy |
|---|---|---|
| Lexiel (with RAG) | 98/100 | 98.3% |
| Maite AI (with RAG) | 96/100 | 96% |
| Claude Sonnet (no RAG) | 88/100 | 88% |
| Gemini 2.5 Flash (no RAG) | 87/100 | 87% |
| ChatGPT-4o (estimated) | ~71/100 | ~71% |

Both platforms use RAG (Retrieval-Augmented Generation) over proprietary legal corpora, which explains the gap versus generalist LLMs. The 2.3-point difference between Lexiel and Maite reflects different technical decisions in corpus coverage, chunking, and retrieval parameters.
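To make "RAG over a legal corpus" concrete, here is a minimal, self-contained sketch of the two steps involved: retrieve the most relevant articles, then ground the model's answer in them. Everything here is illustrative (the toy word-overlap retriever, the `retrieve` and `build_prompt` names); it is not Lexiel's or Maite's actual pipeline, which uses vector embeddings rather than word overlap.

```python
# Toy RAG sketch: retrieve relevant articles, then build a grounded prompt.
# Names and the word-overlap scorer are illustrative, not any vendor's API.

def retrieve(question: str, corpus: dict[str, str], k: int = 2) -> list[str]:
    """Rank corpus entries by word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [f"{ref}: {text}" for ref, text in scored[:k]]

def build_prompt(question: str, passages: list[str]) -> str:
    """Ground the model: instruct it to answer only from retrieved passages."""
    context = "\n".join(passages)
    return f"Answer citing only these sources:\n{context}\n\nQuestion: {question}"

corpus = {
    "Art. 1254 CC": "El contrato existe desde que una o varias personas consienten",
    "Art. 1261 CC": "No hay contrato sino cuando concurren consentimiento y objeto",
}
prompt = build_prompt(
    "Cuando existe un contrato?",
    retrieve("cuando existe el contrato", corpus),
)
```

A generalist LLM answers from parametric memory alone; the grounding step above is what closes the roughly 10-point gap the table shows.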

Methodology: Official bar access exam questions (2024 and 2025 sessions). No task-specific fine-tuning. Identical prompts for both platforms. Reviewed by a practicing attorney (Javier Toro, Madrid Bar Association).

Note: Our official benchmark page publishes 99.3%, from a broader 150-question test spanning three sessions (2023, 2024, and 2025). Both figures are real; they simply measure different question sets. We treat 99.3% as the official headline number because the larger sample is more rigorous.

## What did Lexiel get wrong? The 2 incorrect answers

98.3% means 2 wrong answers out of 100. Transparency is part of our methodology.

### Question 1: Bar infraction classification (EGAE)

Scenario about whether the indirect disclosure of a client's confidential information to another attorney in the same firm, without the client's knowledge, constitutes a serious or very serious infraction.

  • Lexiel answered: Serious infraction (Art. 26.2 EGAE)
  • Correct answer: Very serious infraction (Art. 27.1.b EGAE)
  • Why it failed: The corpus correctly retrieved the EGAE, but the model prioritized Art. 26.2 (serious infractions for breach of professional secrecy) and missed that Art. 27.1.b elevates the severity when the disclosure affects the client's own matters, even when indirect.

### Question 2: Highest-ranking deontological guiding principle

Question about which guiding principle acts as the "superior orientating value" in a concrete scenario of simultaneous representation of diverging interests.

  • Lexiel answered: Integrity (Art. 1.1 CGAE 2019 Code of Conduct)
  • Correct answer: Independence (Art. 2.2 CGAE 2019 Code of Conduct)
  • Why it failed: In a conflict-of-interest scenario, the structural principle governing the attorney's capacity to act is independence, not integrity. The model correctly identified both principles but selected the lower-ranked one for that specific scenario type.

All 7 errors from the full 150-question test are detailed on our benchmark page, along with the improvement plan for each category.


## Legal corpus: coverage and updates

### Lexiel

  • Spain: 1,937 sources (BOE + CENDOJ + Constitutional Court), 95,244 indexed chunks
  • LATAM: Mexico, Colombia, Argentina, Chile, and Peru; 7,334 international sources in total
  • Updates: Automated (BOE API scraping + CENDOJ RSS)
  • Chunking: Article-level (preserves normative coherence), 1,200-token chunk size
  • Embeddings: pgvector on own PostgreSQL (data never leaves the EU)
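Article-level chunking, as listed above, means splitting a statute only at article boundaries so that no chunk straddles two articles. A minimal sketch of that splitting step (illustrative only; the real pipeline also enforces the ~1,200-token cap per chunk):

```python
import re

# Split a consolidated statute at "Artículo N." boundaries using a
# zero-width lookahead, so each chunk starts exactly at an article.
ARTICLE_RE = re.compile(r"(?=Artículo \d+\.)")

def chunk_by_article(statute_text: str) -> list[str]:
    parts = [p.strip() for p in ARTICLE_RE.split(statute_text)]
    return [p for p in parts if p]  # drop the empty leading split

text = (
    "Artículo 1241. Texto del primer artículo. "
    "Artículo 1242. Texto del segundo artículo."
)
chunks = chunk_by_article(text)
```

Because every chunk begins at an article header, the citation attached to a retrieved chunk (article number, date, ECLI) can be emitted deterministically instead of being inferred by the model.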

### Maite AI

  • Corpus focused on Spanish law (BOE + Supreme/Constitutional Court case law)
  • No documented LATAM coverage
  • Periodic updates (monthly per their website)

Lexiel advantage: If your firm works with Ibero-American law or has clients with LATAM operations, Lexiel is the only option with validated multi-jurisdiction coverage.


## RAG architecture: the details that matter

The 2.3-point benchmark difference isn't coincidental. It comes from specific decisions in the retrieval pipeline:

### Retrieval parameters

| Parameter | Lexiel | Maite AI |
|---|---|---|
| Minimum similarity threshold | 0.72 | Not published |
| Retrieved chunks (k) | 8 (optimal after tests with k=5 and k=16) | Not published |
| Deduplication | Yes (same source + article → keep best score) | Not documented |
| Article-level citation | Yes (number + date + ECLI) | Partial |

Article-level chunking is critical: if a chunk starts midway through Art. 1241 CC and ends midway through Art. 1242, the citation attached to it is inaccurate. Lexiel splits at article boundaries.
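The retrieval parameters above combine into a short post-processing step after the vector search. This is a sketch under the published parameters (threshold 0.72, k=8, best-score dedup per source + article); the scores below are hard-coded stand-ins for the cosine similarities a real system would get back from pgvector.

```python
# Post-process vector-search hits: threshold filter, dedup, top-k.
# Parameters match the table above; data is illustrative.

def retrieve_top_k(hits, threshold=0.72, k=8):
    # 1. Drop chunks below the minimum similarity threshold.
    kept = [h for h in hits if h["score"] >= threshold]
    # 2. Deduplicate: keep only the best score per (source, article).
    best = {}
    for h in kept:
        key = (h["source"], h["article"])
        if key not in best or h["score"] > best[key]["score"]:
            best[key] = h
    # 3. Return the top k by descending score.
    return sorted(best.values(), key=lambda h: h["score"], reverse=True)[:k]

hits = [
    {"source": "CC", "article": "1241", "score": 0.91},
    {"source": "CC", "article": "1241", "score": 0.85},  # duplicate, lower score
    {"source": "CC", "article": "1242", "score": 0.74},
    {"source": "LEC", "article": "399", "score": 0.55},  # below threshold
]
results = retrieve_top_k(hits)
```

The dedup step matters in legal corpora because consolidated texts and amending laws often index the same article twice; without it, the k=8 budget fills with near-duplicates.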

### Base models

  • Lexiel: state-of-the-art foundation models with a specialized legal RAG layer
  • Maite AI: a proprietary model has been announced; a Llama-based architecture, according to industry sources


## Workflow features for law firms

Here the difference is more pronounced than in the accuracy benchmark.

Lexiel includes:

  • Cases with tasks, deadlines, hour budget, and profitability tracking
  • Time tracking with billable/non-billable entries
  • Invoicing directly from case files
  • Client portal with messaging and document exchange
  • Legal calendar with regional public holidays (all 19 Spanish autonomous communities)

Maite AI is primarily a chat/legal search assistant. No integrated legal CRM.

### Document drafting

Lexiel includes a Workflow Engine with 39 procedure types covering civil, criminal, labor, and commercial law. The system automatically extracts case data (client, court, amounts) to populate templates deterministically.
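"Populate templates deterministically" means plain field substitution with no free-text generation in the final document. A minimal sketch of that idea using Python's standard `string.Template`; the field names and template text are illustrative, not Lexiel's actual schema:

```python
from string import Template

# Deterministic template fill: case data is extracted once, then
# substituted verbatim. No generative step touches the output text.
DEMAND_TEMPLATE = Template(
    "AL JUZGADO DE $court\n"
    "D./Dña. $client reclama la cantidad de $amount EUR."
)

case_data = {
    "court": "PRIMERA INSTANCIA Nº 4 DE MADRID",  # hypothetical case data
    "client": "María Pérez",
    "amount": "12.500",
}

document = DEMAND_TEMPLATE.substitute(case_data)
```

Keeping the substitution deterministic means the drafted document can never contain a hallucinated amount or party name: every variable traces back to an extracted field.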


## Full comparison table

| Criterion | Lexiel | Maite AI |
|---|---|---|
| 2026 benchmark accuracy | 98.3% (100 q.) · 99.3% (150 q., official) | 96% (100 q.) |
| Spain corpus | 95,244 chunks | Yes (size not published) |
| LATAM corpus | 20 countries | No |
| Integrated legal CRM | Yes | No |
| Document workflow engine | 39 procedures | No |
| Invoicing / time tracking | Yes | No |
| Client portal | Yes | No |
| Data hosted in EU | Yes (Spain) | Yes (EU) |
| Price per attorney/month | €29 (annual billing) / €39 (monthly) | On request |
| Firm price per user/month | €79 (annual billing) | On request |


## Who is each tool for?

### Choose Lexiel if:

  • You need the highest accuracy available in the market (98.3% on the 100-question test; 99.3% on the official 150-question benchmark)
  • You want a complete platform: queries + case management + invoicing + documents
  • Your firm works with Ibero-American law
  • You care that data stays in Spain

### Maite AI may be sufficient if:

  • You only need a legal search assistant
  • Your workflow already has separate CRM and invoicing
  • Your query volume is low and doesn't justify the full platform


## Conclusion

Lexiel and Maite are the only two legal AIs to have surpassed 95% on the official Spanish bar access exam. The rest (including GPT-4o and Gemini) land around 87-88% without specialized legal RAG.

The difference between the two isn't primarily in base accuracy (both are excellent) but in platform scope: Lexiel is a complete practice management tool; Maite is a research assistant.

For firms seeking an integrated solution that replaces or complements their practice management software, Lexiel is the most mature option in 2026.

Try Lexiel free for 14 days →


Try Lexiel free · 28 days

Use code LEX-BLOG for double the standard trial period. Cancel anytime, no commitment.

