18 min · Sofía + Adrián

Algorithmic Bias in Justice

Identify, measure, and mitigate biases in AI tools used in judicial proceedings.

The Problem of Algorithmic Bias

Algorithmic bias occurs when an AI system systematically produces unfair outcomes for certain groups. In the judicial sphere, such bias can violate fundamental rights.

Landmark Case: COMPAS (USA)

The COMPAS system, used to predict criminal recidivism, was criticized in a 2016 ProPublica investigation for:

  • Overestimating recidivism risk for African Americans
  • Underestimating risk for white people
  • Producing disproportionate "false positives" by race
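The disparity ProPublica measured can be illustrated with a per-group false positive rate: among people who did *not* reoffend, what fraction did the tool label high-risk? A minimal sketch, with purely illustrative toy data (not the actual COMPAS dataset):

```python
# Sketch: false-positive-rate disparity by group.
# Records are (predicted_high_risk, reoffended, group) — illustrative only.

def false_positive_rate(predictions, outcomes):
    """FPR = share labeled high-risk among those who did NOT reoffend."""
    flagged_negatives = [p for p, y in zip(predictions, outcomes) if y == 0]
    if not flagged_negatives:
        return 0.0
    return sum(flagged_negatives) / len(flagged_negatives)

records = [
    (1, 0, "A"), (1, 0, "A"), (0, 0, "A"), (1, 1, "A"),
    (0, 0, "B"), (0, 0, "B"), (1, 0, "B"), (0, 1, "B"),
]

for grp in ("A", "B"):
    preds = [p for p, y, g in records if g == grp]
    outs = [y for p, y, g in records if g == grp]
    print(grp, round(false_positive_rate(preds, outs), 3))
```

Equal overall accuracy can coexist with very different false positive rates per group, which is exactly the pattern ProPublica reported.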

Sources of Bias

  1. Historical data: if data reflects past discrimination, the model perpetuates it
  2. Proxy variables: postal code can be a proxy for race or social class
  3. Underrepresentation: minority groups underrepresented in training data
  4. Model design: the choice of what to optimize can favor certain groups
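The proxy-variable problem (point 2) can be screened for: if a supposedly neutral feature predicts a protected attribute well, it can leak that attribute into the model even when the attribute itself is excluded. A minimal sketch, with a hypothetical `proxy_strength` helper and illustrative data:

```python
# Sketch: estimating how strongly a feature acts as a proxy for a
# protected attribute, by guessing the attribute from the feature alone
# (majority class per feature value). For balanced binary data, a score
# near 0.5 suggests little leakage; near 1.0 suggests a strong proxy.
from collections import Counter

def proxy_strength(feature_values, protected_values):
    by_value = {}
    for f, p in zip(feature_values, protected_values):
        by_value.setdefault(f, []).append(p)
    # Count how many rows the majority-class guess gets right.
    correct = sum(Counter(ps).most_common(1)[0][1] for ps in by_value.values())
    return correct / len(feature_values)

# Illustrative toy data: postal code vs. protected group.
postal = ["28001", "28001", "28045", "28045", "28045", "28001"]
group  = ["A",     "A",     "B",     "B",     "A",     "A"]
print(proxy_strength(postal, group))
```

A high score is only a red flag, not proof of discrimination; it tells auditors which features deserve a disaggregated impact analysis.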

Relevant Judicial Bias Types

  • Confirmation bias: the system reinforces the judge's prior hypotheses
  • Anchoring bias: the system's suggestion conditions the human decision
  • Automation bias: tendency to over-rely on machine recommendations

Mitigation Strategies

  • Periodic fairness audits
  • Disaggregated impact analysis by gender, origin, age
  • Model transparency (explainability)
  • Independent oversight committees
  • Right to challenge algorithmic decisions
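A disaggregated fairness audit can be as simple as comparing the rate of adverse decisions (e.g. a "high risk" label) across groups. The sketch below applies the four-fifths ratio as a rough screening threshold; the data and threshold are illustrative, not a legal standard for any jurisdiction:

```python
# Sketch: disaggregated audit of adverse-decision rates by group,
# flagged with a four-fifths-style ratio. Data is illustrative.

def adverse_rate(decisions):
    """Fraction of decisions that were adverse (1 = adverse)."""
    return sum(decisions) / len(decisions)

def disparity_ratio(rates):
    """min group rate / max group rate; below 0.8 flags possible disparity."""
    return min(rates.values()) / max(rates.values())

audit = {
    "group_A": [1, 1, 0, 1, 0],  # 3/5 adverse
    "group_B": [1, 0, 0, 0, 0],  # 1/5 adverse
}
rates = {g: adverse_rate(d) for g, d in audit.items()}
ratio = disparity_ratio(rates)
print(rates, round(ratio, 3), "FLAG" if ratio < 0.8 else "OK")
```

In practice the same breakdown would be repeated for each attribute the article lists (gender, origin, age) and reviewed by an independent oversight committee.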
