Identify, measure, and mitigate biases in AI tools used in judicial proceedings.
Algorithmic bias occurs when an AI system systematically produces unfair outcomes for certain groups. In the judicial sphere, this can violate fundamental rights.
The COMPAS system, used to predict criminal recidivism, was denounced by ProPublica (2016) for labeling Black defendants who did not reoffend as high-risk at roughly twice the rate of white defendants, while white defendants who did reoffend were more often mislabeled as low-risk.
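The kind of audit ProPublica performed can be illustrated by comparing false positive rates across groups. The sketch below uses entirely synthetic records (the group labels and outcomes are invented for illustration, not COMPAS data):

```python
def false_positive_rate(labels, predictions):
    """FPR = FP / (FP + TN): share of non-reoffenders flagged high-risk."""
    fp = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 1)
    tn = sum(1 for y, p in zip(labels, predictions) if y == 0 and p == 0)
    return fp / (fp + tn) if (fp + tn) else 0.0

# Synthetic records: (group, actually_reoffended, predicted_high_risk)
records = [
    ("A", 0, 1), ("A", 0, 1), ("A", 0, 0), ("A", 1, 1),
    ("B", 0, 0), ("B", 0, 0), ("B", 0, 1), ("B", 1, 1),
]

for group in ("A", "B"):
    labels = [y for g, y, _ in records if g == group]
    preds = [p for g, _, p in records if g == group]
    print(group, round(false_positive_rate(labels, preds), 2))
```

A gap like the one this toy data produces (group A flagged incorrectly twice as often as group B) is the signal ProPublica reported; real audits additionally control for criminal history and other covariates.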
Which AI system was denounced for racial bias in recidivism prediction?
What is a "proxy variable" in the context of algorithmic biases?
Is automation bias the tendency to over-rely on AI recommendations?
What is a key strategy to mitigate biases in judicial AI systems?
Which of these is NOT a source of algorithmic bias?