AI Fundamentals

AI Bias & Fairness

The Mirror That Reflects What Was Shown to It


The Analogy


If you only show a child pictures of one type of family, they'll assume that's what all families look like.

AI models learn from human-created data — and human-created data contains human biases. If historical hiring data shows men being hired for technical roles, an AI trained on it will learn that bias. The AI isn't malicious; it's a mirror. The danger is when we treat that mirror as an objective judge.

In Plain English

AI bias happens when a model produces systematically unfair or skewed outputs because of patterns in its training data that reflect historical inequalities. Bias can affect hiring tools, facial recognition, healthcare AI, and credit scoring — with real consequences for real people.


The Technical Picture

AI bias manifests as systematic errors correlated with demographic attributes. Common sources include biased training data (historical discrimination baked into the examples), measurement bias (flawed proxies or inconsistent labelling), and feedback loops (biased model outputs shaping the data future models are trained on). Mitigation strategies include data auditing, fairness constraints during training, and diverse labelling teams.
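One common data-auditing check is demographic parity: comparing the rate of positive decisions a model makes across groups. Here is a minimal sketch in Python — the function names and data are hypothetical, not any specific fairness library's API:

```python
# Hedged sketch: auditing a binary classifier's outcomes for demographic
# parity across two groups. All names and data below are illustrative.

def selection_rate(predictions):
    """Fraction of positive decisions (e.g. 'hire', 'approve')."""
    return sum(predictions) / len(predictions)

def demographic_parity_gap(preds_group_a, preds_group_b):
    """Absolute difference in selection rates between two groups.
    A gap near 0 suggests parity; a large gap flags potential bias
    worth investigating (it is a signal, not proof of unfairness)."""
    return abs(selection_rate(preds_group_a) - selection_rate(preds_group_b))

# Hypothetical model outputs (1 = positive decision, 0 = negative)
group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # selection rate = 0.75
group_b = [0, 1, 0, 0, 1, 0, 0, 1]  # selection rate = 0.375

gap = demographic_parity_gap(group_a, group_b)
print(f"Demographic parity gap: {gap:.3f}")  # prints 0.375
```

Real audits use richer metrics (equalised odds, calibration across groups), but even a simple selection-rate comparison like this can surface the kind of skew described above.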

Real-World Examples

  • Amazon scrapped an internal AI recruiting tool in 2018 after it was found to penalise CVs containing the word 'women's' (as in 'women's chess club captain')
  • Facial recognition systems with significantly higher error rates on darker skin tones
  • Medical AI trained primarily on Western patients performing poorly on Indian patient data

Key Takeaway

AI learns from human data — and human data carries human biases. Responsible AI requires actively auditing for this.
