Why It Matters
Algorithmic bias is not a theoretical concern — it has real consequences for real people. Biased AI has denied loans to qualified applicants, filtered out job candidates based on gender, understated the health needs of Black patients in care-allocation scoring, and produced discriminatory facial recognition results. Regulators in the EU, US, and UK are actively targeting algorithmic discrimination with enforcement actions and new legislation.
Sources of Bias
Training Data Bias
- Historical bias — data reflects past discrimination (e.g., hiring data from a company that historically favored men)
- Representation bias — certain groups are underrepresented in training data
- Measurement bias — features or labels are imperfect proxies for what the model is meant to measure, and those proxies can correlate with protected characteristics (e.g., ZIP code standing in for race)
- Label bias — human annotators apply inconsistent or biased labels
Model Design Bias
- Feature selection that inadvertently includes protected characteristics
- Optimization targets that don't account for fairness across groups
- Feedback loops that reinforce initial biases over time
Deployment Bias
- Using an AI system outside its intended context
- Applying a model trained on one population to a different demographic
- Insufficient human oversight in high-stakes decisions
Real-World Examples
- Amazon hiring tool (2018) — penalized resumes containing the word "women's" because it was trained on historically male-dominated hiring data
- COMPAS recidivism algorithm — produced higher false positive rates for Black defendants compared to white defendants
- Healthcare algorithm (2019) — assigned lower risk scores to Black patients despite equivalent health needs, reducing their access to care-management programs
- Apple Card (2019) — investigated for offering lower credit limits to women despite similar financial profiles
Regulatory Response
- EU AI Act — requires high-risk AI providers to use data governance practices that address bias and ensure training data is representative
- FTC — treats algorithmic discrimination as an unfair practice; can order "algorithmic disgorgement" (deletion of models and the data used to build them)
- GDPR Article 22 — right not to be subject to a decision based solely on automated processing that produces legal or similarly significant effects
- Colorado AI Act — imposes a duty of reasonable care to avoid algorithmic discrimination and requires impact assessments for high-risk AI used in consequential decisions
- NYC Local Law 144 — mandates annual bias audits for automated employment decision tools
- UK Equality Act 2010 — algorithmic bias can constitute indirect discrimination
Detection and Mitigation
- Pre-deployment testing — test for disparate impact across demographic groups
- Fairness metrics — measure equal opportunity, demographic parity, predictive parity (a computation sketch follows this list)
- Bias audits — regular third-party assessments of AI system outputs
- Diverse development teams — different perspectives catch blind spots
- Human-in-the-loop — human review for high-stakes decisions
- Ongoing monitoring — bias can emerge or shift over time as data distributions change
- Documentation — record training data sources, known limitations, and testing results
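As a concrete illustration of the testing and metrics bullets above, the sketch below computes a disparate impact ratio, a demographic parity difference, and an equal opportunity difference for binary decisions split across two groups. It is a minimal example in plain Python/NumPy: the group labels, the toy data, and the 0.8 screening threshold (borrowed from the "four-fifths rule" in US employment-selection guidelines) are illustrative assumptions, not requirements drawn from any regulation listed here.

```python
# Minimal sketch: group-fairness metrics from model outputs.
# Group names, toy data, and the 0.8 threshold are illustrative assumptions.

import numpy as np

def selection_rate(y_pred, mask):
    """Fraction of individuals in the group who receive the positive outcome."""
    return y_pred[mask].mean()

def true_positive_rate(y_true, y_pred, mask):
    """Among group members who truly qualify, fraction the model approves."""
    qualified = mask & (y_true == 1)
    return y_pred[qualified].mean()

def fairness_report(y_true, y_pred, group):
    """Compare a protected group against a reference group on three metrics."""
    protected = group == "protected"
    reference = group == "reference"

    # Demographic parity: selection rates should be similar across groups.
    sr_prot = selection_rate(y_pred, protected)
    sr_ref = selection_rate(y_pred, reference)

    # Disparate impact ratio: the "four-fifths rule" from US employment-selection
    # guidelines flags ratios below 0.8 (a screening heuristic, not a legal test).
    di_ratio = sr_prot / sr_ref

    # Equal opportunity: true positive rates should be similar across groups.
    tpr_prot = true_positive_rate(y_true, y_pred, protected)
    tpr_ref = true_positive_rate(y_true, y_pred, reference)

    return {
        "demographic_parity_diff": sr_prot - sr_ref,
        "disparate_impact_ratio": di_ratio,
        "equal_opportunity_diff": tpr_prot - tpr_ref,
        "flag_four_fifths": di_ratio < 0.8,
    }

# Toy example: binary loan decisions for two groups of five people each.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])   # ground-truth qualification
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0, 0, 0])   # model decisions
group  = np.array(["protected"] * 5 + ["reference"] * 5)

print(fairness_report(y_true, y_pred, group))
```

In practice, the same report would be run per intersectional subgroup before deployment and re-run on production data as part of ongoing monitoring; established toolkits such as Fairlearn and AIF360 provide hardened implementations of these and related metrics.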
Key Regulation
- EU AI Act Articles 10, 15 — data governance and accuracy requirements
- FTC Act Section 5 — unfair or deceptive practices (applied to AI)
- EEOC guidance on AI in employment — Title VII applicability to algorithmic hiring