AI & Governance
Algorithmic Bias
Systematic and repeatable errors in an AI system that produce unfair outcomes for particular groups. Bias can enter through training data (historical discrimination baked into the dataset), model design (optimization targets that disadvantage certain populations), or deployment context (using a model in a setting it wasn't built for). Detecting bias requires more than running a fairness metric; it requires understanding who the system affects and how.
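As a concrete illustration of what a group-fairness metric measures, the sketch below computes demographic parity difference (the gap in positive-prediction rates between groups) from raw predictions. The function names and toy data are illustrative, not from any particular fairness library, and a small gap here does not by itself establish that a system is fair.

```python
def positive_rate(preds, groups, group):
    """Fraction of positive predictions within one group."""
    sel = [p for p, g in zip(preds, groups) if g == group]
    return sum(sel) / len(sel)

def demographic_parity_diff(preds, groups):
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [positive_rate(preds, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy data: binary predictions (1 = favorable outcome) for two groups.
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" is approved 75% of the time, group "b" only 25%.
print(demographic_parity_diff(preds, groups))  # → 0.5
```

This captures only one narrow notion of fairness; as the definition above notes, deciding whether a 0.5 gap is acceptable depends on who the system affects and in what context it is deployed.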