🔮 The Codex

AI Bias

Systematic errors in AI outputs that reflect prejudices in training data or design.

📖 Apprentice Explanation

AI bias happens when an AI system consistently produces unfair or skewed results. This usually comes from biased training data — if the data reflects societal prejudices, the AI will too.

🧙 Archmage Notes

Bias manifests as representational harm (demeaning or distorted depictions of a group), allocational harm (unequal distribution of resources or opportunities, such as loans or job interviews), and stereotyping. Common sources include training data imbalance, annotation bias, and algorithmic amplification of existing skews. Mitigation strategies include curating more diverse datasets, applying debiasing techniques, and enforcing fairness constraints during training or evaluation.
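One common way to quantify allocational bias is demographic parity: comparing the rate of positive predictions across groups. Below is a minimal sketch with hypothetical binary predictions for two groups; the function names and data are illustrative, not from any specific fairness library.

```python
# Minimal sketch: measuring the demographic parity difference
# on hypothetical binary model predictions, split by a sensitive attribute.

def selection_rate(preds):
    """Fraction of positive (1) predictions in a group."""
    return sum(preds) / len(preds)

def demographic_parity_difference(preds_a, preds_b):
    """Absolute gap in selection rates between two groups.
    0.0 means parity; larger values indicate more skewed outcomes."""
    return abs(selection_rate(preds_a) - selection_rate(preds_b))

# Hypothetical predictions for two demographic groups
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 0.625
group_b = [0, 1, 0, 0, 1, 0, 0, 0]  # selection rate 0.250

gap = demographic_parity_difference(group_a, group_b)
print(f"demographic parity difference: {gap:.3f}")  # 0.375
```

A gap this large would flag the model for further audit; debiasing techniques or fairness constraints aim to shrink it without destroying predictive accuracy.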