Bias often sneaks into AI through data.
AI systems don’t think — they learn from examples we give them.
So, if the data is unbalanced, the AI’s “learning” becomes unbalanced too.
Let’s look at three main ways bias happens:
- Data Bias:
The training data doesn’t represent everyone equally.
Example: An AI trained mostly on urban traffic images might not work well on rural roads.
- Human Bias:
The people creating or labeling the data may have unconscious preferences.
Example: A photo dataset where “engineer” images show mostly men.
- Algorithm Bias:
The computer program itself may unintentionally favor certain outcomes due to how it’s written.
Example: A hiring algorithm giving more importance to resumes from certain schools.
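The data-bias case above can be sketched with a tiny, hypothetical example. The scene names, labels, and the deliberately naive "model" below are all invented for illustration: a classifier trained on a dataset dominated by urban scenes ends up working well for the majority group and failing for the underrepresented one.

```python
from collections import Counter

# Hypothetical toy dataset: each sample is (scene_type, correct_label).
# 90% of the training data comes from urban scenes -- this is the data bias.
train = [("urban", "stop_sign")] * 90 + [("rural", "yield_sign")] * 10

# A deliberately naive "model": always predict the label it saw most often.
majority_label = Counter(label for _, label in train).most_common(1)[0][0]

def predict(scene):
    return majority_label  # ignores the input entirely

def accuracy(samples):
    return sum(predict(scene) == label for scene, label in samples) / len(samples)

# Evaluate separately on urban vs. rural test samples.
urban_test = [("urban", "stop_sign")] * 20
rural_test = [("rural", "yield_sign")] * 20

print(accuracy(urban_test))  # 1.0 -- perfect for the overrepresented group
print(accuracy(rural_test))  # 0.0 -- fails completely for the underrepresented group
```

Real models are far more sophisticated than this majority-vote stub, but the pattern is the same: whatever dominates the training data dominates the behavior.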