AI is smart — but not always fair.
Why? Because AI learns from data created by humans, and that data can carry human biases or represent some groups much more than others.
Imagine this:
If a face recognition app is trained mostly on pictures of adults, it might struggle to recognize children.
Or if it’s trained mostly on one type of face or voice, it might make mistakes with others.
This kind of unfair learning is called AI Bias — and understanding it helps us make technology that’s fair to everyone.
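We can see this idea in action with a tiny toy sketch (all numbers and names here are made up for illustration). It pretends faces can be described by a single "size" number: a simple detector learns a cutoff between faces and non-faces, but because the training set contains mostly adult faces, the learned cutoff ends up too high for many children.

```python
# Toy sketch of AI bias from unbalanced training data (hypothetical numbers).
# A one-feature "face detector": faces tend to have larger values, non-faces smaller.

def train_threshold(faces, non_faces):
    """Learn a cutoff halfway between the average face and the average non-face."""
    face_avg = sum(faces) / len(faces)
    non_face_avg = sum(non_faces) / len(non_faces)
    return (face_avg + non_face_avg) / 2

def is_face(value, threshold):
    return value >= threshold

# Unbalanced training set: 95 adult faces (values near 9) but only 5 child faces (near 5).
train_faces = [9.0] * 95 + [5.0] * 5
train_non_faces = [1.0] * 100

threshold = train_threshold(train_faces, train_non_faces)

# Test on new examples: adults sit well above the cutoff, many children fall below it.
adult_tests = [8.5, 9.0, 9.5]
child_tests = [4.0, 4.5, 5.0]

adult_acc = sum(is_face(v, threshold) for v in adult_tests) / len(adult_tests)
child_acc = sum(is_face(v, threshold) for v in child_tests) / len(child_tests)
print(f"cutoff: {threshold:.2f}")
print(f"adults recognized: {adult_acc:.0%}, children recognized: {child_acc:.0%}")
```

The detector recognizes every adult but misses most children, even though nothing in the code says "ignore children". The unfairness comes entirely from the unbalanced training data; retraining with equal numbers of adult and child faces lowers the cutoff and closes the gap.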