
Artificial intelligence has a problem with gender and racial bias. Here’s how to solve it.

By Joy Buolamwini 

Machines can discriminate in harmful ways.

I experienced this firsthand when, as a graduate student at MIT in 2015, I discovered that some facial analysis software couldn’t detect my dark-skinned face until I put on a white mask. These systems are often trained on images of predominantly light-skinned men. And so I decided to share my experience of the coded gaze: the bias in artificial intelligence that can lead to discriminatory or exclusionary practices.

Altering myself to fit the norm—in this case better represented by a white mask than my actual face—led me to realize the impact of the exclusion overhead, a term I coined to describe the cost of systems that don’t take into account the diversity of humanity. How much does a person have to change themselves to function with technological systems that increasingly govern our lives?
