I love this story, but it also worries me. Children are naturally curious and are eager to use technology as a lens to make sense of the world around them. But the results of this study tell me one thing: our children place a ton of authority in technology while not clearly understanding how that technology works. And now we are in the era of AI, where children are growing up not just digital natives, but AI natives (as my advisor, Cynthia Breazeal, likes to say). And while this might be harmless in the case of searching for what sloths eat (mainly leaves and occasionally insects, in case you were wondering), I fear children placing this much authority in more pervasive algorithms—YouTube’s recommendation algorithm, for example.
In 2018, Zeynep Tufekci penned an article in The New York Times called "YouTube, the Great Radicalizer," in which she recounts watching videos of political rallies, from Donald Trump as well as Hillary Clinton and Bernie Sanders, on YouTube. After she watched these videos, as any person seeking to be politically informed might, YouTube's recommendation algorithm, paired with the "Autoplay" feature, increasingly served her conspiratorial content. It didn't matter whether she had watched left-leaning or right-leaning content; the algorithm bombarded her with videos denying the Holocaust or alleging that the US government had coordinated the September 11 attacks.
It's this finding, combined with the fact that YouTube is the most popular social media platform among teens and that 70% of adults use YouTube to learn how to do new things or to follow news in the world, that makes me worried about the authority our children place in technology, and specifically in artificial intelligence.
So...how can we prepare our children to be conscientious consumers of technology in the era of AI? And can we empower them to be not only conscientious consumers of AI systems, but also conscientious designers of technology?
Empowering kids through AI + ethics education
These two questions are at the heart of my research at the Media Lab, where I seek to translate the theoretical findings of those who study artificial intelligence, design, and ethics into actionable teaching exercises.
The first pilot of my ethics + AI curriculum was a three-session workshop I ran last October at David E. Williams Middle School, where I was able to work with over 200 middle school students. Across the three sessions, students learned the basics of deep learning, explored algorithmic bias, and practiced designing AI systems with ethics in mind.
The goal of the course was to get students to see artificial intelligence systems as changeable, and to give them the tools to effect change. The idea is that if we are able to see a system as flexible, to know that it doesn't have to be the way it is, then the system holds less authority over us. The algorithm becomes, as Cathy O'Neil once put it, simply an opinion.
Getting students to see algorithms as changeable first meant getting students to see algorithms at all. They needed to become aware of the algorithms they interact with daily: Google Search or the facial recognition systems on Snapchat. Once students were able to identify algorithms in the world around them, they could then learn how these systems work. How, for example, does a neural network learn from a dataset of images?
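To make that question concrete, here is a minimal sketch of the "learn from labeled examples" loop, written in PyTorch on the MNIST digits purely for illustration; it is not the workshop's material, and the pilot used its own tools, but the underlying idea is the same: show the network labeled images, measure how wrong its guesses are, and nudge its weights to do better.

```python
# Illustrative sketch only: a tiny neural network learning from a dataset of images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Labeled example images (handwritten digits, used here as a stand-in dataset).
data = datasets.MNIST(root="data", train=True, download=True,
                      transform=transforms.ToTensor())
loader = DataLoader(data, batch_size=64, shuffle=True)

# A small network: flatten each image, pass it through two layers, output 10 scores.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(),
                      nn.Linear(128, 10))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:              # one pass over the training images
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)  # how wrong are the current guesses?
    loss.backward()                        # compute how each weight contributed to the error
    optimizer.step()                       # nudge the weights to reduce that error
```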
I believe it's essential to introduce ethics topics at exactly this point, as soon as students have learned the basic technical material. At the collegiate level, an ethics or "societal impact" lesson is often relegated to the last class of the semester, or to a separate course altogether. This is problematic for two reasons. First, it inadvertently teaches students to think of ethics as an afterthought, and not as fundamental to the algorithm building or design process. Second, instructors often run out of time! I think back to the many U.S. history classes I took in elementary, middle, and high school: my teachers never got past the Civil Rights Act of 1964. As an adult, I have a vague sense of what the Cold War was about, but couldn't tell you the details the way I can tell you about the French Revolution. This is not the fate we want for ethics topics in technology classes. Ethics content has to be integrated with technical content.
For example, during the pilot students trained their own cat-dog classifiers. But there was a twist: the training dataset was biased. Cats were overrepresented and dogs were underrepresented in the training set, leading to a classifier that labeled cats with high accuracy but often mislabeled dogs as cats. After seeing how the training data affected the classifier, students had the opportunity to re-curate their training set to make the classifier work equally well for both dogs and cats.
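For readers who want to see the mechanics behind that exercise, here is a minimal sketch using synthetic data and scikit-learn; the "cats" and "dogs" are just labels on generated features, and this is not the workshop's actual code. A classifier trained on a 90/10 split does well on the overrepresented class and poorly on the underrepresented one, and re-curating the training set so the classes are balanced closes much of the gap.

```python
# Illustrative sketch only: how a biased training set skews a classifier,
# and how re-curating (rebalancing) the data helps.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

# Biased training data: ~90% "cats" (label 0), ~10% "dogs" (label 1).
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
pred = clf.predict(X_test)
print("cat recall:", recall_score(y_test, pred, pos_label=0))  # typically high
print("dog recall:", recall_score(y_test, pred, pos_label=1))  # typically much lower

# "Re-curate" the training set: keep all dogs and an equal number of cats, then retrain.
cat_idx = np.where(y_train == 0)[0]
dog_idx = np.where(y_train == 1)[0]
rng = np.random.default_rng(0)
keep_cats = rng.choice(cat_idx, size=len(dog_idx), replace=False)
balanced_idx = np.concatenate([keep_cats, dog_idx])

clf_balanced = LogisticRegression(max_iter=1000).fit(X_train[balanced_idx], y_train[balanced_idx])
pred_b = clf_balanced.predict(X_test)
print("dog recall after re-curating:", recall_score(y_test, pred_b, pos_label=1))
```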