By Joi Ito
When the Boston public school system announced new start times last December, some parents found the schedules unacceptable and pushed back. The algorithm used to set these times had been designed by MIT researchers. About a week later, Kade Crockford, director of the Technology for Liberty Program at the ACLU of Massachusetts, emailed asking me to cosign an op-ed calling on policymakers to be more thoughtful and democratic when they consider using algorithms to change policies that affect the lives of residents. Kade, who is also a Director's Fellow at the Media Lab and a colleague of mine, closely tracks the key issues in digital liberties and is great at flagging things I should pay attention to. (At the time, I had no contact with the MIT researchers who designed the algorithm.)
I made a few edits to her draft, and we shipped it off to the Boston Globe, which ran it on December 22, 2017, under the headline "Don’t blame the algorithm for doing what Boston school officials asked." In the op-ed, we joined the chorus criticizing the changes, but we argued that the fault lay not with the algorithm itself but with the city's political process, which prescribed how the various concerns and interests would be optimized. That day, the Boston Public Schools decided not to implement the changes. Kade and I high-fived and called it a day.
Like the protesting families, Kade and I did what we thought was fair and just given the information we had at the time. A month later, a more nuanced picture emerged, one that I think offers insights into how technology can and should provide a platform for interacting with policy, and how policy can reflect a diverse set of inputs generated by the people it affects. In what feels like a particularly dark period for democracy, and at a time when technology is being deployed into society in increasingly uncontrolled ways, this lesson has given me a greater understanding of how we might more appropriately introduce algorithms into society. Perhaps it even gives us a picture of what a Democracy 2.0 might look like.
About a month later, having read the op-ed in the Boston Globe, Arthur Delarue and Sébastien Martin, PhD students in the MIT Operations Research Center and members of the team that built Boston’s bus algorithm, asked to meet me. In a very polite email, they told me that I didn’t have the whole story.
Kade and I met later that month with Arthur, Sébastien, and their adviser, MIT professor Dimitris Bertsimas. One of the first things they showed us was a photo of the parents who had protested against the schedules devised by the algorithm. Nearly all of them were white. The majority of families in the Boston school system are not white; white families represent only about 15 percent of the public school population in the city. Clearly something was off.
The MIT researchers had been working with the Boston Public Schools on adjusting bell times, including developing the algorithm that the school system used to understand and quantify the policy trade-offs of different start times and, in particular, their impact on school bus schedules. The main goals were to reduce costs and generate optimal schedules.
The MIT team described how the award-winning original algorithm, which focused on scheduling and routing, had started as a cost-calculation algorithm for the Boston Public Schools Transportation Challenge. Boston Public Schools had been trying to change start times for decades but had been stymied by the optimization problem: no one had found a way to improve the school schedule without tripling transportation costs, which is why the district organized the Transportation Challenge to begin with. The MIT team was the first to figure out a way to balance all of these factors and produce a solution. Until then, calculating the cost of the complex bus system had been such a difficult problem that it presented an impediment to even considering bell-time changes.
After the Transportation Challenge, the team continued to work with the city, and over the previous year they had participated in a community engagement process and worked with the Boston school system to build on top of the original algorithm, adding new features to produce a plan for new school start times. They factored in equity, since existing start times were unfair mostly to lower-income families, as well as recent research on teenage sleep showing that early start times may have negative health and economic consequences for high school students. They also tried to prioritize special education programs and prevent young children from leaving school too late. They wanted to do all this without increasing the budget, and ideally while reducing it.
From surveys, the school system and the researchers knew that some families in every school would be unhappy with any change. They could have added constraints to the algorithm to rule out some of the outlier situations, such as ending the school day at some schools at 1:30 pm, which was particularly exasperating for some parents. The solution they were proposing significantly increased the number of high school students starting school after 8 am and significantly decreased the number of elementary school students dismissed after 4 pm, so the youngest children wouldn't have to go home after dark. Overall it was much better for the majority of people. Although they were aware that some parents wouldn't be happy, they weren't prepared for the scale of the response from angry parents who ended up with start times and bus schedules they didn't like.
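To make that kind of trade-off engine concrete, here is a deliberately toy sketch in Python: score each candidate schedule against a set of weighted goals, throw out any schedule that violates a hard constraint, and keep the best. Everything in it, from the objective names to the weights to the numbers, is hypothetical and invented for illustration; it is not the MIT team's actual model.

```python
# Toy illustration only: not the MIT team's actual model.
# Every objective name, weight, and number here is hypothetical.

# Hypothetical weighted goals (higher score on each is better).
WEIGHTS = {
    "cost_savings": 1.0,      # lower busing cost
    "teen_sleep": 1.5,        # high schoolers starting after 8 am
    "early_dismissal": 1.0,   # elementary kids home before dark
    "equity": 2.0,            # spreading burdens evenly across families
}

def feasible(schedule):
    """Hard constraints, e.g. rule out the 1:30 pm dismissals parents hated."""
    return schedule["earliest_dismissal_hour"] >= 14  # 2 pm or later

def score(schedule):
    """Weighted sum of per-objective scores (each in [0, 1])."""
    return sum(weight * schedule[obj] for obj, weight in WEIGHTS.items())

# Three made-up candidate schedules produced by some upstream routing step.
candidates = [
    {"cost_savings": 0.9, "teen_sleep": 0.2, "early_dismissal": 0.3,
     "equity": 0.4, "earliest_dismissal_hour": 13},
    {"cost_savings": 0.6, "teen_sleep": 0.8, "early_dismissal": 0.7,
     "equity": 0.9, "earliest_dismissal_hour": 14},
    {"cost_savings": 0.7, "teen_sleep": 0.6, "early_dismissal": 0.5,
     "equity": 0.6, "earliest_dismissal_hour": 15},
]

best = max((s for s in candidates if feasible(s)), key=score)
print(best)  # the second schedule wins under these weights
```

The point of the sketch is that the weights and constraints are policy choices, not technical ones: change them, and a different schedule wins.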
Optimizing the algorithm for greater "equity" also meant that many of the planned changes were "biased" against families with privilege. In my view, the fact that an algorithm was making the decisions upset people as well. And the families who were happy with the new schedule probably didn't pay as much attention. The families who were upset marched on City Hall in an effort to overturn the planned changes. The ACLU and I supported the activist parents at the time and called "foul" on the school system and the city. Eventually, the mayor and the city caved to the pressure and killed off years of work, along with what could have been the first real positive change in Boston busing in decades.
While I'm not sure privileged families would voluntarily give up their good start times to help poor families, I think that if people had understood what the algorithm was optimizing for (the sleep health of high school kids, getting elementary school kids home before dark, supporting kids with special needs, lowering costs, and increasing equity overall), they would have agreed that the new schedule was, on the whole, better than the previous one. But when something suddenly becomes personal, people tend to feel strongly and protest.
It reminds me a bit of a study conducted by the Scalable Cooperation Group at the Media Lab, based on earlier work by Joshua Greene, which showed that people would support a self-driving car sacrificing its passenger if doing so would save the lives of a large number of pedestrians, but that they personally would never buy a passenger-sacrificing self-driving car.
Technology is amplifying complexity and our ability to change society, altering the dynamics and difficulty of consensus and governance. But the idea of weighing trade-offs isn't new, of course. It's a fundamental feature of a functioning democracy.
While the researchers surveyed and met with parents and school leadership as they developed the algorithm and the plan, the parents were not aware of all the factors that went into the final optimization. The trade-offs required to improve the overall system were not clear, and the potential gains sounded vague compared to the very specific and personal impact of the changes that affected them. And by the time the message hit the nightly news, most of the details and the big picture were lost in the noise.
A challenge in the case of the Boston Public Schools bus-route changes was the somewhat black-box nature of the algorithm. The Center for Deliberative Democracy has used a process it calls deliberative polling, which brings together a statistically representative group of residents in a community to debate and deliberate policy goals over several days, in hopes of reaching a consensus about how a policy should be shaped. If the residents of Boston had been able to easily understand the priorities being set for the algorithm, and to hash them out through a process like this, they likely would have better understood how the results of their deliberations were converted into policy.
After our meeting with the team that built the algorithm, for instance, Kade introduced the researchers to David Scharfenberg, a reporter at the Boston Globe, who wrote an article about them that included a very well done simulation letting readers play with the algorithm and see how cost, parent preferences, and student health interact as trade-offs. A tool like that would have been extremely useful in explaining the algorithm from the start.
Boston's effort to use technology to improve its bus routing and school start times offers a valuable lesson in how to ensure that such tools aren't used to reinforce and amplify biased and unfair policies. Algorithms can absolutely make systems more equitable and fair, but they won't succeed without our help.