
Ethics and Governance of Artificial Intelligence Fund commits $7.6M to projects that bolster civil society efforts

The Ethics and Governance of Artificial Intelligence Fund today announced $7.6 million in support to nine organizations that aim to bolster the voice of civil society in shaping the development of artificial intelligence in the public interest. The funding seeks to lend momentum to existing initiatives and spark new efforts internationally.

“Artificial intelligence will affect every aspect of modern life,” said Alberto Ibargüen, president of the John S. and James L. Knight Foundation. “The issues of ethics and governance of new and pervasive technology are complex and profound, and the work must not only involve technologists. This initial round of support, focused on the three areas of media, criminal justice and autonomous vehicles, is just a beginning.”

The fund was launched in January 2017 with initial support of $27 million from Knight Foundation, Omidyar Network, LinkedIn founder Reid Hoffman, the William and Flora Hewlett Foundation and Jim Pallotta. The Miami Foundation is serving as fiscal sponsor for the fund.

Foundational to this initial round of funding is $5.9 million in support to the Berkman Klein Center for Internet & Society at Harvard University and the Massachusetts Institute of Technology (MIT) Media Lab. These two institutions serve as the fund’s anchor institutions, operating as a primary base of activity in advancing the goals of the effort. The funding will be used to support work in three initial core areas: media and information quality; social and criminal justice; and autonomous vehicles. Projects will address common challenges in these core areas such as the global governance of artificial intelligence, and the ways in which the use of artificial intelligence may reinforce existing biases, particularly against underserved and underrepresented populations.

“This grant fuels continued collaboration between Berkman Klein and the Media Lab as we join others in breaking down the silos between technical research, public policy and law, and the social sciences in the machine learning space,” said Joi Ito, director of the MIT Media Lab. “This will include research on society’s expectations for AI, efforts to engage the public on the governance of AI, and our work to bring industry into dialogue with the academy with efforts that will ultimately deploy working projects and systems.”

The fund is also announcing $1.7 million in support to seven organizations that will complement the skills and expertise of the anchor institutions. “MIT and Harvard are only one part of a much larger, ongoing global conversation around these technologies,” said Urs Gasser, executive director of the Berkman Klein Center. “These initial grants represent a commitment toward supporting the broader conversation around AI, inviting diverse perspectives and voices while focusing on concrete challenges and solutions.”


Supporting a Global Conversation

● Digital Asia Hub (Hong Kong): Digital Asia Hub will investigate and shape the response to important, emerging questions regarding the safe and ethical use of artificial intelligence to promote social good in Asia and contribute to building the fund’s presence in the region. Efforts will include workshops and case studies that will explore the cultural, economic and political forces uniquely influencing the development of the technology in Asia.

● ITS Rio (Rio de Janeiro, Brazil): ITS Rio will translate international debates on artificial intelligence and launch a series of projects addressing how artificial intelligence is being developed in Brazil and in Latin America more generally. On behalf of the Global Network of Internet and Society Research Centers, ITS Rio and the Berkman Klein Center will also co-host a symposium on artificial intelligence and inclusion in Rio de Janeiro, bringing together almost 80 centers and an international set of participants to address diversity in technologies driven by artificial intelligence, and the opportunities and challenges these technologies pose around the world.

Tackling Concrete Challenges

● AI Now (New York): AI Now will undertake interdisciplinary, empirical research examining the integration of artificial intelligence into existing critical infrastructures, looking specifically at bias, data collection, and healthcare.

● Leverhulme Centre for the Future of Intelligence (Cambridge, United Kingdom): Leverhulme Centre for the Future of Intelligence will focus on bringing together technical and legal perspectives to address interpretability, a topic made urgent by the European Union’s General Data Protection Regulation coming into force next year.

● Access Now (Brussels, Belgium): Access Now will contribute to the rollout of the General Data Protection Regulation by working closely with data protection authorities to develop practical guidelines that protect user rights, and educate public and private authorities about rights relating to explainability. The organization will also conduct case studies on data protection issues relating to algorithms and artificial intelligence in France and Hungary.

Bolstering Interdisciplinary Work

● FAT ML (Global): FAT ML will host a research conference focused on developing concrete, technical approaches to securing the values of fairness, accountability, and transparency in machine learning.

● Data & Society (New York): Data & Society will conduct a series of ethnographically-informed studies of intelligent systems in which human labor plays an integral part, and will explore how and why the constitutive human elements of artificial intelligence are often obscured or rendered invisible. The research will produce empirical work examining these dynamics in order to facilitate the creation of effective regulation and ethical design considerations across domains.