How could the increased use of AI lead humans to form addictive relationships with AI?
Robert Mahari talks to host Krys Boyd about the potential risks of human-AI companionship.
Something peculiar has happened: people have started forming relationships with AI systems.
In the struggle over who can train AI models and how, there’s a casualty many people don’t realize: the open web.
As junk web pages written by AI proliferate, the models that rely on that data will suffer.
New research from the Data Provenance Initiative has found a dramatic drop in content made available to the collections used to build AI.
The project’s purpose is to improve transparency, documentation, and informed use of datasets in AI.
Researchers created a tool that enables AI practitioners to find data that suits their models, which could improve accuracy and reduce bias.
The allure of AI companions is hard to resist. Here’s how innovation in regulation can help protect people.
Robert Mahari, a PhD student in the Human Dynamics group, and other experts talk to Inside Higher Ed about copyright & generative AI tools.
Postdoc Ziv Epstein, PhD student Robert Mahari, & Harvard Law lecturer Jessica Fjeld consider the issues of generative AI and copyright law.
MIT Media Lab postdoc Ziv Epstein discusses issues arising from the use of generative AI to make art and other media.
Human Dynamics group members explore how technical CBDC design choices can be used to make a CBDC inherently resistant to money laundering.
Thibault Schrepel discusses a new paper on roads to improve antitrust law with co-authors from the Human Dynamics group.