As AI models become better at mimicking human behavior, it’s becoming increasingly difficult to distinguish between real human internet users and sophisticated systems imitating them.
That’s a real problem when such systems are deployed for nefarious ends like spreading misinformation or committing fraud, and it makes everything you encounter online much harder to trust.
A group of 32 researchers from institutions including OpenAI, Microsoft, MIT, and Harvard has developed a potential solution—a verification concept called “personhood credentials.” These credentials prove that their holder is a real person, without revealing any further information about the person’s identity. The team explored the idea in a non-peer-reviewed paper posted to the arXiv preprint server earlier this month.
Personhood credentials rely on the fact that AI systems still cannot defeat state-of-the-art cryptography or pass for real people in the offline, physical world.
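The basic flow is easy to sketch: a trusted issuer verifies someone's personhood once, offline, and then hands them a signed token that carries no identity information; online services check only the issuer's signature. The paper's actual designs go further, using techniques like blind signatures and zero-knowledge proofs so that even the issuer cannot link a credential back to its holder, but a minimal, simplified version of the "prove you're a person, reveal nothing else" idea might look like the following Python sketch (the function names and the use of Ed25519 signatures are illustrative assumptions, not the paper's protocol):

```python
# Simplified sketch of a personhood credential, assuming a trusted issuer
# that verifies applicants offline. Real designs add blind signatures or
# zero-knowledge proofs so credentials are unlinkable; omitted here.
import secrets
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# --- Issuer side: key generated once; signing happens after an offline,
# in-person check that the applicant is a real human ---
issuer_key = Ed25519PrivateKey.generate()
issuer_public_key = issuer_key.public_key()

def issue_credential() -> tuple[bytes, bytes]:
    """Issue a pseudonymous token plus the issuer's signature over it.

    The token is just random bytes: it proves the issuer vouched for a
    person, but encodes nothing about who that person is.
    """
    token = secrets.token_bytes(32)
    signature = issuer_key.sign(token)
    return token, signature

# --- Holder side: presents the credential to an online service ---
token, signature = issue_credential()

# --- Service side: verifies the issuer's signature and learns nothing
# beyond "a verified person holds this token" ---
def verify_personhood(token: bytes, signature: bytes) -> bool:
    """Accept the token iff it was signed by a trusted issuer."""
    try:
        issuer_public_key.verify(signature, token)
        return True
    except InvalidSignature:
        return False

print(verify_personhood(token, signature))                    # True
print(verify_personhood(secrets.token_bytes(32), signature))  # False
```

The security of a scheme like this rests on exactly the two assumptions the researchers highlight: an AI system cannot forge the issuer's signature without breaking modern cryptography, and it cannot obtain a credential in the first place without showing up as a human in the physical world.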