Today’s deepfakes are convincing enough to fool an entire room of generative AI experts at first glance. I know this because Reality Defender’s team is composed of some of the most respected names in artificial intelligence and machine learning, and every one of them has, at one point, second-guessed their senses in our office after coming across a particularly persuasive piece of content. (Anyone in our field who claims to be note-perfect at manually identifying every deepfake they’ve seen is sorely misrepresenting themselves.)
While this may draw from a small sample size of thirty, it goes to show that even those who spend all day, every day looking at AI-generated materials cannot reliably identify them manually. This is one of the reasons why we built Reality Defender: as generative AI advances and blows past the uncanny valley, it takes far more than the naked eye or a trained ear to discern between real and fake — and to do so at scale.
Yet as we built our best-in-class deepfake detection platform with the aim of providing equitable access to truth for all people, we made the conscious decision early on not to offer it directly to consumers, instead seeking to cover and protect as many people as possible.
Covering Everyone Possible
If one person has access to deepfake detection and another does not, it creates an uneven world in which some people have access to the truth while others live without knowing real from fake. Such inequality is not only a horrible way to live; it is harmful on a societal level. Gated truth, especially on essential matters like news and government, further fuels the division and distrust that damaging deepfakes already exist to exploit.
Then there is the matter of who should constantly worry about deepfakes. Though we believe awareness of deepfakes should be part of one’s media literacy education, we also believe the average citizen should not have the added task of pondering and verifying whether the media they consume on their platforms of choice is, in fact, real. This is especially true given the volume of content we consume on our devices every day, where verifying every post and every video is a daunting, exhausting, and unreasonable task.
To avoid this inequality of truth without driving every user into the depths of paranoia, we work with the largest organizations, governments, platforms, and institutions to implement Reality Defender’s deepfake detection platform at a high level. By adding deepfake detection to a content moderation stream, news verification backend, or call center, ordinary people don’t need to worry that their beloved platforms and services are serving up bogus, misleading media. Instead, the places where they consume content or could be vulnerable to deepfake-based attacks are proactively protecting them from the get-go.
This approach allows Reality Defender to cover potentially billions of consumers and users at every turn, instead of relying on piecemeal do-it-yourself scans and putting the onus on users themselves. Users should not have to think about whether their voice is being exploited by bad actors to transact with their bank, or whether the media in their feed is real. The platforms serving them should silently protect them, just as they thwart other malicious attempts and illegal content.
We’re proud to partner with governments around the world, major financial institutions, and the most trusted media brands to make our vision for equitable access to truth a reality today. As more concerned entities sign on and add Reality Defender to their processes and pipelines, the scope of protection against weaponized deepfakes broadens. This means less questionable media passed off as real, fewer successful attempts at fraud, and stronger democracies. Though bad actors and those with ill intent will always find new ways of weaponizing AI-generated media, we will continue to be one step ahead, working tirelessly to protect anyone and everyone from these ever-evolving threats.