Early Monday morning, news broke of a deepfake of President Biden robocalling residents of New Hampshire and imploring them not to vote.
The call is a laughably bad fake, with its monotonous delivery, odd timing, and lack of the natural pauses and verbal fillers of real speech. Yet to the untrained ear and, more importantly, to the easily persuaded, sounding even vaguely like Biden is enough to throw truth into question and throw a wrench into a free and fair election.
As the old saying (often attributed to Mark Twain) goes, a lie can travel halfway around the world while the truth is still putting on its boots. Correcting those already convinced by this robocall would be a Sisyphean task.
We’re not even past January and deepfakes are already making an impact on the election. Chances are that this incident — which is now under investigation — will look like one of the more benign ones by the time November rolls around.
This is especially true as legislation with teeth to prevent such abuses shows no signs of passing anytime soon. Telecommunications companies have also yet to implement widespread moderation systems that block this kind of content the way they block spoofed spam calls (via the STIR/SHAKEN protocols).
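For context on why STIR/SHAKEN is relevant here: it works by attaching a signed "PASSporT" token (a JWT, per RFC 8225) to each call, carrying an attestation level that says how strongly the originating carrier vouches for the caller ID. Below is a minimal sketch of decoding such a token to read that attestation claim — the token values are hypothetical, and a real verifier would also validate the signature against the certificate referenced in the `x5u` header, which is omitted here.

```python
import base64
import json

def decode_passport(token: str) -> dict:
    """Decode the header and claims of a STIR/SHAKEN PASSporT (RFC 8225).
    Illustration only: does NOT verify the signature."""
    def b64url_decode(part: str) -> bytes:
        # JWTs strip base64url padding; restore it before decoding
        return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))
    header_b64, payload_b64, _signature = token.split(".")
    return {
        "header": json.loads(b64url_decode(header_b64)),
        "claims": json.loads(b64url_decode(payload_b64)),
    }

def b64url_encode(obj: dict) -> str:
    raw = json.dumps(obj, separators=(",", ":")).encode()
    return base64.urlsafe_b64encode(raw).decode().rstrip("=")

# Hypothetical, unsigned example token for illustration
header = {"alg": "ES256", "typ": "passport", "ppt": "shaken",
          "x5u": "https://example.com/cert.pem"}
claims = {"attest": "A",  # "A" = full attestation: carrier vouches for the caller ID
          "orig": {"tn": "12025551234"},
          "dest": {"tn": ["16035555678"]},
          "iat": 1706000000,
          "origid": "de305d54-75b4-431b-adb2-eb6b9e546014"}

token = f"{b64url_encode(header)}.{b64url_encode(claims)}.fake-signature"
decoded = decode_passport(token)
print(decoded["claims"]["attest"])  # prints "A"
```

Calls with a low attestation level ("B" or "C") or a failed signature check can then be flagged or blocked downstream — the same gatekeeping logic that could, in principle, be extended to synthetic-voice detection.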
What Future Incidents Should Be Expected?
Anything and everything is possible this year, making speculation rather pointless at this stage. The democratization of generative AI allows anyone with ill intent, at any level of technical expertise, to spin up a sophisticated disinformation attack across all media types.
How Could Reality Defender Help?
Nearly every method of communication and consumption is now at risk of a deepfake-driven attack or disinformation campaign, so ensuring these methods can detect malicious deepfakes is crucial at the very least. Doing the opposite this year (read: absolutely nothing) could lead to the quick and permanent degradation of societal and democratic norms.
Our team is working with clients to implement deepfake detection at crucial places where citizens consume content in their daily lives, as well as places that remain vulnerable to deepfake-driven attacks. This means social media platforms (aiding with moderation), government (assisting with file verification), and traditional media outlets (verifying content for stories as they happen), among many other industries and solutions.
After all, these are the places used by our friends, family, and loved ones. We’re working hard to make sure they and millions of others never have to question the validity of what they see, read, or hear, especially when it involves the future of democracy.