Securing democracy is one of the most important tenets of any free society. A secure democracy safeguards the rights, freedoms, and voices of its citizens, ensuring power lies in the hands of the people and governance remains transparent, accountable, and just.
With the introduction and advancement of artificial intelligence models and systems, any malicious actor with internet access and little to no technical know-how can create a believable and widespread threat to our democratic process, throwing a wrench into an otherwise secure democracy in mere seconds. This is a multi-faceted problem that will not simply go away, and it will not solve itself on the world's most popular social media platforms without extensive legislation ensuring that it does.
Though user-generated content (and, yes, user-generated disinformation) is a spiraling problem that needs immediate action, the use of AI-generated and synthetic media by political parties with the express purpose of misleading the public is a direct threat to our democracy. Such content, and the syndication of such disinformation, demands immediate, impactful action before another piece of content in this vein is created, shared at scale, and allowed to further disrupt the foundations of our secure democracy.
Combatting political disinformation should always be priority number one for all parties, regardless of political alignment. By spreading disinformation on a grand scale to supporters, potential supporters, social media followers, and television viewers, politicians and their campaigns distort truths or invent new ones that fly in the face of not only democracy, but truth and justice. One cannot hold an open and fair election (a core tenet of a secure democracy) when believable yet falsified depictions of political opponents exist in the same media sphere as verified facts and actual events. Allowing such media from prospective elected officials to endure will grind our secure democracy to a halt.
This is a pressing matter, one that needed solving yesterday. Half-measures that merely delay the fight against candidate-created and candidate-disseminated AI-generated disinformation will not solve this crisis, certainly not within the sixty days the Federal Election Commission has currently allotted for public comment.
As experts in the field of detecting and protecting against inauthentic content, we implore the FEC to forgo any measures that would delay its ability to detect deepfakes and AI-generated content, and to institute a two-pronged approach instead: a robust, proactive detection method that scans all political materials from all parties and all prospective candidates, paired with enforcement in the form of fines, removal of content, and other more direct actions.
Reality Defender’s deepfake and AI-generated media detection platform can solve the deepfake problem for the FEC today, tomorrow, and always. Ours is a multi-model, multi-modal approach to detection: we apply every available method to identify manipulated or outright falsified media, then weigh the results from those models to paint a complete picture of what may or may not be manipulated. Just as we help clients and governments proactively scan media for deepfakes, we also proactively develop models to respond to tomorrow’s threats. This allows our platform to stay several steps ahead of bad actors and those spreading disinformation.
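To illustrate, purely as a sketch, the kind of weighted, multi-model aggregation described above: several detectors each score a piece of media, and their results are combined into a single verdict. Every model name, weight, score, and threshold below is hypothetical and does not describe Reality Defender's actual system.

```python
# Illustrative sketch only: a weighted, multi-model ensemble of detector scores.
# All names, weights, and thresholds are hypothetical, not Reality Defender's own.

from dataclasses import dataclass


@dataclass
class ModelResult:
    name: str       # hypothetical detector name
    score: float    # probability the media is manipulated, in [0, 1]
    weight: float   # relative trust placed in this detector


def aggregate(results: list[ModelResult], threshold: float = 0.5) -> tuple[float, bool]:
    """Combine per-model scores into one weighted score and a flag decision."""
    total_weight = sum(r.weight for r in results)
    if total_weight == 0:
        raise ValueError("At least one model must carry nonzero weight.")
    combined = sum(r.score * r.weight for r in results) / total_weight
    return combined, combined >= threshold


if __name__ == "__main__":
    # Hypothetical outputs from three detectors examining one piece of media.
    results = [
        ModelResult("visual-artifact-detector", score=0.82, weight=0.5),
        ModelResult("audio-consistency-detector", score=0.64, weight=0.3),
        ModelResult("metadata-forensics", score=0.40, weight=0.2),
    ]
    combined, flagged = aggregate(results)
    print(f"combined score: {combined:.2f}, flagged as manipulated: {flagged}")
```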
We are wholly prepared to support and scale to the needs of the FEC in detecting and fingerprinting deepfakes and AI-made media today, stopping political deepfakes today, and securing democracy today as it faces the ever-growing threats posed by deepfakes and by the politicians who weaponize them. Stalling or taking toothless approaches will only bring about more disinformation, more discord, and a scale of problems that our government and its agencies are ill-prepared to address.
I do not write this as a mere business proposal. We at Reality Defender built and shaped this company not as businesspeople, but as concerned citizens of the world who believe in a secure and free democracy, one that has no place for disinformation. I myself left some of the largest social media platforms to work amongst the leading research minds at Reality Defender because I have personally witnessed and combatted a deluge of disinformation, all while the barrier to entry for creating and disseminating such content continues to fall. As elections loom and the crisis of big-T Trust worsens across the open internet, tech, and media landscapes, my colleagues and I believe content authenticity and verification have never been more important.
Let us put an end to this problem today. Anything else will only further damage the future of our democracy.