“What will happen with AI and the 2024 elections?”
This is the most common question asked of the Reality Defender team by clients, reporters, friends, and loved ones. It’s something we talk about frequently amongst ourselves as experts in AI, looking towards the future and trying to prevent any and every worst-case scenario from happening.
Though we are not clairvoyant, we are better attuned than most to the overlap between developments in AI and their real-world uses, especially in global politics. As we enter the fall season, three issues dominate our team’s discussions more than any others, as they’re what’s keeping us up at night.
Legislation is crucial to barring AI-generated media from American election materials.
We’re more than familiar with candidates stretching the truth in election advertisements. That said, using AI-generated media to falsely depict or defame a political opponent is wholly incompatible with having a fair election.
In recent months, we’ve witnessed candidates and parties deliberately release quasi-believable AI-generated media in the form of campaign advertisements. The key word here, however, is believable. All parties and their representatives should agree to avoid using such materials to sway public opinion, lest they create a chaotic atmosphere of falsehoods believable enough to be taken as truth by millions.
As we know, a promise is only as good as the paper it’s written on. This is why legislation and related enforcement are needed: to legally prevent all parties and participants from using AI-generated media in their campaign materials across all mediums. We’re following current actions taken by bipartisan Senate committees, the FEC, and the White House, and we are hopeful that appropriate action will be taken in time.
Platforms need to implement methods of preventing AI-generated disinformation — particularly those sent by state-sponsored attackers.
We are concerned about candidates using generative AI in their materials. We are even more worried about state-sponsored attackers spreading deepfakes and disinformation-driven content on every conceivable platform, social or otherwise. (This is already happening with Taiwan’s election.)
Even if legislation is passed to prevent American candidates from using AI-generated media in their materials, legislation that forces platforms to scan user-generated content for manipulations is quite a ways off, if it ever happens. It should happen, as disarming disinformation-spreading bad actors and nations looking to upend democracy should be among the highest priorities for any country.
We’ve seen past elections in which state-sponsored attackers swayed public opinion through text alone. Add convincing-enough deepfakes into the mix and you have an infinitely greater problem on your hands, one testing the resolve and media literacy of even the most educated and astute users and voters.
We need to prepare for the unknown.
Think of all the generative AI advancements released in the last year that were not actually made in the last year. The technology that reached such wide adoption over the last twelve months existed for quite some time before getting into the hands of millions. This isn’t a new pattern of testing and release; it has occurred for many years, and it will continue to happen.
As a company in the realm of artificial intelligence and machine learning, we’re privy to the existence of technologies that won’t see release for months (if not years) to come. We also have an ongoing dialogue amongst ourselves on where the tools of today and tomorrow could head down the line. This is part of what makes Reality Defender so special: by positing and planning for tomorrow, we’re able to address threats and problems before they happen.
One cannot wholly plan for the unknown, both in terms of model and tool creation and their subsequent uses to sow discord and cause harm. Still, we can and will leave no stone unturned, and not just because we built the single best solution to fight back and prevent harm from being done in the first place. We do this because it affects us, our loved ones, and the world at large, and it is crucial to us, as technologists and human beings, to pull out all the stops and help.