Over the weekend, NBC News reported that Steve Kramer — a political consultant working for one-time candidate Rep. Dean Phillips — was the driving force behind late January's deepfake robocall in New Hampshire, in which a fake President Biden urged voters to skip the primary election. In a statement to the press, Kramer not only admitted to making the calls (an act the FCC has since ruled illegal), but showcased just how inexpensive and easy the scheme was for him, and would be for others, to pull off, underscoring the urgent need for deepfake regulation.
While Kramer's use of a deepfake and his subsequent regulatory call to arms could be seen as a tactic to soften any potential ramifications stemming from his actions, the incident highlights three major factors that allow similar schemes to persist, potentially at larger and more damaging scales. Even though certain uses of deepfakes are now illegal, they are still causing irreparable harm around the world with nary a useful roadblock in sight.
$500
This is (allegedly) how much was spent recreating the voice of President Joe Biden and sending it out via robocall. According to Kramer, “With a mere $500 investment, anyone could replicate my intentional call.” Though this figure doesn't account for the hiring of outside parties to help — Kramer allegedly enlisted a magician from New Orleans to assist in the effort — it shows how the multitude of user-accessible deepfake creation platforms can cheaply and accurately replicate the unique attributes of anyone's voice.
1 Day
This is the approximate length of time between the commissioning of the calls and their delivery, sent out right before the New Hampshire primary to the phones of 5,000 voters likely to vote Democratic.
It was later determined that Kramer et al. used ElevenLabs to create the deepfake of Biden. Though the account used in this incident was suspended, ElevenLabs and inexpensive voice deepfake creation tools like it have little to no moderation preventing anyone from repeating the same incident or starting new ones. Nor are they required to by law, as strict enforcement against the creation of damaging deepfake impersonations is virtually nonexistent in the U.S. and most Western countries. Anyone with passable internet browsing skills can accomplish the same feat to sow political discord, commit financial fraud, and carry out countless other acts with no safeguards stopping them.
18 Days
This is how long it took from the time the calls went out until the FCC made future calls like Kramer's illegal. It also took 35 days for independent journalists (not law enforcement) to identify Kramer as the culprit behind the incident.
This is deeply troubling, to put it mildly. Because of the lack of legislation, enforcement, moderation, and protections against deepfakes of all mediums, actions against their use are reactive and move at a comparative snail's pace. By the time people have a chance to question whether something they saw or heard is real, let alone react accordingly, deepfakes have already done lasting damage and those weaponizing them have moved on.
Without blanket legislation that addresses potential future uses and abuses of deepfakes, our elected officials will continue to consider emerging and novel use cases only after they first appear. Without trust and safety teams proactively scanning all content for deepfakes, instead of relying on toothless watermarking techniques, we'll see many more permutations of Kramer's incident with more dangerous outcomes.
Yet we are not without hope. Incidents like Kramer's are what the Reality Defender team and I work to combat. We believe that by implementing deepfake detection, requiring action by platforms and pertinent telecommunications entities, and building legislation that looks forward instead of trailing behind, we will never have to live another day in a world where we can't trust the voice on the other end of the call.