Earlier this week, YouTube celebrity Mr. Beast, actor Tom Hanks, and journalist Gayle King were victims of different deepfake-driven scams on social media.
In just 36 hours, all three personalities had their likenesses lifted, copied, and regurgitated in the name of shilling bogus products in barely convincing video ads on Instagram and X. All three have endorsement deals of their own, carefully curated brand images, and focus-tested brand values that run counter to the dental scams, weight-loss nonsense, and bogus giveaways their AI-generated likenesses were caught hawking to millions.
Each celebrity victim responded in the same manner: posting a screen capture of the video in question to Instagram with a warning message on top, telling fans and viewers that what they saw was not them and that they were not actually peddling these out-of-character products or services.
This is currently the only way to respond to any sort of deepfake attack or abuse: retroactively, to your own audience, and well after the damage has already been done. Seeing as these videos were posted, boosted via paid advertising, and left up for hours on the platforms in question, it’s clear that little to no human moderation (let alone an AI-driven deepfake detection system) exists to prevent these specific abuses from happening in the first place.
The success of these posts — even if just for a few hours — signals to the larger world of bad actors that they, too, can deepfake anyone, siphon off their brand value to push traffic to bogus products or pretty much anything, and disappear into the ether until it’s time to try again.
[Image: Results of the Mr. Beast deepfake scan on Reality Defender]
We at Reality Defender expect this type of event to happen more frequently and with more damaging results. After all, anyone with a little technical know-how, a few snippets of publicly available media, and some off-the-shelf tools can send one of the world’s most famous actors or its most famous YouTuber into a panic. Other celebrities, notable names, and internationally known brands face just as much risk (if not more), especially when platforms have repeatedly shown how unprepared they are to manage those risks.
What Brand Managers Can Do
With no deepfake moderation or detection to speak of on social media, those managing the world’s most famous brands currently have limited options for dealing with this kind of attack. That said, there are preemptive measures brand managers can take in case they find themselves on the receiving end of a deepfake attack or scam.
Craft a preemptive response: The more time it takes to respond to a deepfake attack or scam, the more damage is done. Have a pre-written template at the ready to post immediately upon the first sign of a deepfake impersonation event. Be sure to post the response on all official channels, website included.
Speak with social platforms’ brand liaison teams: Individuals with large followings have an equally large amount of sway with the platforms they call home. As every brand and notable name is susceptible to such attacks, (sternly) asking platforms’ brand liaisons to put a plan of action against deepfakes in place can pay dividends. These liaisons and their platforms know their continued survival owes largely to the activity of widely followed profiles. (Should a network not have such liaisons or managers, consider speaking with its business development team.)
Raise the issue publicly: As Hanks, King, and Mr. Beast have each done to some degree, so too can your brand ambassadors or notable names in owned communications and in the press. Public awareness not only brings this type of attack to the attention of fans and everyday citizens; it also puts added pressure on platforms to address the problem at a time when they otherwise have no legal incentive to do so.
As always, implementing preemptive deepfake detection can address this issue, and nearly instantaneously. While some institutions (financial, government, and traditional media come to mind) recognize the immediate and future risks of widespread deepfake usage and have adopted such measures, social media platforms have been, and will likely remain, stagnant in adoption so long as they are not required to act. Yet with careful planning, a bit of persuasion, and a willingness to raise the issue publicly, brand managers can take impactful and meaningful steps toward mitigating these crises and potentially changing the landscape to prevent future attacks.
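For teams curious about what preemptive screening could look like in practice, the sketch below shows one way a brand or platform might check incoming ad creative before it goes live. It is a minimal illustration only: the detection endpoint, credential, request fields, response schema, and score threshold are hypothetical placeholders for whatever detection service is actually used, not any vendor’s real API.

```python
# Minimal sketch: screen a video ad with a deepfake detection service
# before it is published. All endpoint/field names below are hypothetical.

import requests

DETECTION_ENDPOINT = "https://api.example-detector.com/v1/analyze"  # placeholder URL
API_KEY = "YOUR_API_KEY"                                            # placeholder credential
MANIPULATION_THRESHOLD = 0.80   # illustrative cutoff for holding a video for review


def screen_ad_creative(video_path: str) -> bool:
    """Return True if the video clears the check, False if it should be held for human review."""
    with open(video_path, "rb") as f:
        response = requests.post(
            DETECTION_ENDPOINT,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"media": f},
            timeout=60,
        )
    response.raise_for_status()
    result = response.json()

    # Assume the service returns a manipulation probability between 0 and 1.
    score = result.get("manipulation_probability", 0.0)
    if score >= MANIPULATION_THRESHOLD:
        # Hold the creative and alert a human reviewer instead of publishing it.
        print(f"{video_path}: likely manipulated (score={score:.2f}); held for review")
        return False

    print(f"{video_path}: no manipulation detected (score={score:.2f}); cleared to run")
    return True


if __name__ == "__main__":
    screen_ad_creative("incoming_ad_creative.mp4")
```

Wiring a check like this into an ad review or publishing pipeline puts detection in front of the audience rather than behind it, which is the crux of the preemptive approach described above.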