The continued rise in deepfake incidents has prompted major tech companies to respond with their own single-model detection tools. These new detectors are narrow in scope, built to spot deepfakes created by a single model or generation type, with little if any efficacy against the thousands of other generation models and techniques in existence.
Any new effort at detecting deepfakes is a welcome addition in fighting the onslaught of AI-generated media used to deceive or defraud. That said, these recent developments underscore an issue we continue to see in the AI arms race: the touting of narrowly focused solutions at a time when a wide net is needed. A detection model that can reliably spot images created with one popular generation method is marginally helpful, but this piecemeal approach fails to address the many diverse approaches to deepfake creation. Deploying such a narrow detection model in a real-world setting is akin to combing an endless expanse of sand for plastic with a metal detector.
Multiple Models, Multiple Modalities
Since its inception, Reality Defender has focused on multi-model inference: detection that covers a broad spectrum of existing generative AI techniques and models while simultaneously adapting to the newest ones. We run multiple concurrent models for each media type, each looking for different signatures left behind by the generative process.
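To make the idea concrete, here is a minimal sketch in Python of how an ensemble-style detector can fuse several single-signal models. The names and the fusion rule are illustrative assumptions for this post, not Reality Defender's actual implementation:

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical detector type: takes raw media bytes and returns a
# probability in [0, 1] that the media is AI-generated.
Detector = Callable[[bytes], float]

@dataclass
class EnsembleDetector:
    """Fuses scores from several independent single-signal detectors.

    Illustrative sketch only: a production system would calibrate,
    weight, and continuously retrain detectors as new generators appear.
    """
    detectors: List[Detector]

    def score(self, media: bytes) -> float:
        # Run every detector concurrently in principle; each one looks
        # for a different generative signature (e.g., GAN fingerprints,
        # diffusion artifacts, face-swap blending seams).
        scores = [detect(media) for detect in self.detectors]
        # Max-fusion: flag the media if ANY detector is confident, so an
        # attacker must evade every model at once, not just one.
        return max(scores)

    def is_deepfake(self, media: bytes, threshold: float = 0.5) -> bool:
        return self.score(media) >= threshold
```

Max-fusion is the simplest possible aggregation rule; real systems would typically calibrate per-model thresholds and weight models by measured reliability rather than treating all scores equally.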
After all, malicious actors leveraging AI tools are not going to volunteer which technique they used. When faced with a single detection method that could thwart an attack, bad actors can simply switch to the next generative model the detector does not cover, turning the serious business of flagging deepfakes before they spread into a game of whack-a-mole. Multiple models trained on thousands of different generative techniques are therefore far more effective.
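A rough back-of-the-envelope illustration of why coverage compounds (the numbers here are hypothetical, and real detectors overlap rather than covering neatly independent slices of the generator landscape):

```python
# Hypothetical arithmetic, not measured results: if one detector
# reliably catches output from 5% of generators in the wild, and
# detectors target largely independent generator families, then the
# chance that every detector misses shrinks exponentially.
single_coverage = 0.05  # one detector's share of generators

for k in (1, 5, 20, 50):
    miss_rate = (1 - single_coverage) ** k  # all k detectors miss
    print(f"{k:>2} detectors -> ~{1 - miss_rate:.0%} coverage")
```

Under these toy assumptions, one detector covers about 5% of generators, while fifty covers roughly 92%. The independence assumption is generous, but the direction of the effect is the point.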
A Combined Approach
When it comes to solving the deepfake problem, collaboration is key. Just as our platform examines media from multiple angles, Reality Defender collaborates with key players in our industry to create the most comprehensive detection solutions. We believe in combining approaches to detection and prevention into an all-encompassing toolbox, rather than leaving companies and institutions to fend for themselves in a market filled with single-model solutions that will always leave gaps for varied attacks.
It is also worth noting that, unlike companies marketing detection software to individuals (as some of these new single-model solutions have been), we do not sell our tools to everyday users. This is because we believe everyone has an equal right to access detection, and we do not wish to see a future in which only those who can afford a subscription enjoy the luxury of distinguishing truth from deception in digital spaces. Consumer-focused detection shifts the responsibility of recognizing deepfakes onto the individual.
We maintain that the responsibility for detecting and properly labeling or removing AI-manipulated content belongs at the top, to platforms whose infrastructures allow for the spread of such content to begin with. This is why Reality Defender will continue to partner with enterprises, media platforms, and public institutions looking to integrate comprehensive, multi-model deepfake detection to protect the largest number of users possible where it counts most.
As researchers and as individuals working to combat weaponized deepfakes and dangerous AI-generated content, we welcome new developments in this space as they appear, and we know those made vulnerable by these threats welcome them too. Yet only a combined approach, built on multiple detection models and cross-industry collaboration, will make a tangible impact.