Education

May 15, 2023

The Inadequacy of Deepfake Watermarking

Earlier this week, yet another prominent tech company announced an initiative to partially counter deepfakes and AI-generated content with a tool that provides contextual information about images within its platform. This is not the first policy of its kind, nor the first announcement of a deepfake and AI-generated content watermarking solution in the month of May, nor even the first attempt at watermarking AI-generated images. This particular initiative does include partnerships with popular name-brand generative content solutions, and it attempts to create a uniform watermark for future materials generated on their platforms.

While these efforts are well-intentioned, they fall short of addressing the crux of the problem. The initiative rests first on the assumption that malevolent actors will adhere to watermarking guidelines, or at least will not find ways to strip or circumvent the watermarks. The reality is that individuals who use deepfakes and generative content to spread misinformation or deceive users are unlikely to willingly indicate the artificial nature of their content. These actors often leverage open-source and non-attributive tools — tools that do not and will not watermark their content — to remain anonymous and unchecked.
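To make the fragility concrete, here is a minimal sketch using a naive least-significant-bit (LSB) watermark. This is a deliberately simple stand-in for illustration, not any vendor's actual scheme; deployed watermarks are more robust, but they face analogous removal pressure. Even without a deliberate attacker, a single lossy re-encode of the kind social platforms routinely apply on upload erases this mark entirely.

```python
import io

import numpy as np
from PIL import Image


def embed_lsb(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide watermark bits in the least significant bit of the first pixels."""
    flat = pixels.flatten()  # flatten() copies, so the input stays untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)


def extract_lsb(pixels: np.ndarray, n: int) -> np.ndarray:
    """Read back the first n watermark bits."""
    return pixels.flatten()[:n] & 1


rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
watermark = rng.integers(0, 2, size=256, dtype=np.uint8)

marked = embed_lsb(image, watermark)
assert np.array_equal(extract_lsb(marked, watermark.size), watermark)

# One lossy re-encode -- routine on any social platform upload -- scrambles
# the low-order bits and destroys the mark without any deliberate attack.
buf = io.BytesIO()
Image.fromarray(marked).save(buf, format="JPEG", quality=90)
buf.seek(0)
reencoded = np.asarray(Image.open(buf))
survival = np.mean(extract_lsb(reencoded, watermark.size) == watermark)
print(f"bits surviving JPEG re-encode: {survival:.0%}")  # ~50%, i.e. chance
```

Robust schemes survive re-encoding, but the asymmetry remains: a watermark only has to fail once against a motivated attacker, while honest disclosure depends on every tool choosing to participate.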

The focus on labeling and watermarking also neglects the need for a comprehensive approach to detecting manipulated content. While providing contextual information about an image can be useful, it does not necessarily confirm the authenticity of that image. The proposed tool might inform users about the initial appearance and subsequent uses of an image, but it cannot definitively determine whether the image has been altered or misrepresented.

Moreover, the reliance on such a tool places an undue burden on end-users, who must interpret the provided context and make their own judgments about an image's credibility. This process demands a level of digital literacy that many users may not possess, making it an unrealistic and unreliable solution to the deepfake problem.

Our digital world calls for a more rigorous, proactive approach to tackle the increasing threat of deepfakes and AI-generated content. This is where a platform like Reality Defender comes into play. Unlike the passive approach of labeling and watermarking, Reality Defender uses cutting-edge artificial intelligence to actively identify and flag deepfakes and generative content.

Reality Defender does not rely on malicious actors to self-disclose their deception, nor does it depend on users to interpret contextual information. Instead, it leverages sophisticated open-source and proprietary models to detect deepfakes with high accuracy, proactively protecting users from misinformation and deception.
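Our production models are proprietary, so the following is only a generic sketch of what inference-time detection with a learned binary classifier looks like. The backbone, the checkpoint path, and the decision threshold below are hypothetical placeholders, not our actual pipeline.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Stand-in backbone: a ResNet-18 with a 2-way (real / fake) head.
detector = models.resnet18(num_classes=2)
detector.load_state_dict(torch.load("detector_checkpoint.pt"))  # hypothetical weights
detector.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])


def score_image(path: str) -> float:
    """Return the model's estimated probability that the image is AI-generated."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logits = detector(x)
    return torch.softmax(logits, dim=1)[0, 1].item()  # index 1 = "fake" class


if score_image("suspect.jpg") > 0.5:  # threshold is application-dependent
    print("Flagged as likely manipulated")
```

The key difference from watermarking is where the work happens: detection inspects the content itself at the point of consumption, so it requires no cooperation from the person who created it.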

As the CEO of Reality Defender, my team and I are deeply committed to fostering a safer and more truthful digital environment. We believe that the battle against deepfakes cannot be won with half-measures or naive expectations about malevolent actors' behavior. Meaningful change requires robust, proactive detection mechanisms that can adapt to the evolving sophistication of deepfakes and AI-generated content.

When the line between reality and artifice blurs, it is crucial that we equip ourselves with the right tools to discern truth from falsehood. We are that solution.
