Insight
Ben Colman
Co-Founder and CEO
We often receive inquiries about how individuals can detect deepfakes without the use of robust deepfake detection tools. While it's understandable that people may hope to develop a keen eye for spotting fakes, the reality is that identifying deepfakes without specialized tools is incredibly difficult, even for experts.
Though low-quality "cheapfakes" may be easier to recognize due to obvious flaws, relying on human perception alone is not a reliable method for distinguishing genuine media from sophisticated deepfakes. Even the most experienced professionals in the field can be misled by convincing fakes, which is why they rely on advanced deepfake detection tools to make accurate assessments.
Reality Defender's team of highly respected experts in artificial intelligence, machine learning, and deepfake research has decades of combined experience working with advanced artificial intelligence models and systems. Yet despite our collective knowledge and skills, we acknowledge that no individual or group can consistently outperform robust deepfake detection systems. This is why we built Reality Defender.
Manually detecting deepfakes can be challenging, especially as generative AI technology advances and produces increasingly realistic content. While there are certain indicators that might suggest a piece of media is a deepfake, relying solely on human perception to classify them accurately is becoming ever more difficult.
In an effort to contribute to overall digital media literacy, my team and I compiled a list of potential signs and signifiers to look for when attempting to identify deepfakes across various media types. It is crucial to note that these tips are not foolproof and may quickly become outdated, even as this guide is published, given the rapid pace of technological development in this field.
Our intention in presenting these guidelines is not to suggest that everyday users should be expected to detect deepfakes with a high degree of accuracy on their own. Rather, we hope to highlight the complexities involved in manual deepfake detection, emphasize the importance of AI-assisted deepfake detection like Reality Defender, and underscore the need for caution when consuming digital media.
Beyond these clues, it is always helpful to consider the context behind the video, image, audio, or text.
AI-powered fraud affects the lives and livelihoods of millions across the globe. According to the Sumsub 2023 Fraud Report, the most common type of identity fraud in 2023 utilized generative AI and deepfakes. The number of detected deepfakes in fraud attempts increased tenfold between 2022 and 2023, and experts predict this number will continue to rise sharply over the next few years.
Individual users, workers, and customers in the financial, media, professional services, and healthcare industries will be particularly affected by these trends, but as the skills and methods of fraudsters evolve, no one in the digital space will be safe. Fraudsters will look to hijack bank accounts and social media profiles, create fake job opportunities and interviews, and use phishing and fake celebrity endorsements to lure people into fraudulent financial and product schemes. In the most extreme cases, fraudsters have used deepfakes to stage non-existent work meetings that convinced employees to transfer vast sums of money, and placed deepfake phone calls to convince people that their loved ones had been kidnapped.
While the responsibility to protect individuals from such schemes belongs to companies and institutions — banks and governments, social media and tech corporations, employers and service providers — it doesn’t hurt for users to know what to look out for in this new world of deepfake deception. Below are a few more tips for individuals to employ to protect themselves from cybercriminals.
It is always a good idea not to take requests at face value, even when deepfakes depict a person familiar to us: a boss, a colleague, or a loved one. Every request submitted via image, text, or video, especially a request of a financial nature, should be independently verified and scrutinized. As tragic as it is, because of deepfakes, seeing and hearing can no longer translate to immediate believing when it comes to digital communication.
Beware of taking the phishing bait. Emails, messages, and other digital communications created and distributed with LLMs are designed to elicit panic, sympathy, and other emotions that lead to rash actions. One should never feel compelled to act right away, without verifying the source and veracity of the text or media. Scrutinize email addresses for subtle inconsistencies, such as a single letter dropped from an email you usually trust, and never click on links without being certain of their source.
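As a concrete illustration of the "single dropped letter" check, here is a minimal sketch in Python that flags sender domains sitting one small edit away from a domain you trust. The trusted-domain list, similarity threshold, and function name are illustrative assumptions for this sketch, not a prescribed defense.

```python
# A minimal sketch of the lookalike-domain check described above: flag sender
# domains that nearly match, but are not, a domain you trust. The trusted
# domains and the 0.9 threshold below are illustrative assumptions.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = {"example-bank.com", "mycompany.com"}  # hypothetical examples

def looks_like_spoof(sender: str) -> bool:
    """Return True if the sender's domain nearly matches, but is not, a trusted domain."""
    domain = sender.rsplit("@", 1)[-1].lower()
    if domain in TRUSTED_DOMAINS:
        return False  # exact match: the real domain, not a lookalike
    # A ratio near 1.0 means "almost identical", e.g. one dropped or swapped letter.
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= 0.9
        for trusted in TRUSTED_DOMAINS
    )

print(looks_like_spoof("alerts@example-bank.com"))  # False: exact trusted match
print(looks_like_spoof("alerts@exampel-bank.com"))  # True: transposed letters
print(looks_like_spoof("hello@unrelated.org"))      # False: not close to any trusted domain
```

Real mail filters lean on far richer signals, such as sender authentication and reputation, but even this toy check catches the transposed- or dropped-letter lookalikes described above.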
It pays to protect your online accounts with as many steps and measures as possible: two-step verification, fingerprints, and other biometric security. Deepfakes can sometimes defeat these measures, as fraudsters can generate fake voiceprints and videos of users to satisfy security checks, which is why it is crucial for companies and institutions to implement real-time deepfake detection tools within their verification frameworks. Even so, the more steps users put between their accounts and cybercriminals, the better.
If these suggestions seem obvious or insufficient, it is because they are. As with all cases of manual detection, these basic safety tips will not be enough to protect individuals from the elaborate deepfake fraud techniques just around the corner. This is why effective deepfake detection starts at the top.
As is clear from these suggestions, it is unlikely that we can keep up with the proliferation of increasingly sophisticated deepfakes merely through casual interaction via the senses. Manual detection often requires expertise in various domains such as image and video processing, computer graphics, and human behavior analysis.
At the same time, human perception and judgment are subjective and prone to fatigue, distraction, and bias, leading to mixed results. Science unfortunately confirms that humans are not very good at spotting deepfakes: in a study published in Scientific Reports in August 2023, up to 50% of respondents were unable to distinguish between a deepfake video and real footage. Another study, published in iScience, showed that respondents were unable to distinguish between authentic and deepfake videos yet remained fully confident in their ability to do so.
Manual detection is not only unreliable, but impractical in terms of labor and limited in scalability. Considering the nearly infinite amounts of content created and distributed in digital spaces daily — a number that is bound to become even more astronomical with low-rent AI-generated content flooding the Internet — human moderators and casual users alike cannot be expected to manually scrutinize every piece of content they come across. Yet we do believe that every user is entitled to know whether the content they are viewing was created by a human or a machine, without needing a degree in digital forensics and unlimited free time.
Early on in the development of the Reality Defender platform, we asked ourselves: who should bear this constant burden of worrying about deepfakes?
While we advocate for widespread awareness of the potential misuse of generative AI as part of everyone's media literacy education, we also believe that the average citizen should not be burdened with the responsibility of constantly pondering and verifying the authenticity of the media consumed on their chosen platforms. This is especially true given the overwhelming volume of content we encounter on our devices daily, making the task of verifying every post and video daunting, exhausting, and unreasonable.
To address this inequality of truth without plunging every user into the depths of paranoia, we collaborate with some of the world's largest organizations, governments, platforms, and institutions to implement Reality Defender's deepfake detection tools, using AI to detect AI, overcoming the fallibility of manual detection at the highest levels. By integrating deepfake detection into newsroom fact-checking or call center structures, we can ensure that everyday users don’t need to worry that their beloved platforms and services are serving up bogus, misleading media, or allowing fraudsters to hijack accounts. Instead, the platforms vulnerable to deepfake-based attacks are proactively protecting their users against this content from the get-go.
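To make the integration idea concrete, here is a minimal sketch of what platform-side screening might look like: uploaded media is scored by a detection service before it reaches users. The endpoint URL, request shape, response fields, and threshold below are hypothetical placeholders for illustration only, not Reality Defender's actual API.

```python
# A hypothetical sketch of platform-side screening: media is scored by a
# detection service before it is published or acted upon. The URL, field
# names, and verdict shape are illustrative assumptions, not a real API.
import requests

DETECTION_ENDPOINT = "https://detector.example.com/v1/scan"  # placeholder URL

def screen_upload(media_path: str, api_key: str) -> bool:
    """Return True if the media is cleared for publishing, False if it should be held for review."""
    with open(media_path, "rb") as media:
        response = requests.post(
            DETECTION_ENDPOINT,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"media": media},
            timeout=30,
        )
    response.raise_for_status()
    result = response.json()  # assumed shape: {"label": "authentic" | "manipulated", "score": 0.0-1.0}
    # Hold likely fakes for human review rather than deleting them outright:
    # detection is probabilistic, so borderline cases deserve a moderator's eyes.
    return not (result["label"] == "manipulated" and result["score"] >= 0.5)
```

The point of the sketch is the placement, not the plumbing: when screening happens at the platform layer, every downstream user benefits without having to run a check of their own.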
We designed our deepfake detection platform with the goal of providing equitable access to truth for all people, aiming to protect as many users as possible while consciously choosing not to directly offer our tools to individual consumers. This approach allows Reality Defender to cover potentially billions of consumers and users at every turn by shielding the platforms they use and making deepfake detection consistent and systematic, instead of placing the onus on users themselves via fragmented, do-it-yourself manual methods that are demonstrably bound to fail.