Frequently Asked Questions
General
What is Reality Defender?
Reality Defender is a deepfake detection platform that enables enterprises, governments, and platforms to detect AI-generated content and manipulations across audio, video, image, and text files.
Can anyone use Reality Defender?
No. Reality Defender is built for the largest entities and governments so they can potentially shield hundreds of millions of people from deepfakes. We believe the onus of deepfake detection should fall not on the end user, consumer, or citizen, but on the largest institutions and platforms serving them.
Does Reality Defender use watermarking?
No. Watermarking requires provenance and ground truth, which in turn requires buy-in from every generative content model. Because such universal buy-in is infeasible, we instead use an inference system that assigns each piece of content a 1-99% probability rating, all without the need for ground truth.
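As a rough illustration of how such a probability rating might be consumed downstream, the sketch below maps a 1-99% score into coarse review bands. The `classify_score` function and its thresholds are hypothetical and illustrative only; they are not part of Reality Defender's actual API or decision logic.

```python
# Minimal sketch of consuming a 1-99% manipulation probability rating.
# The function name and thresholds are illustrative assumptions, not
# Reality Defender's actual interface.

def classify_score(score_percent: float) -> str:
    """Map a 1-99% probability rating to a coarse review label."""
    if not 1.0 <= score_percent <= 99.0:
        raise ValueError("Expected a probability rating between 1 and 99")
    if score_percent >= 80.0:
        return "likely manipulated"
    if score_percent <= 20.0:
        return "likely authentic"
    return "needs human review"


if __name__ == "__main__":
    for score in (5.0, 55.0, 92.0):
        print(f"{score:.0f}% -> {classify_score(score)}")
```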
Is the Generative AI technology that you are leveraging proprietary, or is it provided by a third party?
We are not leveraging generative AI technologies. Our detection models are a combination of in-house and proprietary models.
What are Reality Defender’s certifications?
The platform holds a range of US, UK, and EU certifications and is currently completing SOC 2 certification and GDPR compliance.
Data
What data/algorithms do you use to train your detection tools?
We use state-of-the-art neural networks (CNNs, Transformers, and ViTs, including large foundation models) to learn discriminating features that differentiate generative media from real media. In this process, we perform spatial, temporal, and frequency-domain analysis, along with domain-specific feature losses (such as those targeting artifacts in images); a simplified frequency-domain example appears after this answer. Furthermore, we support multiple models for various modalities.
We have created a diverse in-house dataset consisting of videos, images, audio (including telephone-quality audio), and text.
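To make the frequency-domain analysis mentioned above concrete, the sketch below computes a radially averaged power spectrum of an image, one classical frequency-domain statistic in which generative pipelines can leave telltale artifacts. This is not Reality Defender's actual model; it assumes only NumPy and a random array standing in for an image, and in practice the platform's neural networks learn such cues directly rather than relying on a hand-written feature.

```python
# Illustrative sketch of one frequency-domain signal: the radially averaged
# power spectrum of an image. Generative pipelines can leave high-frequency
# patterns that differ from real photos; real detectors learn such cues with
# neural networks rather than a hand-written statistic like this.
import numpy as np


def radial_power_spectrum(image: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Return the radially averaged log power spectrum of a grayscale image."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = spectrum.shape
    cy, cx = h // 2, w // 2
    y, x = np.indices((h, w))
    radius = np.hypot(y - cy, x - cx)
    bins = np.linspace(0.0, radius.max(), n_bins + 1)
    which = np.digitize(radius.ravel(), bins) - 1
    power = np.log1p(spectrum).ravel()
    profile = np.array([
        power[which == i].mean() if np.any(which == i) else 0.0
        for i in range(n_bins)
    ])
    return profile


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_image = rng.normal(size=(256, 256))        # stand-in "image" data
    print(radial_power_spectrum(toy_image)[:8])    # lowest-frequency bins
```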