AI models that generate images and videos are advancing at a rapid pace, creating more realistic synthetic content every day. At Reality Defender, we evolve our deepfake detection capabilities to stay in lockstep with the cutting edge of generative tools.
Today, we are excited to announce an update to our image and video models that improves performance in detecting deepfake content.
Improved Video Detection
We are introducing an entirely new video model built on facial representations learned from real samples. This transformer-based model features robust feature extraction and, together with updated versions of our existing models, delivers an 8% improvement in balanced accuracy while expanding support to a greater number of closed- and open-source deepfake generation platforms.
This new model, called “Guided,” replaces our Face Blending model, which has been deprecated on our platform and API.
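Balanced accuracy, the metric cited above, is the average of the detector's true positive rate (deepfakes correctly flagged) and true negative rate (real media correctly passed), so a class-skewed test set cannot inflate the score. The snippet below is a minimal illustration of the computation using hypothetical labels; it is not Reality Defender code or data.

```python
from sklearn.metrics import balanced_accuracy_score

# Hypothetical ground-truth labels (1 = deepfake, 0 = real) and model predictions.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]

# Balanced accuracy = (true positive rate + true negative rate) / 2.
tpr = sum(p == 1 for t, p in zip(y_true, y_pred) if t == 1) / y_true.count(1)
tnr = sum(p == 0 for t, p in zip(y_true, y_pred) if t == 0) / y_true.count(0)
print((tpr + tnr) / 2)                          # 0.792 (rounded)
print(balanced_accuracy_score(y_true, y_pred))  # same value via scikit-learn
```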
Enhanced Image Detection Models
Our image detection models have been updated for improved robustness to compressed images, yielding a 4% improvement in balanced accuracy. Like the new and updated video models, the updated image models also support media generated by a wider array of deepfake generation platforms.
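One common way to probe robustness to compression (a general-purpose sketch, not a description of our internal pipeline) is to re-encode each test image at several JPEG quality levels and check that the detector's score stays stable. The example below uses Pillow and a hypothetical `detector.score()` function standing in for any image deepfake detector.

```python
import io
from PIL import Image

def jpeg_recompress(image: Image.Image, quality: int) -> Image.Image:
    """Re-encode an image as JPEG at the given quality and reload it in memory."""
    buffer = io.BytesIO()
    image.convert("RGB").save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    return Image.open(buffer)

def compression_sweep(image: Image.Image, score_fn, qualities=(95, 75, 50, 30)):
    """Return the detector score for the original image and each compressed variant."""
    scores = {"original": score_fn(image)}
    for q in qualities:
        scores[f"jpeg_q{q}"] = score_fn(jpeg_recompress(image, q))
    return scores

# Usage (hypothetical detector): a robust model keeps scores close across qualities.
# image = Image.open("sample.png")
# print(compression_sweep(image, detector.score))
```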
As always, if you have any questions about this update or the Reality Defender platform, click here to talk with our team.