Earlier today, xAI, the Elon Musk-founded AI company, announced the launch of Grok-2, a new model with diffusion-based image generation available on X/Twitter (as well as in open-source repositories). The model became instantly available to millions of paid X users, who can generate images through the existing Grok chat interface much as one generates DALL-E images inside ChatGPT.
Upon launch, however, the model was found to contain decidedly fewer safeguards than other popular models, allowing users to generate explicit, misleading, and often compromising artificial images within the first hours of its public availability. These include (but are not limited to) realistic images of political figures in lewd or harmful situations, distortions of historical events, and public figures falsely depicted committing unlawful acts.
Fortunately, Reality Defender’s web platform and API were able to detect Grok-2-generated images on the same day as the model’s launch, giving clients day-one detection of misleading and damaging images in the fight against disinformation and the erosion of trust.
How to Detect Grok-2 Images on Reality Defender
Detecting Grok-2 images on the Reality Defender web platform works the same way as detecting any other image. After clicking the Submit File tab, you can drag images from a folder on your computer or click within the browser window to browse for the image(s).
After the images upload, simply press Submit and the file will appear in your dashboard, with results surfacing seconds later.
Click on a result and you will be taken to the file detection page, which displays more detailed results for the uploaded file.
From here, you can download a PDF report showing the same results. To learn how to interpret individual results from each model, click the small icon next to each model name to see precisely what it detects. (You can also read more about our multi-model detection method here.)
When uploading images via our platform-agnostic API, you submit files through the same upload pipeline used for all other image files.
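In practice, an API submission like the one above amounts to a single authenticated multipart file upload. The sketch below is purely illustrative: the endpoint URL, header names, and field names are placeholder assumptions, not Reality Defender's documented API, which clients should consult for the actual request shape.

```python
# Hypothetical sketch of an authenticated image upload.
# The URL, headers, and field names below are illustrative placeholders,
# NOT Reality Defender's actual API.
import mimetypes
from pathlib import Path

API_URL = "https://api.example-detector.com/v1/files"  # placeholder endpoint


def build_upload_request(image_path: str, api_key: str) -> dict:
    """Assemble the pieces of a multipart upload request for one image."""
    path = Path(image_path)
    # Guess the MIME type from the filename; fall back to a generic binary type.
    content_type = mimetypes.guess_type(path.name)[0] or "application/octet-stream"
    return {
        "url": API_URL,
        "headers": {"Authorization": f"Bearer {api_key}"},
        # (filename, content_type) pair describing the file part of the upload.
        "files": {"file": (path.name, content_type)},
    }


# Sending would then be a single POST with an HTTP client such as `requests`:
#   requests.post(req["url"], headers=req["headers"],
#                 files={"file": (name, open(path, "rb"), content_type)})
req = build_upload_request("suspect_image.png", "YOUR_API_KEY")
print(req["files"]["file"][1])
```

Because the same pipeline handles all image types, nothing in this flow is Grok-2-specific; the detection models applied server-side determine what is flagged.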
Detecting the Future
As models like Grok-2 improve and newer-generation models are introduced, the Reality Defender team will continue working proactively to detect the latest and most popular generative AI techniques as (or before) they become publicly available. Our recent partnerships with key players in the generative AI space give us access to generation models before their release and the ability to detect their outputs upon launch.
At the same time, our research-driven approach to AI detection allows us to detect models like Grok-2 on the same day they launch. This affords clients robust protection against the damage caused by malicious content created with these and other misused generation tools, starting the day they are released to the world.