For a company that detects deepfakes and AI-generated content, you might be surprised to find that Reality Defender views the responsible use of generative AI quite favorably.
Just as there are many smart and entertaining uses of deepfakes (Luke Skywalker’s cameo in The Mandalorian comes to mind), there are some truly incredible ways to use generative AI that augment (and do not replace) human work.
Before ChatGPT became an overnight sensation and generative AI went mainstream, our team constantly debated our internal use of generative AI in our work. This spans everything from the emails and updates you read (written by a human, always), to select marketing assets (occasionally made with diffusion models), to the assets we train on (half generated by AI for critical functionality), and even our web platform (occasionally built with help from GitHub Copilot).
The key word in our use of generative AI is “responsible.” By being transparent about our use both internally and externally, never outright replacing human capabilities or positions, and consistently reevaluating our usage, we are able to improve the Reality Defender experience as a whole without compromising our values.
AI as a Necessity
Reality Defender uses AI to catch AI. To do this, we build in-house datasets composed of carefully curated real assets (ones we own outright or for which we have been granted explicit permission) alongside assets our team generates with generative AI models. We need both types to detect AI-generated media with the hyper-accuracy we’re known for, and we use this data to train, update, and fine-tune our detection models.
Simply put, without the AI-generated assets in these datasets, there would be no Reality Defender.
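To make this concrete, here is a minimal, hypothetical sketch of what pairing real and generated assets under binary labels might look like. The folder layout, names, and labels are illustrative assumptions for this post, not our actual pipeline.

```python
# Hypothetical sketch: labeling real vs. AI-generated assets for a detector.
# The "dataset/real" and "dataset/generated" layout is an assumption.
from pathlib import Path

LABELS = {"real": 0, "generated": 1}  # binary labels for detector training

def build_manifest(root: Path) -> list[tuple[str, int]]:
    """Walk the real/ and generated/ subfolders, labeling every asset found."""
    manifest = []
    for split, label in LABELS.items():
        for asset in sorted((root / split).glob("*")):
            manifest.append((str(asset), label))
    return manifest

if __name__ == "__main__":
    manifest = build_manifest(Path("dataset"))
    print(f"{len(manifest)} assets labeled for detector training")
```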
AI as a Companion
What you’re reading right now is wholly written by a human, as is all of our content released to date. There are, however, times when our team uses LLMs to generate ideas for headlines, metadata, and occasionally outlines before writing a single sentence. We’ve found that this helps with brainstorming and content creation overall; we never use the generated text outright, but instead treat it as a jumping-off point.
At the same time, our marketing team recently began to use AI-generated art assets when needed for a newsletter or a story like this one. Previously, we would search the internet for an image to license or use freely, put it through Photoshop, edit it with a variety of filters and manual adjustments, and export it. When Photoshop’s Neural Filters premiered (using low-level AI techniques to augment a photo), this removed a step from the process. Now we’ve begun to occasionally use ethically trained image generators to create these images outright, saving time in a pinch but never replacing anyone on the team.
On the engineering side, my team has explored limited use of GitHub Copilot to help with development work. Like the marketing team with content, our engineers essentially use Copilot as a jumping-off point, kickstarting ideas and then developing on our own from there. As we agonize over every painstaking detail in the development of our platform, we’ve found that Copilot is a great tool for brainstorming, but not a replacement for even a junior-level developer.
As generative AI advances and more productive generative tools become available to both businesses and consumers, our team will consistently reevaluate our internal usage of these technologies. Though we certainly cannot predict the future of AI given how rapidly the space moves, we will always remain wholly transparent in our use of these technologies, holding fast to our values of building an ethical and responsible company and product.