For the last several years, my career has been defined by protecting the most vulnerable users online while helping survivors reclaim their power.
I spent years leading teams dedicated to eradicating CSAM on social media platforms, witnessing firsthand the damage exploitative material inflicts on human lives. To help survivors re-enter the workforce, I founded an organization that offers trafficking survivors job training and career development opportunities.
In joining Reality Defender, I’m not simply helping to put an end to the most advanced and dangerous disinformation of our time. I’m leveraging my experience to help stop increasingly terrifying uses of generative AI in the creation and distribution of sexual exploitation material, including material involving children.
Earlier this week, all 50 state attorneys general called for technology companies and lawmakers to work together quickly to combat the use of AI to generate deeply harmful child sexual abuse material (or CSAM). For some time, generative AI models have been used to create realistic images and videos of child sexual abuse that depict no actual children. Left unchecked, this opens the door to massive societal dangers; it also makes the work of those combating non-AI-generated CSAM infinitely harder.
As a deepfake detection platform, we champion the use of AI to combat the absolute worst AI has to offer. We are committed to detecting and disrupting not only the spread of dangerous deepfakes and AI-generated content, but also material that dehumanizes and re-victimizes the most vulnerable members of society.
As a company and as concerned citizens, we echo the sentiments shared by the attorneys general, and believe that we cannot afford to let advancements in AI outpace ethics and safety. The technology industry has long proven incapable of self-regulation. Voluntary ethical principles are no match for the exponential power of deep learning models. To maintain public trust and prevent lasting trauma, we need our policymakers to step up with a comprehensive regulatory framework grounded in essential human rights.
The creation and distribution of AI-generated CSAM is an existential threat to the health of our children and to the social fabric that binds us. Once this content is unleashed into the digital sphere, the damage cannot be undone. Survivors must live forever knowing these realistic depictions continue to spread, and the dedicated individuals working around the clock to combat non-AI-generated CSAM must now confront an entirely different and destructive problem in tandem.
No single company can address a challenge of this magnitude. It requires a collective effort, bringing together stakeholders from government, civil society, academia, and business. The European Union's proposed AI Act shows such collaboration is possible. Now the U.S. must follow suit with required deepfake detection and prevention, mandatory risk assessments for high-risk AI systems, greater transparency and accountability around datasets and modeling, and more investment into research on AI safety and ethics.
Most importantly, we need the political will to enact such reforms before it's too late. The attorneys general have issued the call. Lawmakers must respond with the urgency this threat demands.