Aug 12, 2024

Addressing the Growing Scourge of Nonconsensual Deepfakes


Nonconsensual deepfake pornography has emerged as perhaps the most malicious byproduct of the generative AI revolution, and it is now the most common type of AI-generated media found on the Internet. Deepfake pornography accounts for 98% of all deepfake videos online, and researchers found that more than 415,000 such images were uploaded in 2023 to the 10 most popular websites hosting fake porn images, garnering nearly 90 million views. Meanwhile, 74% of surveyed deepfake pornography users say they feel no guilt about consuming these harmful nonconsensual images of women, who make up 99% of deepfake pornography’s targets.

What Is Deepfake Pornography’s Origin?

The term “deepfake” owes its roots to the advent of nonconsensual AI-generated porn itself. It was coined in 2017 on a Reddit forum where users created and exchanged AI deepfake pornography on a small scale with the help of machine learning algorithms and computer vision techniques. The practice built on the old method of simply editing (or “photoshopping”) photos of women onto the bodies of porn performers. But AI technology infused this grimy practice with a chilling level of realism and enabled its creators to automate and simplify the process.

As the forums for exchanging nonconsensual deepfake porn grew in popularity, their most opportunistic members developed deepfake creation software and packaged it into tools that let any user create such content, regardless of technical skill. These faceswap and nudify websites and apps are now readily available and in great demand, becoming a lucrative business for their creators, as many hosting platforms have been slow to cut off access to prospective users.

How Is Deepfake Pornography Made?

Originally, the process of creating nonconsensual deepfake nudes began by gathering a large amount of source material of a person's face (commonly images and video) and using it to train a deep learning model, typically a Generative Adversarial Network (GAN), to create a video that convincingly swaps the face from the source material onto a nude body belonging to someone else. As nudify apps rose in prominence, they enabled the removal of clothing from victims in submitted photos, with AI generating an approximation of the victim’s physical appearance.
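To make the underlying mechanism concrete, below is a minimal, generic GAN training loop in PyTorch. It is an educational sketch with toy-scale dimensions and hypothetical hyperparameters, illustrating only the generator-versus-discriminator dynamic these systems rely on; it is not drawn from any specific deepfake tool.

```python
# Minimal GAN training step in PyTorch (educational sketch; toy sizes).
import torch
import torch.nn as nn

LATENT_DIM, IMG_DIM = 64, 28 * 28  # hypothetical, toy-scale dimensions

# Generator: maps random noise to a synthetic image.
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: outputs a logit for "this image is real."
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor) -> None:
    """One adversarial update; real_images is (batch, IMG_DIM) in [-1, 1]."""
    batch = real_images.size(0)
    fake_images = G(torch.randn(batch, LATENT_DIM))

    # 1) Train the discriminator to separate real from generated images.
    opt_d.zero_grad()
    d_loss = (loss_fn(D(real_images), torch.ones(batch, 1))
              + loss_fn(D(fake_images.detach()), torch.zeros(batch, 1)))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake_images), torch.ones(batch, 1))
    g_loss.backward()
    opt_g.step()
```

With enough data and training steps, the generator's outputs become increasingly difficult for the discriminator, and eventually for humans, to distinguish from real images, which is precisely what makes the technique so dangerous in the wrong hands.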

These methods of creation continue to evolve rapidly. Diffusion-based technologies like Stable Diffusion generate entirely new synthetic images from scratch based on a simple text prompt from the user. This technology has been used in many of the recent deepfake pornography scandals, including the circulation of explicit, violent deepfakes of Taylor Swift that overwhelmed social media platforms and pushed US lawmakers to take long-overdue steps against the proliferation of such content.

Access to nonconsensual deepfake pornography websites is shockingly easy and direct via major search engines, and such sites appear even under unassuming search keywords such as “what is deepfake porn?” Major deepfake pornography sites offer their heinous content for subscription fees as low as $5, while on other platforms, individual bottom-feeders offer to make custom AI deepfake pornographic content of anyone for a single payment. As legislation to outlaw this behavior moves slowly through the halls of Congress, the nonconsensual deepfake porn industry has become a fully developed economy operating with impunity, bringing devastation into the lives of women and girls around the world.

Eliminating the AI Deepfake Porn Scourge: Legislation, Accountability, Detection

Fortunately, the slow-moving efforts to pass bipartisan federal legislation addressing the proliferation of deepfake pornography are beginning to pay off. Recently, the Senate unanimously passed the DEFIANCE Act, which allows victims of nonconsensual deepfake pornography to seek civil damages and the removal of deepfakes from online spaces. A second bipartisan bill, the TAKE IT DOWN Act, introduced in June, seeks to criminalize the creation and distribution of nonconsensual AI deepfake porn and would require social media platforms and websites to remove this content immediately.

These laws, which complement efforts by individual US states to develop their own deepfake legislation, get at the core of the problem. The current state of affairs has been enabled by the technology’s lightning-fast development outpacing the response of democratic institutions. Websites that host AI deepfake porn have benefited from Section 230 of the Communications Decency Act of 1996, which shields online platforms from civil liability for third-party content, as well as from the legal gray area created by deepfakes being entirely synthetic creations. Some victims have found that reporting nonconsensual deepfake porn to search engines, platforms, and websites elicits a passive response, and experts have voiced concerns that these platforms do not consider the removal of these deepfakes a priority. Laws that clearly define deepfake pornography, hold accountable both the websites that host this heinous content and its creators, and enlist search engines and social media platforms as partners in its prevention and removal will equip our society with the means to turn the tide on this toxic phenomenon.

Leveraging the power of AI to catch AI, deepfake detection tools have a crucial role to play in this fight. To protect users, platforms and websites can integrate detection tools that identify and flag nonconsensual AI pornographic deepfakes in real time and at scale, giving moderation teams a chance to remove this content as quickly as it appears. Companies that develop detection tools perform the disturbing but necessary work of constantly researching the newest AI-enabled methods malicious actors use to create and distribute nonconsensual deepfakes, allowing the industry to stay ahead of those methods. Detection technology can also confirm for the public that a piece of media in question is a deepfake, empowering moderators to remove it per platform policies and applicable laws while maintaining trust with users.
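As a rough illustration of how such an integration might look, here is a minimal Python sketch of a moderation hook that scores uploads with a binary real-versus-synthetic classifier. The model file, threshold, and function names are hypothetical placeholders for illustration only; they are not Reality Defender's actual API or any specific vendor's product.

```python
# Hypothetical moderation-pipeline hook: score uploads with a deepfake
# classifier and flag likely synthetic images for human review.
import torch
import torchvision.transforms as T
from PIL import Image

# Assumes a vendor-supplied binary classifier (real vs. synthetic) exported
# as TorchScript; the filename is a hypothetical placeholder.
detector = torch.jit.load("deepfake_detector.pt")
detector.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

FLAG_THRESHOLD = 0.9  # tuned per platform policy: precision vs. recall

def score_upload(path: str) -> float:
    """Return the model's estimated probability that an image is synthetic."""
    image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logit = detector(image)
    return torch.sigmoid(logit).item()

def moderate_upload(path: str) -> bool:
    """Flag an upload for the human moderation queue if it scores high."""
    score = score_upload(path)
    if score >= FLAG_THRESHOLD:
        print(f"FLAGGED for review (score={score:.2f}): {path}")
        return True
    return False
```

In production, a platform would batch these calls, run video frame by frame or with temporal models, and route flagged content to human moderators rather than auto-deleting it, since false positives carry their own costs.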

Nonconsensual deepfake porn is one of the most pressing issues caused by the misuse of generative AI, and digital platforms and lawmakers are beginning to respond accordingly. At Reality Defender, we support any and all measures to eradicate this phenomenon, and we will continue to advance the fight through industry partnerships, raising awareness of what deepfake pornography is, and staying ahead of nefarious deepfakes by developing reliable detection technology to keep them out of circulation.
