This post was featured in the Reality Defender Newsletter. To receive news, updates, and more on deepfakes and Generative AI in your inbox, subscribe to the Reality Defender Newsletter today.
Apple Adds Deepfakes to Vision Pro Headset
Apple will release its much-rumored Vision Pro headset next year, complete with an AR/VR version of FaceTime. To let the people you're chatting with see your likeness while you wear the Vision Pro, Apple will prompt you to scan your face using the device's cameras, creating a lifelike representation of your face (a "persona") that moves its lips and emotes as the cameras track your movements and gestures.
This is, in essence, a combination of a lifelike 3D model and a deepfake.
Though the device will require authentication (via iris scan), we would not be surprised to see misinformation spread via Vision Pro likenesses early next year. (The Verge has extensive coverage of yesterday's big announcement.)
Crowdsourcing Deepfake Detection Does Not Work
A deepfake goes viral on social media. It's reposted as genuine by thousands of accounts and seen by millions. By the time someone proves it is a deepfake and shares their findings, the original post has been reshared and viewed by millions more, and the world has moved on.
This is why reactive crowdsourcing deepfake detection does not work.
Twitter recently announced it is expanding Community Notes to label synthetic and manipulated media. While it's commendable that the platform is working on a solution to its growing deepfake and generated-content problem, flagging media after it causes irreparable harm has little to no impact on the bulk of original viewers, who have already moved on to the next piece of content.
Reality Defender Co-Founder and CEO Ben Colman wrote about the flawed premise of crowdsourced detection, examining Twitter's approach while stressing the need for proactive detection before content reaches a single user.
Texas Criminalizes Deepfake Pornography
Beginning September 1st, creating deepfaked pornographic material in Texas will carry criminal penalties under Senate Bill 1361. How the state will enforce the law and which tools it will use to detect deepfakes remain uncertain, but the bill shows action against deepfakes at the state level while the U.S. government weighs how to address the growing problem of deepfakes and misused AI federally.
Texas Also Bans Unchecked ChatGPT in Court Filings
Judge Brantley Starr issued an order last week banning the use of ChatGPT in court filings unless its output is reviewed by a human. The order comes after the widely publicized (and failed) use of ChatGPT in a New York court the previous week.
More News
- iHeartMedia is asking employees to stop using ChatGPT. (RBR)
- A highly publicized story of a simulated AI drone killing its operator has been denied. (The Guardian)
- Kids are bullying Snapchat's AI chatbot. (TechCrunch)
- The Times’ Dealbook posits the idea of multiple parties being responsible for AI creations. (NY Times)
- Vox dives into the different mechanisms in place to prevent AI-generated content from "flooding the internet." (Vox)
- Lina Khan, Chairwoman of the FTC, highlights the need to be "vigilant" against AI amid the rise of advanced scams plaguing companies and consumers. (Bloomberg)
- Contrary to last week’s warning, sci-fi writer Ted Chiang believes today’s AI is “not conscious.” (Financial Times)
- Japan has allegedly ruled that copyright does not apply when it comes to AI training. (Technomancers)
- A chatbot used to prevent eating disorders instead gave users diet advice. (WSJ)
- The family and friends of loved ones lost to crime are now dealing with deepfakes of the deceased. (Rolling Stone)
Thank you for reading the Reality Defender Newsletter. If you have any questions about Reality Defender, or if you would like to see anything in future issues, please reach out to us here.