A troubling new deepfake video portraying the likeness of State Department spokesman Matthew Miller has drawn international attention. Just one day after the United States amended its policy on Ukraine’s use of weapons against Russia, the deepfake, distributed across Russian social media among other channels, showed an altered version of Miller “speaking” to the media and making fabricated claims that a Russian city was a fair target for strikes.
Despite showing the usual signs of manipulation, among them bad lip-syncing and inconsistent color in Miller’s clothing, the video was widely shared on international platforms. The incident was seized upon by Russia’s Human Rights Council, whose representative reacted as if the video were real; his comments were subsequently written up by Russia’s state news agency. This simple pipeline, in which a deepfake prompts an official reaction that state media then launders into a legitimized news story aimed at shifting the global outlook on the war, has deeply concerning implications for the future weaponization of AI. The Miller video may be remembered as a sharp escalation in the use of deepfakes in disinformation campaigns, especially as we barrel towards an already contentious presidential election in the U.S.
Assessing the Miller Deepfake
Building a deepfake of a public-facing state spokesman is relatively easy, given the volume of footage and voice samples malicious actors can access online. To further legitimize the deepfake, its creators used the familiar backdrop of the podium from which Miller delivers press briefings. For viewers not actively looking for signs of deception, the footage can easily pass as authentic. If a short, flawed deepfake can spark an intense global response, it is terrifying to imagine the influence a more sophisticated one could have on public opinion, elections, and international relations, especially as “official” reactions appear in real time or minutes later.
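To make the “inconsistent color” cue concrete, here is a toy sketch in Python (using OpenCV and NumPy) of one naive heuristic: measuring frame-to-frame drift in the average color of a fixed region, such as a speaker’s clothing, which should stay stable in authentic static-camera footage. This is an illustration only, not Reality Defender’s method; the region coordinates are hypothetical, and real detectors localize such regions automatically.

```python
# Toy sketch of a single manipulation cue: color drift in a fixed region.
# Not a production detector; region coordinates are hypothetical.
import cv2
import numpy as np

def color_drift(video_path: str, region: tuple = (300, 500, 200, 400)) -> float:
    """Return the max frame-to-frame change in mean color inside `region`.

    `region` is a (y0, y1, x0, x1) crop assumed to cover the subject's
    clothing in a static-camera clip.
    """
    y0, y1, x0, x1 = region
    cap = cv2.VideoCapture(video_path)
    means = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Per-channel mean color (BGR) of the cropped region for this frame.
        means.append(frame[y0:y1, x0:x1].mean(axis=(0, 1)))
    cap.release()
    if len(means) < 2:
        return 0.0
    # Magnitude of color change between consecutive frames; large spikes
    # can indicate splices or generation artifacts.
    diffs = np.linalg.norm(np.diff(np.array(means), axis=0), axis=1)
    return float(diffs.max())
```

On a static-camera briefing clip, a large drift score is at best one weak red flag: honest camera motion and lighting changes trip it just as readily, which is why production systems fuse many learned, multimodal signals rather than relying on any single cue.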
The Miller video is likely the first of many state-sponsored deepfakes to appear during the run-up to the U.S. election, and these incidents will continue to test the government’s ability to counter disinformation in a busy pre-election news cycle. However effective those countermeasures prove in 2024, they are a mere rehearsal for the AI-fueled manipulation campaigns of the future.
Fortunately, Reality Defender’s deepfake detection platform is well prepared to help our clients, especially those in government, counter the newest challenges posed by AI-generated content, doing our part to protect the democratic process from the chaos of deepfake-driven disinformation campaigns.