Human fear of the nefarious side of artificial intelligence is nothing new. Science fiction writers have warned us about the profound social implications of such technology for decades, alongside experts, public figures, and the very technologists responsible for AI’s development. When it comes to innovation, the modern tech community has struck a clear but somewhat harrowing bargain with the public: innovators will proceed however they can or please, at whatever pace they choose, because the questions of “what” and “when” matter more to them than the “why” and “how.” In exchange for the indisputable benefits that come from innovation, the rest of us — those on the defensive side of AI, as well as ordinary citizens whose lives can be impacted by AI — are responsible for urging our lawmakers and democratic systems to respond in real time, so that laws and regulations can ensure that new technologies don’t disrupt or break our societies in irreversible ways.
But this balance is more one-sided than one would hope. While technological progress benefits from the breakneck speed of startup innovation and the public’s hunger for new discoveries, the laws and regulations meant to protect our society still move at the speed of an 18th-century quill scrawling over parchment. In other words, while the development of artificial intelligence happens behind the closed doors of now-massive technology companies, the halls of government are packed with deadlocked lawmakers who seem unsure of where the technology is headed and how to regulate it on behalf of citizens.
At Reality Defender, we believe that generative AI has massive potential to improve nearly every aspect of human existence. After all, we utilize AI in our own work, though with strict rules and transparency attached. We are also decidedly pro-growth, as a lack of growth means a lack of the improvements generative AI can bring to society at large. Yet the kind of unchecked growth of AI and AI-generated tools that has taken over the headlines cannot continue without leading to serious, lasting damage.
Bridging the AI Governance Gap
While major AI developers refuse to reveal the details of how their AI models are trained, authors, journalists, and visual artists file lawsuits to protect their works against mass plagiarism by the same technology threatening to replace them. Deepfakes are already being deployed by engineers of disinformation across the world in efforts to derail the legitimacy of elections, as recent studies continue to show that the public is easily persuaded by deepfake content. Women and young girls, including high-profile entertainers, are targeted by the heinous trend of deepfake pornography. Fraudsters utilize the power of generative AI to steal vast amounts of money. Amid these headlines, new and improved generative AI tools are making their way into the world, enabling the creation of photorealistic fake content at the expense of livelihoods and of crucial societal tenets.
But the danger of AI misuse isn’t only in copyright theft or abuse by agents of chaos. As AI is bound to permeate every aspect of future infrastructures, it will be used by governments and companies for a diverse set of tasks. Without responsible and methodical development of AI that takes into account all possibilities of its misuse, and without a meticulously crafted regulatory framework that clearly outlines where the technology is and isn’t allowed to go, the built-in harm could be irreversible. Only careful oversight of how generative AI models are created, trained, and integrated can prevent AI systems from enabling bias and discrimination, privacy violations, job displacement, flaws in security infrastructures, and other negative outcomes. Although the possibilities of how AI can shape the future are vast, we are less inclined to fear the catastrophe of Skynet, and more focused on the realistic, tangible risks of unchecked AI development: mass poverty and economic decline caused by joblessness, autocrats empowered by deepfake propaganda tools, AI-powered healthcare systems prioritizing certain patients over others, among many other harms that are already unfolding, or will emerge, if little is done to prevent them.
Our future need not look bleak. What we must do now, while the AI revolution is still at a “turning point,” is close the gap in the contract between those creating these tools and the rest of society. While innovation is essential to human progress, it must happen openly, transparently, at scale, and with the best interests of the entirety of humanity in mind. The future of AI is a bipartisan issue that should unite lawmakers in an energetic, fast, and all-encompassing effort to immediately pass laws that will keep up with the development of AI.
Boiled down to its essence, this is an issue of democracy. Much like the Internet, AI may be developed by private companies, but its existence would not be possible without the digital infrastructures funded by and used by citizens. And much like the Internet, AI is bound to change the life of every person living on the planet, and thus decisions about its existence, and the form it takes in our daily lives, must be made by all of us.