Policy

Oct 2, 2024

California's AI Bill and Moving Regulation Forward


Late last week, California Governor Gavin Newsom vetoed the widely debated State Senate Bill 1047. The legislation would have required the creators of powerful AI systems to test their products according to rules set by the state, and would have held companies directly responsible if their technology was misused to harm people. Many considered the bill the strictest regulatory effort regarding AI to date.

In his letter to the Senate, Newsom explained that while he will continue to sign bills regulating AI, he saw the bill’s focus on companies using the largest amounts of computational power for their AI systems as too narrow. In his view, the bill overlooked important factors, including whether an AI system is deployed in high-risk environments, involves critical decision-making, or uses sensitive data.

In Newsom’s view, the bill would regulate even the most basic and benign functions of large systems while ignoring dangerous functions of smaller models. Companies and politicians opposing the bill shared the opinion that it was too narrow in scope and would hinder innovation within a booming industry.

Proponents of the bill, on the other hand, contended that the bill would only formalize the commitments that AI companies have already agreed to informally, and that such legislation was necessary as large federal bills to regulate AI are moving slowly.

How Reality Defender Views the Veto

Since our founding, Reality Defender has advocated for formalized rules to guide ethical AI development that would introduce the element of public control over a technology that is bound to transform the world for all of us. Our view is that if used responsibly, AI makes the world better. After all, our deepfake detection technology leverages AI to fight the malicious misuse of AI.

This is why we are glad to see a robust and healthy debate about AI regulation at the state level. California has in recent weeks adopted seventeen new laws regulating AI — including two bills that make the possession of deepfake nude images of children illegal (and establish that AI-generated explicit images of children are, indeed, child pornography), and another outlawing political deepfakes.

The passage of these bills shows that states are moving in the right direction, exploring the best ways to assert public interest while carefully weighing the implications each law could have on AI growth. The development of AI moves at a frantic pace, so far unhindered by delayed feedback from the slow processes of lawmaking. Vetoes, setbacks, and difficult discussions about the nuances of developing reality-altering technology are inevitable. What has become clear is that with some delay, governing institutions are now responding to the promise and dangers of AI with growing urgency, putting the discussion about the development of AI at the center of public life where it belongs.

We are eager to see more ambitious bills emerge at the local level and hope they can influence stalled federal efforts. The complexity of AI’s implications for our future warrants a global debate on every level, one between everyday citizens, lawmakers, experts, and tech companies. Our goal should always be to protect the value of human life and our increasingly fragile sense of reality, while exploring how new technologies can enable a better future for all.
