
Sign or veto: What's next for California's AI disaster bill, SB 1047?

A controversial California bill to prevent AI disasters, SB 1047, has passed final votes in the state’s Senate and now proceeds to Governor Gavin Newsom’s desk. He must weigh the most extreme theoretical risks of AI systems — including their potential role in human deaths — against potentially thwarting California’s AI boom. He has until September 30 to sign SB 1047 into law, or veto it altogether.

Introduced by state senator Scott Wiener, SB 1047 aims to prevent very large AI models from causing catastrophic events, such as loss of life or cyberattacks that cause more than $500 million in damages.

To be clear, very few AI models exist today that are large enough to be covered by the bill, and AI has never been used for a cyberattack of this scale. But the bill concerns the future of AI models, not problems that exist today.

SB 1047 would make AI model developers liable for their harms — like making gun manufacturers liable for mass shootings — and would grant California’s attorney general the power to sue AI companies for hefty penalties if their technology is used in a catastrophic event. If a company acts recklessly, a court can order it to stop operations; covered models must also have a “kill switch” that lets them be shut down if they are deemed dangerous.

The bill could reshape America’s AI industry, and it is a signature away from becoming law. Here is how the future of SB 1047 might play out.

Why Newsom might sign it

Wiener argues that Silicon Valley needs more liability, previously telling TechCrunch that America must learn from its past failures in regulating technology. Newsom could be motivated to act decisively on AI regulation and hold Big Tech to account.

A few AI executives have emerged as cautiously optimistic about SB 1047, including Elon Musk.

Another cautious optimist on SB 1047 is Microsoft’s former chief AI officer Sophia Velastegui. She told TechCrunch that “SB 1047 is a good compromise,” while admitting the bill is not perfect. “I think we need an office of responsible AI for America, or any country that works on it. It shouldn’t be just Microsoft,” said Velastegui.

Anthropic is another cautious proponent of SB 1047, though the company hasn’t taken an official position on the bill. Several of the startup’s suggested changes were added to SB 1047, and CEO Dario Amodei now says the bill’s “benefits likely outweigh its costs” in a letter to California’s governor. Thanks to Anthropic’s amendments, AI companies can only be sued after their AI models cause some catastrophic harm, not before, as a previous version of SB 1047 stated.

Why Newsom might veto it

Given the loud industry opposition to the bill, it would not be surprising if Newsom vetoed it. Signing would stake his reputation on SB 1047; vetoing would let him kick the can down the road another year, or leave the issue to Congress.

“This [SB 1047] changes the precedent for which we’ve dealt with software policy for 30 years,” argued Andreessen Horowitz general partner Martin Casado in an interview with TechCrunch. “It shifts liability away from applications, and applies it to infrastructure, which we’ve never done.”

The tech industry has responded with a resounding outcry against SB 1047. Alongside a16z, Speaker Nancy Pelosi, OpenAI, Big Tech trade groups, and notable AI researchers are urging Newsom not to sign the bill. They worry that this paradigm shift on liability will have a chilling effect on California’s AI innovation.

A chilling effect on the startup economy is the last thing anyone wants. The AI boom has been a huge stimulant for the American economy, and Newsom is under pressure not to squander it. Even the U.S. Chamber of Commerce has asked Newsom to veto the bill, writing in a letter to him that “AI is foundational to America’s economic growth.”

If SB 1047 becomes law

If Newsom signs the bill, nothing happens on day one, a source involved with drafting SB 1047 tells TechCrunch.

By January 1, 2025, tech companies would need to write safety reports for their AI models. At that point, California’s attorney general could seek an injunction requiring an AI company to stop training or operating its AI models if a court finds them dangerous.

In 2026, more of the bill kicks into gear. At that point, the Board of Frontier Models would be created and start collecting safety reports from tech companies. The nine-person board, selected by California’s governor and legislature, would make recommendations to California’s attorney general about which companies do and do not comply.

That same year, SB 1047 would also require that AI model developers hire auditors to assess their safety practices, effectively creating a new industry for AI safety compliance. And California’s attorney general would be able to start suing AI model developers if their tools are used in catastrophic events.

By 2027, the Board of Frontier Models could start issuing guidance to AI model developers on how to safely and securely train and operate AI models.

If SB 1047 gets vetoed

If Newsom vetoes SB 1047, OpenAI would get its wish, and federal regulators would likely take the lead on regulating AI models… eventually.

On Thursday, OpenAI and Anthropic laid the groundwork for what federal AI regulation would look like. They agreed to give the AI Safety Institute, a federal body, early access to their advanced AI models, according to a press release. At the same time, OpenAI has endorsed a bill that would let the AI Safety Institute set standards for AI models.

“For many reasons, we think it’s important that this happens at the national level,” OpenAI CEO Sam Altman wrote in a tweet on Thursday.

Reading between the lines: federal agencies typically produce less onerous tech regulation than California does, and they take considerably longer to do so. But more than that, Silicon Valley has historically been an important tactical and business partner for the United States government.

“There actually is a long history of state-of-the-art computer systems working with the feds,” said Casado. “When I worked for the national labs, every time a new supercomputer would come out, the very first version would go to the government. We would do it so the government had capabilities, and I think that’s a better reason than for safety testing.”
