California AI law removes 'kill switch,' enacts guardrails

An AI lawyer dissects what Gavin Newsom's AI law means for users and companies.

Oct 8, 2025 - 06:30

California Governor Gavin Newsom signed into law Senate Bill 53 (SB53), the Transparency in Frontier Artificial Intelligence Act, on September 29. 

The bill, authored by state Senator Scott Wiener (D-Calif.) and supported by Newsom, aims to improve safety by placing guardrails on the development of frontier artificial intelligence models.

"California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive," Newsom said.

SB53 differs markedly from Wiener's previous attempt at AI regulation, SB1047.

To get a better understanding of this new regulation, TheStreet spoke to an expert in the field, cybersecurity and AI attorney Lily Li, founder of Metaverse Law.

Cybersecurity and AI attorney Lily Li of Metaverse Law.

Image source: Lily Li, Metaverse Law/TheStreet

AI guardrails: The differences between SB53 and SB1047

Li explained that the new bill is narrower than SB1047. The earlier bill, she said, would have allowed the attorney general (AG) to bring civil actions and impose fines for harms resulting from a frontier developer's failure to adhere to its safety framework.

With SB53 now signed into law, these enforcement actions are limited to situations in which the frontier developer fails to meet its transparency and reporting obligations, she added.

SB1047 also imposed a prescriptive third-party testing requirement, mandating independent auditing and testing of an AI system's safety features, and gave the AG discretion to penalize auditors who intentionally violated their audit obligations. SB53 does not impose the same third-party audit requirements, though it does allude to third-party assessments as part of a standard AI framework.

Li noted that SB1047 also included the concept of a “kill switch” by requiring the “capability to promptly enact a full shutdown,” and that this is no longer part of SB53.

"The new bill is more lenient towards the AI industry as it eliminates the 'kill switch' requirement, removes AG enforcement of resulting harms from AI, and increases the revenue threshold for large frontier developers."


She said the Transparency in Frontier Artificial Intelligence Act requires large AI developers to publish AI safety frameworks, which may lead to consumers filing more lawsuits based on unfair and deceptive trade practices or false advertising claims.

Li also pointed out the new law's whistleblower protections. Employees responsible for assessing, managing, or addressing risks of critical safety incidents can sue an employer that retaliates against them for reporting violations of the law or critical safety incidents.

Discussing whether the law could end up backfiring on companies, she said: "I could see this backfiring if deployers use AI systems in critical infrastructure (e.g., water, power, transportation management) without adequate testing or controls."

The AI law defines "catastrophic risk"

The Transparency in Frontier Artificial Intelligence Act defines “catastrophic risk” as "a foreseeable and material risk that a frontier developer’s development, storage, use, or deployment of a frontier model will materially contribute to the death of, or serious injury to, more than 50 people or more than one billion dollars ($1,000,000,000) in damage."
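
To make the definition's two numeric prongs concrete, here is a minimal illustrative sketch in Python. The function and its inputs are hypothetical (nothing like this appears in the statute), and note that the law addresses the foreseeable risk of these outcomes, not only realized harm:

```python
# Illustrative only. SB53's "catastrophic risk" definition turns on two
# disjunctive numeric prongs: death or serious injury to more than 50
# people, OR more than $1 billion in damage. Either prong alone suffices.
def meets_catastrophic_threshold(people_killed_or_seriously_injured: int,
                                 damage_usd: float) -> bool:
    """Hypothetical helper checking the numeric prongs of the definition."""
    return people_killed_or_seriously_injured > 50 or damage_usd > 1_000_000_000

assert meets_catastrophic_threshold(51, 0)                   # people prong alone
assert meets_catastrophic_threshold(0, 1_000_000_001)        # damage prong alone
assert not meets_catastrophic_threshold(50, 1_000_000_000)   # "more than" is strict
```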

A layperson reading the law could easily be confused by the line seemingly being drawn at 50 people.


Li explained what this catastrophic risk definition really means, and how it compares to a "critical safety incident," a separate term defined in the law.

She said the definition concerns foreseeable and material risks tied to specific AI actions or outputs.

"So, in layman’s terms, we are looking at risks that are reasonably likely to result in multiple people being harmed from a frontier AI model." 

Li said such situations include:

  • Providing expertise on the creation or release of chemical, biological, radiological, or nuclear weapons;
  • Perpetrating automated cyberattacks or murder, assault, extortion, or theft, including theft by false pretense; or
  • Evading the control of the developer or user.


"This does not mean that more than 50 people must be harmed," she added. "This means that AI developers need to consider risks that are likely to harm over 50 people, which may include risks where the likely possibility of harm could range from one person to 100 people."

Critical safety incidents represent a lower standard and do not have a numerical threshold for harm, Li noted. "The bill requires the Office of Emergency Services to establish a mechanism for any frontier developer, or a member of the public, to report critical safety incidents."

How California companies can comply with new AI law

Addressing what companies in California must do to stay compliant with the law, Li said they need to "create and implement transparent AI safety frameworks that adopt national and international AI standards like the NIST AI RMF and ISO 42001 and make these frameworks available publicly."
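
For a sense of what publishing such a framework might look like in practice, here is a purely hypothetical outline expressed as a Python structure, loosely organized around the NIST AI RMF's four functions (Govern, Map, Measure, Manage). SB53 mandates transparency, not any particular document format:

```python
# Hypothetical outline of a public AI safety framework, loosely organized
# around the NIST AI RMF's four functions. The structure and contents here
# are invented for illustration; SB53 prescribes no specific format.
safety_framework = {
    "govern": [
        "Roles and accountability for frontier model safety",
        "Whistleblower reporting channels (internal and external)",
    ],
    "map": [
        "Catastrophic-risk scenarios considered (e.g., CBRN uplift, cyberattacks)",
        "Use conditions and restrictions for deployers and users",
    ],
    "measure": [
        "Pre-deployment evaluations and any third-party assessments",
    ],
    "manage": [
        "Critical safety incident response and 24-hour reporting procedure",
    ],
}

for function, commitments in safety_framework.items():
    print(function.upper())
    for item in commitments:
        print(f"  - {item}")
```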

She also advised companies to:

  • Identify use conditions and restrictions for deployers and users of frontier models.
  • Report critical safety incidents to the Office of Emergency Services within 24 hours (a minimal deadline-tracking sketch follows this list).
  • Ensure employee policies and contracts permit employees to report internally and externally regarding critical safety incidents.
  • Investigate adverse employment actions against covered employees to confirm that these actions are legitimate and not retaliatory.
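
As referenced in the reporting item above, here is a minimal sketch of how a developer might track that 24-hour reporting window in code. Everything here is hypothetical: SB53 prescribes no data format, and the actual reporting mechanism is established by the Office of Emergency Services.

```python
# Hypothetical sketch: track whether a critical safety incident has been
# reported within the 24-hour window the article describes. Class and field
# names are invented for illustration; nothing here comes from SB53 itself.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=24)

@dataclass
class CriticalSafetyIncident:
    description: str
    discovered_at: datetime               # when the developer became aware
    reported_at: datetime | None = None   # when it was reported, if at all

    @property
    def reporting_deadline(self) -> datetime:
        return self.discovered_at + REPORTING_WINDOW

    def is_overdue(self, now: datetime | None = None) -> bool:
        """True if the incident is unreported and past the deadline."""
        now = now or datetime.now(timezone.utc)
        return self.reported_at is None and now > self.reporting_deadline

# Example: an incident discovered 25 hours ago and never reported is overdue.
incident = CriticalSafetyIncident(
    description="Model evaded a developer shutdown control",
    discovered_at=datetime.now(timezone.utc) - timedelta(hours=25),
)
print(incident.is_overdue())  # True
```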

Li advised anyone reviewing SB53 to read it in conjunction with the California Consumer Privacy Act's (CCPA) recently approved regulations.

"AI developers and systems that process personal information and meet CCPA thresholds will also be responsible for these requirements,” she said.

