California AI Bill SB 53 officially becomes law
California’s first kick at the AI legislative can may have failed, but a second attempt succeeded on Monday, when Governor Gavin Newsom signed Senate Bill 53 (SB 53), the Transparency in Frontier Artificial Intelligence Act (TFAIA), into law.
California, he said in a statement, has proven that it can “establish regulations to protect our communities, while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance.”
SB 53’s passage follows Newsom’s veto last year of Senate Bill 1047 (SB 1047), known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which, if enacted into law, would have forced AI companies to test their systems for safety before release.
In an opinion piece written shortly after SB 1047 was rejected, Kjell Carlsson, head of AI strategy at Domino Data Lab, stated, “while the intention of [it] was laudable, its approach was flawed. It focused on organizations that are easy to regulate versus where the actual risk lies.”
SB 53, a release from Newsom’s office stated, is legislation that is “designed to enhance online safety by installing commonsense guardrails on the development of frontier artificial intelligence models, helping build public trust while also continuing to spur innovation in these new technologies.”
‘This could send a chilling signal.’
Among the groups that quickly reacted to the signing was the Chamber of Progress, which describes itself as a center-left tech industry policy coalition.
Robert Singleton, the group’s senior director of policy and public affairs for California and the Western US, said that the state “has always offered a fair shot for new innovators — that’s exactly why our tech sector has flourished. But this could send a chilling signal to the next generation of entrepreneurs who want to build here in California.”
Karthi P, senior analyst at the Everest Group, had a different viewpoint, describing SB 53 as “a landmark step that requires frontier AI providers to make their safety practices and incident reports public. By putting this information into the open, the law reduces uncertainty for enterprises and makes it easier to adopt AI in areas where trust and governance are essential.”
It also goes further than the EU AI Act by mandating public disclosure of issues such as cyberattacks or deceptive model behavior, ensuring regulation and technology evolve in tandem, he said.
Notably, said P, the bill “gained backing from Anthropic and avoided strong opposition from other leading providers, suggesting it strikes the right balance between oversight and innovation. This balance is likely to accelerate the momentum behind Responsible AI, turning what was already a growing priority into a faster-moving global standard.”
Could be a blueprint for other states
In addition, he said, “the knock-on effects could be far-reaching: other states may use SB 53 as a template, enterprises will raise expectations of transparency in their procurement, and global regulators may look to California just as they did to Europe after GDPR.”
Alla Valente, principal analyst at Forrester Research, agreed. “Undoubtedly, California’s SB 53 is monumental for several reasons,” she said. “Most notably, California has a disproportionate number of large AI companies. Over 50% of the world’s largest AI tech players are headquartered there, so the impact is direct.”
SB 53, she said, comes “a year after Gov. Newsom vetoed SB 1047 after tech lobbyists and large AI companies rallied against it for being too restrictive. Also, SB 53 comes on the heels of Meta’s announcement of a super PAC to fund state-level candidates that are sufficiently pro-AI, or in other words, sufficiently against regulations. This acknowledges that the AI legislation battle, for the foreseeable future, is at the state house, not Congress.”
Valente pointed out that SB 53 focuses on transparency and accountability, but doesn’t have the same degree of safety requirements as SB 1047. “In this regard, it’s the middle ground between safety and innovation,” she said. “[It] can very well become a blueprint for other states to embrace an AI policy framework focused on transparency and accountability and has support from at least one AI tech giant, Anthropic.”
States looking to govern intentional misuse of AI, but which don’t want to go as far as requiring a risk framework, will likely model their bills after Texas’s TRAIGA Act, she said.
Shane Tierney, senior program manager of governance, risk and compliance (GRC) at Drata, a cloud-based platform that helps organizations stay compliant with regulatory requirements, added that the statute, although it is aimed squarely at major AI labs, signals a shift in expectations for the industry.
Practices such as publishing model cards, documenting risk mitigation strategies, and establishing incident response playbooks are likely to become standard features of responsible AI development, even outside the law’s immediate scope, he said.
Tierney noted, “Embedding safety, security, and transparency into innovation early helps companies build and maintain trust with their customers and partners. The companies that treat responsible AI as a strategic imperative rather than a regulatory checkbox will define the next era of AI innovation.”
He also said that California’s approach “is expected to influence both state and federal policymaking. Although SB 53 contains a pre-emption clause preventing local ordinances within California from adopting conflicting AI safety requirements, it does not limit other states from enacting their own legislation. Several jurisdictions, including Colorado, New York, Massachusetts, and Washington, are actively considering AI regulatory measures, and California’s statute is likely to accelerate those efforts.”