California Breaks New Ground with First-in-Nation AI Safety Law
Governor Newsom signs landmark legislation requiring transparency from major AI companies, marking a significant shift in AI regulation.
California has again positioned itself at the forefront of technology regulation, with Governor Gavin Newsom signing the nation’s first comprehensive AI safety law on September 29, 2025. The groundbreaking legislation, SB-53, the Transparency in Frontier Artificial Intelligence Act, represents a carefully crafted approach to regulating artificial intelligence that balances innovation with accountability.
What the New Law Requires
The legislation mandates that major AI companies disclose their safety protocols and operational procedures, bringing unprecedented transparency to an industry that has largely operated without oversight. Companies must report incidents in which AI systems commit crimes without human oversight, including cyber attacks, as well as instances of deceptive behavior by AI models, requirements that go beyond even the European Union’s regulatory framework (source: Politico).
This targeted approach represents a significant departure from previous regulatory attempts. The law builds on recommendations from California’s first-in-the-nation frontier AI report, which Governor Newsom commissioned and which was released earlier this year, demonstrating a more measured, research-based approach to regulation.

A Political Victory for State Senator Wiener
The signing represents a major political win for State Senator Scott Wiener, who has emerged as a key figure in AI regulation. Wiener, who has filed papers to run for Representative Nancy Pelosi’s San Francisco congressional seat, successfully navigated the complex political landscape after his more expansive SB 1047 was vetoed by Newsom last year amid fierce opposition from tech companies.
This time, Wiener took a different approach, directly engaging with major AI companies and working closely with the governor’s office to craft legislation that addressed industry concerns while maintaining meaningful oversight (source: Politico).
Industry Response: Cautious Support
The tech industry’s response to SB-53 has notably differed from the widespread opposition that killed Wiener’s previous attempt. Major AI companies have either remained neutral or expressed support for the legislation.
Anthropic, maker of the Claude chatbot, actively backed the measure in the legislative session’s final days. Co-founder Jack Clark praised the collaborative approach: “We’re proud to have worked with Senator Wiener to help bring industry to the table and develop practical safeguards that create real accountability for how powerful AI systems are developed and deployed.”
Meta spokesperson Christopher Sgro called the law “a positive step” toward “balanced AI regulation,” emphasizing the company’s commitment to working with lawmakers on regulations that “protect consumers and society while fueling innovation and economic growth.”
OpenAI expressed satisfaction that California had created “a critical path toward harmonization with the federal government,” describing this as “the most effective approach to AI safety.”
Ongoing Concerns from Industry Groups
Not all industry voices are celebrating, however. The Chamber of Progress, a tech industry lobbying group, continued to oppose the measure, arguing it could harm California’s innovation economy. Robert Singleton, the group’s senior director for California, warned that the law “could send a chilling signal to the next generation of entrepreneurs who want to build here in California” (source: Politico).
Venture capital firm Andreessen Horowitz also expressed mixed feelings about the legislation. While acknowledging “some thoughtful provisions that account for the distinct needs of startups,” the firm’s head of government affairs, Collin McCune, criticized the law for “regulating how the technology is developed—a move that risks squeezing out startups, slowing innovation, and entrenching the biggest players.”
Setting National Precedent
Perhaps most significantly, the California law establishes a precedent for state-level AI regulation without comprehensive federal action. As McCune noted, the measure demonstrates that states, rather than the federal government, are taking the lead in AI regulation. This development could influence how the technology is governed nationwide.
The law fills a regulatory void that Congress has been slow to address, potentially serving as a model for other states considering similar legislation (source: Carnegie Endowment).
Looking Forward
California’s new AI safety law represents a more nuanced approach to technology regulation that seeks to balance the need for oversight with the imperative to maintain the state’s position as a global innovation hub. By requiring transparency without imposing overly restrictive development constraints, the legislation may provide a template for effective AI governance in the rapidly evolving landscape of artificial intelligence.
As AI technology continues to advance at breakneck speed, California’s measured approach to regulation could prove influential in shaping how other jurisdictions address the challenges and opportunities presented by this transformative technology. The success or failure of SB-53 will likely be closely watched by policymakers, industry leaders, and safety advocates nationwide.

