California Moves to Regulate 'Frontier AI'

California has become the first U.S. jurisdiction to enact a law, Senate Bill 53 (the Transparency in Frontier Artificial Intelligence Act), aimed specifically at regulating high-risk “frontier” AI systems. It imposes stringent obligations on large developers, including governance frameworks, public transparency reporting, safety incident disclosures, and whistleblower protections. While the legislation is state-level, its scope and ambition offer a preview of how AI could be governed more broadly in the absence of comprehensive federal oversight in the U.S.

For Canadian businesses, policymakers, and tech leaders, California’s approach sends an important signal: subnational governments are stepping in where national regulation lags. With Canada’s own AI legislation still evolving, watching how California balances innovation with public safety provides valuable guidance on what might be expected here and how to prepare proactively.

Many jurisdictions are moving to regulate artificial intelligence (AI), but this is shaping up to be a repeat of what we saw with privacy laws: a patchwork of regulatory approaches of varying shape and scope. California recently passed a raft of tech-related bills, and several developments are noteworthy:

Frontier AI Risks

California Senate Bill 53, known as the Transparency in Frontier Artificial Intelligence Act, signed into law on 29 September 2025, requires large AI developers to publish safety and security frameworks, disclose specified transparency reports, and report critical safety incidents to California's Office of Emergency Services (OES). This new law also creates enhanced whistleblower protections for employees reporting AI safety violations.

The mandated internal governance practices and transparency reports must include the results of the developer's pre-deployment assessments of the "catastrophic risks" posed by its AI models.

Companion Chatbots

Approved by Governor Newsom on 13 October 2025, California Senate Bill 243 requires operators of companion chatbots to:

  • provide clear and conspicuous notice to users that the companion chatbot is not human;
  • maintain a protocol to prevent the chatbot from producing content relating to suicidal ideation, suicide, or self-harm; and
  • for minor users, provide a notification every three hours reminding the user that the system is not human, encourage the user to take a break, and prevent the production of sexually explicit material.

SB 243 also requires operators to submit annual reports to the California Office of Suicide Prevention detailing the number of times they have referred users to crisis services, as well as their protocols for addressing self-harm and suicidal ideation.

There are exceptions and carve-outs – for example, the law does not apply to chatbots used only for customer service. However, the definition of 'companion chatbot' is broad enough to catch any AI platform that provides "adaptive, human-like responses to user inputs and is capable of meeting a user’s social needs, including by exhibiting anthropomorphic features and being able to sustain a relationship across multiple interactions." This appears designed to capture popular platforms such as ChatGPT, Grok, and Claude.

Liability for Autonomous AI

California Assembly Bill 316, also approved on 13 October 2025, addresses claims against defendants alleged to have caused harm using AI.

Where a defendant developed, modified, or used artificial intelligence that is alleged to have caused harm to the plaintiff, it is not a defence to assert that the AI caused that harm autonomously.

In other words, "I didn’t do it, the AI did it autonomously" is not a defence.

Takeaways

Given the size of the California market and the number of tech companies that call the state home, these new laws are worth noting. Remember that California's regulatory approach in areas such as children's privacy and EV standards has also influenced other jurisdictions and has arguably created de facto national standards in the US, especially where the federal government has elected not to legislate. California's approach is important for Canadian business for two main reasons:

  • So far, these specific issues are not addressed anywhere in Canadian legislation, but provincial and federal lawmakers are currently reviewing their options. Even if Canada does not adopt the same regulatory regime, it's likely that California's approach will influence Canadian lawmakers.
  • AI companies, many of which are subject to California law, will be implementing measures to comply with these new laws, which will, in turn, affect Canadian users. Since many of the most popular AI tools are governed by standard terms of use that designate California law, Canadian users can expect to enter contracts subject to these controls.

We regularly assist Canadian clients with AI contracts, privacy issues, intellectual property issues, AI compliance, assessing the impact of emerging regulations, and preparing AI governance frameworks. For support in understanding how California’s AI laws may affect your business, contact Richard Stobbe or any member of our Artificial Intelligence Group.