California’s Frontier AI Law: How SB 53 Could Set a National Standard for Big Tech and Startups
California is the first state in the U.S. to pass a comprehensive law focused on “frontier” artificial intelligence (AI), the cutting-edge systems with the potential to reshape economies, democracies, and daily life. The law, SB 53, requires transparency around how advanced models are developed and governed, and accountability through reporting obligations, safety disclosures, and whistleblower protections.
In an in-depth article for the International Legal Technology Association’s Peer to Peer Magazine, LP Director of Knowledge and Innovation Priti Saraswat examines what businesses can expect going forward.
What SB 53 requires of large AI developers.
SB 53 applies to the largest AI developers—those that meet defined revenue and computational thresholds—focusing on companies whose models require supercomputing resources and operate at a scale capable of influencing global markets.
CalCompute, a program created by the bill to provide startups, universities, and public institutions with access to much-needed computing resources.
CalCompute expands access to frontier-level computing by lowering the biggest barrier to AI innovation: compute cost. By making advanced resources available beyond big tech and well-funded labs, it enables many more organizations to develop and experiment with cutting-edge AI models.
The potential for the regulations set out in SB 53 to become the national standard.
The economic implications of SB 53 are vast, extending well beyond California’s borders. Because many of the world’s leading AI companies are based in California, SB 53’s regulations could quickly extend beyond the state. Companies are likely to apply its requirements across all their operations, turning the bill into a de facto national standard for the AI industry.
How SB 53 could help re-establish public trust in AI, countering damage done by misinformation, deepfakes, and opaque model-training practices.
SB 53 could help restore public trust in AI by reducing secrecy around its development. By requiring companies to publish safety frameworks and acknowledge potential risks, the bill addresses concerns fueled by misinformation, deepfakes, data misuse, and opaque training practices.
The limitations of SB 53, and what may need to come next.
SB 53 is an ambitious step toward AI governance, but its limitations point to what may need to come next. The bill applies only to large developers, leaving smaller groups able to create powerful models without oversight, and its emphasis on catastrophic risks may overlook more common harms. In addition, effective enforcement will require new state-level expertise and resources. As a result, SB 53 is not the final word on AI regulation, but the opening chapter in a broader governance effort.
Read the complete article here.
Filed under: Knowledge Management