California’s State Assembly approved the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047) last week in a 41-9 vote. The bill represents a pioneering U.S. effort to regulate A.I. companies operating within California, mandating safety protocols during the training and deployment of advanced A.I. models to prevent potential harm. It now awaits Governor Gavin Newsom’s signature to become law.
The approval of SB 1047 comes amid ongoing global debate over A.I. regulation. Various countries and regions have attempted to create similar frameworks, with mixed outcomes, and those efforts have often met resistance from tech companies worried that regulation would stifle innovation, much like the reaction now observed in Silicon Valley. Unlike other regional efforts, California’s bill includes specific measures, such as a required shutdown capability and testing protocols for A.I. models, making it a more comprehensive and stringent regulatory approach.
Other attempts to regulate A.I. globally have also lacked SB 1047’s detailed focus on preventing “unsafe post-training modifications.” That provision can be read as a response to real-world incidents in which A.I. systems caused unintended consequences. The bill’s requirement for ongoing, rigorous testing further distinguishes California’s approach by emphasizing proactive risk management over reactive measures.
Industry Reactions
The bill has elicited diverse reactions from industry leaders. Some, including Elon Musk, see such regulations as necessary for the safe development of A.I. Others argue that the bill could impede technological progress: companies like OpenAI, along with political figures like Nancy Pelosi, contend that its focus on catastrophic harms could disproportionately burden smaller, open-source A.I. developers, a concern Anthropic also raised before the bill was amended.
“The requirements will mean that investors in some A.I. startups will have a portion of their investments spent on regulatory compliance rather than on developing the technology,” Jamie Nafziger, an international data privacy attorney, told Observer.
Concerns and Amendments
Critics highlight that the bill targets A.I. models that cost more than $100 million to train or exceed a high computational threshold. The absence of clear guidance on how to calculate training costs, for instance whether to count only the compute for the final training run or also hardware, staff, and failed experiments, could raise expenses and invite strategic accounting to stay below the threshold. Amendments to SB 1047 included removing criminal penalties for perjury, establishing a “Board of Frontier Models” to oversee regulated models, and protecting startups’ ability to modify open-source A.I. models.
“It will certainly stop the distribution of open-source A.I. platforms, which will kill the entire A.I. ecosystem, not just startups, but also academic research,” warned Yann LeCun, chief A.I. scientist at Meta (NASDAQ:META).
In response to this feedback, SB 1047 underwent several revisions intended to balance safeguards against potential A.I. risks with continued innovation. Senator Scott Wiener, the bill’s author, believes the result is a balanced approach that accounts for both the potential dangers and the commitments the industry has already made, and he has urged Governor Newsom to sign it into law.
“In our assessment, the new SB 1047 is substantially improved, to the point where we believe its benefits likely outweigh its costs,” stated Dario Amodei, co-founder and CEO of Anthropic.
The passage of SB 1047 by California’s State Assembly marks a significant step in the regulatory landscape of artificial intelligence. The bill emphasizes safety measures and risk management while acknowledging industry concerns, and it highlights the difficulty of balancing technological advancement with regulatory compliance. Its success or failure could set a precedent for future A.I. regulation worldwide, and its impact will be closely watched by proponents and critics of A.I. governance alike.