Advocates say it is a modest law setting "clear, predictable, common-sense safety standards" for artificial intelligence. Opponents say it's a dangerous and presumptuous move that will "stifle innovation."
Regardless, SB 1047 — California State Sen. Scott Wiener's proposal to regulate advanced AI models offered by companies doing business in the state — has now passed the California State Assembly by a margin of 48 to 16. In May, it passed the Senate by a margin of 32 to 1. Once the Senate agrees to the Assembly's changes to the bill, which it is expected to do soon, the measure goes to Gov. Gavin Newsom's desk.
The bill, which would hold AI companies liable for catastrophic harms caused by their "frontier" models, is supported by a wide array of AI safety groups as well as luminaries in the field like Geoffrey Hinton, Yoshua Bengio, and Stuart Russell, who have warned of the technology's potential to pose enormous, even existential, dangers to humankind. It got a surprise last-minute endorsement from Elon Musk, who among his other ventures runs the AI firm xAI.
Arrayed against SB 1047 is nearly the entire tech industry, including OpenAI, Meta, powerful investors like Y Combinator and Andreessen Horowitz, and some academic researchers who fear it threatens open-source AI models. Anthropic, another AI heavyweight, lobbied to water down the bill; after many of its proposed amendments were adopted in August, the company said of the bill that "the benefits probably outweigh the costs."
Despite the industry backlash, the bill appears to be popular with Californians, though all the relevant surveys have been funded by interested parties. A recent poll by the pro-bill AI Policy Institute found 70 percent of residents in favor, with even higher approval among Californians who work in tech. The California Chamber of Commerce released its own poll finding a plurality of Californians opposed, but the wording of that poll was skewed, to say the least, describing the bill as requiring developers to "pay millions of dollars in fines if they don't comply with state bureaucrats' orders." The AI Policy Institute's survey presented both pro and con arguments; the Chamber of Commerce's offered only the "con."
The wide, bipartisan margins by which the bill passed the Assembly and Senate, and the public's general support (when not asked in a slanted way), might suggest that Gov. Newsom is poised to sign it. But it's not that simple. Andreessen Horowitz, the $43 billion venture capital giant, has hired Jason Kinney, a close friend of Newsom's and a Democratic operative, to lobby against the bill, and a number of powerful Democrats, including eight members of the US House from California and former Speaker Nancy Pelosi, have urged a veto, echoing talking points from the tech industry.
So there’s a strong possibility that Newsom will veto the bill, keeping California — the hub of the AI industry — from becoming the first state with strong AI liability rules. At stake is not just AI safety in California, but the US and potentially the world.
What’s Inside SB 1047
Given all the intense lobbying it has attracted, one might imagine that SB 1047 is an aggressive, heavy-handed bill — but, especially after several rounds of amendments in the state legislature, the actual legislation does fairly little.
It would provide whistleblower protections for tech workers, along with a process for anyone with confidential information about risky behavior at an AI lab to take their complaint to the state attorney general without fear of retaliation. It also requires AI companies that spend more than $100 million to train an AI model to create safety plans. (The unusually high threshold for this requirement is meant to protect California's startup industry, which objected that the compliance burden would be too great for small companies.)
So why has this bill prompted months of hysteria, intense lobbying by California's business community, and unprecedented intervention by California's federal representatives? Part of the answer is that the bill used to be stronger. The initial version set its threshold at a certain amount of computing power rather than at $100 million in training costs, meaning that over time, more companies would have become subject to the law as computing continued to get cheaper. It would also have established a state agency called the Frontier Model Division to review safety plans; critics objected to what they saw as a power grab.
Another part of the answer is that many people were falsely told the bill does more than it does. One prominent critic wrongly claimed that AI developers could be guilty of a crime regardless of whether they were involved in a harmful incident, when the bill provided for criminal liability only in cases where developers knowingly lied under oath. (Those perjury provisions were later removed anyway.) Representative Zoe Lofgren of the House Science, Space, and Technology Committee wrote a letter in opposition falsely claiming that the bill requires compliance with guidelines that don't yet exist.
But the guidelines do exist (you can read them in full here), and the bill does not require companies to comply with them. It says only that "a developer shall consider industry best practices and applicable guidelines" from the US Artificial Intelligence Safety Institute, the National Institute of Standards and Technology, the Government Operations Agency, and other reputable organizations.
Much of the discussion of SB 1047 has unfortunately centered on such flatly incorrect claims, in many cases advanced by people who should know better.
SB 1047 is based on the premise that near-future AI systems might be extraordinarily powerful, that they may therefore be dangerous, and that some oversight is needed. That basic premise is extraordinarily controversial among AI researchers. Nothing exemplifies the divide better than the three men often referred to as the "godfathers of machine learning": Turing Award winners Yoshua Bengio, Geoffrey Hinton, and Yann LeCun. Both Bengio — a Future Perfect 2023 honoree — and Hinton have in the past few years become convinced that the technology they helped build could kill us all and have argued for regulation and oversight. Hinton resigned from Google in 2023 to speak openly about his fears.
LeCun, who is the chief AI scientist at Meta, has taken the opposite tack, declaring that such concerns are absurd science fiction and that any regulation would stifle innovation. So while Bengio and Hinton support the bill, LeCun opposes it, particularly the idea that AI companies should face liability if their AI is used in a mass-casualty incident.
In this sense, SB 1047 sits at the center of a symbolic tug-of-war: Does the government take AI safety concerns seriously, or not? The actual text of the bill may be limited, but to the extent that it suggests the government is listening to the half of the experts who think AI could be extraordinarily dangerous, the implications are big.
It’s that sentiment that’s likely driven some of the fiercest lobbying against the bill by venture capitalists Marc Andreessen and Ben Horowitz, whose firm a16z has been working tirelessly to kill the bill, and some very unusual federal lawmakers reaching out to demand their opposition. A state bill. More mundane politics probably play a role: Politico reported (That Pelosi opposed the bill because she’s trying to court tech VCs for her daughter, who could run against Scott Wiener for a representative seat.)
Why SB 1047 is so important
It may seem surprising that so many people are wringing their hands over the laws of just one US state. But remember: California isn’t just any state. It is home to several of the world’s leading AI companies.
And what happens there is especially important because, at the federal level, lawmakers have dragged out the process of regulating AI. Amid Washington's gridlock and a looming election, it's falling to the states to pass new laws. The California bill, if Newsom gives it the green light, would be a big piece of that puzzle, setting the direction for the US more broadly.
The rest of the world is watching, too. "Countries around the world are looking at these drafts for ideas that could influence their decisions on AI legislation," Victoria Espinel, chief executive of the Business Software Alliance, a lobbying group that represents major software companies, told the New York Times in June.
Even China — often invoked as the boogeyman in American conversations about AI development (because "we don't want to lose the arms race with China") — is showing signs of being concerned about safety, not just wanting to race ahead. Bills like SB 1047 could telegraph to others that Americans care about safety, too.
Frankly, it’s refreshing to see lawmakers smarten up to the tech world’s favorite gambit: claiming it can regulate itself. This claim may be influential in the age of social media, but it has become increasingly untenable. We need to control Big Tech. This means not just the carrot, but the stick.
Newsom has an opportunity to do something historic. And if he doesn't? Well, he'll face some sticks of his own. A survey by the AI Policy Institute shows that 60 percent of voters are prepared to blame him for future AI-related incidents if he vetoes SB 1047. And they say they would punish him at the ballot box if he runs for higher office: 40 percent of California voters said they would be less likely to vote for Newsom in a future presidential primary if he vetoed the bill.