If I build a car that is far more dangerous than other cars, do no safety testing, release it, and it ends up killing people, I will likely be held liable and have to pay damages, if not face criminal penalties.
If I create a search engine that (unlike Google) gives “how do I commit a massacre” as the first result, with detailed instructions on how best to carry one out, and someone uses my search engine and follows those instructions, I’m probably not liable, thanks largely to Section 230 of the Communications Decency Act of 1996.
So here’s a question: Is an AI assistant like a car, where we can expect manufacturers to do safety testing or be held liable when people get killed? Or is it more like a search engine?
This is one of the questions animating the raging discourse over California’s SB 1047, newly proposed legislation that would require companies that spend more than $100 million training a “frontier model” in AI, such as the in-progress GPT-5, to conduct safety testing. Otherwise, they would be liable if their AI system leads to a “mass casualty event” or more than $500 million in damages in a single incident or set of closely linked incidents.
The general principle that AI developers should be held responsible for the harms of the technology they create is extremely popular with the American public, and an earlier version of the bill, which was much more stringent, passed the California State Senate 32-1. It has won endorsements from Geoffrey Hinton and Yoshua Bengio, two of the world’s most cited AI researchers.
Will it destroy the AI industry to hold it accountable?
However, the bill has been criticized by many in the tech world.
“Regulating basic technology will kill innovation,” Yann LeCun, chief AI scientist at Meta, wrote in an X post condemning 1047. He shared other posts declaring that “it could destroy California’s fantastic history of technological innovation” and wondered aloud, “SB-1047, up for California Assembly vote, spells the end of California’s tech industry?” The CEO of HuggingFace, a leader in the AI open source community, called the bill “a huge blow to CA and US innovation.”
Such apocalyptic comments make me wonder … have we read the same bill?
To be clear, to the extent that 1047 imposes unnecessary burdens on tech companies, I consider that a genuinely bad outcome, even though those burdens would only fall on companies doing $100 million training runs, which only the largest firms could afford. It’s entirely possible, and we’ve seen it in other industries, for regulatory compliance to consume a disproportionate share of people’s time and energy, discourage doing anything different or complicated, and concentrate energy on demonstrating compliance rather than where it’s most needed.
I don’t think the safety requirements in 1047 are unnecessarily onerous, but that’s because I agree with the half of machine learning researchers who believe that powerful AI systems have a high chance of being catastrophically dangerous. If I agreed with the half of machine learning researchers who dismiss such risks, I would see 1047 as a pointless burden, and I would be pretty firmly opposed.
And to be clear, while the outlandish claims about 1047 don’t make sense, there are some reasonable concerns. If you create a very powerful AI, fine-tune it so it won’t aid in mass murder, but then release the model open source so that people can undo the fine-tuning and then use it to commit mass murder, under 1047’s formulation of responsibility you would still be liable for the damage done.
This would certainly discourage companies from publicly releasing models when they are powerful enough to cause mass casualties, or even once their manufacturers think they might be powerful enough to cause mass casualties.
The open source community is understandably worried that big companies will decide the only legally safe option is to never release anything. While I think any model that is actually powerful enough to cause mass casualties probably shouldn’t be released, it would certainly be a loss to the world (and to the cause of making AI systems safer) if models with no such capabilities were held back out of excessive legal caution.
Claims that 1047 will be the end of California’s tech industry are guaranteed to age poorly, and don’t make much sense on their face. Many posts denouncing the bill assume that under existing US law, you are not liable if you create a dangerous AI that causes a mass casualty incident. But you probably already are.
“If you fail to take reasonable precautions against enabling other people to do substantial harm, such as by failing to install reasonable safeguards in your dangerous products, you do have a ton of liability exposure!” Yale law professor Ketan Ramakrishnan responded to one such post by AI researcher Andrew Ng.
1047 lays out more specifically what would constitute reasonable precautions, but it isn’t inventing some new concept of liability law. Even if it doesn’t pass, companies should expect to be sued if their AI assistants cause mass casualties or tens of millions of dollars in damages.
Do you really believe that your AI models are safe?
The other surprising thing about LeCun and Ng’s advocacy here is that both say that AI systems are actually completely safe and that there is no basis for concern about mass casualty scenarios in the first place.
Ng has famously said that he worries about dangerous AI about as much as he worries about overpopulation on Mars. And LeCun has said that one of his main objections to 1047 is that it is aimed at sci-fi risks.
I certainly don’t want the California state government spending its time on sci-fi risks, not when the state has very real problems. But if the critics are right that AI safety concerns are nonsense, then the mass casualty scenarios won’t come to pass, and in 10 years we’ll all feel silly for having worried that AI could cause mass casualties at all. That might be very embarrassing for the bill’s authors, but it would not kill all innovation in the state of California.
So what is the reason for the strong opposition? I think that the bill has become a litmus test for precisely this question: whether AI can be dangerous and deserves to be regulated accordingly.
SB 1047 doesn’t actually require that much, but it’s largely based on the idea that AI systems will pose potentially catastrophic dangers.
AI researchers are almost laughably divided over whether that basic premise is correct. Many serious, well-respected people with major contributions to the field say that catastrophe is unlikely. Many other serious, well-respected people with major contributions to the field say the chance is quite high.
Bengio, Hinton, and LeCun have been called the three godfathers of AI, and they now embody the industry’s deep divide over whether to take catastrophic AI risks seriously. SB 1047 takes them seriously. That is either its greatest strength or its greatest mistake. It’s no surprise that LeCun, firmly on the skeptic side, takes the “mistake” view, while Bengio and Hinton welcome the bill.
I’ve covered a lot of scientific debate, and I’ve encountered little consensus on the central question of whether truly powerful AI systems will soon be possible — and, if possible, dangerous.
Surveys repeatedly show that the field is roughly split in half. With each new AI advance, senior industry leaders seem to be constantly doubling down on existing positions rather than changing their minds.
But a great deal hinges on whether you think powerful AI systems could be dangerous. To get the policy response right, we need to get better at measuring what AI can do and to better understand which harm scenarios most warrant a policy response. Regardless of where they land on SB 1047, I have a lot of respect for the researchers trying to answer those questions, and a lot of frustration with those who treat them as already settled.
A version of this story originally appeared in the Future Perfect newsletter. Sign up here!