
    Inside the fight over California’s new AI bill


    California state Sen. Scott Wiener, left, speaks during a press conference in Alamo Square Park about a new bill to close a loophole in the prosecution of automobile break-ins, Nov. 26, 2018, in San Francisco. (Lea Suzuki/San Francisco Chronicle via Getty Images)

    California State Sen. Scott Wiener (D-San Francisco) is generally known for his relentless housing and public safety bills, a legislative record that has made him one of the tech industry’s favorite lawmakers.

    But his introduction of the “Safe and Secure Innovation for Frontier Artificial Intelligence Models” bill, also known as SB 1047, has drawn outrage from that same industry. The bill requires companies training “frontier models” that cost more than $100 million to conduct safety testing and to be able to shut down their models in the event of a safety incident, and VC heavyweights including Andreessen Horowitz and Y Combinator have publicly condemned it.

    I spoke with Wiener this week about SB 1047 and its critics; our conversation is below (condensed for length and clarity).

    Kelsey Piper: I’d like to present the challenges I’ve heard to SB 1047 and give you a chance to respond to them. I think one category of concern here is that the bill would prohibit a model from being used publicly, or made available for public use, if it poses an unreasonable risk of serious harm.

    What is an unreasonable risk? Who decides what’s reasonable? Much of Silicon Valley is deeply skeptical of regulators, so they don’t trust that discretion will be used and not abused.

    Sen. Scott Wiener: To me, SB 1047 is a light-touch bill in a lot of ways. It’s a serious bill, it’s a big bill. I think it’s an impactful bill, but it’s not hardcore. The bill does not require a license. There are people, including some CEOs, who have said there should be a licensing requirement. I rejected that.

    There are people who think there should be strict liability. That is the majority rule for product liability. I rejected that. [AI companies] don’t need any agency’s permission to release [a model]. They have to do the safety testing they all say they’re currently doing or want to do. And if that safety testing reveals a significant risk (and we define those risks as catastrophic), then you have to mitigate. The risk should be reduced, not eliminated.

    There are already legal standards today under which, if a developer releases a model and that model is then used in a way that harms someone or something, you can be sued, and it will likely be a negligence standard as to whether you acted reasonably. That existing liability is much, much broader than the liability we create in the bill. The bill allows only the Attorney General to sue, whereas under tort law anyone can sue. Model developers are already subject to potential liability that is far broader than this.

    Yes, I’ve seen some objections to the bill that seem to revolve around a misunderstanding of tort law, saying, “It would be like holding engine manufacturers responsible for car accidents.”

    And they can be. If someone crashes a car and there’s something about the engine design that contributed to the crash, the engine manufacturer can be sued. You have to prove that they did something negligent.

    I’ve talked to startup founders about this, and to VCs and people at big tech companies, and I’ve never heard anyone deny the fact that this liability exists today, and that the liability that exists today is profoundly broad.

    We definitely hear contradictory arguments. Some of the people opposing it say, “This is all science fiction, anyone who focuses on safety is part of a cult, it’s not real, the capabilities are so limited.” Of course that’s not true. These are powerful models with huge potential to make the world a better place. I’m really excited about AI. I am not a doomer by any means. And then they say, “We can’t possibly be liable if these catastrophes happen.”

    Another challenge to the bill is that open source developers have benefited greatly from Meta putting [the generously licensed, sometimes called open source AI model] Llama out there, and they’re understandably afraid that this bill will make Meta less willing to do similar releases in the future for fear of liability. Of course, if a model is genuinely extremely dangerous, no one wants it released. But the worry is that those concerns could make companies far too conservative.

    In terms of open source, including but not limited to Llama, I take the critiques from the open source community really, really seriously. We have engaged with people in the open source community and made amendments in direct response to that feedback.

    The shutdown provision [a provision in the bill that requires model developers to have the capability to enact a full shutdown of a covered model, to be able to “unplug it” if things go south] was very high on the list of concerns for person after person.

    We made an amendment to make it clear that if you’re not in possession of the model, you’re not responsible for being able to shut it down. People who open source a model are not responsible for being able to shut it down.

    And then the other thing we did was make an amendment about people who fine-tune. If you make more than minimal changes to the model, significant changes, at some point it effectively becomes a new model and the original developer is no longer liable. There are a few other smaller amendments, but those are the big ones we made in direct response to the open source community.

    Another challenge I hear is: Why aren’t you focusing on more pressing issues in California?

    Whenever you work on any issue, you hear people say, “Don’t you have more important things to work on?” Yes, I work on housing. I work on mental health and addiction treatment. I’m working nonstop on public safety. I have an auto break-in bill and a bill about people selling stolen goods on the street. And I’m also working on a bill to make sure we both foster AI innovation and do it in a responsible way.

    As a policymaker, I am very pro-technology. I’m a defender of our tech environment, which is often under attack. I supported California’s net neutrality law, which promotes an open and free internet.

    But I’ve also seen with technology that we sometimes fail to get ahead of very obvious problems. We did that with data privacy. We finally passed a data privacy law here in California, and for the record, the opposition to it said the same thing: that it would destroy innovation, that no one would want to work here.

    My goal here is to create a lot of room for innovation while at the same time promoting responsible training, deployment, and release of these models. The argument that it’s going to squash innovation, that it’s going to push companies out of California: again, we hear that with almost every bill. But it’s important to understand that this bill doesn’t just apply to people who build their models in California; it applies to everyone who does business in California. So you can be in Miami, but unless you’re going to disconnect from California (and you’re not), you have to comply.

    I wanted to talk about an interesting element of the debate over this bill, which is that it’s popular everywhere except in Silicon Valley. It passed the state Senate 32-1, with bipartisan approval. According to one poll, 77 percent of Californians are in favor, more than half of them strongly.

    But the people who hate it are all in San Francisco. How did this end up being your bill?

    In some ways I’m the best author for this bill, representing San Francisco, because I’m surrounded by and immersed in AI. The origin story of this bill is that I started talking to a bunch of front-line AI technologists and startup founders. This was early 2023, and I started a series of salons and dinners with AI folks. That’s where some of these ideas started to form. So in one way I’m the best author because I have access to incredibly brilliant people in tech. In another way I’m the worst author, because I have folks in San Francisco who are not happy.

    Something I struggle with as a reporter is conveying to people who aren’t in San Francisco, who aren’t in that conversation, that AI is really, really big, really high stakes.

    It’s very exciting. Because when you start trying to envision it: Could we have a cure for cancer? Could we have highly effective treatments for a broad range of viruses? Could we have breakthroughs in clean energy that no one ever imagined? So many exciting possibilities.

    But with every powerful technology comes risk. [This bill] is not about eliminating risk. Life is about risk. It’s about making sure that at least our eyes are wide open, that we understand the risk, and that if there’s a way to reduce it, we take it.

    That is what we are asking for with this bill, and I think the vast majority of people will support it.

    A version of this story was originally published in the Future Perfect newsletter. Sign up here!

