Tuesday, December 24, 2024

    AI companies are trying to make God. Shouldn’t they get our permission first?


[Image: A conceptual illustration of two hands forming a multicolored sphere, index fingers nearly touching in the pose of Michelangelo's "Creation of Adam," with bright, swirling light at the point of contact suggesting energy or intelligence.]

AI companies are on a mission to radically change our world. They are working to create machines that could surpass human intelligence and usher in a dramatic economic transformation for us all.

Sam Altman, CEO of ChatGPT-maker OpenAI, has basically told us he’s trying to create a god — or “magic intelligence in the sky,” as he puts it. OpenAI’s official term for this is artificial general intelligence, or AGI. Altman says that AGI won’t just be capable of “breaking capitalism” but is also “perhaps the greatest threat to humanity’s continued existence.”

Here’s a very natural question: Did anyone actually ask for this kind of AI? By what right does a handful of powerful tech CEOs get to decide that our whole world should be turned upside down?

As I’ve written before, it’s patently undemocratic that private companies are developing technologies that aim to completely change the world without buy-in from the public. Indeed, even leaders of these companies have expressed discomfort with how undemocratic it is.

Jack Clark, co-founder of AI company Anthropic, told Vox last year that it’s “a really weird thing that this isn’t a government project.” He has also written that there are several key questions he is “confused and uncomfortable with,” including: “How much permission do AI developers need to get from society before irrevocably changing society?” Clark continued:

    Technologists have always had something of a libertarian streak, and this is perhaps best illustrated by the 2010s era of “social media” and Uber et al., which developed with little regard for the societies they affected. This kind of permissionless invention is basically the implicitly preferred form of development as reflected in Silicon Valley and the general “move fast and break things” philosophy of tech. Should the same be true of AI?

    I’ve noticed that when anyone questions the ideal of “permissionless innovation,” a lot of tech enthusiasts push back. Their objections tend to fall into three categories. Because this is such a perennial and important debate, it’s worth taking each of them in turn — and explaining why I think they’re wrong.

    Objection 1: “We consented by using it”

    ChatGPT is the fastest-growing consumer application in history: it reached 100 million active users just two months after its launch. There’s no disputing that a lot of people found it really cool. And it spurred the release of other chatbots, like Claude, which are being used by all kinds of people — from journalists to coders to busy parents who want someone (or something) else to make the grocery list.

    Some claim that this simple fact — we’re using the AI! — proves that people approve of what these companies are doing.

    This is a tempting claim, but I think it’s very misleading. Our use of AI systems does not equate to consent. By “consent” we typically mean informed consent, not consent born of ignorance or coercion.

    Much of the public is not informed about the true costs and benefits of these systems. How many people are aware, for example, that generative AI consumes so much energy that companies like Google and Microsoft are backing away from their climate commitments as a result?

    Plus, we all live in a choice environment that pressures us to use technologies we might rather avoid. Sometimes we “give in” to a technology because we fear we’ll be at a professional disadvantage if we don’t use it. Think about social media. I personally wouldn’t be on X (formerly known as Twitter) if it weren’t seen as important to my work as a journalist. In a recent survey, many young people said they wish social media platforms had never been invented, but they feel pressure to use them anyway given that these platforms exist.

    Even if you think someone’s use of a particular AI system constitutes consent, that doesn’t mean they consent to the larger project of creating AGI.

    This brings us to an important distinction: there’s narrow AI — a system built for a specific task (say, language translation) — and then there’s AGI. Narrow AI can be fantastic! It’s helpful that AI systems can edit a rough copy of your work for free or let you write computer code using plain English. It’s great that AI is helping scientists better understand disease.

    And it’s amazing that AI has cracked the protein-folding problem — the challenge of predicting which 3D shape a protein will fold into — a puzzle that stumped biologists for 50 years. The Nobel Committee for Chemistry clearly agreed: it just awarded a Nobel Prize to the AI pioneers who enabled this progress, which will aid drug discovery.

    But that is different from the attempt to build a general-purpose reasoning machine that surpasses humans — the “magic intelligence in the sky.” While plenty of people do want narrow AI, polling shows that most Americans do not want AGI. Which brings us to…

    Objection 2: “The public is too ignorant to tell inventors how to invent”

    Here, the response usually involves a quote (though it may be apocryphal) attributed to automaker Henry Ford: “If I had asked people what they wanted, they would have said faster horses.”

    The claim here is that there is a good reason why talented inventors don’t ask for public buy-in before releasing a new invention: society is too ignorant or unimaginative to know what good innovation looks like. From the printing press and the telegraph to electricity and the Internet, many great technological innovations in history have occurred because a few individuals decided to make them.

    But this does not mean that decisions by fiat are always appropriate. The fact that society has often let innovators act this way may be partly due to technological solutionism, partly due to belief in the “great man” view of history, and partly because, well, it was genuinely difficult to consult a wide swath of society in the age before mass communication — before things like the printing press or the telegraph!

    And while those innovations came with perceived risks and real harms, they didn’t threaten to wipe out humanity entirely or subjugate us to another species.

    For some of the more powerful technologies we have invented, we have tried to seek democratic input and establish systems of global oversight — and rightly so. That’s why we have a Nuclear Non-Proliferation Treaty and a Biological Weapons Convention — treaties that, while a struggle to implement effectively, are vital to keeping our world safe.

    It is true that most people do not understand the ins and outs of AI. But the argument here is not that the public should dictate the minutiae of AI policy. It is that it is wrong to ignore the public’s general will on questions like “Should governments enforce safety standards before a catastrophe occurs, or only punish companies after the fact?” and “Are there certain kinds of AI that shouldn’t exist at all?”

    As Daniel Colson, executive director of the nonprofit AI Policy Institute, told me last year, policymakers shouldn’t look to voters for the specifics of how to solve these problems. “The place where I think voters are the right people to ask is: What do you want out of policy? And where do you want society to go?”

    Objection 3: “It’s impossible to stop innovation anyway”

    Finally, there is the argument from technological inevitability: you simply cannot stop the march of technological progress — it is unstoppable!

    This is a myth. In fact, there are many technologies we have decided not to build, or that we have built only under very strict restrictions. Just think of human cloning or human germline modification. Recombinant DNA researchers famously organized a moratorium on certain risky experiments at the 1975 Asilomar conference. We are, remarkably, still not cloning humans.

    Or think of the 1967 Outer Space Treaty. Adopted by the United Nations against the backdrop of the Cold War, it bars countries from doing certain things in space — such as stationing their nuclear weapons there. These days, a related debate concerns whether we should send messages into space in the hope of reaching extraterrestrials. Some argue that this is dangerous because an alien species, once aware of us, might conquer and oppress us. Others argue it could be wonderful — perhaps the aliens would gift us their knowledge in the form of an Encyclopedia Galactica!

    Either way, it’s clear that the stakes are incredibly high and that all of human civilization would be affected, prompting some to argue for democratic deliberation before any intentional messages are beamed into space.

    As the old Roman proverb has it: What touches all should be decided by all.

    That is as true of superintelligent AI as it is of nukes, chemical weapons, or interstellar broadcasts.
