
Shannon Vallor says AI presents an existential risk, but not the one you think


You may have heard AI described as a “stochastic parrot,” mechanically repeating our words back to us without actually understanding them. But Shannon Vallor, a philosopher of technology at the University of Edinburgh, thinks there’s a better metaphor: AI, she says, is a mirror.

After all, a parrot is another mind, not one like ours, but a sentient mind nonetheless. A large language model like ChatGPT is not. It reflects back to us our own images, our own words, whatever we put into its training data. When we engage with it, we are a lot like Narcissus, the mythical boy who sees his beautiful reflection in the water and becomes transfixed by it, thinking it is another person.

In her new book The AI Mirror, Vallor argues that it is our tendency to misperceive AI as a mind, and to think it might even have the capacity to be more moral than we are because it is more “objective” and “rational,” that poses the real existential risk to humanity, not anything AI can do on its own.

I spoke with Vallor about exactly what she thinks this risk is, what she believes it’s doing to human agency, and how it feeds into transhumanism, the movement that says humans should actively use technology to grow and develop our species. Here is a transcript of our conversation, edited for length and clarity.

    You are not the kind of person who lies awake at night fearing that AI will become a conscious mind that maliciously decides to enslave us all. But you argue that there is a real existential risk posed by AI. What is it?

The risk I’m talking about is existential in the philosophical sense: it strikes at the core of what it is to be human and our ability to give meaning to existence. One of the fundamental challenges of being human, but one we often find valuable, is that we’re not locked into a set of reflexes or mindless responses. We can actually use knowledge to break away from our habits and the social scripts we’re accustomed to, and choose to move in new directions. We can choose new moral patterns, new political structures, new rules for society.

At every point in human history we can see moments where individuals or communities chose to change the pattern. But to do that we need confidence in ourselves, in each other, and in the power of human agency, and also a kind of moral claim to our right to exercise that power.

One thing I hear in every country I visit to talk about AI: Are humans really that different from AI? Aren’t we just predictive text machines at the end of the day? Are we ever doing anything other than pattern matching and pattern generation?

It’s that rhetorical trick that actually scares me, not the machine itself. The rhetoric of AI today is about gaslighting humans into surrendering their own power and their own confidence in their agency and freedom. That’s the existential threat, because that’s what will enable humans to feel like we can just take our hands off the wheel and let AI drive.

And the rhetoric in some quarters says that not only can we, but that we should let AI do the hard thinking and make the big decisions, because AI is supposedly more rational, more objective.

Right – and that somehow you’re failing to be efficient, failing to cooperate with progress, failing to enable innovation, if you don’t go along with it.

When you write about this loss of faith in human agency, you draw on the existentialists, who argue that life has no inherent meaning, that meaning is something humans must choose how to create. You draw in particular on José Ortega y Gasset, the early 20th-century Spanish philosopher, and his idea of “autofabrication.” Why is that a key concept for you in the context of AI?

Ortega grappled with the basic problem of human meaning: we have to create it ourselves. That’s what he meant by autofabrication, which literally means self-making. He said this is the fundamental human condition: to make ourselves over and over again. The work never stops, because our cognitive tools give us the ability to enter a state of self-awareness, to see what we are doing and actually decide to change it.

That freedom is also, from an existentialist point of view, a kind of burden, isn’t it? Autofabrication takes considerable courage and strength, because the easiest thing is to let someone else tell you the script is final and can’t be changed, so you might as well follow it; then you don’t have to carry the burden of deciding what the future looks like for yourself or anyone else.

So the rhetoric around AI is telling us to surrender our human freedoms, and to me that’s such a profound violation of what’s good about being human. The idea that we should give them up means giving up the possibility of artistic growth, political growth, moral development – and I don’t think we should.

One of the ways this discourse appears is in the attempt to build a “machine ethics”: moral machines that could act as our moral advisors. Transhumanists are particularly enthusiastic about this project. The philosopher Eric Dietrich has even argued that we should build “the better robots of our nature,” machines that can morally outperform us, and then hand the world over to “Homo sapiens 2.0.” What do you make of that?

I’m skeptical of the moral machine project, because it generally tries to crowdsource moral judgment [and train AI on those human intuitions], but the whole point is that crowds aren’t always right! So crowdsourcing moral judgments is a very dangerous thing. If you had a crowdsourced moral machine that aggregated moral judgments in Nazi Germany, and then tried to automate decisions with it elsewhere, you would be contributing to the expansion of a morally criminal enterprise.

Crowdsourcing seems like a problematic approach, but if we don’t canvass what the general public thinks, what do we do instead? Are we proposing to defer to some philosopher-king, in which case there might be concerns about that being undemocratic?

I think there has always been a better way, which is to keep morality a contested territory. It has to remain open to challenge. The conversation about what it means to live well with others and what we owe to each other can never stop. And so I’m very reluctant to develop machines that are designed to find an optimal answer and stop there.

Right – working within what people say about moral rules today seems very different from what you call standing in “the moral space of reasons.” Spell out what you mean by that.

The “space of reasons” was a concept developed by the philosopher Wilfrid Sellars. It’s the space in which we can explore one another’s reasons for believing something, where we can seek and offer justification to each other. Other philosophers later adapted his notion of the logical space of reasons to think about a moral space of reasons, because that’s what we do in morality: when we make moral demands on one another, especially when they’re new and unfamiliar, we justify them. Our reasons have to be accessible to one another, so that we can find what we can jointly recognize and accept.

I think if we had a truly moral machine, it would be able to stand in that space with us. It would be able to articulate reasons, appreciate our reasons, and negotiate those reasons with us in a way that doesn’t simply mirror the consensus we’ve already reached. Because any tool that just reflects known moral patterns can run into trouble when we face situations where the environment has changed or is somehow new.

This reminds me of a particular virtue you write about a lot: practical wisdom, or phronēsis, to use the ancient Greek word. What is it, and why is it so important?

Aristotle wrote that we develop virtues like honesty through practice and habituation. It’s easy to lie to get what you want, but once you’re in the habit of telling the truth, you can actually build a character for which telling the truth comes easily, and you may even struggle on the rare occasions when you have to lie.

But there are moments when relying on the habits you’ve developed can actually cause harm, because the situation is new, and your old habits may not be well adapted to it. Practical wisdom is the intellectual virtue that allows you to recognize that and adapt your cultivated response to something better. In the civil rights movement, for example, people were able to say: normally, following the law is the moral thing to do, but now we understand that it isn’t, and that civil disobedience is in fact morally necessary in this context.

Practical wisdom, like the other virtues, is developed through practice, so if you don’t have opportunities to exercise your reasoning, and you’re not used to deliberating about certain kinds of things, you won’t reason well when you need to later on. We need a lot of cognitive exercise to develop practical wisdom and to retain it. And there’s reason to think that cognitive automation deprives us of the chance to build and retain those cognitive muscles. That’s the risk of deskilling in intellectual and moral competence. It’s already happening, and I think we have to resist it.

When I try to give a charitable reading of the transhumanist movement we’re seeing, I think the core emotion underlying it is shame about the human condition. And after two world wars, the use of nuclear weapons, the climate crisis, and more, it makes a certain sense that humanity would feel this shame. So I understand psychologically why there might be an urge to run away from everything human and toward the machines we imagine could be better than us, even though I don’t share it.

And I think where I’m struggling is: How do we hold onto what might be meaningful about using technology to transform ourselves without sinking into some deep anti-humanism?

There is a kind of emptiness in transhumanism, in that it doesn’t know what to desire; it only wants the power to create something else: freedom from our bodies, from death, from our limitations. But it’s always freedom from, never freedom for. Freedom for what? What is the positive vision we want to move toward?

I have a deep optimism about the human condition. I think morality is not driven only by fear; it’s driven by love, by the experience of mutual care and solidarity. Our first experience of goodness is being cared for by another person, whether a mother or a father or a nurse. Everything else, to me, is us trying to pursue that goodness in more elaborate forms. So for me there is a freedom for, and it lies at the core of what a human animal can be.

Could there be other creatures better than us? I actually think that’s a meaningless question. Better than what? They could be better than what they are, but I think morality is rooted in a particular form of existence. We exist as a particular kind of social, vulnerable, interdependent creature with a lot of excess cognitive power. All of that factors into what it means to be moral as a human being. For me, this abstraction, the idea of some pure universal morality that wildly different creatures could carry out better than we can, fundamentally misunderstands what morality is.
