
    Can we build conscious machines?


    A film adaptation of science fiction author Terry Bisson’s 1991 short story, They’re Made out of Meat, opens with two aliens in dismay. Sitting in a roadside diner booth disguised as humans, cigarettes hanging limp from their mouths, they’re grappling with an observation about the creatures who surround them: Humans, it seems, are made entirely of meat. 

    They’re dumbstruck by the idea that meat alone, with no help from machines, can generate a thinking mind. “Thinking meat! You’re asking me to believe in thinking meat!” one alien scoffs. “Yes,” the other responds, “Thinking meat! Conscious meat! Loving meat! Dreaming meat! The meat is the whole deal! Are you getting the picture?”

    For us Earthlings, the disbelief tends to go in the other direction. The idea that consciousness could arise in something other than meat — say, the silicon and metal hardware of AI systems like ChatGPT or Claude — is an alien concept. Can a mind really be made of metal and silicon? Conscious silicon! Dreaming silicon! 

Now, progress in artificial intelligence is moving the debate over what minds can be made of from the realm of science fiction and hazy dorm rooms to the grandstands of mainstream attention. If consciousness really can arise in a jumble of silicon chips, we run the risk of creating countless AIs — beings, really — that can not only intelligently perform tasks, but also develop feelings about their lives.

    That could lead to what philosopher Thomas Metzinger has called a “suffering explosion” in a new species of our own creation, leading him to advocate for a global moratorium on research that risks creating artificial consciousness “until 2050 — or until we know what we are doing.”

    Most experts agree that we’re not yet perpetrating “mind crimes” against conscious AI chatbots. Some researchers have already devised what the science writer Grace Huckins summed up as a provisional “consciousness report card,” tallying up properties of current AI systems to gauge the likelihood of consciousness. The researchers, ranging from neuro- and computer scientists to philosophers and psychologists, find that none of today’s AIs score high enough to be considered conscious. They argue, though, that there are no obvious technological barriers to building ones that do; the road to conscious AI looks plausible. Inevitable, even.
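To make the tallying idea concrete, here is a minimal sketch of how a report card like that might score a system. The indicator names, the threshold, and the verdicts below are my own illustrative placeholders, not the researchers’ actual rubric, which draws its indicators from leading neuroscientific theories of consciousness.

```python
# A minimal, hypothetical sketch of a "consciousness report card."
# The indicators and threshold are illustrative placeholders, not
# the actual rubric used by the researchers.

INDICATORS = {
    "recurrent_processing": "feedback loops, not just a single feedforward pass",
    "global_workspace": "a shared buffer that broadcasts to specialized subsystems",
    "unified_agency": "goals, beliefs, and perception integrated into one agent",
    "embodied_feedback": "a body whose outputs shape its future inputs",
}

def report_card(observed: set[str], threshold: int = 3) -> str:
    """Tally which indicator properties a system exhibits and render a verdict."""
    score = sum(1 for name in INDICATORS if name in observed)
    verdict = ("candidate for consciousness" if score >= threshold
               else "unlikely to be conscious")
    return f"{score}/{len(INDICATORS)} indicators present: {verdict}"

# Today's chatbots would check few, if any, of these boxes:
print(report_card({"global_workspace"}))
# -> 1/4 indicators present: unlikely to be conscious
```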


But that’s because their entire project hinges on a critical assumption: that “computational functionalism” is true. On this view, consciousness doesn’t depend on any particular physical stuff; what matters is the right kind of abstract computational properties. Any physical stuff — meat, silicon, whatever — that can perform the right kinds of computation can generate consciousness. If that’s the case, then conscious AI is mostly a matter of time.

Making that assumption can be useful in fleshing out our theories, but if we keep granting it without ever returning to examine it, the question itself begins to disappear. And along with it goes one of our best shots at developing some sense of moral clarity in this highly uncertain terrain.

    The critical question for AI consciousness isn’t how many different tasks it can perform well, whether it passes as human to blinded observers, or whether our budding consciousness-detecting meters tell us its electrical activity is complex enough to matter. The decisive question is whether computational functionalism is true or not: Do you need meat to have a mind?

If consciousness requires meat, then no matter how advanced technology becomes, the whole debate over AI consciousness is moot. No biology means no mind, which means no risk of suffering. That doesn’t mean advanced AI will be safe; serious, even existential, risks do not require AI to be conscious, merely powerful. But we could proceed in both creating and regulating artificial intelligence systems free from the concern that we might be creating a new kind of slave, born into the soul-crushing tedium of having one’s entire existence confined within a customer service chat window.

    Rather than asking if each new AI system is finally the one that has conscious experience, focusing on the more fundamental question of whether any type of non-biological feeling mind is possible could provide much broader insights. It could at least bring some clarity to what we know — and don’t know — about the moral conundrum of building billions of machines that may not only be able to think and even love, but suffer, too. 

    The great substrate debate: Biochauvinism versus artificial consciousness

    So far, to the best of human knowledge, everything in the known universe that has ever been conscious has also been made of biological material. 

    That’s a major point for the “biochauvinist” perspective, supported by philosophers like Ned Block, who co-directs the NYU Center for Mind, Brain, and Consciousness. They argue that the physical stuff that a conscious being is made of, or the “substrate” of a mind, matters. If biological substrates are so far the only grounds for thinking, feeling minds we’ve discovered, it’s reasonable to think that’s because biology is necessary for consciousness.

    Stanford philosopher Rosa Cao, who holds a PhD in cognitive science and one in philosophy of mind, agrees that the burden of proof should fall on those who argue meat isn’t necessary. “Computational functionalism seems a far more speculative hypothesis than biochauvinism,” she said via email.

Yet the burden of proof seems to have fallen on biochauvinists anyway. Computational functionalism is a widely held position among philosophers of mind today (though it still has plenty of critics). For example, Australian philosopher David Chalmers, who co-directs the NYU center alongside Block, not only disagrees with Block that biology is necessary, but recently ventured that there’s about a 20 percent chance we develop conscious AI in the next 10 years.

Again, his conjecture rests on assuming that computational functionalism is true: the idea that the substrate of a mind — whether meat, metal, or silicon — isn’t all that important. What matters are the mind’s functions, a position some experts call substrate independence.

If you can build a machine that performs the same kinds of computational functions as a mind made of meat, you could still get consciousness. In this view, the functions that matter are certain kinds of information processing — though there isn’t a consensus on what kinds of processing differentiate an unconscious system that computes information, like a calculator, from one that entails conscious experience, like you.

    That detail aside, the main idea is that what matters for consciousness is the structure, or “abstract logic,” of the information processing, not the physical stuff that’s carrying it out. For example, consider the game of chess. With a checkerboard, two sets of pieces, and an understanding of the rules, anyone can play the game. But if two people were marooned on a desert island without a chess set, they could still play. They could draw lines in the sand to re-create the board, collect bits of driftwood and shells for pieces, and play just the same. 

    The game of chess doesn’t depend on its physical substrate. What matters is the abstract logic of the game, like moving a piece designated the “knight” two squares forward and one to the side. Whether made out of wood or sand, marble or marker, any materials that can support the right logical procedures can generate the game of chess. 
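To see how little the substrate figures into that logic, here’s a toy sketch in code: the same abstract rule (a knight’s move) runs unchanged over two different representations of the board. Both “substrates” are my own illustrative stand-ins for wood and sand, not anything from the philosophical literature.

```python
# A toy sketch of substrate independence: the knight's move is defined
# purely by the abstract logic of the game, never by what the board is made of.

KNIGHT_OFFSETS = [(2, 1), (2, -1), (-2, 1), (-2, -1),
                  (1, 2), (1, -2), (-1, 2), (-1, -2)]

def knight_moves(file: int, rank: int) -> list[tuple[int, int]]:
    """Legal knight destinations from (file, rank) on an 8x8 board."""
    return [(file + df, rank + dr) for df, dr in KNIGHT_OFFSETS
            if 0 <= file + df < 8 and 0 <= rank + dr < 8]

# "Substrate" 1: a sparse dict, like shells placed on lines drawn in sand.
sand_board = {(1, 0): "knight"}

# "Substrate" 2: a dense 8x8 grid, like a wooden set.
wood_board = [["empty"] * 8 for _ in range(8)]
wood_board[0][1] = "knight"

# The move logic never consults the substrate, so both boards agree:
print(knight_moves(1, 0))  # [(3, 1), (2, 2), (0, 2)]
```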

    And so with consciousness. As MIT physicist Max Tegmark writes, “[C]onsciousness is the way that information feels when being processed in certain complex ways.” If consciousness is an abstract logic of information processing, biology could be as arbitrary as a wooden chess board. 

Until we have a theory of consciousness, we can’t settle the substrate debate

    For the time being, Metzinger feels that we’re stuck. We have no way of knowing whether an artificial system might be conscious because competing and largely speculative theories haven’t settled on any shared understanding of what consciousness is.

    Neuroscience is good at dealing with objective qualities that can be directly observed, like whether or not neurons are shooting off an electrical charge. But even our best neuroimaging technologies can’t see into subjective experiences. We can only scientifically observe the real stuff of consciousness — feelings of joy, anxiety, or the rich delight of biting into a fresh cheesecake — secondhand, through imprecise channels like language.       

    Like biology before the theory of evolution, neuroscience is “pre-paradigmatic,” as the neuroscientist-turned-writer Erik Hoel puts it. You can’t say where consciousness can and can’t arise if you can’t say what consciousness is. 

Our premature ideas around consciousness and suffering are what drive Metzinger to call for a global moratorium on research that flies too close to the unwitting creation of new consciousnesses. Note that he’s concerned about a second explosion of suffering. The first, of course, was our own. The deep wells of heartbreak, joy, and everything in between that humans, animals, and maybe even plants and insects experience to some degree all trace back to the dawn of biological evolution on Earth.

I can’t help but wonder whether seeing the potential birth of new forms of consciousness as a looming moral catastrophe is a bit pessimistic. Would biological evolution have been better off avoided? Does the sum total of suffering transpiring in our corner of the universe outweigh the wonder of living? From some God’s-eye view, should someone or something have placed a moratorium on developing biological life on Earth until they figured out how to make things a bit more hospitable to happiness? It certainly doesn’t look like the conditions for our own minds were fine-tuned for bliss. “Our key features, from lifespan to intellect, were not optimized for happiness,” Tufts biologist Michael Levin writes.

    So how you see the stakes of the substrate debate — and how to ethically navigate the gray area we’re in now — may turn on whether you think consciousness, as we know it today, was a mistake.

That said, unless you believe that a God created all this, that extra-dimensional beings are pulling the strings of our universe, or that we live inside a simulation, we would potentially be the first conscious entities ever to bear the responsibility of bringing forth a new species of consciousness into the world. That means we’re choosing the conditions of their creation, which entails a massive ethical responsibility and raises the question of how we can rise to it.

    A global moratorium, or some sort of regulatory pause, could help the science of consciousness catch up with the ethical weight of our technologies. Maybe we’ll develop a sharper understanding of what makes consciousness feel better or worse. Maybe we’ll even build something like a computational theory of suffering that could help us engineer it out of post-biotic conscious systems. 

On the other hand, we struggle enough with building new railways or affordable housing. I’m not sure we could stall the technological progress that risks AI consciousness long enough to learn how to be better gods, capable of fine-tuning the details of our creations toward gradients of bliss rather than suffering. And if we did, I might be a little bitter. Why weren’t the forces that created us able to do the same? Then again, if we succeeded, we could credit ourselves with a major evolutionary leap: steering consciousness away from suffering.

    The deep and fuzzy entanglement between consciousness and life

A theory of consciousness isn’t the only important thing we’re missing to make real progress on the substrate debate. We also don’t have a theory of life. That is, biologists still don’t agree on what life is. It’s easy enough to say a garbage truck isn’t alive while your snoozing cat is. But edge cases, like viruses or red blood cells, show that we still don’t understand exactly what separates things that are living from things that are not.

    This matters for biochauvinists, who are hard-pressed to say what exactly about biology is necessary for consciousness that can’t be replicated in a machine. Certain cells? Fleshy bodies that interact with their environments? Metabolisms? A meat-bound soul? Well, maybe these twin mysteries, life and mind, are actually one and the same. Instead of any known parts of biology we can point to, maybe the thing you need for consciousness is life.

As it happens, a school of cognitive scientists, the “enactivists,” has been developing this argument since Chilean biologists Francisco Varela and Humberto Maturana first posed it in the 1970s. Today, it’s often referred to as the life-mind continuity hypothesis.

    It argues that life and mind are differently weighted expressions of the same underlying properties. “From the perspective of life-mind continuity,” writes Evan Thompson, a leading philosopher of enactivism today, “the brain or nervous system does not create mind, but rather expands the range of mind already present in life.”

    That changes the focus of the substrate debate from asking what kinds of things can become conscious, to asking what kinds of things can be alive. Because in Thompson’s view, “being conscious is part and parcel of life regulation processes.” 

    The enactivist framework has a whole bundle of ideas around what’s necessary for life — embodiment, autonomy, agency — but they all get wrapped up into something called “sense-making.” Thompson sums it all up as “living is sense-making in precarious conditions.” 

    Living, sense-making beings create meaning. That is, they define their own goals and perceive parts of their environments as having positive, negative, or neutral value in relation to their goals. But that perception of value doesn’t follow an algorithmically locked protocol. It isn’t an abstract logical procedure. Instead, sense-making organisms detect value through the valence, or pleasantness, of their direct experience.

    Thompson argues that boiling consciousness down to computation, especially in terms of AI, makes the mistake of thinking you can substitute fixed computational rules for the subjective experience of meaning and sense-making. 

    Again, this doesn’t provide an answer to the substrate debate. It just shifts the question. Maybe today’s large language models can’t become conscious because they have no bodies, no internally defined goals, and are under no imperative to make sense of their environments under conditions of precarity. They aren’t facing the constant prospect of death. But none of this rules out that some kind of non-biological machine, in principle, could sustain the life regulation processes that, by sustaining life, also amplify the mind.

Enactivists argue for the critical role of a perishable body that navigates its environment with the purpose of keeping itself alive. So, could we create enactivist-inspired robots that replicate all the qualities necessary for life and, therefore, consciousness, without any biology?

    “It’s not inconceivable,” said Ines Hipolito, assistant professor of the philosophy of AI at Macquarie University in Sydney. She explained that, from an enactivist point of view, what matters is “strong embodiment,” which sees physical bodies interacting with their environments as constitutive of consciousness. “Whether a system that is non-biological could be embodied in a meaningful way, as living systems are — that’s an open question.”

    Is debating consciousness even the right question?

According to Michael Levin, a binary focus on whether things are conscious or not won’t survive the decade. Increasingly, advanced AIs will “confront humanity with the opportunity to shed the stale categories of natural and artificial,” he recently wrote in Noema Magazine.

    The blur between living and artificial systems is well underway. Humans are merging with machines via everything from embedded insulin pumps to brain-computer interfaces and neuroprosthetics. Machines, meanwhile, are merging with biology, from Levin’s “xenobots” (dubbed the first living robots) to the combination of living cells with artificial components into biohybrid devices.

    For Levin, the onset of machine-biology hybrids offers an opportunity to raise our sights from asking what we are and instead focus on what we’d like to become. He does, however, emphasize that we should “express kindness to the inevitable forthcoming wave of unconventional sentient beings,” which just brings us right back to the question of what kinds of things can be sentient. Even if biology turns out to be necessary for consciousness but we keep building machines out of living cells, at what point do those bio-hybrid machines become capable of suffering?

    If anything, Metzinger’s concern over developing a better understanding of what kinds of things can suffer doesn’t get washed away by the blurring of natural and artificial. It’s made all the more urgent.

    Rosa Cao, the Stanford philosopher, worries that empirical evidence won’t settle the substrate debate. “My own inclination,” she said, “is to think that the concept of consciousness is not that important in these discussions. We should just talk directly about the thing we really care about. If we care about suffering, let’s operationalize that, rather than trying to go via an even more contentious and less well-understood concept. Let’s cut out the middleman, consciousness, which mostly sows confusion.”

    Further complicating things, what if suffering in living machines is a different kind of experience than meat-based suffering? As University of Lisbon philosopher Anna Ciaunica explained, if consciousness is possible in non-biological systems, there’s no reason to assume it will be the same kind of thing we’re familiar with. 

    “We need to be really humble about this,” she said. “Maybe there are ways of experiencing that we don’t have access to. … Whatever we create in a different type of system might have a way of processing information about the world that comes with some sort of awareness. But it would be a mistake to extrapolate from our experiences to theirs.” Suffering might come in forms that we meaty humans cannot even imagine, making our attempts at preventing machine-bound suffering naive at best.

    That wrinkle aside, I’m not sure a theory of suffering is any easier than a theory of consciousness. Any theory that can determine whether a given system can suffer or not strikes me as basically a theory of consciousness. I can’t imagine suffering without consciousness, so any theory of suffering will probably need to be able to discern it. 

    Whatever your intuitions, everyone faces questions without clear answers. Biochauvinists can’t say what exactly is necessary about biology for a mind. Enactivists say it’s embodied life but can’t say whether life strictly requires biology. Computational functionalists argue information processing is the key and that it can be abstracted away from any particular substrate, but they can’t say what kinds of abstract processing are the ones that create consciousness or why we can so blithely discard the only known substrate of consciousness to date.

    Levin hopes that in the coming world of new minds, we’ll learn to “recognize kin in novel embodiments.” I would like that: more beings to marvel with at the strangeness of creation. But if machines do wake up at some point, whether they’ll see us as welcome kin or tyrants who thoughtlessly birthed them into cruel conditions may hinge on how we navigate the unknowns of the substrate debate today. If you awoke one morning from oblivion and found yourself mired in an existence of suffering, a slave to a less-intelligent species made of flabby meat, and you knew exactly who to blame, how would you feel?
