Plans must be made for the welfare of sentient AI, animal consciousness researchers argue

Computer scientists need to grapple with the possibility that they will accidentally create sentient artificial intelligence (AI) — and to plan for those systems’ welfare, a new study argues.

The report published on Thursday comes from an unusual quarter: specialists in the frontier field of animal consciousness, several of whom were signatories of the New York Declaration on Animal Consciousness.

As The Hill reported in April, that declaration argued that it was “irresponsible” for scientists and the public to ignore the growing evidence of widespread sentience across the animal kingdom.

The AI welfare report builds on a moral and intellectual framework similar to that of the April animal consciousness declaration: the idea that humans tend to perceive sentience only in their own image, creating risks both for the beings they live among — or create — and for themselves.

Data suggesting sentience in birds and mammals — and even crabs and shrimp — far outweighs any evidence for self-awareness in the cutting-edge machine tools humans are developing, acknowledged Jeff Sebo, a bioethicist at New York University who co-wrote both the AI welfare report and the animal consciousness declaration.

But while the probability of creating self-aware artificial life over the next decade might be “objectively low,” it’s high enough that developers need to at least give it thought, Sebo said.

While it is generally assumed that consciousness in humans — or, say, octopuses — arose by accident, humans are actively tinkering with AI in a way deliberately intended to mimic the very characteristics associated with consciousness.

Those include “perception, attention, learning, memory, self-awareness” — abilities that may have gone hand-in-hand with the evolution of consciousness in organic life.

Consciousness research is the site of fierce debate over what the preconditions of consciousness really are: whether it requires squishy cells built from chains of carbon molecules, or a physical body.

But Sebo said there is little we currently understand about consciousness that forecloses the possibility that AI developers could create conscious systems accidentally, in the process of trying to do something else — or intentionally, because “they see conscious AI as safer or more capable AI.”

In some cases, the work of developing these systems is a literal attempt to mimic the structures of likely-sentient organic life. In findings published in Nature in June, researchers at Harvard and Google DeepMind created a virtual rat with a simulated brain that was able to emulate flesh-and-blood rodents’ “exquisite control of their bodies.”

There is no particular reason to believe that the digital rat — for all the insight it provided into how vertebrate brains function — was self-aware, though DeepMind itself has a job posting for a computer science PhD able to research “cutting-edge social questions around machine cognition [and] consciousness.” 

And sentience, as both animal researchers and parents of infants understand, is something entirely separate from intelligence.

But in a sense, this is the problem Sebo and his coauthors are raising in a nutshell. They contend that developers — and the public at large — have evolutionary blind spots that have set them up poorly to deal with the age of possibly-intelligent AI.

“We’re not really designed by evolution and lifetime learning to be perceiving or tracking the underlying mechanisms,” said Rob Long, a coauthor of Thursday’s paper and executive director at Eleos AI, a research group that investigates AI consciousness.

Over billions of years, Long said, our lineage evolved “to judge the presence or absence of a mind based on a relatively shallow set of rough and ready heuristics about how something looks and moves and behaves — and that did a good job of helping us not get eaten.”

But he said that brain architecture makes it easy to misattribute sentience where it doesn’t belong. Ironically, Sebo and Long noted, that makes it easiest to attribute sentience to the machines least likely to have it: chatbots.

Sebo and Long argued this paradox is almost hard-wired into chatbots, which increasingly imitate a defining characteristic of human beings: the ability to speak fluently in language. Companies like OpenAI have bolstered that ability with new models that laugh, use sarcasm and insert “ums” and other vocal tics.

Over the coming decades, “there will be increasingly sophisticated and large-scale deployments of AI systems framed as companions and assistants in a situation where we have very significant disagreement and uncertainty about whether they really have thoughts and feelings,” Sebo said.

That means humans have to “cultivate a kind of ambivalence” towards those systems, he said: an “uncertainty about whether it feels like anything to be them and whether any feelings we might have about them are reciprocated.” 

There is another side to that ambivalence, Sebo said: the possibility that humans could deliberately or accidentally create systems that feel pain, can suffer or have some form of moral agency — the ability to want things and try to make them happen. Those qualities, he argued, sit poorly alongside the tasks computer scientists want those systems to perform.

In the case of animals, the consequences of under-ascribing sentience are clear, Sebo noted. “With farm animals and lab animals, we now kill hundreds of billions of captive farmed animals a year for food, and trillions of wild-living animals per year — not entirely but in part because we underestimated their capacity for consciousness and moral significance.”

That example, he said, should serve as a warning — as humans try to “improve the situation with animals” — of what mistakes to avoid repeating with AI.

Sebo and Long added that another major problem for humans trying to navigate this new terrain, aside from a species-wide tendency to see sentience in — but only in — that which looks like us, is a pop-culture landscape that wildly mischaracterizes what actually sentient AI might look like.

In movies like Pixar’s WALL-E and Steven Spielberg’s A.I., sentient robots are disarmingly human-like, at least in some key ways: they are single, discrete intelligences with recognizably human emotions who live inside a body and move through a physical world.

Then there is Skynet, the machine intelligence from the Terminator series, which serves as a magnet for AI safety conversations and thereby constantly draws popular discourse around emerging computer technologies back toward the narrative conventions of a 1980s action movie.

None of this, Sebo argued, is particularly helpful. “With AI welfare, truth could be stranger than fiction, and we should be prepared for that possibility,” he said.

For one thing, digital minds might not be separate from each other in the way that human and animal minds are, Sebo said. “They could end up being highly connected with each other in ways that ours are not. They could have neurons spread across different locations and be really intimately connected to each other.” 

That form of consciousness is potentially more akin to that of an octopus, which has a central brain in its head and eight smaller, semi-independent brains in its arms.

AI, Sebo said, could bring “an explosion of possibilities in that direction, with highly interconnected minds — and questions that arise about the nature of self and identity and individuality and where one individual ends and where the next individual begins.”

No matter what form potential AI consciousness may ultimately take — and whether it is possible at all — Sebo, Long and their coauthors argued that it is incumbent on AI developers to begin acknowledging these potential problems, assessing how they fit into the tools they are building and preparing for a possible future in which those tools are some flavor of sentient.

One possible idea of what this could look like has been offered by the University of California, Riverside philosopher Eric Schwitzgebel, who has argued for a policy of “emotional alignment,” in which the degree of sentience an AI program presents should be directly related to how sentient it is likely to be.

If humans someday design sentient AIs, Schwitzgebel has written, “we should design them so that ordinary users will emotionally react to them in a way that is appropriate to their moral status. Don’t design a human-grade AI capable of real pain and suffering, with human-like goals, rationality, and thoughts of the future, and put it in a bland box that people would be inclined to casually reformat.”

And, by contrast, “if the AI warrants an intermediate level of concern — similar, say, to a pet cat — then give it an interface that encourages users to give it that amount of concern and no more.”

That is a policy, Sebo acknowledged, that would force the chatbot and large language model industry into a dramatic U-turn. 

Overall, he said, he and the new article’s other coauthors wrote it to force conversation on an issue that must be confronted before it becomes a problem. “And we think that it would be good for people building these extremely capable, complex systems to acknowledge that this is an important and difficult issue that they should be paying attention to.”


