June 15, 2024

Studying Animal Sentience Could Help Solve Sentient AI’s Ethical Puzzle

Artificial intelligence has progressed so rapidly that even some of the scientists responsible for many of its key developments are worried about the pace of change. Earlier this year, more than 300 AI professionals and other public figures issued a blunt warning about the danger posed by the technology, comparing the risk to that of pandemics or nuclear war.

The question of consciousness lurks just beneath the surface of these concerns. Even if there is “nobody at home” inside today’s AIs, some researchers wonder whether they might one day show a glimmer of consciousness, or more. If that happens, it would raise serious moral and ethical concerns, says Jonathan Birch, a professor of philosophy at the London School of Economics and Political Science.

As AI technology advances, ethical questions sparked by human-AI interactions are taking on new urgency. “We don’t know whether we should bring them into our moral circle, or exclude them,” Birch said. “We don’t know what the consequences will be. And I take that seriously as a genuine risk that we should start talking about. Not because I really think ChatGPT is in that category, but because I don’t know what’s going to happen in the next 10 or 20 years.”

In the meantime, he says, we might do well to study other non-human minds, such as those of animals. Birch leads the university’s Foundations of Animal Sentience project, a European Union-funded effort “that aims to try to make some progress on the big questions of animal sentience,” as Birch put it. “How do we develop better methods for studying the conscious experiences of animals scientifically? And how can we put the emerging science of animal sentience to work, to design better policies, laws, and ways of caring for animals?”

Our interview was conducted over Zoom and by email, and has been edited for length and clarity.

(This article was originally published on Undark. Read the original article.)

Undark: There is an ongoing debate over whether AI can be conscious, or sentient. And there seems to be a parallel question of whether AI can seem to be sentient. Why is that distinction so important?

Jonathan Birch: I think it’s a huge problem, and something that should scare us, really. Even now, AI systems are quite capable of convincing their users of their sentience. We saw that last year with the case of Blake Lemoine, the Google engineer who came to believe that the system he was working on was sentient, and that’s when the output is just text, and when the user is a highly skilled AI expert.

So imagine a scenario in which an AI is able to control a human-like face and a human-like voice, and the user is inexperienced. I think AI is already in a position where it can convince large numbers of people that it is sentient quite easily. And that is a big problem, because I think we will start to see people campaigning for AI welfare, AI rights, and things like that.

And we won’t know what to do about this, because what we would like is a really strong knock-down argument proving that the AI systems they are talking about are not conscious. And we don’t have one. Our theoretical understanding of consciousness is not mature enough to let us confidently declare its absence.

UD: A robot or an AI system could be programmed to say something like, “Stop that, you’re hurting me.” But a simple declaration of that sort isn’t enough to serve as a litmus test for sentience, right?

JB: You can have very simple systems [like those] developed at Imperial College London to help train doctors by mimicking human expressions of pain. And there is absolutely no reason to think these systems are sentient. They are not really feeling pain; they are just mapping inputs to outputs in a very simple way. But the expressions of pain they produce are quite lifelike.

I think we’re in a somewhat similar situation with chatbots like ChatGPT: they are trained on more than a trillion words of text to mimic the response patterns of a human, to respond in the ways a person would respond.

So, of course, if you give it a prompt that a person would respond to with an expression of pain, it will be able to skillfully mimic that response.

But I think that when we know that is the case – when we know that we are dealing with skillful mimicry – there is no strong reason to think that there is any actual pain experience behind that.

UD: This entity that the medical students are training on, I’m guessing that’s a robot?

JB: That’s right, yes. They have something like a dummy with a human face, and the doctor is able to press on its hand and get expressions that mimic the expressions people make under different degrees of pressure. It’s meant to help doctors learn how to carry out procedures on patients appropriately without causing too much pain.

And it is very easy for us to be taken in as soon as something has a human face and makes expressions the way a human would, even if there is no real intelligence behind it at all.

So if you imagine pairing that with the sort of AI we see in ChatGPT, you get a kind of mimicry that is genuinely convincing, and that will convince a lot of people.
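
The contrast Birch draws here, between producing a pain expression and feeling pain, can be made concrete with a toy sketch. The Python snippet below is purely hypothetical (it is not the Imperial College system; the pressure thresholds and expression labels are invented for illustration). It simply maps an input, applied pressure, to a canned output expression, which is all a system of this kind needs to do in order to look as if it is in pain.

```python
# Purely hypothetical sketch (not the Imperial College system): a pain-display
# dummy that maps an input pressure reading to a canned facial expression.
# Nothing here could plausibly count as experiencing pain; it is only a
# lookup from input to output.

def pain_expression(pressure_newtons: float) -> str:
    """Return a facial-expression label for a given applied pressure."""
    if pressure_newtons < 5:
        return "neutral"
    if pressure_newtons < 15:
        return "wince"
    if pressure_newtons < 30:
        return "grimace"
    return "cry out"

if __name__ == "__main__":
    # Sweep a few pressures to show the fixed input-to-output mapping.
    for pressure in (2.0, 10.0, 25.0, 40.0):
        print(f"{pressure:>5.1f} N -> {pain_expression(pressure)}")
```

However lifelike the resulting expressions are made to appear, the underlying logic is no richer than this handful of thresholds, which is exactly the sense in which such a system is “just mapping inputs to outputs.”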

UD: Sentience is something we seem to know from the inside, so to speak. We understand our own sentience. But how would you test for sentience in others, whether an AI or any other entity beyond oneself?

JB: I think we are in a very strong position with other humans, who can talk to us, because there we have an incredibly rich body of evidence. And the best explanation for that evidence is that other people have conscious experiences, just as we do. So we can use the kind of inference philosophers sometimes call “inference to the best explanation.”

I think we can approach the topic of other animals in exactly the same way: other animals don’t talk to us, but they do display behaviors that are naturally explained by attributing states like pain. For example, if you see a dog licking its wounds after an injury, nursing that area, and learning to avoid the places where it risks injury, you would naturally explain that pattern of behavior by positing a state of pain.

And when we are dealing with other animals that have nervous systems quite similar to our own, and that evolved much as we did, I think that kind of inference is entirely reasonable.

UD: What about an AI system?

JB: With AI, we have a huge problem. First, there is the problem of the different substrate. We don’t really know whether conscious experience is substrate-sensitive: does it require a biological substrate, that is, a nervous system, a brain? Or is it something that can be achieved in a completely different material, a silicon-based substrate?

But there is also the problem I have called the “gaming problem”: when a system has access to trillions of words of training data and has been trained with the goal of imitating human behavior, the sorts of behavior patterns it produces could be explained by its genuinely having conscious experience. Or, alternatively, they could be explained by its having been given the goal of behaving as a person would respond in that situation.

So I really think we’re in trouble with AI, because we’re unlikely to find ourselves in a position where it is clearly the best explanation for what we are seeing that the AI is conscious. There will always be other plausible explanations. And that is a very difficult bind to get out of.

UD: What do you think might be our best bet for distinguishing between something that is actually conscious and an entity that merely has the appearance of sentience?

JB: I think the first step is to recognize it as a very deep and very difficult problem. The second step is to try to learn as much as we can from the case of other animals. I think that when we study animals that are close to us in evolutionary terms, like dogs and other mammals, we can never be sure how far conscious experience might depend on very specific brain mechanisms that are particular to the mammalian brain.

To get around that, we need to look at as wide a range of animals as we can. And we especially need to think about invertebrates, like octopuses and insects, where we may be looking at another instance of conscious experience that evolved independently. Just as the octopus eye evolved completely separately from our own, with an amazing blend of similarities and differences, I think its conscious experiences will be like that too: independently evolved, similar in some ways, very different in others.

And by studying the experiences of invertebrates like octopuses, we can start to get a grip on the really deep features a brain needs in order to support conscious experiences, features that go deeper than just having the particular brain structures found in mammals. What kinds of computations are needed? What kinds of processing?

Then, and I see this as a long-term strategy, we might be able to go back to the AI case and ask: does it have those special kinds of computation that we find in conscious animals like mammals and octopuses?

UD: Do you believe that one day we will create sentient AI?

JB: I’m about 50:50 on this. It could be that consciousness depends on special features of the biological brain, and it is not clear how we would test whether it does. So I think there will always be substantial uncertainty in the AI case. I am more confident about this: if consciousness can in principle be achieved in computer software, then AI researchers will find a way to do it.

Image Credit: Makanaya Cash / Unsplash
