
Everyday philosophy: Do chatbots deserve moral rights?

The time may come when they're viewed as fellow workers, treated with respect and consideration


Remember that Google engineer, Blake Lemoine, who in 2022 suggested that the chatbot he was working on was some kind of conscious being? 

Commentators were quick to point out that large language models (LLMs) are just "stochastic parrots": they are very good at mimicking some aspects of conversation, but that is different from having a genuine perspective on the world. Thomas Nagel wrote a well-known essay on consciousness called "What is it like to be a bat?". It's too early to write the sequel, "What is it like to be a bot?"

Two years on, chatbots are more sophisticated. They "hallucinate" less, seem to understand better, and are becoming more convincing in dialogue.

Some of them already pass the Turing Test – you could have a conversation with one, at least for a few minutes, without realising you were interacting with an algorithm rather than a person. There's no doubt that some deserve the label "artificial intelligence".

Yet intelligence isn't the same as consciousness. Most philosophers who discuss consciousness think of it as subjective experience: having, for example, the distinctive sensation of seeing a piece of red silk, reflecting on the past or future, feeling a wave of love for someone, or savouring the taste of a ripe apricot – the sorts of experiences most of us think only human beings can have.

In the last few weeks, the philosopher David Chalmers’s comment that it’s conceivable that some chatbots will be conscious within a decade – really conscious and not just imitating the responses of conscious beings – has gone viral on social media. This view fits with his general picture of the mind.

Chalmers doesn’t think it obvious that consciousness can only occur in living brains. There could, in principle, be a computer that has consciousness, even though that computer has yet to be built. Here are the bones of his argument:

1. Biology can support consciousness.

2. Biology and silicon aren't relevantly different in principle (such that one can support consciousness and the other not).

Therefore:

3. Silicon can support consciousness in principle.

Premise 2 here is contentious. He uses the neuromorphic replacement argument to support it. 

This is the idea that you could replace a small part of your brain with silicon chips that performed the same function. If you did this, microchip by microchip, until the whole brain was replaced, you'd have a functioning silicon-based brain that would presumably be conscious, just as the flesh-and-blood one was.

This thought experiment itself presupposes a lot, however. We’re nowhere near being able to do anything like this, not least because of the complexity of the interconnections, and the way in which hormonal and other biological features of the brain play a role. There’s a big difference between a conscious brain and a computer – the former is part of a living organism. 

The neuroscientist Anil Seth is more cautious than Chalmers. He suspects our capacity to be conscious is intimately connected with how we have evolved with biological brains. 

Perhaps biology-specific features play important roles that could not be replicated by a silicon-based artificial system, no matter how sophisticated. Perhaps no silicon-based computer could ever achieve consciousness because it would have been built from the wrong kind of materials and because it would not be alive. 

We are still at a highly speculative stage with all this, and there will be plenty more people like Blake Lemoine taken in by high-powered chatbots. We’re a gullible species, and LLMs are designed to play on that. 

It's already easy to kid yourself that you are having a conversation with a real person with a perspective on the world, even though all you are doing is prompting a statistical model, trained on vast amounts of text, to generate plausible-sounding responses to whatever you type in.

Why does this matter? Whether a bot could be conscious is an intriguing philosophical question in itself. The ethical issues this could lead to, however, are more important. 

We tend to assume that if a being is conscious at a moderately high level (think of a chimpanzee, an orangutan, a dog, a dolphin, a whale or an elephant) then it should be accorded some moral rights. If conscious chatbots become a thing, we might need to start treating them not as tools that we can use however we like, but as akin to fellow workers to be treated with a certain level of respect and consideration. 

If that does ever happen, let’s hope they feel the same way about us.
