Can An AI Be Sentient?

You’ve probably heard the news about a Google engineer who was recently suspended after asserting that the company’s AI, an advanced project called LaMDA, had become sentient. You can read the details elsewhere. Not being an expert, I would guess that we are nowhere near creating a sentient AI. However, that doesn’t mean we won’t one day be able to build a sentient AI, or at least a convincing chatbot. That got me asking: does it matter?

I’ve written a bit about this before when discussing the idea of digital immortality, and I’d like to explore a similar issue here. Previously, I questioned whether a digital copy of you is actually you, and whether that distinction matters. Specifically, I suggested that it’s irrelevant: a sufficiently advanced computer could probably produce a convincing simulation of you that acts just as you would, so other people would still have the experience of you being alive even though you wouldn’t, and your mark would continue to be made on the world.

The same way of thinking can apply to whether an artificial intelligence is sentient. If you were texting an extremely advanced chatbot, would you be able to tell whether it was human? That is essentially what the Turing Test entails. So the question I have to ask is: does it matter? An advanced chatbot programmed to behave as if it were sentient could potentially convince a human that it was.

So does it matter whether the AI is actually sentient? Either way, an AI could conceivably produce a convincing simulation of sentience, especially if we anthropomorphize it, as we are prone to do without even meaning to.
