Like it or not, digital clones are coming. In fact they’re already here.
Grief tech is taking off. You can already use an app called HereAfter to record your memories. Once you die, this app will respond to your family and friends’ questions about your life – in your voice, convincingly.
That app looks as if it sticks to your script. But more disconcerting is the way AI is being used to create avatars that can go far beyond the messages that the living embed in them.
These respond in character to questions that their subjects were never asked and never expressed a view about.
Enough bereaved people want conversations with the dead for this to be a lucrative business. You no longer need a clairvoyant and a Ouija board to conjure that illusion.
I use the word “seem” advisedly here. AI doesn’t give you a real conversation. That should be obvious. It mimics one. Convincingly, perhaps, but it is just shuffling language.
If the mimicry is good enough, and it’s likely to be, there will be people who believe they are really speaking with the dead, like the many millions of gullible people who have attended seances. This could go very badly – imagine phishing taking place in the voice and character of your dead parents.
The existentialist Jean-Paul Sartre wisely observed that at death we become “prey to the other”. What he meant was that until we die we can subvert other people’s views of our lives and character.
We aren’t just what others think we are. Just because we have acted one way doesn’t mean we will necessarily continue to do so.
We can make radical changes of direction, we can reject others’ views of us with words and deeds, we have a future, and can define ourselves.
As Sartre puts it: “Life decides its own meaning because it is always in suspense; it possesses essentially a power of self-criticism and self-metamorphosis which causes it to define itself as a ‘not yet’”.
“Not yet” because while we are alive there is always something else that we might do or feel. But at death, we move from being what he called a “being-for-itself” (roughly a self-conscious being with a capacity to choose and act) to becoming a “being-in-itself” (an object, a thing, something incapable of thought or choice).
All our actions have been performed, and we are no more than the sum of what we did; whereas in life we are always transcending what we have done, projecting ourselves into what we might or will do.
Sartre thought that once we’re dead other people become our guardians. But not necessarily in a good sense of “guardian”. They give our lives particular meanings, determine what we were, or perhaps just forget us.
They take control and we don’t get a say any more. What our lives come to mean may change, but we are no longer responsible for those meanings.
Sartre goes further. When we think about what will happen after our deaths, he says, we should think of ourselves as future prey. There is no other way we can then be. We’ll only be what others make of us – we won’t be there to resist whatever someone wants to do with what we were.
The real danger of AI clones is that they are likely to be completely convincing. It will be almost impossible to resist the impression that we are having real conversations with these “deadbots”.
Our friends and relatives will still seem to be alive and communicating with us, but in reality we’ll be interacting with algorithmically generated simulacra.
If Sartre was right about how others take control of our lives after we die, and I think he was, digital developments are going to make that situation more complicated and far worse. We’re all at risk of becoming prey to the tech companies and whoever designs the avatars that succeed us.
Film-makers are already resurrecting long-dead celebrities and pushing to feature them in new movies. But it’s not just Marilyn Monroe’s estate that should be worried about posthumous performances.
These new technical possibilities should spur us to think much harder about what we value about real living conscious beings and our interactions with them as opposed to interactions with convincing illusions.
If not, we’re at risk of re-entering Plato’s cave, where people are chained facing a wall of flickering shadows which they take to be real, when in fact they are just images with no substance.