A chess-playing robot in Russia recently grabbed its seven-year-old opponent’s finger, fracturing it. Bystanders prised the victim’s digit from the robot’s grasp, but the damage had been done.
This feels like the start of something momentous: AI acting independently in an almost spiteful way. But it isn’t. The robot lunged when the child took his move too quickly. Most likely this was a glitch, or the result of a disgruntled inventor building in a punitive program that took revenge on players who made their moves without thinking things through.
The president of the Moscow Chess Federation, Sergey Lazarev, commented on the accident: “This is bad”. I agree. Bad for the child. But it wasn’t the first move in some sort of robot rebellion. Nor was it an example of a robot thinking for itself. Far from it.
Some philosophers argue that once a machine is complex enough and responds to its environment much as human beings do, then we should think of it as a conscious being, and perhaps even as a person with rights. Others maintain there is something special about biological organisms made from flesh and blood and that replicating behaviour using silicon chips could at best produce a convincing zombie robot, one that seems conscious but isn’t. It’s too early to say who is right about this. We still don’t fully understand the basis of our own consciousness.
There are, however, widespread fears that AI robots capable of independent thought and action will soon take over. This radical transformation won’t be brought about by chess-playing machines. Anthropomorphism is always tempting, but a chess-playing robot is just a chess-playing robot, not a quasi-human with malicious intent. It’s just too specific in its expertise.
When the robot revolution comes, it will be led by complex artificial intelligence machines that adapt to different environments and contexts and are capable of learning and acting for themselves in a wide range of situations. How far we are from that day is a moot point. But these robots are coming.
More mundane AI devices are already here in factories, cars, banks, operating theatres, and the home, only occasionally injuring their users. But these are sophisticated tools with very limited and task-oriented abilities. They won’t be taking over the world.
For some time Ray Kurzweil has been predicting the Singularity, the moment when developments in computational technology and capability produce autonomous machine-beings superior to ourselves. These robots will be better than humans at designing intelligent machines and will spawn even more intelligent machines, and so on in a spiral of invention until super-duper-intelligent machines emerge.
More recently, in 2020, Elon Musk predicted that superintelligent AI would be among us by 2025. That’s curiously specific. Musk wasn’t fear-mongering about this, though: he suggested it would merely make things unstable and weird for humanity. That’s all right then.
The maverick scientist James Lovelock, who died on his 103rd birthday last week, was even more optimistic about the rise of the robots – he called them cyborgs. He thought they would soon take over and would look upon human beings much as we look upon plants. Despite this, because they’d have an interest in surviving, he argued, they’d be eco-friendly and would quickly solve the climate crisis. If he’s right, we’re on the cusp of very interesting times.
Is this just wild speculation fuelled by a sci-fi-rich diet, or a plausible account of where we’re heading? I’m inclined to be more pessimistic than Lovelock or Musk. If superintelligent robots think of us as we do plants, what’s to stop them deciding to treat us like weeds?
As long ago as 1942, Isaac Asimov proposed his three laws of robotics:
First Law – A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law – A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
Third Law – A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Yet AI is already guiding weapons designed to devastate, demolish, and kill. These devices are programmed to search and destroy.
So far robot weapons have been obedient. But if they become intelligent enough and start replicating and running wild, it’s unlikely that any programmed-in safeguards will avert disaster. With all this going on, one rogue chess-playing robot that occasionally attacks its opponents should be the least of our worries.