What do we want from our policymakers? Is it better for them to see a glass half full or a glass half empty? Should they be legislating with hope in their hearts, or with a deep fear that everything that can get worse will get worse?
This has been the question hiding in plain sight in the news this week. Headlines have mostly come from Trump’s inauguration, and from Keir Starmer’s press conference about the Southport murders. Still, something else has quietly been happening.
In one corner, lobby journalists reported on Parlex, an AI tool developed for civil servants and ministers, and inspired by Yes Minister’s Sir Humphrey. In time, it should be able to predict how the House of Commons feels about certain policies, before they have even been brought into the chamber. This would allow both Whitehall and Downing Street to plan for potential rebellions, or work on their messaging.
Though not wholly useless, it doesn’t quite feel revolutionary. Whips’ offices exist for a reason, and so do special advisers. There are already ways in which parliamentarians can be studied and their behaviour predicted. A good whip will not only be able to tell you that one of their MPs is likely to rebel on a certain bill; they should also be capable of telling you why they are planning to defy the party, and whether they can be convinced not to.
Politicians aren’t bots who merely walk into voting lobbies and then return to a dormant state. It isn’t clear that finding out how they may vote in a purely binary way, based on their previous voting history, is worth all that effort, especially given what is happening in the other corner. Just ask Cheryl Bennett.
Over in Wednesbury in the West Midlands, 100 miles away from SW1, the teacher had to go into hiding and stop doing her job, all for something she didn’t do. During last year’s local elections, Bennett went leafleting for the Labour party, and was caught on a household’s security camera.
Someone – there is no way of knowing who – doctored the footage to make it look like she used racist language against the homeowner, including a slur against Pakistani people. The fake video was posted on social media, and eventually went viral.
As a result, Bennett’s school received hundreds of complaints from parents and others, and fears for her physical safety meant she had to send other people to do her food shopping. At times, she told the press, she even considered ending her life.
That these two stories came out within 24 hours of each other should have drawn more attention than it has. On the one hand, the Westminster bubble is optimistically trying to use new technology to tinker around the edges, and make life ever so slightly more straightforward for the executive. On the other, regular people are having their lives ruined by malevolent actors we can’t even trace.
Though politicians are enthusiastically trying to embrace the future by welcoming AI, the way they currently engage with tech feels entirely divorced from the real world, and the real worries within it. Of course, no one is arguing that AI should be ignored entirely, but developing pleasant little tools to make whips’ lives smoother feels like fiddling while Rome burns.
Deepfakes are becoming more realistic by the day and, as any feminist campaigner will tell you, the people who will suffer most from them will be women. Deepfaked revenge porn is already mainstream, and soon it will be impossible to tell the difference between those videos and real ones.
As Cheryl Bennett’s case shows, people will happily be fooled even by unconvincing fakes, so what will happen once the software gets even better? AI, like any other emerging technology, can be used as both a tool and a weapon. Governments, here and elsewhere, cannot focus solely on the former just because it suits their agenda.
Legislating for worst-case scenarios isn’t pleasant, and it won’t boost the economy, but it is something that must be done now, before it’s too late.