2024 is going to be known as the year when acknowledging that we’re in the midst of a meta-crisis stopped being controversial. Your head has to be an anatomically improbable distance down the rabbit hole of denial to remain blind to the indicators of imminent biophysical collapse, or the threat of rogue AI (or even perfectly tame AI being weaponised in service to the giant vampire squid of the markets).
More urgent – in an election year on both sides of the Atlantic – is the real-time, deliberate destruction of our trust in the democratic process that notionally keeps us civilised.
This destruction is not new. In her seminal work, Democracy in Chains, Nancy MacLean outlines the intent of James McGill Buchanan, the US economist who has been called the architect of the American radical right, and of those he influenced. MacLean says they want to ensure that “…the will of the majority [can] no longer influence representative government on core matters of political economy…”, and that this right-wing revolution necessarily entails “piecemeal, yet mutually reinforcing, assaults on the system…”
As MacLean recounts it, Buchanan’s proposed 50-year plan to overturn democracy from the inside was progressing smoothly – until 2008 saw the election of Barack Obama, and the white supremacists, panicking, accelerated the process. In doing so, they demonstrated quite how fragile western democracy was, quite how easy it was for those who, in Steve Bannon’s words, were “going for the head wound”, to outsmart those who were “having a pillow fight”.
And while it’s easy to identify the pillow-fighters by their penumbra of pointless fluff (Keir Starmer, I’m looking at you), the head-shooters are not always the clowns flouting parliamentary, national and international law. More often, they’re hidden behind multiple floating IP addresses, popping out from under their digital bridges to rev up a thousand troll-bot accounts aimed at keeping the culture wars nicely incendiary.
But there are parts of the world where the head-shooters are facing people who actually care, who actually know what they’re doing and who have actual skills to deflect the assaults. Taiwan is a case in point.
Like the UK and the US, Taiwan held a general election in 2024. Unlike either the UK or the US, its government acknowledged that the People’s Republic of China might want to intervene in the democratic process, and took sane, thoughtful, useful action.
Led by Audrey Tang, the world’s youngest digital minister, they employed pre-emptive tactics to head off the more obvious assaults. As far back as 2022, they began “pre-bunking” the likely deepfake AI videos with counter-videos that demonstrated both the technique – “here’s how you do this on a MacBook or a mobile phone” – and then the results: “Here’s how Audrey Tang might look in a deepfake video”.
Crucially, they also began a process of education, demonstrating how ordinary citizens could verify the authenticity of any video, interrogate its provenance and make their own decisions on what could be trusted. Knowing that any new information takes many repeats before it becomes embedded as the norm, they kept on plugging this message for two full years so that when, in early 2024, deepfake videos did indeed assail their airwaves during January’s election campaign, they didn’t gain much traction: Taiwanese citizens had “built antibodies or inoculations in their minds”.
But verifying provenance requires that there is a trusted, robust verification system in place and this is where Tang’s capacity to bring 21st-century thinking to 21st-century problems shines through.
A hacker brought to fame – and then into government – in the wake of the Sunflower Revolution of 2014, Tang is one of the sharpest digital minds on the planet. She is wholly committed to radical transparency, to creating systems of deliberative and distributed democracy, to using social media to create consensus, rather than racing to the bottom of the collective brainstem in an effort to mine our dopamine addictions for corporate profit.
Taiwan uses the social media app Polis, based on the concept that, “with the right amount of human intelligence and the right amount of artificial intelligence, we can have the crowd moderate each other.” This balance between artificial and human intelligence is crucial.
Backed by numerous studies, it is predicated on the assumption that every individual has a view worth sharing and that, given the chance, most people will aim for consensus.
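The mechanics can be sketched in miniature. In the real platform, participants vote agree/disagree/pass on one another’s statements and opinion groups are derived by clustering the vote matrix; in this toy version (all names and numbers are illustrative, not Polis’s actual algorithm) the groups are simply given, and a statement’s “consensus” is its worst agreement rate across the groups – so only statements that genuinely bridge the divide score well.

```python
# Toy sketch of Polis-style "group-informed consensus" scoring.
# Votes: agree (+1), disagree (-1), pass (0). Group membership is assumed
# here; the real system derives it by clustering the vote matrix.

def agree_rate(votes):
    """Fraction of non-pass votes that are 'agree'."""
    cast = [v for v in votes if v != 0]
    if not cast:
        return 0.0
    return sum(1 for v in cast if v == 1) / len(cast)

def consensus_score(votes_by_group):
    """A statement's score is its *worst* agreement rate across groups,
    so a statement loved by one camp and hated by the other scores low."""
    return min(agree_rate(votes) for votes in votes_by_group)

# Two opinion groups voting on two statements.
divisive = [[1, 1, 1], [-1, -1, 0]]   # group A agrees, group B disagrees
bridging = [[1, 1, 0], [1, -1, 1]]    # both groups lean agree

print(consensus_score(divisive))      # low: fails in group B
print(consensus_score(bridging))      # higher: acceptable to both groups
```

The divisive statement scores zero despite unanimous support in one group; the bridging one surfaces because it clears the bar everywhere, which is the behaviour the quote above describes.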
On the routes to transparency and verification, they have identified an ABC of information manipulation: Actor, Behaviour, Content.
The Actor is the state or entity creating the false data.
Behaviour defines the actions the Actor undertakes. Are they vomiting out one text message or video in high density across the board so that everyone sees it? Or are they micro-targeting specific sub-groups as happened during the Brexit campaign when, for instance, vegans were shown videos of cattle being hell-shipped to Europe with the (spurious, but effective) implication that this was the fault of the EU and that leaving would bring about a material change in the welfare of livestock managed by inhumane industrial practices?
Content considers whether the detail in the message or video appears to be real – and this in particular requires a critical assessment to have been established before the deepfakes hit. It may be that the 2016 abattoir videos were self-evidently fake, but that wasn’t the point. Like showing videos of foetal heartbeats outside abortion clinics, they aimed straight for the limbic jugular where responses take place way ahead of higher cortical actions like establishing veracity. Only in an atmosphere where everyone questions everything can this be countered.
Taiwan is encouraging critical thought at all three layers.
In terms of the Actor, they set up the number 111, from which all governmental communications arise. Non-governmental numbers are invariably 10 digits long, so if you get a 111 message from the national water utility reminding you to pay your bill, or a request to take part in an online poll, you can assume it’s real. If you get a message from another number and you haven’t met the owner face to face to establish personally whether a) they’re an actual human being and b) you can trust them… then don’t.
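The rule is simple enough to express as code. This is an illustrative sketch only – the sample numbers and the idea of a personal allow-list are hypothetical, not Taiwan’s actual messaging specification:

```python
# Sketch of the "111" sender check: the government short code is trusted,
# a 10-digit number is trusted only if you've verified its owner in person,
# and everything else defaults to untrusted.

TRUSTED_CONTACTS = {"0912345678"}  # hypothetical personally-verified numbers

def classify_sender(number: str) -> str:
    if number == "111":
        return "government"        # official short code: assume real
    if len(number) == 10 and number.isdigit() and number in TRUSTED_CONTACTS:
        return "known contact"     # met face to face, trusted
    return "untrusted"             # default stance: don't trust it

print(classify_sender("111"))          # government
print(classify_sender("0912345678"))   # known contact
print(classify_sender("0987654321"))   # untrusted
```

Note the default: anything that isn’t positively verified falls through to “untrusted”, which mirrors the article’s “then don’t”.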
In terms of Behaviour and Content, Taiwan has set up “Cofacts” – collaborative fact-checking – whereby anyone can flag potential spam to their chat group. The effect of this spread across a whole population is to create viral mapping in real time. Whether the text or video in question is real or is fake, the language is analysed and the internal AI begins to parse out phrases that are suspect, analysing the logic of what’s approved and what’s rejected.
They are essentially crowd-sourcing the fact-checking – Tang says, “think Wikipedia, but in real time” – such that their civil society has been able to “train a language model that provides basically zero day responses to zero day viral disinformation”.
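In miniature, that loop might look like this: a shared database of crowd verdicts, consulted whenever a new message is forwarded. The matching here is naive token overlap standing in for Cofacts’ trained language model, and every message and verdict is invented for illustration:

```python
# Toy sketch of crowd-sourced fact-checking: volunteers attach verdicts to
# messages they've checked; a newly forwarded message is matched against
# that shared database, so earlier crowd work answers it immediately.

def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    """Overlap of two token sets (1.0 = identical, 0.0 = disjoint)."""
    return len(a & b) / len(a | b)

# Crowd database: previously flagged messages and their verdicts (invented).
checked = [
    ("this video proves the minister took bribes", "disputed: doctored clip"),
    ("water bills rise 40 percent next month", "false: no such announcement"),
]

def lookup(message, threshold=0.5):
    """Return the crowd's verdict on the most similar checked message."""
    best = max(checked, key=lambda item: jaccard(tokens(message), tokens(item[0])))
    if jaccard(tokens(message), tokens(best[0])) >= threshold:
        return best[1]
    return "unreviewed: consider flagging it"

print(lookup("video proves the minister took bribes"))
```

Because the database is shared, the first person to check a viral message effectively answers it for everyone who receives it afterwards – the “zero day response” Tang describes.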
I leave you to imagine how different our world would be if Twitter/X were to do this. It wouldn’t create politicians with integrity or lessen their tendency to use social media to float the nastiest ideas, but it would definitely give those ideas less traction amongst the commentariat.
On a wider field, the same AI language models that allow real-time dissection of deepfakes are also being used to bring people together in groups of 10, chosen so that each group clearly contains a representative range of views; they are then asked to talk amongst themselves and (crucially) to build bridges between their different viewpoints.
By using software to “gamify” the building of bridges – the sharing of ideas to find common ground – people are encouraged to build consensus, to find the “good enough for now, safe enough to try” alternatives and then to test them out and examine the outcomes together in a series of iterative loops.
This is governance as if people mattered; as if they could be trusted to have good ideas; as if they were, in fact, the best people to understand the needs of their local areas and even – imagine! – at a national level.
The American sociobiologist EO Wilson told us a long time ago that one of the core problems of humanity is that we have Paleolithic emotions, mediaeval institutions and god-like technology. In Taiwan, 21st-century tech is being used to upgrade antiquated institutions into something fit for purpose in the modern world, supported by emotionally literate thinking of the highest grade.
In a world where we teeter on the brink of multi-polar collapse, where we need urgently to bring power to those with wisdom and wisdom to those with power, we urgently need this level of thinking to motivate our local and national governance.
And no, China did not succeed in subverting January’s general election in Taiwan. That nation remains a beacon of intelligent democracy.
Any Human Power by Manda Scott is published by September, price £18.99