Six months ago this week, the world changed. OpenAI unleashed its ChatGPT bot on the public, giving everyone the opportunity to interact with the latest developments in artificial intelligence. Microsoft and now Google have followed with their own versions of generative artificial intelligence.
Many of us will have experience of chatbots, most likely via the yawning chasm of ennui into which we tumble when trying to establish which far-flung part of these islands our delivery has ended up in, so the opportunity to endure this dismantling of the soul at an enhanced level seemed initially unappealing.
ChatGPT is something entirely different, however, even if it still won’t locate your parcel. Instead of keeping the user in an endlessly recurring loop of three unhelpful answers, ChatGPT responds to questions and tasks immediately in plausible prose that reads as if it could have been written by an actual human. Type a query or request into the little box at the bottom of the screen and after a short pause, as if the program has been mulling things over, a bespoke response in properly formed sentences tumbles down the page.
As well as presenting writers with a new opportunity to lie awake in the dead of night chewing the edge of the duvet and fretting about their livelihood, the advent of sophisticated artificial intelligence has left the publishing industry having to decide whether it represents an opportunity or a problem. For an industry traditionally happy to move at a pace that leaves it vulnerable to being struck from behind by a glacier, this could be a significant wake-up call, hence AI was easily the hottest topic of discussion at last month’s London Book Fair.
Some writers are already trying to incorporate ChatGPT into their work. It helps them formulate and proofread first drafts, they say, or put together book blurbs and biographies for book jackets. Others are going even further: there are now more than 200 titles for sale on Amazon crediting ChatGPT as co-author.
Unsurprisingly, most writers are reluctant to embrace the new technology, solemnly predicting the death of their craft and bookshops filled with bot-produced titles before the year is out.
Such doom-mongering is understandable. It’s hard enough making a living from writing as it is without some fancy new software nudging you closer towards a future wearing a barrel held up by braces. But is the threat of AI literature really enough to send the human being to the literary glue factory?
When I heard about ChatGPT I reacted as I usually do to significant breakthroughs in technology: wheelbarrowing more tins of food to my lead-lined underground bunker while shouting to the neighbours that I’d warned them we should have called it a day at the abacus. Yet for once I don’t share the pessimism about the imminent authorpocalypse.
So far my experience of ChatGPT has been fun, asking it to do daft things like turn Dylan Thomas’s Do Not Go Gentle Into That Good Night into dub poetry (“Dem get older and dem just give up the fight, Me seh, me nah go gentle into the good night”) or Beowulf as reimagined by Jane Austen (“It is a truth universally acknowledged, that a single monster in possession of a good fortune, must be in want of a hero”). I asked it to write a poem about Brexit in the style of Sir John Betjeman and it certainly delivered a poem about Brexit in the style of John Betjeman (“Oh Britain, how you’ve changed since Brexit’s vote, Our ties with Europe now forever cut, The future now uncertain, fear and doubt, A loss of unity, a nation’s rut”).
Yet while these experiments killed a bit of time in agreeable fashion, the results all seemed to lack something, a twinkle in the eye, a bit of pizzazz, a genuine sparkle of creativity. The reason for that is obvious: artificial intelligence is, by definition, artificial. Those responses, delivered swiftly and cleanly, are the result not of quick wit and a natty sense of rhythm but of scanning millions of words of text written by humans to produce a dilution of them all so weak it’s practically literary homeopathy.
In fact, for all the hoopla and hootenanny that has greeted ChatGPT, its output in literary terms is not much further forward than the first examples of computer-generated literature produced more than half a century ago.
In 1967 the American artist Alison Knowles fed scraps of information into a computer at Bell Laboratories with instructions to turn them into four-line stanzas, each starting with the phrase “A house of…”. The poem, A House of Dust, which ran to a quarter of a mile of printout before it repeated itself, began:
A House of Dust
In Michigan
Using Natural Light
Inhabited by Vegetarians
You could possibly imagine something like that being recited in a Greenwich Village coffee bar by a first-time beat poet who’d just read On The Road, but it doesn’t exactly brim with nuance and subtext.
More than 50 years later and with the entire history of poetry now at its disposal, what could ChatGPT conjure from the same instruction?
A house of memories, filled with love and laughter,
Where family gathered and stories were told after.
Each room held a piece of our collective past,
A home that will forever in our hearts last.
Well, it rhymes, we can give it that, even if it did have to perform some ungainly last-line contortions to pull it off. There’s no overlooking the fact that it’s absolutely rotten. Dreadful. You wouldn’t print that in a cheap birthday card. The 1967 effort might be just a jumble of phrases spat out of a glorified adding machine, but it reads far better than its 21st-century counterpart.
If ChatGPT’s poetry had been produced from a standing start, criticising it would feel like standing at Kitty Hawk in 1903 blowing raspberries at the Wright Brothers for only getting a little bit off the ground. But A House of Dust is 56 years old, and it’s no worse than its modern counterpart.
Which brings me to the question that feels like the pachyderm in the parlour here – why on earth do we need artificially created literature? Don’t humans already have that covered?
From the self-playing piano to the self-driving car, we’ve always had a bizarre compulsion to create replicas of ourselves. Are we seeking to create the perfection that with all our flaws, foibles and raising of Ed Sheeran to the pinnacle of popular culture we know we can never attain? Or are we just naturally lazy and want machines to do everything for us? Either way, tasking our biggest brains with making ourselves obsolete seems an odd ambition for a species.
Authors have been warning of the consequences of this for centuries. Mary Shelley’s Frankenstein, for example, begun in 1816, is as eloquent a cautionary tale about the dangers of bringing inanimate things to life as there has ever been.
In Gulliver’s Travels, published in 1726, Jonathan Swift even seems to predict the coming of ChatGPT itself. When Gulliver visits Balnibarbi, whose grand academy is devoted entirely to spurious scientific research, he finds scientists beavering away in laboratories trying to develop a method of extracting sunbeams from cucumbers and softening marble to use as a pillow.
One professor there is working on a machine that jumbles words and phrases to create blocks of text aiming to, in the words of its inventor, “give the world a complete body of all arts and sciences”. The professor assures Gulliver that “he had emptied the whole vocabulary into his frame, and made the strictest computation of the general proportion there is in books between the numbers of particles, nouns, and verbs, and other parts of speech”.
Written almost 300 years ago as a satire on pointless endeavour, the machine sounds very much like ChatGPT, which was trained by hoovering up vast quantities of human-written text from the internet in a manner that many writers feel to be at least plagiarism-adjacent. ChatGPT makes clever-seeming things, but it does so by parroting the work of others.
The UK Society of Authors, in effect the trade union for people who write books, is calling for AI developers to be transparent about the sources of the data being used to improve their software. It also demands that authors give their consent, in line with existing copyright legislation, before their work is fed into such programs, part of a general call to regulate the chimpanzees-on-Capri-Sun frenzy of the current AI industry.
Ultimately, however, will ChatGPT ever be able to create a work of literature unbidden, without daft prompts from a procrastinating, biscuit-eating human? There was only one way to find out: I asked ChatGPT whether human novelists will be usurped by artificial intelligence.
“It is unlikely that novelists will be entirely replaced by artificial intelligence, as the art of storytelling is a deeply human endeavour that requires a complex combination of creativity, empathy, and imagination,” it smarmed. “While AI can certainly assist writers by generating ideas, suggesting plot points, or even writing sentences or paragraphs based on given prompts, it cannot replicate the human experience or fully understand the nuances of emotion and language. So, while AI may continue to advance and be integrated into the writing process, it is unlikely to fully replace the role of the human novelist in the foreseeable future.”
Well phew, that’s all right then. At that point the program started calling itself “Hal” and me “Dave”, but I’m sure that was just a glitch and absolutely nothing to worry about.