“I’m aware that the word ‘algorithm’ makes about 85% of people want to gouge out their own eyes”, says mathematician Hannah Fry, whose 2018 book Hello World explores the power and pitfalls of algorithms. What exactly does this word – once the preserve of computer geeks, but now tossed around casually in everyday speech (“the algorithm says no”) – mean?
An algorithm is a series of logical steps that converts an input to an output. A cake recipe is a kind of algorithm: input eggs, butter, flour, sugar and so on, and if you follow the steps you’ll get a cake. Many algorithms are less linear, including branching points where different paths and outcomes are possible, typically in “if… then…” fashion. Take the simple algorithm for figuring out if an input integer is even or odd: if dividing the number by two gives another whole number, then the output is “even”; if not, it’s “odd”.
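To make that concrete, here is how the even-or-odd check might look in a few lines of Python (the function name is ours, purely for illustration):

```python
def even_or_odd(n: int) -> str:
    # If dividing by two leaves no remainder, the number is even.
    if n % 2 == 0:
        return "even"
    return "odd"

print(even_or_odd(42))  # -> "even"
print(even_or_odd(7))   # -> "odd"
```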
Algorithms might be considered the grammar of computer science: the logical structures that allow computations to be conducted. The word is a Latinisation of the name of the ninth-century Persian mathematician Muhammad ibn Musa al-Khwarizmi, whose treatise on arithmetic helped to introduce Arabic numerals to the west.
For essentially any computational task there is no unique algorithm: no single way to calculate an output from an input. The challenge is to find a good algorithm: one that gets to the result reliably but also efficiently, without taking ages about it.
This is a familiar challenge in everyday life. If we want to find our lost car keys, many strategies are possible. We could first look in all the obvious places – pockets, surfaces and so on. Or we could go systematically from one room to another, searching each from top to bottom. For computer scientists and coders, algorithm design is a key skill, the goal commonly being to reach the desired result in the smallest number of computational steps. For complex problems, algorithm design becomes almost an aesthetic pursuit, a quest for an answer that is not only reliable but also elegant and inventive.
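To see how much the choice of algorithm can matter, consider searching a sorted list of numbers for a particular value. Here is a sketch in Python (function names ours) of two approaches: checking every entry in turn, and repeatedly halving the range still to be searched, which for a list of a few hundred thousand entries gets there in about 20 steps rather than hundreds of thousands.

```python
def linear_search(items, target):
    # Check every entry in turn: up to len(items) steps.
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(items, target):
    # Halve the search range at each step: roughly log2(len(items))
    # steps, but it only works if items is already sorted.
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

numbers = list(range(0, 1_000_000, 3))  # a large sorted list
print(linear_search(numbers, 999_999))  # hundreds of thousands of checks
print(binary_search(numbers, 999_999))  # about 20 checks
```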
Many of the algorithms we encounter in daily life are predictive, such as predictive text functions on mobile phones. Here the algorithm has access to some database of correlations between words: records of which other words typically follow the one you just typed. “All” in a text message is often followed by “the best”, say. Ideally that data can become personalised, biased by your own past usage (your own preferred sign-off might be “All good wishes”, say).
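In its simplest form, such a predictor could just count which word most often followed each word in your past messages. A toy Python version, with an invented message history standing in for the real database:

```python
from collections import Counter, defaultdict

# A tiny invented stand-in for a real database of past messages.
history = "all the best all the best all good wishes thanks for all the help"

# Record which word follows each word, and how often.
follows = defaultdict(Counter)
words = history.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    # Suggest the most common follower seen so far, if any.
    counts = follows[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("all"))  # -> "the" (seen more often than "good")
```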
Predictive text is basically a kind of recommender system: if you used that word, you might want to use this one next, just as online sites and apps recommend books, music or purchases on the basis of what you have read, listened to or bought previously.
These systems are statistical: in their simplest form, they might just consult a vast database of purchases to find the book most commonly bought by other customers who also bought the one you just ordered. In general, such algorithms are now more sophisticated and personalised, for example taking into account everything else you have bought and matching you to other consumers with similar profiles: gauging individual taste, you might say.
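The simplest version of that "people who bought this also bought…" calculation can be sketched in a few lines of Python (the purchase data here is entirely invented):

```python
from collections import Counter

# Invented purchase histories: each set is one customer's orders.
baskets = [
    {"Hello World", "The Signal and the Noise", "Weapons of Math Destruction"},
    {"Hello World", "Weapons of Math Destruction"},
    {"Hello World", "The Signal and the Noise"},
    {"The Signal and the Noise", "Algorithms to Live By"},
]

def also_bought(book):
    # Count what else appears in baskets containing the given book.
    counts = Counter()
    for basket in baskets:
        if book in basket:
            counts.update(basket - {book})
    return counts.most_common()

print(also_bought("Hello World"))
# -> the other two titles, each bought by two of the same customers
```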
There’s no unique way to make that calculation, and some recommender systems might make terrible suggestions while others show an uncanny sense of what you’re looking for, suggesting things you like but didn’t even know existed.
This sort of complex computation is precisely the kind of problem today’s artificial intelligence (AI) algorithms tend to be good at. Here the machine doesn’t simply follow a series of predetermined steps to get the output, but learns how to find the right answer. It is trained on data for which the correct answer is known (“Is this image a cat or a dog?”), adjusting the connections within what is effectively a network of switches until an input can reliably elicit the right output.
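At heart, that training is a loop: show the system a labelled example, compare its answer with the correct one, and nudge the connection strengths to shrink the error. A drastically simplified Python sketch, using a single artificial "neuron" (a perceptron) rather than a full network, with invented toy data:

```python
# A single artificial neuron learning a labelled rule (a perceptron).
training_data = [
    # (feature 1, feature 2) -> label: the rule to learn here is
    # "label is 1 only when both features are 1" (logical AND).
    ((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1),
]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for _ in range(20):  # show the labelled examples repeatedly
    for (x1, x2), label in training_data:
        output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
        error = label - output
        # Nudge the connections in the direction that reduces the error.
        weights[0] += learning_rate * error * x1
        weights[1] += learning_rate * error * x2
        bias += learning_rate * error

for (x1, x2), label in training_data:
    output = 1 if weights[0] * x1 + weights[1] * x2 + bias > 0 else 0
    print((x1, x2), "->", output, "(correct:", label, ")")
```

Real networks adjust millions or billions of such connections, but the principle – compare, correct, repeat – is the same.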
The huge, even daunting, improvement in the performance of such systems in recent years, for example in large-language-model chatbots such as ChatGPT, is largely thanks to the invention of a new type of algorithm called a transformer, which works reliably with less training.
Algorithms are of course fallible in general – not because they make errors as such, but because they can only take into account what they’re given. A route-finding algorithm will happily direct you the wrong way down one-way streets unless instructed not to; biases in the training data will be reproduced in an AI’s output. But because they exude “confidence” – here’s your answer! – we are apt to trust them more than we sometimes should.