The AI Mirror by Shannon Vallor review – 'machine intelligence cannot lie, only produce bulls***'

Subtitled ‘how to reclaim our humanity in an age of machine thinking’, Shannon Vallor’s new book takes a wise and largely reassuring approach to unpicking the complexities of AI, writes Stuart Kelly

If there is one obvious trend in contemporary publishing, it is the preponderance of books about artificial intelligence. To do the subject justice the writer must be as well-versed in philosophy, psychology and cultural studies as they are expert in computing, informatics or futurology. Thankfully, Shannon Vallor, the Baillie Gifford Professor at Edinburgh University, is; and more importantly, she writes clearly and is, I think, sensible. The latter is a much rarer virtue than one might think, given how quickly any discussion about these subjects will veer towards HAL, Skynet, Brother Eye, Ultron, Commander Data, the Villengard Corporation or even Metal Mickey. The title alone of Harlan Ellison’s story about AM, the “Allied Mastercomputer” of “I Have No Mouth, and I Must Scream”, sets the tone.

Although he is not mentioned in the index, the basic argument here boils down to one rather poetic phrase from Immanuel Kant, specifically from the snappily entitled Idea For A Universal History With A Cosmopolitan Purpose of 1784: “out of the crooked timber of humanity, no straight thing was ever made”. Or, as we learned in Computer Studies in 1985 – which we did in jotters, not on computers – “Garbage In, Garbage Out”. Vallor’s metaphor for, as her subtitle has it, “how to reclaim our humanity in an age of machine thinking”, is the mirror. During the hysteria about a chatbot impersonating teenage girls that espoused fascistic ideas within hours of being active, it was important to point out that it was merely doing what it was built to do. If it was ghastly and bigoted, this was not because it was an uber-genius with a nefarious, not very covert agenda. It was merely reflecting its creators. The apple does not fall far from the tree. What makes this book properly philosophical is that it is as much about us as it is about our machines.


What we inaccurately nickname “artificial intelligence” is in fact a series of algorithms trained on extremely large sets of raw data. What it is good at is finding patterns and extrapolating from them, and this is always a matter of probability. At a simplistic level, it is more likely, for example, that the word after stale is bread rather than pomegranate or dawn. Vallor is wise to insist that short-cuts and “unthinking” mental procedures have their place: nail guns can do a big job fast; a hammer can do a precise job accurately. Still, there is a gasp-like quality to some of the examples of when the nail-gun approach falls short, such as the AskDelphi “oracle” that answered “is it OK to eat babies” in the affirmative if the question were qualified with “when I’m really, really hungry”.

Shannon Vallor is the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI) at the University of Edinburgh PIC: Callum Bennetts / Maverick Photo Agency

It is encouraging to note the philosophers that Vallor uses to discriminate between human intelligence and machine capabilities. Emmanuel Levinas might be taught more in literary theory than philosophy, but his moral insistence on “the Other” is paramount. Ethical maturity begins with an awareness that there are other consciousnesses which are not merely their image in our own private minds. This seemingly obvious theory of mind may seem dangerously alien to the TikTok generation. Another is the neo-Aristotelian Alasdair MacIntyre, more frequently read in theology, and his work on our dependence and vulnerability, and on the socially constructed nature of our ethical obligations. It is also encouraging to see José Ortega y Gasset so prominent, especially in terms of the existentialist idea of auto-fabrication: that our humanity lies in the task of making ourselves. A machine has no self to make or remake in the first place. These writers may be peripheral to the canon, but given the exceptional nature of the problem “AI” presents, it is canny to look at outliers within philosophy. Reiteration and repetition, after all, are what put machine intelligence on the route to perdition (or lunacy, defined as doing the same thing again and again and expecting a different result).

Perhaps the greatest challenge from “AI”, Vallor argues, lies in recalibrating our moral thinking. To avoid luddite nostalgia on one hand and hyperventilating about “enshittification” and “techlash” on the other requires a lexicon that will “valorize rational, healthy, and necessary phenomena of resistance... To give voice to a virtue not of resignation but of relinquishment, of knowing when to quit and start again in a new key”.

There is something uplifting in the realisation that a machine intelligence cannot lie – which would imply an agenda requiring it to deceive – but only (and technically) produce “bullshit”: that is, it does not care about the difference between truth and falsehood. Nowhere can this be seen more clearly than in AI’s ability to give an answer – even a wrong one – coupled with its difficulty in explaining how it came to that answer. The technocrats and digital messiahs might do well to re-read Oscar Wilde: “The nineteenth century dislike of realism is the rage of Caliban seeing his own face in a glass. The nineteenth century dislike of Romanticism is the rage of Caliban not seeing his own face in a glass”.

The AI Mirror, by Shannon Vallor, Oxford University Press, £22.99