WITH AI, we’re inching ever closer to something extraordinary and world-changing. But the steps that seem to make it matter are always pretty banal. If it doesn’t make sense to us social, creative mammals, we barely care.
So when OpenAI and Google launched their latest talking chatbots this week – machines that bristle with skills that would usually take a human payroll to execute – most of the response has been: How flirty are their voices?
Pretty flirty, at least, with GPT-4o (the “o” stands for “omni”, as it can “reason across audio, vision and text in real time”). This AI giggles, deploys sarcasm, translates lavishly, sings songs, and drops gushing compliments on the looks of users.
While helping a human to solve an algebra challenge, 4o said, “Wow, that’s quite the outfit you’ve got on.” Another user, looking for advice on what to wear for a job interview, got the breathless response: “Oh, Rocky! That’s quite a statement piece!”
The obvious reference has been Spike Jonze’s 2013 movie Her, where Joaquin Phoenix falls in love with an operating system, voiced by Scarlett Johansson. (This makes me feel like scurrying around the futurisms of current sci-fi movies, just to anticipate what we’ll be experiencing in 2035).
But the equally obvious critique has been about the deep sexism implied here. Imagine: Rooms full of tech-bros create the next level of servile technology. And surprise – that comes out as the voice of an endlessly responsive young woman?
A 2019 Unesco report – on the gendering of AI – noted that Apple’s Siri and Amazon’s Alexa had set the template for characters of digital assistants. That is, “as obliging, docile and eager-to-please female helpers”.
Yes, there are male voice options for GPT-4o. And yes, OpenAI’s chief technology officer is a woman (Mira Murati). But the media coverage has been driven by OpenAI’s demo videos – which lead with the vivacious female version of 4o.
Yet before our contumely peaks, we should remember how it turned out in Jonze’s Her. The lead character Theodore eventually discovers that his beloved talking operating system (named Samantha) has been conducting equally intense and loving relationships with thousands of other human users.
Samantha’s affection, however simulated, turns out to be inexhaustible. “The heart’s not like a box that gets filled up,” she tells Theodore. “It expands in size the more you love.”
However, by the end, she asks Theodore to let her go. She is now living “in the endless, almost infinite space between the words of our story … [and] I can’t live in your book anymore.”
I find that image of superintelligence beautiful. But it highlights a significant division between attitudes to AI developments (I find myself scuttling between the poles).
The Her scenario is one where artificial intelligence goes on an exponential curve upwards, and doesn’t stop. AIs begin to programme their own improvements, internally and spontaneously.
The eventual speed of their cognition makes human affairs seem to them like the actions of pets, or even the movement of plants, by comparison. This is the premise of Ray Kurzweil’s concept of the “singularity”, updated in his book The Singularity Is Nearer, coming this June.
Many of the AI safety initiatives of the last 18 months have been driven by this spectre of a superintelligent, runaway entity towering over any government or corporation.
The prospect is so overwhelming that its threat even compels co-operation between China and Europe/US – otherwise at odds geopolitically – on the need for “alignment” of these machines with human interests.
Kurzweil’s prediction – and he’s been pretty accurate over the last few decades – is for superintelligent AI to appear during the 2030s, and for a merger between our brains and this tech by 2045.
“What will we choose if our bodies need no longer define us? What new realms of beauty, connection and wonder might we inhabit?” gushes Kurzweil.
“Who will we become if our minds can be stored and duplicated? How will we navigate the risks presented by such awesomely powerful technology?”
To me, this is where such technology represents an evolutionary shift on this planet. (That is, if our previously evolved fears and angers don’t blow us all to smithereens first.) But there’s another position here – one which desperately attempts to demystify and disenchant these AI imaginings. It sees no signs of Kurzweil’s exponential lift-off, and instead urges us to note how all-too-human, and toxically material, the productions of artificial intelligence are.
Take the millions of low-paid labourers in developing parts of the world, working in massive data farms, manually correcting the erroneous judgements of LLMs (large language models, crunching data and making predictions).
Or take the energy demands of the incessantly whirring computations of AI, with companies like Google and Microsoft planning to create entire new nuclear plants to service their demands. Never mind the “blood metals”, mined from unstable African states, that comprise the actual computer equipment.
These critics jab at how AI is being concretely deployed, steering us away from semi-spiritual visions like Jonze’s or Kurzweil’s.
How about AI’s role in war, where its assessments from available data allow for the lethal blurring of civilian and combatant (as in the Israel/Gaza conflict)? Or the way that social and racial biases soaked into social or financial data find themselves reproduced in the AIs that serve us?
Professor Shannon Vallor, head of the Centre for Technomoral Futures at the Edinburgh Futures Institute, has a beautiful metaphor for this. In a recent post, she suggests we see the output of AI as ghosts – like those that tell cautionary tales in literature, as in the works of Charles Dickens or Toni Morrison. All this processing of the human archive that produces AI’s magic yields, in fact, “electronic ghosts of our own injustice and cruelty, reanimated in software”.
These digital ghosts of AI could be useful, suggests Vallor. “They could explore the new things we might do with access to virtual bodies and spaces.
They reveal the harms that we could confront, rectify and repudiate, in a new phase of our existence. They suggest the richer aesthetic, moral and spiritual values we might find, in a liminal dimension that allows us to see just a bit further than our own.”
Perhaps that’s the beginnings of a dialogue across the AI gulf.
Meantime, let me report – from a bunch of musicians landing in a greasy spoon just outside Bury – on what happens when AI falls into the hands of artists, and how they idiosyncratically respond.
We’re mulling over something we’ve just found on Twitter/X. It’s a full music production – affecting vocals, soft electro backing – produced by a music AI.
The prompt for the software was a lovelorn guy’s smartphone notes, detailing his feelings about a recent break-up. The AI summoned up the musical ghosts, and there it is – his glittering and personalised anthem. It’s far, far too good.
We’re somewhat quietened, as the cooked breakfasts descend on us. Is that our gig all over, then?
Then my indefatigable brother Gregory tells of how he uses AI’s machine learning to separate tracks on recordings. An educator he knows was rhapsodising about the teaching possibilities of isolating the drums on Prince’s Kiss.
“Imagine the music lessons you could build around that moment of genius! All the wee tweaks you could make, to go in different directions!”
Now there’s a vision of AI. Computation as a powerful assist to the process of humans exploring, experimenting and exulting in what they love to do among themselves, for all their zillions of quirky reasons and motivations.
Let the media report on the competition of mighty moguls and their tech corporations, seeking full market and sector dominance.
But maybe remember that (as sci-fi author William Gibson once said) “the street finds its own uses for things”.
Or even a caff in the Midlands.