THIS week, Google finally fired that engineer who believed his super-powered chatbot, called LaMDA, had a soul. I’ve been keeping an eye on Blake Lemoine since his “revelation” kicked off, and he’s quite interestingly unrepentant, for reasons I’ll elaborate on later.
Elsewhere in artificial intelligence this week, a fascinating question arose. Never mind whether AIs can write you poetry or ask for legal representation (so you can’t delete them): when will an AI receive a Nobel Prize for scientific discovery?
Really, that doesn’t seem so far off. The London-based AI giant DeepMind released its newest version of AlphaFold this week. It’s been revolutionising biology for the last six months, showing how it can accurately predict the physical, ribbon-like shape of a protein.
This used to take years of grinding, fiddly lab work. Now, thanks to AlphaFold’s powers of information-crunching and modelling, determining these shapes is almost as easy as running a Google search. (The release this week covers nearly all of the 200 million forms of protein in existence.)
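For the technically curious, here is roughly what “almost as easy as a Google search” looks like in practice: a minimal Python sketch that queries the public AlphaFold Protein Structure Database run by DeepMind and EMBL-EBI. The endpoint and the pdbUrl response field reflect the database’s published API at alphafold.ebi.ac.uk, and the UniProt accession is just an illustrative example – treat the details as assumptions to check against the current documentation.

```python
# A sketch of looking up a predicted protein structure in the public
# AlphaFold Protein Structure Database (alphafold.ebi.ac.uk).
# Assumptions: the /api/prediction endpoint and its "pdbUrl" field
# behave as documented at the time of writing.
import requests

ACCESSION = "P69905"  # UniProt ID for human haemoglobin subunit alpha (example)

resp = requests.get(
    f"https://alphafold.ebi.ac.uk/api/prediction/{ACCESSION}",
    timeout=30,
)
resp.raise_for_status()
entry = resp.json()[0]  # the API returns a list of prediction records

# Fetch the predicted 3D structure as a PDB file, viewable in any
# molecular viewer (PyMOL, ChimeraX, Mol*).
structure = requests.get(entry["pdbUrl"], timeout=30)
structure.raise_for_status()
with open(f"{ACCESSION}_alphafold.pdb", "wb") as out:
    out.write(structure.content)

print(f"Saved AlphaFold prediction for {ACCESSION}")
```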
Visualising the quirky shape of a protein is essential to manipulating it, for vaccines, medicines, new fuels, environmental interventions, and much else, so AlphaFold’s speedy, accurate service creates huge efficiencies – and opportunities. It will help get new products to markets quicker, and direct scarce resources towards more ambitious goals in biology.
But who – or what – deserves the Nobel Prize? Is it AlphaFold’s own massive computations, or is it the human team and its leader, founder Demis Hassabis? Or should they become a human-machine “centaur”, as they say in computer-assisted chess, and accept it jointly?
A speculative piece in the July 3, 2021 issue of The Economist titled “What if an AI wins a Nobel Prize for Medicine?” can help us answer that. In 2036, they imagine, the world faces a catastrophic failure in antibiotic effectiveness. YULYA (the AI concerned) has discovered a way to pair antibiotics, called “ancillary vulnerability”, so that they regain their potency.
But YULYA was originally designed to research cancer treatments. As The Economist’s scenario goes, a stream of current research papers on the antibiotic crisis is accidentally fed into the learning machine.
“It had then used its reasoning capabilities to build a testable hypothesis: the forerunner of what would become ancillary vulnerability”, says one fictional doctor. “It highlighted the data that would be needed to validate the hypothesis, including specific guidelines as to how it should be collected. It amounted to a full-scale programme of research”. The fictional researchers even have to downplay YULYA’s creative originality in order to get their funding.
There’s an entertaining public stramash woven through this. Protests are staged by pro-humanists in front of the Stockholm Nobel ceremony. Hars Kritik (get it?) from the European Robotics Institute waves his placard, accusing the judges of “flawed anthropomorphism”. Yet as the piece notes, one of the award criteria for a Nobel in science is that the project “confers the greatest benefit to humankind”. Would it be churlish to deny it to an independently-cogitating AI?
How likely is the above scenario – where a focused machine-learning tool makes its own analytical leaps, establishes its own interests and goals and operates in a mode of thinking we didn’t anticipate, or can’t even recognise?
Another startling AI story I came across this week suggests such a scenario is imminent.
The headline is mind-blowing enough: “An AI may have just invented ‘alternative’ physics…When shown videos of physical phenomena and instructed to identify the variables involved, the software produced answers different from our own.”
PHYSICS depends on variables – mass, momentum, acceleration – ways of measuring the dynamics of the material universe. We’ve established these variables over centuries of experiment; it’s how we understand the mechanics of the cosmos.
But could differently evolved minds – that notorious alien visitor, the people of the Hopi tribe, with their own conception of time, or maybe an AI – perceive different variables in a phenomenon?
The Columbia Engineering computer scientists trained their AI program on videos of swinging double-pendulums, the undulations of lava lamps – even one of those insane “air dancers” you see outside American parking lots. The computer’s task was to “establish a minimal set of fundamental variables that fully describe the observed dynamics”.
It predicted the dynamics of these objects reliably. But in the course of doing so, the machine came up with some variables that the scientists simply couldn’t recognise. The variables were evidently functional – the predictions worked – but the researchers couldn’t decode the AI’s maths.
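To make the technique concrete: the Columbia team’s actual method is more elaborate, but its core move can be sketched as squeezing video frames through a deliberately narrow neural-network bottleneck, then asking how few latent numbers still reconstruct the motion. Everything below – the network sizes, the dummy data, the latent dimension – is an illustrative assumption, not the researchers’ code.

```python
# Illustrative sketch (not the Columbia code): learn a handful of
# latent "state variables" from video frames by forcing them through
# a narrow autoencoder bottleneck. If reconstruction stays good as
# the bottleneck shrinks, those few numbers are candidate variables.
import torch
import torch.nn as nn

LATENT_DIM = 4          # assumed: try 2, 4, 8... the smallest that works
FRAME_PIXELS = 64 * 64  # assumed: flattened 64x64 greyscale frames

class StateAutoencoder(nn.Module):
    def __init__(self, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(FRAME_PIXELS, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),  # the machine's "variables"
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, FRAME_PIXELS),
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(frames))

# Dummy stand-in for real pendulum footage: random frames.
frames = torch.rand(512, FRAME_PIXELS)

model = StateAutoencoder(LATENT_DIM)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(10):
    optimiser.zero_grad()
    loss = loss_fn(model(frames), frames)
    loss.backward()
    optimiser.step()
    print(f"epoch {epoch}: reconstruction error {loss.item():.4f}")

# The encoder's outputs are the machine's own coordinates for the
# system -- which need not line up with angle, momentum or anything
# a physicist would name.
```

The point of the sketch is the last comment: nothing forces those few latent numbers to correspond to variables a human physicist would recognise, which is exactly the situation the Columbia researchers reported.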
Could the machine be pointing to aspects of reality that we humans have overlooked? “What other laws [of physics] are we missing simply because we don’t have the variables?” asked the study’s co-lead, Qiang Du.
So what kinds of intelligences are we artificing here? Ones that reframe and re-examine reality as powerfully as Newton, Einstein and Crick? Will they illuminate the structure of matter so thoroughly that we can solve some of our acute problems – like mega-efficient batteries, or plastic that disappears into nature? Could they even drive smart economies that measure (and minimise) all our toxic externalities?
As we cope with the chaotic consequences of our own modernity for the rest of this century, we may welcome these lightning-fast researchers and pattern-identifiers at our side, given how few seconds to midnight we have left. Nobels may not even matter. But come the Prize, it would seem ungracious for us human agents not to share the glory with these non-human ones.
Let’s return to Lemoine. If AI is such a powerful brute-force solution machine, do we need to bother about whether it has a soul, sentience, consciousness or whatever?
There’s a very practical “Yes” to this, given the inscrutability of the Columbia Engineering results. How can we explore the thinking that brings an AI to its conclusions? By finding a shared human-machine language in which it can explain itself.
And for all the derision heaped on Lemoine, saner heads such as his boss at Google, Blaise Agüera y Arcas, are noting some truths. What was on display in Lemoine’s LaMDA conversation was a machine “consciously” recognising Lemoine’s mind. It was trying to model its own responses so that it could be recognised by the Google ethicist.
It might not mean the lights are fully on. But it’s somewhat reassuring that these machines are beginning to operate like empathic, social, language-based animals, much as we do. This fluency could be a bridge between the results of their stupendous computing power and us faltering sapiens in our struggling world.
As Agüera y Arcas and Benjamin Bratton write in Noema, we “need more nuanced vocabularies of analysis, critique and speculation” to help us understand these human-machine liaisons. So while we shouldn’t be phobic or stupidly optimistic about AI, we should base our considerations “on the weirdness right in front of us.”
Would it be weirder still to place the Nobel medal round the neck of a crash-test dummy? We’re on fire. All neural networks (organic or silicon) on deck, is what I say.