At first, a few months ago, we played with generative artificial intelligence, interviewing it, asking it silly questions and thinking: “Isn’t it crazy what technology can do!”
We were in awe of its capabilities and found it highly entertaining.
Then we witnessed its power in action as ChatGPT passed law, business and medicine exams, and wrote essays and articles in English far better than I could ever write.
We wondered: “Is this thing going to make us all jobless?”
Later, we were fooled by AI-generated images and some of us fell victim to the illusion of reality.
I was left stunned by the image of Pope Francis wearing a long, white puffer jacket, which turned out to be an AI fabrication.
Now the question arises: will it soon be impossible to distinguish truth from AI-generated fiction, with serious implications for journalism and for democracy in general?
I write as someone who has grown to be enthusiastic about AI, a revolution that experts compare to the invention of the PC and the smartphone.
I can see the potential benefits it can bring in terms of efficiency, productivity and relieving us from repetitive and menial tasks, allowing us to focus on more creative, groundbreaking, and altogether more exciting pursuits.
Think, for example, of how it could contribute to better medical care, with AI helping with earlier diagnosis, leading to better outcomes for patients.
But still, these technologies make my head spin. Scientists are making major, exponential advances, and I am not sure we – the general public and politicians alike – have a good grasp of the sophisticated tool that is now in all our hands.
It is apparent too that AI is not only a question for the scientists, engineers and all those who are technically inclined.
What defines intelligence, learning and creation? Can all-powerful algorithms really replace everything we do?
These are the questions AI forces us to confront, like a mirror for our species. And this is why we need humanities.
People often joke that the humanities are useless because a philosopher is never going to send a rocket into space. But amid this dizzyingly rapid evolution of technology, artists, philosophers, historians and ethicists are essential voices, helping us make sense of the tool we have created and educating us about its limitations as well as its potential.
Of course, many applications of AI have already found their way into our daily lives without us even noticing.
Search engines use machine-learning algorithms and natural language processing systems.
Social networks use similar algorithms to show you content specifically relevant to you.
Voice assistants use AI to understand what you say and provide a relevant answer.
But now more than ever, we are freaked out, particularly following the blunt warnings from leading experts and pioneers in the field.
The latest, Geoffrey Hinton, told The New York Times that he no longer believes the prospect of artificial brains outsmarting human brains is a distant one.
We are deep in the uncanny valley: when a machine achieves a certain degree of anthropomorphic resemblance, we find it disturbing.
All of a sudden, we feel like we live in a science-fiction story where the machines look, sound and act increasingly like humans. This creates a fear that the next step in this evolution will be the loss of control over our creations, rendering us obsolete as human beings.
This makes me think of Real Humans – a Swedish sci-fi TV show I avidly watched a decade ago – in which androids called Hubots are used as warehouse workers, house servants, even sex workers.
While the majority embraces their presence, a group seeks to dismantle and eradicate them, blaming them for stealing human jobs.
Whereas Hubots are just supposed to obey and work, some have been illegally hacked and seem to develop free will, empathy and even faith.
The show ends up raising questions such as: Are humans and super-evolved robots essentially the same? Should Hubots have equal rights?
Real Humans showed us a not-so-distant future that spooked viewers.
Fast-forward 10 years and, with AI technology now rapidly entering the mainstream, we are pondering similar questions. They challenge us to think about the consequences of what we create, and not to wait for something tragic to happen before we regulate it.
We made this mistake with social media, entrusting it to the general public before sufficiently assessing what it would mean for our privacy, our politics, and the very fabric of our society.
The future of work is also a pressing question in the context of AI. It is easy to be freaked out at the prospect of AI destroying jobs – we can all see how some office tasks could be completed by a machine. But many jobs require a level of originality, interpersonal skill and decision-making that AI-based systems are incapable of.
Although AI systems are capable of analysing vast amounts of data, they remain unable to understand what they do.
Artificial intelligence may generate gigantic volumes of content, but it will always amount to imitation and skilled execution.
An AI can imitate Van Gogh or Velazquez, but it cannot imagine what artists will do in the 22nd century, because that art does not exist yet. Only the human mind can bring it into being.
A very interesting discussion, I find, is about how much more wealth AI could create, and how that wealth could be redistributed. Curiously, this debate is barely happening.
William Spriggs, an economics professor at Howard University and chief economist at the American Federation of Labor and Congress of Industrial Organizations, the largest federation of unions in the United States, has an idea as to why.
“Companies don’t want to have a discussion about sharing the benefits of these technologies,” he said. “They’d rather have a discussion to scare the bejesus out of you about these new technologies. They want you to concede that you’re just grateful to have a job and that you’ll pay us peanuts.”
One way or the other, we are going to have to learn to live and work with AI. It is here, and it is here to stay.
An AI probably won’t replace me as a journalist – but a journalist with a good understanding and use of AI might.
Digital illiteracy is my enemy right now – it’s high time to catch up and get in sync with the times.