WHILE the topic of artificial intelligence (AI) has been in the public consciousness for the past five or six years, it wasn’t really until the release of ChatGPT at the end of 2022 that the possibilities of deep learning models and neural networks exploded into the mainstream.

And with them came well-trodden questions about the ethics of algorithms and what even constitutes artificial intelligence in the first place.

The ability to communicate with a machine that can mimic the human cognitive process has been a staple of science fiction and futurism for over a century, but it is now taking shape as a very real and world-changing technology akin to electricity or the internet.

And as with any paradigm shift, the scale at which this evolving technology will change our lives is not yet known; though it is a mark of the times we live in that our strongest fears around AI revolve around how the wealthiest and most powerful will use it to cut jobs and wages harder and faster than ever.

But there are other great dangers too; not in hypothesised singularities or Skynet-style robot uprisings, but in the potential for AI to erode, beyond the point of repair, our ability to hold a shared reality – and in turn, the ability to work toward common goals on the basis of a shared understanding of the world around us.

Social media is already, by design, a conduit for rampant misinformation that rewards popularity over accuracy.


Without it, conspiracy movements like QAnon and its thematic siblings would simply not have the influence they do. But at their heart, there are bad faith actors driving the narrative forward.

Over the past few years, there have been numerous examples of fake flyers and materials, purportedly from LGBTQ+ organisations, advocating for the inclusion of “pedosexuals”, or false-flag sticker campaigns claiming that “Genital Preferences are Transphobic”, with the intent of misleading the public about what trans people believe.

Contextless videos of migrants allegedly harassing young people on the street, always with little to no information on where these events supposedly happen, have led to fascist protests outside hotels where asylum seekers stay.

How much worse will it be when AI-generated video reveals to reactionaries and the radicalised the truth they’ve always known but couldn’t prove? Will it matter that it isn’t real?

The potential for AI to create extremely convincing deepfake videos could only lead to a further fracturing of our shared experiences of the world, wherein manufactured video will spread through digital ecosystems with ease, while legitimate events that contradict the desired narrative can be dismissed as fake news.


And certainly, this will be limited by the extent to which a politician or celebrity is surrounded by trusted witnesses and sources; but what of smaller groups without the same reach, or activists who are already treated with hostility by the press?

A doctored video of Humza Yousaf wouldn’t make it far outwith more extreme conspiracy theorists – but what about a video from a small reproductive healthcare conference, where a known doctor openly admits that aborted foetuses are used in food manufacturing?

Or from an LGBT event where a charity lead privately concedes that their aim is to lower the age of consent?

US Republicans in Texas have already moved to ensure that food containing human foetal tissue be “clearly and conspicuously labelled”, despite such concerns being a total fabrication on the part of the anti-abortion movement.

And here in Scotland, a candidate for the Alba Party made false claims in 2021 – widely shared by the social conservatives of the Yes movement – that LGBT rights charity Stonewall was trying to lower the age of consent to 10.

Both of these claims, like many, many others, are uncritically spread within movements hostile toward women’s rights and LGBTQ+ equality where the need to “win the argument” transcends the need to be right or to be fair.


As it stands, I see no future where artificial intelligence won’t play a role in taking the intensity of these campaigns to ever more dangerous levels of vitriol.

And this phenomenon is already playing out in the world of online scammers. Imposters are successfully calling their victims and using artificial intelligence to imitate the voices of loved ones begging for help, raking in thousands in stolen cash from panicked family members.

Microsoft claims its latest text-to-speech AI model can imitate anyone based on just three seconds of dialogue, long enough to tell a stranger on the phone that they have a wrong number. The technology exists.

As part of the generation that grew up when the internet was becoming the internet, until recently I’d have put myself in the category of people who are able to spot an online scam easily. Now, I’m not so sure.

On social media, I always check my sources before sharing anything about what anyone has purportedly said. Before long, I worry, the internet will be so polluted with deceptive AI-generated video and audio content that it will become functionally useless as a space to organise and to inform.

Particularly while billionaires such as Elon Musk remove verification features that once confirmed online accounts were who they said they were.

Huge social media accounts whose sole purpose is misquoting and intentionally misunderstanding left-wing and progressive viewpoints already garner millions of views, and have even been cited in school shooters’ manifestos and far-right literature.

Without regulation, if that is even the solution, I see no barrier to the spread of disinformation becoming significantly worse in the coming years.

Artificial intelligence could be, and likely will be, a great boon to humanity in time; but I worry deeply about the contemporary consequences of its misuse in a world already polarised by splintered worldviews.