
AI: The most misleading compound noun known to man (and machine)

Chris Moran, Head of Content at Viola Communications, writes on the semantics behind AI and how the term can be misleading.

Ten out of ten to whoever it was that coined the phrase ‘artificial intelligence’ in the sense of a machine being able to demonstrate sentience.

Less than a generation ago, this kind of speculative vision of the future was the exclusive preserve of science fiction authors and film-makers (see Rossum’s Universal Robots, The Terminator, Blade Runner, The Day the Earth Stood Still, Star Wars, I, Robot, Lost in Space et al.). Now, it’s impossible to go a single day without someone reminding us that ‘AI is going to take all our jobs, replace humans, become sentient and destroy us all’.

But artificial intelligence doesn’t actually exist – at least not in the truest sense of the word(s). Despite seeming to be able to reason (a major element in defining human intelligence), current AI is simply an ultra-fast sequence of complex algorithmic computation. As IBM puts it, “Artificial intelligence is a technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.”

The word doing the heaviest lifting in that sentence is, of course, “simulate”.

Why is AI such a misleading concept?

A more realistic name for what we understand as ‘AI’ would be ‘Anthropomorphised Intelligence’: a machine that appears to be able to reason only by virtue of the human qualities bestowed upon it by humans themselves, in much the same way that we think a dog is smiling, a hyena is laughing or a grumpy cat is … well, being grumpy. We tend to overlay our own very human emotions onto non-human entities.

It is also a human trait to relate new information to our own experience of the world, so when we read an article targeting our field of expertise, we latch onto the apparent proximity of the threat to our work: graphic artists think about AI taking over design and layout; creative writers worry that they will be put out of work by OpenAI’s ChatGPT; accountants can only marvel at the speed with which ‘AI’ tools fix broken Excel formulas, automate accounts payable processes and draft emails; musicians are extremely concerned by the number of AI-generated recordings populating streaming sites. The list goes on.

By lumping everything this amazing technology can achieve under the single broad label of AI, we ignore the incredible number of other real-world applications in which various incarnations of ‘ultra-fast sequences of complex algorithmic computation’ are making a huge difference. Sectors benefiting from AI-led automation range from healthcare and transportation to telecommunications, agriculture, production and manufacturing, and cancer research.

AI is not human – it’s an algorithm

While we acknowledge that there is an extremely positive upside to this new technology, we also need to examine the possible pitfalls. AI relies on the premise that all rational thought can be made as systematic as algebra or geometry – but can it? Does it really have empathy? Is it truly capable of nuance or humour, or is it just programmed to make us think it is?

AI operates on algorithms and data, simulating aspects of human thought but lacking true understanding, empathy, or nuance. While it can mimic humour and emotional responses, these are based on patterns, not genuine feelings or consciousness. AI’s responses are programmed and refined through training on vast datasets, enabling it to convey the illusion of empathy. However, it doesn’t experience emotions or grasp context in the same way humans do.

Thus, while AI can systematically replicate rational thought processes, it can be misleading: its capacity for authentic, human-like qualities remains limited, ultimately revealing a foundational reliance on programming rather than genuine understanding.

On its own website, OpenAI states that ChatGPT can occasionally ‘hallucinate’ its answers and can be absolutely wrong (or lie) about known facts (this can happen if you train your AI chatbot on Reddit and X content), creating an almost existential dilemma.

The more we depend on AI for decision-making, the more we may question our own autonomy and critical thinking skills. This raises ethical concerns about identity, agency and the nature of reality itself. Are we losing touch with what it means to be human in our quest for convenience, efficiency and cat memes?

I rest my case.

By Chris Moran, Head of Content, Viola Communications.