🔆 Misleading metaphors: Can ChatGPT be my advisor?

How we perceive technology, Arizona starts an AI school, and a massive breakthrough in AI reasoning.

🗞️ Issue 50 // ⏱️ Read Time: 8 min

Hello 👋

When was the last time you “had a conversation” with a chatbot? If you're like most early AI adopters, you might talk about AI tools as if they're wise friends or members of your team. Referring to AI systems as if they are human beings is becoming normalized and makes their usage more accessible. However, this friendly familiarity also comes with the risk of missing both the real limitations and the full potential of AI systems. Let us show you why.

In this week's newsletter

What we’re talking about: How talking about AI systems as if they are human beings can influence our perception of them.

How it’s relevant: Understanding and communicating about AI systems is vital to using them responsibly and to their full potential. Metaphors can help with that, but careless usage can lead to missed opportunities or to overestimating what AI systems can do.

Why it matters: Reaping the fruits of AI innovation can determine the future success of any business. Understanding AI opportunities and their shortcomings is a significant competitive edge.

Big tech news of the week…

🖥️ The cognitive decline of AI: Many studies explore the capabilities of large language models (LLMs), but their susceptibility to human-like impairments has not been researched to the same extent. A new study applied tests that are widely used to spot early signs of dementia and found that leading LLMs, or “chatbots”, show signs of mild cognitive impairment.

🌍 OpenAI’s new AI model o3 has achieved a breakthrough high score on a prestigious AI reasoning test called the ARC Challenge. AI fans speculate that o3 has achieved artificial general intelligence. Even if this is an “impressive leap in performance”, o3 still hasn’t demonstrated what experts classify as human-level intelligence. Humans will increasingly be valuable in judgement, governance, and direction.

🧱 In the US state of Arizona, students will receive personalized academic instruction from AI for two hours per day. The remaining hours of the school day will include “life-skills workshops” covering areas such as critical thinking, creative problem-solving, financial literacy, public speaking, goal setting, and entrepreneurship.

The Aspects of Simple Explanations

Expressions like “machine learning” and “a conversation with ChatGPT” stand in for underlying, more complex processes. An algorithm can learn to recognise cats in images, but it doesn’t learn the way a human does. Metaphors like these help us interpret AI systems and their capabilities.

While metaphors can help us understand AI systems and their capabilities, using them to simplify AI also carries the risk of misjudging its capacities and shortcomings, which in turn can lead to poor business decisions. It is important to recognize that machine learning is not human learning: learning machines are better in some respects and worse in others, but most importantly, the learning itself is different.

Human brains form memories through complex biochemical processes involving synaptic plasticity and neural consolidation, particularly during sleep. Machine learning systems, in contrast, update weights through mathematical optimization, typically storing information in a more rigid, explicit way. The human brain's ability to selectively consolidate important memories while pruning less relevant ones is still far more sophisticated than our best ML pruning techniques.
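To make “updating weights through mathematical optimization” concrete, here is a minimal sketch in plain Python. It is an illustration with made-up data and a made-up learning rate, not how any production system is trained: a single weight is nudged step by step to shrink a numeric error.

```python
# Toy sketch: "learning" as mathematical optimization.
# A one-parameter model y = w * x is fitted by gradient descent
# on a mean squared error loss. All numbers are illustrative.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (x, y) pairs, roughly y = 2x

w = 0.0              # the single "weight" the model stores
learning_rate = 0.01

for step in range(200):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    # The "learning": a numeric update, not understanding
    w -= learning_rate * grad

print(f"learned weight: {w:.2f}")  # ends up close to 2.0
```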

Acknowledging that we are using metaphors, not synonyms, in these cases is important. When we misinterpret how a technology works and what it can do, the basis of our decisions becomes flawed; the lower the quality of our reasoning, the worse our decisions. In a business setting, the consequences include misallocated resources, unrealistic project timelines, insufficient risk assessment, and poor technology adoption choices that can impact both operational efficiency and competitive advantage. AI literacy, understanding what AI is and what it is not, is an important antidote to such risks.

Technology shapes how we think, and the words we use reveal those thought patterns. When the Internet emerged, we started describing it like a physical place - we "visit" websites protected by "firewalls" in "cyberspace." While these spatial metaphors helped us grasp new concepts, they also boxed in our thinking. As Cohen and Blavin pointed out in 2003, metaphors can limit our understanding by highlighting only certain aspects of what we're trying to describe. We started treating online spaces like physical ones, worrying about limited space and "trespassing." These mental shortcuts influenced how policymakers and lawmakers approached the Internet, even though many experts questioned whether spatial concepts made sense in the digital world.

When AI Sounds Too Human

The dominant metaphors for AI are anthropomorphic: we use human-like language to describe algorithmic processes. We discuss machine learning, algorithmic behaviour, and AI creations, and we say that chatbots lie, hallucinate, or pretend. Slowly but surely, we may start believing our own metaphors.

The anthropomorphisation of AI increases human trust in these systems, which is important for smooth adoption in fields such as medicine and improves human-computer interaction. In a study of an AI system designed to debunk conspiracy theories, trust in the AI was important for persuading people to rethink their beliefs.

Although increased trust in AI has benefits, in some areas it also comes with risks. For example, we might overestimate the capabilities and moral judgment of AI systems and use them for tasks they are not proficient at. Using a chatbot as a search engine or fact checker can produce false information. This is something Lumiera has written about in previous newsletters, for example the case when Google’s AI recommended adding glue to your pizza to prevent cheese from sliding off.

Another problem arises when we overestimate AI systems’ capacity to learn. Human learning happens through complex social interaction, physical experience, and abstract reasoning that we later use to create our own thoughts and ideas. AI systems, on the other hand, consume data to optimise towards a specific goal, which means they are constrained by their training objectives and architectures. For example, chatbots are optimised to produce human-like text.
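As a rough, hedged illustration of “optimising towards a specific goal”, the sketch below scores a toy next-word prediction with cross-entropy, the kind of objective chatbot training is typically built around. The probabilities are invented; a real system produces them with a large neural network and is tuned to make this loss as small as possible.

```python
import math

# Toy next-word objective: how well did the model predict the word
# that actually came next? Probabilities below are made up.

predicted = {"mat": 0.6, "roof": 0.3, "moon": 0.1}  # guesses after "the cat sat on the"
actual_next_word = "mat"

# Cross-entropy for this single word: penalise low probability on the truth
loss = -math.log(predicted[actual_next_word])
print(f"loss for this word: {loss:.3f}")  # lower means a more human-like guess
```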

Although the processes are inherently different, high-profile AI spokespeople sometimes present them as equivalent. For instance, Andrew Ng said: “Just as humans are allowed to read documents on the open internet, learn from them, and synthesize brand new ideas, AI should be allowed to do so too.”

Where AI systems do well, and where they don’t make the cut

AI systems are good at isolated, clearly defined tasks that can be translated into a mathematical problem a computer can work with. In general, they excel at:

  • Crunching numbers for demand forecasting

  • Spotting patterns in large datasets

  • Making predictions based on clear, measurable factors

Examples include scheduling and routing, weather prediction, and automated computer safety systems.
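As a small example of the first kind of task, the sketch below fits a toy demand forecast with a simple linear trend. The monthly sales figures and the straight-line model are assumptions for illustration, not a recommended forecasting setup.

```python
# Toy demand forecast: fit a straight-line trend to past monthly sales
# and extrapolate one month ahead. All figures are invented.

months = [1, 2, 3, 4, 5, 6]
units_sold = [120, 135, 150, 160, 172, 185]

n = len(months)
mean_m = sum(months) / n
mean_u = sum(units_sold) / n

# Ordinary least-squares slope and intercept
slope = sum((m - mean_m) * (u - mean_u) for m, u in zip(months, units_sold)) \
        / sum((m - mean_m) ** 2 for m in months)
intercept = mean_u - slope * mean_m

next_month = 7
forecast = intercept + slope * next_month
print(f"Forecast for month {next_month}: {forecast:.0f} units")
```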

AI systems perform worse at automating more complex, less clearly defined problems, for example when:

  • Making judgment calls that require context

  • Handling situations not in their training dataset

  • Understanding nuanced human interactions

  • Making ethical decisions

AI shows both promise and limits in healthcare. While it can beat doctors at specific tasks like analyzing X-rays and diagnosing diseases in controlled settings, it struggles with real-world patient care. The difference lies in clinical reasoning - the complex, nuanced work of understanding and treating actual patients.

Making Smarter AI Decisions

Although AI can seem to have human characteristics, many of the processes we describe with metaphors differ at their core from the human-derived words used for them. Metaphors can help us discuss and perceive AI, but they can also interfere with critical decision-making. Therefore, it is essential to understand AI systems' internal workings and risks when making organisational decisions.

Until next year.

Leon focuses on the societal and ethical implications of AI adoption. He uses his technical background to promote AI literacy, and aspires to create a more equitable and responsible view of AI adoption. He holds a BSc in Mathematics from the University of Amsterdam and an MSc in Data Science from the University of Lisbon.


On behalf of Team Lumiera

Emma, CEO
Allegra, CTO

Lumiera has gathered the brightest people from the technology and policy sectors to give you top-quality advice so you can navigate the new AI Era.

Follow the carefully curated Lumiera podcast playlist to stay informed and challenged on all things AI.


Disclaimer: Lumiera is not a registered investment, legal, or tax advisor, or a broker/dealer. All investment/financial opinions expressed by Lumiera and its authors are for informational purposes only, and do not constitute or imply an endorsement of any third party's products or services. Information was obtained from third-party sources, which we believe to be reliable but not guaranteed for accuracy or completeness.