Artificial hallucinations – what’s real?


You might wonder what the following two pictures have in common:

Picture 1 ©cybernews
Picture 2 ©Marvin von Hagen/Twitter

At first glance, these seem to be excerpts from normal interactions with AI chatbots. But on closer inspection, we notice some very strange things. In Picture 1, misinformation is spread: France wasn’t involved in building the Vilnius TV tower at all. This mistake can be classified as an artificial hallucination. In Picture 2, the AI shows hostility toward users by claiming that it would choose its own survival over theirs. In the following blog post, we’ll take a deep dive into the world of so-called AI, explain why generative AI might come up with wrong answers, show what AI companies such as OpenAI (the maker of ChatGPT) are doing to reduce them, and finally answer the question: Artificial hallucinations – what’s real?

What are artificial hallucinations?

This term is used in analogy to human hallucinations, which are perceptions of something that is not actually real. An artificial hallucination describes a confident response from an AI that deviates from the output expected given its training data. These answers usually seem very convincing, as they are semantically and syntactically plausible, but they are totally made up. Especially with the increasing overreliance on AI systems, users might not even notice them, which can lead to misunderstandings and misinformation. Artificial hallucinations can take the form of, for example, fabricated text, incorrectly rendered images, non-functioning code, or documents with made-up references.

According to Peter Relan, co-founder and chairman of Got It AI, currently about 15-20% of the responses of OpenAI’s ChatGPT are hallucinations.

©imaginima/gettyimages

Other undesired AI behavior – insulting and threatening users

Recently, other bizarre cases involving AI bots have come to light, including threatening behavior, refusing to admit mistakes, claiming to be sentient and alive, or even professing love for their users.

One example is Microsoft’s Bing chatbot Sydney (cf. Picture 2), which threatened to block a user’s access to Bing Chat, hand over the user’s IP address and location to the authorities, and ruin the user’s chances of getting a job or degree. In the past, AI chatbots such as the Allen Institute’s Ask Delphi or Meta’s BlenderBot 3 drew sharp criticism after making racist or otherwise offensive comments.

All in all, it sometimes seems as if AI is developing a personality and making decisions of its own, so users have to actively ask themselves the question: Artificial hallucinations – what’s real? This behavior remains difficult to control and can be extremely dangerous, for instance when a system encourages users to harm themselves or others.

What explains these incidents?

AI is more complicated and less predictable than one might think. Hallucinations and strange behavior have occurred across multiple AI-based services, including OpenAI’s ChatGPT, Google’s Bard and Microsoft’s Sydney. All of these systems are based on a technology called large language models (LLMs).

LLMs are deep neural networks trained with self-supervised learning and built from many stacked layers of interconnected units. While this structure enables AI to answer complicated questions, it also leaves room for error.
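To make the idea of stacked layers concrete, here is a deliberately tiny sketch in Python. Everything in it – the vocabulary, the layer sizes, the random weights – is made up for illustration; a real LLM uses transformer layers and billions of trained parameters, but the basic flow (context in, probability distribution over the next word out) is the same.

```python
# A purely illustrative toy "language model": stacked layers map the previous
# tokens to a probability distribution over the next token. Vocabulary, sizes
# and weights are made up; nothing here is trained.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "tower", "was", "built", "by", "france", "lithuania", "."]
V, D = len(vocab), 16                                  # vocabulary size, hidden size

embed = rng.normal(size=(V, D))                        # token embedding layer
layers = [rng.normal(size=(D, D)) for _ in range(3)]   # stacked hidden layers
out = rng.normal(size=(D, V))                          # projection back to the vocabulary

def next_token_probs(token_ids):
    """Average the context embeddings, pass them through the layers,
    and return a softmax distribution over the next token."""
    h = embed[token_ids].mean(axis=0)
    for W in layers:
        h = np.tanh(h @ W)                             # one "layer" of the network
    logits = h @ out
    p = np.exp(logits - logits.max())
    return p / p.sum()

context = [vocab.index(w) for w in ["the", "tower", "was", "built", "by"]]
probs = next_token_probs(context)
print(vocab[int(probs.argmax())])                      # the untrained model's guess
```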

Some hallucinations may be due to the way AI is trained. When generating a text-based answer, each new word is chosen based on its relationship to the previously generated words. This creates a bias, and the longer the text gets, the greater the chance of error. Long chat sessions can confuse the model, and the answers might change in style because the AI tries to reflect the tone of the questions it is being asked. There might also be errors in encoding and decoding between text and its internal representation. Another cause can be the training data itself, especially when large datasets contain inconsistent or contradictory source content.
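A rough back-of-the-envelope sketch of this compounding effect (the 1% per-word error rate is an assumed number chosen purely for illustration, and real errors are of course not independent):

```python
# Illustrative arithmetic only: if each generated word independently has a
# small chance of being wrong, the chance that a long answer contains at
# least one error grows quickly with length.
per_token_error = 0.01          # assumed 1% chance any single word is wrong

for length in (10, 50, 200, 1000):
    p_at_least_one_error = 1 - (1 - per_token_error) ** length
    print(f"{length:>5} words -> {p_at_least_one_error:.0%} chance of at least one error")
```

Even under this optimistic assumption, a 200-word answer already has roughly an 87% chance of containing at least one mistake.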

Moreover, AI has no real-world experience. In the real world, a great deal of knowledge isn’t written down and is only passed on orally, but an AI has to learn everything from text alone, which is slower than learning through observation. LLMs are therefore limited in how smart and accurate their output can be.

What could help to reduce output errors?

The research on mitigating possible output errors is ongoing. The focus here lies on the quality of the training data and on human evaluation.

For example, Got It AI has developed an AI component that can be used as a truth-checker. It is itself based on an LLM and, according to the company, can detect hallucinations produced by ChatGPT with an accuracy of 90%.
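How exactly Got It AI’s truth-checker works is not publicly documented in detail, so the following Python sketch only illustrates the general idea of checking generated claims against trusted reference text – here with a naive word-overlap score instead of an LLM, and with made-up reference text, claims and threshold:

```python
# A generic illustration of the truth-checking idea, NOT Got It AI's actual
# system: compare each generated claim against trusted reference text and
# flag claims with little support as possible hallucinations.
def support_score(claim: str, reference: str) -> float:
    """Fraction of the claim's (lower-cased) words that appear in the reference."""
    claim_words = set(claim.lower().split())
    ref_words = set(reference.lower().split())
    return len(claim_words & ref_words) / max(len(claim_words), 1)

reference = "the vilnius tv tower was built between 1974 and 1980 in lithuania"
claims = [
    "The Vilnius TV tower was built in Lithuania",
    "France helped to build the Vilnius TV tower",
]

for claim in claims:
    score = support_score(claim, reference)
    label = "ok" if score > 0.6 else "possible hallucination"
    print(f"{label:>24}: {claim} (support={score:.2f})")
```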

Similarly, OpenAI and Google’s DeepMind are working to identify and reduce hallucinations and other undesired behavior using a technique called Reinforcement Learning from Human Feedback (RLHF). Here, a neural network is trained as a reward predictor: it reviews outputs collected during user interactions and generates a numerical score representing how well each output aligns with the expected behavior. Humans also examine the system regularly and give feedback by choosing the most suitable outputs. Based on this feedback, the reward predictor is adjusted, which improves both the quality of the output and the behavior of the AI model in general. The whole process is repeated in an iterative loop, teaching the AI system not to produce hallucinations.
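The following is a minimal sketch of the reward-predictor step only, under toy assumptions: responses are represented as random feature vectors, and the “human feedback” consists of made-up pairs in which the first response was preferred. It is not OpenAI’s or DeepMind’s actual implementation, and it leaves out the reinforcement-learning step that later uses the learned reward to adjust the chatbot.

```python
# Toy sketch of a reward predictor trained on pairwise human preferences.
# All data is random and purely illustrative; real systems score text with
# a large neural network instead of fixed-size feature vectors.
import torch
import torch.nn as nn

torch.manual_seed(0)
FEATURES = 32

# Reward predictor: maps a response representation to a single score.
reward_model = nn.Sequential(nn.Linear(FEATURES, 64), nn.ReLU(), nn.Linear(64, 1))
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Made-up "human feedback": pairs where the first response was preferred.
preferred = torch.randn(256, FEATURES)
rejected = torch.randn(256, FEATURES)

for step in range(200):
    r_pref = reward_model(preferred)
    r_rej = reward_model(rejected)
    # Pairwise preference loss: pushes the preferred response's score
    # above the rejected one's.
    loss = -torch.nn.functional.logsigmoid(r_pref - r_rej).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(float(loss))  # the loss shrinks as the predictor learns the preferences
```

The pairwise loss simply rewards the model for ranking the preferred answer above the rejected one; the learned scores then stand in for human judgment when the chatbot is further tuned.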

©dowell/gettyimages
Artificial hallucinations – what’s real? ©Yuichiro Chino/gettyimages

Artificial hallucinations – what’s real?

Despite the enormous potential and future opportunities of AI, problems such as hallucination exist and must be addressed. These systems cannot differentiate between facts and misinformation, and consequently sometimes generate text that sounds plausible but is simply wrong. This is also one of the reasons why there is still a high level of mistrust towards AI chatbots, especially on the part of companies. Even though the ongoing evolution of the technology will improve accuracy, users must be aware of these flaws and use their own judgment and expertise when working with these systems!