AI language models seem to have an answer for everything. But how accurate are those answers? These models often speak with such authority that we seldom bother to check whether their output is correct. And as they get progressively smarter, it is tempting to assume that their margin for error has shrunk over time. Yet researchers are finding that AI hallucinations are more prevalent now than ever. Considering how many people use these models as references for information, this could spell disaster in many situations, from legal to medical. Why do these models hallucinate, and are the companies behind them doing anything to fix the issue?
The Problem With AI Hallucinations

Imagine this scenario. You are in school and need to write a piece on a famous figure from history, which you will present orally in front of your class the following day. Unfortunately, you haven't had time to visit the school library and take out any books on the subject. You get home, make something to eat, scroll through social media, and watch something on YouTube. Before you know it, it's 10 pm and you suddenly remember the assignment. For many people nowadays, the answer to this problem seems straightforward – use ChatGPT or another AI language model to do the homework for you. So you type in a basic prompt, ask the model to generate the assignment, and go to bed. The next day at school, however, everyone, including the teacher, looks at you in bewilderment as you read out your work.
What Are AI Hallucinations?

In humans, hallucinations are sensory distortions; people hear or see things that are not really there. For AI systems, the term means something different. AI hallucinations refer to the tendency of large language models to generate text that is completely fictional yet sounds very plausible. These are not small mistakes, either; we aren't talking about spelling errors or misplaced punctuation. Hallucinations can be highly detailed accounts of events or people that have no grounding in reality whatsoever. In casual use that may be a minor annoyance, but in other settings it can cause real harm. One legal team has already gotten into serious trouble for citing court cases that an AI had simply fabricated.
The Progression of AI Hallucinations

At first, it was quite easy to notice when an AI was acting strangely: the output often made no sense, or the grammar was stilted and awkward. As these models have grown more complex, however, their mistakes have become harder to spot. The output now sounds so plausible that it is often difficult to tell whether it is true. They can produce long, detailed replies that bear little connection to reality. Nor is it only the standard models that suffer from hallucinations; some of the newer variants, such as the reasoning models released by OpenAI, have been reported to hallucinate on nearly 50% of questions in certain tests. No wonder your teacher shook her head as you told the class about the time Abraham Lincoln invented candyfloss.
The Reason AI Often Hallucinates

Even the engineers who build these systems aren't entirely sure why AI hallucinates, but several factors are believed to contribute. First, the models are trained on reams of online text. If that data is outdated, biased, or missing crucial facts, the model may fill the gaps with plausible-sounding fabrications. Second, when faced with an unfamiliar question, a model tends to produce an answer that fits a familiar pattern, predicting what should come next in a string of words rather than checking facts. These systems have no real common sense, so they often cannot tell whether what they are saying is true. The answers are also delivered in such an authoritative voice that it feels as though the model must know what it is talking about, which leads many people not to bother verifying the output.
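To make the "predicting what should come next" point concrete, here is a minimal, purely illustrative Python sketch. It is not the code of any real model; the phrases and probabilities are invented for the example. It simply shows that if a system picks each next word only by how plausible it looks after the preceding text, nothing in that process ever checks whether the finished sentence is true.

```python
import random

# A toy, purely illustrative "language model": for a given context, it only
# knows how probable each next word is. It has no notion of whether the
# resulting sentence is true. All phrases and probabilities are made up.
NEXT_WORD_PROBS = {
    "Abraham Lincoln invented": {
        "the": 0.05,         # fluent, leads somewhere vague
        "candyfloss": 0.10,  # fluent but factually false continuation
        "nothing": 0.02,     # the true answer, but a less "natural" phrase
    },
}

def sample_next_word(context: str) -> str:
    """Pick the next word by sampling from the learned probabilities.

    The choice is driven only by how plausible each word looks after the
    context, never by factual accuracy, which is roughly why a fluent
    model can produce confident fabrications.
    """
    probs = NEXT_WORD_PROBS.get(context, {})
    if not probs:
        return "<unknown>"
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

if __name__ == "__main__":
    print("Abraham Lincoln invented", sample_next_word("Abraham Lincoln invented"))
```

In this toy setup the false continuation is simply more probable than the true one, so it gets chosen more often; real models are vastly more sophisticated, but the underlying objective is still plausibility rather than truth.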
The Consequences of AI Hallucinations

At the moment, AI systems are used around the world for a wide range of applications. Many people use them for personal reasons, such as discussing their mental or physical health or getting help with daily planning; others use them for business or education. In each case, relying on these systems for accurate information can have negative consequences. That is particularly true for medical advice, or for a calculation that could seriously damage your work if it is wrong. It would be helpful if a model admitted when it was unsure of something rather than offering a confident but incorrect reply, but that is not how these systems currently behave.
Shouldn’t the Developers Be Doing Something To Fix This?

Probably the biggest problem is that not even the creators of these systems know exactly why AI hallucinations occur. The more complex the systems become, the harder it is to identify what causes them to make things up; it seems to be an intrinsic part of the architecture underlying today's models. While engineers work on fixes, some believe hallucinations will never fully go away. Despite this, companies continue to build ever larger and more complex models. Yet, as we have seen, size and complexity have not translated into improved accuracy.
The Bottom Line

It seems that AI hallucinations may be a built-in side effect of current AI technology. Not only can the data these models are trained on be biased or incomplete, it is also finite: high-quality real-world data is a resource that will eventually run out. Many companies will then turn to synthetic data, and since that data is itself generated by AI, we may soon watch the snake eat its own tail. If the synthetic data contains hallucinations, the problem will only compound.