
The AI Hallucination Problem: How to Protect Your Work

Dan O'Keefe, Appian
February 16, 2024

Artificial intelligence has been instrumental in helping businesses streamline and speed up their operations by automating repetitive tasks. It also frees workers to focus on higher-level tasks that require human judgment. And when paired with other intelligent automation tools, such as robotic process automation, it can improve business processes and provide a competitive advantage.

But artificial intelligence is not perfect. AI can make mistakes. In particular, AI hallucinations can cause serious problems for organizations if left unchecked. Some hallucinations can be innocuous: a generative AI might create an image of a dog with five legs. But AI hallucinations cause bigger issues when applied to mission-critical processes such as risk management or fraud detection.

This post will cover the AI hallucination problem and, more importantly, how you can prevent it from resulting in serious fallout.

[Hallucinations aren’t the only risks to watch out for. Hear from eight AI leaders on overcoming risks and the top AI trends in our 2024 AI Outlook.]

Understanding AI hallucinations.

First off, what are hallucinations? An AI hallucination occurs when an AI model produces an incorrect or unusual output. These mistakes can go unnoticed if the error isn't obvious to the person interpreting the model's output. For instance, someone new to their job role may not immediately notice that the model gave an incorrect answer to a question.

But why does this occur? How does AI hallucinate in the first place?

AI models are trained through a process called machine learning. Machine learning spans many techniques, but in general it teaches a model to recognize patterns in data; the model can then generate responses or make predictions based on what it learned from its training dataset.

Consider large language models (LLMs). LLMs predict the likelihood of the next word in a given sentence. If you ask one to write a review of Christopher Nolan's film Oppenheimer, the LLM might say the lead actors are Cillian Murphy and Christian Bale. It might choose these two actors knowing that they have worked with Nolan on previous films (Inception and The Dark Knight Trilogy). But while it predicted Cillian Murphy correctly, Christian Bale isn't in the film; the supporting lead is actually Robert Downey Jr. If you're unfamiliar with the film, you might take the review at face value, fooled by the confident and plausible tone of the output.
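To make the idea concrete, here is a minimal Python sketch of what next-word prediction boils down to. The probability values are invented purely for illustration; the point is that the model emits the most statistically plausible continuation, with no check on whether that continuation is factually true.

```python
# A minimal, purely illustrative sketch of next-token prediction.
# The probabilities below are hypothetical, not from a real model:
# they just show how a plausible-but-wrong continuation can win.

# Candidate continuations for: "Oppenheimer stars Cillian Murphy and ..."
candidate_next_tokens = {
    "Christian Bale": 0.46,     # plausible: worked with Nolan before
    "Robert Downey Jr.": 0.38,  # the actual supporting lead
    "Tom Hardy": 0.16,
}

# A language model emits the highest-probability (or sampled) token,
# with no notion of whether that token is factually correct.
prediction = max(candidate_next_tokens, key=candidate_next_tokens.get)
print(f"Predicted next token: {prediction}")  # -> Christian Bale
```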

[AI can boost your organization’s productivity. But where should you apply it for maximum impact? Become more efficient with these 6 applications of AI.]

Three problems caused by AI hallucinations.

Of course, giving incorrect information as part of a movie review is a fairly benign mistake (unless you're a professional movie reviewer). When mission-critical processes use AI-powered automation, the stakes get much higher and the margin for error shrinks. What are some examples of more serious issues caused by AI hallucinations?

1. Efficiency loss.

AI and other automation technologies promise extreme efficiency gains. But hallucinations introduce errors that can lead to rework, siphoning some of those efficiencies away from organizations. 

Consider an example from financial services: fraud detection. AI is often used to monitor spending patterns on bank accounts against a set of behavioral criteria that indicate fraud. However, the model may start to generate false positives, treating benign behavior as malicious because it loosely resembles a known fraud indicator, such as making a purchase out of state. That could be a sign of fraud, but it's more likely the person is just on vacation. This weakness requires the organization's team to review more transactions and potentially spend more time on the phone with customers to resolve issues.
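As a simplified illustration, the sketch below shows how an overly broad fraud indicator (here, a hypothetical "any out-of-state purchase" rule) generates false positives that land on a human reviewer's desk. The rule, field names, and transactions are all invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    account_home_state: str
    purchase_state: str
    amount: float

def looks_fraudulent(txn: Transaction) -> bool:
    # Overly broad rule: any out-of-state purchase gets flagged.
    # A cardholder on vacation triggers this just as easily as a thief.
    return txn.purchase_state != txn.account_home_state

transactions = [
    Transaction("OH", "OH", 42.10),     # normal local purchase
    Transaction("OH", "FL", 18.75),     # cardholder on vacation -> false positive
    Transaction("OH", "NV", 2_500.00),  # possibly genuine fraud
]

flagged = [t for t in transactions if looks_fraudulent(t)]
print(f"{len(flagged)} of {len(transactions)} transactions routed to manual review")
```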

2. Lack of trust.

Mistakes erode trust. This is particularly true in customer service, where AI adoption has grown rapidly, from chatbots to knowledge management tools.

For example, consider an insurance customer service rep who uses a generative AI chatbot to search for answers to customer queries. The organization might train a model on its own policy documents and institutional knowledge gathered from previous casework. But let's imagine a customer asks a question about coverage outside of their state, and the AI hallucinates the wrong answer. If the rep doesn't know the answer is incorrect, they could give the customer bad information that later leads to a claim denial. These issues can cause the customer to lose trust in the organization and begin searching for a new insurance provider.

3. Bad predictions and increased risks.

AI makes predictions for everything from risk management to loan approvals. AI-powered automation can help make processes like these more efficient and, most of the time, more effective. 

However, AI hallucinations introduce risks into the process. For example, if you use AI to make credit determinations and the AI hallucinates, it could approve a credit decision for a risky candidate. Or, conversely, it could cause credit applications to be wrongly denied en masse, which could harm customer sentiment or even lead to accusations of credit approval bias (and land you in trouble with regulators). Either way, widespread hallucinations can have downstream impacts that seriously harm your organization.

Protecting your organization from the AI hallucination problem.

Single instances of hallucination may not necessarily sink an organizational process. But over time and with repetition, the damage can accumulate. So how do you manage and minimize AI hallucinations?

1. Embrace mixed autonomy.

AI doesn’t exist in a vacuum. While news headlines often stoke fears around AI, the truth is far more measured. Companies must embrace mixed autonomy, a future where digital workers and human workers collaborate to get work done. Beyond the fact that many problems still require human intelligence to solve, AI hallucinations specifically demonstrate the need for human supervision of AI-based systems.
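One common pattern for mixed autonomy is a confidence gate: let the AI handle what it's confident about and escalate everything else to a person. The sketch below is a hypothetical example of that pattern in Python; the threshold, case IDs, and confidence scores are placeholders, not a prescribed implementation.

```python
CONFIDENCE_THRESHOLD = 0.85  # hypothetical cutoff, tuned per process

def handle_case(case_id: str, ai_answer: str, ai_confidence: float) -> str:
    """Route a case based on how confident the model is in its own output.

    High-confidence answers flow through automatically; everything else
    is escalated to a human reviewer instead of being trusted blindly.
    """
    if ai_confidence >= CONFIDENCE_THRESHOLD:
        return f"{case_id}: auto-processed ({ai_answer})"
    return f"{case_id}: escalated to human review"

print(handle_case("claim-001", "Covered under section 4.2", 0.93))
print(handle_case("claim-002", "Not covered out of state", 0.41))
```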

2. Train on your own private datasets.

Recent innovations in generative AI tools and LLMs have driven renewed focus on AI. And given the size of the datasets involved, many assume that larger models always produce better results. But this isn't true: it's best to use the right data for the right task in the right amounts. In fact, smaller models trained on your own organization's data can produce more targeted, accurate results. This can also lead to fewer hallucinations, as the model will have a much better understanding of acceptable outputs.
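As a rough illustration of the "smaller, targeted model" idea, the sketch below trains a tiny text classifier on a handful of hypothetical internal examples using scikit-learn. It stands in for the general approach of fitting a model to your own labeled data rather than relying on a massive general-purpose model; the example texts and categories are placeholders.

```python
# A minimal sketch of a small, domain-specific model trained on your own data.
# The example texts and labels are hypothetical stand-ins for an organization's
# internal, labeled documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

internal_examples = [
    "Claim for water damage under homeowner policy",
    "Request to add a driver to an auto policy",
    "Question about out-of-state coverage limits",
    "Report of a minor fender bender, no injuries",
]
labels = ["property", "auto", "property", "auto"]  # hypothetical categories

# Small pipeline: TF-IDF features plus logistic regression,
# trained only on the organization's own examples.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(internal_examples, labels)

print(model.predict(["Does my homeowner policy cover flooding?"]))
```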

3. Recheck and retrain models over time.

AI models suffer from drift: the growing gap between a model's outputs and the expected results in the real world. This occurs because most AI models are trained on datasets and parameters as they exist at a single point in time, typically outside of a production environment. In live scenarios, models can quickly become outdated, leading to poor predictions.

To mitigate this, organizations must actively monitor model performance. If you’ve embraced mixed autonomy, recruit your employees to regularly report errors. Additionally, tracking metrics like increased complaints or reopened support tickets can potentially alert you to model drift. Finally, try to revisit your models on a regular basis—monthly, quarterly, or even in real time. Keeping models trained on up-to-date data greatly reduces the occurrence of hallucinations.
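A simple way to operationalize that monitoring is to track the model's error rate on human-reviewed cases and compare it against a baseline. The sketch below assumes hypothetical thresholds and review data; the specifics will differ for every organization and model.

```python
# A minimal sketch of drift monitoring: compare the model's recent error rate
# (measured on cases your reviewers have checked) against a baseline and flag
# when the gap grows too large. All thresholds and numbers are hypothetical.

BASELINE_ERROR_RATE = 0.04  # error rate observed at deployment time
DRIFT_ALERT_MARGIN = 0.03   # how much degradation to tolerate before retraining

def check_for_drift(reviewed_outcomes: list[bool]) -> bool:
    """reviewed_outcomes: True where a human reviewer confirmed the model was wrong."""
    if not reviewed_outcomes:
        return False
    recent_error_rate = sum(reviewed_outcomes) / len(reviewed_outcomes)
    return recent_error_rate > BASELINE_ERROR_RATE + DRIFT_ALERT_MARGIN

# Hypothetical results from this month's human-reviewed sample.
this_month = [False] * 90 + [True] * 10  # 10% of reviewed cases were wrong
if check_for_drift(this_month):
    print("Drift detected: schedule model retraining on up-to-date data.")
```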

Minimizing AI hallucinations.

Organizations must use artificial intelligence in their operations or risk falling behind those that do. This makes AI hallucinations an occupational hazard that businesses must contend with. However, with the right governance and human oversight, you can limit both the occurrence of and fallout from AI hallucinations. 

One way of limiting AI hallucinations that we mentioned involves training your own AI models. Doing so not only makes your AI models more accurate, it also bolsters data privacy. Download our guide Implementing Private AI: A Practical Guide to find out how.