
Why AI hallucinations are posing a risk to the enterprise


In early 2023, Google’s Bard made headlines for a significant error, an early and widely publicised example of what we now call an AI hallucination. During a demo, when asked, "What new discoveries from the James Webb Space Telescope can I tell my 9-year-old about?" the chatbot incorrectly responded that the JWST, which launched in December 2021, took the "very first pictures" of an exoplanet outside our solar system. In fact, the European Southern Observatory’s Very Large Telescope captured the first image of an exoplanet in 2004.

More recently, in October 2024, OpenAI's Whisper transcription tool, widely used in healthcare, was found to generate hallucinations, posing risks to patient safety. A study by researchers from Cornell University, the University of Washington, and others revealed that Whisper “hallucinated” approximately 1.4% of its transcriptions, sometimes fabricating entire sentences, nonsensical phrases, or even harmful content, including violent and racially charged comments.

There are many more such instances. So, what exactly is an AI hallucination? Experts describe it as an instance in which a large language model (LLM), the technology underpinning generative AI tools, produces an incorrect answer, whether a complete fabrication or simply wrong, as seen in the Bard incident.


“The causes of hallucinations are varied, but a primary factor is the incorrect data used for training—AI's accuracy is directly tied to the integrity of its input data. Additionally, input bias can lead the model to identify false patterns, resulting in inaccuracies,” explained Namit Chugh, Principal at W Health Ventures, a Boston-based healthcare-focussed venture capital firm.

As businesses and consumers increasingly rely on AI for automation and decision-making, particularly in crucial sectors like healthcare and finance, the risk of errors is significant. According to Gartner, AI hallucinations undermine decision-making and damage brand reputation while contributing to the spread of misinformation. Such incidents erode trust in AI, which can have extensive consequences, particularly as businesses continue to adopt these technologies.

AI hallucinations can also have a severe impact on cybersecurity. They may lead organisations to miss real threats because the AI's judgement has been skewed by flawed training data, which could result in a cyber-attack. Conversely, hallucinations can generate false alarms: if a tool flags a threat that does not exist, employee trust in the AI diminishes, and resources may be diverted away from genuine threats. Each inaccuracy further erodes confidence in AI, making teams less likely to rely on its outputs.


Moreover, hallucinations may result in mistaken recommendations that hinder threat detection or recovery. For instance, if an AI tool detects suspicious activity but suggests incorrect follow-up actions, the cybersecurity team may fail to stop an attack, allowing threat actors to exploit vulnerabilities.

Ajay Goyal, Co-founder and CEO of Erekrut, an AI-based job portal, believes that when AI provides erroneous data, it can lead to flawed decision-making, which malicious actors can exploit.

Combating this requires a multi-faceted approach, he said: incorporating human oversight, enhancing model training with high-quality, diverse datasets, and deploying real-time monitoring to identify and correct AI outputs.
Training employees in prompt engineering is one such measure; the quality of AI outputs largely depends on the specificity and clarity of the prompts used. Many staff lack formal training in crafting effective prompts, but targeted instruction can improve outcomes and reduce hallucinations, said Chugh.
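To illustrate the point about prompt specificity, here is a minimal sketch that contrasts a vague request with one that supplies source text and an explicit way for the model to decline rather than guess. It assumes the OpenAI Python SDK and an API key in the environment; the model name and policy text are illustrative placeholders, not part of any workflow described by the experts quoted here.

```python
# Minimal sketch of prompt specificity, assuming the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

# Vague prompt: leaves the model free to guess, which invites hallucination.
vague_prompt = "Tell me about our data retention policy."

# Specific prompt: supplies the source text, constrains the answer to it,
# and gives the model an explicit fallback instead of fabricating.
policy_text = "Customer records are retained for 7 years, then purged."  # placeholder
specific_prompt = (
    "Answer ONLY from the policy text below. If the answer is not in the "
    "text, reply exactly: 'Not covered by the policy.'\n\n"
    f"Policy text:\n{policy_text}\n\n"
    "Question: How long are customer records retained?"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; substitute whatever is available
    messages=[{"role": "user", "content": specific_prompt}],
    temperature=0,  # a low temperature discourages speculative answers
)
print(response.choices[0].message.content)
```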


Maintaining data cleanliness is crucial as well. “Hallucinations can often stem from using compromised data; ensuring the AI model is trained on accurate and reliable data can reduce instances of erroneous outputs,” said Rashesh Mody, EVP of Business Strategy and Technology at industrial software provider AVEVA.

“Given the current maturity of generative AI tools, errors should be anticipated. Establishing a protocol to verify the accuracy of information, such as setting up industry standards for responsible AI usage and transparency, also plays a crucial role in mitigating the security risks associated with AI hallucinations,” he added.

While often viewed as a drawback, analysts noted that hallucinations in AI—especially Generative AI—can be beneficial for idea generation, quickly inspiring innovative concepts for products, ads, and narratives. They also help create synthetic data to enhance real-world datasets for training machine learning models and simulating complex environments, improving AI resilience testing.


However, scepticism is necessary when dealing with AI-generated content, as hallucinations can lead to inaccuracies, and human oversight is essential before acting on speculative outputs. Today, engineering advances and techniques such as Retrieval-Augmented Generation (RAG) are still required to minimise the risk of AI hallucination, said Visakh ST, CTO of Simplify3X, a Bengaluru-based software testing firm. Visakh believes that as models improve, we can anticipate more innovative and responsible uses of hallucinations.
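For readers unfamiliar with RAG, the basic idea is to retrieve relevant reference text first and then ask the model to answer only from that text. The sketch below uses a TF-IDF retriever from scikit-learn purely for illustration; production systems typically use dense embeddings and a vector database, and the knowledge snippets here are invented placeholders rather than anything from Simplify3X.

```python
# Minimal Retrieval-Augmented Generation (RAG) sketch, assuming scikit-learn
# is installed. Retrieval is TF-IDF based for simplicity of illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# 1. A tiny "knowledge base" the model is allowed to answer from.
documents = [
    "The ESO Very Large Telescope captured the first image of an exoplanet in 2004.",
    "The James Webb Space Telescope launched in December 2021.",
    "Whisper is a speech-to-text model released by OpenAI.",
]

# 2. Retrieve the document most relevant to the user's question.
question = "Which telescope took the first picture of an exoplanet?"
vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)
query_vector = vectorizer.transform([question])
best_match = documents[cosine_similarity(query_vector, doc_vectors).argmax()]

# 3. Ground the generation step in the retrieved text, with an explicit
#    fallback so the model is not pushed to invent an answer.
grounded_prompt = (
    "Using only the context below, answer the question. If the context does "
    "not contain the answer, say 'I don't know.'\n\n"
    f"Context: {best_match}\n\nQuestion: {question}"
)
print(grounded_prompt)  # this prompt would then be passed to an LLM of choice
```

Because the model's answer is anchored to retrieved text rather than its own parametric memory, the Bard-style error described earlier becomes easier to catch and correct.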

