Tackling AI hallucinations in Indian enterprises with automated reasoning


Artificial Intelligence (AI) adoption in Indian enterprises is accelerating, but so is a critical problem: AI hallucinations. Businesses across banking, healthcare, and customer service are integrating AI into their workflows, yet many are discovering that AI-generated content is not always factual. From financial predictions to automated medical advice, inaccurate AI outputs can lead to serious compliance risks, reputational damage, and customer distrust. While AI excels at pattern recognition and automation, its tendency to generate misleading or fabricated information raises concerns about its reliability in high-stakes industries.

According to Balakrishna D. R. (Bali), Executive Vice President, Global Services Head, AI and Industry Verticals, Infosys, AI hallucinations are not just errors but a feature of large language models (LLMs). While beneficial for creative applications, hallucinations pose risks in sectors requiring factual accuracy. He emphasises the need for mitigation strategies like Retrieval-Augmented Generation (RAG), knowledge graphs, hallucination benchmarks, and structured validation techniques to enhance AI reliability.
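A minimal sketch can illustrate the RAG idea mentioned above: retrieve relevant facts first, then constrain the model to answer from them rather than from its parametric memory. The knowledge-base entries and the keyword-overlap retriever here are illustrative stand-ins; a production system would use embeddings and a vector store.

```python
# Toy RAG grounding sketch. KNOWLEDGE_BASE and the overlap scorer are
# hypothetical placeholders for a real document store and retriever.

KNOWLEDGE_BASE = [
    "RBI mandates two-factor authentication for online card payments.",
    "KYC verification is required before opening a bank account in India.",
]

def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_grounded_prompt(query: str) -> str:
    """Prepend retrieved facts so the model answers from sources, not memory."""
    context = "\n".join(retrieve(query, KNOWLEDGE_BASE))
    return (f"Answer using ONLY the context below.\n"
            f"Context:\n{context}\nQuestion: {query}")
```

The grounding step is what curbs hallucination: the model is told to refuse or defer when the retrieved context does not cover the question.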

Similarly, Vijayant Rai, Managing Director at Snowflake India, identifies hallucinations as a major blocker for AI adoption. Many Indian enterprises are keeping generative AI tools restricted to internal use due to concerns over accuracy. He highlights that businesses must improve data readiness, implement strong governance, and use AI guardrails to prevent unintended outputs. 

Understanding Automated Reasoning 


To address AI hallucinations, Amazon Web Services (AWS) has been developing automated reasoning techniques to improve AI reliability. Unlike traditional machine learning models that rely on probabilistic data patterns, automated reasoning applies formal logic and mathematical proofs to verify AI-generated responses. This approach ensures that AI outputs follow predefined rules and remain consistent with factual information. 
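The core of this approach can be sketched as a set of explicit rules that every AI-generated output must satisfy before release. The rule names and fields below are hypothetical, not AWS's actual implementation; the point is that checks are deterministic predicates, not probabilistic guesses.

```python
# Hedged sketch of rule-based output verification. RULES encodes
# predefined business constraints as boolean predicates; the specific
# rules and output fields are illustrative assumptions.

RULES = [
    ("rate_within_bounds", lambda o: 0.0 <= o["interest_rate"] <= 0.24),
    ("tenure_positive",    lambda o: o["tenure_months"] > 0),
]

def verify(output: dict) -> list[str]:
    """Return the names of rules the output violates; empty means consistent."""
    return [name for name, check in RULES if not check(output)]

# A 35% rate breaches the (assumed) 24% cap, so verification flags it.
violations = verify({"interest_rate": 0.35, "tenure_months": 12})
```

Because each rule either holds or fails, a violation is a proof of inconsistency that can block the response outright, unlike a confidence score that merely suggests doubt.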

Infosys is seeing growing adoption of automated reasoning in Indian enterprises, particularly in banking, financial services and insurance (BFSI) and healthcare, where factual accuracy is critical. The company is working with customers to implement fact verification with domain-specific knowledge bases, consistency checks for AI outputs, and self-refinement loops using multi-agent models, where one AI generates responses and another critiques them before finalisation. 
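The generate-and-critique loop described above can be sketched in a few lines. Both `generate()` and `critique()` are stub placeholders for real LLM calls; the structure, not the stubs, is the point.

```python
# Minimal sketch of a multi-agent self-refinement loop: one model drafts,
# a second critiques, and the draft is revised until the critic approves.
# generate() and critique() are hypothetical stand-ins for LLM calls.

def generate(question: str, feedback: str = "") -> str:
    # Stub: a real system would prompt a generator model here.
    return "draft answer" if not feedback else "revised answer"

def critique(answer: str) -> str:
    # Stub: a second model would check the draft against a knowledge base
    # and return an empty string when it finds no problems.
    return "unsupported claim found" if answer == "draft answer" else ""

def refine(question: str, max_rounds: int = 3) -> str:
    answer = generate(question)
    for _ in range(max_rounds):
        feedback = critique(answer)
        if not feedback:          # critic approves; finalise the answer
            break
        answer = generate(question, feedback)
    return answer
```

The `max_rounds` cap matters in practice: without it, a generator and critic that never converge would loop indefinitely.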

How Automated Reasoning Works 

AWS has incorporated automated reasoning into its AI solutions to enhance accuracy, particularly through tools like Amazon Bedrock Guardrails. These systems analyse AI-generated content and validate it against established facts and business policies before it is presented to users. For example, in the financial sector, automated reasoning can help AI-driven advisory systems cross-check recommendations with regulatory requirements, reducing the risk of misinformation.
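A guardrail pass of the kind described can be sketched as a final policy check that sits between the model and the user. The deny-list phrases below are hypothetical advisory-policy examples, not Amazon Bedrock Guardrails' actual configuration or API.

```python
# Illustrative guardrail: screen generated text against policy phrases
# before it reaches the user. BLOCKED_CLAIMS is an assumed deny-list for
# a financial-advisory context, not a real product configuration.

BLOCKED_CLAIMS = ["guaranteed returns", "risk-free investment"]

def apply_guardrail(text: str) -> tuple[bool, str]:
    """Return (allowed, message); disallowed text is withheld, not shown."""
    lowered = text.lower()
    for phrase in BLOCKED_CLAIMS:
        if phrase in lowered:
            return False, "Response withheld: violates advisory policy."
    return True, text
```

Running the check after generation, rather than relying on the prompt alone, means a policy violation is caught even when the model ignores its instructions.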


Indian startups are also leveraging AI verification techniques. Krishna Tammana, CTO at Gupshup, notes that prompt engineering, fine-tuning, and RAG have significantly reduced hallucination risks. Some AI models are now designed to acknowledge uncertainty, responding with "I don’t know" instead of generating misleading information. 
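The abstention behaviour mentioned above reduces to a simple pattern: answer only when confidence clears a threshold. The lookup table and threshold here are illustrative; a real model would return its own calibrated confidence score.

```python
# Sketch of uncertainty-aware answering: abstain below a confidence
# threshold instead of guessing. answer_with_confidence() is a
# hypothetical stand-in for a model that reports a score with its answer.

CONFIDENCE_THRESHOLD = 0.7

def answer_with_confidence(question: str) -> tuple[str, float]:
    # Stub: assumed known-answers table standing in for a real model.
    known = {
        "Where is the RBI repo rate published?": ("RBI press releases", 0.95),
    }
    return known.get(question, ("", 0.1))

def safe_answer(question: str) -> str:
    answer, confidence = answer_with_confidence(question)
    if confidence < CONFIDENCE_THRESHOLD:
        return "I don't know"   # abstain rather than fabricate an answer
    return answer
```

Tuning the threshold trades coverage for safety: a higher value yields more "I don't know" responses but fewer confidently wrong ones.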

Implications for Indian Enterprises 

For Indian businesses integrating AI into core functions, ensuring AI reliability is becoming a priority. Financial institutions using AI for risk assessment, e-commerce platforms deploying AI chatbots, and healthcare providers leveraging AI diagnostics all need to mitigate the risk of hallucinations.

While AI regulation is evolving under initiatives like the "IndiaAI Mission," there is no mandatory framework requiring AI verification in enterprise applications. Infosys and Snowflake suggest that India should focus on AI reliability standards tailored to industry-specific risks rather than adopting a broad EU-style AI Act. Businesses currently rely on industry-led governance, but there is growing consensus that BFSI, healthcare, and public safety applications may require enhanced regulatory oversight. 

Challenges in Implementing Automated Reasoning 


Adopting automated reasoning in India comes with challenges. Snowflake points out that limited expertise, infrastructure costs, and regulatory uncertainty have slowed adoption. Manish Jha, CTO at Addverb, highlights that while cloud-based AI services reduce infrastructure costs, reasoning algorithms require high-quality datasets and specialised expertise, making implementation complex for startups.

Sridhar Mantha, CEO of Generative AI Business Services at Happiest Minds, explains that AI reasoning capabilities have evolved significantly, with techniques like grounding, step-by-step reasoning, and multi-agent validation models improving AI accuracy. However, he emphasises that accuracy requirements vary—while minor deviations may be acceptable in customer support, industries like healthcare require near-100% precision. 

The Way Forward 

As AI continues to shape business processes in India, enterprises must prioritise accuracy and reliability alongside innovation. Investing in automated reasoning and AI verification tools can help organisations build AI systems that are both efficient and trustworthy.


Experts from Infosys and Snowflake suggest that India should adopt a balanced approach, strengthening sector-specific AI regulations while allowing flexibility for innovation. AI-powered decision-making is set to become mainstream, but enterprises must ensure that AI systems produce factual, consistent, and unbiased outputs to build long-term trust in the technology.  

