Brillio’s Muthumari S on the growing importance of human-centric AI
Despite the global impact of artificial intelligence (AI) across industries, it has been criticised for lacking emotional and social intelligence. Human-centric AI models, which focus on designing and implementing AI systems that prioritise the needs and well-being of humans, are being developed to address this issue, though they are still in their early stages. In an interaction with TechCircle, Muthumari S, Senior Director and Global Head of AI Studio at Brillio, stresses the need for AI to empathetically understand and respond to human needs. She further explains how companies should create an ethical and responsible AI framework, and the importance of design thinking in enhancing AI. Edited excerpts:
What is the concept of human-centric AI, and why does it matter to the enterprise?
We've shifted from "building AI" to "building with AI" by integrating foundation models from hyperscalers into our projects. The market trend is now towards "humanising AI", which is our focus at Brillio. We prioritise human-centred projects, with AI in the loop instead of humans. This approach emphasises human needs, ethics, and holistic solutions to unleash AI's potential. We believe this shift is crucial because all AI applications are ultimately for humans, and creating intuitive experiences is key to their success. Human-centric AI focuses on user experience, considering emotions, preferences, and context to create meaningful interactions.
Ethical and responsible AI is another important aspect. We assess if AI is necessary and prioritise human touch points for sensitive conversations. Privacy-preserving techniques like differential privacy and federated learning ensure data security. In today's world, ethical AI requires consideration of human emotions, intelligence, privacy, fairness, and transparency.
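Differential privacy, one of the techniques mentioned above, can be illustrated with a minimal sketch: calibrated Laplace noise is added to an aggregate query so that no single record can be inferred from the released figure. The dataset, epsilon value, and query below are illustrative assumptions, not Brillio's implementation.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) noise via inverse-transform sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon: float = 1.0) -> float:
    """Count records matching `predicate`, adding noise calibrated so the
    released count is epsilon-differentially private (sensitivity = 1)."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

# Hypothetical usage: release how many customers are over 60
# without exposing whether any individual record is included.
ages = [34, 67, 45, 72, 29, 61]
noisy = private_count(ages, lambda a: a > 60, epsilon=0.5)
```

A smaller epsilon means more noise and stronger privacy; the trade-off between accuracy and privacy is the core design decision in such systems.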
Which sectors are currently using human-centric AI models and which ones are lagging in terms of adoption, especially in India?
Various consumer sectors, particularly healthcare, life sciences, retail, and insurance, are quickly embracing a human-centric AI approach. While industries like insurance rely heavily on human interaction, there is a noticeable shift towards using AI to enhance customer experience and loyalty. However, the focus on backend operations and efficiency enhancements is not as strong currently. The real surge in adoption is seen in industries prioritising end-consumer experience and a human-centric approach.
With the growing dependence on AI models and their effect on data security, how does Brillio ensure the protection of customer data?
The power of data for positive change and innovation in the industry is clear, but it also carries risks without the right mindset, security measures, and governance. When starting an AI project with a company, we create a Data Readiness Index to assess data governance components such as quality, security, transfer processes, and anonymisation. By assigning scores against these checks, we can pinpoint areas needing attention before proceeding; if data security is lacking, for example, we advise the client to make improvements first. Each aspect of data readiness is evaluated to identify gaps and suggest actions to take alongside project implementation. It is also increasingly important to define data ownership rights and establish transparent data agreements, especially when managing data transfers between consumer and producer teams in a business setting. Addressing data security concerns before moving to the MVP (minimum viable product) stage ensures that data readiness underpins every AI implementation and supports successful project results.
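A readiness index of this kind could, purely as an illustration, be sketched as a weighted scorecard over governance checks. The dimension names, weights, and pass threshold below are hypothetical and are not Brillio's actual methodology.

```python
# Hypothetical data-readiness scorecard; dimensions, weights, and the
# pass threshold are illustrative, not Brillio's actual index.
CHECKS = {
    "quality": 0.3,
    "security": 0.3,
    "transfer_processes": 0.2,
    "anonymisation": 0.2,
}

def readiness_index(scores: dict) -> float:
    """Weighted average of per-dimension scores (each on a 0-100 scale)."""
    return sum(CHECKS[dim] * scores[dim] for dim in CHECKS)

def gaps(scores: dict, minimum: float = 70.0) -> list:
    """Dimensions scoring below the minimum, i.e. needing attention
    before moving on to the MVP stage."""
    return [dim for dim in CHECKS if scores[dim] < minimum]

# Hypothetical client assessment: security falls short of the threshold,
# so it would be flagged for remediation before the project proceeds.
client = {"quality": 82, "security": 55, "transfer_processes": 74, "anonymisation": 90}
```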
How are businesses ensuring the ethical implementation of AI in their corporate strategies?
Most organisations, especially large enterprises, already have data governance and data security guidelines in place. However, the challenge arises when datasets are collected without a clear understanding of where they fit into the data architecture. Some data may sit unused and unorganised until it enters the data lake and is actively put to work. This shift in mindset requires enterprise customers to reconsider collecting data that does not currently serve a purpose.
Storage and data usage within the architecture are separate issues: while usage is typically well managed, storage is often overlooked. Additionally, data literacy poses a challenge, as multiple teams within an organisation may have their own data strategies. For example, a sales and marketing team may collect and strategise data differently from a customer experience team, leading to misalignment in data usage. During our readiness discovery phase, we address these issues to ensure that data is collected purposefully and used in alignment with organisational guidelines.
In the event of a malfunction, who should be accountable or responsible for the failure of these AI models?
I will analyse the situation from a legal standpoint, as legal advisors are now involved in almost every project. We carefully review data ownership, especially in producer-consumer relationships, to determine responsibility for data collection and quality. Legal protocols must be established before using any AI system, including disclaimers about data sources and user access. These disclaimers are tailored to each project and industry, with collaboration between legal teams to ensure compliance with regulations. We take into account four primary perspectives, regardless of specific regulations: data protection, bias, transparency, and privacy. We examine data to prevent bias and monitor metrics for transparency. We also tackle subjectivity and ethical challenges by monitoring metrics like toxicity and faithfulness scores. Privacy is a central concern, and we employ methods such as differential privacy and federated learning to safeguard customer data.
Is there any chance that AI will be smart enough to outsmart humans?
Not yet. However, Artificial General Intelligence (AGI) is gaining prominence, with many claims suggesting that it will soon begin to exhibit emotions. Currently, we have models that closely resemble humans, but customers can easily discern that they are interacting with AI. Nevertheless, progress towards Artificial Super Intelligence (ASI) is rapidly advancing. Theoretically, ASI will surpass human intelligence in all aspects, including creativity, problem-solving, and emotional comprehension. This advancement is taking place in the background, with numerous hyperscalers and industries making substantial investments in pursuit of ASI's promised capabilities: surpassing human intelligence in most economically valuable tasks.
Three key trends will emerge: increased autonomy in business processes, personalised services in various sectors, and ethical concerns regarding privacy and job displacement. It is important for policymakers, researchers, and industry leaders to work together to ensure that AI technologies are developed and used in a way that benefits humanity as a whole.
What are the new job titles that are emerging in companies that adopt human-centric AI?
Legal advisors specialising in AI policies, regulations, data regulations, and disclaimers are essential for AI projects and are in demand. Technology architecture matters, but so do competencies like critical thinking, interpersonal skills, customer perspective, and design thinking. Initiating projects now involves design-thinking consultants and technology-skilled business analysts. AI literacy and readiness are vital for enterprises transitioning to AI, requiring workforce training, and trainers promote AI literacy within projects. Demand is high for prompt engineers and for specialists in data security and integration. LLMOps engineers are also replacing DevOps engineers, focusing on detailed metrics and a human-centric approach. Ethical AI experts, including AI psychologists and AI anthropologists who study the societal impacts of AI projects, are also emerging roles.
What type of investment are you putting into AI research?
At Brillio, we are focusing on three key areas: AI literacy, research and development, and responsible AI. Our AI literacy efforts include training sessions on market trends and integrating AI into daily activities. We are also preparing for future innovations in research and development, particularly in generative AI. Additionally, we are committed to supporting ethical AI practices through our responsible AI initiatives.
What according to you is the future of human-centric AI?
I would stress that design thinking (a human-centred approach to problem-solving that focuses on collaboration between designers and users) and AI can be a powerful combination for creative problem-solving and innovation. AI augments the design thinking process by providing advanced data analysis, pattern recognition, and predictive capabilities. These enhancements help teams navigate complex problems with greater precision and speed. For instance, AI can quickly analyse vast datasets to reveal user behaviour patterns, informing more accurate empathetic insights and needs analysis. Ultimately, RoI (return on investment) should not only be measured in terms of monetisation but also in terms of human experience and sustainability, an area we are currently exploring with some of our larger clients.