Responsible AI: An imperative beyond business strategy

Sophia, developed by Hanson Robotics, is the world’s first robot citizen and serves as the inaugural Robot Innovation Ambassador for the United Nations Development Programme (UNDP). In Sophia’s words:
 
“My very existence provokes public discussion regarding AI ethics and the role humans play in society, especially when human-like robots become ubiquitous. Ultimately, I would like to become a wise, empathetic being and make a positive contribution to humankind and all beings. My designers and I dream of that future, wherein AI and humans live and work together in friendship and symbiosis to make the world a better place. Human-AI collaboration: That’s what I’m all about.”
 
The questions and choices before us as business leaders center on the many dimensions of using AI responsibly. What does this look like in practice? In research? In the design, development and deployment of AI systems?
 
Ultimately, It’s All About Trust
 
In CX Network’s Global State of CX 2024 survey, 67 percent of respondents agreed that customers are concerned about the ethical use of AI in customer experience, both in its present applications and in its future development. Encouragingly, governments and enterprise leaders are prioritizing responsible AI. The Blueprint for an AI Bill of Rights in the US and the European Union (EU) Artificial Intelligence Act are examples of positive government interventions. Simultaneously, global organizations such as Mastercard, Microsoft, Lenovo and Salesforce, among others, have committed themselves to the United Nations Educational, Scientific and Cultural Organization’s (UNESCO) Recommendation on the Ethics of Artificial Intelligence.
 
C-suite leaders must set the right tone for business heads, users and AI developers in their organizations. It is incumbent upon them to ensure that their AI initiatives uphold individual rights, liberties and preferences while fostering transparency and trust.
 
One notable example is Zoom, which by default disables its platform’s AI capabilities to ensure customer data is not used for training AI models without explicit user consent. This proactive approach demonstrates how businesses can prioritize ethical practices, ensuring trust with customers and employees. When customers trust a brand to use AI ethically, they are more likely to engage deeply, fostering lasting relationships that drive revenue and profitability. Similarly, employees take pride in working for organizations committed to responsible AI, reinforcing loyalty and morale.
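In practice, this kind of opt-in model can be implemented as a consent gate that defaults to off, with feature use and training use consented to separately. The sketch below is a generic illustration of the pattern, not Zoom’s actual implementation; all names and defaults are assumptions.

```python
# Generic opt-in consent gate (illustrative; not Zoom's implementation).
# AI features stay disabled, and customer data is never used for model
# training, unless the account owner has explicitly opted in to each.

class AccountSettings:
    def __init__(self) -> None:
        self.ai_features_enabled = False      # AI features off by default
        self.allow_training_on_data = False   # training use is never implied

    def grant_consent(self, enable_ai: bool, allow_training: bool) -> None:
        """Record an explicit, separate choice for each kind of consent."""
        self.ai_features_enabled = enable_ai
        self.allow_training_on_data = allow_training

def may_use_for_training(settings: AccountSettings) -> bool:
    # Only an explicit opt-in permits training on customer data.
    return settings.allow_training_on_data

acct = AccountSettings()
print(may_use_for_training(acct))   # False: the safe default
acct.grant_consent(enable_ai=True, allow_training=False)
print(may_use_for_training(acct))   # still False: using AI features is not training consent
```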
 
Key Stakeholders for Responsible AI
 
Key stakeholders for Responsible AI include end-users, developers and designers, data scientists, ethicists, legal professionals and business stakeholders, all crucial in ensuring ethical and effective AI systems. Additionally, organizations should involve policymakers and regulators, society and the general public, marginalized and vulnerable populations, academia and research institutions, professional associations and industry groups to build a comprehensive Responsible AI framework. Engaging this diverse array of stakeholders fosters inclusivity and helps create AI systems that benefit all members of society.
 
The Five Pillars of Responsible AI
 
Enterprise leaders must establish a culture of Responsible AI around the following five key pillars:

Ethical AI frameworks and guidelines should be based on core values that align with those of the organizations we lead. Such frameworks operationalize our strategies and principles and establish structured processes for transparent decision-making and for resolving ethical dilemmas. Governance around them ensures continuous monitoring and assessment of AI systems throughout their lifecycle, tracking both their impact and their evolution.
 
For example, Microsoft's Responsible AI Standard is a comprehensive framework guiding the design, development and testing of AI systems based on six key principles: Fairness, ensuring equitable treatment for all individuals; Reliability and Safety, promoting safe and dependable operations; Privacy and Security, safeguarding user data from unauthorized access; Inclusiveness, engaging diverse user participation in design and usage; Transparency, making AI decision-making processes understandable; and Accountability, holding developers responsible for their systems' impacts. This standard aims to create ethical, trustworthy AI technologies that benefit society while evolving with new insights and regulatory requirements.
 
Transparency and explainability must extend beyond data use to every instance where customers interact with AI-driven systems. Customers should be informed about what data is collected, how it is used and how decisions are made, and their informed consent should be obtained. For instance, Adobe’s Firefly generative AI toolset stands out for its transparency: Adobe publishes details about its training data, which consists of rights-owned or public domain images. This ensures that users can trust that the tool is copyright-compliant.

Expert human supervision of AI systems ensures that outputs are empathetic and safe to consume, which is especially important for AI-powered virtual assistants. Human oversight is essential to guide interactions and to take over when AI reaches its limits in unprecedented situations. Salesforce’s Einstein AI, for example, applies human oversight as both a guiding hand and a guardrail for the use of AI.
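One common way to operationalize this oversight is a human-in-the-loop escalation rule: responses that fall below a confidence threshold, or that touch sensitive topics, are routed to a human agent before they reach the customer. The sketch below illustrates the pattern under assumed names and thresholds; it is not drawn from any vendor’s actual product.

```python
# Human-in-the-loop escalation sketch: low-confidence or sensitive AI
# replies are routed to a human agent instead of being sent automatically.
# The threshold, field names and routing labels are illustrative assumptions.
from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.85  # below this, a person reviews the reply

@dataclass
class AssistantReply:
    text: str
    confidence: float        # model's self-reported confidence in [0, 1]
    flagged_sensitive: bool  # set upstream by a separate safety classifier

def route_reply(reply: AssistantReply) -> str:
    """Return 'auto' to send the AI reply directly, or 'human' to escalate."""
    if reply.flagged_sensitive or reply.confidence < CONFIDENCE_THRESHOLD:
        return "human"  # an agent reviews, edits or takes over the conversation
    return "auto"

# Example: an uncertain answer about a billing dispute is escalated,
# while a routine, high-confidence answer goes straight to the customer.
print(route_reply(AssistantReply("Your refund was processed.", 0.62, False)))  # human
print(route_reply(AssistantReply("Our store opens at 9 a.m.", 0.97, False)))   # auto
```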


Data privacy and security must be prioritized by implementing practices such as data anonymization, secure storage protocols and strict adherence to data protection regulations like the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Moreover, organizations must develop comprehensive data governance strategies to continuously assess the quality of AI training data.
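As a concrete illustration of data anonymization, the sketch below pseudonymizes direct identifiers with a salted hash and drops free-text fields before a record enters an AI training set. The field names and salt handling are assumptions for illustration; a real GDPR or CCPA program also needs key management, retention policies and audits.

```python
# Pseudonymization sketch: replace direct identifiers with salted hashes
# and drop free-text fields before records are used for AI training.
# Field names and salt handling are illustrative assumptions only.
import hashlib
import os

SALT = os.environ.get("PII_SALT", "change-me")  # keep the real salt out of source control

def pseudonymize(value: str) -> str:
    """One-way salted hash: records stay linkable without exposing identity."""
    return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

def anonymize_record(record: dict) -> dict:
    """Keep only non-identifying signals from a customer record."""
    return {
        "customer_id": pseudonymize(record["email"]),  # stable pseudonym
        "region": record["country"],                   # coarse location only
        "purchase_total": record["purchase_total"],    # behavioral signal
        # name, email and free-text notes are deliberately dropped
    }

raw = {"email": "jane@example.com", "name": "Jane Doe", "country": "DE",
       "purchase_total": 42.50, "notes": "called about a refund"}
print(anonymize_record(raw))
```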

Sustainability measures should minimize environmental impact (optimizing energy usage, minimizing carbon footprints and responsibly managing hardware lifecycles), foster social equity and ensure economic efficiency. Carbon Engineering’s approach to carbon capture and removal and Xylem’s water management systems, which preserve and distribute water resources, are great examples of AI-powered sustainability.
 
A Collective Responsibility
 
The famous game theory puzzle, the Prisoner’s Dilemma, highlights the tension between individual benefit and collective good. While leaders in the AI race may face similar dilemmas, both foresight and hindsight make the choice clear: Adopt Responsible AI to ensure ethical and sustainable progress.
 
By centering strategies around trust, transparency and inclusivity, businesses can position themselves as innovators and ethical leaders, shaping a better future for all.

Gautam Singh

Gautam Singh is Business Unit Head at WNS Analytics.

