Only half of AI devs believe they're approaching responsible AI
Artificial intelligence (AI) has become an everyday reality, even for those unaware of how often they use the technology. That said, both the technology and public awareness of it are still at a nascent stage, which heightens challenges on the privacy front. A new study suggests that responsible AI — the practice of building AI that is transparent, accountable, ethical, and reliable — can help create trust and protect privacy among users.
A recent online survey conducted by data privacy and cybersecurity services provider Tsaaro found that only 61% of participants are aware of what bias in AI is, and nearly two-thirds (65%) said they try to avoid AI-enabled features such as unlocking their phones with face ID or using digital voice assistants. Yet only 50% of AI developers believe they are close to developing responsible AI.
“While privacy concerns are almost always a major concern when using new technology, the scope and applicability of AI presents a particularly challenging scenario,” said Akarsh Singh, co-founder and CEO of Tsaaro. The lack of clear, comprehensible procedures for building and implementing AI models often adds to this challenge, he said.
An August 2022 report by Appen on the state of responsible AI and machine learning similarly found that 93% of respondents believe responsible AI is the foundation of all AI projects. But Mark Brayan, CEO at Appen, noted that the “key problem developers are facing is when trying to build great AI with poor datasets, and it’s creating a significant roadblock to reaching their goals”. The report further said, “42% of technologists report the data sourcing stage of the AI lifecycle as being very challenging, while they emphasise the importance of data accuracy.”
In another instance, Shubhangi Vashisth, senior principal analyst (AI) at Gartner, emphasised the need to recognise the challenges in AI systems that deliver biased results. She cited the example of a hiring tool that reinforces racial discrimination or entrenches prejudices against certain communities. Oftentimes, neither users nor developers understand how the system arrives at its output. “This opacity increases the bias in datasets and decision systems,” Vashisth said, adding that bringing more diversity into the team can mitigate the bias often produced by algorithms. Transparency and explainability, she noted, are also key to developing responsible AI applications.
As India still lacks data protection regulations tailored to the needs created by rapid technological change, the Tsaaro study also noted that the government has a key role to play in fostering safe and ethical AI alongside other emerging technology advancements. “A comprehensive, multidisciplinary approach is necessary to strike the correct balance, because too much or the wrong kind of regulation could hinder the adoption of AI or ignore its real problems,” said Singh.
Researchers broadly agreed that when AI is developed responsibly, and stakeholders understand the importance of responsible AI, the outcomes tend to be positive.