IBM framing policy on generative AI chatbots
IBM is in the process of drafting a policy that will define how its employees use third-party generative artificial intelligence (AI) tools such as OpenAI’s ChatGPT and Google’s Bard, three senior executives at the technology giant said at its AI Innovation Day event in Bengaluru on 20 June.
Speaking on the rise of generative AI and how such tools are used in internal processes, Gaurav Sharma, vice president at IBM India Software Labs, said the company is evaluating the segment and the veracity of such tools, “since these tools are built on untrusted sources that can’t be used.” He added that a policy around the use of generative AI applications such as ChatGPT is “still being framed.”
Vishal Chahal, director of automation at IBM India Software Labs, confirmed that an internal policy on the use of such tools is being developed.
The policy remains under development, but no outright bans have been put in place so far. “A general education has been conducted around not putting our code into ChatGPT, but we haven’t banned it,” Shweta Shandilya, director at IBM India Software Labs, said.
“With every new technology, such as the use of other generative AI tools (beyond ChatGPT), deliberations around its usage are an ongoing process,” an IBM spokesperson said in response to a query on the framing of the internal policy on ChatGPT.
IBM is not the first company to look at regulating the use of ChatGPT. On 2 May, Bloomberg reported that South Korea’s Samsung Electronics had banned the use of ChatGPT among employees after sensitive internal data was found to have been leaked. On 25 January, Insider reported that Amazon had issued a similar internal email, asking staff not to use ChatGPT over concerns about sharing sensitive internal data with OpenAI. On 18 May, The Wall Street Journal reported that Apple had taken a similar route.
Global banks Goldman Sachs, JP Morgan and Wells Fargo are also reported to have restricted internal use of ChatGPT, out of concern that sensitive client and customer data could be leaked to OpenAI.
IBM’s policy comes as a report published on 20 June by Singapore-based cybersecurity firm Group-IB claimed that data from over 100,000 ChatGPT accounts had been stolen and sold on dark web marketplaces.
However, on 22 June, OpenAI said the stolen data was a result of “commodity malware on devices, and not an OpenAI breach.”
Explaining why such internal bans are taking place, Jaya Kishore Reddy, co-founder and chief technology officer at California-based AI chatbot developer Yellow.ai, said, “There are a lot of chances that generative AI tools can generate misinformation. There is an accuracy problem, and people may even misinterpret the generated information. Further, the data fed into these platforms are used to train and fine-tune responses — this may result in leakage of a company’s confidential information.”
On 27 February, Mint reported that companies are wary of deploying tools such as ChatGPT, citing concerns such as hallucination of data, potentially inaccurate and misleading information, and the lack of safeguards on retrieval or deletion of sensitive corporate data.
Bern Elliot, vice-president and analyst at Gartner, said at the time, “It is important to understand that ChatGPT is built without any real corporate privacy governance, which leaves all the data that it collects and is fed without any safeguard. This would make it challenging for organizations such as media, or even pharmaceuticals, since deploying GPT models in their chatbots will leave them with no safeguard in terms of privacy. A future version of ChatGPT, backed by Microsoft through its Azure platform, which could be offered to businesses for integration, could be a safer bet in the near future.”
Since then, OpenAI has introduced better privacy controls. On 25 April, the company said via a blog post that users can turn off conversation history to have their usage data permanently deleted from its servers after 30 days. It also affirmed that a “for business” version of ChatGPT is under development, which would allow companies greater control over their data.
Yellow.ai’s Reddy added that companies are presently opting for enterprise-grade application programming interfaces (APIs) from providers such as OpenAI that offer data-security assurances, or building their own in-house models.