Generative AI putting firms’ sensitive information at risk: Report
Generative artificial intelligence (AI) tools like OpenAI’s ChatGPT may be putting companies’ confidential customer information and trade secrets at risk, Bloomberg reported quoting a study by Israeli venture firm Team8.
The report said that companies using such tools may be susceptible to data leaks and lawsuits, and that the chatbots could be exploited by hackers to access sensitive information. Team8’s study noted that chatbot queries are not currently being fed into large language models to train AI, since the models in their present form cannot update themselves in real time. This, however, may not hold true for future versions of such models, it added.
The report also said that integrating generative AI tools into third-party applications, as with the Microsoft Bing search engine and Microsoft 365 tools, poses a heightened threat of information leakage. Further, generative AI could increase discrimination, expose companies to copyright-related legal risks, and harm a company’s reputation.
Team8’s report lists many leaders and executives of US companies, including Ann Johnson, a corporate vice president at Microsoft, as contributors. Microsoft has invested billions of dollars in ChatGPT creator OpenAI since 2019.
Notably, it was reported earlier this month that employees of electronics company Samsung came under fire for accidentally leaking confidential data. The Register reported three separate incidents within a span of 20 days. Engineers in Samsung’s semiconductor division had been allowed to use ChatGPT to fix source code issues.
Samsung has now restricted employees’ use of the AI chatbot. The company is also said to be working on a ChatGPT-like tool for coding assistance and boosting productivity. It will be available for internal use by company employees, so that activity on the tool can be supervised for security.