ServiceNow taps Nvidia to build custom gen AI models for enterprises
American software firm ServiceNow on Thursday said that it is partnering with chipmaker Nvidia to develop generative AI capabilities for different enterprise functions in an effort to optimize business processes and workflows.
The partnership will see ServiceNow leveraging Nvidia’s software, services and infrastructure to develop custom large language models (LLMs) trained on data specific to its Now Platform, which can help automate IT workflows across departments.
While popular generative AI models (ChatGPT, GPT-4, etc.) can be leveraged for similar functions, they typically learn from public-domain data. For enterprise use cases this may not be very effective, as the models have not been exposed to internal company data, the company said. For example, if an employee asks how to connect to the company’s VPN or about an internal policy, public models may not be able to answer the question accurately.
Through the partnership with Nvidia, ServiceNow looks to address this gap by building custom generative AI models for enterprises, customized to learn from a company’s vocabulary and provide accurate, domain-specific answers. Likewise, a model could deliver customized learning and development recommendations, such as courses, based on natural-language queries and information from an employee’s profile.
“As adoption of generative AI continues to accelerate, organizations are turning to trusted vendors with battle-tested, secure AI capabilities to boost productivity, gain a competitive edge, and keep data and IP secure,” said CJ Desai, president and chief operating officer of ServiceNow.
“Together, NVIDIA and ServiceNow will help drive new levels of automation to fuel productivity and maximize business impact,” he added.
As a customer of ServiceNow, Nvidia also plans to share its data for initial research and development of custom models aimed at handling IT-specific use cases. The companies are starting off with ticket summarization, a process that takes about seven to eight minutes when done manually by agents but could be instantly handled by AI models.
For this, ServiceNow is using Nvidia AI Foundations cloud services and the Nvidia AI Enterprise software platform, which includes the NeMo framework, a toolkit for developers to build, customize, and deploy generative AI models. The custom models will run on hybrid-cloud infrastructure consisting of Nvidia DGX Cloud and on-premises Nvidia DGX SuperPOD AI supercomputers, the company said in a statement.
“IT is the nervous system of every modern enterprise in every industry,” said Jensen Huang, founder and CEO of Nvidia, in a statement, adding that the companies’ collaboration to “build super-specialized generative AI for enterprises will boost the capability and productivity of IT professionals worldwide using the ServiceNow platform”.