HPE introduces AI cloud for large language models
American technology company Hewlett Packard Enterprise (HPE) on Wednesday launched an AI supercomputer cloud service that will enable organisations of all sizes to pursue AI projects without the cost and complexity of buying, installing, and managing AI-specific hardware themselves.
“We are announcing HPE GreenLake for Large Language Models (LLMs), an HPE-hosted, subscription-based cloud service that will enable enterprises to access HPE Cray XD supercomputers and the AI software they need to build and run large-scale AI models,” the company said in a statement.
The company also announced a set of new services to expand its GreenLake platform, which delivers its on-premises solutions through a cloud-style, subscription-based, as-a-service model.
The announcements were made at the company's flagship HPE Discover 2023 conference in Las Vegas, held from June 20-22, 2023.
The announcements put HPE into direct competition with cloud computing providers such as Amazon Web Services (AWS), Microsoft Azure and Google Cloud, all of which are betting big on AI. Separately, in May, chipmaker Nvidia launched a new AI supercomputer for generative AI workloads, and earlier, in March, Nvidia partnered with Oracle Cloud, Microsoft Azure, Google Cloud, and others to make its AI supercomputers available over the cloud.
“HPE GreenLake for Large Language Models allows our customers to rapidly train, tune, and deploy large language models on demand using a multi-tenant instance of our supercomputing platform — truly a supercomputing cloud combined with our AI software,” Justin Hotard, HPE's executive vice president and general manager of High Performance Computing, AI & Labs, said in a news briefing.
Hotard said the company will draw on its experience in supercomputers to offer a service built specifically for large language models, the technology behind services like ChatGPT.
Hotard said the company hopes to attract enterprises of all sizes, from startups to Fortune 500 companies and even public sector organisations, to HPE GreenLake for LLMs. Some customers may run AI workloads on-premises but turn to the AI supercomputing cloud service for bursting capability, he said.
The company expects HPE GreenLake for LLMs to be available by the end of 2023 in North America and in early 2024 in Europe. Users will be able to run AI and HPC jobs on hundreds or thousands of CPUs or GPUs at once, he said.
In 2019, HPE announced plans to make every product it sold available as a service, and its success prompted other hardware makers, including Dell and Cisco, to pursue similar strategies. Over time, HPE has added more products and services to GreenLake, including software-as-a-service (SaaS) offerings.
At this year's HPE Discover event, the company also unveiled enhancements to HPE GreenLake for Private Cloud Enterprise, a managed service introduced last year in which HPE designs, installs, and manages private clouds for customers in their data centres, edge locations, and colocation sites. The new version, called the Business Edition, will support public clouds, allowing customers to spin up virtual machines on their private clouds as well as on public clouds, the company said.
Specifically, HPE is adding support for AWS, Azure, and Google Cloud, allowing customers using the private cloud to also self-provision workloads on those public clouds.
HPE also announced an expanded partnership with data centre company Equinix, which will offer both versions of HPE GreenLake for Private Cloud Enterprise as a pre-provisioned offering in its data centres worldwide. Equinix will begin offering the service in August, the company said.