Microsoft signs deal with CoreWeave for computing infrastructure: Report
Microsoft has agreed to spend potentially billions of dollars over several years on computing infrastructure from startup CoreWeave, people familiar with the matter told CNBC, which reported the deal on Thursday.
Earlier this year, the Windows maker reportedly signed the deal with CoreWeave to secure sufficient computing power for OpenAI, which operates the ChatGPT chatbot. OpenAI relies on Microsoft's Azure cloud infrastructure to meet its extensive computational requirements.
CoreWeave recently made headlines by announcing a successful $200 million fundraising round, secured slightly over a month after the company reached a $2 billion valuation. CoreWeave offers streamlined access to Nvidia's graphics processing units (GPUs), widely regarded as the most effective hardware for running artificial intelligence models.
Amid high demand for AI infrastructure, Microsoft is seeking additional ways to access Nvidia's GPUs. According to CNBC, CoreWeave CEO Michael Intrator declined to comment on the Microsoft deal in a recent interview, but he did say the company's revenue had grown significantly from 2022 to 2023.
According to CoreWeave's website, the company can provide computing power at costs 80% lower than traditional cloud service providers. Among other cards, CoreWeave offers Nvidia's A100 GPUs, which are also available on the Amazon, Google, and Microsoft clouds.
Last year, the launch of OpenAI's ChatGPT sparked a frenzy over generative AI by showcasing the technology's ability to generate sophisticated responses to human prompts. Many tech firms, including Google, have swiftly incorporated generative AI into their offerings, and Microsoft has recently launched chatbots for its own platforms, including Bing and Windows.
Generative AI chatbots like ChatGPT require significant computing power because of the complex models they use to produce responses. Training those models involves large datasets, which drives up computational costs, and serving up-to-date responses in real time adds further demand for computational resources.
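To give a sense of the scale involved, the short Python sketch below applies the common rule of thumb that transformer inference costs roughly 2 FLOPs per model parameter per generated token. The model size, reply length, and utilization figures are illustrative assumptions for this sketch, not numbers from the report.

    # Back-of-envelope estimate of the compute behind one chatbot reply.
    PARAMS = 175e9           # assumed model size in parameters, roughly GPT-3 scale
    TOKENS_PER_REPLY = 500   # assumed prompt plus response length in tokens

    # Rule of thumb: ~2 FLOPs per parameter per generated token at inference time.
    flops_per_reply = 2 * PARAMS * TOKENS_PER_REPLY

    A100_PEAK_FLOPS = 312e12   # dense FP16/BF16 peak of an Nvidia A100, in FLOPs per second
    UTILIZATION = 0.3          # assumed real-world fraction of peak throughput

    seconds_per_reply = flops_per_reply / (A100_PEAK_FLOPS * UTILIZATION)

    print(f"~{flops_per_reply:.1e} FLOPs per reply")
    print(f"~{seconds_per_reply:.1f} GPU-seconds per reply on one A100 at {UTILIZATION:.0%} utilization")

Under those assumed figures, a single reply ties up an A100 for roughly two seconds, which suggests why serving millions of users requires large fleets of GPUs like those CoreWeave rents out.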