
Nvidia AI Summit: TCS, TechM, Infy, Yotta, L&T and others partner Nvidia for India's AI advancement

Chip behemoth Nvidia and Mukesh Ambani-led Reliance will jointly build artificial intelligence (AI) infrastructure in India, Nvidia CEO Jensen Huang announced during a chat session with Reliance Industries Chairman Mukesh Ambani at the Nvidia AI Summit 2024 on Thursday.

Huang said it makes complete sense for India to manufacture its own AI, while Ambani agreed: "We can use intelligence to actually bring prosperity to all the people and bring equality to the world." Ambani added that, apart from the US and China, India has the best digital connectivity infrastructure.

Notably, several other leading technology firms also announced new or expanded partnerships with Nvidia at the summit, aiming to advance their AI initiatives and accelerate AI adoption among enterprises across industry sectors.

TCS launches Nvidia business unit to speed up enterprise AI adoption

India's biggest IT services firm, TCS, has expanded its partnership with Nvidia to create customised solutions for the manufacturing, banking, financial services and insurance (BFSI), telecommunications, retail, and automotive sectors, facilitating the swift adoption of AI technologies. This will be managed by a new business unit within TCS' AI-Cloud division, leveraging both companies' strengths.

The unit will develop tailored AI strategies using global centers of excellence, investments in the Nvidia AI platform, and skilled professionals, while integrating TCS' proprietary framework with Nvidia's technology.

Jay Puri from Nvidia noted that this unit will enhance AI and simulation capabilities, promoting innovation in India and globally. Siva Ganesan from TCS emphasised that their expertise at the intersection of business and technology allows them to identify key opportunities for clients.

TechM to set up CoE using Nvidia platform to advance sovereign AI

Tech Mahindra has announced the establishment of a Center of Excellence (CoE) leveraging Nvidia platforms to advance sovereign large language models (LLMs), agentic AI, and physical AI. Utilising the Tech Mahindra Optimised Framework and Nvidia AI Enterprise software, including NeMo, NIM microservices, and RAPIDS, the CoE aims to deliver customised enterprise AI applications that integrate agentic AI, which enhances productivity through autonomous learning and reasoning.

The CoE also employs the Nvidia Omniverse platform to develop interconnected industrial AI digital twins across sectors such as manufacturing, automotive, telecom, healthcare, and finance. Additionally, Tech Mahindra has launched Project Indus 2.0, an advanced AI model focused on Hindi and its dialects, serving sectors like retail and healthcare in India. The LLM aims to improve conversations in Hindi and plans to incorporate agentic workflows and support more dialects.

Atul Soneja, COO of Tech Mahindra, stated, “We are redefining AI innovation by integrating GenAI, industrial AI, and sovereign LLMs into global enterprises.” The company also plans to use the new Nvidia NIM Agent Blueprint for customer service.

Infosys unveils small language models built on Nvidia AI stack

Infosys has launched its small language models — Infosys Topaz BankingSLM and Infosys Topaz ITOpsSLM — built using the Nvidia AI stack. The collaboration leverages Nvidia AI and Infosys Topaz offerings for scaling enterprise AI. The models were developed under the Infosys centre of excellence dedicated to Nvidia technologies and are built to help businesses quickly adopt and scale AI.

The small language models utilise general and industry-specific data, enhanced by Nvidia AI Enterprise and Nvidia AI Foundry in collaboration with Sarvam AI. They are fine-tuned with Infosys data and integrated into existing offerings, such as Infosys Finacle and Infosys Topaz for business and IT operations, creating foundational models for industry-specific applications.
Jay Puri, Executive Vice President, Worldwide Field Operations, NVIDIA, said, “Generative AI and the recent advancements in agentic and physical AI are ushering in a new era of innovation and productivity for enterprises worldwide. NVIDIA’s full-stack AI platform combined with Infosys Topaz empowers businesses to build and deploy custom AI applications that will transform industries, helping businesses unlock their full potential.”
 

LTTS partners with Nvidia for new AI-led experience zone

L&T Technology Services Limited has launched an AI-driven Experience Zone at its Bengaluru design hub to support clients in the Mobility and Technology sectors using the Nvidia AI platform. The Experience Zone offers live demonstrations, interactive displays, and expert consultations to address complex challenges. For example, in healthcare, LTTS' Software Defined Architectures, powered by Nvidia Holoscan, aim to enhance AI-driven diagnostics and real-time data analysis, improving efficiency and access. The telecommunications sector will benefit from generative AI and Nvidia solutions to boost connectivity and 5G integration.

In Mobility, the partnership with Nvidia focuses on safety, automation, and predictive maintenance to enhance operations and passenger experience. LTTS also plans to upskill over 1,000 engineers on the Nvidia platform. "The AI Experience Zone creates an immersive environment for exploring transformative AI applications," said Vishal Dhupar, Managing Director, Asia South, Nvidia.

Yotta, Sarvam to leverage Nvidia tech to develop India's first open-source AI model

Yotta Data Services, a subsidiary of the Hiranandani Group, has partnered with AI startup Sarvam to develop India's first open-source foundational AI model, Sarvam 1. The LLM will be trained on a dataset comprising 4 trillion tokens, managed entirely by an Indian entity with computational resources located in India. The model leverages Nvidia AI technology and will be supported by Yotta's Shakti Cloud infrastructure.

Sarvam 1 is the first foundational model built from scratch by combining the AI capabilities of both companies. It powers AI agents capable of processing ten Indian languages — Hindi, Tamil, Telugu, Malayalam, Punjabi, Odia, Gujarati, Marathi, Kannada, and Bangla — designed to enhance processes such as customer support, feedback collection, and employee engagement.

Sunil Gupta, CEO of Yotta Data Services, noted that Shakti Cloud gives Sarvam AI access to Nvidia accelerated computing, enabling the efficient development of large-scale models. The partnership expands Sarvam AI's access to advanced AI technology in India while ensuring data sovereignty and security, and Shakti Cloud's competitive pricing model and advanced infrastructure give Sarvam a substantial edge in the country.

Sify unveils GPU cloud to boost AI projects

Sify Technologies has launched its new GPU cloud platform, CloudInfinit+AI, to help businesses handle AI workloads more efficiently. Announced at the Nvidia AI Summit in Mumbai, the service provides GPU-as-a-Service (GPUaaS), allowing companies to rent powerful graphics processing units on a pay-as-you-go basis.

The platform is built for tasks that require heavy computing power, like machine learning, data analytics, and deep learning. CloudInfinit+AI offers flexibility for businesses to scale up or down depending on their needs, without having to invest in expensive hardware.

Sify's new service also supports hybrid cloud setups, providing low-latency connections to larger cloud services. The launch comes after Sify became the first service provider in India to be certified as an Nvidia DGX-Ready Data Center.

F5 boosts AI delivery with BIG-IP Next and Nvidia DPUs

F5 has launched BIG-IP Next for Kubernetes, a solution designed to enhance, secure, and optimise data traffic for AI applications in large enterprises and service providers. By integrating Nvidia’s BlueField-3 Data Processing Units (DPUs), F5 aims to improve the efficiency and scalability of AI workloads while ensuring strong security.

This partnership enables BIG-IP Next to manage high-performance networking and traffic control, reducing latency and enhancing resource utilisation for handling large data volumes required by AI models. As a result, organisations can achieve faster AI inferences and a better customer experience.

BIG-IP Next, combined with Nvidia’s DPUs, offers centralised management of AI data traffic, optimising data center resources for quicker AI processing, particularly beneficial for sectors like telecommunications and cloud-native environments. The technology streamlines AI processes, reduces hardware needs, lowers energy consumption, and enhances multi-tenancy, making it ideal for AI-centric environments.

Kunal Anand, F5's Chief Technology and AI Officer, emphasised that this integration enhances observability, control, and performance for AI workloads, providing a powerful solution for managing large-scale AI infrastructure.

ServiceNow expands tie-up with Nvidia to drive adoption of agentic AI

Technology company ServiceNow has expanded its partnership with Nvidia to boost enterprise adoption of Agentic AI. They will use Nvidia NIM Agent Blueprints to develop native AI Agents on the ServiceNow platform, allowing customers to activate business-driven use cases.

Nvidia CEO Jensen Huang noted that the combination of accelerated computing and generative AI is transforming enterprises. The collaboration aims to identify various AI agent use cases, building on six years of joint innovation.

The Now Platform is becoming essential for enterprise transformation in the generative AI landscape. By leveraging Nvidia's AI infrastructure and ServiceNow’s AI platform, the partnership enhances productivity and streamlines workflows across industries.

ServiceNow CEO Bill McDermott emphasised that GenAI offers a significant advantage, with the partnership introducing next-generation agentic AI to enterprises. He highlighted the urgency for CEOs to modernise operations and embrace an AI-driven future through their collaboration. 

Zoho to leverage Nvidia platform to build LLMs for businesses

Chennai-based Zoho Corporation plans to use Nvidia NeMo, part of the Nvidia AI Enterprise software suite, to develop LLMs for its Software-as-a-Service (SaaS) offerings. The company has invested over $100 million in Nvidia GPUs and AI solutions, with an additional $100 million pledged. These LLMs will be available to Zoho's 700,000 global customers across its divisions, ManageEngine and Zoho.com.

Ramprakash Ramamoorthy, Director of AI at Zoho, highlighted the need for LLMs tailored to business applications, stating that the company's technology stack allows for enhanced AI effectiveness through context. Zoho will leverage Nvidia's accelerated computing platform and Hopper GPUs to improve its LLMs and to accelerate other workloads, such as speech-to-text, on Nvidia accelerated computing infrastructure.

