VMware and Nvidia have expanded their alliance to make it easier for enterprises to run GPU-accelerated AI applications in existing data centre infrastructure.
The virtualisation giant announced the expanded alliance Tuesday, saying that the GPU maker has exclusively certified VMware’s vSphere 7 Update 2 to support the new Nvidia AI Enterprise offering. The suite of enterprise-grade AI tools and frameworks enables Nvidia GPU-accelerated applications to run in virtual machines and containers.
VMware’s vSphere 7 Update 2 is also introducing support for Nvidia’s A100 Tensor Core GPU and its Multi-Instance GPU (MIG) feature, which will allow data centre operators to partition an individual A100 into as many as seven isolated instances for use by multiple users.
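For readers curious what MIG partitioning looks like in practice, it is driven by `nvidia-smi` on the host. The commands below are a hedged sketch, not VMware-specific setup: exact profile names and IDs vary by GPU model and driver version, and enabling MIG requires root and a GPU reset.

```shell
# Enable MIG mode on GPU 0 (requires a GPU reset; run as root).
nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this driver/GPU supports, to confirm
# the smallest profile name before creating instances.
nvidia-smi mig -lgip

# Create seven instances of the smallest profile (1g.5gb on a 40 GB A100),
# the maximum split; -C also creates the matching compute instances.
nvidia-smi mig -cgi 1g.5gb,1g.5gb,1g.5gb,1g.5gb,1g.5gb,1g.5gb,1g.5gb -C

# Verify the resulting GPU instances.
nvidia-smi mig -lgi
```

Each resulting instance has its own memory and compute slice, which is what lets a single A100 be shared safely across multiple users or virtual machines.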
Nvidia said it developed Nvidia AI Enterprise alongside VMware as part of a first-of-its-kind industry collaboration that will allow enterprises to move AI applications from bare-metal servers to virtualised environments with very little performance impact. The suite includes solutions that support a broad range of industries, from health care and manufacturing to financial services.
Justin Boitano, vice president and general manager of Enterprise and Edge Computing at Nvidia, told CRN USA that the vSphere exclusivity agreement means that the Nvidia AI Enterprise offering is not available on any other virtualisation platform. He declined to say when the exclusivity agreement would expire.
“We’re working together to make this as easy to consume as possible for enterprises that want to go on this journey of using AI to improve efficiency,” he said in a briefing with journalists last week.
The new software certification work means that Nvidia AI Enterprise is primed to run on servers from Dell Technologies, Hewlett Packard Enterprise, Supermicro and other server makers that have been certified through the GPU maker’s recently launched Nvidia-Certified Systems program.
Lee Caswell, vice president of marketing for VMware’s cloud platform business unit, said the new collaboration around Nvidia AI Enterprise “is going to be incredibly powerful in reducing both real and perceived risk” when it comes to running AI applications in data centres.
“We’re doing the work behind the scenes to reduce the real risk through certification, integration, testing, and then perceived risk is how we come to market together to show that our companies are both ready to support the joint solution,” he said.
Boitano said Nvidia AI Enterprise can significantly reduce the amount of work required for companies to deploy AI applications into production environments. For instance, he said, Nvidia’s Transfer Learning Toolkit, which is available in the suite, can reduce the amount of time it takes for a company to build an AI model from 80 weeks to eight weeks.
“We’re taking this cloud-native deployment to heart in trying to make sure everything’s containerized and can be orchestrated in cloud native ways going forward to make setup and deployment easy,” he said, adding that support for Nvidia AI Enterprise will eventually come to vSphere with Tanzu for running AI applications in containers on the virtualisation platform.
Nvidia AI Enterprise is now available as a perpetual license at US$3,595 per CPU socket, and enterprise business standard support for the software suite is US$899 a year per license.
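As a back-of-the-envelope illustration of how that pricing adds up (the deployment size here is hypothetical; only the per-socket licence and annual support figures come from the announcement):

```python
# Rough cost model for Nvidia AI Enterprise licensing, per the announced
# pricing: US$3,595 perpetual licence per CPU socket, plus US$899 per
# licence per year for enterprise business standard support.
LICENSE_PER_SOCKET = 3_595   # one-time, USD
SUPPORT_PER_LICENSE = 899    # per year, USD


def total_cost(sockets: int, years: int) -> int:
    """Perpetual licences for each socket, plus support over the given years."""
    return sockets * (LICENSE_PER_SOCKET + SUPPORT_PER_LICENSE * years)


# Hypothetical example: a four-node cluster with two CPU sockets per node,
# costed over three years of support.
sockets = 4 * 2
print(total_cost(sockets, 3))  # 8 * (3595 + 899*3) = 50336
```

In other words, for this hypothetical eight-socket cluster, three years of licensed, supported use would run a little over US$50,000.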