
Nvidia AI Enterprise containerizes AI with Tanzu support

Validates Domino Data Lab Enterprise MLOps software

Nvidia has updated its software suite for enterprise AI. Nvidia AI Enterprise 1.1 adds production support for VMware’s vSphere with Tanzu. vSphere is VMware’s cloud computing virtualization platform. vSphere with Tanzu creates a Kubernetes control plane in the vSphere hypervisor layer to help manage containerized cloud applications. 

“Now, enterprises can run accelerated AI workloads on vSphere, running in both Kubernetes containers and virtual machines with NVIDIA AI Enterprise to support advanced AI development on mainstream IT infrastructure,” said John Fanelli, vice president of Nvidia’s virtualization product group.
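For a concrete picture of what a containerized, GPU-accelerated workload on such a cluster can look like, the sketch below uses the Kubernetes Python client to request a single GPU for a training pod. It is a minimal illustration, not Nvidia's or VMware's tooling: the pod name, image and namespace are placeholders, and it assumes the Nvidia device plugin is installed so GPUs appear as the standard nvidia.com/gpu resource.

```python
# Minimal sketch: scheduling a GPU-backed container on a Kubernetes cluster,
# such as one provisioned by vSphere with Tanzu. Pod name, image and namespace
# are placeholders; GPUs are assumed to be exposed as "nvidia.com/gpu" by the
# Nvidia device plugin.
from kubernetes import client, config

config.load_kube_config()  # kubeconfig for the Tanzu-provisioned cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="ai-training-demo"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="example.registry.local/ai-training:latest",  # placeholder image
                command=["python", "train.py"],
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # request one GPU from the device plugin
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)
```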

Fanelli said that vSphere with Tanzu support was a top customer-requested feature and will also be added to Nvidia LaunchPad, the company’s program for fast-tracking AI adoption. Through LaunchPad, qualified enterprises can test and prototype AI workloads at no charge in curated labs designed for AI practitioners and IT administrators, hosted at nine Equinix locations worldwide.

Enterprise AI use cases

Common enterprise use cases for AI include systems that power customer service chatbots, explained Anne Hecht, Nvidia’s senior director of Enterprise projects. Image classification, sentiment analysis and price prediction are also common uses, Hecht told RCR Wireless News.

“We’ve built out curated labs to address those mainstream use cases that we see the most often and have the most value right away,” said Hecht. “We’ve built out infrastructure in nine data centers so customers can come test labs remotely.”

This saves Nvidia’s customers from getting held up by supply chain issues before they can get hardware on premises, Hecht said.

Other enhancements in AI Enterprise 1.1 include validation for Domino Data Lab’s Enterprise MLOps platform, which centralizes enterprise data science work and infrastructure. MLOps applies the DevOps model of continuous integration and continuous deployment to refine and iterate machine learning (ML) workflows.
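As a rough, framework-agnostic illustration of the pattern MLOps automates, the sketch below runs the train-evaluate-gate step that a CI/CD pipeline would execute on every change to code or data. The model, dataset and accuracy threshold are placeholders and are not specific to Domino Data Lab or Nvidia AI Enterprise.

```python
# Minimal sketch of a CI/CD gate for ML: train a candidate model, evaluate it,
# and fail the pipeline if it does not clear the promotion threshold.
# Model, dataset and threshold are illustrative placeholders.
import sys

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ACCURACY_GATE = 0.90  # promotion threshold; a real pipeline would version this

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))

print(f"validation accuracy: {accuracy:.3f}")
if accuracy < ACCURACY_GATE:
    sys.exit(1)  # fail the CI job so the candidate model is not deployed
# otherwise the pipeline would package and promote the model to serving
```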

Nvidia has also certified systems from Cisco and Hitachi Vantara for use with AI Enterprise, including the Cisco UCS C240 M6 rack server and the Hitachi Advanced Server DS220 G2. Both systems use Nvidia’s A100 Tensor Core GPUs. Cisco’s system is aimed at big data analytics and high-performance computing (HPC); Hitachi’s targets a blend of compute and storage needs.

Telco interest in AI Enterprise

Hecht told RCR Wireless News that Nvidia is working with telco partners to make AI Enterprise ready for operator use, noting that operators value the control and security the suite affords. She said the pairing of Tanzu and AI Enterprise is attractive to communications service providers (CSPs).

“We have a number of large telcos and we’re working on a solution with them, because there is additional functionality some of them want that crosses beyond AI but also uses this platform to run visualization workloads as well,” said Hecht.

Hecht stopped short of offering a timetable for those new telco-specific features, however.

VMware announced Tanzu Application Platform earlier this month. The app modernization toolkit combines various components to make it easier for developers to build and manage cloud apps using Kubernetes.
