HPE Private Cloud AI with NVIDIA AI Computing

 

HPE Private Cloud AI

Hewlett Packard Enterprise and NVIDIA today unveiled NVIDIA AI Computing by HPE, a portfolio of co-developed AI solutions and joint go-to-market integrations that help businesses adopt generative AI more quickly.

One of the portfolio’s standout products is HPE Private Cloud AI, a first-of-its-kind offering that combines HPE’s AI storage, compute, and the HPE GreenLake cloud with the deepest integration to date of NVIDIA AI computing, networking, and software. The solution helps businesses of every size build and deploy generative AI applications sustainably, with greater energy efficiency and flexibility. HPE Private Cloud AI includes a self-service cloud experience with full lifecycle management and is available in four right-sized configurations to support a wide range of AI workloads and use cases. It is supported by the new OpsRamp AI copilot, which helps IT operations optimise workload and IT efficiency.

All NVIDIA AI Computing by HPE offerings and services will be made available through a joint go-to-market strategy that spans sales teams, channel partners, training, and a global network of system integrators, including Deloitte, HCLTech, Infosys, TCS, and Wipro, that can help businesses across a range of industries run complex AI workloads.

NVIDIA founder and CEO Jensen Huang joined HPE President and CEO Antonio Neri in announcing NVIDIA AI Computing by HPE during the HPE Discover keynote. This announcement signifies the growth of a multi-decade collaboration and underscores the significant effort and resource commitment from both organisations.

“Generative AI holds immense promise for enterprise transformation, but fragmented AI technology creates too many risks and barriers to large-scale enterprise adoption and can jeopardise a company’s most valuable asset: its proprietary data,” Neri stated. “HPE and NVIDIA co-developed a turnkey private cloud for AI to unleash the immense potential of generative AI in the enterprise. This will enable enterprises to focus their resources on developing new AI use cases that can boost productivity and unlock new revenue streams.”

According to Huang, “generative AI and accelerated computing are fueling a fundamental transformation as every industry races to join the industrial revolution. Never before have NVIDIA and HPE integrated our technologies so thoroughly, combining them with HPE’s private cloud technology to give enterprise clients and AI professionals access to the most cutting-edge computing infrastructure and services to push the boundaries of AI.”

A Private Cloud AI portfolio co-developed by HPE and NVIDIA

HPE Private Cloud AI offers a unique, cloud-based experience that helps enterprises manage the risks of AI while accelerating innovation and return on investment. The solution provides:

  • Support for inference, fine-tuning, and RAG AI workloads that use private data (a generic sketch of the RAG pattern follows this list).
  • Enterprise control over data privacy, security, and governance requirements.
  • A proven cloud experience with ITOps and AIOps capabilities to increase productivity.
  • A fast, flexible consumption model to capture future AI growth and opportunities.
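
To make the RAG item above concrete, here is a minimal, generic sketch of the pattern such a workload follows: embed private documents, retrieve the passages most relevant to a query, and use them as grounding context for a model. It is an illustration only, not HPE’s or NVIDIA’s implementation; the toy embed() function and sample documents are placeholders for a real embedding model and a private corpus.

```python
# Generic RAG sketch (not HPE's implementation): retrieve the most relevant
# private documents for a query and build a grounded prompt for an LLM.
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy bag-of-words embedding via feature hashing; replace with a real embedding model."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# Private documents that stay inside the on-premises environment.
documents = [
    "Warranty claims rose 12% in Q3, driven by the EU region.",
    "The new supplier contract reduces component lead time to 10 days.",
    "Employee onboarding now requires security training within 30 days.",
]
doc_vectors = np.stack([embed(d) for d in documents])

query = "What changed in supplier lead times?"
scores = doc_vectors @ embed(query)                 # cosine similarity (vectors are unit-norm)
top_docs = [documents[i] for i in np.argsort(scores)[::-1][:2]]

prompt = "Answer using only this context:\n" + "\n".join(top_docs) + f"\n\nQuestion: {query}"
print(prompt)  # in practice, this prompt would be sent to an inference endpoint such as a NIM service
```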

Curated AI and data software stack in HPE Private Cloud AI

The AI and data software stack begins with the NVIDIA AI Enterprise software platform, which includes NVIDIA NIM inference microservices.

NVIDIA AI Enterprise streamlines and accelerates the development and deployment of production-grade copilots and other GenAI applications. Included with NVIDIA AI Enterprise, NVIDIA NIM provides easy-to-use microservices for optimised AI model inferencing, enabling a smooth transition from prototype to secure deployment of AI models across a range of use cases.
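
As an illustration of how an application consumes a NIM microservice, the sketch below sends a chat completion request to the OpenAI-compatible endpoint a deployed NIM container exposes. The host, port, and model name (meta/llama3-8b-instruct) are assumptions for this example; substitute whatever your deployment actually serves.

```python
# Minimal sketch: querying a locally deployed NVIDIA NIM inference microservice.
# Assumes a NIM container is already running on localhost:8000 and serving a
# chat model named "meta/llama3-8b-instruct" (both are illustrative assumptions).
import requests

NIM_ENDPOINT = "http://localhost:8000/v1/chat/completions"  # OpenAI-compatible route exposed by NIM

payload = {
    "model": "meta/llama3-8b-instruct",  # placeholder model name for this example
    "messages": [
        {"role": "user", "content": "Summarise our Q3 supply-chain incident reports."}
    ],
    "max_tokens": 256,
    "temperature": 0.2,
}

response = requests.post(NIM_ENDPOINT, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```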

HPE AI Essentials software complements NVIDIA AI Enterprise and NVIDIA NIM with a unified control plane that delivers adaptable solutions, ongoing enterprise support, and trusted AI services such as data and model compliance, along with extensible features that keep AI pipelines compliant, explainable, and reproducible throughout the AI lifecycle.

To deliver optimal performance for the AI and data software stack, HPE Private Cloud AI provides a fully integrated AI infrastructure stack that includes NVIDIA Spectrum-X Ethernet networking, HPE GreenLake for File Storage, and HPE ProLiant servers with support for NVIDIA L40S GPUs, NVIDIA H100 NVL Tensor Core GPUs, and the NVIDIA GH200 NVL2 platform.

HPE GreenLake cloud enables a self-service cloud experience

Thanks to HPE GreenLake cloud, HPE Private Cloud AI provides a self-service cloud experience. Through a single, platform-based control plane, HPE GreenLake cloud services deliver manageability and observability to automate, orchestrate, and manage endpoints, workloads, and data across hybrid environments. This includes sustainability metrics for workloads and endpoints.

OpsRamp AI infrastructure observability, HPE GreenLake cloud integration, and copilot assistance

OpsRamp’s IT operations are integrated with HPE GreenLake cloud to deliver observability and AIOps across all HPE products and services. OpsRamp now provides observability for the entire NVIDIA accelerated computing stack, including NVIDIA NIM and AI software, NVIDIA Tensor Core GPUs and AI clusters, and NVIDIA Quantum InfiniBand and NVIDIA Spectrum Ethernet switches. IT administrators gain insights to identify anomalies and monitor their AI infrastructure and workloads across hybrid and multi-cloud environments.
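
As a rough illustration of the kind of anomaly detection described above, the sketch below flags outlying GPU utilisation samples with a simple z-score check. It is a generic AIOps-style example, not OpsRamp’s actual implementation, and the metric values are invented.

```python
# Generic illustration of AIOps-style anomaly detection on GPU utilisation
# metrics (a simple z-score check); not OpsRamp's actual implementation.
import statistics

def find_anomalies(samples, threshold=3.0):
    """Return (index, value) pairs whose z-score exceeds the threshold."""
    mean = statistics.fmean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []
    return [(i, v) for i, v in enumerate(samples) if abs(v - mean) / stdev > threshold]

# GPU utilisation (%) sampled once per minute; only the 99% spike is flagged.
gpu_util = [62, 64, 61, 63, 65, 62, 99, 63, 64, 61, 62, 63]
print(find_anomalies(gpu_util))
```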

The new OpsRamp operations copilot uses NVIDIA’s accelerated computing platform to analyse large datasets for insights through a conversational assistant, boosting productivity for operations management. OpsRamp will also integrate with CrowdStrike APIs so that customers can view a single service map of endpoint security across their entire infrastructure and applications.

Expanded collaboration with global system integrators to accelerate time to value with AI

As part of their strategic AI solutions and services, Deloitte, HCLTech, Infosys, TCS, and Wipro announced their support for the NVIDIA AI Computing by HPE portfolio and HPE Private Cloud AI, with the goal of accelerating enterprises’ time to value in developing industry-focused AI solutions and use cases with clear business benefits.

HPE adds support for NVIDIA’s latest GPUs, CPUs, and Superchips

HPE servers

  • The HPE Cray XD670 is perfect for LLM builders and supports eight NVIDIA H200 NVL Tensor Core GPUs.
  • For larger models or RAG users, the HPE ProLiant DL384 Gen12 server with NVIDIA GH200 NVL2 is the best option.
  • For LLM users seeking flexibility in scaling their GenAI workloads, the HPE ProLiant DL380a Gen12 server, which supports up to eight NVIDIA H200 NVL Tensor Core GPUs, is a great option.
  • HPE will be ready to support the new NVIDIA Blackwell, NVIDIA Rubin, and NVIDIA Vera architectures in addition to the NVIDIA GB200 NVL72 / NVL2.

High-density file storage certified for NVIDIA DGX BasePOD and NVIDIA OVX systems

HPE GreenLake for File Storage carries NVIDIA DGX BasePOD certification and NVIDIA OVX storage validation, giving customers a proven enterprise file storage solution for scaling up AI, GenAI, and GPU-intensive workloads. HPE will be a time-to-market partner for future NVIDIA reference architecture storage certification programmes.
