The powerful 7U dual-socket ESC N8A-E12 NVIDIA HGX server




ASUS and its subsidiary Taiwan Web Service Corporation (TWSC) have launched the GenAI POD Solution, a holistic approach to the growing demand for AI supercomputing. ASUS will showcase its NVIDIA MGX-powered AI servers, the ESC NM1-E1 and ESR1-511N-M1, alongside the ESC N8A-E12 and RS720QN-E11-RS24U NVIDIA HGX GPU servers. TWSC's distinct resource-management architecture and software stacks enable these solutions to handle a wide range of generative AI and large language model (LLM) training workloads. The integrated solutions combine robust software platforms with state-of-the-art thermal designs that can be adapted to an organisation's specific requirements, giving clients complete data centre solutions to excel in their AI endeavours.


ASUS NVIDIA MGX servers

Personalised AI systems to meet specific needs: the ASUS ESC NM1-E1 is built on the NVIDIA MGX modular architecture and powered by an NVIDIA GH200 Grace Hopper Superchip. The combination of NVIDIA NVLink-C2C technology and the Grace CPU's 72 Arm Neoverse V2 cores delivers exceptional performance and efficiency, along with dramatic advances in memory capacity and bandwidth, making it a strong choice for AI-driven data centres, HPC, data analytics, and NVIDIA Omniverse applications.

Another highlight of the ASUS presentation is the ESR1-511N-M1, a server designed for large-scale AI and HPC applications, including deep-learning (DL) training and inference and data analytics, which also draws on the power of the NVIDIA GH200 Grace Hopper Superchip. In keeping with ESG trends, it pairs an enhanced thermal solution with a lower power usage effectiveness (PUE) for superior performance. Its flexible 1U architecture maximises compute density and enables fast, simple data transfers by combining three PCI Express (PCIe) 5.0 x16 slots with up to four E1.S local drives and NVIDIA BlueField-3 networking.

ASUS NVIDIA HGX servers


The ASUS ESC N8A-E12 is a powerful 7U dual-socket server that pairs two AMD EPYC 9004 processors with eight NVIDIA H100 Tensor Core GPUs, built to support generative AI. With its optimised server design, data centre infrastructure support, and AI software development capabilities, it delivers end-to-end eight-GPU H100 capability, while an enhanced thermal solution guarantees reduced PUE and maximum efficiency. Created for advancements in AI and data science, this HGX server features a dedicated one-GPU-to-one-NIC configuration that maximises performance for compute-intensive tasks.
Designed for high-performance, compute-intensive operations, the ASUS RS720QN-E11-RS24U is a high-density server using the NVIDIA Grace CPU Superchip with NVIDIA NVLink-C2C technology. With four nodes in a compact 2U4N chassis, it is an inventive solution for data centres, web servers, virtualised clouds, and hyperscale scenarios, offering dual-CPU performance from each Grace Superchip and compatibility with PCIe 5.0.


ASUS D2C cooling solution

Direct-to-chip (D2C) cooling offers a quick, straightforward approach that leverages existing infrastructure and permits rapid implementation with reduced PUE. The ASUS RS720QN-E11-RS24U supports cold plates and manifolds, providing a range of cooling options. Moreover, ASUS servers support a rear-door heat exchanger that works with standard rack-server designs, so the only change needed to enable liquid cooling in a rack is the rear door.

As a result, there is no need to swap out entire racks. Committed to reducing data centre PUE, carbon emissions, and energy consumption, ASUS provides enterprise-grade, complete cooling solutions and collaborates closely with the sector's top cooling suppliers to facilitate the design and construction of greener data centres.


GenAI POD solutions

 
Through the FORERUNNER 1 and TAIWANIA-2 supercomputer series of the National Centre for High-performance Computing (NCHC), TWSC has extensive expertise configuring and operating large-scale AI HPC infrastructure as an NVIDIA Partner Network cloud partner (NCP). Furthermore, TWSC's AI Foundry Service enables rapid deployment of AI supercomputing and flexible model optimisation for AI 2.0 applications, letting users tailor AI capacity to their own needs.

TWSC's GenAI POD solutions offer enterprise-grade AI infrastructure with fast rollouts and comprehensive end-to-end services, all while upholding strict cybersecurity and high-availability protocols. Built on ASUS hardware, they have enabled success stories in scientific, medical, and educational settings. Businesses seeking a stable and sustainable generative AI platform will find TWSC's technology compelling thanks to its all-inclusive cost controls, which minimise OPEX and optimise power utilisation.

News Source: NVIDIA HGX
