What is NVIDIA MGX?
NVIDIA MGX is NVIDIA's modular server reference architecture. Think of MGX as a template for building flexible, high-performance data-center servers: with this architecture, system builders can assemble machines that meet the growing demands of workloads such as AI and HPC.
ASUS has announced that it will exhibit at booth #730 at NVIDIA GTC, the global AI conference, where it will showcase its latest solutions. On display will be the pinnacle of ASUS GPU server technology, the ESC NM1-E1 and ESC NM2-E1, which build on the NVIDIA MGX modular reference architecture to elevate AI supercomputing to new heights.
To address the growing demand for generative AI, ASUS leverages the latest NVIDIA technologies, such as the GB200 Grace Blackwell Superchip, the B200 Tensor Core GPU, and the H200 NVL, to deliver optimized AI server solutions that promote AI adoption across a variety of sectors.
Customized AI solutions using the brand-new ASUS server with NVIDIA MGX technology
The ESC NM1-E1 and ESC NM2-E1, the newest ASUS 2U servers powered by NVIDIA MGX, use the NVIDIA GH200 Grace Hopper Superchip for excellent performance and efficiency. The NVIDIA Grace CPU, built on 72 Arm Neoverse V2 cores (Armv9 architecture with the SVE2 Scalable Vector Extensions), connects coherently to the Hopper GPU via NVIDIA NVLink-C2C technology. ASUS MGX-powered servers also integrate NVIDIA BlueField-3 DPUs and ConnectX-7 network adapters to deliver a blistering 400 Gb/s of network throughput, enabling enterprise AI development and deployment.
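To put the quoted 400 Gb/s figure in perspective, a back-of-the-envelope calculation (not from the source, purely illustrative) shows how quickly a large training dataset could move over such a link in the ideal case:

```python
# Toy calculation: ideal transfer time at the 400 Gb/s line rate quoted
# for BlueField-3 DPUs / ConnectX-7 adapters. Real-world throughput is
# lower due to protocol overhead, so treat this as an upper bound.

LINE_RATE_GBPS = 400  # gigabits per second (network convention: 10**9 bits)

def transfer_seconds(dataset_bytes: int, line_rate_gbps: float = LINE_RATE_GBPS) -> float:
    """Ideal time to move `dataset_bytes` at the given line rate."""
    bits = dataset_bytes * 8
    return bits / (line_rate_gbps * 1e9)

one_tb = 10**12  # 1 TB (decimal terabyte)
print(f"1 TB at 400 Gb/s: {transfer_seconds(one_tb):.1f} s")  # 20.0 s
```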
Combined with NVIDIA AI Enterprise, an end-to-end, cloud-native software platform for developing and deploying enterprise-grade AI applications, the MGX-powered ESC NM1-E1 offers unmatched flexibility and scalability for AI-driven data centers, HPC, data analytics, and NVIDIA Omniverse applications.
Cutting-edge liquid cooling technology
The rapid growth of AI applications has increased the need for sophisticated server cooling. ASUS direct-to-chip (D2C) cooling stands out as a fast, straightforward solution: because D2C can be deployed quickly, it can lower a data center's power-usage effectiveness (PUE) ratio. The ASUS ESC N8-E11 and RS720QN-E11-RS24U servers accept cold plates and manifolds, allowing a variety of cooling options.
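For readers unfamiliar with the metric, PUE is the ratio of total facility power to IT equipment power, so shrinking the cooling share of the power budget pushes PUE toward its ideal value of 1.0. A minimal sketch with made-up numbers (not ASUS measurements):

```python
# Illustrative PUE sketch: PUE = total facility power / IT equipment power.
# Moving from air cooling to direct-to-chip (D2C) liquid cooling mainly
# reduces the cooling term. All numbers below are invented for illustration.

def pue(it_kw: float, cooling_kw: float, other_kw: float = 0.0) -> float:
    """Power-usage effectiveness for a facility with the given power draws (kW)."""
    total_kw = it_kw + cooling_kw + other_kw
    return total_kw / it_kw

air_cooled = pue(it_kw=1000, cooling_kw=500, other_kw=100)  # 1.6
d2c_cooled = pue(it_kw=1000, cooling_kw=200, other_kw=100)  # 1.3
print(f"air-cooled PUE: {air_cooled:.2f}, D2C PUE: {d2c_cooled:.2f}")
```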
AI software solutions with confidence
With its deep expertise in AI supercomputing, ASUS offers rack integration and efficient server architectures for data-intensive workloads. At GTC, ASUS will demonstrate a no-code AI platform with an integrated software stack on the ESC4000A-E12, allowing companies to accelerate AI development across LLM pre-training, fine-tuning, and inference while lowering risk and time-to-market, without having to start from scratch. Furthermore, ASUS offers a complete solution with tailored software to handle LLMs of various sizes, from 7B and 33B to over 180B parameters, enabling smooth data dispatching across servers.
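The 7B/33B/180B model sizes mentioned above translate directly into hardware requirements. As a hedged rule of thumb (not an ASUS sizing guide), the GPU memory needed just to hold a model's weights is roughly parameter count times bytes per parameter; activations, KV cache, and optimizer state add more on top:

```python
# Rule-of-thumb sketch: GPU memory floor for holding model weights.
# Actual requirements are higher (activations, KV cache, optimizer state),
# so treat these figures as a lower bound, not a deployment budget.

BYTES_PER_PARAM = {"fp32": 4, "fp16": 2, "int8": 1}

def weight_memory_gb(n_params: float, dtype: str = "fp16") -> float:
    """Approximate memory (GB, decimal) to store `n_params` weights in `dtype`."""
    return n_params * BYTES_PER_PARAM[dtype] / 1e9

for billions in (7, 33, 180):
    gb = weight_memory_gb(billions * 1e9)
    print(f"{billions}B params @ fp16: ~{gb:.0f} GB of weights")
# 7B -> ~14 GB, 33B -> ~66 GB, 180B -> ~360 GB (weights only)
```

The 180B case makes clear why such models must be sharded across multiple GPUs or servers, which is where fast interconnects and coordinated data dispatching matter.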
The software stack optimizes the allocation of GPU resources for fine-tuning so that AI workloads and applications run without wasting resources, improving efficiency and return on investment (ROI). Additionally, ASUS's software-hardware synergy gives organizations the freedom to choose the AI capabilities that best suit their requirements.
By optimizing the allocation of dedicated GPU resources for AI training and inferencing, this software approach improves system performance. The integrated software-hardware synergy lets enterprises of all sizes, including SMBs, adopt sophisticated AI capabilities easily and effectively, meeting a wide variety of AI training demands.
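The kind of GPU allocation described above can be sketched as a simple placement problem. The toy first-fit allocator below is purely illustrative; the actual scheduling policy of the ASUS software stack is not documented in this article, and the job names are invented:

```python
# Toy first-fit sketch of GPU allocation: pack jobs' GPU requests onto
# servers so capacity is not left idle. Illustrative only -- not the
# actual ASUS scheduling policy.

def first_fit(jobs: list, gpus_per_server: int, n_servers: int):
    """Assign each (job_name, gpus_needed) pair to the first server with room."""
    free = [gpus_per_server] * n_servers   # free GPUs per server
    placement = {}
    for name, need in jobs:
        for server, avail in enumerate(free):
            if avail >= need:
                free[server] -= need
                placement[name] = server
                break
        else:
            placement[name] = None         # no capacity: queue or scale out
    return placement, free

# Hypothetical mix of fine-tuning and inference jobs on two 4-GPU servers.
jobs = [("finetune-33b", 4), ("infer-7b", 1), ("finetune-7b", 2), ("infer-7b-2", 1)]
placement, free = first_fit(jobs, gpus_per_server=4, n_servers=2)
print(placement)  # finetune-33b fills server 0; the rest pack onto server 1
```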
ASUS will also present its newest innovations, including the ASUS ESC NM1-E1 GPU server. Built on NVIDIA's MGX modular reference architecture, this cutting-edge server brings new levels of capability and performance to AI supercomputing.
ASUS offers a wide range of server solutions to help organizations build powerful generative AI systems, from entry-level machines to high-performance GPU servers and liquid-cooled rack solutions for a variety of computing demands. Its experience in MLPerf benchmarks allows ASUS to optimize hardware and software ecosystems for large-language-model (LLM) training and inferencing, providing smooth integration of holistic AI solutions for AI supercomputing.
ASUS NVIDIA MGX-powered server for tailored AI solutions
The ASUS ESC NM1-E1, a 2U server powered by NVIDIA MGX, features the GH200 Grace Hopper Superchip for exceptional performance and efficiency. The NVIDIA Grace CPU integrates 72 Arm Neoverse V2 cores (Armv9, with the SVE2 Scalable Vector Extensions) and links to the GPU via NVLink-C2C technology. ASUS MGX-powered servers with NVIDIA BlueField-3 DPUs and ConnectX-7 network adapters provide 400 Gb/s of data throughput for enterprise AI development and deployment. With NVIDIA AI Enterprise, an end-to-end, cloud-native software platform for designing and delivering enterprise-grade AI applications, the ESC NM1-E1 gives AI-driven data centers, HPC, data analytics, and NVIDIA Omniverse applications unprecedented flexibility and scalability.
Compact GH200 1U server for large-scale AI and HPC
In a 1U, 800 mm-deep chassis, the ASUS ESR1-511N-A1 supports deep-learning (DL) training, inference, and HPC with the NVIDIA GH200 Grace Hopper Superchip. The newest NVIDIA Superchip with NVLink-C2C technology provides coherent memory, high bandwidth, and low latency. Its compact size, high density, scalability, and rack-friendly design allow smooth integration into existing infrastructure.
AI advancement with an end-to-end eight-GPU NVIDIA HGX H100 server
The ASUS ESC N8A-E12 is a powerful 7U dual-socket server with two AMD EPYC 9004 processors and eight NVIDIA H100 Tensor Core GPUs. To maximize throughput for compute-heavy jobs powering AI and data science, it uses a dedicated one-GPU-to-one-NIC topology. With its advanced cooling methods and purpose-built components, the ESC N8A-E12 delivers thermal efficiency, scalability, and performance while reducing operating expenses.
Advanced AI infrastructure empowers businesses
The ASUS ESC8000A-E12P server has top-tier GPUs, fast GPU interconnects, and a high-bandwidth fabric for corporate AI workloads. This powerful server supports up to eight dual-slot GPUs and offers scalable configurations for diverse workloads. It supports NVIDIA NVLink Bridge or AMD Infinity Fabric Link for enhanced performance scaling, making it ideal for AI and HPC environments.
Next-generation SupremeRAID servers
AMD EPYC 9004 processors power the ASUS RS720A-E12-RS24 server, which features SupremeRAID by Graid Technology for high throughput, low latency, and scalability. Paired with BeeGFS, the parallel file system known for its scalability and performance, the RS720A-E12-RS24 removes the RAID bottleneck to deliver maximum SSD performance without consuming CPU cycles. ASUS and Graid Technology combine their strengths to deliver exceptional storage efficiency and reliability.
This powerful server supports up to 128 Zen 4c cores and a 400-watt TDP per socket, along with DDR5-4800 memory. It provides 24 drive bays and nine PCIe 5.0 slots for expansion. Advanced air cooling, remote management, and multi-GPU support make it well suited to AI and HPC applications.
Innovative direct-to-chip cooling multi-node solutions
The four-node ASUS RS720QA-E12-RS8U server is suited to CDN, HCI, and cloud applications, improving IT flexibility and performance. Direct liquid cooling keeps high-TDP CPUs at peak data-center performance. Each node has dual CPUs, ample memory, and extensive networking options. This HPC data-center server optimizes rack space and uses direct-to-chip (D2C) cooling to reduce PUE and operating costs, supporting sustainable-energy objectives.
ASUS is thrilled to work with industry partners to provide a broad range of server solutions, cooling modules, and bespoke data-center designs with unmatched efficiency and performance for the digital age.