New ultrafast memory has been added to Intel data center chips.
DDR5 MRDIMM offers a new, efficient module architecture that enhances system performance and data transfer rates. Multiplexing combines several data signals and transmits them over a single channel, expanding bandwidth without requiring additional physical connections and allowing applications to exceed standard DDR5 RDIMM data rates.
How Intel and industry partners delivered a plug-and-play solution for high-end Xeon CPUs by ingeniously boosting the memory bandwidth of traditional DRAM modules.
Although Intel's primary focus is on the processors, or brains, that drive computers, system memory, or DRAM, is crucial to performance. Given that the number of processing cores has grown more quickly than the memory bandwidth (i.e., the memory bandwidth available per core has decreased), this is especially true for servers.
In demanding computing jobs like weather modeling, computational fluid dynamics, and some types of artificial intelligence, this discrepancy could act as a bottleneck.
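To illustrate the imbalance described above, here is a small sketch with hypothetical core counts and bandwidth figures (not actual Intel product specifications): when core counts grow faster than total memory bandwidth, the bandwidth available to each core shrinks.

```python
# Illustrative only: hypothetical server generations, not real product specs.
servers = [
    # (generation label, cores per socket, total memory bandwidth in GB/s)
    ("Gen A", 28, 140),
    ("Gen B", 56, 205),
    ("Gen C", 128, 340),
]

for name, cores, bw in servers:
    per_core = bw / cores
    print(f"{name}: {cores} cores, {bw} GB/s total -> {per_core:.2f} GB/s per core")
```

Even though total bandwidth rises each generation in this toy example, the per-core share falls steadily, which is the bottleneck MRDIMM targets.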
What is MRDIMM?
Intel specialists have worked with industry partners for years to overcome that limitation. They have created the fastest system memory ever using a novel technique that is expected to become a new open industry standard. The recently introduced Intel Xeon 6 data center CPUs are the first to employ this new memory, called MRDIMM, for enhanced performance in the most plug-and-play manner imaginable. The type of activity that MRDIMMs are most likely to help is memory-bandwidth-bound work, which accounts for "a significant percentage of high-performance computing workloads," according to Intel's Xeon product manager in the Data Center and AI (DCAI) division.
Here is the story of the DDR5 MRDIMM (Multiplexed Rank Dual Inline Memory Module). It almost sounds too good to be true.
Using Friends to Bring Parallelism to System Memory
What Are RDIMMs?
It turns out that the most popular memory modules for data center use, known as RDIMMs, already contain parallel resources, just like modern computers. They simply aren't used that way. Most DIMMs divide their memory chips into two ranks for performance and capacity, explains a senior principal engineer in memory pathfinding in DCAI.
One way to think about ranks: a module's memory chips are divided into two sets, with one set belonging to each rank. RDIMMs can store and retrieve data across both ranks, but not simultaneously.
The mux buffer consolidates each MRDIMM's electrical load, allowing the interface to run faster than an RDIMM's. Additionally, since both ranks can now be read simultaneously, memory bandwidth expands.
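As a rough conceptual sketch (not a hardware model), the multiplexing idea can be pictured as interleaving words fetched from two ranks in parallel onto one shared channel. The function names below are illustrative, not real APIs:

```python
# Conceptual sketch of rank multiplexing; names are hypothetical.

def read_rank(rank, addresses):
    """Simulate fetching a burst of data words from one rank."""
    return [f"rank{rank}-word{a}" for a in addresses]

def mux_buffer(rank0_data, rank1_data):
    """Interleave words from both ranks onto one output channel."""
    channel = []
    for w0, w1 in zip(rank0_data, rank1_data):
        channel.extend([w0, w1])  # both ranks served in one pass
    return channel

burst = range(4)
out = mux_buffer(read_rank(0, burst), read_rank(1, burst))
print(out)
```

Because both ranks are read at once and their outputs are interleaved, twice as many words cross the channel per burst window, which is why the interface must run faster to keep up.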
This leap, which would normally take several generations of memory technology, produces the fastest system memory ever built: peak transfer rate jumps by nearly 40%, from 6,400 megatransfers per second (MT/s) to 8,800 MT/s.
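The quoted numbers can be checked with a little arithmetic. The 8-bytes-per-transfer figure below assumes a standard 64-bit DDR5 channel data width (excluding ECC bits), which is an assumption, not a figure from the article:

```python
# Verify the quoted transfer-rate jump and estimate peak bytes per second,
# assuming a 64-bit (8-byte) DDR5 data bus per channel (ECC excluded).
rdimm_mts = 6400    # DDR5 RDIMM, megatransfers per second
mrdimm_mts = 8800   # DDR5 MRDIMM, megatransfers per second

increase = (mrdimm_mts - rdimm_mts) / rdimm_mts
print(f"Transfer-rate increase: {increase:.1%}")           # 37.5%

bytes_per_transfer = 8  # 64-bit bus width assumption
peak_gbs = mrdimm_mts * 1e6 * bytes_per_transfer / 1e9
print(f"Peak per-channel bandwidth: {peak_gbs:.1f} GB/s")  # 70.4 GB/s
```

The 37.5% result matches the article's "nearly 40%" characterization.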
Faster version of the same standard memory module
Now, you might be wondering whether Intel is making a comeback in the memory business. It isn't. Although Intel started out as a memory company and invented technologies like EPROM and DRAM, it has since exited its memory product businesses, some of which were quite well known. What makes the MRDIMM special is how easy it is to use. Because it uses the same form factor and connector as a typical RDIMM, there is no need to alter the motherboard (even the small mux chips fit into previously unused spots on the module).
MRDIMMs have the same error-correcting and reliability, availability and serviceability (RAS) features as RDIMMs. No matter how requests are multiplexed across the data buffer, data integrity is maintained, Vergis says.
All of this means that data center customers can select MRDIMMs when ordering a new server, or pull an existing server from the rack and swap its RDIMMs for MRDIMMs, and enjoy the enhanced performance without altering a single line of code.
MRDIMM + Xeon 6
The Intel Xeon 6 processor with Performance-cores, code-named Granite Rapids and introduced this year, is the first CPU on the market compatible with MRDIMMs. In recent independent testing, two identical Xeon 6 systems, one with MRDIMMs and the other with RDIMMs, were assessed. The MRDIMM-equipped machine completed up to 33% more work.
Among the AI workloads that can easily run on Xeon and profit from the bandwidth enhancement that MRDIMM offers are small language models (SLMs) and traditional deep learning and recommendation system workloads.
Leading memory vendors have launched MRDIMMs, and it is expected that additional memory manufacturers will do the same. High-performance computing centers such as the National Institute for Fusion Science and the National Institute for Quantum Science and Technology are rapidly adopting Xeon 6 with P-cores because of MRDIMMs, with support from OEMs like NEC.