Netlist Benchmarks Demonstrate Superiority of HyperCloud Memory vs. LRDIMM

Nov 12, 2012

Through a series of comprehensive performance benchmarks of our flagship memory product, HyperCloud™ (HCDIMM), against LRDIMMs, we have concluded that all results show a significant performance advantage for the HyperCloud architecture, on the order of 40% or more.

HyperCloud™ memory utilizes a unique distributed buffer architecture to reduce memory bus latency, incorporating Netlist’s patented rank multiplication and load reduction technologies. Rank multiplication allows greater DRAM capacity, while load reduction enables the memory interface to run at higher speeds.

Earlier this week, we announced that Netlist will be presenting a Sandra 2012 benchmark (by SiSoftware) at the Supercomputing Show 2012 in Salt Lake City. The benchmark reveals a 39% throughput improvement for streaming data applications with HyperCloud™ memory, demonstrating the superior performance of Netlist’s patented virtual rank technology. Because the two servers were identical except for the memory configuration (HCDIMM vs. LRDIMM), and both ran fully loaded with three DIMMs per memory channel at the same frequency of 1333MHz, the 39% advantage can be attributed to the lower latency and higher throughput of the HyperCloud™ memory system. Sandra is publicly available and widely used benchmark software, so the result can be readily reproduced. The Sandra benchmark measures sustained memory bandwidth, not burst or peak bandwidth, and is therefore directly applicable to the many HPC programs where fast throughput is paramount.
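For intuition, a back-of-the-envelope model (our own illustration, not part of the Sandra benchmark) shows what a 39% sustained-bandwidth gain means for a purely bandwidth-bound workload, whose runtime scales inversely with throughput:

```python
# Idealized model: runtime of a bandwidth-bound workload scales
# inversely with sustained memory bandwidth. Real workloads mix
# compute and memory phases, so actual gains will be smaller.
bandwidth_gain = 0.39                     # 39% higher sustained throughput
runtime_ratio = 1 / (1 + bandwidth_gain)  # new runtime / old runtime
runtime_reduction = 1 - runtime_ratio
print(round(runtime_reduction * 100, 1))  # ~28.1% shorter runtime
```

Under this idealization, a 39% throughput gain corresponds to roughly a 28% runtime reduction for memory-bound code.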

Many HPC server users today focus on the runtime advantages their particular software packages can gain, since faster runs translate into productivity gains in production and development environments. However, testing a memory configuration against every available software package is an arduous task, so Netlist asked third parties to run a few representative packages on behalf of customers and end users.

One example can be seen in Fig. 1. The graph represents a modal analysis at a major automobile manufacturer using finite element analysis (FEA) software for car crash simulations, performed on an IBM x3650 M4 server with Optistruct™ software by Altair. Achieving the desired safety levels in modern cars is an important task, and simply building and crashing a large number of pre-production cars would be prohibitively expensive. FEA simulations are the better answer for testing structural variations of a car's construction, but obtaining the desired accuracy requires a finer mesh, which in turn requires more memory. In a standard server configuration with 128GB of RDIMM, four such simulation jobs take up to 16 hours of runtime. Using HyperCloud™ memory reduces the runtime to four hours. This reduction by a factor of four yields massive productivity gains and will shorten development times considerably.


Fig 1 Productivity increase with HyperCloud™ by running jobs in parallel
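The factor-of-four reduction can be viewed as a memory-capacity scheduling effect. The sketch below is our own illustration of the scenario above, under the assumption (suggested by Fig. 1) that the 128GB RDIMM configuration holds only one simulation job in memory at a time, while the higher-capacity HyperCloud™ configuration holds all four in parallel:

```python
import math

def total_runtime(job_hours, num_jobs, jobs_in_memory):
    """Wall-clock hours when at most `jobs_in_memory` jobs fit in RAM at once."""
    waves = math.ceil(num_jobs / jobs_in_memory)  # sequential batches of jobs
    return waves * job_hours

# RDIMM: one 4-hour FEA job fits at a time -> four sequential waves
print(total_runtime(job_hours=4, num_jobs=4, jobs_in_memory=1))  # 16 hours
# HyperCloud: all four jobs fit in memory -> one parallel wave
print(total_runtime(job_hours=4, num_jobs=4, jobs_in_memory=4))  # 4 hours
```

The speedup here comes entirely from memory capacity enabling parallelism, not from any change in per-job compute time.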

Another example is shown in Fig. 2.


Fig 2 Runtime comparison with several simulation jobs with Altair software

This simulation contained an eigenvalue analysis involving fluid-structure interaction, with four simultaneous jobs of 2,190,933 degrees of freedom each. It was performed with 16GB DIMMs, comparing 384GB of total HyperCloud™ density against the same amount of LRDIMM at the same memory bus frequency of 1333MHz. Because of HyperCloud™'s streamlined architecture, and because LRDIMM incurs dead wait cycles, HyperCloud™ delivers a 24% runtime advantage, amounting to roughly a quarter of the development time saved when simulating and optimizing new products that involve fluid-structure interactions.
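As a quick sanity check (our own arithmetic, not from the benchmark report), a 24% runtime advantage can also be expressed as a throughput gain, i.e. how many more simulation jobs complete per unit time:

```python
runtime_advantage = 0.24                # HCDIMM finishes in 76% of the LRDIMM time
lrdimm_time = 1.0                       # normalized LRDIMM runtime
hcdimm_time = lrdimm_time * (1 - runtime_advantage)
time_saved = lrdimm_time - hcdimm_time  # 0.24 -> about a quarter of the time saved
throughput_gain = lrdimm_time / hcdimm_time - 1
print(round(throughput_gain * 100, 1))  # ~31.6% more jobs per unit time
```

A 24% runtime reduction thus translates into roughly 32% more simulation jobs completed per day.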

An earlier white paper conducted by Deopli had already highlighted a 54% bandwidth improvement, measured on an HP DL380p system with all three memory variants. This benefit resulted in a 25% runtime improvement for typical EDA applications.

In addition to these benchmarks, we can point to latency and throughput measurements performed by the major server manufacturers themselves. HP has published a white paper reporting a 78% latency advantage for HyperCloud™ over LRDIMM. The very same number has been confirmed independently by IBM in a direct comparison.

The consistent performance advantage of HyperCloud™ can be seen in the comparison in Fig 3:


Fig 3 Summary of benchmark evaluations

This consistency is an important factor: it tells users that the electrical advantages can be extrapolated across a variety of server OEMs, software packages, DIMM densities, and memory bus frequencies, translating directly into productivity gains across a vast range of software applications.

A constant theme for future server architectures is the need for more memory, as caching requirements are increasing exponentially. Memory performance in servers is not keeping pace with the evolution of processor technology, creating a high-density memory cliff. HyperCloud™'s support for the highest-capacity and highest-speed configurations enables today's key applications, such as financial trading, big data analytics, virtualization, and simulation, to run with optimal efficiency, which translates into improved financial performance.

The strong consistency of all benchmarks and throughput measurements taken over the course of this year clearly suggests that customers using HyperCloud™ DIMMs will be able to improve their IT ROI by achieving the highest possible memory densities at industry-standard speeds and at the best possible price points.

HyperCloud™ is available to ship with the world's three top-selling servers and has been validated by both IBM and HP, with qualifications for their latest-generation servers. It is also being successfully deployed across a number of high-performance computing applications in industries such as EDA, financial services, oil & gas, aerospace, and automotive.

Category: HyperCloud Memory