Netlist’s patented HyperCloud technology is used to create high-density DDR3 HCDIMMs for servers, delivering maximum density at maximum performance. Currently, 16GB and 32GB HCDIMMs are available, allowing dual-processor servers to scale to 384GB or 768GB of system memory at 1333MT/s.
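The capacity figures above can be checked with simple arithmetic. The sketch below assumes a typical dual-processor DDR3 platform with four memory channels per CPU and three DIMM slots per channel (24 slots total); actual slot counts vary by server model.

```python
# Illustrative capacity arithmetic for an assumed dual-socket DDR3 server:
# 2 CPUs x 4 channels per CPU x 3 DIMMs per channel = 24 DIMM slots.
SOCKETS = 2
CHANNELS_PER_SOCKET = 4
DIMMS_PER_CHANNEL = 3
slots = SOCKETS * CHANNELS_PER_SOCKET * DIMMS_PER_CHANNEL

for module_gb in (16, 32):
    total_gb = slots * module_gb
    print(f"{module_gb}GB HCDIMMs x {slots} slots = {total_gb}GB")
```

With 16GB modules this yields 384GB, and with 32GB modules 768GB, matching the densities quoted above.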
HCDIMMs use a distributed buffer architecture to optimize performance and are recognized by the memory controller as two-rank modules rather than four-rank modules. Although physically four ranks, each pair of smaller-density DRAMs is presented as a single larger-density DRAM, so the module appears as two ranks. This reduces loading on the server memory subsystem and improves performance. The distributed buffer architecture has since been adopted by the industry for DDR4.
HyperCloud can be used in any dual-processor server and fits in standard RDIMM sockets. For maximum performance, HCDIMMs should not be mixed with RDIMMs.
No, they are not.
HyperCloud is 100% compliant with the DDR3 JEDEC memory specification.
Yes, but for optimum performance, please follow the population guidelines provided by your OEM server vendor.
32GB RDIMMs are four-rank modules and are limited to two DIMMs per channel. The maximum memory density with 32GB RDIMMs in a dual-processor server is therefore 512GB, and only at reduced speeds.
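The 512GB ceiling follows directly from the two-DIMM-per-channel limit. The sketch below assumes the same dual-socket, four-channel-per-CPU topology as is typical for this server class.

```python
# Illustrative arithmetic for the 32GB RDIMM density ceiling, assuming
# 2 CPUs x 4 channels per CPU, capped at 2 DIMMs per channel for
# four-rank RDIMMs.
SOCKETS = 2
CHANNELS_PER_SOCKET = 4
MAX_DPC_RDIMM = 2
MODULE_GB = 32

max_gb = SOCKETS * CHANNELS_PER_SOCKET * MAX_DPC_RDIMM * MODULE_GB
print(f"RDIMM maximum: {max_gb}GB")  # 512GB
```

With HCDIMMs the third slot per channel remains usable, which is how the 768GB figure above is reached.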
HCDIMMs have been designed to address the density limitations of DRAM technology and the performance limitations of server memory subsystems. The inherent architectural advantages of HCDIMMs result in significantly higher throughput and performance at maximum densities.
When multiple processors or threads access DRAM simultaneously, memory access time increases; this condition is termed loaded latency. Compared to LRDIMMs, the HyperCloud architecture significantly reduces loaded latency, which is a key factor in application performance.
A white paper published by a major OEM reported LRDIMM overall performance dropping 41% from a 2DPC 1333MT/s configuration with 139ns loaded latency to a 3DPC 1066MT/s configuration with 235ns loaded latency. Actual data transfer throughput dropped from 68.1GB/s to 40.4GB/s; although reported as operating at 1066MT/s, the result is equivalent to running at 800MT/s. HCDIMMs, on the other hand, dropped only 6%, and had HCDIMMs been tested in their maximum 1333MT/s configuration, there would have been no drop at all. In fully configured servers, applications will run significantly slower with LRDIMMs than with HCDIMMs.
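The cited figures are internally consistent, as the short calculation below shows. Scaling the 1333MT/s baseline by the measured throughput ratio is an illustrative assumption, not a method from the white paper.

```python
# Working the numbers cited above: the percentage throughput drop and the
# effective transfer rate implied by the 3DPC measurement.
baseline_gbps = 68.1   # 2DPC at 1333MT/s
loaded_gbps = 40.4     # 3DPC at 1066MT/s

drop_pct = (baseline_gbps - loaded_gbps) / baseline_gbps * 100
effective_mts = 1333 * loaded_gbps / baseline_gbps

print(f"throughput drop: {drop_pct:.0f}%")           # 41%
print(f"effective rate: {effective_mts:.0f}MT/s")    # ~791MT/s, roughly 800MT/s
```

The 40.4GB/s result thus delivers only about 59% of the baseline throughput, which is why a nominal 1066MT/s configuration behaves like one running near 800MT/s.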
Key applications that benefit significantly from the higher memory bandwidth of HCDIMMs include EDA, FEA, CFD, financial analysis, seismic analysis, virtualization, and big-data in-memory databases.
As core counts per CPU increase, additional memory is required to support the growing number of virtual servers hosted on each physical server. With HCDIMMs, maximum memory at maximum performance is available to each core, enabling faster execution across all virtual servers.
Depending on the application, maximizing memory density at maximum speed can provide a significant return on investment and potentially save millions of dollars annually.
HCDIMMs are available through major OEMs on leading server platforms as well as through system builders. Contact email@example.com for assistance.