
Learn how to use NVDIMM-N and PCIe-based non-volatile memory solutions to accelerate your applications to a new level. Empowering technology to drive your business – the Netlist Advantage!

Sysbench OLTP

Benchmark

We demonstrate the real-world performance difference between an industry-leading PCIe NVMe NAND device and the Netlist NV4 NVDIMM-N using a write-intensive Sysbench OLTP workload. Sysbench updates, inserts, and deletes database records across 32 tables of 10 million records each; each update/insert/delete sequence is counted as one transaction. The key metric is the number of database transactions committed per second. We compare two test cases: one with the entire database on the Netlist NV4, and one with the entire database on the leading PCIe NVMe competitor. The NV4 achieved a transaction rate over 3x higher than the Best in Class Enterprise PCIe NVMe solution.
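A run of this shape could be launched with sysbench 0.5's OLTP Lua script; the script path, MySQL credentials, thread count, and duration below are illustrative assumptions, not the tested settings.

```shell
# Sketch of a sysbench 0.5 OLTP run (paths/credentials are assumptions).
OPTS="--test=tests/db/oltp.lua \
  --oltp-tables-count=32 --oltp-table-size=10000000 \
  --mysql-db=sbtest --mysql-user=root \
  --num-threads=64 --max-time=300 --max-requests=0 \
  --report-interval=10"

sysbench $OPTS prepare   # create and populate the 32 tables
sysbench $OPTS run       # execute the update/insert/delete mix
```

The `prepare` step builds the dataset once; `run` then reports the transactions-per-second figure used as the key metric.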

Test Configuration Details

  • Hardware Configuration
    • Supermicro SSG-2028R-E1CR24L chassis, X10DRH-iT motherboard, BIOS 1.1
    • 4x HMA42GR7AFR4N-TF 16GB DIMMs @ 1866 MHz (64GB)
    • 12x NV4 8GB DIMMs @ 1866 MHz (96GB)
    • Best in Class Enterprise PCIe NVMe device
  • Software Configuration
    • CentOS 7.1.1503
    • Linux Kernel 4.4.0
    • In-kernel NVMe driver
    • In-kernel NVDIMM driver
    • MySQL Community Edition 5.6.29
    • Sysbench 0.5 OLTP
  • Benchmark Configuration
    • 32 tables/10 million records per table
    • 10 GB MySQL buffer pool
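For the NV4 cases, the in-kernel NVDIMM driver listed above exposes the modules as a pmem block device that can hold the database files; the device name and mount point in this sketch are assumptions.

```shell
# Hypothetical NV4 placement: format the pmem device and mount it with
# DAX so file I/O bypasses the page cache (ext4 DAX support is in 4.x kernels).
mkfs.ext4 /dev/pmem0
mkdir -p /mnt/nvdimm
mount -o dax /dev/pmem0 /mnt/nvdimm
# Point the MySQL datadir (or the database files) at /mnt/nvdimm.
```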

Download PDF >>

TPC-C Like OLTP

Benchmark

We demonstrate the real-world performance difference between an industry-leading PCIe NVMe NAND device and the Netlist NV4 NVDIMM-N using a Percona™ TPC-C-like workload. The Percona™ benchmark models an enterprise that sells items, together with all of the warehouse-management transactions required to manage stock in the database (800 warehouses in this example). The key metric is the number of New Orders per minute that can be fulfilled. We compare two test cases: one with the entire database on the Netlist NV4, and one with the entire database on the leading PCIe NVMe competitor. The NV4 achieved a transaction rate 5x higher than the Best in Class Enterprise PCIe NVMe solution.
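With Percona's tpcc-mysql tool, an 800-warehouse run might look like the following; the host, database name, credentials, connection count, and durations are assumptions.

```shell
# Build the 800-warehouse dataset, then run the TPC-C-like mix.
./tpcc_load 127.0.0.1 tpcc800 root "" 800     # server, db, user, password, warehouses
./tpcc_start -h 127.0.0.1 -d tpcc800 -u root -p "" \
    -w 800 -c 32 -r 60 -l 600                 # 32 connections, 60 s ramp-up, 600 s run
```

tpcc_start reports TpmC, the New Orders per minute figure used as the key metric.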

Test Configuration Details

  • Hardware Configuration
    • Supermicro SSG-2028R-E1CR24L chassis, X10DRH-iT motherboard, BIOS 1.1
    • 4x HMA42GR7AFR4N-TF 16GB DIMMs @ 1866 MHz (64GB)
    • 12x NV4 8GB DIMMs @ 1866 MHz (96GB)
    • Best in Class Enterprise PCIe NVMe device
  • Software Configuration
    • CentOS 7.1.1503
    • Linux Kernel 4.4.0
    • In-kernel NVMe driver
    • In-kernel NVDIMM driver
    • MySQL Community Edition 5.6.28
    • Percona TPC-C Like benchmark
  • Benchmark Configuration
    • TPC-C Like 800 Warehouses (~85 GB)
    • 10 GB MySQL buffer pool

Download PDF >>

Key-Value Store

Benchmark

We demonstrate the real-world performance difference between an industry-leading PCIe NVMe NAND device and the Netlist NV4 NVDIMM-N using a Key-Value store exercised with the Yahoo Cloud Serving Benchmark (YCSB). YCSB is a benchmarking framework and common set of workloads for evaluating the performance of different Key-Value and “cloud” serving stores. We use YCSB Workload A, a 50% read / 50% update mix, against a Key-Value store containing 50 million 1 KB records. We compare two test cases: one with the entire database on the Netlist NV4, and one with the entire database on the leading PCIe NVMe competitor. The NV4 achieved a transaction rate 2x higher than the Best in Class Enterprise PCIe NVMe solution.
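A YCSB Workload A run against MySQL typically goes through YCSB's JDBC binding; the connection URL, credentials, and thread count below are assumptions (the MySQL Connector/J jar must be on the classpath).

```shell
# Load 50 million records, then run the 50/50 read/update mix.
./bin/ycsb load jdbc -P workloads/workloada \
    -p recordcount=50000000 \
    -p db.driver=com.mysql.jdbc.Driver \
    -p db.url=jdbc:mysql://127.0.0.1:3306/ycsb \
    -p db.user=root -p db.passwd=

./bin/ycsb run jdbc -P workloads/workloada \
    -p operationcount=50000000 -threads 64 \
    -p db.url=jdbc:mysql://127.0.0.1:3306/ycsb
```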

Test Configuration Details

  • Hardware Configuration
    • Supermicro SSG-2028R-E1CR24L chassis, X10DRH-iT motherboard, BIOS 1.1
    • 4x HMA42GR7AFR4N-TF 16GB DIMMs @ 1866 MHz (64GB)
    • 12x NV4 8GB DIMMs @ 1866 MHz (96GB)
    • Best in Class Enterprise PCIe NVMe device
  • Software Configuration
    • CentOS 7.1.1503
    • Linux Kernel 4.4.0
    • In-kernel NVMe driver
    • In-kernel NVDIMM driver
    • MySQL Community Edition 5.6.28
    • YCSB 0.6.0
  • Benchmark Configuration
    • YCSB Workload A
    • 50 million 1K records (~96 GB)
    • 10 GB MySQL buffer pool

Download PDF >>

Decision Support

Benchmark

We demonstrate the real-world performance difference between an industry-leading PCIe NVMe NAND device and the Netlist NV4 NVDIMM-N using a TPC-H-like, read-intensive Decision Support System (DSS) workload. HammerDB executes 22 different DSS-type queries against a TPC-H-like database schema; each query is a different combination of complex join, sort, limit, and range operations. The key metric is the time taken to complete all 22 queries; shorter is better. We compare two test cases: one with the entire database on the Netlist NV4, and one with the entire database on the leading PCIe NVMe competitor. The NV4 completed the query list 4x faster than the Best in Class Enterprise PCIe NVMe solution.
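The 22 HammerDB queries follow the TPC-H query shapes. As an illustration, here is a simplified query in the style of TPC-H Q1 issued through the mysql client; column and table names follow the standard TPC-H schema, and the database name is an assumption.

```shell
# Simplified TPC-H Q1-style aggregation over the lineitem table.
mysql tpch -e "
  SELECT l_returnflag, l_linestatus,
         SUM(l_quantity)      AS sum_qty,
         AVG(l_extendedprice) AS avg_price,
         COUNT(*)             AS order_count
  FROM lineitem
  WHERE l_shipdate <= '1998-09-02'
  GROUP BY l_returnflag, l_linestatus
  ORDER BY l_returnflag, l_linestatus;"
```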

Test Configuration Details

  • Hardware Configuration
    • Supermicro SSG-2028R-E1CR24L chassis, X10DRH-iT motherboard, BIOS 1.1
    • 4x HMA42GR7AFR4N-TF 16GB DIMMs @ 1866 MHz (64GB)
    • 12x NV4 8GB DIMMs @ 1866 MHz (96GB)
    • Best in Class Enterprise PCIe NVMe device
  • Software Configuration
    • CentOS 7.1.1503
    • Linux Kernel 4.4.0
    • In-kernel NVMe driver
    • In-kernel NVDIMM driver
    • MySQL Community Edition 5.6.29
    • HammerDB 2.19
  • Benchmark Configuration
    • TPC-H-like, scale factor 30
    • 10 GB MySQL buffer pool

Download PDF >>

NoSQL Document DB

Benchmark

We demonstrate the real-world performance difference between an industry-leading PCIe NVMe NAND device and the Netlist NV4 NVDIMM-N using YCSB Workload A on a MongoDB document collection. Workload A consists of 50% read operations and 50% update operations, executed against a MongoDB collection containing 50 million documents. The key metric is the number of database operations performed per second. We compare two test cases: one with the entire database on the Netlist NV4, and one with the entire database on the leading PCIe NVMe competitor. The NV4 achieved an operation rate over 5x higher than the Best in Class Enterprise PCIe NVMe solution.
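With the MongoDB Labs YCSB binding, the run is analogous to the MySQL case; the connection URL and thread count below are assumptions.

```shell
# Load 50 million documents, then run Workload A's 50/50 read/update mix.
./bin/ycsb load mongodb -P workloads/workloada \
    -p recordcount=50000000 \
    -p mongodb.url=mongodb://127.0.0.1:27017/ycsb

./bin/ycsb run mongodb -P workloads/workloada \
    -p operationcount=50000000 -threads 64 \
    -p mongodb.url=mongodb://127.0.0.1:27017/ycsb
```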

Test Configuration Details

  • Hardware Configuration
    • Supermicro SSG-2028R-E1CR24L chassis, X10DRH-iT motherboard, BIOS 1.1
    • 4x HMA42GR7AFR4N-TF 16GB DIMMs @ 1866 MHz (64GB)
    • 12x NV4 8GB DIMMs @ 1866 MHz (96GB)
    • Best in Class Enterprise PCIe NVMe device
  • Software Configuration
    • CentOS 7.1.1503
    • Linux Kernel 4.4.0
    • In-kernel NVMe driver
    • In-kernel NVDIMM driver
    • MongoDB 3.2.1
    • MongoDB Labs YCSB
  • Benchmark Configuration
    • 50 million documents @ 1 KB each
    • 10 GB MongoDB cache

Download PDF >>

Ceph Journal

Benchmark

Here, we demonstrate the performance advantage achieved by using Netlist NV4 as a Ceph journal. Using the Ceph Benchmarking Tool (CBT), we exercise a 128GB RBD volume with fio and librbd. The test was run on a Ceph cluster of 4 OSD nodes with 8 OSDs each, for a cluster total of 32 OSDs; two client servers exercised the RBD volume. We compare two cases: the Ceph default, which keeps the journal on the OSD device, and placing the journal on Netlist NV4. With NV4 as the journal, we demonstrate a 2x improvement in write bandwidth across a variety of block sizes, including 64K random writes (as shown).
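The fio/librbd exercise that CBT drives can also be reproduced directly with fio's rbd ioengine; the client name, pool, and image names below are assumptions.

```shell
# 64K random-write pass against the RBD volume via librbd.
fio --name=rbd-64k-randwrite --ioengine=rbd \
    --clientname=admin --pool=rbd --rbdname=cbt-image \
    --rw=randwrite --bs=64k --iodepth=32 \
    --runtime=300 --time_based
```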

Test Configuration Details

  • OSD Node Configuration (x4)
    • 2x Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
    • 128 GB DRAM @ 2133 MHz
    • CentOS 7.2.1511
    • Ceph 9.2.1 (Infernalis)
    • 8x Seagate ST900MM 900GB SAS HDD
    • Each HDD configured as 1 OSD
  • Software Configuration
    • Linux Kernel 4.4.0
    • In-kernel NVMe driver
    • In-kernel NVDIMM driver
  • Client Node Configuration (x2)
    • 2x Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
    • 256 GB DRAM @ 1866 MHz
  • Benchmark Configuration
    • Ceph Benchmark Tool
    • Fio 2.8 compiled with librbd support

Transaction Logs

Results

Standardized testing was performed against four distinct configurations. All tests ran on the same system with identical testing parameters except for the differences noted below.

  • Baseline SATA Array
    • 3x Enterprise SATA SSD, rated at 3 DWPD
    • Software RAID 0 using md
    • Database files and logs on SSD Array
  • Baseline Enterprise PCIe
    • Single Enterprise PCIe Storage Card
    • Database files and logs on PCIe Card
  • Accelerated SATA Array
    • 3x Enterprise SATA SSD, rated at 3 DWPD
    • Software RAID 0 using md
    • Database files on SSD Array
    • Redo logs on EV3
  • Accelerated Enterprise PCIe
    • Single Enterprise PCIe Storage Card
    • Database files on PCIe Card
    • Redo logs on EV3
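The accelerated configurations move only the InnoDB redo logs onto the EV3 card; a sketch of that split, with a hypothetical device node and mount point:

```shell
# Mount the EV3 card (device name is an assumption) and point only the
# redo logs at it; data files stay on the SATA array or PCIe card.
mkfs.ext4 /dev/ev3a
mkdir -p /mnt/ev3
mount /dev/ev3a /mnt/ev3
mkdir -p /mnt/ev3/redo

# my.cnf addition:
#   [mysqld]
#   innodb_log_group_home_dir = /mnt/ev3/redo
```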

Test Configuration Details

  • Hardware Configuration (x4)
    • Dell PowerEdge R630, dual-socket Intel(R) Xeon(R) CPU E5-2643 v3 @ 3.40GHz, 128GB DDR4 RAM, PERC H370 Mini
    • Storage Configurations:
      • 3x Enterprise SATA rated at 3 DWPD, configured in a software RAID 0 using md
      • 1x Enterprise PCIe SSD AIC
      • 1x Netlist EXPRESSvault3 PCIe
  • Software Configuration
    • Operating System: CentOS Linux release 7.1, x86_64, kernel 3.10.0-229.20.1
    • MySQL Community Server 5.6.27, 10 GB buffer pool
    • Sysbench 0.5 built from source
    • 22GB Sysbench OLTP database

Download PDF >>