Benchmark
Here we demonstrate the performance advantage of using Netlist NV4 as a Ceph journal. Using the Ceph Benchmarking Tool (CBT), we exercise a 128GB RBD volume with fio and librbd. The test ran on a Ceph cluster of 4 OSD nodes with 8 OSDs each, for a cluster total of 32 OSDs; 2 client servers drove the workload against the RBD volume. We compare two cases: the Ceph default, in which each journal resides on its OSD's data device, and the alternative, in which the journals are placed on Netlist NV4. With NV4 as the journal, we demonstrate a 2x improvement in write bandwidth at a variety of block sizes, including 64K block random writes (as shown).
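In the default case, FileStore keeps the journal as a file on the OSD's own data partition, so every write is committed twice to the same HDD; moving the journal to NV4 takes that second write off the spinning disk. For illustration, journal placement is controlled per OSD in ceph.conf; a minimal sketch pointing one OSD's journal at a partition on the NV4 card might look like the following (the device path and journal size are assumptions, not values taken from this test):

    [osd]
    # Illustrative journal size in MB, not the value used in this test
    osd journal size = 10240

    [osd.0]
    # Hypothetical partition label for the NV4-backed journal of OSD 0
    osd journal = /dev/disk/by-partlabel/nv4-journal-0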
Test Configuration Details
- OSD Node Configuration (x4)
- 2x Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
- 128 GB DRAM @ 2133 MHz
- CentOS 7.2.1511
- Ceph 9.2.1 (Infernalis)
- 8x Seagate ST900MM 900GB SAS HDD
- Each HDD configured as 1 OSD
- Software Configuration
- CentOS 7.1.1503
- Linux Kernel 4.4.0
- In-kernel NVMe driver
- In-kernel NVDIMM driver
- Client Node Configuration (x2)
- 2x Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz
- 256 GB DRAM @ 1866 MHz
- Benchmark Configuration
- Ceph Benchmark Tool
- fio 2.8 compiled with librbd support (see the example job file below)
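For reference, a fio job file for the 64K random-write case over librbd might look like the sketch below. The pool name, image name, queue depth, and runtime are assumptions for illustration; the actual job options were generated by CBT and are not reproduced here.

    [global]
    ioengine=rbd
    # Ceph user (without the "client." prefix); "admin" is an assumption
    clientname=admin
    # Assumed pool and image names for the pre-created 128GB test image
    pool=rbd
    rbdname=cbt-test-image
    rw=randwrite
    bs=64k
    # Queue depth and runtime are illustrative, not the CBT-generated values
    iodepth=32
    runtime=300
    time_based

    [rbd-64k-randwrite]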