
Postgres Database Performance Benchmark Using HammerDB–DAS vs NVMe over TCP

By Kavya Kushnoor

In this blog post, we discuss database performance benchmarking with the HammerDB PostgreSQL OLTP (Online Transaction Processing) workload on Intel Xeon servers. We compare the results of benchmark runs on two storage configurations: 

  • Direct Attached Storage (DAS) 
  • NVMe over TCP provisioned storage volumes (using Lightbits disaggregated storage) 

Lightbits, from Lightbits Labs, is a software-defined, disaggregated block storage solution. It delivers high-performance, scale-out, redundant NVMe over TCP storage that performs like local NVMe flash. As a disaggregated solution, storage can be shared across multiple applications and scaled independently of compute. It maintains high IOPS (input/output operations per second) and low latency while offering rich data services, including storage replicas, clones, and data recovery. 

Workload Configuration: 
Direct Attached Storage 

[Figure: Direct attached storage test configuration]

For this benchmarking activity, PostgreSQL and its associated Write-Ahead Log (WAL) were deployed on two separate storage devices. In the direct attached storage benchmark run, we assigned a 3.84TB PCIe Gen 4.0 NVMe SSD to store the database instance itself and a second NVMe SSD of the same model to store the WAL. 

NVMe over TCP (using Lightbits storage) 

[Figure: Lightbits cluster test configuration]

 

In the benchmark run with Lightbits-provisioned storage, we created two 3.8 TB storage volumes and attached them to the system under test, one for the database instance itself and one for the WAL. 
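On the host, Lightbits volumes are attached with the standard NVMe/TCP initiator tooling. The sketch below is illustrative rather than our exact run book: the target address, service port, and subsystem NQN are environment-specific placeholders.

# Load the NVMe/TCP initiator module
modprobe nvme-tcp

# Discover the subsystems exported by the storage target (placeholder address)
nvme discover -t tcp -a 192.168.1.10 -s 4420

# Connect to a discovered subsystem by its NQN (placeholder); the volume then
# appears as a local /dev/nvmeXnY block device
nvme connect -t tcp -a 192.168.1.10 -s 4420 -n nqn.2016-01.com.lightbitslabs:vol-example

# Verify the attached namespaces
nvme list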

We took the following steps to set up the systems and run the benchmark: 

  1. Partition the two disks, one for the database data and one for WAL logging: 

# Partition the data disk: GPT label, one partition spanning the disk,
# checked for optimal alignment
parted -s /dev/nvme1n1 mklabel gpt
parted -s /dev/nvme1n1 mkpart primary 2048s 100%
parted -s /dev/nvme1n1 align-check opt 1
parted -s /dev/nvme1n1 print

# Partition the WAL disk the same way
parted -s /dev/nvme2n1 mklabel gpt
parted -s /dev/nvme2n1 mkpart primary 2048s 100%
parted -s /dev/nvme2n1 align-check opt 1
parted -s /dev/nvme2n1 print

 

  2. Create file systems and mount the disk partitions: 

# Create XFS file systems on both partitions
mkfs.xfs /dev/nvme1n1p1
mkfs.xfs /dev/nvme2n1p1

# Mount the data partition
mkdir /inst1
mount /dev/nvme1n1p1 /inst1

# Mount the WAL partition
mkdir /pg_wal_1
mount /dev/nvme2n1p1 /pg_wal_1
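To keep these mounts across reboots, matching /etc/fstab entries can be added. This is an optional sketch, not part of the original steps; noatime is a common but optional choice for database disks.

# Example /etc/fstab entries for the two partitions created above
/dev/nvme1n1p1  /inst1      xfs  defaults,noatime  0 0
/dev/nvme2n1p1  /pg_wal_1   xfs  defaults,noatime  0 0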

 

  3. Set up the database server: install PostgreSQL from binary, create the database user, initialize the database, and start the server using the configuration details at the end of this post (a minimal sketch of this step follows). 
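This is a minimal sketch only, assuming the mount points created above and PostgreSQL 13 binaries under the standard Ubuntu path; the binary location, user creation, and log file path are illustrative rather than taken from the original run.

# Create the postgres OS user and hand over the data and WAL mount points
useradd -m postgres
chown -R postgres:postgres /inst1 /pg_wal_1

# Initialize the cluster with data on /inst1 and the WAL on the second disk
sudo -u postgres /usr/lib/postgresql/13/bin/initdb -D /inst1/data --waldir=/pg_wal_1/pg_wal

# Start the server, logging to a file on the data disk
sudo -u postgres /usr/lib/postgresql/13/bin/pg_ctl -D /inst1/data -l /inst1/logfile start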

 

  4. Set up the client system: install HammerDB and PostgreSQL from binary as in Step 3 above, then start the HammerDB GUI. 
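Before driving load, it is worth checking that the client can reach the database server; the host name below is illustrative.

# From the client, verify connectivity to the database server (placeholder host)
psql -h db-server -p 5432 -U postgres -c "SELECT version();"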

 

  5. Build the schema, configure and load the driver options, and start the autopilot run (an equivalent HammerDB CLI sketch follows this list): 
    1. Build schema
       

      [Screenshots: HammerDB schema build configuration and progress]
    2. Configure and load driver options 

       [Screenshot: HammerDB driver options]

    3. Start the autopilot run 

[Screenshot: HammerDB autopilot run]
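The GUI steps above can also be expressed with HammerDB's documented CLI (hammerdbcli). This is a condensed sketch: the host name and virtual-user counts are illustrative, while the 800 warehouses and 2-minute ramp-up/test times come from our configuration. The GUI autopilot itself steps through a list of virtual user counts; in the CLI that corresponds to repeating the timed run at each count.

# hammerdbcli script (Tcl shell; comments must sit on their own lines)
# Select PostgreSQL and the TPC-C (TPROC-C) workload
dbset db pg
dbset bm TPC-C

# Point at the database server (placeholder host) and size the schema
# to 800 warehouses, as in our configuration
diset connection pg_host db-server
diset connection pg_port 5432
diset tpcc pg_count_ware 800
diset tpcc pg_num_vu 64
buildschema

# After the build completes, configure a timed run:
# 2-minute ramp-up and 2-minute test, as configured
diset tpcc pg_driver timed
diset tpcc pg_rampup 2
diset tpcc pg_duration 2
loadscript

# One autopilot point: 120 virtual users
vuset vu 120
vucreate
vurun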

 

Performance Benefits of Lightbits Using HammerDB PostgreSQL: 

[Figure: Lightbits throughput (compression ON and OFF) normalized to direct attached storage]

  • The graph above shows the throughput achieved by Lightbits (compression ON and compression OFF) normalized to the throughput achieved by direct attached storage.
  • The two Lightbits volumes used in the experiment (WAL and database) had 3x replicas enabled during benchmarking, providing six-nines (99.9999%) availability. 
  • Compression was tested both ON and OFF on the two Lightbits volumes. 
Virtual User Scaling Curve: 
  • There is minimal performance difference between Lightbits compression ON and OFF. 
  • The delta between Lightbits and the direct-attached NVMe SSDs at maximum throughput (see the calculation note below) is:
    • 4.52% at 120 virtual users for Lightbits compression ON
    • 2.10% at 128 virtual users for Lightbits compression OFF 
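For reference, the delta reported above is the relative shortfall of the Lightbits throughput against the direct-attached baseline:

\[ \text{delta} = \left(1 - \frac{T_{\text{Lightbits}}}{T_{\text{DAS}}}\right) \times 100\% \]

So a 4.52% delta at 120 virtual users means Lightbits with compression ON delivered 95.48% of the direct-attached throughput.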
Conclusion: 
  • Despite all the data services highlighted above, the drop in throughput is only in the range of 2.10% - 4.52% compared to direct-attached storage.
  • With compression turned ON, we observed a data compression ratio of 0.05, consuming less of the cluster's storage capacity without a significant loss in performance. 
Take-away: 
  • Lightbits storage volumes deliver performance similar to direct-attached NVMe SSDs, even with data services such as volume compression and volume replicas enabled. In addition, the storage flexibility Lightbits offers, including the ability to scale storage capacity independently of compute infrastructure, makes it the better option when high-performance block storage is required. 

Common Configuration Details: CPU model = Intel(R) Xeon(R) Gold 6338 CPU @ 2.00 GHz, sockets = 2 (32 cores per socket, 2.0 GHz), 128 vCPUs (2 threads per core), 205 W, memory = 256 GB (16x 16 GB DDR4), OS = Ubuntu 20.04.4 LTS (kernel 5.4.0-122-generic), storage = 2x 3.84 TB Intel PCIe Gen 4 NVMe P5510 SSDs (DAS) || 2x 3.8 TB virtual Lightbits storage volumes 

Software Configuration: PostgreSQL version = 13, HammerDB version = 4.5, HammerDB warehouses = 800, ramp-up & test time = 2 minutes each, CPU frequency governor = performance 

References: 

https://www.hammerdb.com/blog/uncategorized/hammerdb-best-practice-for-postgresql-performance-and-scalability/ 

https://www.lightbitslabs.com/blog/the-rise-of-disaggregated-storage/ 

https://www.lightbitslabs.com/resources/solutions-brief-lightos/ 

About the Author
Kavya Kushnoor is a Cloud Software Engineer at Intel's Business Innovation Office. She works on benchmarking, automation and tool development, and workload characterization and optimization, supporting customers and enhancing server performance on-premises and in the public cloud.