
Improve Latency of NGINX in AWS using Intel’s QAT Engine for OpenSSL - Part 1 of a 5 Part Series

RajivMandal

This is Part 1 of a 5 Part Series

I recently read a solution brief showing Intel acceleration in the cloud. It inspired me to build the suggested setup from scratch to speed up NGINX on an AWS C6i compute-optimized instance. It worked: NGINX web server performance improved with a little coding and no additional cost. That's because C6i instances run on 3rd Gen Intel Xeon Scalable processors, which include Intel's Crypto-NI instructions. Combined, Intel hardware and software significantly reduce NGINX web server latency. What follows is a 5-part blog series with detailed setup instructions.

At the conclusion of this series, you will be able to:

  • Understand what drives the performance improvement of NGINX web servers: the Crypto-NI (NI stands for "new instructions") instructions in 3rd Gen Intel Xeon Scalable processors, combined with Intel's QAT Engine for OpenSSL and the optimized Intel Integrated Performance Primitives (IPP) cryptography and Intel Multi-Buffer Crypto for IPsec libraries.
  • Successfully set up the libraries and QAT Engine for OpenSSL on the NGINX web server.
  • Create a testing environment for the optimized NGINX web server and then compare the results against a non-optimized NGINX web server to measure performance improvement.

A Little Background

NGINX is open-source software for web serving, reverse proxying, caching, load balancing, media streaming, and more. It started out as a web server designed for maximum performance and stability. In addition to its HTTP server capabilities, NGINX can also function as a proxy server for email (IMAP, POP3, and SMTP) and a reverse proxy and load balancer for HTTP, TCP, and UDP servers.

The goal behind NGINX is to create the fastest web server available, and maintaining that excellence is still a central goal of the project. NGINX is one of the most popular open-source web servers, used by customers across many different business verticals.

NGINX uses SSL/TLS to secure web access. Intel has introduced the Crypto-NI software solution, which is based on 3rd Gen Intel® Xeon® Scalable processors. Crypto-NI can improve the performance of web servers and reduce the latency of web requests.

Crypto-NI is a new instruction set for encryption and decryption in 3rd Gen Intel® Xeon® Scalable processors. It adds instructions such as Vectorized AES (VAES) and Integer Fused Multiply Add (IFMA), building on the Intel® Advanced Encryption Standard New Instructions (Intel® AES-NI) that the Intel® Xeon® processor family already supports.
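Before doing any setup, it is worth confirming that your instance actually exposes these instructions. Below is a minimal sketch in Python that reads /proc/cpuinfo on Linux and looks for the flags the kernel reports for Vectorized AES (vaes, vpclmulqdq) and AVX-512 Integer Fused Multiply Add (avx512ifma); treat the exact flag list as an assumption to adapt to your environment.

```python
# A minimal sketch: verify the instance exposes Crypto-NI related CPU flags.
# Flag names below are the ones the Linux kernel reports in /proc/cpuinfo for
# Vectorized AES (vaes), carry-less multiply (vpclmulqdq), and AVX-512 Integer
# Fused Multiply Add (avx512ifma); adjust the set for your environment.
CRYPTO_NI_FLAGS = {"vaes", "vpclmulqdq", "avx512ifma"}

def check_crypto_ni(cpuinfo_path: str = "/proc/cpuinfo") -> None:
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = set(line.split(":", 1)[1].split())
                break
        else:
            raise RuntimeError("no 'flags' line found in /proc/cpuinfo")

    missing = CRYPTO_NI_FLAGS - flags
    if missing:
        print(f"Missing Crypto-NI flags: {sorted(missing)}")
    else:
        print("All expected Crypto-NI flags are present.")

if __name__ == "__main__":
    check_crypto_ni()
```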

The software used in this solution includes the Intel® Integrated Performance Primitives (IPP) cryptography library, the Intel® Multi-Buffer Crypto for IPsec library (intel-ipsec-mb), and the Intel® QuickAssist Technology (Intel® QAT) Engine for OpenSSL, which together provide batch submission of multiple SSL requests and a parallel, asynchronous processing mechanism built on the new instruction set. The CPU therefore needs fewer cycles to process SSL requests, which reduces end-user latency for web request responses and leaves the CPU with more cycles for other tasks.
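Parts 2 and 3 of this series build these libraries and the engine. Once that is done, a quick sanity check is to ask OpenSSL whether it can load the engine. The sketch below assumes the engine registers under the id qatengine (as in Intel's QAT_Engine project) and that the QAT-enabled OpenSSL build is first on your PATH; adjust if your build differs.

```python
# Hypothetical sanity check: ask OpenSSL whether the QAT engine can be loaded.
# Assumes the engine id is "qatengine" and that a QAT-enabled OpenSSL build is
# first on PATH; a loadable engine reports "[ available ]" in its status line.
import subprocess

def qat_engine_available() -> bool:
    """Return True if OpenSSL can load and initialise the QAT engine."""
    try:
        result = subprocess.run(
            ["openssl", "engine", "-t", "qatengine"],
            capture_output=True, text=True, check=True,
        )
    except (FileNotFoundError, subprocess.CalledProcessError):
        return False
    return "[ available ]" in result.stdout

if __name__ == "__main__":
    print("QAT engine available:", qat_engine_available())
```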

The Architecture Overview

As you can see from the diagram below, our architecture is set up within a public subnet in an AWS VPC. Inside the public subnet, we have three EC2 instances. Each of the EC2 instances is labelled with:

  • the name of the instance
  • the operating system and version
  • the instance type
  • the boot disk storage type and size

The three EC2 instances are named Test Client, Machine-1, and Machine-2. Machine-1 hosts the non-optimized NGINX web server, and Machine-2 hosts the Intel-optimized NGINX web server. The Test Client generates HTTPS requests against one of the two web servers (either Machine-1 or Machine-2) to put them under load. For load generation and testing, the Test Client has wrk installed, an HTTP/HTTPS benchmarking tool.
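wrk handles the actual benchmarking later in the series, but to make the metric concrete, here is a minimal latency sketch in Python (standard library only). It times repeated HTTPS GET requests to the example domain from this post; each request opens a fresh TLS connection, so handshake cost is included. The numbers are illustrative and not directly comparable with wrk's output.

```python
# A minimal latency sketch (illustrative only; the series uses wrk for real
# load generation). Each request opens a new TLS connection, so handshake
# cost is included in the measured time.
import statistics
import time
import urllib.request

URL = "https://gotoclouds.co"  # example domain from this post
SAMPLES = 20

def measure(url: str = URL, samples: int = SAMPLES) -> None:
    latencies_ms = []
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as resp:
            resp.read()
        latencies_ms.append((time.perf_counter() - start) * 1000)
    print(f"avg: {statistics.mean(latencies_ms):.1f} ms, "
          f"p50: {statistics.median(latencies_ms):.1f} ms, "
          f"max: {max(latencies_ms):.1f} ms over {samples} requests")

if __name__ == "__main__":
    measure()
```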

To route HTTPS requests to Machine-1 or Machine-2, we have created a hosted zone in Route 53. The hosted zone has an A record that points to the public IP address of the machine under test: when we test Machine-1, we update the A record with the public IP address of Machine-1; when we test Machine-2, we update it with the public IP address of Machine-2.
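Updating the A record by hand in the Route 53 console is perfectly fine. If you would rather script the switch between machines, a sketch like the following does the same UPSERT with boto3; the hosted zone ID and IP address are hypothetical placeholders.

```python
# A sketch of switching the A record between Machine-1 and Machine-2 with
# boto3. The hosted zone ID and IP address below are hypothetical placeholders.
import boto3

HOSTED_ZONE_ID = "Z0123456789EXAMPLE"   # hypothetical: your Route 53 hosted zone
RECORD_NAME = "gotoclouds.co."          # the example domain from this post

def point_domain_at(public_ip: str) -> None:
    route53 = boto3.client("route53")
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Comment": "Switch test target between Machine-1 and Machine-2",
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": public_ip}],
                },
            }],
        },
    )

if __name__ == "__main__":
    point_domain_at("203.0.113.10")  # hypothetical public IP of Machine-1
```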

The hosted zone in Route 53 points to the domain used in this example, gotoclouds.co (served as https://gotoclouds.co). The domain is registered with GoDaddy; you can use any domain registrar of your choice to host your websites. The registrar in this example mimics a real customer's website, which is typically registered through such a service. Here is a summary of how it all works together.

Traffic is generated from the Test Client machine, which sends HTTPS GET requests to https://gotoclouds.co. The request first goes through GoDaddy, where the domain is registered. On GoDaddy, the domain's name servers are set to the name servers of our hosted zone in AWS Route 53, so the request is routed to Route 53. Based on the A record in our hosted zone, Route 53 then directs the request to either Machine-1 or Machine-2, whichever public IP address is configured in the A record.
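Because the A record has a TTL and changes take a moment to propagate, it is worth confirming which machine the domain currently resolves to before starting a test run. A small sketch along those lines (standard library; the expected IP is a hypothetical placeholder):

```python
# A sketch: confirm the domain resolves to the machine you intend to test
# before generating load. The expected IP below is a hypothetical placeholder.
import socket

DOMAIN = "gotoclouds.co"  # example domain from this post

def resolves_to(domain: str, expected_ip: str) -> bool:
    resolved = socket.gethostbyname(domain)
    print(f"{domain} currently resolves to {resolved}")
    return resolved == expected_ip

if __name__ == "__main__":
    # hypothetical public IP of Machine-2
    print("Ready to test:", resolves_to(DOMAIN, "203.0.113.20"))
```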

 

[Figure image1.png: solution architecture showing the Test Client, Machine-1, and Machine-2 EC2 instances in a public subnet of an AWS VPC, with DNS routed through GoDaddy and Route 53 to gotoclouds.co]

 

In this first installment of the series, we've outlined our objectives and presented the basic solution architecture. We're now ready to move on to Part 2: setting up the AWS infrastructure and prerequisite software libraries.