
A Guide to Containers on the Cloud



Container usage is rising, with 86% of technology leaders in one 2020 survey planning to prioritize using containers more.1 This trend, along with more applications migrating to the cloud, will mean continued demand for cloud service providers (CSPs) that offer strong container options and management.



Kubernetes services from AWS and Azure

The leading CSPs, including AWS and Azure, have invested heavily in providing container offerings. In this blog, I’d like to specifically discuss Elastic Kubernetes Service (EKS) from AWS and Azure Kubernetes Service (AKS). Both are fully managed, well-documented Kubernetes services that support container workloads by deploying and maintaining the control plane. Which of these services you choose will likely depend on which CSP you’re already invested in. My intention with this post is not to compare and contrast the offerings, but to walk you through some of the things we learned while using each service. I’ll also share some of our performance findings to help you in your container journey, regardless of the CSP you choose.

We tested EKS and AKS to see how easy it was to deploy a container workload, what kind of performance they could deliver on our workload, and what pointers we could pick up. To measure Kubernetes cluster performance, we deployed and ran the Weathervane benchmark developed by VMware. Weathervane deploys a multi-tiered web application in a container cluster. We used each provider to spin up three-node clusters of virtual machine (VM) instances. Our goal was to show that even in the cloud, the VMs you choose affect your performance. Thus, we compared older VMs using previous generations of Intel CPUs to newer VMs with 2nd Generation Intel Xeon Scalable processors. Keeping everything other than the processors identical, performance jumped significantly with the newer VMs. For example, our three-node Azure cluster based on E16s_v4 VMs handled up to 1.77 times as many Weathervane users as the three-node E16s_v3 cluster.2

Kubernetes deployment

While both AWS and Azure provide documentation to guide you through the many options for implementing and using their Kubernetes services, I’d like to provide a brief how-to that illustrates the approaches we used.

AKS environment

To create your cluster, navigate to the Containers service and choose Kubernetes Service. This launches a GUI that directs you through creating the cluster, step by step. First, you cover basics such as the Azure Subscription and region, the Kubernetes version you wish to use, and the type and number of VMs to deploy. We set up three-node clusters using D- and E-series VMs of various sizes.
Next, Azure walks you through the steps for creating any additional node pools you may need. Node pools are helpful for separating sections of your workloads from each other. We created one node pool for the VMs under test, and a separate one for the Weathervane test driver system.
The next section of the creation GUI guides you through networking and other configuration settings. Once you’ve configured your cluster, click Create, and your AKS cluster is working. At this point, we used the Azure CLI shell to create a config file for our cluster and set the parameters we needed for the Weathervane test.
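The same cluster the GUI builds can also be scripted with the Azure CLI, which is handy for repeatable test setups. The sketch below assumes hypothetical resource group, cluster, and node pool names; the VM size mirrors the E16s_v4 instances from our tests.

```shell
# Create a resource group and a three-node AKS cluster
# (names and sizes below are placeholders).
az group create --name weathervane-rg --location eastus

az aks create \
  --resource-group weathervane-rg \
  --name weathervane-aks \
  --node-count 3 \
  --node-vm-size Standard_E16s_v4 \
  --nodepool-name workload \
  --generate-ssh-keys

# Add a separate node pool for the Weathervane test driver system.
az aks nodepool add \
  --resource-group weathervane-rg \
  --cluster-name weathervane-aks \
  --name driver \
  --node-count 1

# Merge the cluster's credentials into your kubeconfig so kubectl
# talks to the new cluster, then verify the nodes are ready.
az aks get-credentials --resource-group weathervane-rg --name weathervane-aks
kubectl get nodes
```

Scripting the creation this way made it easy for us to tear down and rebuild clusters with different VM sizes between test runs.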

EKS environment

Before you create your cluster, you must create an EKS control VM using a CentOS AMI as the base image. Connect to your control VM; install the AWS CLI v2 tool; configure it with your access key ID, secret access key, and AWS region; and install kubectl.
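On the control VM, that tool setup looks roughly like the following (standard install steps from the AWS and Kubernetes documentation):

```shell
# Install the AWS CLI v2 on the CentOS control VM.
curl -o awscliv2.zip "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip"
unzip awscliv2.zip
sudo ./aws/install

# Supply your access key ID, secret access key, and default region
# when prompted.
aws configure

# Install the latest stable kubectl release.
curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -m 0755 kubectl /usr/local/bin/kubectl
kubectl version --client
```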

The next step is using the AWS CLI to create an AWS Virtual Private Cloud (VPC) for the Kubernetes environment. This private network lets you customize and control the settings for your Kubernetes cluster, such as setting IPs, subnets, route tables, and more.
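A minimal VPC sketch follows; note that a production EKS VPC needs subnets in at least two Availability Zones, and the CIDR blocks and region here are placeholders.

```shell
# Create the VPC and capture its ID for later commands.
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query 'Vpc.VpcId' --output text)

# EKS requires subnets in at least two Availability Zones.
aws ec2 create-subnet --vpc-id "$VPC_ID" \
  --cidr-block 10.0.1.0/24 --availability-zone us-east-1a
aws ec2 create-subnet --vpc-id "$VPC_ID" \
  --cidr-block 10.0.2.0/24 --availability-zone us-east-1b
```

Route tables, internet gateways, and security groups would follow the same pattern; AWS also publishes CloudFormation templates that build an EKS-ready VPC in one step.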

Create a trust policy file allowing the EKS service to function within the VPC to allocate and control resources. Create the EKS cluster role and assign the policy to the role. Finally, create a key pair for the EKS cluster. Once you’ve configured these roles and policies, you can create your EKS cluster.
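The steps above can be sketched as follows; the role and key names are placeholders, while the trust policy document and the AmazonEKSClusterPolicy managed policy are the standard ones from the EKS documentation.

```shell
# Trust policy that lets the EKS service assume the cluster role.
cat > eks-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the cluster role and attach the managed EKS policy to it.
aws iam create-role \
  --role-name eksClusterRole \
  --assume-role-policy-document file://eks-trust-policy.json
aws iam attach-role-policy \
  --role-name eksClusterRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy

# Key pair for SSH access to the worker nodes.
aws ec2 create-key-pair --key-name eks-key \
  --query 'KeyMaterial' --output text > eks-key.pem
chmod 400 eks-key.pem
```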

After we created our cluster, we created a .yaml file that defined the node groups we wished to create for our clusters under test. We set the name and region of the cluster, the instance type we wished to use (we focused on the memory-optimized R5n and R4 series), and more.
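A node group definition of this kind can be expressed as an eksctl ClusterConfig file; the cluster name, region, group names, and instance sizes below are illustrative (we used the memory-optimized R5n series in our tests).

```shell
# nodegroups.yaml - eksctl ClusterConfig sketch; names, region, and
# instance types are placeholders.
cat > nodegroups.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: weathervane-eks
  region: us-east-1
nodeGroups:
  - name: workload        # VMs under test
    instanceType: r5n.2xlarge
    desiredCapacity: 3
    ssh:
      publicKeyName: eks-key
  - name: driver          # Weathervane test driver
    instanceType: r5n.xlarge
    desiredCapacity: 1
EOF

# Create the node groups defined in the file.
eksctl create nodegroup --config-file=nodegroups.yaml
```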

Finally, we created a second .yaml file to define the StorageClass for our Kubernetes cluster. We designated which tier of storage we wished to use, the file system type, and more. After you set and create your nodegroups and StorageClass .yaml files, you can use them to quickly and easily create numerous Kubernetes clusters, changing parameters to customize each cluster to the size, VM type, storage type, and other specifications you need. For more details about the testing environment and how we obtained up to 1.85 times the Weathervane users on our R5n instances compared to R4 instances, see this solution brief.
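A StorageClass definition along these lines covers the storage tier and file system type; the class name is a placeholder, and gp2/ext4 reflect the choices from our AWS testing.

```shell
# storageclass.yaml - EBS-backed StorageClass; the storage tier
# ("type") and fsType are the parameters we varied between clusters.
cat > storageclass.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-ext4
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
volumeBindingMode: WaitForFirstConsumer
EOF

kubectl apply -f storageclass.yaml
```

Because both files are plain YAML, changing a VM type or storage tier for the next cluster is a one-line edit.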


Ease of setup

Both EKS and AKS greatly simplify deploying a Kubernetes cluster compared to a fully manual approach. The two CSPs offer extensive documentation aimed at both new and experienced users. Amazon offers a quick-start configuration with preset parameters that gets users up and running with an Internet-facing cluster in minutes. Or you can fully customize every aspect to meet more advanced requirements.

The Azure Kubernetes wizard makes cluster creation simple with easy-to-follow steps. Before deploying your own applications, you can practice creating a fully functional application cluster with tutorials on scaling, updating, and upgrading the application.

Regardless of which CSP you choose, let its Kubernetes service do the work for you, so you can quickly and easily start reaping the benefits of moving your containerized applications to the cloud.

Storage considerations

As is the case for most applications, storage performance is an important aspect of containers and their applications. Kubernetes supports ephemeral and persistent volumes, the former for files and data that you do not need to retain and the latter for data that you do. The storage technology that underlies these volumes affects performance. However, because every application has unique needs, Kubernetes offers the StorageClass resource. Admins can define StorageClasses to create tiers of storage, with each tier representing a different SLA, performance level, or other differentiating factor. AWS and Azure both offer Kubernetes plugins for their storage, such as EBS and Azure Disk, to let users create a StorageClass using their disks. For example, we set a default StorageClass on our AWS clusters using the gp2 EBS volumes.
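Setting a default class is a one-line patch: the standard `is-default-class` annotation routes any PersistentVolumeClaim that omits `storageClassName` to that class (here applied to the gp2-backed class EKS creates).

```shell
# Mark the gp2-backed StorageClass as the cluster default so volume
# claims without an explicit storageClassName land on it.
kubectl patch storageclass gp2 -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

# The default class shows as "gp2 (default)" in the listing.
kubectl get storageclass
```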

When selecting storage for your volumes or StorageClasses, pay close attention to the IOPS and throughput limitations that apply to the VM and storage type. With both Azure and AWS, the virtual machine you choose for your Kubernetes cluster has specific limits on the number of disks you can attach and the total IOPS the VM can support, as well as throughput maximums. Make sure your cluster supports the IOPS and throughput your application requires.
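On Azure, you can check those per-VM limits before you commit to a size. This is a sketch using `az vm list-skus`; the VM size is a placeholder, and the exact capability names may vary by SKU.

```shell
# Inspect a VM size's disk-related limits: max attached data disks,
# uncached IOPS, and uncached throughput.
az vm list-skus --location eastus --size Standard_E16s_v4 \
  --query "[0].capabilities[?name=='MaxDataDiskCount' || name=='UncachedDiskIOPS' || name=='UncachedDiskBytesPerSecond']" \
  --output table
```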

Additionally, the volumes you choose have their own performance limits. If your application requires a lot of IOPS or high throughput, look at the higher-performing storage offerings. For AWS, the Provisioned IOPS volumes, io1 and io2, are a good place to start when high performance is a must. On Azure, look at the Premium or Ultra SSDs for high performance and, especially in the case of the Ultra SSDs, flexibility in IOPS and throughput. Our tests used Premium SSD on Azure and gp2 volumes for AWS. This solution brief shows how we got up to 1.58 times the Weathervane users on D8s_v4 Azure VMs as on D8s_v3 VMs.

Tips, tricks, and other considerations

I’d like to mention a few other discoveries we made during our testing that may help you in your cloud container journey.


When choosing Azure VMs for your cluster, keep in mind how many CPU generations the VM series offers. Some of the older series, such as the Ev3 series, spin up with one of three different CPU generations, from the Intel Xeon E5-2673 v4 (Broadwell) to the Intel Xeon Platinum 8272CL (Cascade Lake), and you cannot control which CPU your VM receives. A three-node cluster in this series could include a mix of all three CPUs. This variation isn’t a problem for all workloads, but if you want consistent hardware across your cluster, I suggest choosing a VM series with a single CPU type.
Azure provides excellent Kubernetes-specific offerings for monitoring your cluster performance. These let you monitor stats such as CPU utilization over a given period, from a cluster-wide view all the way down to an individual container. The offerings also let you export spreadsheets to save or manipulate data.
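The monitoring described above (Azure Monitor container insights) can be switched on for an existing cluster with the monitoring add-on; the resource group and cluster names here are placeholders.

```shell
# Enable Azure Monitor container insights on an existing AKS cluster.
az aks enable-addons \
  --resource-group weathervane-rg \
  --name weathervane-aks \
  --addons monitoring
```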


EKS fully integrates with the AWS Identity and Access Management (IAM) service, allowing for granular permission controls across the Kubernetes cluster. While IAM functionality can be complex to implement, it is a powerful tool. Customizing permissions across your cluster will allow your application to interface seamlessly with other AWS offerings without additional authentication requests. For example, it can let the IAM-authenticated node provision resources as needed without intervention.
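One common pattern for this is IAM Roles for Service Accounts (IRSA), which eksctl can wire up in two commands. The cluster name, service account name, and attached policy below are illustrative.

```shell
# Associate an OIDC identity provider with the cluster, then create a
# Kubernetes service account whose pods can assume an IAM role and
# call AWS APIs without extra credential handling.
eksctl utils associate-iam-oidc-provider \
  --cluster weathervane-eks --approve

eksctl create iamserviceaccount \
  --cluster weathervane-eks \
  --name app-sa --namespace default \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
  --approve
```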

Another useful offering from Amazon is the eksctl command line tool. The eksctl tool adds functionality to the Kubernetes kubectl command line tool that lets you easily batch and automate EKS functions. You can use eksctl to automate cluster scaling through the AWS interface, or you can use it to build a dedicated VPC to simplify networking for your Kubernetes cluster. Keep in mind, though, that regardless of which command line tool or other method you use to create your cluster, EKS creates many different resources across several AWS services. AWS does not delete all of these resources when you delete the cluster, so tag all of your resources for easy removal when you’re finished. This will help you avoid paying for resources you are not using.
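Tagging at creation time makes that cleanup tractable. A sketch, with placeholder cluster name and tag values:

```shell
# Tag everything eksctl creates for this cluster.
eksctl create cluster --name weathervane-eks --region us-east-1 \
  --tags "project=weathervane"

# ...run your tests, then tear the cluster down...
eksctl delete cluster --name weathervane-eks --region us-east-1

# List any still-tagged resources the delete did not remove, so you
# can clean them up and stop paying for them.
aws resourcegroupstaggingapi get-resources \
  --tag-filters Key=project,Values=weathervane \
  --query 'ResourceTagMappingList[].ResourceARN'
```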


Both AWS and Azure provide strong offerings for companies that want to move their containerized applications to the cloud, and either one would be a good choice. If you are currently familiar with or invested in one of these CSPs, you can shift your containerized workloads to its Kubernetes service with confidence. I've pointed out some of the considerations we've learned along the way as we've experimented with these offerings, as well as the performance gains you can enjoy by selecting VM instances using 2nd Generation Intel Xeon Scalable processors rather than those using older processors. I hope this helps you shift your containerized applications to the cloud.