
Using Intel Xeon Scalable Processors for Red Hat OpenShift Data Foundation on AWS


According to the 2019 CNCF survey, 84% of surveyed customers have container workloads in production, a dramatic increase from 18% in 2018. This growth is driven by customers' need to be more efficient and agile, and to operate in a hybrid context to meet the growing demands of the industry. The survey also shows 41% of customers with a hybrid use case, up from 21% the previous year, and 27% of surveyed customers making use of daily release cycles.

As customers adopt these processes, managing storage in a consistent manner becomes a greater challenge. Customers across varied industry verticals have been using OpenShift on AWS to meet their hybrid and agility needs. Red Hat now further enables these customers through a new open source solution: OpenShift Container Storage.

In this blog you will learn how to:

  • Use AWS EC2 instances with Intel Xeon Scalable Processors for Red Hat® OpenShift® Data Foundation—previously OpenShift Container Storage.

  • Configure and deploy containerized Ceph and NooBaa

  • Validate deployment of containerized Ceph and NooBaa

  • Use the MCG (Multi Cloud Gateway) to create a bucket and use it in an application

Here you will be using OpenShift Container Platform (OCP) 4.x and the OCS Operator to deploy Ceph and the Multi-Cloud-Gateway (MCG) as a persistent storage solution for OCP workloads. You can deploy OpenShift 4 by following the OpenShift 4 deployment documentation, using the instructions for AWS Installer-Provisioned Infrastructure (IPI).
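If you prefer the CLI, the IPI deployment can be sketched with the openshift-install tool; the directory name below is an example, not from this post:

```shell
# Sketch: deploy an OpenShift 4 IPI cluster on AWS with openshift-install.
openshift-install create install-config --dir=mycluster   # answer the interactive prompts
openshift-install create cluster --dir=mycluster          # provisions the AWS infrastructure
export KUBECONFIG=mycluster/auth/kubeconfig               # use the generated kubeconfig
oc get nodes                                              # verify the cluster is up
```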

Map showing apps and OCS pods

Deploy your storage backend using the OCS Operator

Scale OCP cluster and add 3 new nodes

In light of the increase in production use of containers and hybrid solutions, it is worth confirming that the implementation of OpenShift on AWS takes advantage of multiple Availability Zones for resilience. You also have a choice of AWS instance types with Intel processors. Here are the AWS instances and the Intel Xeon processors associated with them.

AWS instances with Intel Xeon processors

(Please refer to our earlier blog for steps on changing the default instance selections.)

First, validate that the OCP environment has 3 master and 3 worker nodes before increasing the cluster size by an additional 3 worker nodes for OCS resources. The NAMEs of your OCP nodes will differ from those shown below.

oc get nodes

Example output:

mshetty@mshetty-mac 4.7 % oc get nodes

NAME         STATUS   ROLES    AGE   VERSION
<master-1>   Ready    master   57m   v1.20.0+bafe72f
<worker-1>   Ready    worker   46m   v1.20.0+bafe72f
<worker-2>   Ready    worker   44m   v1.20.0+bafe72f
<master-2>   Ready    master   55m   v1.20.0+bafe72f
<worker-3>   Ready    worker   46m   v1.20.0+bafe72f
<master-3>   Ready    master   56m   v1.20.0+bafe72f

As we can see on the AWS console, 3 x master nodes (m5.xlarge) and 3 x worker nodes (m5.large) were created.

AWS console

OpenShift 4 allows customers to scale clusters in the same manner they are used to on the cloud, by means of machine sets. Machine sets are a significant improvement over older versions of OpenShift. Now you are going to add 3 more OCP compute nodes to the cluster using machinesets.

oc get machinesets -n openshift-machine-api | grep -v infra

This will show you the existing machinesets used to create the 3 worker nodes already in the cluster. There is one machineset per AWS AZ in the region (here us-west-2a, us-west-2b, and us-west-2c; us-west-2d has no replicas). Your machineset NAMEs will differ from those below.

Example output:

mshetty@mshetty-mac 4.7 % oc get machinesets -n openshift-machine-api | grep -v infra

NAME                             DESIRED   CURRENT   READY   AVAILABLE   AGE
mytest-xgqs4-worker-us-west-2a   1         1         1       1           7h45m
mytest-xgqs4-worker-us-west-2b   1         1         1       1           7h45m
mytest-xgqs4-worker-us-west-2c   1         1         1       1           7h45m
mytest-xgqs4-worker-us-west-2d   0         0                             7h45m

NOTE: Make sure you complete the next step to find and use your CLUSTERID.

mshetty@mshetty-mac 4.7 % CLUSTERID=$(oc get machineset -n openshift-machine-api -o jsonpath='{.items[0].metadata.labels.machine\.openshift\.io/cluster-api-cluster}')

mshetty@mshetty-mac 4.7 % echo $CLUSTERID

mshetty@mshetty-mac 4.7 % curl -s | sed "s/CLUSTERID/$CLUSTERID/g" | oc apply -f -

Example output:

Mayurs-MacBook-Pro:4.3 mshetty$ curl -s | sed "s/CLUSTERID/$CLUSTERID/g" | oc apply -f -
<machineset> created
<machineset> created
<machineset> created
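For reference, the manifest piped through sed above defines one MachineSet per AZ. Below is a trimmed, illustrative sketch of such a MachineSet (the AMI, subnet, IAM profile, and security-group fields are omitted, and the names and values shown are assumptions, not the exact file from this post):

```shell
# Illustrative only: a minimal workerocs MachineSet for one AZ, with
# CLUSTERID substituted the same way the blog's curl | sed pipeline does.
cat <<EOF | sed "s/CLUSTERID/$CLUSTERID/g" | oc apply -f -
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: CLUSTERID-workerocs-us-west-2a
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: CLUSTERID
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: CLUSTERID
      machine.openshift.io/cluster-api-machineset: CLUSTERID-workerocs-us-west-2a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: CLUSTERID
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: CLUSTERID-workerocs-us-west-2a
    spec:
      providerSpec:
        value:
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          kind: AWSMachineProviderConfig
          # m5.4xlarge matches the recommended OCS sizing discussed below.
          instanceType: m5.4xlarge
          placement:
            availabilityZone: us-west-2a
            region: us-west-2
EOF
```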

Check that you have new machines created.

oc get machines -n openshift-machine-api | egrep 'NAME|workerocs'

They may stay in the Pending phase for some time, so repeat the command above until they reach the Running state. The NAMEs of your machines will differ from those shown below.

Example output:

mshetty@mshetty-mac 4.7 % oc get machines -n openshift-machine-api | egrep 'NAME|workerocs'

NAME                                      PHASE         TYPE         REGION      ZONE         AGE
mytest-xgqs4-workerocs-us-west-2a-7lrcz   Provisioned   m5.4xlarge   us-west-2   us-west-2a   3m15s
mytest-xgqs4-workerocs-us-west-2b-tw48l   Provisioned   m5.4xlarge   us-west-2   us-west-2b   3m15s
mytest-xgqs4-workerocs-us-west-2c-pgdhh   Provisioned   m5.4xlarge   us-west-2   us-west-2c   3m15s

mshetty@mshetty-mac 4.7 % oc get machines -n openshift-machine-api | egrep 'NAME|workerocs'

NAME                                      PHASE     TYPE         REGION      ZONE         AGE
mytest-xgqs4-workerocs-us-west-2a-7lrcz   Running   m5.4xlarge   us-west-2   us-west-2a   6m38s
mytest-xgqs4-workerocs-us-west-2b-tw48l   Running   m5.4xlarge   us-west-2   us-west-2b   6m38s
mytest-xgqs4-workerocs-us-west-2c-pgdhh   Running   m5.4xlarge   us-west-2   us-west-2c   6m38s

AWS instances

You can see that the OCS worker machines are using the AWS EC2 instance type m5.4xlarge. The m5.4xlarge instance type follows our recommended instance sizing for OCS: 16 vCPU and 64 GB RAM. These instances are based on Intel Xeon Scalable processors and benefit from Ceph optimizations developed across multiple years of joint Intel and Red Hat enablement work for Ceph.

Now check whether your new machines have been added to the OCP cluster.

watch "oc get machinesets -n openshift-machine-api | egrep 'NAME|workerocs'"

This step could take more than 5 minutes. The result of this command needs to look like the output below before you proceed: all new OCS worker machinesets should show an integer (in this case 1) in every row under the READY and AVAILABLE columns. The NAMEs of your machinesets will differ from those shown below.

mshetty@mshetty-mac 4.7 % oc get machinesets -n openshift-machine-api | egrep 'NAME|workerocs'

NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
mytest-xgqs4-workerocs-us-west-2a   1         1         1       1           22m
mytest-xgqs4-workerocs-us-west-2b   1         1         1       1           22m
mytest-xgqs4-workerocs-us-west-2c   1         1         1       1           22m

Now check that you have 3 new OCP worker nodes. The NAMEs of your OCP nodes will differ from those shown below.

oc get nodes -l

Example output:

mshetty@mshetty-mac 4.7 % oc get nodes -l

NAME         STATUS   ROLES    AGE   VERSION
<worker-1>   Ready    worker   20m   v1.20.0+bafe72f
<worker-2>   Ready    worker   9h    v1.20.0+bafe72f
<worker-3>   Ready    worker   9h    v1.20.0+bafe72f
<worker-4>   Ready    worker   20m   v1.20.0+bafe72f
<worker-5>   Ready    worker   20m   v1.20.0+bafe72f
<worker-6>   Ready    worker   9h    v1.20.0+bafe72f

Installing the OCS Operator

In this section you will be using three of the worker OCP 4 nodes to deploy OCS 4 using the OCS Operator in OperatorHub.


You must create a namespace called openshift-storage as follows:

  1. Click Administration → Namespaces in the left pane of the OpenShift Web Console.

  2. Click Create Namespace.

  3. In the Create Namespace dialog box, enter openshift-storage for Name and openshift.io/cluster-monitoring=true for Labels. This label is required to get the dashboards.

  4. Select the No restrictions option for Default Network Policy.

  5. Click Create.
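The same namespace can be created from the CLI; a minimal equivalent of the steps above, assuming the standard cluster-monitoring label from the Red Hat documentation:

```shell
# Create the namespace and apply the label that enables the OCS dashboards.
oc create namespace openshift-storage
oc label namespace openshift-storage "openshift.io/cluster-monitoring=true"
```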

Red Hat namespaces

Procedure to install OpenShift Container Storage using the Red Hat OpenShift Container Platform (OCP) OperatorHub on Amazon Web Services (AWS):

  1. Log in to the Red Hat OpenShift Container Platform Web Console as the user kubeadmin.

  2. Click Operators → OperatorHub.

  3. Search for “OpenShift Container Storage” from the list of operators and click on it.

  4. On the OpenShift Container Storage Operator page, click Install.

  5. On the Create Operator Subscription page, the Installation Mode, Update Channel, and Approval Strategy options are available.

    • Select a specific namespace on the cluster for the Installation Mode option, and select the openshift-storage namespace from the drop-down menu.

    • The stable-4.7 channel is selected by default for the Update Channel option.

    • Select an Approval Strategy: Automatic specifies that you want OpenShift Container Platform to upgrade OpenShift Container Storage automatically; Manual specifies that you want to control upgrades of OpenShift Container Storage yourself.

  6. Click Subscribe.

The Installed Operators page is displayed with the status of the operator.


Click the Create Storage Cluster button; it will take you to the following page.


After clicking the Create button, wait for the status to change from Progressing to Ready.
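For reference, the console form generates a StorageCluster resource roughly like the sketch below. The device set name, count, capacity, and backing storage class shown are illustrative defaults, not values confirmed by this post:

```shell
# Illustrative sketch of a StorageCluster spec comparable to what the
# "Create Storage Cluster" form produces (values are assumptions).
cat <<EOF | oc apply -f -
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  storageDeviceSets:
  - name: ocs-deviceset-gp2
    count: 1
    replica: 3
    dataPVCTemplate:
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 2Ti
        storageClassName: gp2   # AWS gp2 EBS volumes back the OSDs
        volumeMode: Block
EOF
```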


Verify the OCS service

Click on the OpenShift Container Storage Operator to get to the OCS configuration screen.


At the top of the OCS configuration screen, scroll over to the Storage Cluster and make sure the Status is Ready.
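The same status is visible from the CLI; a quick sketch, assuming the kubeconfig from the install:

```shell
# The storage cluster should report Ready, and the operator CSV Succeeded.
oc get storagecluster -n openshift-storage
oc get csv -n openshift-storage
```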

mshetty@mshetty-mac 4.7 % oc -n openshift-storage get pods

NAME                                                              READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-2vdm9                                            3/3     Running     0          46m
csi-cephfsplugin-6sx77                                            3/3     Running     0          46m
csi-cephfsplugin-h8nff                                            3/3     Running     0          46m
csi-cephfsplugin-n4t7w                                            3/3     Running     0          46m
csi-cephfsplugin-nqpqp                                            3/3     Running     0          46m
csi-cephfsplugin-provisioner-6878df594-dhbz6                      6/6     Running     0          46m
csi-cephfsplugin-provisioner-6878df594-rg97m                      6/6     Running     0          46m
csi-cephfsplugin-sbs4l                                            3/3     Running     0          46m
csi-rbdplugin-brtq9                                               3/3     Running     0          46m
csi-rbdplugin-c9fpp                                               3/3     Running     0          46m
csi-rbdplugin-cbg7m                                               3/3     Running     0          46m
csi-rbdplugin-d8qtd                                               3/3     Running     0          46m
csi-rbdplugin-dwhc4                                               3/3     Running     0          46m
csi-rbdplugin-gnrn4                                               3/3     Running     0          46m
csi-rbdplugin-provisioner-85f54d8949-8vvrq                        6/6     Running     0          46m
csi-rbdplugin-provisioner-85f54d8949-vv289                        6/6     Running     0          46m
noobaa-core-0                                                     1/1     Running     0          44m
noobaa-db-pg-0                                                    1/1     Running     0          44m
noobaa-endpoint-7c8bbc7944-fjqws                                  1/1     Running     0          42m
noobaa-operator-b84757f57-hxvf4                                   1/1     Running     0          53m
ocs-metrics-exporter-54f65bc754-dwr7t                             1/1     Running     0          53m
ocs-operator-7b94f9cb5c-xqk8q                                     1/1     Running     0          53m
rook-ceph-crashcollector-ip-10-0-131-134-576767fc9-vfrjd          1/1     Running     0          45m
rook-ceph-crashcollector-ip-10-0-176-120-754b7c8dcf-zl9gx         1/1     Running     0          45m
rook-ceph-crashcollector-ip-10-0-202-68-5cd5fdb99b-hvtd4          1/1     Running     0          45m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-6fc9d7d6tgnvp   2/2     Running     0          43m
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-968df477qnkhq   2/2     Running     0          43m
rook-ceph-mgr-a-7dd4bc4898-qz8dz                                  2/2     Running     0          44m
rook-ceph-mon-a-55c55f6859-fczr9                                  2/2     Running     0          46m
rook-ceph-mon-b-774b5cf847-p9btn                                  2/2     Running     0          45m
rook-ceph-mon-c-59b8bb8dcc-4fctk                                  2/2     Running     0          45m
rook-ceph-operator-8b956db7f-cdkxq                                1/1     Running     0          53m
rook-ceph-osd-0-7ff7cb667d-vhtsn                                  2/2     Running     0          44m
rook-ceph-osd-1-6c6cd97d89-m4smh                                  2/2     Running     0          44m
rook-ceph-osd-2-57fb66f58c-rvcqq                                  2/2     Running     0          44m
rook-ceph-osd-prepare-ocs-deviceset-gp2-0-data-0mxwk4-sjc4v       0/1     Completed   0          44m
rook-ceph-osd-prepare-ocs-deviceset-gp2-1-data-0klkp7-zwvr5       0/1     Completed   0          44m
rook-ceph-osd-prepare-ocs-deviceset-gp2-2-data-08llwx-cxdh7       0/1     Completed   0          44m



You can create application pods on either OpenShift Container Storage nodes or non-OpenShift Container Storage nodes and run your applications. However, it is recommended that you apply a taint to the storage nodes to mark them for exclusive OpenShift Container Storage use and not run your application pods on these nodes. Because the tainted OpenShift nodes are dedicated to storage pods, they only require an OpenShift Container Storage subscription, not an OpenShift subscription.

To add a taint to a node, use the following command:

Mayurs-MacBook-Pro:4.3 mshetty$ oc adm taint nodes ip-10-0-140-7.ec2.internal node.ocs.openshift.io/storage=true:NoSchedule
node/ip-10-0-140-7.ec2.internal tainted

Mayurs-MacBook-Pro:4.3 mshetty$ oc adm taint nodes ip-10-0-153-95.ec2.internal node.ocs.openshift.io/storage=true:NoSchedule
node/ip-10-0-153-95.ec2.internal tainted

Mayurs-MacBook-Pro:4.3 mshetty$ oc adm taint nodes ip-10-0-169-227.ec2.internal node.ocs.openshift.io/storage=true:NoSchedule
node/ip-10-0-169-227.ec2.internal tainted
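As a quick check (not from the original post), you can list every node with its taints to confirm the storage taint landed on the three OCS nodes:

```shell
# Print NAME and TAINTS for each node; the OCS worker nodes should show
# the node.ocs.openshift.io/storage taint applied above.
oc get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints
```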

Getting to know the Storage Dashboard

You can now also check the status of your storage cluster with the OCS-specific dashboards included in your OpenShift Web Console. You can reach them by clicking Home in the left navigation bar, then selecting Dashboards, and finally clicking Persistent Storage in the top navigation bar of the content page.


Once this is all healthy, you will be able to use the three new StorageClasses created during the OCS 4 install:

  • ocs-storagecluster-ceph-rbd

  • ocs-storagecluster-cephfs

  • openshift-storage.noobaa.io


You can see these three StorageClasses in the OpenShift Web Console by expanding the Storage menu in the left navigation bar and selecting Storage Classes. You can also run the command below:

mshetty@mshetty-mac 4.7 % oc -n openshift-storage get sc

NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gp2 (default)                 kubernetes.io/aws-ebs                   Delete          WaitForFirstConsumer   true                   19h
gp2-csi                       ebs.csi.aws.com                         Delete          WaitForFirstConsumer   true                   19h
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              true                   147m
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              true                   147m
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  143m

mshetty@mshetty-mac 4.7 %

Please make sure the three storage classes are available in your cluster before proceeding.
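As a quick smoke test (not part of the original post), you can request a volume from the new Ceph RBD class with a hypothetical PVC; the claim name and size here are examples:

```shell
# Create a small test PVC against the OCS Ceph RBD storage class.
cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-rbd-pvc
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5Gi
  storageClassName: ocs-storagecluster-ceph-rbd
EOF
oc get pvc demo-rbd-pvc -n default   # should reach STATUS Bound
```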

(Note: The NooBaa pod uses the ocs-storagecluster-ceph-rbd storage class to create a PVC for mounting to its db container.)

You can access the Noobaa Dashboard by clicking on the “Multicloud Object Gateway” in the Object Services tab on the OpenShift Console.
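To create a bucket through the MCG, as promised at the start of this post, you can submit an ObjectBucketClaim against the openshift-storage.noobaa.io storage class; the claim name below is a hypothetical example:

```shell
# Ask the Multicloud Object Gateway for an S3 bucket via an ObjectBucketClaim.
cat <<EOF | oc apply -f -
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: demo-bucket-claim
  namespace: default
spec:
  generateBucketName: demo-bucket
  storageClassName: openshift-storage.noobaa.io
EOF
# S3 credentials and the endpoint/bucket name land in a Secret and a
# ConfigMap named after the claim, ready to mount into an application pod.
oc get secret demo-bucket-claim -n default
oc get configmap demo-bucket-claim -n default
```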


In this post we have seen how to add AWS instances with Intel Xeon Scalable Processors for Red Hat® OpenShift® Data Foundation—previously Red Hat OpenShift Container Storage—for persistent software-defined storage integrated with and optimized for Red Hat OpenShift Container Platform. We also saw how to use the OpenShift administrator console for dynamic, stateful, and highly available container-native storage that can be provisioned and de-provisioned on demand.


Written by Mayur Shetty, Principal Solution Architect, Red Hat and Raghu Moorthy, Principal Engineer, Intel Inc.