How to deliver Persistent Storage to your OLCNE Kubernetes Cluster

In this blog I am going to show how we can use the Pure Service Orchestrator (PSO) Container Storage Interface (CSI) driver to deliver persistent storage to a 3-node Oracle Linux Cloud Native Environment (OLCNE) Kubernetes cluster.

For this blog I will be using Vagrant and VirtualBox. If you do not already have Vagrant and VirtualBox installed, you can download them from their respective websites.

We will also be using the Oracle-provided Vagrant build for the Oracle Linux Cloud Native Environment (OLCNE).
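You can quickly confirm that both tools are installed by checking their versions, e.g.

$ vagrant --version
$ VBoxManage --version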

Getting Started

Clone the GitHub hosted Oracle vagrant-boxes repository to your machine.

$ git clone https://github.com/oracle/vagrant-boxes

Set DEPLOY_HELM=true to install Helm.

$ export DEPLOY_HELM=true

Navigate to ../vagrant-boxes/OLCNE and use vagrant up to start the 3 virtual machines.

$ vagrant up
Bringing machine 'worker1' up with 'virtualbox' provider...
Bringing machine 'worker2' up with 'virtualbox' provider...
Bringing machine 'master1' up with 'virtualbox' provider...
...
    master1: ===== Your Oracle Linux Cloud Native Environment is operational. =====
    master1: NAME                 STATUS   ROLES    AGE     VERSION
    master1: master1.vagrant.vm   Ready    master   7m44s   v1.17.9+1.0.5.el7
    master1: worker1.vagrant.vm   Ready    <none>   7m12s   v1.17.9+1.0.5.el7
    master1: worker2.vagrant.vm   Ready    <none>   6m57s   v1.17.9+1.0.5.el7

Once complete, check the status of the VMs with vagrant status, e.g.

$ vagrant status
Current machine states:
master1                   running (virtualbox)
worker1                   running (virtualbox)
worker2                   running (virtualbox)

and vagrant hosts list e.g.

$ vagrant hosts list
192.168.99.112 worker2.vagrant.vm worker2
192.168.99.111 worker1.vagrant.vm worker1
192.168.99.101 master1.vagrant.vm master1

Using Kubernetes

To start using our 3-node cluster we need to log on to our master VM.

$ vagrant ssh master1

Welcome to Oracle Linux Server release 7.8 (GNU/Linux 4.14.35-1902.301.1.el7uek.x86_64)

The Oracle Linux End-User License Agreement can be viewed here:

  * /usr/share/eula/eula.en_US

For additional packages, updates, documentation and community help, see:

  * https://yum.oracle.com/
[vagrant@master1 ~]$ 

If you want to see what packages OLCNE has installed, try yum list installed, e.g.

[vagrant@master1 ~]$ yum list installed *olcne*
Installed Packages
olcne-agent.x86_64              1.1.5-2.el7   @ol7_olcne11
olcne-api-server.x86_64         1.1.5-2.el7   @ol7_olcne11
olcne-nginx.x86_64              1.1.5-2.el7   @ol7_olcne11
olcne-utils.x86_64              1.1.5-2.el7   @ol7_olcne11
olcnectl.x86_64                 1.1.5-2.el7   @ol7_olcne11
oracle-olcne-release-el7.x86_64 1.0-5.el7     @ol7_latest 

We can see our 3 available nodes with kubectl get nodes e.g.

[vagrant@master1 ~]$ kubectl get nodes -o wide
NAME                 STATUS   ROLES    AGE    VERSION             INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                  KERNEL-VERSION                     CONTAINER-RUNTIME
master1.vagrant.vm   Ready    master   119m   v1.17.9+1.0.5.el7   192.168.99.101   <none>        Oracle Linux Server 7.8   4.14.35-1902.301.1.el7uek.x86_64   cri-o://1.17.0
worker1.vagrant.vm   Ready    <none>   119m   v1.17.9+1.0.5.el7   192.168.99.111   <none>        Oracle Linux Server 7.8   4.14.35-1902.301.1.el7uek.x86_64   cri-o://1.17.0
worker2.vagrant.vm   Ready    <none>   119m   v1.17.9+1.0.5.el7   192.168.99.112   <none>        Oracle Linux Server 7.8   4.14.35-1902.301.1.el7uek.x86_64   cri-o://1.17.0 

Use kubectl cluster-info to get information about our cluster.

[vagrant@master1 ~]$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.99:6443
KubeDNS is running at https://192.168.99.99:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

And use kubectl get pods to check the status of the pods.

[vagrant@master1 ~]$ kubectl get pods -n kube-system
NAME                                       READY STATUS  RESTARTS AGE
coredns-764bf46ff4-578xn                   1/1   Running 0        125m
coredns-764bf46ff4-cx8r5                   1/1   Running 0        125m
etcd-master1.vagrant.vm                    1/1   Running 0        126m
kube-apiserver-master1.vagrant.vm          1/1   Running 0        126m
kube-controller-manager-master1.vagrant.vm 1/1   Running 0        126m
kube-flannel-ds-8644k                      1/1   Running 0        118m
kube-flannel-ds-h5fxv                      1/1   Running 0        118m
kube-flannel-ds-q98g7                      1/1   Running 0        118m
kube-proxy-lcsn5                           1/1   Running 0        125m
kube-proxy-r2p87                           1/1   Running 0        125m
kube-proxy-vrlbg                           1/1   Running 0        125m
kube-scheduler-master1.vagrant.vm          1/1   Running 0        126m
kubernetes-dashboard-748699dcb4-7qm5c      1/1   Running 0        125m

iSCSI packages

If all has gone well we should now have a 3 node Kubernetes cluster up and running.

Check to see if you have the following packages installed; otherwise, install them manually as shown below. I have updated the Oracle-provided script/provision.sh script to do this for me.
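You can check whether the packages are already present by querying the RPM database on each node, e.g.

[root@master1 ~]# rpm -q device-mapper-multipath iscsi-initiator-utils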

[vagrant@master1 ~]$ sudo su -
[root@master1 ~]# yum install -y device-mapper-multipath
[root@master1 ~]# yum install -y iscsi-initiator-utils

Remember to install device-mapper-multipath and the iSCSI initiator utilities on all nodes, e.g. worker1 and worker2.
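Depending on your base image you may also need to enable and start the multipath and iSCSI services before PSO can attach volumes. A minimal sketch, assuming the standard Oracle Linux 7 service names, run as root on each node:

[root@worker1 ~]# mpathconf --enable --with_multipathd y
[root@worker1 ~]# systemctl enable --now multipathd iscsid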

Container Storage Interface (CSI) Driver

The Pure Service Orchestrator (PSO) CSI driver is installed with Helm. First, add the Pure Storage Helm chart repository, update it, and check that the pure-csi chart is available.

[vagrant@master1 ~]$ helm repo add pure https://purestorage.github.io/helm-charts
"pure" has been added to your repositories
[vagrant@master1 ~]$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "pure" chart repository
Update Complete. ⎈ Happy Helming!⎈  
[vagrant@master1 ~]$ helm search repo pure-csi
NAME                 CHART VERSION APP VERSION DESCRIPTION                                       
pure/pure-csi        1.2.0        1.2.0        A Helm chart for Pure Service Orchestrator CSI ...

Create a Kubernetes namespace for PSO.

[vagrant@master1 ~]$ kubectl create namespace pso-namespace
namespace/pso-namespace created

Before we perform our Helm install, let’s try a dry run to make sure our YAML values file is OK.
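The pure-csi.yaml values file tells PSO which FlashArray(s) to manage. I am not reproducing my real file here, but a minimal sketch looks like this, with a placeholder management endpoint and API token (substitute your own FlashArray details):

arrays:
  FlashArrays:
    - MgmtEndPoint: "192.168.99.200"                       # placeholder FlashArray management address
      APIToken: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"     # placeholder FlashArray API token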

[vagrant@master1 ~]$ cd /vagrant/
[vagrant@master1 vagrant]$ helm install pure-storage-driver pure/pure-csi --namespace pso-namespace -f pure-csi.yaml --dry-run --debug

Great. Now let’s run it again, but this time for real.

[vagrant@master1 vagrant]$ helm install pure-storage-driver pure/pure-csi --namespace pso-namespace -f pure-csi.yaml
NAME: pure-storage-driver
LAST DEPLOYED: Wed Sep  9 12:55:17 2020
NAMESPACE: pso-namespace
STATUS: deployed
REVISION: 1
TEST SUITE: None

We can also now see our new CSI storage classes.

[vagrant@master1 vagrant]$ kubectl get storageclass 
NAME       PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION
pure       pure-csi    Delete        Immediate         true       
pure-block pure-csi    Delete        Immediate         true        
pure-file  pure-csi    Delete        Immediate         true      

Creating Persistent Volumes

I will now create a Kubernetes deployment with 2 block volumes using the Pure Service Orchestrator (PSO) and present them to a container.
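I am not reproducing my exact testdeploy.yaml here, but a sketch along the following lines creates the same objects: two pure-block claims (1Gi Read-Write-Once and 2Gi Read-Write-Many) and a deployment that consumes them. The nginx image, the labels, the Block volume mode and the /dev/xvda device path for the shared volume are illustrative assumptions:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc01
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: pure-block
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc02
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Block                  # assumption: shared volume attached as a raw block device
  storageClassName: pure-block
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: pso-test                  # illustrative label
  template:
    metadata:
      labels:
        app: pso-test
    spec:
      containers:
        - name: nginx
          image: nginx               # illustrative image
          volumeMounts:
            - name: data01
              mountPath: /usr/share/nginx/html   # RWO volume mounted here
          volumeDevices:
            - name: data02
              devicePath: /dev/xvda              # illustrative device path for the RWX volume
      volumes:
        - name: data01
          persistentVolumeClaim:
            claimName: test-pvc01
        - name: data02
          persistentVolumeClaim:
            claimName: test-pvc02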

[vagrant@master1 vagrant]$ kubectl apply -f testdeploy.yaml
persistentvolumeclaim/test-pvc01 created
persistentvolumeclaim/test-pvc02 created
deployment.apps/test-deployment created

The above has created two CSI persistent volumes using the pure-block storage class.

Below, you can see I have a 1G Read-Write-Once (RWO) volume and a second 2G Read-Write-Many (RWX) volume.

[vagrant@master1 vagrant]$ kubectl get pvc
NAME         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc01   Bound    pvc-db53b670-3b20-43fe-8a3c-9f1d85fba0ae   1Gi        RWO            pure-block     26s
test-pvc02   Bound    pvc-7cdab708-71a4-48fb-aeec-3a9634889425   2Gi        RWX            pure-block     26s

The Read-Write-Many (RWX) volume can be presented to multiple containers as shared storage, for example for Oracle RAC, as sketched below.
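Because test-pvc02 is Read-Write-Many, a second pod can attach the very same claim at the same time. A minimal sketch (the pod name, image and device path are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: rwx-consumer                  # hypothetical pod name
spec:
  containers:
    - name: app
      image: oraclelinux:7-slim       # illustrative image
      command: ["sleep", "infinity"]
      volumeDevices:
        - name: shared
          devicePath: /dev/xvdb       # illustrative device path
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: test-pvc02         # the existing RWX claim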

We can see our 2 newly created CSI volumes from the FlashArray CLI and Web UI.

[Screenshot: Pure FlashArray purevol list showing the CSI persistent volumes]

The FlashArray volume serial numbers are 786ED6041D784C58000113FB and 786ED6041D784C58000113FC.

From the Kubernetes master VM we can see that our container is running on ‘worker2’.

[Screenshot: kubectl get pods -o wide output showing the pod scheduled on worker2]

And from the FlashArray we can confirm that the 2 volumes have been connected to ‘worker2’.

[Screenshot: Pure Storage FlashArray showing the persistent volume claims connected via iSCSI]

The Linux volume UUID will be set as the Vendor ID + the lowercase FlashArray volume serial number, e.g. 3624a9370 + 786ED6041D784C58000113FB = 3624a9370786ed6041d784c58000113fb.
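A quick way to confirm this from the worker node itself is to look for the multipath device, e.g. on worker2:

[root@worker2 ~]# ls -l /dev/mapper/ | grep 3624a9370
[root@worker2 ~]# multipath -ll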

OK, let’s see what pods we have running.

[vagrant@master1 vagrant]$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
test-deployment-68bd4876db-5lzbs   1/1     Running   0          11m

Let’s shell into our container using kubectl exec -it pods/<pod name> -- /bin/bash.
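For example, using the pod name from the output above (the in-container prompt is illustrative):

[vagrant@master1 vagrant]$ kubectl exec -it pods/test-deployment-68bd4876db-5lzbs -- /bin/bash
root@test-deployment-68bd4876db-5lzbs:/# df -h /usr/share/nginx/html
root@test-deployment-68bd4876db-5lzbs:/# mount | grep nginx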

From within our container we can see our CSI-managed persistent Read-Write-Once (RWO) volume is presented as /dev/mapper/3624a9370786ed6041d784c58000113fb and mounted at ‘/usr/share/nginx/html’ as an XFS filesystem.

[Screenshot: the RWO PSO volume seen from inside the container]

We can also see the 2G Read-Write-Many (RWX) PSO volume using fdisk -l <device>.
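Following the naming rule above, the RWX volume (serial 786ED6041D784C58000113FC) maps to /dev/mapper/3624a9370786ed6041d784c58000113fc, so for example from worker2:

[root@worker2 ~]# fdisk -l /dev/mapper/3624a9370786ed6041d784c58000113fc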

[Screenshot: the RWX PSO volume shown with fdisk -l]

Summary

In this blog I have shown how we can use the Pure Service Orchestrator (PSO) to provision Read-Write-Once (RWO) and Read-Write-Many (RWX) persistent storage volumes to an Oracle Linux Cloud Native Environment (OLCNE) Kubernetes cluster.
