How to use Portworx with the Oracle Container Engine for Kubernetes (OKE)


In this blog I will show how we can use Portworx with the Oracle Container Engine for Kubernetes (OKE) service within the Oracle Cloud.

For this post I created a 5-node OKE cluster and five block volumes, attaching one volume to each compute instance using iSCSI.
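
For reference, the iSCSI attach commands the OCI console generates for each block volume look like the ones below; the IQN and IP address are placeholders, and the console provides the exact values to copy and paste for each volume:

# register the target, set it to log in automatically on boot, then log in now
sudo iscsiadm -m node -o new -T <volume-IQN> -p <iSCSI-IP>:3260
sudo iscsiadm -m node -o update -T <volume-IQN> -n node.startup -v automatic
sudo iscsiadm -m node -T <volume-IQN> -p <iSCSI-IP>:3260 -l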

(Screenshots: OCI Compute Instances, OCI Block Volumes, and the attached Block Volumes.)

If we SSH into one of the worker nodes as the opc user we can see our block volume has been attached as /dev/oracleoci/oraclevdb:

[opc@oke-crdcodggy2d-nqtgmjsgrrw-sdu7ner4aca-0 ~]$ ls -l /dev/oracleoci/
total 0
lrwxrwxrwx. 1 root root 6 Jan  7 14:09 oraclevda -> ../sda
lrwxrwxrwx. 1 root root 7 Jan  7 14:09 oraclevda1 -> ../sda1
lrwxrwxrwx. 1 root root 7 Jan  7 14:09 oraclevda2 -> ../sda2
lrwxrwxrwx. 1 root root 7 Jan  7 14:09 oraclevda3 -> ../sda3
lrwxrwxrwx. 1 root root 6 Jan  7 14:09 oraclevdb -> ../sdb

From the above we can see oraclevdb is a symlink to /dev/sdb; this is the device Portworx will discover and use later in the blog. Running lsblk confirms the 50GB disk is present:

[opc@oke-crdcodggy2d-nqtgmjsgrrw-sdu7ner4aca-0 ~]$ lsblk
NAME   MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sdb      8:16   0   50G  0 disk 
sda      8:0    0   50G  0 disk 
├─sda2   8:2    0    8G  0 part 
├─sda3   8:3    0 38.4G  0 part /
└─sda1   8:1    0  200M  0 part /boot/efi
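
Portworx will only claim drives that are not already in use, so it is worth checking that /dev/sdb carries no filesystem; an empty FSTYPE column in the lsblk -f output confirms the disk is clean:

[opc@oke-crdcodggy2d-nqtgmjsgrrw-sdu7ner4aca-0 ~]$ lsblk -f /dev/sdb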

In future posts I will provide a detailed OCI & OKE walkthrough for the above steps.

Kubernetes Cluster Info

Before we start, let's have a look at our Kubernetes environment, using kubectl cluster-info to get information about our cluster:

[opc@px-operator ~]$ kubectl cluster-info
Kubernetes master is running at https://147.XXX.XXX.XX:6443
CoreDNS is running at https://147.XXX.XXX.XX:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Kubernetes Version

We can confirm the version of our Oracle Cloud Infrastructure (OCI) Kubernetes cluster with kubectl version:

[opc@px-operator ~]$ kubectl version --short | awk -Fv '/Server Version: / {print $3}'
1.18.10

List Kubernetes Nodes

We can use kubectl get nodes to list the 5 nodes in our OKE cluster:

[opc@px-operator ~]$ kubectl get nodes
NAME        STATUS   ROLES   AGE    VERSION
10.0.64.2   Ready    node    102m   v1.18.10
10.0.64.3   Ready    node    102m   v1.18.10
10.0.64.4   Ready    node    102m   v1.18.10
10.0.64.5   Ready    node    102m   v1.18.10
10.0.64.6   Ready    node    102m   v1.18.10

The Oracle Cloud provides the instance shape, region, zone, failure domain, hostname and other details to Kubernetes, which we can see with kubectl get nodes --show-labels.

[opc@px-operator ~]$ kubectl get nodes --show-labels 10.0.64.2
NAME        STATUS   ROLES   AGE    VERSION    LABELS
10.0.64.2   Ready    node    104m   v1.18.10   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=VM.Standard.E2.1,beta.kubernetes.io/os=linux,displayName=oke-crdcodggy2d-nqtgmjsgrrw-sdu7ner4aca-0,failure-domain.beta.kubernetes.io/region=uk-london-1,failure-domain.beta.kubernetes.io/zone=UK-LONDON-1-AD-1,hostname=oke-crdcodggy2d-nqtgmjsgrrw-sdu7ner4aca-0,internal_addr=10.0.64.2,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.64.2,kubernetes.io/os=linux,node...

Label nodes for Portworx

We can mark our Kubernetes nodes as available for Portworx by applying the px/metadata-node label.

[opc@px-operator ~]$ kubectl label nodes 10.0.64.2 10.0.64.3 10.0.64.4 10.0.64.5 10.0.64.6 px/metadata-node=true
node/10.0.64.2 labeled
node/10.0.64.3 labeled
node/10.0.64.4 labeled
node/10.0.64.5 labeled
node/10.0.64.6 labeled
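
Alternatively, if every node in the cluster should be available to Portworx, kubectl can label them all in one go:

[opc@px-operator ~]$ kubectl label nodes --all px/metadata-node=true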

If we repeat the kubectl get nodes --show-labels command we can see our new px/metadata-node label.

[opc@px-operator ~]$  kubectl get nodes --show-labels 10.0.64.2
NAME        STATUS   ROLES   AGE    VERSION    LABELS
10.0.64.2   Ready    node    121m   v1.18.10   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=VM.Standard.E2.1,beta.kubernetes.io/os=linux,displayName=oke-crdcodggy2d-nqtgmjsgrrw-sdu7ner4aca-0,failure-domain.beta.kubernetes.io/region=uk-london-1,failure-domain.beta.kubernetes.io/zone=UK-LONDON-1-AD-1,hostname=oke-crdcodggy2d-nqtgmjsgrrw-sdu7ner4aca-0,internal_addr=10.0.64.2,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.64.2,kubernetes.io/os=linux,node...,px/metadata-node=true

Generate Portworx Specification

Now that we have our Kubernetes cluster up and running, we can head over to https://central.portworx.com/ to create a Portworx specification using the wizard.

For this post I will be using the Portworx ‘Free forever’ Essentials option for my 5 OKE nodes.

Storage Tab

On the Storage Tab, we need to select On-Premises, as the Spec Generator currently does not include a wizard for OCI.

(Screenshots: the wizard's Network and Customise tabs.)

From the Operator wizard we can choose to either Download or Save the spec.
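
If you choose Download, the generated spec can also be fetched and reviewed from the shell before applying it; a quick sketch using curl, with the generated URL abbreviated (it is the same URL used in the apply step below):

curl -Lo px-spec.yaml 'https://install.portworx.com/2.6?operator=true&...'
less px-spec.yaml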


Install Portworx Operator

From our shell, install the Portworx Operator using kubectl apply:

[opc@px-operator ~]$ kubectl apply -f 'https://install.portworx.com/2.6?comp=pxoperator'
serviceaccount/portworx-operator created
clusterrole.rbac.authorization.k8s.io/portworx-operator created
clusterrolebinding.rbac.authorization.k8s.io/portworx-operator created
deployment.apps/portworx-operator created
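
Before applying the spec, we can confirm the Operator rollout has completed using the portworx-operator deployment created above:

[opc@px-operator ~]$ kubectl -n kube-system rollout status deployment/portworx-operator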

Apply specification

Apply the specification we created in PX-Central:

[opc@px-operator ~]$ kubectl apply -f 'https://install.portworx.com/2.6?operator=true&mc=false&kbver=&oem=esse&user=6fxxxxxxxxxxxxxxxxxxxxxxx&b=true&c=px-cluster-dxxxxxxxxxxxxxxxxxxxxx8e&stork=true&lh=true&st=k8s'
storagecluster.core.libopenstorage.org/px-cluster-dxxxxxxxxxxxxxxxxxxxxxxxxxx18e created
secret/px-essential created
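
The apply created a StorageCluster custom resource that the Operator now reconciles; we can inspect it at any time to see the overall status of the rollout:

[opc@px-operator ~]$ kubectl get storagecluster -n kube-system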

Monitor Progress

Enter the kubectl get pods command below and wait until all the Portworx pods show a STATUS of Running.

[opc@px-operator ~]$ kubectl get pods -n kube-system -l name=portworx
NAME                                                  READY STATUS RESTARTS AGE
px-cluster-f484d694-ff61-4e97-88de-c7b311c21011-c7sgg 1/1   Running   0     25m
px-cluster-f484d694-ff61-4e97-88de-c7b311c21011-crf22 1/1   Running   0     25m
px-cluster-f484d694-ff61-4e97-88de-c7b311c21011-mwqlf 1/1   Running   0     25m
px-cluster-f484d694-ff61-4e97-88de-c7b311c21011-pcx2n 1/1   Running   0     25m
px-cluster-f484d694-ff61-4e97-88de-c7b311c21011-zpbgr 1/1   Running   0     25m
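
Rather than polling, kubectl wait can block until every Portworx pod reports Ready; the 10-minute timeout here is an arbitrary choice:

[opc@px-operator ~]$ kubectl -n kube-system wait --for=condition=Ready pod -l name=portworx --timeout=600s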

Portworx Status

To finish, let's use pxctl to check the status and confirm that the 50GB volumes we attached to each OKE node have been discovered and used by Portworx.

[opc@px-operator ~]$ PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
[opc@px-operator ~]$ kubectl exec -it $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status
Status: PX is operational
License: PX-Essential (lease renewal in 23h, 32m)
Node ID: f84e8d5c-ade5-4171-91c6-7f54ed0c874d
       IP: 10.0.64.4 
       Local Storage Pool: 1 pool
       POOL IO_PRIORITY RAID_LEVEL USABLE USED    STATUS ZONE             REGION
          0 LOW         raid0      50 GiB 6.1 GiB Online UK-LONDON-1-AD-3 uk-london-1
       Local Storage Devices: 1 device
       Device Path     Media Type              Size   Last-Scan
       0:1    /dev/sdb STORAGE_MEDIUM_MAGNETIC 50 GiB 07 Jan 21 14:09 UTC
       * Internal kvdb on this node is sharing this storage device /dev/sdb  to store its data.
       total   -       50 GiB
       Cache Devices:
       * No cache devices
Cluster Summary
        Cluster ID: px-cluster-f484d694-ff61-4e97-88de-c7b311c21011
        Cluster UUID: 4ed9084a-07bc-418f-9b36-66cd9f8d6f7f
        Scheduler: kubernetes
        Nodes: 5 node(s) with storage (5 online)
        IP ID     SchedulerNodeName StorageNode  Used Capacity Status StorageStatus Version Kernel OS
        10.0.64.4 f84e8d5c-ade5-4171-91c6-7f54ed0c874d 10.0.64.4 Yes 6.1 GiB 50 GiB Online Up (This node) 2.6.1.6-3409af2 4.14.35-1902.302.2.el7uek.x86_64 Oracle Linux Server 7.8
        10.0.64.3 8f3b8afd-cd56-4bf5-acfa-c1b4ee80a8b6 10.0.64.3 Yes 6.1 GiB 50 GiB Online Up 2.6.1.6-3409af2 4.14.35-1902.302.2.el7uek.x86_64 Oracle Linux Server 7.8
        10.0.64.2 789aff6a-9986-455e-a3ac-b62d9bb6a0e2 10.0.64.2 Yes 6.0 GiB 50 GiB Online Up 2.6.1.6-3409af2 4.14.35-1902.302.2.el7uek.x86_64 Oracle Linux Server 7.8
        10.0.64.6 2b6296ca-27ff-4090-a703-d3d8866f6936 10.0.64.6 Yes 6.0 GiB 50 GiB Online Up 2.6.1.6-3409af2 4.14.35-1902.302.2.el7uek.x86_64 Oracle Linux Server 7.8
        10.0.64.5 0f7c3773-2da9-42d2-a23d-b5eadf572d7b 10.0.64.5 Yes 6.1 GiB 50 GiB Online Up 2.6.1.6-3409af2 4.14.35-1902.302.2.el7uek.x86_64 Oracle Linux Server 7.8
        Warnings: 
                WARNING: Insufficient CPU resources. Detected: 2 cores, Minimum required: 4 cores
                WARNING: Persistent journald logging is not enabled on this node.
                WARNING: Internal Kvdb is not using dedicated drive on nodes [10.0.64.3 10.0.64.5 10.0.64.4]. This configuration is not recommended for production clusters.
Global Storage Pool
        Total Used     :  30 GiB
        Total Capacity :  250 GiB

From the above we can see Portworx has discovered the 50GB volumes we attached to our 5 OKE nodes and is correctly reporting a Total Capacity of 250 GiB.
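
As a convenience, a shell alias saves typing the full kubectl exec for every pxctl invocation; this reuses the PX_POD variable we set above:

[opc@px-operator ~]$ alias pxctl='kubectl exec -n kube-system ${PX_POD} -- /opt/pwx/bin/pxctl'
[opc@px-operator ~]$ pxctl status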

During the next few months I will share more posts on the use of OCI, OKE and Portworx.
