Portworx Enterprise 2.8 installation on Oracle Kubernetes Engine (OKE)

Oracle Kubernetes Engine (OKE) Cluster

Let’s start by creating an Oracle Kubernetes Engine cluster on Oracle Cloud Infrastructure (OCI).

You can use the OCI Web UI to manually create an OKE Cluster within your OCI Compartment.

Or, if you prefer, you can use my Terraform automation, which is available on GitHub.

Before you start, consult the Portworx documentation for the latest supported Kubernetes and kernel versions.

For this blog I will be using Kubernetes v1.20.11 and Oracle Linux 7.8, both supported by Portworx 2.8.

My Terraform script created a new 3-node OKE cluster in the UK-London-1 region.

I have also created 3 x 100GB Block Volumes, one per Availability Domain (AD), and attached one to each compute node, again using Terraform.

Note: The UK-London-1 region has 3 Availability Domains (ADs), but some regions are limited to a single AD; check the OCI Region documentation.
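If you want to double-check the block volume attachments before installing Portworx, the OCI CLI can list them per compartment; this is just an optional sanity check (substitute your own compartment OCID), not part of the Terraform run.

% oci compute volume-attachment list --compartment-id <compartment-ocid> --output table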

Oracle Kubernetes Engine (OKE)

To access the OKE cluster from your laptop, select your compartment, navigate to Containers -> Clusters -> Cluster Details, click ‘Access Cluster’, and follow the instructions for Local Access.

% oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.uk-london-1.aaaaaaaayss.... --file $HOME/.kube/config --region uk-london-1 --token-version 2.0.0
Existing Kubeconfig file found at /Users/rekins/.kube/config and new config merged into it
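Because the new config was merged into an existing kubeconfig, it is worth confirming that kubectl is pointing at the OKE cluster before going any further; the context name below is whatever OKE generated for your cluster.

% kubectl config get-contexts
% kubectl config use-context <oke-context-name>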

Using kubectl get nodes we can see the 3 worker nodes running Kubernetes v1.20.11 and Oracle Linux 7.8.

% kubectl get nodes -o wide
NAME         STATUS   ROLES   AGE   VERSION    INTERNAL-IP   EXTERNAL-IP     OS-IMAGE                  KERNEL-VERSION                     CONTAINER-RUNTIME
10.0.1.157   Ready    node    48m   v1.20.11   10.0.1.157    132.145.33.86   Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.2
10.0.1.163   Ready    node    48m   v1.20.11   10.0.1.163    144.21.51.173   Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.2
10.0.1.205   Ready    node    48m   v1.20.11   10.0.1.205    132.145.69.16   Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.2

Again, we can use kubectl get nodes, but this time with the --show-labels option to see further details.

% kubectl get nodes --show-labels
NAME         STATUS   ROLES   AGE   VERSION    LABELS
10.0.1.157   Ready    node    52m   v1.20.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=VM.Standard2.1,beta.kubernetes.io/os=linux,displayName=oke-cgr3bekcbka-n5k6rizv7zq-sbgpx554ada-1,failure-domain.beta.kubernetes.io/region=uk-london-1,failure-domain.beta.kubernetes.io/zone=UK-LONDON-1-AD-3,hostname=oke-cgr3bekcbka-n5k6rizv7zq-sbgpx554ada-1,internal_addr=10.0.1.157,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.1.157,kubernetes.io/os=linux,node-role.kubernetes.io/node=,node.info.ds_proxymux_client=true,node.info/compartment.id_prefix=ocid1.compartment.oc1,node.info/compartment.id_suffix=aaaaaaaauktztjuam57uhird7up5gxp32svboo6bmjfxvotkja75tfndc2sq,node.info/compartment.name=tf-compartment,node.info/kubeletVersion=v1.20,oci.oraclecloud.com/fault-domain=FAULT-DOMAIN-2,oke.oraclecloud.com/node.info.private_subnet=false,oke.oraclecloud.com/node.info.private_worker=true,oke.oraclecloud.com/tenant_agent.version=1.38.5-8f5c194266-712,px/enabled=true,px/metadata-node=true
10.0.1.163   Ready    node    52m   v1.20.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=VM.Standard2.1,beta.kubernetes.io/os=linux,displayName=oke-cgr3bekcbka-n5k6rizv7zq-sbgpx554ada-0,failure-domain.beta.kubernetes.io/region=uk-london-1,failure-domain.beta.kubernetes.io/zone=UK-LONDON-1-AD-2,hostname=oke-cgr3bekcbka-n5k6rizv7zq-sbgpx554ada-0,internal_addr=10.0.1.163,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.1.163,kubernetes.io/os=linux,last-migration-failure=get_kubesvc_failure,node-role.kubernetes.io/node=,node.info.ds_proxymux_client=true,node.info/compartment.id_prefix=ocid1.compartment.oc1,node.info/compartment.id_suffix=aaaaaaaauktztjuam57uhird7up5gxp32svboo6bmjfxvotkja75tfndc2sq,node.info/compartment.name=tf-compartment,node.info/kubeletVersion=v1.20,oci.oraclecloud.com/fault-domain=FAULT-DOMAIN-1,oke.oraclecloud.com/node.info.private_subnet=false,oke.oraclecloud.com/node.info.private_worker=true,oke.oraclecloud.com/tenant_agent.version=1.38.5-8f5c194266-712,px/enabled=true,px/metadata-node=true
10.0.1.205   Ready    node    52m   v1.20.11   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/instance-type=VM.Standard2.1,beta.kubernetes.io/os=linux,displayName=oke-cgr3bekcbka-n5k6rizv7zq-sbgpx554ada-2,failure-domain.beta.kubernetes.io/region=uk-london-1,failure-domain.beta.kubernetes.io/zone=UK-LONDON-1-AD-1,hostname=oke-cgr3bekcbka-n5k6rizv7zq-sbgpx554ada-2,internal_addr=10.0.1.205,kubernetes.io/arch=amd64,kubernetes.io/hostname=10.0.1.205,kubernetes.io/os=linux,node-role.kubernetes.io/node=,node.info.ds_proxymux_client=true,node.info/compartment.id_prefix=ocid1.compartment.oc1,node.info/compartment.id_suffix=aaaaaaaauktztjuam57uhird7up5gxp32svboo6bmjfxvotkja75tfndc2sq,node.info/compartment.name=tf-compartment,node.info/kubeletVersion=v1.20,oci.oraclecloud.com/fault-domain=FAULT-DOMAIN-3,oke.oraclecloud.com/node.info.private_subnet=false,oke.oraclecloud.com/node.info.private_worker=true,oke.oraclecloud.com/tenant_agent.version=1.38.5-8f5c194266-712,px/enabled=true,px/metadata-node=true
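Notice the px/enabled=true and px/metadata-node=true labels on each worker. Purely as an illustration, a label selector is a quick way to list only the nodes carrying a given label:

% kubectl get nodes -l px/enabled=true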

Portworx Enterprise

Now that we have our Kubernetes cluster up and running, we can head over to https://central.portworx.com/ to create a Portworx specification using the wizard.

For this post I will be using the Portworx Enterprise edition for my 3 OKE nodes.

And the Kubernetes Operator as the deployment method.

Basic Tab
Storage Tab

On the Storage Tab, we need to select On-Premises, as the Spec Generator currently does not include a wizard for Oracle Cloud Infrastructure (OCI).
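For reference, the On-Premises choice surfaces in the spec.storage section of the generated StorageCluster manifest; a rough sketch, assuming the /dev/sdb devices seen later in the pxctl status output, looks something like this:

spec:
  storage:
    devices:
    - /dev/sdb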

Network Tab
Customize Tab

From the Operator wizard we can choose either Download or Save Spec.

Operator Tab
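If you go for the Download option, the spec can equally be pulled down with curl and kept alongside your other cluster configs; the URL is the same generated one used in the apply step further down.

% curl -o px-spec.yaml 'https://install.portworx.com/2.8?operator=true&mc=false&kbver=&b=true&c=px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533&stork=true&csi=true&mon=true&st=k8s&promop=true'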

Install Portworx Operator

From our shell, install the Portworx Operator using kubectl apply.

% kubectl apply -f 'https://install.portworx.com/2.8?comp=pxoperator'
serviceaccount/portworx-operator created
podsecuritypolicy.policy/px-operator created
clusterrole.rbac.authorization.k8s.io/portworx-operator created
clusterrolebinding.rbac.authorization.k8s.io/portworx-operator created
deployment.apps/portworx-operator created
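Before applying the cluster specification, it is worth checking that the operator deployment has rolled out; assuming the default kube-system namespace used throughout this post:

% kubectl -n kube-system get deployment portworx-operator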

Apply specification

Apply the specification created in PX-Central.

% kubectl apply -f 'https://install.portworx.com/2.8?operator=true&mc=false&kbver=&b=true&c=px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533&stork=true&csi=true&mon=true&st=k8s&promop=true'
storagecluster.core.libopenstorage.org/px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533 created
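The operator now creates the Portworx StorageCluster from this spec; its overall state can also be checked at the CRD level (in the same kube-system namespace):

% kubectl -n kube-system get storagecluster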

Monitor Progress

Enter the kubectl get pods command below and wait until all Portworx pods show a status of Running.

% kubectl get pods -n kube-system -l name=portworx --watch
NAME                                                    READY   STATUS              RESTARTS   AGE
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-c6tq6   0/3     ContainerCreating   0          82s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-jj5gc   0/3     ContainerCreating   0          82s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-xqfw9   0/3     ContainerCreating   0          82s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-xqfw9   1/3     Running             0          2m7s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-c6tq6   1/3     Running             0          2m13s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-jj5gc   1/3     Running             0          2m14s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-jj5gc   2/3     Running             0          2m30s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-c6tq6   2/3     Running             0          2m35s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-xqfw9   2/3     Running             0          2m38s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-jj5gc   3/3     Running             0          3m42s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-jj5gc   3/3     Running             0          3m43s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-c6tq6   2/3     Running             0          3m48s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-xqfw9   2/3     Running             0          3m55s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-c6tq6   2/3     Running             0          3m55s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-c6tq6   2/3     Running             0          3m55s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-xqfw9   2/3     Running             0          4m1s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-xqfw9   2/3     Running             0          4m1s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-c6tq6   3/3     Running             0          4m4s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-xqfw9   3/3     Running             0          4m9s
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-jj5gc   3/3     Running             0          4m12s
% PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
% echo $PX_POD
px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533-c6tq6

Confirm we are using Portworx 2.8.

% kubectl exec -it $PX_POD -n kube-system -- /opt/pwx/bin/pxctl -v          
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar, telemetry
pxctl version 2.8.0.0-1ef62f8

And check the status of the Portworx cluster.

% kubectl exec -it $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar, telemetry
Status: PX is operational
Telemetry: Healthy
License: Trial (expires in 31 days)
Node ID: b2a4cd36-f0d0-4c06-9de2-378a68dafe5a
	IP: 10.0.1.205 
 	Local Storage Pool: 1 pool
	POOL	IO_PRIORITY	RAID_LEVEL	USABLE	USED	STATUS	ZONE			REGION
	0	MEDIUM		raid0		100 GiB	7.1 GiB	Online	UK-LONDON-1-AD-1	uk-london-1
	Local Storage Devices: 1 device
	Device	Path		Media Type		Size		Last-Scan
	0:1	/dev/sdb	STORAGE_MEDIUM_MAGNETIC	100 GiB		09 Oct 21 13:05 UTC
	* Internal kvdb on this node is sharing this storage device /dev/sdb  to store its data.
	total		-	100 GiB
	Cache Devices:
	 * No cache devices
Cluster Summary
	Cluster ID: px-cluster-21a6ec62-0ed7-48b8-b6ac-b6833005b533
	Cluster UUID: faf0a08a-d456-436e-81c8-7d6dbab1525e
	Scheduler: kubernetes
	Nodes: 3 node(s) with storage (3 online)
	IP		ID					SchedulerNodeName	Auth		StorageNode	Used	Capacity	Status	StorageStatus	Version		Kernel					OS
	10.0.1.163	d5e96ff9-94d0-4004-9753-6a2a4568a79c	10.0.1.163		Disabled	Yes		7.1 GiB	100 GiB		Online	Up		2.8.0.0-1ef62f8	4.14.35-1902.306.2.el7uek.x86_64	Oracle Linux Server 7.8
	10.0.1.205	b2a4cd36-f0d0-4c06-9de2-378a68dafe5a	10.0.1.205		Disabled	Yes		7.1 GiB	100 GiB		Online	Up (This node)	2.8.0.0-1ef62f8	4.14.35-1902.306.2.el7uek.x86_64	Oracle Linux Server 7.8
	10.0.1.157	3dea3719-f7b3-4cfa-8d4c-b934f615e357	10.0.1.157		Disabled	Yes		7.1 GiB	100 GiB		Online	Up		2.8.0.0-1ef62f8	4.14.35-1902.306.2.el7uek.x86_64	Oracle Linux Server 7.8
	Warnings: 
		 WARNING: Insufficient CPU resources. Detected: 2 cores, Minimum required: 4 cores
		 WARNING: Persistent journald logging is not enabled on this node.
		 WARNING: Internal Kvdb is not using dedicated drive on nodes [10.0.1.157 10.0.1.205]. This configuration is not recommended for production clusters.
Global Storage Pool
	Total Used    	:  21 GiB
	Total Capacity	:  300 GiB

Portworx Storage nodes

% kubectl -n kube-system get storagenodes -l name=portworx
NAME         ID                                     STATUS   VERSION           AGE
10.0.1.157   3dea3719-f7b3-4cfa-8d4c-b934f615e357   Online   2.8.0.0-1ef62f8   53m
10.0.1.163   d5e96ff9-94d0-4004-9753-6a2a4568a79c   Online   2.8.0.0-1ef62f8   53m
10.0.1.205   b2a4cd36-f0d0-4c06-9de2-378a68dafe5a   Online   2.8.0.0-1ef62f8   53m
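For more detail on an individual node (storage pools, devices, conditions), the same CRD can be described, for example:

% kubectl -n kube-system describe storagenode 10.0.1.205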

Finally, using kubectl get sc we can see the OCI & Portworx Storage Classes.

% kubectl get sc
NAME                             PROVISIONER                       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
oci (default)                    oracle.com/oci                    Delete          Immediate              false                  165m
oci-bv                           blockvolume.csi.oraclecloud.com   Delete          WaitForFirstConsumer   false                  165m
px-db                            kubernetes.io/portworx-volume     Delete          Immediate              true                   59m
px-db-cloud-snapshot             kubernetes.io/portworx-volume     Delete          Immediate              true                   59m
px-db-cloud-snapshot-encrypted   kubernetes.io/portworx-volume     Delete          Immediate              true                   59m
px-db-encrypted                  kubernetes.io/portworx-volume     Delete          Immediate              true                   59m
px-db-local-snapshot             kubernetes.io/portworx-volume     Delete          Immediate              true                   59m
px-db-local-snapshot-encrypted   kubernetes.io/portworx-volume     Delete          Immediate              true                   59m
px-replicated                    kubernetes.io/portworx-volume     Delete          Immediate              true                   59m
px-replicated-encrypted          kubernetes.io/portworx-volume     Delete          Immediate              true                   59m
stork-snapshot-sc                stork-snapshot                    Delete          Immediate              true                   59m
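As a quick optional test (not part of the install itself), a minimal PersistentVolumeClaim against one of the Portworx Storage Classes above, for example px-db, can be created in-line; the claim name px-demo-pvc and the 5Gi size are just illustrative.

% cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: px-demo-pvc
spec:
  storageClassName: px-db
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
EOF

A kubectl get pvc px-demo-pvc should then report the claim as Bound once Portworx has provisioned the volume.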

Summary

From the above we can see that the Portworx Enterprise deployment is using the 3 OCI compute instances as Portworx storage nodes, has discovered the 100GB block volumes, and is correctly reporting a Total Capacity of 300GB.
