Introduction
In this post I am going to share how we can install Portworx Essentials into an existing Oracle Container Engine for Kubernetes (OKE) cluster on Oracle Cloud Infrastructure (OCI).
If you have not already got an Oracle Kubernetes Engine (OKE) cluster up and running, you may want to check out the two posts below.
Automated Oracle Container Engine for Kubernetes (OKE) build with Terraform
Provisioning an Oracle Kubernetes Engine (OKE) cluster with Rancher
Assuming you already have a running Oracle Kubernetes Engine (OKE) cluster, let’s begin.
Local Access
Using the OCI console, navigate to Containers -> Clusters -> Cluster Details, click Access Cluster and follow the instructions to create or update a local kubeconfig file.

Copy the oci ce cluster create-kubeconfig command and run it on your desktop.
% oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.uk-london-1... --file $HOME/.kube/config --region uk-london-1 --token-version 2.0.0
Existing Kubeconfig file found at /Users/rekins/.kube/config and new config merged into it
The OKE Kubernetes cluster should now be available for selection.
% kubectl config get-contexts
CURRENT   NAME                  CLUSTER               AUTHINFO              NAMESPACE
          context-cdio66lz54q   cluster-cdio66lz54q   user-cdio66lz54q
          docker-desktop        docker-desktop        docker-desktop
*         rke                   local                 kube-admin-local      kube-system
% kubectl config rename-context context-cdio66lz54q OKE-TF
Context "context-cdio66lz54q" renamed to "OKE-TF".
% kubectl config set-context OKE-TF
Context "OKE-TF" modified.
% kubectl config current-context
OKE-TF
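As an optional sanity check, not part of the original steps, you can confirm the selected context can reach the OKE API server before continuing:
% kubectl cluster-info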
OKE Cluster
Before we deploy Portworx, let’s take a quick look at our Oracle Kubernetes Engine (OKE) cluster.
Kubernetes Version
Using kubectl version I can see my OKE cluster is running v1.20.8.
% kubectl version --short | awk -Fv '/Server Version: / {print $3}'
1.20.8
Kubernetes Nodes
Using kubectl get nodes I can see that I have 3 nodes, all Ready and available.
% kubectl get nodes -o wide
NAME         STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP       OS-IMAGE                  KERNEL-VERSION                     CONTAINER-RUNTIME
10.0.1.136   Ready    node    37m   v1.20.8   10.0.1.136    150.230.117.117   Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.2
10.0.1.161   Ready    node    37m   v1.20.8   10.0.1.161    132.145.24.146    Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.2
10.0.1.45    Ready    node    37m   v1.20.8   10.0.1.45     132.145.46.68     Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.2
Kubernetes Pods
And a number of default pods running in my kube-system namespace.
% kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-749d57869-mk7hr                1/1     Running   0          35m
coredns-749d57869-qqf9c                1/1     Running   0          35m
coredns-749d57869-vph6m                1/1     Running   0          44m
csi-oci-node-g7hnr                     1/1     Running   1          38m
csi-oci-node-gbxn2                     1/1     Running   1          38m
csi-oci-node-h4crs                     1/1     Running   0          38m
kube-dns-autoscaler-6cbdf96f6d-zkphz   1/1     Running   0          44m
kube-flannel-ds-5qj58                  1/1     Running   2          38m
kube-flannel-ds-dn4js                  1/1     Running   2          38m
kube-flannel-ds-g79dd                  1/1     Running   2          38m
kube-proxy-78g88                       1/1     Running   0          38m
kube-proxy-bmh88                       1/1     Running   0          38m
kube-proxy-mj8vj                       1/1     Running   0          38m
portworx-operator-bfc87df78-tzghb      1/1     Running   0          103s
proxymux-client-2wbvw                  1/1     Running   0          38m
proxymux-client-fpt4v                  1/1     Running   0          38m
proxymux-client-gc7fs                  1/1     Running   0          38m
Storage Classes
And again, using kubectl get storageclass, or kubectl get sc if you prefer, we can see the two OCI storage classes.
% kubectl get sc
NAME            PROVISIONER                       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
oci (default)   oracle.com/oci                    Delete          Immediate              false                  44m
oci-bv          blockvolume.csi.oraclecloud.com   Delete          WaitForFirstConsumer   false                  44m
Portworx Essentials
Visit PX-Central, log on, select Portworx Essentials, and click Next.

Select Use the Portworx Operator, pick the Portworx Version, and click Next.

For my OCI OKE deployment I will select On Premises and Skip KVDB devices, then click Next.

I will use the Network defaults, so click Next.

Select None, click Finish, and accept the licence agreement.


Click ‘Copy Command to Clipboard’ and apply the command to deploy the Portworx Operator.
% kubectl apply -f 'https://install.portworx.com/2.7?comp=pxoperator'
serviceaccount/portworx-operator unchanged
podsecuritypolicy.policy/px-operator configured
clusterrole.rbac.authorization.k8s.io/portworx-operator unchanged
clusterrolebinding.rbac.authorization.k8s.io/portworx-operator unchanged
deployment.apps/portworx-operator unchanged
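If you want to confirm the operator rollout has completed before moving on, the deployment reported above can be checked with kubectl rollout status (an optional step, not shown in the original output):
% kubectl -n kube-system rollout status deployment/portworx-operator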
And again, copy and paste the second kubectl apply command, which creates the Portworx StorageCluster.
% kubectl apply -f 'https://install.portworx.com/2.7?operator=true&mc=false&kbver=&oem=esse&user=xxxxxxx-xxxx-xxxx-xxx5-c24e499c7467&b=true&c=px-cluster-e65a40a3-0c0c-45ee-b678-3226aa1936b8&stork=true&csi=false&lh=true&tel=false&st=k8s'
storagecluster.core.libopenstorage.org/px-cluster-e65a40a3-0c0c-45ee-b678-3226aa1936b8 created
secret/px-essential created
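The StorageCluster can take a few minutes to roll out across the nodes. As an optional check, not part of the original walkthrough, you can keep an eye on the StorageCluster resource created above and on the Portworx pods themselves:
% kubectl -n kube-system get storagecluster
% kubectl -n kube-system get pods -l name=portworx -w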
OKE Portworx Cluster
Using kubectl we can also see the Portworx-generated storage classes.
% kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
oci (default) oracle.com/oci Delete Immediate false 53m
oci-bv blockvolume.csi.oraclecloud.com Delete WaitForFirstConsumer false 53m
px-db kubernetes.io/portworx-volume Delete Immediate true 3m21s
px-db-cloud-snapshot kubernetes.io/portworx-volume Delete Immediate true 3m20s
px-db-cloud-snapshot-encrypted kubernetes.io/portworx-volume Delete Immediate true 3m20s
px-db-encrypted kubernetes.io/portworx-volume Delete Immediate true 3m21s
px-db-local-snapshot kubernetes.io/portworx-volume Delete Immediate true 3m20s
px-db-local-snapshot-encrypted kubernetes.io/portworx-volume Delete Immediate true 3m20s
px-replicated kubernetes.io/portworx-volume Delete Immediate true 3m21s
px-replicated-encrypted kubernetes.io/portworx-volume Delete Immediate true 3m21s
stork-snapshot-sc stork-snapshot Delete Immediate true 3m30s
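If you want to see how any of these classes are configured, for example the parameters px-db will pass to the Portworx provisioner, you can describe it (an optional step, not part of the original walkthrough):
% kubectl describe sc px-db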
List the Portworx pods with kubectl get pods.
% kubectl get pods -l name=portworx -n kube-system -o wide
NAME                                                    READY   STATUS    RESTARTS   AGE    IP           NODE         NOMINATED NODE   READINESS GATES
px-cluster-e65a40a3-0c0c-45ee-b678-3226aa1936b8-p4hds   2/2     Running   0          4m9s   10.0.1.136   10.0.1.136   <none>           <none>
px-cluster-e65a40a3-0c0c-45ee-b678-3226aa1936b8-pwj98   2/2     Running   0          4m9s   10.0.1.45    10.0.1.45    <none>           <none>
px-cluster-e65a40a3-0c0c-45ee-b678-3226aa1936b8-wvt8c   2/2     Running   0          4m9s   10.0.1.161   10.0.1.161   <none>           <none>
PXCTL commands
Get Portworx pod name
% PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
% echo $PX_POD
px-cluster-e65a40a3-0c0c-45ee-b678-3226aa1936b8-p4hds
Get Portworx cluster status using pxctl status.
% kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
License: PX-Essential (lease renewal in 23h, 54m)
Node ID: 5bc87bd8-338b-4fda-948f-439ffa904e7a
	IP: 10.0.1.136
 	Local Storage Pool: 1 pool
	POOL  IO_PRIORITY  RAID_LEVEL  USABLE   USED     STATUS  ZONE              REGION
	0     MEDIUM       raid0       100 GiB  7.1 GiB  Online  UK-LONDON-1-AD-2  uk-london-1
	Local Storage Devices: 1 device
	Device  Path      Media Type               Size     Last-Scan
	0:1     /dev/sdb  STORAGE_MEDIUM_MAGNETIC  100 GiB  09 Aug 21 10:52 UTC
	* Internal kvdb on this node is sharing this storage device /dev/sdb to store its data.
	total   -         100 GiB
	Cache Devices:
	 * No cache devices
Cluster Summary
	Cluster ID: px-cluster-e65a40a3-0c0c-45ee-b678-3226aa1936b8
	Cluster UUID: 6d451e61-532d-417b-81c3-b1d329198761
	Scheduler: kubernetes
	Nodes: 3 node(s) with storage (3 online)
	IP          ID                                    SchedulerNodeName  Auth      StorageNode  Used     Capacity  Status  StorageStatus   Version          Kernel                            OS
	10.0.1.161  84b711ef-4bfb-4ec2-aa6a-b976090ed0ca  10.0.1.161         Disabled  Yes          7.1 GiB  100 GiB   Online  Up              2.7.2.1-20328f1  4.14.35-1902.306.2.el7uek.x86_64  Oracle Linux Server 7.8
	10.0.1.136  5bc87bd8-338b-4fda-948f-439ffa904e7a  10.0.1.136         Disabled  Yes          7.1 GiB  100 GiB   Online  Up (This node)  2.7.2.1-20328f1  4.14.35-1902.306.2.el7uek.x86_64  Oracle Linux Server 7.8
	10.0.1.45   3c1459f0-862f-4b7f-a324-ea47a013633d  10.0.1.45          Disabled  Yes          7.1 GiB  100 GiB   Online  Up              2.7.2.1-20328f1  4.14.35-1902.306.2.el7uek.x86_64  Oracle Linux Server 7.8
	Warnings:
		 WARNING: Insufficient CPU resources. Detected: 2 cores, Minimum required: 4 cores
		 WARNING: Persistent journald logging is not enabled on this node.
		 WARNING: Internal Kvdb is not using dedicated drive on nodes [10.0.1.136]. This configuration is not recommended for production clusters.
Global Storage Pool
	Total Used    	:  21 GiB
	Total Capacity	:  300 GiB
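Other pxctl sub-commands can be run through the same pod in exactly the same way; for example, once volumes have been provisioned they can be listed with:
% kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl volume list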
Returning to the pxctl status output, I can see Portworx has discovered the 100GB block devices I created and attached to my OCI instances, and that the storage is now available for use within my Portworx environment.
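As a quick illustration of consuming that storage, a PersistentVolumeClaim simply needs to reference one of the Portworx storage classes by name. The sketch below is not part of the original walkthrough; the claim name px-demo-pvc and the 1Gi size are arbitrary, and the px-db class is taken from the storage class listing above:
% cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: px-demo-pvc
spec:
  storageClassName: px-db
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
EOF
% kubectl get pvc px-demo-pvc
Once the claim is bound, the backing Portworx volume should also appear in the pxctl volume list output shown earlier.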
Summary
In this short post I have shared how to deploy Portworx Essentials 2.7 into an Oracle Container Engine for Kubernetes (OKE) v1.20.8 cluster on Oracle Cloud Infrastructure (OCI).

Portworx is also fully supported on all the major on-premises and cloud Kubernetes platforms.
You can find out more about Portworx by visiting portworx.com.