Ron Ekins' – Oracle Technology, DevOps and Kubernetes Blog

Portworx Essentials installation on Oracle Kubernetes Engine (OKE)


In this post I am going to share how to install Portworx Essentials into an existing Oracle Container Engine for Kubernetes (OKE) cluster on Oracle Cloud Infrastructure (OCI).

If you do not already have an Oracle Kubernetes Engine (OKE) cluster up and running, you may want to check out the two posts below.

Automated Oracle Container Engine for Kubernetes (OKE) build with Terraform

Provisioning an Oracle Kubernetes Engine (OKE) cluster with Rancher

Assuming you already have a running Oracle Kubernetes Engine (OKE) cluster, let's begin.

Local Access

Using the OCI console, navigate to Containers -> Clusters -> Cluster Details, click Access Cluster, and follow the instructions to create or update a local kubeconfig file.

Access Your Cluster

Copy the oci ce cluster create-kubeconfig command and run it on your desktop.

% oci ce cluster create-kubeconfig --cluster-id --file $HOME/.kube/config --region uk-london-1 --token-version 2.0.0
Existing Kubeconfig file found at /Users/rekins/.kube/config and new config merged into it
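As the message above shows, the OCI CLI merges the new cluster entry into any existing kubeconfig file. If you would like a safety net before the merge, taking a quick backup is cheap. The sketch below uses a scratch directory so it is self-contained; for real use, point kubeconfig at $HOME/.kube/config.

```shell
# Sketch: back up an existing kubeconfig before the OCI CLI merges new
# cluster details into it. A scratch directory stands in for ~/.kube so
# the example is self-contained; set kubeconfig="$HOME/.kube/config"
# against a real setup.
workdir=$(mktemp -d)
kubeconfig="$workdir/config"
printf 'apiVersion: v1\nkind: Config\n' > "$kubeconfig"   # stand-in kubeconfig

backup="${kubeconfig}.backup-$(date +%Y%m%d%H%M%S)"
cp "$kubeconfig" "$backup"
echo "backed up to $backup"
```

If anything goes wrong with the merge, you can simply copy the backup file back into place.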

The OKE Kubernetes cluster should now be available for selection.

% kubectl config  get-contexts
CURRENT   NAME                  CLUSTER               AUTHINFO           NAMESPACE
          context-cdio66lz54q   cluster-cdio66lz54q   user-cdio66lz54q   
          docker-desktop        docker-desktop        docker-desktop     
*         rke                   local                 kube-admin-local   kube-system

% kubectl config rename-context context-cdio66lz54q OKE-TF
Context "context-cdio66lz54q" renamed to "OKE-TF". 

% kubectl config use-context OKE-TF
Switched to context "OKE-TF".

% kubectl config current-context
OKE-TF

OKE Cluster

Before we deploy Portworx, let's take a quick look at our Oracle Kubernetes Engine (OKE) cluster.

Kubernetes Version

Using kubectl version I can see my OKE cluster is v1.20.8.

% kubectl version --short | awk -Fv '/Server Version: / {print $3}'
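The -Fv field separator looks cryptic, but it works because awk splits the line "Server Version: v1.20.8" at each lowercase "v", leaving the bare version number in $3. A quick way to convince yourself, using a sample line rather than a live cluster:

```shell
# The line "Server Version: v1.20.8" split on lowercase "v" yields:
#   $1 = "Ser", $2 = "er Version: ", $3 = "1.20.8"
# so $3 is the bare version number.
version=$(echo "Server Version: v1.20.8" | awk -Fv '/Server Version: / {print $3}')
echo "$version"
```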

Kubernetes Nodes

Using kubectl get nodes I can see that I have 3 nodes, all Ready and available.

% kubectl get nodes -o wide
NAME   STATUS   ROLES   AGE   VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                  KERNEL-VERSION                     CONTAINER-RUNTIME
       Ready    node    37m   v1.20.8                               Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.2
       Ready    node    37m   v1.20.8                               Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.2
       Ready    node    37m   v1.20.8                               Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.2
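If you want to check node readiness in a script rather than by eye, a little awk over the kubectl get nodes output does the job. The sketch below runs against a captured sample (the node names here are placeholders, not my real node names); with a live cluster you would capture the output with kubectl get nodes --no-headers.

```shell
# Count Ready nodes from "kubectl get nodes" output. The node names
# below are placeholders; on a live cluster capture the output with:
#   nodes=$(kubectl get nodes --no-headers)
nodes='node-1   Ready   node   37m   v1.20.8
node-2   Ready   node   37m   v1.20.8
node-3   Ready   node   37m   v1.20.8'

total=$(echo "$nodes" | wc -l)
ready=$(echo "$nodes" | awk '$2 == "Ready"' | wc -l)
echo "$ready/$total nodes Ready"
```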

Kubernetes Pods

And there are a number of default pods running in the kube-system namespace.

% kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-749d57869-mk7hr                1/1     Running   0          35m
coredns-749d57869-qqf9c                1/1     Running   0          35m
coredns-749d57869-vph6m                1/1     Running   0          44m
csi-oci-node-g7hnr                     1/1     Running   1          38m
csi-oci-node-gbxn2                     1/1     Running   1          38m
csi-oci-node-h4crs                     1/1     Running   0          38m
kube-dns-autoscaler-6cbdf96f6d-zkphz   1/1     Running   0          44m
kube-flannel-ds-5qj58                  1/1     Running   2          38m
kube-flannel-ds-dn4js                  1/1     Running   2          38m
kube-flannel-ds-g79dd                  1/1     Running   2          38m
kube-proxy-78g88                       1/1     Running   0          38m
kube-proxy-bmh88                       1/1     Running   0          38m
kube-proxy-mj8vj                       1/1     Running   0          38m
portworx-operator-bfc87df78-tzghb      1/1     Running   0          103s
proxymux-client-2wbvw                  1/1     Running   0          38m
proxymux-client-fpt4v                  1/1     Running   0          38m
proxymux-client-gc7fs                  1/1     Running   0          38m

Storage Classes

And again, using kubectl get storageclass (or kubectl get sc if you prefer), we can see two OCI storage classes.

% kubectl get sc
oci (default)   Delete   Immediate              false   44m
oci-bv          Delete   WaitForFirstConsumer   false   44m
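Note that oci is flagged as the (default) storage class; the Portworx classes we add later will not become the default unless the storageclass.kubernetes.io/is-default-class annotation is changed. A quick scripted check for the default class, demonstrated on the sample output above:

```shell
# Find the storage class flagged "(default)". Sample output captured
# from the cluster above; on a live cluster use: sc=$(kubectl get sc)
sc='oci (default)   Delete   Immediate              false   44m
oci-bv          Delete   WaitForFirstConsumer   false   44m'

default_sc=$(echo "$sc" | awk '/\(default\)/ {print $1}')
echo "default storage class: $default_sc"
```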

Portworx Essentials

Visit PX-Central, log on, select Portworx Essentials, and click Next.

Spec Generator

Select Use the Portworx Operator, pick Portworx Version and click Next.


For my OCI OKE deployment I will select On Premises and Skip KVDB devices, click Next.


I will use the Network defaults, so click Next.


Select None, click Finish, and accept the licence agreement.

Save Spec

Click 'Copy Command to Clipboard' and run the command to deploy the Portworx Operator.

% kubectl apply -f ''
serviceaccount/portworx-operator unchanged
podsecuritypolicy.policy/px-operator configured
clusterrole.rbac.authorization.k8s.io/portworx-operator unchanged
clusterrolebinding.rbac.authorization.k8s.io/portworx-operator unchanged
deployment.apps/portworx-operator unchanged

And again, copy and paste the kubectl apply command.

% kubectl apply -f ''
storagecluster.core.libopenstorage.org/px-cluster-e65a40a3-0c0c-45ee-b678-3226aa1936b8 created
secret/px-essential created

OKE Portworx Cluster

Using kubectl we can also see the Portworx-generated storage classes.

% kubectl get sc
oci (default)                    Delete   Immediate              false   53m
oci-bv                           Delete   WaitForFirstConsumer   false   53m
px-db                            Delete   Immediate              true    3m21s
px-db-cloud-snapshot             Delete   Immediate              true    3m20s
px-db-cloud-snapshot-encrypted   Delete   Immediate              true    3m20s
px-db-encrypted                  Delete   Immediate              true    3m21s
px-db-local-snapshot             Delete   Immediate              true    3m20s
px-db-local-snapshot-encrypted   Delete   Immediate              true    3m20s
px-replicated                    Delete   Immediate              true    3m21s
px-replicated-encrypted          Delete   Immediate              true    3m21s
stork-snapshot-sc                stork-snapshot   Delete   Immediate   true   3m30s

List the Portworx pods with kubectl get pods.

% kubectl get pods -l name=portworx -n kube-system -o wide
NAME                                                  READY STATUS  RESTARTS AGE  IP         NODE       NOMINATED NODE READINESS GATES
px-cluster-e65a40a3-0c0c-45ee-b678-3226aa1936b8-p4hds 2/2   Running 0        4m9s <none>         <none>
px-cluster-e65a40a3-0c0c-45ee-b678-3226aa1936b8-pwj98 2/2   Running 0        4m9s  <none>         <none>
px-cluster-e65a40a3-0c0c-45ee-b678-3226aa1936b8-wvt8c 2/2   Running 0        4m9s <none>         <none>
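Each pod should report all of its containers ready (2/2 here). One way to verify that in a script is to compare the two halves of the READY column, shown below against the captured output; on a live cluster you could instead simply run kubectl wait --for=condition=Ready pod -l name=portworx -n kube-system.

```shell
# Flag any Portworx pod whose READY column (e.g. "2/2") shows fewer
# containers ready than expected. Sample rows from the listing above;
# live: pods=$(kubectl get pods -l name=portworx -n kube-system --no-headers)
pods='px-cluster-e65a40a3-0c0c-45ee-b678-3226aa1936b8-p4hds 2/2 Running 0 4m9s
px-cluster-e65a40a3-0c0c-45ee-b678-3226aa1936b8-pwj98 2/2 Running 0 4m9s
px-cluster-e65a40a3-0c0c-45ee-b678-3226aa1936b8-wvt8c 2/2 Running 0 4m9s'

not_ready=$(echo "$pods" | awk 'split($2, a, "/") && a[1] != a[2]' | wc -l)
[ "$not_ready" -eq 0 ] && echo "all Portworx pods fully ready"
```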

PXCTL commands

Get Portworx pod name

% PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')

% echo $PX_POD
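An alternative to jsonpath is kubectl get pods -o name, which returns names prefixed with the resource type (pod/...); the prefix strips cleanly with shell parameter expansion. Illustrated with a pod name reused from the listing above:

```shell
# "kubectl get pods -o name" returns the resource-prefixed form; the
# "pod/" prefix strips cleanly with parameter expansion. The pod name
# here is reused from the listing above.
resource="pod/px-cluster-e65a40a3-0c0c-45ee-b678-3226aa1936b8-p4hds"
PX_POD="${resource#pod/}"
echo "$PX_POD"
```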

Get Portworx cluster status using pxctl status.

% kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status                                  
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Status: PX is operational
License: PX-Essential (lease renewal in 23h, 54m)
Node ID: 5bc87bd8-338b-4fda-948f-439ffa904e7a
 	Local Storage Pool: 1 pool
	ID	IO_PRIORITY	RAID_LEVEL	USABLE	USED	STATUS	ZONE	REGION
	0	MEDIUM		raid0		100 GiB	7.1 GiB	Online	UK-LONDON-1-AD-2	uk-london-1
	Local Storage Devices: 1 device
	Device	Path		Media Type		Size		Last-Scan
	0:1	/dev/sdb	STORAGE_MEDIUM_MAGNETIC	100 GiB		09 Aug 21 10:52 UTC
	* Internal kvdb on this node is sharing this storage device /dev/sdb  to store its data.
	total		-	100 GiB
	Cache Devices:
	 * No cache devices
Cluster Summary
	Cluster ID: px-cluster-e65a40a3-0c0c-45ee-b678-3226aa1936b8
	Cluster UUID: 6d451e61-532d-417b-81c3-b1d329198761
	Scheduler: kubernetes
	Nodes: 3 node(s) with storage (3 online)
	IP	ID	SchedulerNodeName	Auth	StorageNode	Used	Capacity	Status	StorageStatus	Version	Kernel	OS
		84b711ef-4bfb-4ec2-aa6a-b976090ed0ca		Disabled	Yes	7.1 GiB	100 GiB	Online	Up	4.14.35-1902.306.2.el7uek.x86_64	Oracle Linux Server 7.8
		5bc87bd8-338b-4fda-948f-439ffa904e7a		Disabled	Yes	7.1 GiB	100 GiB	Online	Up (This node)	4.14.35-1902.306.2.el7uek.x86_64	Oracle Linux Server 7.8
		3c1459f0-862f-4b7f-a324-ea47a013633d		Disabled	Yes	7.1 GiB	100 GiB	Online	Up	4.14.35-1902.306.2.el7uek.x86_64	Oracle Linux Server 7.8
		 WARNING: Insufficient CPU resources. Detected: 2 cores, Minimum required: 4 cores
		 WARNING: Persistent journald logging is not enabled on this node.
		 WARNING: Internal Kvdb is not using dedicated drive on nodes []. This configuration is not recommended for production clusters.
Global Storage Pool
	Total Used    	:  21 GiB
	Total Capacity	:  300 GiB
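The warnings above are worth noting: the status output itself reports a minimum of 4 CPU cores per node, and that sharing the internal KVDB storage device is not recommended for production clusters. If you want to surface such warnings in a script, a grep over the pxctl status output works; demonstrated here on a couple of sample lines from the output above:

```shell
# Count WARNING lines in pxctl status output. Sample lines taken from
# the status output above; on a live cluster capture it with:
#   status=$(kubectl exec $PX_POD -n kube-system -- /opt/pwx/bin/pxctl status)
status='Status: PX is operational
WARNING: Insufficient CPU resources. Detected: 2 cores, Minimum required: 4 cores
WARNING: Persistent journald logging is not enabled on this node.'

warnings=$(echo "$status" | grep -c 'WARNING:')
echo "$warnings warning(s) reported"
```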

I can see Portworx has discovered the 100 GiB block devices I created and attached to my OCI compute instances, and that the storage is now available for use in my Portworx environment.


In this short post I have shared how to deploy Portworx Essentials 2.7 into an Oracle Kubernetes Engine (OKE) v1.20.8 cluster on Oracle Cloud Infrastructure (OCI).

Portworx is also fully supported on all of the major on-premises and cloud Kubernetes platforms.

You can find out more by visiting the Portworx website.

