In this blog post I am going to share how to install and configure Portworx PX-Backup to perform a backup of an application using the Oracle Container Engine for Kubernetes (OKE) managed Kubernetes service running on Oracle Cloud Infrastructure (OCI).
Kubernetes Cluster
For this post I have built a 3-node Kubernetes v1.25.4 cluster in the uk-london-1 region. We can confirm the topology using kubectl get nodes -L topology.kubernetes.io/region,topology.kubernetes.io/zone, for example.
% kubectl get nodes -L topology.kubernetes.io/region,topology.kubernetes.io/zone
NAME          STATUS   ROLES   AGE    VERSION   REGION        ZONE
10.0.10.141   Ready    node    3d4h   v1.25.4   uk-london-1   UK-LONDON-1-AD-1
10.0.10.169   Ready    node    3d4h   v1.25.4   uk-london-1   UK-LONDON-1-AD-3
10.0.10.230   Ready    node    3d4h   v1.25.4   uk-london-1   UK-LONDON-1-AD-2
I have also installed PX-Enterprise into my OKE cluster, which we can see using pxctl -v.
% PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
% alias pxctl='kubectl exec -n kube-system ${PX_POD} -it -- /opt/pwx/bin/pxctl'
% pxctl -v
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
pxctl version 2.13.0-f9b46cd
I will start by creating a new Kubernetes Storage Class for our Portworx backups, specifying 3-way replication, using the example ( px-backup-sc.yaml ) below.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-backup-sc
provisioner: pxd.portworx.com
parameters:
  repl: "3"
And apply it using kubectl apply.
% kubectl apply -f px-backup-sc.yaml
storageclass.storage.k8s.io/px-backup-sc created
% kubectl get sc/px-backup-sc
NAME           PROVISIONER        RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
px-backup-sc   pxd.portworx.com   Delete          Immediate           false                  4h59m
PX-Central
Now visit PX-Central and log on. From the Product Catalog Welcome screen, select Portworx PX-Backup on-premises and hit Continue.
Provide the Namespace, select Helm 3 and Cloud, and provide the Storage Class Name just created, then click Next and read and agree to the backup licence agreement.

Copy the helm commands and switch to Kubernetes terminal to install.

Paste helm repo add command
% helm repo add portworx http://charts.portworx.io/ && helm repo update
"portworx" has been added to your repositories
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "portworx" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈Happy Helming!⎈
Paste helm install px-central
% helm install px-central portworx/px-central --namespace central --create-namespace --version 2.3.2 --set persistentStorage.enabled=true,persistentStorage.storageClassName="px-backup-sc",pxbackup.enabled=true
NAME: px-central
LAST DEPLOYED: Fri Feb 3 10:58:47 2023
NAMESPACE: central
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Your Release is named: "px-central"
PX-Central is deployed in the namespace: central
Chart Version: 2.3.2
--------------------------------------------------
Monitor PX-Central Install:
--------------------------------------------------
Wait for job "pxcentral-post-install-hook" status to be in "Completed" state.

    kubectl get po --namespace central -ljob-name=pxcentral-post-install-hook -o wide | awk '{print $1, $3}' | grep -iv error

----------------------------
Features Summary:
----------------------------
PX-Backup: enabled
PX-Monitor: disabled
PX-License-Server: disabled
--------------------------------------------------
Access PX-Central UI:
--------------------------------------------------
Using port forwarding:

    kubectl port-forward service/px-backup-ui 8080:80 --namespace central

To access PX-Central: http://localhost:8080
Login with the following credentials:

    Username: admin
    Password: admin

For more information: https://github.com/portworx/helm/blob/master/charts/px-central/README.md
--------------------------------------------------
We can see that a new namespace called central has been created.
% kubectl get ns central
NAME      STATUS   AGE
central   Active   82s
And new PVCs for the PX-Backup database using the px-backup-sc storage class.
% kubectl get pvc -n central
NAME                                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pxc-mongodb-data-pxc-backup-mongodb-0                     Bound    pvc-6f8f8f38-07ac-4c53-b674-395641fd9894   64Gi       RWO            px-backup-sc   6m33s
pxc-mongodb-data-pxc-backup-mongodb-1                     Bound    pvc-df209ed8-eea8-40a8-ab16-1f2c23fc19d5   64Gi       RWO            px-backup-sc   6m32s
pxc-mongodb-data-pxc-backup-mongodb-2                     Bound    pvc-3ffa1665-2d14-42d2-9908-bbd568b6cda7   64Gi       RWO            px-backup-sc   6m32s
pxcentral-keycloak-data-pxcentral-keycloak-postgresql-0   Bound    pvc-ffa5ac65-7a91-4197-8235-945dff315581   10Gi       RWO            px-backup-sc   6m33s
pxcentral-mysql-pvc                                       Bound    pvc-f49d9ea8-4019-4404-9c5d-2dd4bf018c11   100Gi      RWO            px-backup-sc   6m34s
theme-pxcentral-keycloak-0                                Bound    pvc-0fa5073b-ec1f-484e-b115-a31903097fe9   5Gi        RWO            px-backup-sc   6m33s
Wait until all the pods are running with watch kubectl get pods -n central or kubectl get pods -n central --watch
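Alternatively, kubectl can block until everything is Ready rather than polling; a minimal sketch (the timeout value is an assumption, and --field-selector support depends on your kubectl version):

```shell
# Block until every pod in the central namespace reports Ready.
# The post-install hook pod runs to completion rather than staying Ready,
# so exclude Succeeded pods from the wait.
NAMESPACE=central
TIMEOUT=600s   # assumption: adjust for your cluster
kubectl wait pods --all -n "${NAMESPACE}" \
  --for=condition=Ready \
  --timeout="${TIMEOUT}" \
  --field-selector=status.phase!=Succeeded
```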
% kubectl get pods -n central
NAME                                       READY   STATUS      RESTARTS        AGE
px-backup-6457d9f6fc-wkt9g                 1/1     Running     2 (5m55s ago)   7m
pxc-backup-mongodb-0                       1/1     Running     0               7m
pxc-backup-mongodb-1                       1/1     Running     0               7m
pxc-backup-mongodb-2                       1/1     Running     0               7m
pxcentral-apiserver-57b88685-hm797         1/1     Running     0               7m
pxcentral-backend-5d58bfd4cc-fl9xl         1/1     Running     0               3m23s
pxcentral-frontend-64897ffb4b-5gzbd        1/1     Running     0               3m23s
pxcentral-keycloak-0                       1/1     Running     0               7m
pxcentral-keycloak-postgresql-0            1/1     Running     0               7m
pxcentral-lh-middleware-5ff4bb6678-b8nwf   1/1     Running     0               3m23s
pxcentral-mysql-0                          1/1     Running     0               7m
pxcentral-post-install-hook-vkq4d          0/1     Completed   0               7m
Once all the pods report a Running status, check for errors using the command below.
% kubectl get po --namespace central -ljob-name=pxcentral-post-install-hook -o wide | awk '{print $1, $3}' | grep -iv error
NAME STATUS
pxcentral-post-install-hook-vkq4d Completed
Use kubectl get svc to find the OCI LoadBalancer IP address; we will need this to log on to the PX-Central UI.
% kubectl get svc -n central
NAME                                     TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)               AGE
px-backup                                ClusterIP      10.96.71.21     <none>            10002/TCP,10001/TCP   8m13s
px-backup-ui                             LoadBalancer   10.96.71.138    141.147.118.148   80:31664/TCP          8m13s
px-central-ui                            LoadBalancer   10.96.83.175    141.147.76.195    80:30680/TCP          8m13s
pxc-backup-mongodb-headless              ClusterIP      None            <none>            27017/TCP             8m13s
pxcentral-apiserver                      ClusterIP      10.96.68.141    <none>            10005/TCP,10006/TCP   8m13s
pxcentral-backend                        ClusterIP      10.96.53.48     <none>            80/TCP                8m13s
pxcentral-frontend                       ClusterIP      10.96.75.157    <none>            80/TCP                8m13s
pxcentral-keycloak-headless              ClusterIP      None            <none>            80/TCP,8443/TCP       8m13s
pxcentral-keycloak-http                  ClusterIP      10.96.115.181   <none>            80/TCP,8443/TCP       8m13s
pxcentral-keycloak-postgresql            ClusterIP      10.96.57.164    <none>            5432/TCP              8m13s
pxcentral-keycloak-postgresql-headless   ClusterIP      None            <none>            5432/TCP              8m13s
pxcentral-lh-middleware                  ClusterIP      10.96.100.243   <none>            8091/TCP,8092/TCP     8m13s
pxcentral-mysql                          ClusterIP      10.96.26.231    <none>            3306/TCP              8m13s
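Rather than scanning the table, the external IP can be pulled straight out with a jsonpath query; a sketch, assuming the OCI load balancer publishes an IP rather than a hostname:

```shell
# Grab just the external IP of the px-backup-ui LoadBalancer service.
SVC=px-backup-ui   # the PX-Backup UI service shown above
NAMESPACE=central
kubectl get svc "${SVC}" -n "${NAMESPACE}" \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```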
Configure Service Account
Create Service Account
Using the example yaml ( px-backup-sa.yaml ), create a service account called pxbackup-sa.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pxbackup-sa
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pxbackup-sa-clusterrolebinding
subjects:
- kind: ServiceAccount
  name: pxbackup-sa
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
---
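The manifest can then be applied and sanity-checked; a minimal sketch:

```shell
# Apply the service account manifest and verify both objects were created.
MANIFEST=px-backup-sa.yaml   # the example file above
kubectl apply -f "${MANIFEST}"
kubectl get serviceaccount pxbackup-sa -n kube-system
kubectl get clusterrolebinding pxbackup-sa-clusterrolebinding
```

Binding to cluster-admin keeps the walkthrough simple; a production setup would likely use a narrower ClusterRole scoped to what PX-Backup needs.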
Create token
As of Kubernetes 1.24, secrets are no longer created automatically for service accounts, so we need to handle tokens ourselves. For example, we can request a short-lived token:
% kubectl create token pxbackup-sa -n kube-system
eyJhbGciOiJSUzI1NiIsImtpZCI6Ikhqd1pTRmxlZ0RCbTZUZzduRk5HcjBkOEhsSUhPS2pSd3NhWFEtRVRaMncifQ.eyJhdWQiOlsiYXBp...
Create secret
Create a secret for the service account ( px-backup-secret.yaml )
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: pxbackup-sa
  namespace: kube-system
  annotations:
    kubernetes.io/service-account.name: "pxbackup-sa"
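Once the secret is applied, the token controller populates its token and ca.crt fields; a quick sanity check (printing only the first few characters of the token):

```shell
# Apply the secret and confirm the token controller has populated it.
NAMESPACE=kube-system
SECRET=pxbackup-sa
kubectl apply -f px-backup-secret.yaml
kubectl get secret "${SECRET}" -n "${NAMESPACE}" \
  -o jsonpath='{.data.token}' | base64 --decode | head -c 20; echo
```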
Cluster Info
Get Kubernetes API endpoint
We will update the kubeconfig generation script below with the service account, namespace and API endpoint.
% kubectl cluster-info
Kubernetes control plane is running at https://150.XXX.XXX.XXX:6443
CoreDNS is running at https://150.XXX.XXX.XXX:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Generate Kubeconfig
Update the SERVER value and run the script below ( generate_kubeconfig.sh ) to create a kubeconfig file; save it for use later.
SERVICE_ACCOUNT=pxbackup-sa
NAMESPACE=kube-system
SERVER=https://150.XXX.XXX.XXX:6443
SERVICE_ACCOUNT_TOKEN=$(kubectl -n ${NAMESPACE} get secret ${SERVICE_ACCOUNT} -o "jsonpath={.data.token}" | base64 --decode)
SERVICE_ACCOUNT_CERTIFICATE=$(kubectl -n ${NAMESPACE} get secret ${SERVICE_ACCOUNT} -o "jsonpath={.data['ca\.crt']}")
cat <<END
apiVersion: v1
kind: Config
clusters:
- name: default-cluster
  cluster:
    certificate-authority-data: ${SERVICE_ACCOUNT_CERTIFICATE}
    server: ${SERVER}
contexts:
- name: default-context
  context:
    cluster: default-cluster
    namespace: ${NAMESPACE}
    user: ${SERVICE_ACCOUNT}
current-context: default-context
users:
- name: ${SERVICE_ACCOUNT}
  user:
    token: ${SERVICE_ACCOUNT_TOKEN}
END
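The script writes the kubeconfig to stdout, so redirect it to a file and confirm the service-account identity actually works against the cluster; the output file name here is illustrative:

```shell
# Generate the kubeconfig and test it with a simple read-only call.
KUBECONFIG_FILE=pxbackup-kubeconfig.yaml   # assumption: any file name will do
./generate_kubeconfig.sh > "${KUBECONFIG_FILE}"
kubectl --kubeconfig "${KUBECONFIG_FILE}" get ns kube-system
```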
PX-Central
Using the LoadBalancer IP address from above, log on to the PX-Central UI with the initial credentials provided, change the password when prompted, and provide an email, first name and surname.

For this post I will perform a guided install using the UI Wizard; on future installs, click ‘Cancel’ for manual configuration.

Configure S3 Target
Select Cloud Settings

Select AWS / S3 Compliant Object Store, and provide a descriptive Cloud Account Name, an OCI Access Key and a Secret Key.

Enter the Backup Location, providing a descriptive Name, Cloud Account, OCI Object Store Bucket, Region and Endpoint. You will need to include :443 at the end of the endpoint, as OCI Object Storage uses https.
The OCI Object Store Endpoint format is <OCI Namespace>.compat.objectstorage.<region>.oraclecloud.com
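The endpoint string can be assembled from the namespace and region; a minimal sketch, where the namespace value is a placeholder to be replaced with the output of oci os ns get:

```shell
# Build the S3-compatible endpoint for OCI Object Storage, including
# the :443 suffix required because PX-Backup talks to it over https.
OCI_NAMESPACE=mynamespace   # placeholder: use the value from 'oci os ns get'
REGION=uk-london-1
ENDPOINT="${OCI_NAMESPACE}.compat.objectstorage.${REGION}.oraclecloud.com:443"
echo "${ENDPOINT}"
```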

You can determine your Object Storage Namespace from within OCI by navigating to Profile -> Tenancy, where you will see the Object storage namespace.
Alternatively use oci os ns get.
% oci os ns get
{
  "data": "lxxxxxxxxxx"
}
Configure Kubernetes Cluster
Confirm the Stork version; it needs to be higher than 2.4.
% kubectl get deployment stork -n kube-system -o wide
NAME    READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES                               SELECTOR
stork   3/3     3            3           3d1h   stork        docker.io/openstorage/stork:2.11.0   name=stork,tier=control-plane
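The version can also be read directly from the deployment's image tag rather than the wide output; a sketch (the fallback value shown is the image from the output above, used for illustration):

```shell
# Read the Stork container image from the deployment and strip off the tag.
IMAGE=$(kubectl get deployment stork -n kube-system \
  -o jsonpath='{.spec.template.spec.containers[0].image}')
IMAGE=${IMAGE:-docker.io/openstorage/stork:2.11.0}  # illustrative fallback
echo "Stork version: ${IMAGE##*:}"
```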

Select Other, enter the Cluster Name, paste the Kubeconfig previously generated, and click Add Cluster.

The new Cluster should now be visible.

If we click on the Cluster Name we can now see the Kubernetes namespaces.

Summary
In this post I have shared how to set up PX-Backup with OKE; in Part 2 I will detail how PX-Backup can be used to protect an OKE application.
All the examples used in this post are available in my PX-Backup-OKE GitHub repository.