Background
In Getting started with the Oracle Database Kubernetes Operator (OraOperator), I introduced the Oracle Database Kubernetes Operator and provided details on how to install and configure it for a single-instance database running on the Oracle Container Engine for Kubernetes (OKE) service within Oracle Cloud Infrastructure (OCI).
In this post I share how I use Terraform to build an OKE cluster with Portworx storage, and then provide details on how to install and configure the OraOperator to use a Portworx storage class, providing the ability to fail over a Kubernetes Oracle 21c database pod across Availability Domains (ADs) within OCI.
OKE Preparation
Using the details in my Oracle Kubernetes Engine (OKE) deployment with Portworx using Terraform post, and the Terraform code which has been uploaded to GitHub, create a new OKE environment.
For example, clone the Terraform code from my GitHub repo, change into the directory, and localise the terraform.tfvars file, providing your OCI account, tenancy, and Portworx URLs.
- git clone https://github.com/raekins/oke-px-terraform.git
- cd oke-px-terraform
- cp terraform.tfvars.example terraform.tfvars
- vi terraform.tfvars
- terraform apply --auto-approve
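Once Terraform completes, kubectl needs a kubeconfig for the new cluster. A minimal sketch using the OCI CLI, assuming the CLI is already configured and substituting your own cluster OCID and region:

% oci ce cluster create-kubeconfig --cluster-id <cluster-ocid> --file $HOME/.kube/config --region <region> --token-version 2.0.0
% kubectl get nodes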
OKE Environment
The Terraform script will create an OKE cluster as shown below.

Before we move on, let’s have a quick look at the Terraform-built OKE environment.
Kubernetes Version
Confirm the Kubernetes version with kubectl version:
% kubectl version --short | awk -Fv '/Server Version: / {print $3}'
1.20.11
Kubernetes Nodes
Check the size of the Kubernetes cluster with kubectl get nodes:
% kubectl get nodes -o wide
NAME          STATUS   ROLES   AGE    VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                  KERNEL-VERSION                     CONTAINER-RUNTIME
10.0.10.115   Ready    node    138m   v1.20.11   10.0.10.115   <none>        Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.7
10.0.10.236   Ready    node    138m   v1.20.11   10.0.10.236   <none>        Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.7
10.0.10.239   Ready    node    138m   v1.20.11   10.0.10.239   <none>        Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.7
Portworx
Use pxctl to check the Portworx version:
% export PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0].metadata.name}')
% alias pxctl='kubectl exec -n kube-system ${PX_POD} -it -- /opt/pwx/bin/pxctl'
% pxctl -v
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
pxctl version 2.8.1.5-937bfdd
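Optionally, pxctl status provides a fuller health check of the Portworx cluster, including node status and storage pool capacity; the output will vary with your environment:

% pxctl status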
Kubernetes Storage Classes
To ensure data is available in all three OCI Availability Domains (ADs) within our region, we need to create and apply a new Portworx StorageClass with three replicas, for example px-ora-csi-sc.yaml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-ora-csi-sc
provisioner: pxd.portworx.com
parameters:
  repl: "3"
  io_profile: "db_remote"
  priority_io: "high"
allowVolumeExpansion: true
% kubectl apply -f px-ora-csi-sc.yaml
storageclass.storage.k8s.io/px-ora-csi-sc created
List available Kubernetes Storage Classes using kubectl get sc or kubectl get StorageClass, for example.
% kubectl get sc
NAME                             PROVISIONER                       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
oci (default)                    oracle.com/oci                    Delete          Immediate              false                  4h1m
oci-bv                           blockvolume.csi.oraclecloud.com   Delete          WaitForFirstConsumer   true                   4h1m
px-db                            kubernetes.io/portworx-volume     Delete          Immediate              true                   3h52m
px-db-cloud-snapshot             kubernetes.io/portworx-volume     Delete          Immediate              true                   3h52m
px-db-cloud-snapshot-encrypted   kubernetes.io/portworx-volume     Delete          Immediate              true                   3h52m
px-db-encrypted                  kubernetes.io/portworx-volume     Delete          Immediate              true                   3h52m
px-db-local-snapshot             kubernetes.io/portworx-volume     Delete          Immediate              true                   3h52m
px-db-local-snapshot-encrypted   kubernetes.io/portworx-volume     Delete          Immediate              true                   3h52m
px-ora-csi-sc                    pxd.portworx.com                  Delete          Immediate              true                   25s
px-replicated                    kubernetes.io/portworx-volume     Delete          Immediate              true                   3h52m
px-replicated-encrypted          kubernetes.io/portworx-volume     Delete          Immediate              true                   3h52m
stork-snapshot-sc                stork-snapshot                    Delete          Immediate              true                   3h52m
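To double-check the parameters on the new StorageClass, including the repl: "3" setting, we can describe it:

% kubectl describe sc px-ora-csi-sc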
Install cert-manager
The Oracle Database Kubernetes Operator uses webhooks to validate user input before persisting it in etcd. Webhooks require TLS certificates that are generated and managed by a certificate manager.
Install the certificate manager using the following command:
% kubectl apply -f https://github.com/jetstack/cert-manager/releases/latest/download/cert-manager.yaml
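Before installing the Operator, confirm the cert-manager pods are Running; by default the manifest deploys them into the cert-manager namespace:

% kubectl get pods -n cert-manager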
Install Operator
Clone the Oracle Database Kubernetes Operator from GitHub, for example
% git clone https://github.com/oracle/oracle-database-operator.git
Use kubectl apply to install the Operator
% kubectl apply -f oracle-database-operator/oracle-database-operator.yaml
Applying this file performs the following operations:
- Create the Namespace
- Create the CRDs
- Create the Roles and RoleBindings
- Create the Operator Deployment
Use kubectl get all to see the Kubernetes pods, services, deployments, and replica sets, for example:
% kubectl get all -n oracle-database-operator-system
NAME                                                                READY   STATUS    RESTARTS   AGE
pod/oracle-database-operator-controller-manager-58447bcbf8-29w8s   1/1     Running   0          26s
pod/oracle-database-operator-controller-manager-58447bcbf8-6ljpn   1/1     Running   0          26s
pod/oracle-database-operator-controller-manager-58447bcbf8-ng87q   1/1     Running   0          26s

NAME                                                                   TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/oracle-database-operator-controller-manager-metrics-service   ClusterIP   10.96.253.67   <none>        8443/TCP   27s
service/oracle-database-operator-webhook-service                      ClusterIP   10.96.45.121   <none>        443/TCP    27s

NAME                                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/oracle-database-operator-controller-manager   3/3     3            3           27s

NAME                                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/oracle-database-operator-controller-manager-58447bcbf8   3         3         3       27s
Oracle Namespace
Optionally, create a new Kubernetes namespace, for example oracle-namespace, for ease of management.
We can create a new namespace with kubectl create namespace.
% kubectl create namespace oracle-namespace
namespace/oracle-namespace created
To avoid having to specify our namespace with -n each time, I will use kubectl config set-context to set a default namespace of oracle-namespace.
% kubectl config set-context --current --namespace=oracle-namespace
Context "context-cdgorkg4nua" modified.
kubectl will now default to the oracle-namespace namespace.
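We can confirm the default namespace for the current context, for example:

% kubectl config view --minify --output 'jsonpath={..namespace}'
oracle-namespace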
Create Secrets
For this blog we will use Kubernetes Secrets to store the database password and the credentials for the Oracle Container Registry. Note this is not the most secure method and should not be used in production, as the values can easily be obtained, as shown below.
Create Database Secret
Use kubectl create secret to create a database secret:
% kubectl create secret generic admin-password --from-literal=sidb-admin-password='Kube#2022' -n oracle-namespace
secret/admin-password created
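To illustrate why this approach is not production-safe, the password can be recovered trivially by anyone with read access to the secret:

% kubectl get secret admin-password -n oracle-namespace -o jsonpath='{.data.sidb-admin-password}' | base64 -d
Kube#2022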
Create Oracle Container Registry Secret
Logon to the Oracle Container Registry (OCR) using SSO credentials, for example:
% docker login container-registry.oracle.com
Username: <SSO email>
Password: <SSO password>
Login Succeeded
Create a Kubernetes secret using the OCR credentials:
% kubectl create secret generic regcred --from-file=.dockerconfigjson=$HOME/.docker/config.json --type=kubernetes.io/dockerconfigjson -n oracle-namespace
secret/regcred created
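Both secrets should now be present in the namespace:

% kubectl get secrets -n oracle-namespace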
Configure Database
Copy ../oracle-database-operator/config/samples/sidb/singleinstancedatabase.yaml to your working directory, for example:
% cp oracle-database-operator/config/samples/sidb/singleinstancedatabase.yaml .
Update as required, for example: name, namespace, sid, secretName, secretKey, pdbName, pullFrom, pullSecrets, size, and storageClass.
#
# Copyright (c) 2021, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at http://oss.oracle.com/licenses/upl.
#
apiVersion: database.oracle.com/v1alpha1
kind: SingleInstanceDatabase
metadata:
  name: sidb-pstg
  namespace: oracle-namespace
spec:
  ## Use only alphanumeric characters for sid
  sid: PSTGCDB
  ## A source database ref to clone from, leave empty to create a fresh database
  cloneFrom: ""
  ## NA if cloning from a SourceDB (cloneFrom is set)
  edition: enterprise
  ## Should refer to SourceDB secret if cloning from a SourceDB (cloneFrom is set)
  ## Secret containing SIDB password mapped to secretKey
  ## This secret will be deleted after creation of the database unless keepSecret is set to true
  adminPassword:
    secretName: admin-password
    secretKey: sidb-admin-password
    keepSecret: true
  ## NA if cloning from a SourceDB (cloneFrom is set)
  charset: AL32UTF8
  ## NA if cloning from a SourceDB (cloneFrom is set)
  pdbName: pstgpdb1
  ## Enable/Disable Flashback
  flashBack: false
  ## Enable/Disable ArchiveLog
  archiveLog: false
  ## Enable/Disable ForceLogging
  forceLog: false
  ## NA if cloning from a SourceDB (cloneFrom is set)
  ## Specify both sgaSize and pgaSize (in MB) or don't specify both
  ## Specify a non-zero value to use
  initParams:
    cpuCount: 0
    processes: 0
    sgaTarget: 0
    pgaAggregateTarget: 0
  ## Database image details
  ## Database can be patched by updating the RU version/image
  ## Major version changes are not supported
  image:
    pullFrom: container-registry.oracle.com/database/enterprise:21.3.0.0
    pullSecrets: regcred
  ## size : Minimum size of pvc | class : PVC storage Class
  ## AccessMode can only accept one of ReadWriteOnce, ReadWriteMany
  ## Below mentioned storageClass/accessMode applies to OCI block volumes. Update appropriately for other types of persistent volumes.
  persistence:
    size: 60Gi
    storageClass: "px-ora-csi-sc"
    accessMode: "ReadWriteOnce"
  ## Type of service. Applicable on cloud environments only
  ## if loadBalService : false, service type = "NodePort", else "LoadBalancer"
  loadBalancer: false
  ## Deploy only on nodes having required labels. Format label_name : label_value
  ## Leave empty if there is no such requirement.
  ## Uncomment to use
  # nodeSelector:
  #   failure-domain.beta.kubernetes.io/zone: bVCG:PHX-AD-1
  #   pool: sidb
  ## Count of Database Pods. Applicable only for "ReadWriteMany" AccessMode
  replicas: 1
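Before applying, it can be worth validating the edited manifest with a client-side dry run, which reports YAML and schema errors without creating anything:

% kubectl apply -f singleinstancedatabase.yaml -n oracle-namespace --dry-run=client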
Provision Oracle Database
Create a new database using the updated singleinstancedatabase.yaml file, for example
% kubectl apply -f singleinstancedatabase.yaml -n oracle-namespace
singleinstancedatabase.database.oracle.com/sidb-pstg created
Persistent Volume Claims
List Persistent Volume Claims with kubectl get pvc
% kubectl get pvc
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
sidb-pstg   Bound    pvc-0d405f7e-72c2-4a6c-b165-e1ec0904abc0   60Gi       RWO            px-ora-csi-sc   6s
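We can also confirm that Portworx has created the volume with three replicas by inspecting it with pxctl, using the volume name from the PVC listing above; the HA field in the output should report 3:

% pxctl volume inspect pvc-0d405f7e-72c2-4a6c-b165-e1ec0904abc0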
List Databases
We can use kubectl to list Oracle databases created with the Operator, thus:
% kubectl get singleinstancedatabases -o name -n oracle-namespace
singleinstancedatabase.database.oracle.com/sidb-pstg
Database Status
We can use kubectl get singleinstancedatabase to return the database status, and watch it change from Pending -> Creating -> Patching -> Healthy.
% watch kubectl get singleinstancedatabase -n oracle-namespace
NAME                     EDITION      STATUS    VERSION      CONNECT STR                 OEM EXPRESS URL
singleinstancedatabase   Enterprise   Healthy   21.3.0.0.0   132.145.69.16:31408/ORCL1   https://132.145.69.16:31906/em
kubectl get singleinstancedatabase can also be used to return different elements, for example.
Database Health
% kubectl get singleinstancedatabase sidb-pstg -o "jsonpath={.status.status}" -n oracle-namespace
Healthy
Database Connection String
% kubectl get singleinstancedatabase sidb-pstg -o "jsonpath={.status.connectString}" -n oracle-namespace
10.0.10.115:30221/PSTGCDB
Database SID Name
% kubectl get singleinstancedatabase sidb-pstg -o "jsonpath={.status.sid}" -n oracle-namespace
PSTGCDB
Pluggable Database Name (PDB)
% kubectl get singleinstancedatabase sidb-pstg -o "jsonpath={.status.pdbName}" -n oracle-namespace
pstgpdb1
PDB Connect String
% kubectl get singleinstancedatabase sidb-pstg -o "jsonpath={.status.pdbConnectString}" -n oracle-namespace
10.0.10.115:30221/PSTGPDB1
Database Enterprise Manager URL
% kubectl get singleinstancedatabase sidb-pstg -o "jsonpath={.status.oemExpressUrl}" -n oracle-namespace
https://10.0.10.115:30920/em
Oracle Database
Using the pod details, we can confirm the database instance name, version, and edition by querying v$instance, for example:
% kubectl exec -it pods/sidb-pstg-s0iwg -n oracle-namespace -- sqlplus sys/Kube#2022@PSTGCDB as sysdba
Defaulted container "sidb-pstg" out of: sidb-pstg, init-permissions (init), init-wallet (init)

SQL*Plus: Release 21.0.0.0.0 - Production on Tue Apr 5 14:18:48 2022
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PSTGPDB1                       READ WRITE NO

SQL> set linesize 200
SQL> select INSTANCE_NAME, HOST_NAME, VERSION_FULL, EDITION from v$instance;

INSTANCE_NAME    HOST_NAME          VERSION_FULL      EDITION
---------------- ------------------ ----------------- -------
PSTGCDB          sidb-pstg-s0iwg    21.3.0.0.0        EE
Note: the Oracle database host name is the Kubernetes pod name, not the OKE node name.
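To see which OKE worker node is currently hosting the database pod, use the -o wide output, which adds a NODE column:

% kubectl get pods -n oracle-namespace -o wide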
Summary
In this post I have shared how to deploy an Oracle 21c database with the Oracle Database Kubernetes Operator (OraOperator) using Portworx storage.
In my next post I will share how Portworx storage replication can be used to support Oracle 21c cold-failover across OCI Availability Domains (ADs) within an Oracle Cloud Infrastructure (OCI) Region.
Please Note: The current release of OraOperator (v0.1.0) is for development and test only. DO NOT USE IN PRODUCTION (yet!)