Oracle 21c with the Oracle Database Kubernetes Operator (OraOperator) and Portworx storage


In Getting started with the Oracle Database Kubernetes Operator (OraOperator) I introduced the Oracle Database Kubernetes Operator and provided details on how to install and configure it for a single instance database running on the Oracle Container Engine for Kubernetes (OKE) service within Oracle Cloud Infrastructure (OCI).

In this post I share how I use Terraform to build an OKE cluster with Portworx storage, and then provide details on how to install and configure the OraOperator to use a Portworx storage class, providing the ability to fail over a Kubernetes Oracle 21c database pod across Availability Domains (ADs) within OCI.

OKE Preparation

Using the details in my Oracle Kubernetes Engine (OKE) deployment with Portworx using Terraform post and the Terraform code which has been uploaded to GitHub, create a new OKE environment.

For example, clone the Terraform code from my GitHub repo, change directory, and localise the terraform.tfvars file, providing your OCI account, tenancy, and Portworx URLs.
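A localised terraform.tfvars might look like the sketch below. Every value is a hypothetical placeholder, and the variable names follow common OCI Terraform provider conventions; check the names actually used in the repo before applying.

```hcl
# All values below are hypothetical placeholders - substitute your own.
tenancy_ocid     = "ocid1.tenancy.oc1..aaaa..."      # OCI tenancy OCID
user_ocid        = "ocid1.user.oc1..aaaa..."         # OCI user OCID
fingerprint      = "xx:xx:xx:..."                    # API signing key fingerprint
private_key_path = "~/.oci/oci_api_key.pem"          # API signing key file
compartment_ocid = "ocid1.compartment.oc1..aaaa..."  # Target compartment
region           = "us-phoenix-1"                    # OCI region
px_spec_url      = "https://install.portworx.com/..." # Portworx spec generator URL (variable name assumed)
```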

OKE Environment

The Terraform script will create an OKE cluster as below.

3 node OKE Portworx Storage Cluster

Before we move on, let’s have a quick look at the Terraform built OKE environment.

Kubernetes Version

Confirm the Kubernetes version with kubectl version

% kubectl version --short | awk -Fv '/Server Version: / {print $3}'

Kubernetes Nodes

Check the size of the Kubernetes cluster with kubectl get nodes

% kubectl get nodes -o wide
NAME   STATUS   ROLES   AGE    VERSION    INTERNAL-IP   EXTERNAL-IP   OS-IMAGE                  KERNEL-VERSION                     CONTAINER-RUNTIME
       Ready    node    138m   v1.20.11                 <none>        Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.7
       Ready    node    138m   v1.20.11                 <none>        Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.7
       Ready    node    138m   v1.20.11                 <none>        Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.7


And use pxctl to see the version of Portworx

% export PX_POD=$(kubectl get pods -l name=portworx -n kube-system -o jsonpath='{.items[0]}')
% alias pxctl='kubectl exec -n kube-system ${PX_POD} -it -- /opt/pwx/bin/pxctl' 
% pxctl -v                                                                                                 
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
pxctl version

Kubernetes Storage Classes

To ensure data is available in all three OCI Availability Domains (ADs) within our Region, we need to create and apply a new Portworx StorageClass with 3 replicas, for example px-ora-csi-sc.yaml

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-ora-csi-sc
provisioner: pxd.portworx.com
parameters:
  repl: "3"
  io_profile: "db_remote"
  priority_io: "high"
allowVolumeExpansion: true
% kubectl apply -f px-ora-csi-sc.yaml
storageclass.storage.k8s.io/px-ora-csi-sc created

List available Kubernetes Storage Classes using kubectl get sc or kubectl get StorageClass, for example.

% kubectl get sc
NAME                             PROVISIONER      RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
oci (default)                                     Delete          Immediate              false                  4h1m
oci-bv                                            Delete          WaitForFirstConsumer   true                   4h1m
px-db                                             Delete          Immediate              true                   3h52m
px-db-cloud-snapshot                              Delete          Immediate              true                   3h52m
px-db-cloud-snapshot-encrypted                    Delete          Immediate              true                   3h52m
px-db-encrypted                                   Delete          Immediate              true                   3h52m
px-db-local-snapshot                              Delete          Immediate              true                   3h52m
px-db-local-snapshot-encrypted                    Delete          Immediate              true                   3h52m
px-ora-csi-sc                                     Delete          Immediate              true                   25s
px-replicated                                     Delete          Immediate              true                   3h52m
px-replicated-encrypted                           Delete          Immediate              true                   3h52m
stork-snapshot-sc                stork-snapshot   Delete          Immediate              true                   3h52m

Install cert-manager

The Oracle Database Kubernetes Operator uses webhooks for validating user input before persisting it in etcd. Webhooks require TLS certificates that are generated and managed by a certificate manager.

Install the certificate manager using the following command:

% kubectl apply -f

Install Operator

Clone the Oracle Database Kubernetes Operator repository from GitHub, for example

% git clone

Use kubectl apply to install the Operator

% kubectl apply -f oracle-database-operator/oracle-database-operator.yaml

Applying the file performs the following operations:

  • Create the Namespace
  • Create the CRDs
  • Create the Roles and Bindings
  • Deploy the Operator

Use kubectl get all to see the Kubernetes pods, services, deployments and replicasets, for example

% kubectl get all -n oracle-database-operator-system    
NAME                                                               READY   STATUS    RESTARTS   AGE
pod/oracle-database-operator-controller-manager-58447bcbf8-29w8s   1/1     Running   0          26s
pod/oracle-database-operator-controller-manager-58447bcbf8-6ljpn   1/1     Running   0          26s
pod/oracle-database-operator-controller-manager-58447bcbf8-ng87q   1/1     Running   0          26s

NAME                                                                  TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
service/oracle-database-operator-controller-manager-metrics-service   ClusterIP   <none>        8443/TCP   27s
service/oracle-database-operator-webhook-service                      ClusterIP   <none>        443/TCP    27s

NAME                                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/oracle-database-operator-controller-manager   3/3     3            3           27s

NAME                                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/oracle-database-operator-controller-manager-58447bcbf8   3         3         3       27s

Oracle Namespace

Optionally, create a new Kubernetes namespace, for example oracle-namespace, for ease of management.

We can create a new namespace with kubectl create namespace.

% kubectl create namespace oracle-namespace
namespace/oracle-namespace created

To avoid having to specify our namespace with -n each time, I will use kubectl config set-context to set a default namespace of oracle-namespace.

% kubectl config set-context --current --namespace=oracle-namespace
Context "context-cdgorkg4nua" modified.

Kubectl will now default to using the oracle-namespace namespace.

Create Secrets

For this blog we will use Kubernetes Secrets to store the database admin password and the credentials for the Oracle Container Registry. Note this is not the most secure method and should not be used in production, as the values are only base64-encoded and can be easily recovered.

Create Database Secret

Use kubectl create secret to create a database secret

% kubectl create secret generic admin-password --from-literal=sidb-admin-password='Kube#2022' -n oracle-namespace
secret/admin-password created
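To illustrate why this is not production-safe: anyone with read access to the Secret can recover the clear-text password, for example with kubectl get secret admin-password -o jsonpath='{.data.sidb-admin-password}' -n oracle-namespace piped through base64 --decode. The sketch below shows that Secret data is merely base64-encoded, not encrypted, using the password created above:

```shell
# Kubernetes stores Secret values base64-encoded, not encrypted.
# Encoding the password used above:
printf '%s' 'Kube#2022' | base64
# -> S3ViZSMyMDIy

# Decoding trivially recovers the clear-text value:
printf '%s' 'S3ViZSMyMDIy' | base64 --decode
# -> Kube#2022
```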

Create Oracle Container Registry Secret

Log on to the Oracle Container Registry (OCR) using your SSO credentials, for example:

% docker login
Username: <SSO email>
Password: <SSO password>
Login Succeeded

Create a Kubernetes secret using the OCR credentials

% kubectl create secret generic regcred --from-file=.dockerconfigjson=$HOME/.docker/config.json -n oracle-namespace
secret/regcred created

Configure Database

Copy ../oracle-database-operator/config/samples/sidb/singleinstancedatabase.yaml to your working directory, for example.

% cp oracle-database-operator/config/samples/sidb/singleinstancedatabase.yaml .  

Update as required, for example: name, namespace, sid, secretName, secretKey, pdbName, pullFrom, pullSecrets, size, and storageClass

# Copyright (c) 2021, Oracle and/or its affiliates.
# Licensed under the Universal Permissive License v 1.0 as shown at
apiVersion: database.oracle.com/v1alpha1
kind: SingleInstanceDatabase
metadata:
  name: sidb-pstg
  namespace: oracle-namespace
spec:
  ## Use only alphanumeric characters for sid
  sid: PSTGCDB

  ## A source database ref to clone from, leave empty to create a fresh database
  cloneFrom: ""

  ## NA if cloning from a SourceDB (cloneFrom is set)
  edition: enterprise

  ## Should refer to SourceDB secret if cloning from a SourceDB (cloneFrom is set)
  ## Secret containing SIDB password mapped to secretKey
  ## This secret will be deleted after creation of the database unless keepSecret is set to true
  adminPassword:
    secretName: admin-password
    secretKey: sidb-admin-password
    keepSecret: true

  ## NA if cloning from a SourceDB (cloneFrom is set)
  charset: AL32UTF8

  ## NA if cloning from a SourceDB (cloneFrom is set)
  pdbName: pstgpdb1

  ## Enable/Disable Flashback
  flashBack: false

  ## Enable/Disable ArchiveLog
  archiveLog: false

  ## Enable/Disable ForceLogging
  forceLog: false

  ## NA if cloning from a SourceDB (cloneFrom is set)
  ## Specify both sgaSize and pgaSize (in MB) or don't specify both
  ## Specify Non-Zero value to use
  initParams:
    cpuCount: 0
    processes: 0
    sgaTarget: 0
    pgaAggregateTarget: 0

  ## Database image details
  ## Database can be patched by updating the RU version/image
  ## Major version changes are not supported
  image:
    pullFrom: container-registry.oracle.com/database/enterprise:latest
    pullSecrets: regcred

  ## size : Minimum size of pvc | class : PVC storage Class
  ## AccessMode can only accept one of ReadWriteOnce, ReadWriteMany
  ## Below mentioned storageClass/accessMode applies to OCI block volumes. Update appropriately for other types of persistent volumes.
  persistence:
    size: 60Gi
    storageClass: "px-ora-csi-sc"
    accessMode: "ReadWriteOnce"

  ## Type of service. Applicable on cloud environments only
  ## if loadBalService : false, service type = "NodePort". else "LoadBalancer"
  loadBalancer: false

  ## Deploy only on nodes having required labels. Format label_name : label_value
  ## Leave empty if there is no such requirement.
  ## Uncomment to use
  # nodeSelector:
  #   topology.kubernetes.io/zone: bVCG:PHX-AD-1
  #   pool: sidb

  ## Count of Database Pods. Applicable only for "ReadWriteMany" AccessMode
  replicas: 1

Provision Oracle database

Create a new database using the updated singleinstancedatabase.yaml file, for example

% kubectl apply -f singleinstancedatabase.yaml -n oracle-namespace
singleinstancedatabase.database.oracle.com/sidb-pstg created

Persistent Volume Claims

List Persistent Volume Claims with kubectl get pvc

% kubectl get pvc                                                 
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
sidb-pstg   Bound    pvc-0d405f7e-72c2-4a6c-b165-e1ec0904abc0   60Gi       RWO            px-ora-csi-sc   6s

List Databases

We can use kubectl to list the Oracle databases created with the Operator, thus:

% kubectl get singleinstancedatabases -o name -n oracle-namespace

Database Status

We can use kubectl get singleinstancedatabase to return the database status, and watch it change from Pending -> Creating -> Patching -> Healthy

% watch kubectl get singleinstancedatabase -n oracle-namespace 
NAME        EDITION      STATUS    VERSION   CONNECT STR   OEM EXPRESS URL
sidb-pstg   Enterprise   Healthy
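Rather than watching interactively, a small poll loop can block until the database reports Healthy. Here wait_for_status is a hypothetical helper of my own, not part of the Operator:

```shell
# wait_for_status CMD EXPECTED: re-runs CMD until its output equals EXPECTED.
wait_for_status() {
  until [ "$($1)" = "$2" ]; do
    sleep 30
  done
}

# Usage against the sidb-pstg database created above:
# wait_for_status \
#   "kubectl get singleinstancedatabase sidb-pstg -o jsonpath={.status.status} -n oracle-namespace" \
#   Healthy
```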

kubectl get singleinstancedatabase can also be used to return different elements, for example.

Database Health

% kubectl get singleinstancedatabase sidb-pstg -o "jsonpath={.status.status}" -n oracle-namespace

Database Connection String

% kubectl get singleinstancedatabase sidb-pstg -o "jsonpath={.status.connectString}" -n oracle-namespace                                          

Database SID Name

% kubectl get singleinstancedatabase sidb-pstg -o "jsonpath={.status.sid}" -n oracle-namespace

Pluggable Database Name (PDB)

% kubectl get singleinstancedatabase sidb-pstg -o "jsonpath={.status.pdbName}" -n oracle-namespace

PDB Connect String

% kubectl get singleinstancedatabase sidb-pstg -o "jsonpath={.status.pdbConnectString}" -n oracle-namespace

Database Enterprise Manager URL

% kubectl get singleinstancedatabase sidb-pstg -o "jsonpath={.status.oemExpressUrl}" -n oracle-namespace                                    
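The individual queries above can be wrapped in a small helper that prints every status field in one go. sidb_report is a hypothetical convenience function, assuming the sidb-pstg database and oracle-namespace used in this post:

```shell
# Prints each status field of the sidb-pstg SingleInstanceDatabase.
sidb_report() {
  for field in status sid pdbName connectString pdbConnectString oemExpressUrl; do
    printf '%-18s %s\n' "$field" \
      "$(kubectl get singleinstancedatabase sidb-pstg \
           -o "jsonpath={.status.$field}" -n oracle-namespace)"
  done
}
```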

Oracle Database

Using the pod details we can confirm the database instance name, version, and edition using the v$instance view, for example.

%  kubectl exec -it pods/sidb-pstg-s0iwg -n oracle-namespace -- sqlplus sys/Kube#2022@PSTGCDB as sysdba        
Defaulted container "sidb-pstg" out of: sidb-pstg, init-permissions (init), init-wallet (init)

SQL*Plus: Release - Production on Tue Apr 5 14:18:48 2022

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Connected to:
Oracle Database 21c Enterprise Edition Release - Production

SQL> show pdbs;

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------

SQL> set linesize 200
SQL> select instance_name, host_name, version, edition from v$instance;

INSTANCE_NAME    HOST_NAME                                                        VERSION           EDITION
---------------- ---------------------------------------------------------------- ----------------- -------
PSTGCDB          sidb-pstg-s0iwg                                                                    EE

Note: the Oracle database host name is the Kubernetes pod name, not the OKE node name.


In this post I have shared how to deploy an Oracle 21c database with the Oracle Database Kubernetes Operator (OraOperator) using Portworx storage.

In my next post I will share how Portworx storage replication can be used to support Oracle 21c cold-failover across OCI Availability Domains (ADs) within an Oracle Cloud Infrastructure (OCI) Region.

Please Note: The current release of OraOperator (v0.1.0) is for development and test only. DO NOT USE IN PRODUCTION (yet!)
