Oracle Database Kubernetes Operator (OraOperator) and Oracle 21c

Background

Running Oracle within a container is not new; in fact, when I checked, I found I first blogged about running Oracle on Docker all the way back in 2017.

However, what is new and exciting is the recent release of the Oracle Database Kubernetes Operator (OraOperator), which is available from the Oracle GitHub area.

The Oracle Database Kubernetes Operator provides a simple method for the provisioning, cloning, and patching of Oracle Databases on Kubernetes.

For this post I will be focusing on Single Instance Databases; I will look at other options in future posts.

Note: The current release of OraOperator (v0.1.0) is for development and test only. DO NOT USE IN PRODUCTION.

Environment

Before you start, it's advisable to check the Single Instance Database prerequisites to help avoid any issues down the line.

Kubernetes Version

Verify the Kubernetes version:

% kubectl version --short | awk -Fv '/Server Version: / {print $3}'
1.20.11

Kubernetes Nodes

Check Kubernetes cluster details.

% kubectl get nodes -o wide
NAME         STATUS   ROLES   AGE   VERSION    INTERNAL-IP   EXTERNAL-IP     OS-IMAGE                  KERNEL-VERSION                     CONTAINER-RUNTIME
10.0.1.157   Ready    node    31d   v1.20.11   10.0.1.157    132.145.33.86   Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.2
10.0.1.163   Ready    node    31d   v1.20.11   10.0.1.163    144.21.51.173   Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.2
10.0.1.205   Ready    node    31d   v1.20.11   10.0.1.205    132.145.69.16   Oracle Linux Server 7.8   4.14.35-1902.306.2.el7uek.x86_64   cri-o://1.20.2
A 3-node Kubernetes cluster

Install cert-manager

The Oracle Database Kubernetes Operator uses webhooks for validating user input before persisting it in etcd.

Webhooks require TLS certificates that are generated and managed by a certificate manager.

Install the certificate manager using the following command:

% kubectl apply -f https://github.com/jetstack/cert-manager/releases/latest/download/cert-manager.yaml

customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
namespace/cert-manager created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
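
Before installing the operator, it's worth confirming the cert-manager components are up and running; a quick check (output will vary with your cluster) might look like this:

% kubectl get pods -n cert-manager
% kubectl wait --for=condition=Available deployment --all -n cert-manager --timeout=120s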

Install Operator

Clone the Oracle Database Kubernetes Operator repository from GitHub, for example:

% git clone https://github.com/oracle/oracle-database-operator.git
Cloning into 'oracle-database-operator'...
remote: Enumerating objects: 342, done.
remote: Counting objects: 100% (342/342), done.
remote: Compressing objects: 100% (260/260), done.
remote: Total 342 (delta 126), reused 232 (delta 69), pack-reused 0
Receiving objects: 100% (342/342), 673.58 KiB | 919.00 KiB/s, done.
Resolving deltas: 100% (126/126), done.
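
The oracle-database-operator.yaml manifest applied in the next step lives in the cloned repository, so change into the clone first (assuming the default clone directory name):

% cd oracle-database-operator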

Then use kubectl apply to install the Operator:

% kubectl apply -f oracle-database-operator.yaml

namespace/oracle-database-operator-system created
customresourcedefinition.apiextensions.k8s.io/autonomousdatabases.database.oracle.com created
customresourcedefinition.apiextensions.k8s.io/shardingdatabases.database.oracle.com created
customresourcedefinition.apiextensions.k8s.io/singleinstancedatabases.database.oracle.com created
role.rbac.authorization.k8s.io/oracle-database-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/oracle-database-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/oracle-database-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/oracle-database-operator-oracle-database-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/oracle-database-operator-oracle-database-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/oracle-database-operator-oracle-database-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/oracle-database-operator-oracle-database-operator-proxy-rolebinding created
service/oracle-database-operator-controller-manager-metrics-service created
service/oracle-database-operator-webhook-service created
deployment.apps/oracle-database-operator-controller-manager created
certificate.cert-manager.io/oracle-database-operator-serving-cert created
issuer.cert-manager.io/oracle-database-operator-selfsigned-issuer created
mutatingwebhookconfiguration.admissionregistration.k8s.io/oracle-database-operator-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/oracle-database-operator-validating-webhook-configuration created

The file will perform the following operations:

  • Create the Namespace
  • Create the CRDs
  • Create the Roles and Bindings
  • Deploy the Operator

Use kubectl get all to see the Kubernetes pods, services, deployments and replicasets.

% kubectl get all -n oracle-database-operator-system    
NAME                                                               READY   STATUS    RESTARTS   AGE
pod/oracle-database-operator-controller-manager-58447bcbf8-4xvhr   1/1     Running   0          34m
pod/oracle-database-operator-controller-manager-58447bcbf8-5m9r5   1/1     Running   0          34m
pod/oracle-database-operator-controller-manager-58447bcbf8-76ght   1/1     Running   0          34m

NAME                                                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/oracle-database-operator-controller-manager-metrics-service   ClusterIP   10.96.190.131   <none>        8443/TCP   34m
service/oracle-database-operator-webhook-service                      ClusterIP   10.96.251.255   <none>        443/TCP    34m

NAME                                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/oracle-database-operator-controller-manager   3/3     3            3           34m

NAME                                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/oracle-database-operator-controller-manager-58447bcbf8   3         3         3       34m

Describe the deployment with kubectl describe deployments:

% kubectl describe deployments/oracle-database-operator-controller-manager -n oracle-database-operator-system
Name:                   oracle-database-operator-controller-manager
Namespace:              oracle-database-operator-system
CreationTimestamp:      Wed, 10 Nov 2021 10:22:42 +0000
Labels:                 control-plane=controller-manager
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               control-plane=controller-manager
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  control-plane=controller-manager
  Containers:
   manager:
    Image:      container-registry.oracle.com/database/operator:0.1.0
    Port:       9443/TCP
    Host Port:  0/TCP
    Command:
      /manager
    Args:
      --enable-leader-election
    Limits:
      cpu:     400m
      memory:  400Mi
    Requests:
      cpu:        400m
      memory:     400Mi
    Environment:  <none>
    Mounts:
      /tmp/k8s-webhook-server/serving-certs from cert (ro)
  Volumes:
   cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  webhook-server-cert
    Optional:    false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   oracle-database-operator-controller-manager-58447bcbf8 (3/3 replicas created)
Events:
  Type    Reason             Age    From                   Message
  ----    ------             ----   ----                   -------
  Normal  ScalingReplicaSet  5m39s  deployment-controller  Scaled up replica set oracle-database-operator-controller-manager-58447bcbf8 to 3

List Operator pods

% kubectl get pods -n oracle-database-operator-system -o wide                        
NAME                                                           READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
oracle-database-operator-controller-manager-58447bcbf8-4xvhr   1/1     Running   0          9m29s   10.244.1.20    10.0.1.157   <none>           <none>
oracle-database-operator-controller-manager-58447bcbf8-5m9r5   1/1     Running   0          9m29s   10.244.0.17    10.0.1.163   <none>           <none>
oracle-database-operator-controller-manager-58447bcbf8-76ght   1/1     Running   0          9m29s   10.244.0.151   10.0.1.205   <none>           <none>
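
You can also check the operator logs to confirm the controllers have started cleanly, for example:

% kubectl logs deployment/oracle-database-operator-controller-manager -n oracle-database-operator-system --tail=20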

Create Namespace

Optionally, create a new Kubernetes namespace for ease of management, for example oracle-namespace.

We can create a new namespace with kubectl create namespace.

% kubectl create namespace oracle-namespace
namespace/oracle-namespace created

To avoid having to specify the namespace with -n each time, I will use kubectl config set-context to set a default namespace of oracle-namespace.

% kubectl config set-context --current --namespace=oracle-namespace
Context "context-cgr3bekcbka" modified.

Kubectl will now default to using the oracle-namespace namespace, for example:

% kubectl get pods
No resources found in oracle-namespace namespace.

Create Secrets

For this blog we will use Kubernetes Secrets to store Database and Oracle Container Registry (OCR) credentials.

Note: this is not the most secure method and should not be used in production, as the value can easily be obtained, as I show below.

Create Database Secret

Use kubectl create secret to create a database secret:

% kubectl create secret generic admin-password \
--from-literal=sidb-admin-password='Kube#2021' \
-n oracle-namespace
secret/admin-password created

Create Oracle Container Registry Secret

Log on to the Oracle Container Registry (OCR) using your SSO credentials, for example:

% docker login container-registry.oracle.com
Username: <SSO email>
Password: <SSO password>
Login Succeeded

Create a Kubernetes secret using the OCR credentials:

% kubectl create secret generic regcred --from-file=.dockerconfigjson=$HOME/.docker/config.json  --type=kubernetes.io/dockerconfigjson -n oracle-namespace
secret/regcred created
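
A quick kubectl get secrets confirms both secrets now exist in the namespace:

% kubectl get secrets -n oracle-namespace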

View Secret

If you forget the database credentials, the contents of a secret can easily be recovered, for example:

% kubectl get secret/admin-password -o jsonpath='{.data}' -n oracle-namespace
{"sidb-admin-password":"S3ViZSMyMDIx"

And to decode the secret:

% echo 'S3ViZSMyMDIx' | base64 --decode                                                     
Kube#2021
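
The two steps can also be combined into a single command by selecting the key directly with jsonpath and piping it to base64, for example:

% kubectl get secret admin-password -n oracle-namespace -o jsonpath='{.data.sidb-admin-password}' | base64 --decode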

Configure Database

Copy the cloned ../oracle-database-operator/config/samples/sidb/singleinstancedatabase.yaml to your working directory and update it as required, for example.

Update name and sid

apiVersion: database.oracle.com/v1alpha1
kind: SingleInstanceDatabase
metadata:
  name: sidb-oci 
  namespace: oracle-namespace
spec:
  
  ## Use only alphanumeric characters for sid
  sid: SIDB1

Update secretName and secretKey

## Should refer to SourceDB secret if cloning from a SourceDB (cloneFrom is set)
## Secret containing SIDB password mapped to secretKey
## This secret will be deleted after creation of the database unless keepSecret is set to true
adminPassword:
  secretName: admin-password
  secretKey: sidb-admin-password
  keepSecret: false

Update pullFrom and pullSecrets

## Database image details
## Database can be patched by updating the RU version/image
## Major version changes are not supported
image:
    pullFrom: container-registry.oracle.com/database/enterprise:21.3.0.0
    pullSecrets: regcred

Update size, storageClass and accessMode

## size : Minimum size of pvc | class : PVC storage Class
## AccessMode can only accept one of ReadWriteOnce, ReadWriteMany
persistence:
  size: 20Gi
  storageClass: "oci"
  accessMode: "ReadWriteOnce"

For a single instance, use the standard OCI StorageClass of “oci”; this will provision block storage in the same Availability Domain as the database pod.

## Count of Database Pods. Applicable only for "ReadWriteMany" AccessMode
replicas: 1

Note: if using the ‘oci’ StorageClass, leave replicas at 1, as the ‘oci’ StorageClass does not support ReadWriteMany; this is to avoid the following error:

Warning  ProvisioningFailed    36s (x6 over 111s)  oracle.com/oci_control-plane-host-10-64-222-206_4e7fc395-3f94-4a33-8f21-2cec8b026b7d  failed to provision volume with StorageClass "oci": invalid access mode ReadWriteMany specified. Only ReadWriteOnce is supported
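
Putting the edits together, a minimal sidb_oci.yaml showing only the fields changed above might look like the following. This is a sketch using the values from this post; the full sample file in the repository contains additional optional fields, so use it as the starting point and adjust to your own environment.

apiVersion: database.oracle.com/v1alpha1
kind: SingleInstanceDatabase
metadata:
  name: sidb-oci
  namespace: oracle-namespace
spec:
  sid: SIDB1
  adminPassword:
    secretName: admin-password
    secretKey: sidb-admin-password
    keepSecret: false
  image:
    pullFrom: container-registry.oracle.com/database/enterprise:21.3.0.0
    pullSecrets: regcred
  persistence:
    size: 20Gi
    storageClass: "oci"
    accessMode: "ReadWriteOnce"
  replicas: 1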

Provision Oracle Database

Now provision the database using kubectl apply, providing the updated .yaml file:

% kubectl apply -f sidb_oci.yaml -n oracle-namespace
singleinstancedatabase.database.oracle.com/sidb-oci created

List Databases

% kubectl get singleinstancedatabases -o name -n oracle-namespace
singleinstancedatabase.database.oracle.com/sidb-oci

Database Status

We can use kubectl get singleinstancedatabase to return the database status; wait until it reports Healthy.

% kubectl get singleinstancedatabase -n oracle-namespace                                                      
NAME       EDITION      STATUS    VERSION   CONNECT STR                 OEM EXPRESS URL
sidb-oci   Enterprise   Pending   Unknown   144.21.51.173:31081/SIDB1   https://144.21.51.173:31250/em

% kubectl get singleinstancedatabase -n oracle-namespace 
NAME       EDITION      STATUS     VERSION   CONNECT STR                 OEM EXPRESS URL
sidb-oci   Enterprise   Creating   Unknown   144.21.51.173:31081/SIDB1   https://144.21.51.173:31250/em

% kubectl get singleinstancedatabase -n oracle-namespace 
NAME       EDITION      STATUS    VERSION      CONNECT STR                 OEM EXPRESS URL
sidb-oci   Enterprise   Healthy   21.3.0.0.0   144.21.51.173:31081/SIDB1   https://144.21.51.173:31250/em
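
Rather than re-running the command, you can also watch the resource until the status changes (press Ctrl-C to stop), for example:

% kubectl get singleinstancedatabase sidb-oci -n oracle-namespace -w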

kubectl get singleinstancedatabase can also be used to return individual elements, for example.

Database Health

% kubectl get singleinstancedatabase sidb-oci -o "jsonpath={.status.status}" -n oracle-namespace
Healthy                                                                                                                           

Database Connection String

% kubectl get singleinstancedatabase sidb-oci -o "jsonpath={.status.connectString}" -n oracle-namespace
144.21.51.173:31081/SIDB1                                           

Database Enterprise Manager URL

% kubectl get singleinstancedatabase sidb-oci -o "jsonpath={.status.oemExpressUrl}" -n oracle-namespace
https://144.21.51.173:31250/em                                      

SID Name

% kubectl get singleinstancedatabase sidb-oci -o "jsonpath={.status.sid}" -n oracle-namespace
SIDB

Pluggable Database Name (PDB)

% kubectl get singleinstancedatabase sidb-oci -o "jsonpath={.status.pdbName}" -n oracle-namespace
sidbpdb1

PDB Connect String

% kubectl get singleinstancedatabase sidb-oci -o "jsonpath={.status.pdbConnectString}" -n oracle-namespace
144.21.51.173:31081/SIDBPDB1      

Database Details

We can confirm the database instance name, version and edition using the v$instance view, providing the database credentials stored in the Kubernetes secret.

% kubectl get pods -n oracle-namespace -o wide
NAME             READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
sidb-oci-7r5vz   1/1     Running   0          54m   10.244.0.20   10.0.1.163   <none>           <none>

Note: the Oracle database host name is the pod name, not the Kubernetes node name.

% kubectl exec -it pods/sidb-oci-7r5vz -n oracle-namespace -- sqlplus system/Kube#2021@SIDB1                      

SQL*Plus: Release 21.0.0.0.0 - Production on Thu Nov 11 13:43:26 2021
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle.  All rights reserved.

Last Successful login time: Thu Nov 11 2021 13:43:09 +00:00

Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL> select INSTANCE_NAME, HOST_NAME, VERSION_FULL, EDITION from v$instance;

INSTANCE_NAME	 HOST_NAME	  VERSION_FULL	    EDITION
---------------- ---------------- ----------------- -------
SIDB1		 sidb-oci-7r5vz   21.3.0.0.0	    EE
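
We can connect to the pluggable database in the same way using an EZConnect string; assuming the listener inside the pod is on the default port 1521, something like the following should work (otherwise use the pdbConnectString shown earlier from a client outside the cluster):

% kubectl exec -it pods/sidb-oci-7r5vz -n oracle-namespace -- sqlplus system/Kube#2021@//localhost:1521/SIDBPDB1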

Database Storage Details

% kubectl get pvc -n oracle-namespace                                             

NAME                             STATUS   VOLUME                                                                                      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/sidb-oci   Bound    ocid1.volume.oc1.uk-london-1.abwgiljtm46pvnngumnkbsmunnxsaqw7ct2enof3tsbx6yqykidohvid5xqq   50Gi       RWO            oci            149m
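
To see the underlying block volume, access mode and provisioning events for the claim, describe it, for example:

% kubectl describe pvc sidb-oci -n oracle-namespace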

Summary

In this post I have shown how to install the Oracle Database Kubernetes Operator (OraOperator) and use the standard OCI StorageClasses to deploy a single instance Oracle database.

In a future post I will share how we can manage database fail-over in the event of a Kubernetes pod or node failure, and look at how Portworx storage replication can be used to accelerate database fail-over.
