Background
The Oracle Database Operator for Kubernetes (OraOperator) provides a simple, supported method of provisioning, cloning, and patching Oracle Databases, including databases deployed on Kubernetes.
I have previously posted a blog on how to use the OraOperator (v0.1.0) to deliver an Oracle database on Oracle Container Engine for Kubernetes (OKE) in Oracle Cloud Infrastructure (OCI), and how to deliver warm failover with Portworx persistent storage for Kubernetes.
In this post I will show how we can use the OraOperator to create a single instance Oracle 21.3 database on an on-premises Kubernetes cluster.
Please note: the current release of OraOperator (v0.2.0) is for development and test only. DO NOT USE IN PRODUCTION.
Environment
Before you start, it’s advisable to check the latest OraOperator prerequisites to help avoid any issues down the line.
Kubernetes Version
Let’s begin by confirming the version of Kubernetes with kubectl version
[root@master-1 ~]# kubectl version --short | awk -Fv '/Server Version: / {print $3}'
1.21.11
Kubernetes Nodes
And check Kubernetes worker node details with kubectl get nodes
[root@master-1 ~]# kubectl get nodes -o wide
NAME       STATUS   ROLES                  AGE    VERSION    INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master-1   Ready    control-plane,master   118m   v1.21.11   10.225.115.193   <none>        CentOS Linux 7 (Core)   3.10.0-1160.45.1.el7.x86_64   docker://1.13.1
node-1-1   Ready    <none>                 117m   v1.21.11   10.225.115.190   <none>        CentOS Linux 7 (Core)   3.10.0-1160.45.1.el7.x86_64   docker://1.13.1
node-1-2   Ready    <none>                 117m   v1.21.11   10.225.115.189   <none>        CentOS Linux 7 (Core)   3.10.0-1160.45.1.el7.x86_64   docker://1.13.1
node-1-3   Ready    <none>                 117m   v1.21.11   10.225.115.191   <none>        CentOS Linux 7 (Core)   3.10.0-1160.45.1.el7.x86_64   docker://1.13.1
Cert Manager
The Oracle Database Kubernetes Operator uses webhooks for validating user input before persisting it in etcd.
Webhooks require TLS certificates that are generated and managed by a certificate manager.
Install the certificate manager using the following command:
[root@master-1 ~]# kubectl apply -f https://github.com/jetstack/cert-manager/releases/latest/download/cert-manager.yaml
namespace/cert-manager created
customresourcedefinition.apiextensions.k8s.io/clusterissuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/challenges.acme.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificaterequests.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/issuers.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/certificates.cert-manager.io created
customresourcedefinition.apiextensions.k8s.io/orders.acme.cert-manager.io created
serviceaccount/cert-manager-cainjector created
serviceaccount/cert-manager created
serviceaccount/cert-manager-webhook created
configmap/cert-manager-webhook created
clusterrole.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrole.rbac.authorization.k8s.io/cert-manager-view created
clusterrole.rbac.authorization.k8s.io/cert-manager-edit created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrole.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrole.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-cainjector created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-issuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-clusterissuers created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificates created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-orders created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-challenges created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-ingress-shim created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-approve:cert-manager-io created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-controller-certificatesigningrequests created
clusterrolebinding.rbac.authorization.k8s.io/cert-manager-webhook:subjectaccessreviews created
role.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
role.rbac.authorization.k8s.io/cert-manager:leaderelection created
role.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
rolebinding.rbac.authorization.k8s.io/cert-manager-cainjector:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager:leaderelection created
rolebinding.rbac.authorization.k8s.io/cert-manager-webhook:dynamic-serving created
service/cert-manager created
service/cert-manager-webhook created
deployment.apps/cert-manager-cainjector created
deployment.apps/cert-manager created
deployment.apps/cert-manager-webhook created
mutatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
validatingwebhookconfiguration.admissionregistration.k8s.io/cert-manager-webhook created
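Before moving on, it’s worth a quick check that the cert-manager deployments have rolled out; for example (kubectl wait will block until all three deployments report Available):

[root@master-1 ~]# kubectl get pods -n cert-manager
[root@master-1 ~]# kubectl wait --for=condition=Available deployment --all -n cert-manager --timeout=300s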
The Database Operator
Now clone the Oracle Database Kubernetes Operator repository from GitHub, for example:
[root@master-1 ~]# git clone https://github.com/oracle/oracle-database-operator.git
Cloning into 'oracle-database-operator'...
remote: Enumerating objects: 5447, done.
remote: Counting objects: 100% (1294/1294), done.
remote: Compressing objects: 100% (195/195), done.
remote: Total 5447 (delta 1184), reused 1099 (delta 1099), pack-reused 4153
Receiving objects: 100% (5447/5447), 3.25 MiB | 3.18 MiB/s, done.
Resolving deltas: 100% (3546/3546), done.
Change directory to oracle-database-operator, then use kubectl apply to install the database operator:
[root@master-1 oracle-database-operator]# kubectl apply -f oracle-database-operator.yaml
namespace/oracle-database-operator-system created
customresourcedefinition.apiextensions.k8s.io/autonomouscontainerdatabases.database.oracle.com created
customresourcedefinition.apiextensions.k8s.io/autonomousdatabasebackups.database.oracle.com created
customresourcedefinition.apiextensions.k8s.io/autonomousdatabaserestores.database.oracle.com created
customresourcedefinition.apiextensions.k8s.io/autonomousdatabases.database.oracle.com created
customresourcedefinition.apiextensions.k8s.io/cdbs.database.oracle.com created
customresourcedefinition.apiextensions.k8s.io/dbcssystems.database.oracle.com created
customresourcedefinition.apiextensions.k8s.io/oraclerestdataservices.database.oracle.com created
customresourcedefinition.apiextensions.k8s.io/pdbs.database.oracle.com created
customresourcedefinition.apiextensions.k8s.io/shardingdatabases.database.oracle.com created
customresourcedefinition.apiextensions.k8s.io/singleinstancedatabases.database.oracle.com created
role.rbac.authorization.k8s.io/oracle-database-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/oracle-database-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/oracle-database-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/oracle-database-operator-oracle-database-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/oracle-database-operator-oracle-database-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/oracle-database-operator-oracle-database-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/oracle-database-operator-oracle-database-operator-proxy-rolebinding created
service/oracle-database-operator-controller-manager-metrics-service created
service/oracle-database-operator-webhook-service created
certificate.cert-manager.io/oracle-database-operator-serving-cert created
issuer.cert-manager.io/oracle-database-operator-selfsigned-issuer created
mutatingwebhookconfiguration.admissionregistration.k8s.io/oracle-database-operator-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/oracle-database-operator-validating-webhook-configuration created
deployment.apps/oracle-database-operator-controller-manager created
Applying the file performs the following operations:
- Create Namespace (oracle-database-operator-system)
- Create Custom Resource Definitions (CRDs)
- Create Roles and Bindings
- Deploy the Operator
Use kubectl get all to see the Kubernetes pods, services, deployments, and replica sets.
[root@master-1 ~]# kubectl get all -n oracle-database-operator-system
NAME                                                                READY   STATUS    RESTARTS   AGE
pod/oracle-database-operator-controller-manager-694f44ffc8-6hbx9   1/1     Running   0          3m21s
pod/oracle-database-operator-controller-manager-694f44ffc8-9pqbg   1/1     Running   0          3m21s
pod/oracle-database-operator-controller-manager-694f44ffc8-h99cj   1/1     Running   0          3m21s

NAME                                                                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/oracle-database-operator-controller-manager-metrics-service   ClusterIP   10.111.168.39   <none>        8443/TCP   3m22s
service/oracle-database-operator-webhook-service                      ClusterIP   10.99.58.64     <none>        443/TCP    3m22s

NAME                                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/oracle-database-operator-controller-manager   3/3     3            3           3m21s

NAME                                                                     DESIRED   CURRENT   READY   AGE
replicaset.apps/oracle-database-operator-controller-manager-694f44ffc8   3         3         3       3m21s
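Alternatively, to block until the operator deployment shown above is fully available, kubectl rollout status can be used, for example:

[root@master-1 ~]# kubectl rollout status deployment/oracle-database-operator-controller-manager -n oracle-database-operator-system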
Create Namespace
Optionally, create a new Kubernetes namespace for ease of management, for example oracle-namespace.
Use kubectl create namespace to create it, for example:
[root@master-1 ~]# kubectl create namespace oracle-namespace
namespace/oracle-namespace created
To avoid having to specify the namespace with -n each time, I will use kubectl config set-context to set a default namespace of oracle-namespace.
[root@master-1 ~]# kubectl config set-context --current --namespace=oracle-namespace
Context "kubernetes-admin@kubernetes" modified.
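To confirm the default namespace for the current context, a standard kubectl query such as the following should return oracle-namespace:

[root@master-1 ~]# kubectl config view --minify --output 'jsonpath={..namespace}'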
Kubernetes Secrets
For this blog we will use Kubernetes Secrets to store the database and Oracle Container Registry (OCR) credentials.
Note this is not the most secure method and should not be used for production, as the values can be easily obtained. Oracle strongly recommends that you set and get sensitive data from Oracle Cloud Infrastructure Vault, or from other third-party vaults.
Oracle Database Secret
Use kubectl create secret to create a database secret; this will be used to log on to the database.
[root@master-1 ~]# kubectl create secret generic admin-password --from-literal=sidb-admin-password='Kube#2022' -n oracle-namespace
secret/admin-password created
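As a quick illustration of why this approach is not production-grade, anyone with read access to the namespace can recover the password simply by decoding the Secret, for example:

[root@master-1 ~]# kubectl get secret admin-password -n oracle-namespace -o jsonpath='{.data.sidb-admin-password}' | base64 -d
Kube#2022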
Oracle Container Registry Secret
Log on to the Oracle Container Registry (OCR) using your SSO credentials, for example:
[root@master-1 ~]# docker login container-registry.oracle.com
Username: <sso username>
Password: <sso password>
Login Succeeded
Create a Kubernetes secret for OCR using the cached Oracle SSO credentials:
[root@master-1 ~]# kubectl create secret generic regcred --from-file=.dockerconfigjson=$HOME/.docker/config.json --type=kubernetes.io/dockerconfigjson -n oracle-namespace
secret/regcred created
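If you would rather not rely on a cached Docker config file, kubectl can also create the registry secret directly; a sketch, with placeholder SSO credentials:

[root@master-1 ~]# kubectl create secret docker-registry regcred \
  --docker-server=container-registry.oracle.com \
  --docker-username='<sso username>' \
  --docker-password='<sso password>' \
  -n oracle-namespace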
Database Configuration
Create a copy of the cloned ../oracle-database-operator/config/samples/sidb/singleinstancedatabase.yaml in your working directory, for example:
[root@master-1 ~]# cp oracle-database-operator/config/samples/sidb/singleinstancedatabase.yaml pstg.yaml
Then localise as required. For this blog post, I updated the following (see the sketch after this list):
- name, namespace, sid and pdbName
- secretName and secretKey
- pullFrom and pullSecrets
- size and storageClass.
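For reference, here is a minimal sketch of how those fields fit together in pstg.yaml, following the layout of the v0.2.0 sample; the secret, registry, size and storage class values match this deployment, while the pdbName and image tag shown are illustrative assumptions:

apiVersion: database.oracle.com/v1alpha1
kind: SingleInstanceDatabase
metadata:
  name: pstg
  namespace: oracle-namespace
spec:
  sid: PSTG1
  edition: enterprise
  adminPassword:
    secretName: admin-password
    secretKey: sidb-admin-password
  pdbName: pstgpdb1                # illustrative PDB name
  image:
    pullFrom: container-registry.oracle.com/database/enterprise:21.3.0.0   # assumed 21.3 image tag
    pullSecrets: regcred
  persistence:
    size: 8Gi
    storageClass: "px-csi-db"
    accessMode: "ReadWriteOnce"
  replicas: 1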
Now provision the database using kubectl apply, providing the updated .yaml file, for example:
[root@master-1 ~]# kubectl apply -f pstg.yaml -n oracle-namespace
singleinstancedatabase.database.oracle.com/pstg created
Database Status
We can use watch kubectl get singleinstancedatabase to return the database status; the status will change from Pending -> Creating -> Patching -> Healthy.
[root@master-1 ~]# watch kubectl get singleinstancedatabase -n oracle-namespace
Every 2.0s: kubectl get singleinstancedatabase -n oracle-namespace    Mon Dec 12 12:02:05 2022

NAME   EDITION      STATUS    VERSION       CONNECT STR                  TCPS CONNECT STR   OEM EXPRESS URL
pstg   Enterprise   Pending   Unavailable   10.225.115.191:30745/PSTG1  Unavailable        https://10.225.115.191:30275/em
Wait until the status reports Healthy.
[root@master-1 ~]# kubectl get singleinstancedatabase pstg -n oracle-namespace
NAME   EDITION      STATUS    VERSION      CONNECT STR                  TCPS CONNECT STR   OEM EXPRESS URL
pstg   Enterprise   Healthy   21.3.0.0.0   10.225.115.191:30745/PSTG1  Unavailable        https://10.225.115.191:30275/em
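The same details can be pulled straight from the resource's status fields with jsonpath; for example, assuming the status.status and status.connectString fields used in the OraOperator samples:

[root@master-1 ~]# kubectl get singleinstancedatabase pstg -n oracle-namespace -o "jsonpath={.status.status}"
Healthy
[root@master-1 ~]# kubectl get singleinstancedatabase pstg -n oracle-namespace -o "jsonpath={.status.connectString}"
10.225.115.191:30745/PSTG1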
Oracle Database Log
Note: while the Oracle database is being created, we can review the creation log with kubectl logs, for example:
[root@master-1 ~]# kubectl get pods -n oracle-namespace
NAME         READY   STATUS    RESTARTS   AGE
pstg-9vqzn   0/1     Running   0          23m

[root@master-1 ~]# kubectl logs pod/pstg-9vqzn -n oracle-namespace
[2022:12:12 12:10:56]: Acquiring lock .PSTG1.create_lck with heartbeat 30 secs
[2022:12:12 12:10:56]: Lock acquired
[2022:12:12 12:10:56]: Starting heartbeat
[2022:12:12 12:10:56]: Lock held .PSTG1.create_lck
ORACLE EDITION: ENTERPRISE

LSNRCTL for Linux: Version 21.0.0.0.0 - Production on 12-DEC-2022 12:10:56

Copyright (c) 1991, 2021, Oracle. All rights reserved.

Starting /opt/oracle/product/21c/dbhome_1/bin/tnslsnr: please wait...

TNSLSNR for Linux: Version 21.0.0.0.0 - Production
...
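To stream the log rather than take a one-off snapshot, add the -f flag, for example:

[root@master-1 ~]# kubectl logs -f pod/pstg-9vqzn -n oracle-namespace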
Persistent Volume Claim
With kubectl get pvc we can confirm that a Persistent Volume Claim has been created with the size and storage class specified in our .yaml file.
[root@master-1 ~]# kubectl get pvc
NAME   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pstg   Bound    pvc-4f2e08d1-43b6-457f-9bc9-797ed9a1ff2c   8Gi        RWO            px-csi-db      12m
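The bound PersistentVolume itself can be inspected with kubectl get pv, using the volume name from the claim, for example:

[root@master-1 ~]# kubectl get pv pvc-4f2e08d1-43b6-457f-9bc9-797ed9a1ff2c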
Oracle Database
Now that we have a Healthy OraOperator-deployed 21c database, we can confirm the database instance name, version, and edition by shelling into the pod and querying the v$instance view, providing the database credentials stored in the Kubernetes secret.
[root@master-1 ~]# kubectl get pods -n oracle-namespace -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
pstg-9vqzn   1/1     Running   0          32m   10.244.2.9   node-1-3   <none>           <none>
Note: the Oracle database host name is the pod name, not the Kubernetes node name.
[root@master-1 ~]# kubectl exec -it pods/pstg-9vqzn -n oracle-namespace -- sqlplus system/Kube#2022@PSTG1
Defaulted container "pstg" out of: pstg, init-permissions (init), init-wallet (init)

SQL*Plus: Release 21.0.0.0.0 - Production on Mon Dec 12 12:31:57 2022
Version 21.3.0.0.0

Copyright (c) 1982, 2021, Oracle. All rights reserved.

Last Successful login time: Mon Dec 12 2022 12:19:00 +00:00

Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL> set linesize 200
SQL> select INSTANCE_NAME, HOST_NAME, VERSION_FULL, EDITION from v$instance;

INSTANCE_NAME    HOST_NAME                                                        VERSION_FULL      EDITION
---------------- ---------------------------------------------------------------- ----------------- -------
PSTG1            pstg-9vqzn                                                       21.3.0.0.0        EE

SQL>
Oracle Instant Client
We can also do the same with the Oracle Instant Client for Docker. Start by pulling the Oracle Instant Client Docker image:
[root@master-1 ~]# docker pull ghcr.io/oracle/oraclelinux8-instantclient:21
Trying to pull repository ghcr.io/oracle/oraclelinux8-instantclient ...
21: Pulling from ghcr.io/oracle/oraclelinux8-instantclient
1ca7c848b9e5: Pull complete
d92bd232d5b4: Pull complete
Digest: sha256:76dd41b84f5217a66eada78b421f12dec217b69578e4e4e2a40fcf006a7f40ec
Status: Downloaded newer image for ghcr.io/oracle/oraclelinux8-instantclient:21
Determine the service IP and NodePort details.
[root@master-1 ~]# kubectl get svc
NAME       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
pstg       ClusterIP   10.106.247.177   <none>        1521/TCP                        90m
pstg-ext   NodePort    10.105.235.39    <none>        5500:30275/TCP,1521:30745/TCP   90m
And connect using the Oracle Instant Client Docker image, here via the pstg-ext service IP on the default port 1521.
[root@master-1 ~]# docker run -ti --rm ghcr.io/oracle/oraclelinux8-instantclient:21 sqlplus system/Kube#2022@10.105.235.39/PSTG1

SQL*Plus: Release 21.0.0.0.0 - Production on Mon Dec 12 12:53:05 2022
Version 21.8.0.0.0

Copyright (c) 1982, 2022, Oracle. All rights reserved.

Last Successful login time: Mon Dec 12 2022 12:31:57 +00:00

Connected to:
Oracle Database 21c Enterprise Edition Release 21.0.0.0.0 - Production
Version 21.3.0.0.0

SQL> set linesize 200
SQL> select INSTANCE_NAME, HOST_NAME, VERSION_FULL, EDITION from v$instance;

INSTANCE_NAME    HOST_NAME                                                        VERSION_FULL      EDITION
---------------- ---------------------------------------------------------------- ----------------- -------
PSTG1            pstg-9vqzn                                                       21.3.0.0.0        EE
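Because pstg-ext is a NodePort service, the database is also reachable from outside the cluster using any node IP and the mapped port 30745; this is exactly the connect string the operator reported earlier, for example:

[root@master-1 ~]# docker run -ti --rm ghcr.io/oracle/oraclelinux8-instantclient:21 sqlplus system/Kube#2022@10.225.115.191:30745/PSTG1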
Summary
In this post I have shown how we can now use the Oracle Database Kubernetes Operator (OraOperator) to deploy an Oracle 21c single instance database on an on-premises Kubernetes cluster.
In my next post I will share how we can use Portworx CloudSnap to perform a storage snapshot and send it to an S3 bucket in OCI Object Storage.