Back in April 2017 I blogged on how to use the then official Oracle 12.1 Docker image to run an Oracle 12c database on Docker; if you want to revisit that post, you can find it here.
However, more recently I shared how you could use Minikube with the latest official Oracle 12.2 Docker image to run an Oracle database within a container on your laptop.
This time I am going to use the above findings to deploy the same Oracle 12.2 image on a 4-node MicroK8s Kubernetes cluster, with persistent storage delivered from my lab Pure Storage FlashArray and the Pure Service Orchestrator (PSO).
Let’s start by checking the status of my Kubernetes cluster.
Kubernetes Nodes
$ kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
z-re-uk8s01   Ready    <none>   2d21h   v1.18.2-41+b5cdb79a4060a3
z-re-uk8s02   Ready    <none>   2d21h   v1.18.2-41+b5cdb79a4060a3
z-re-uk8s03   Ready    <none>   2d21h   v1.18.2-41+b5cdb79a4060a3
z-re-uk8s04   Ready    <none>   2d21h   v1.18.2-41+b5cdb79a4060a3
From the above we can see that I have 4 nodes in my Kubernetes cluster and they are all ready.
Kubernetes Namespaces
Now, let’s use kubectl get namespace to list existing namespaces.
$ kubectl get namespaces
NAME                 STATUS   AGE
container-registry   Active   2d22h
default              Active   2d22h
kube-node-lease      Active   2d22h
kube-public          Active   2d22h
kube-system          Active   2d22h
Kubernetes Pods
And with kubectl get pods --all-namespaces we can list all running pods.
$ kubectl get pods --all-namespaces
NAMESPACE            NAME                                              READY   STATUS    RESTARTS   AGE
container-registry   registry-7cf58dcdcc-krvzw                         1/1     Running   2          2d20h
kube-system          coredns-588fd544bf-gxgqg                          1/1     Running   2          2d20h
kube-system          dashboard-metrics-scraper-db65b9c6f-kfshl         1/1     Running   2          2d20h
kube-system          heapster-v1.5.2-58fdbb6f4d-r2vbn                  4/4     Running   8          2d20h
kube-system          hostpath-provisioner-75fdc8fccd-9lm26             1/1     Running   2          2d20h
kube-system          kubernetes-dashboard-67765b55f5-qdcpc             1/1     Running   2          2d20h
kube-system          monitoring-influxdb-grafana-v4-6dc675bf8c-w4cqz   2/2     Running   4          2d20h
Kubernetes Storage Classes
Before we install the Pure Storage PSO Container Storage Interface (CSI) driver, let's see what storage classes are already present.
$ kubectl get storageclass
NAME                          PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
microk8s-hostpath (default)   microk8s.io/hostpath   Delete          Immediate           false                  2d21h
Pure Service Orchestrator (PSO)
We can add the Pure Service Orchestrator (PSO) Helm chart repository with:
$ helm repo add pure https://purestorage.github.io/helm-charts
"pure" has been added to your repositories
Now update the Helm repositories:
$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "pure" chart repository
Update Complete. ⎈ Happy Helming!⎈
Confirm the PSO chart is now available using helm search repo:
$ helm search repo pure-csi
NAME            CHART VERSION   APP VERSION   DESCRIPTION
pure/pure-csi   1.1.1           1.1.1         A Helm chart for Pure Service Orchestrator CSI
It is strongly advised to install PSO into its own Kubernetes namespace; we can create one using the kubectl create namespace command.
$ kubectl create namespace pso-namespace
namespace/pso-namespace created
You will now need to create a PSO values.yaml file; the easiest way to get started is to use the default file, which you can copy from here.
If you want to read up on all the configuration options, follow the link here to the pure-csi GitHub repo.
PSO Installation
Before we perform our Helm install, let's try a dry run to make sure our values.yaml file is OK.
$ helm install pure-storage-driver pure/pure-csi --namespace pso-namespace -f values.yaml --dry-run --debug
Great, now let's run it again, but this time for real.
$ helm install pure-storage-driver pure/pure-csi --namespace pso-namespace -f values.yaml
NAME: pure-storage-driver
LAST DEPLOYED: Tue Apr 28 16:03:02 2020
NAMESPACE: pso-namespace
STATUS: deployed
REVISION: 1
TEST SUITE: None
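We can also double-check the release itself with helm list, which should show pure-storage-driver with a deployed status.
$ helm list --namespace pso-namespace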
Kubernetes Pods
If we repeat the kubectl get pods command, this time we can see that we have some new pods running for the Pure Service Orchestrator (PSO).
$ kubectl get pods --all-namespaces
NAMESPACE            NAME                                              READY   STATUS    RESTARTS   AGE
container-registry   registry-7cf58dcdcc-krvzw                         1/1     Running   2          3d21h
kube-system          coredns-588fd544bf-gxgqg                          1/1     Running   2          3d21h
kube-system          dashboard-metrics-scraper-db65b9c6f-kfshl         1/1     Running   2          3d21h
kube-system          heapster-v1.5.2-58fdbb6f4d-r2vbn                  4/4     Running   8          3d21h
kube-system          hostpath-provisioner-75fdc8fccd-9lm26             1/1     Running   2          3d21h
kube-system          kubernetes-dashboard-67765b55f5-qdcpc             1/1     Running   2          3d21h
kube-system          monitoring-influxdb-grafana-v4-6dc675bf8c-w4cqz   2/2     Running   4          3d21h
pso-namespace        pure-csi-4tqwn                                    3/3     Running   0          19h
pso-namespace        pure-csi-5mgz9                                    3/3     Running   0          19h
pso-namespace        pure-csi-g6h57                                    3/3     Running   0          19h
pso-namespace        pure-csi-xh8qh                                    3/3     Running   0          19h
pso-namespace        pure-provisioner-0                                3/3     Running   0          19h
Kubernetes Storage Classes
We can also now see that some additional storage classes have been created:
$ kubectl get storageclass
NAME                          PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
microk8s-hostpath (default)   microk8s.io/hostpath   Delete          Immediate           false                  3d1h
pure                          pure-csi               Delete          Immediate           false                  3m40s
pure-block                    pure-csi               Delete          Immediate           false                  3m40s
pure-file                     pure-csi               Delete          Immediate           false                  3m40s
Update Array details
Before we can create our Persistent Volume, we need to update the previously downloaded values.yaml file with our Pure Storage FlashArray and FlashBlade details.
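For illustration, the arrays section of values.yaml ends up looking something like the sketch below; the management endpoints and API tokens shown here are placeholders rather than my real lab values.
arrays:
  FlashArrays:
    - MgmtEndPoint: "<FlashArray management IP>"
      APIToken: "<FlashArray API token>"
  FlashBlades:
    - MgmtEndPoint: "<FlashBlade management IP>"
      APIToken: "<FlashBlade API token>"
      NFSEndPoint: "<FlashBlade data VIP>"
With the array details in place, the change is applied to the running PSO deployment with helm upgrade.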
$ helm upgrade pure-storage-driver pure/pure-csi --namespace pso-namespace -f values.yaml
Release "pure-storage-driver" has been upgraded. Happy Helming!
NAME: pure-storage-driver
LAST DEPLOYED: Wed Apr 29 12:18:15 2020
NAMESPACE: pso-namespace
STATUS: deployed
REVISION: 2
TEST SUITE: None
Getting Oracle 12c ready
I have previously blogged on running Oracle 12c in Minikube and will be using some of the same commands and yaml files here. Below is my namespace.yaml file:
apiVersion: v1
kind: Namespace
metadata:
  name: oracle-namespace
We can create our Oracle Kubernetes namespace with:
$ kubectl apply -f namespace.yaml
We should now be able to see our new namespace
$ kubectl get namespace oracle-namespace
NAME               STATUS   AGE
oracle-namespace   Active   72m
Create Persistent Volume(s)
We are now ready to create a Persistent Volume using PSO, using the sample yaml file below, pvc.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ora-vol1
  namespace: oracle-namespace
  labels:
    app: oracle12c
spec:
  storageClassName: pure-block
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
We can now use the above yaml file to create a new volume.
$ kubectl create -f pvc.yaml
persistentvolumeclaim/ora-vol1 created
$ kubectl get pvc -n oracle-namespace
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ora-vol1   Bound    pvc-a45e3da4-6df6-428d-83af-231a38f6cc0f   20Gi       RWO            pure-block     47s
We can also describe the Persistent Volume Claim
$ kubectl describe pvc/ora-vol1 -n oracle-namespace
Name:          ora-vol1
Namespace:     oracle-namespace
StorageClass:  pure-block
Status:        Bound
Volume:        pvc-d10c4025-860c-410d-925f-522a6d78f931
Labels:        app=oracle12c
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: pure-csi
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      20Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Mounted By:    database-9bcbc7d44-4bgkp
Events:        <none>
We can also see the volume created by PSO by visiting the Pure FlashArray.
Kubernetes Secret
I will be using the same secret I created in my previous Kubernetes Blog.
$ kubectl create secret docker-registry oracle \
  --docker-server=docker.io \
  --docker-username=<docker username> \
  --docker-password=<docker password> \
  --docker-email=<docker email> \
  -n oracle-namespace
secret/oracle created
And let's check it:
$ kubectl get secrets -n oracle-namespace
NAME                  TYPE                                  DATA   AGE
default-token-skwxm   kubernetes.io/service-account-token   3      103m
oracle                kubernetes.io/dockerconfigjson        1      48m
ConfigMap
For my Kubernetes build I will be using a ConfigMap to pass variables to my Oracle 12c deployment; below is my oracle.properties file.
DB_SID=PSTG
DB_PDB=PSTGPDB1
DB_PASSWD=Kube#2020
DB_DOMAIN=localdomain
DB_BUNDLE=basic
DB_MEMORY=8g
Create a ConfigMap with kubectl create configmap:
$ kubectl create configmap oradb --from-env-file=oracle.properties -n oracle-namespace
configmap/oradb created
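If you want to confirm that the key/value pairs have been picked up from the properties file, the ConfigMap can be inspected with kubectl describe.
$ kubectl describe configmap oradb -n oracle-namespace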
Starting Oracle 12c Database
If everything above has gone to plan, we should be good to create our Oracle 12c database Kubernetes pod with kubectl apply.
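For reference, a minimal database12c.yaml along these lines could look something like the sketch below: a Deployment that pulls the Oracle 12.2 image using the oracle image pull secret, injects the oradb ConfigMap with envFrom, and mounts the ora-vol1 PVC at /ORCL, plus a Service for the listener. The image name, port and resource names here are illustrative assumptions rather than my exact file, so adjust them for your own environment.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: database              # matches the pod names and scale commands later in this post
  labels:
    app: oracle12c
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oracle12c
  template:
    metadata:
      labels:
        app: oracle12c
    spec:
      imagePullSecrets:
        - name: oracle        # Docker registry secret created earlier
      containers:
        - name: oracle12c
          image: store/oracle/database-enterprise:12.2.0.1   # assumed Oracle 12.2 image
          ports:
            - containerPort: 1521                            # Oracle listener
          envFrom:
            - configMapRef:
                name: oradb                                  # DB_SID, DB_PDB, DB_PASSWD, etc.
          volumeMounts:
            - name: ora-data
              mountPath: /ORCL                               # Oracle image data volume
      volumes:
        - name: ora-data
          persistentVolumeClaim:
            claimName: ora-vol1                              # PVC provisioned by PSO above
---
apiVersion: v1
kind: Service
metadata:
  name: oracle12c
  labels:
    app: oracle12c
spec:
  selector:
    app: oracle12c
  type: NodePort
  ports:
    - port: 1521
      targetPort: 1521
Applying a file like this with kubectl apply creates the Deployment and Service shown below.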
$ kubectl apply -f database12c.yaml -n oracle-namespace
deployment.apps/oracle12c created
service/oracle12c created
We should now have a running Oracle 12c database within a Pod using persistent storage provided by the PSO.
Oracle 12c Database
From the Kubernetes dashboard I can view the database log output and see that the Oracle 12.2 database has been renamed to the value I provided in my ConfigMap, and that the hostname has been set to the Kubernetes pod name.
$ kubectl get pods -n oracle-namespace
NAME                       READY   STATUS    RESTARTS   AGE
database-9bcbc7d44-4bgkp   1/1     Running   0          18m
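The same startup log can also be followed from the command line, for example using the pod name above:
$ kubectl logs -f database-9bcbc7d44-4bgkp -n oracle-namespace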
We can also connect to our container with kubectl exec using our pod name:
$ kubectl exec -it database-9bcbc7d44-4bgkp /bin/bash -n oracle-namespace
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
From our shell we can see the FlashArray volume mounted at /ORCL
[oracle@database-9bcbc7d44-4bgkp /]$ df -h
Filesystem Size Used Avail Use% Mounted on
overlay 99G 15G 80G 16% /
tmpfs 64M 0 64M 0% /dev
tmpfs 7.9G 0 7.9G 0% /sys/fs/cgroup
/dev/mapper/3624a9370513519106e354b37007f68a6 20G 4.0G 17G 20% /ORCL
tmpfs 7.9G 0 7.9G 0% /dev/shm
/dev/sda2 99G 15G 80G 16% /etc/hosts
tmpfs 7.9G 12K 7.9G 1% /run/secrets/kubernetes.io/serviceaccount
tmpfs 7.9G 0 7.9G 0% /proc/acpi
tmpfs 7.9G 0 7.9G 0% /proc/scsi
tmpfs 7.9G 0 7.9G 0% /sys/firmware
The Linux device name shown is made up of the vendor ID plus the lowercase volume serial number, e.g. 3624a9370 + 513519106e354b37007f68a6, which can be matched against the volume's serial number on the FlashArray.
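As a quick illustration of that split, the serial portion can be peeled off the device name in the shell (plain string manipulation, nothing Pure-specific):
$ echo 3624a9370513519106e354b37007f68a6 | cut -c10-
513519106e354b37007f68a6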
Database Stop / Start
An easy way to stop and start our database pod is by using kubectl scale to change the replicas count.
Shutdown
$ kubectl scale -n oracle-namespace deployment database --replicas=0
Startup
$ kubectl scale -n oracle-namespace deployment database --replicas=1
In a future blog I plan to return to the world of Kubernetes and try the same with the Oracle 18c, Oracle 18c Express Edition (XE) and Oracle 19c Docker images.