Relocating an Oracle 18XE Database on a Kubernetes cluster

Background

If you follow my blog or read my last post on delivering an Oracle 18c Express Edition (XE) database in Kubernetes, you may already have an Oracle 18XE database up and running on a Kubernetes cluster using persistent storage. If not, you may want to read that post first.

In this blog I am going to show how we can relocate our Oracle 18XE database to a different node of a Kubernetes cluster using only the Kubernetes command-line tool, kubectl.

My Oracle 18XE database uses persistent storage delivered from a Pure Storage FlashArray in my lab, with the Pure Service Orchestrator (PSO) presenting volumes to my Kubernetes nodes via iSCSI.

If you want to read up on how I did that, check out my previous blog post How to run Oracle on Kubernetes with Persistent Storage.
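
As a quick check before we begin, we can confirm the PSO StorageClasses are registered with the cluster. The class names mentioned below are typical of a PSO install, but yours may differ:

$ kubectl get storageclass
# PSO normally registers pure-block (iSCSI/FC block volumes) and pure-file;
# my Oracle volume is provisioned from the block class.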

Getting Started

Let's start by listing the names and statuses of the Kubernetes nodes with kubectl get nodes

$ kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
z-re-uk8s01   Ready    <none>   10d   v1.18.2-41+b5cdb79a4060a3
z-re-uk8s02   Ready    <none>   10d   v1.18.2-41+b5cdb79a4060a3
z-re-uk8s03   Ready    <none>   10d   v1.18.2-41+b5cdb79a4060a3
z-re-uk8s04   Ready    <none>   10d   v1.18.2-41+b5cdb79a4060a3

We can see our running Kubernetes pods with kubectl get pods -o wide -n <namespace>. The -o wide option is required to show the node each pod is running on.

$ kubectl get pods -o wide -n oracle-namespace
NAME                        READY STATUS  RESTARTS AGE   IP         NODE        NOMINATED NODE   
oracle18xe-698648f6df-49ttr 1/1   Running 0        6m39s 10.1.68.48 z-re-uk8s03 <none>           
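
Before relocating the pod, it is also worth confirming which PersistentVolumeClaim it is bound to. A quick check, assuming the claim lives in the same namespace as the pod:

$ kubectl get pvc -n oracle-namespace
# The claim should show STATUS Bound, with the VOLUME column naming the
# PSO-provisioned PersistentVolume that will follow the pod between nodes.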

OK, let's make the node running our pod unschedulable using kubectl cordon <node name>

$ kubectl cordon z-re-uk8s03
node/z-re-uk8s03 cordoned

If we check the status of the nodes in the cluster, we can see the cordoned node's status has changed to Ready,SchedulingDisabled.

$ kubectl get nodes
NAME          STATUS                     ROLES    AGE   VERSION
z-re-uk8s01   Ready                      <none>   10d   v1.18.2-41+b5cdb79a4060a3
z-re-uk8s02   Ready                      <none>   10d   v1.18.2-41+b5cdb79a4060a3
z-re-uk8s03   Ready,SchedulingDisabled   <none>   10d   v1.18.2-41+b5cdb79a4060a3
z-re-uk8s04   Ready                      <none>   10d   v1.18.2-41+b5cdb79a4060a3

We can also see our pod is still running, but we know the node will no longer schedule any new pods.

$ kubectl get pods -o wide -n oracle-namespace
NAME                        READY STATUS  RESTARTS AGE   IP         NODE        NOMINATED NODE  
oracle18xe-698648f6df-49ttr 1/1   Running 0        6m39s 10.1.68.48 z-re-uk8s03 <none>               

If we delete the pod, the Deployment's ReplicaSet will recreate it, and because z-re-uk8s03 is cordoned the replacement will be scheduled on a different node.

$ kubectl delete pod/oracle18xe-698648f6df-49ttr -n oracle-namespace
pod "oracle18xe-698648f6df-49ttr" deleted

Let's find out where our pod has relocated to with kubectl get pods -o wide

$ kubectl get pods -n oracle-namespace -o wide
NAME                        READY STATUS  RESTARTS AGE   IP          NODE          NOMINATED NODE   
oracle18xe-698648f6df-7hdnw 1/1   Running 0        2m20s 10.1.73.154 z-re-uk8s01   <none>           

Before we move on, we can make the node schedulable again with kubectl uncordon

$ kubectl uncordon z-re-uk8s03
node/z-re-uk8s03 uncordoned

Oracle Database 18XE

From the above, we can see my Oracle 18XE pod has moved from z-re-uk8s03 to z-re-uk8s01.

If we connect to our container, we can see our PSO-provisioned persistent volume mounted.

$ kubectl exec -it oracle18xe-698648f6df-7hdnw -n oracle-namespace -- /bin/bash
(Screenshot: df -h output from inside the container, showing the PSO persistent volume mounted.)
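
The check captured above can be reproduced from inside the container. A minimal sketch, assuming the standard Oracle 18c XE image layout where the datafiles live under /opt/oracle/oradata:

$ df -h /opt/oracle/oradata
# The filesystem backing this mount is the iSCSI-attached PSO volume,
# confirming the same persistent storage has followed the pod to its new node.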

And from my laptop I can use sqlplus to connect to the database now running on z-re-uk8s01.
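
For reference, my connection looks something like the sketch below; the NodePort service, port number, and the default XEPDB1 pluggable database name are assumptions here and will vary with your environment:

$ sqlplus system/<password>@//z-re-uk8s01:<nodeport>/XEPDB1
# <password> and <nodeport> are placeholders for your own values; the data is
# unchanged because the datafiles moved with the PSO persistent volume.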
