Oracle Database 19c on Kubernetes with Portworx storage

Background

In this post I am going to share how you can obtain the official Oracle 19.3 Enterprise Edition (EE) Docker image and deliver an Oracle 19c database on Kubernetes 1.17.0 with Portworx 2.6 storage.

Kubernetes Cluster Info

Before we start, let's have a look at our Kubernetes environment.

$ kubectl cluster-info
Kubernetes master is running at https://192.168.101.90:6443
KubeDNS is running at https://192.168.101.90:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

 To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Get Kubernetes Version

Check the version of Kubernetes:

$ kubectl version --short | awk -Fv '/Server Version: / {print $3}'
 1.17.0

Get Kubernetes Nodes

Then list the Kubernetes nodes:

$ kubectl get nodes
NAME       STATUS   ROLES    AGE     VERSION
master-1   Ready    master   7m12s   v1.17.0
node-1-1   Ready    <none>   6m50s   v1.17.0
node-1-2   Ready    <none>   6m12s   v1.17.0
node-1-3   Ready    <none>   6m32s   v1.17.0

Label Kubernetes Nodes

I will start by labelling our Kubernetes nodes for Portworx. This is an optional step, but it can help with identification.

$ kubectl label nodes node-1-1 node-1-2 node-1-3 px/metadata-node=true
node/node-1-1 labeled
node/node-1-2 labeled
node/node-1-3 labeled

And check the labels with kubectl get nodes --show-labels

$ kubectl get nodes --show-labels
NAME       STATUS   ROLES    AGE   VERSION   LABELS
master-1   Ready    master   22m   v1.17.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master-1,kubernetes.io/os=linux,node-role.kubernetes.io/master=
node-1-1   Ready    <none>   22m   v1.17.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-1-1,kubernetes.io/os=linux,px/metadata-node=true
node-1-2   Ready    <none>   21m   v1.17.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-1-2,kubernetes.io/os=linux,px/metadata-node=true
node-1-3   Ready    <none>   21m   v1.17.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node-1-3,kubernetes.io/os=linux,px/metadata-node=true

Show Portworx pods

Portworx runs as a container, and we can see it running across our Kubernetes cluster.

$ kubectl get pods -n kube-system -l name=portworx -o wide
NAME           READY STATUS  RESTARTS AGE IP              NODE
portworx-7hbsh 1/1   Running 0        45m 192.168.101.103 node-1-3   
portworx-rvf55 1/1   Running 0        46m 192.168.101.101 node-1-1   
portworx-vs4n7 1/1   Running 0        45m 192.168.101.102 node-1-2   

Portworx

Portworx Version

We can check the version of Portworx in use with pxctl -v

[root@node-1-1 ~]# /opt/pwx/bin/pxctl -v
pxctl version 2.6.0.2-d505d8d (OCI)

Portworx Status

Now, let’s check the Portworx status with pxctl status

[root@node-1-1 ~]# pxctl status
 Status: PX is operational
 License: Trial (expires in 31 days)
 Node ID: 8b3a83da-7910-4a7c-957d-efbe36245cf7
 IP: 192.168.101.101 
   Local Storage Pool: 1 pool
 POOL IO_PRIORITY RAID_LEVEL USABLE USED STATUS ZONE REGION
 0 HIGH raid0 40 GiB 2.4 GiB Online default default
 Local Storage Devices: 1 device
 Device Path Media Type Size Last-Scan
 0:1 /dev/nvme1n1 STORAGE_MEDIUM_NVME 40 GiB 05 Nov 20 13:01 UTC
 * Internal kvdb on this node is sharing this storage device /dev/nvme1n1  to store its data.
 total - 40 GiB
 Cache Devices:
  * No cache devices
 Cluster Summary
 Cluster ID: px-deploy-1
 Cluster UUID: 9f961924-ca71-49e8-8f13-ab9235030500
 Scheduler: kubernetes
 Nodes: 3 node(s) with storage (3 online)
...
 Total Used     :  7.1 GiB
 Total Capacity :  120 GiB

Oracle 19c on Kubernetes

I have previously posted on how you can get hold of the official Oracle 19c Docker image from the Oracle Container Registry; you can find the details here.

OK, time to clone my Oracle on Kubernetes repo from GitHub.

$ git clone https://github.com/raekins/Oracle-on-Kubernetes
Cloning into 'Oracle-on-Kubernetes'...

Kubernetes Oracle Namespace

For this blog I create a new Kubernetes namespace called oracle-namespace.

We can create a new namespace with kubectl create namespace.

$ kubectl create namespace oracle-namespace
namespace/oracle-namespace created

We can see which namespaces are available with kubectl get namespace.

$ kubectl get namespace oracle-namespace
 NAME               STATUS   AGE
 oracle-namespace   Active   118s

To avoid having to specify our namespace with -n each time, I will use kubectl config set-context to set a default namespace of oracle-namespace.

$ kubectl config set-context --current --namespace=oracle-namespace
 Context "kubernetes-admin@kubernetes" modified.

Kubectl will now default to using the oracle-namespace namespace.

$ kubectl get pods
No resources found in oracle-namespace namespace.

Kubernetes Secret

For this post I will be using the official Oracle 19c Docker image, which is available from the Oracle Container Registry. However, before you can pull this image you will need to accept the licence terms and have a valid Oracle support account.

I have previously blogged on this process, so you can familiarise yourself with it here.

Sign in to the Oracle Container Registry:

$ docker login container-registry.oracle.com
 Username: <Oracle SSO>
 Password: 
 Login Succeeded

This creates the file ~/.docker/config.json:

$ cat ~/.docker/config.json
{
    "auths": {
        "container-registry.oracle.com": {
            "auth": "cm9...w=="
        }
    }
}

Create a Secret based on Docker credentials

$ kubectl create secret generic regcred --from-file=.dockerconfigjson=/root/.docker/config.json  --type=kubernetes.io/dockerconfigjson
secret/regcred created

Oracle Registry Credentials Test

Let’s test the Kubernetes secret just created with this simple test.yaml file.

apiVersion: v1
kind: Pod
metadata:
  name: 19c
spec:
  containers:
    - name: 19c
      image: container-registry.oracle.com/database/enterprise:latest
      imagePullPolicy: Always
      ports:
        - name: nginx
          containerPort: 8080
          protocol: TCP
  imagePullSecrets:
    - name: regcred

$ kubectl apply -f test.yaml 
pod/19c created

$ kubectl get pods -o wide
NAME  READY  STATUS             RESTARTS  AGE    IP      NODE 
19c   0/1    ContainerCreating  0         5m50s  <none>  node-1-3
$ kubectl describe pods/19c
Name:         19c
Namespace:    oracle-namespace
...
Events:
   Type    Reason     Age        From               Message
   ----    ------     ----       ----               -------
   Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned oracle-namespace/19c to node-1-3
   Normal  Pulling    8m59s      kubelet, node-1-3  Pulling image "container-registry.oracle.com/database/enterprise:latest"
   Normal  Pulled     2m46s      kubelet, node-1-3  Successfully pulled image "container-registry.oracle.com/database/enterprise:latest"
   Normal  Created    2m44s      kubelet, node-1-3  Created container 19c
   Normal  Started    2m44s      kubelet, node-1-3  Started container 19c

If we ssh on to the node the pod was running on and use docker images, we can see our Oracle 19c image has successfully been pulled from the Oracle Container Registry.

We can also see all the Portworx images.

[root@node-1-3 ~]# docker images
REPOSITORY                                        TAG      IMAGE ID        
quay.io/coreos/flannel                            v0.13.0  e708f4bb69e3 
docker.io/portworx/oci-monitor                    2.6.0.2  245fc7c703be        
docker.io/portworx/px-enterprise                  2.6.0.2  97ac86a26528        
docker.io/openstorage/stork                       2.5.0    f83a670a6ddb        
docker.io/portworx/oci-monitor                    2.6.0    40b667da2c2f        
docker.io/portworx/autopilot                      1.3.0    b037f2462cef        
docker.io/portworx/px-enterprise                  2.6.0    6fdf27d441ff        
container-registry.oracle.com/database/enterprise latest   2e375ab66980        
gcr.io/google_containers/kube-scheduler-amd64     v1.14.10 3fa2504a839b        
k8s.gcr.io/kube-proxy                             v1.17.0  7d54289267dc        
gcr.io/google_containers/kube-scheduler-amd64     v1.17.0  78c190f736b1        
k8s.gcr.io/kube-scheduler                         v1.17.0  8c190f736b1        
k8s.gcr.io/kube-apiserver                         v1.17.0  0cae8d5cc64c        
k8s.gcr.io/kube-controller-manager                v1.17.0  5eb3b7486872        
k8s.gcr.io/coredns                                1.6.5    70f311871ae1        
k8s.gcr.io/etcd                                   3.4.3-0  303ce5db0e90        
k8s.gcr.io/pause                                  3.1      da86e6ba6ca1        

Kubernetes ConfigMap

For my Kubernetes build I will be using a ConfigMap to pass variables to my Oracle 19.3 container; this makes it very easy to change the database name, passwords, etc.

$ kubectl create configmap oradb --from-env-file=oracle.properties -n oracle-namespace
configmap/oradb created

You can obtain a copy of the oracle.properties file from my GitHub site.
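The exact contents live in the repo, but as a sketch, an env-file using the standard variables supported by the official Oracle image might look like the following. The ORACLE_PDB and ORACLE_CHARACTERSET values are illustrative assumptions; the SID and password match the build output shown later in this post.

```properties
# oracle.properties -- consumed by kubectl create configmap --from-env-file
# ORCL and Kube#2020 match the build output later in this post;
# the PDB name and characterset are illustrative assumptions.
ORACLE_SID=ORCL
ORACLE_PDB=ORCLPDB1
ORACLE_PWD=Kube#2020
ORACLE_CHARACTERSET=AL32UTF8
```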

Kubernetes StorageClass

In this post, I will create a Portworx StorageClass that has a replication factor of 3, with the I/O profile set to "db" and priority set to "high".

This means that the storage will be optimized for low latency database workloads like Oracle and automatically placed on the highest performance storage available in the cluster.
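Putting those settings together, px-ora-sc.yaml looks like this (the provisioner and parameters match the kubectl describe output shown further down):

```yaml
apiVersion: storage.k8s.io/v1beta1
kind: StorageClass
metadata:
  name: px-ora-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"            # keep 3 replicas of each volume
  io_profile: "db"     # optimise for database I/O patterns
  priority_io: "high"  # place volumes on the highest-performance storage
```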

$ kubectl apply -f px-ora-sc.yaml 
storageclass.storage.k8s.io/px-ora-sc created

Let’s check the StorageClass details

$ kubectl describe storageclass/px-ora-sc
Name:            px-ora-sc
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1beta1","kind":"StorageClass","metadata":{"annotations":{},"name":"px-ora-sc"},"parameters":{"io_profile":"db","priority_io":"high","repl":"3"},"provisioner":"kubernetes.io/portworx-volume"}

Provisioner:           kubernetes.io/portworx-volume
Parameters:            io_profile=db,priority_io=high,repl=3
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

Kubernetes StatefulSet

We can create our Oracle 19.3 database Kubernetes StatefulSet with kubectl apply.

[root@master-1 ~]# kubectl apply -f 19c_statefulset_PX.yaml 
statefulset.apps/oracle19c created
service/oracle19c created
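For reference, here is an abridged sketch of 19c_statefulset_PX.yaml. The full manifest (including the 1Gi setup and startup volumes) is in the repo; the image, ConfigMap, secret and PVC names come from outputs elsewhere in this post, while the pod label and mount path are assumptions.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: oracle19c
spec:
  serviceName: oracle19c
  replicas: 1
  selector:
    matchLabels:
      app: oracle19c            # assumed label
  template:
    metadata:
      labels:
        app: oracle19c
    spec:
      containers:
        - name: oracle19c
          image: container-registry.oracle.com/database/enterprise:19.3.0.0
          envFrom:
            - configMapRef:
                name: oradb     # the ConfigMap created earlier
          ports:
            - containerPort: 1521   # Oracle listener
            - containerPort: 5500   # EM Express
          volumeMounts:
            - name: ora-data193
              mountPath: /opt/oracle/oradata   # assumed mount path
      imagePullSecrets:
        - name: regcred
  volumeClaimTemplates:
    - metadata:
        name: ora-data193
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: px-ora-sc
        resources:
          requests:
            storage: 10Gi
```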

Use kubectl get statefulsets to check the status of the StatefulSet.

$ kubectl get statefulsets -o wide
NAME      READY AGE CONTAINERS IMAGES
oracle19c 1/1   51s oracle19c  container-registry.oracle.com/database/enterprise:19.3.0.0

Use kubectl get pods to get pod name and its status.

$ kubectl get pods -n oracle-namespace
NAME          READY   STATUS    RESTARTS   AGE
oracle19c-0   1/1     Running   0          3m57s

Now that we know the pod name, we can follow the log output during the build.

$ kubectl logs pods/oracle19c-0 -n oracle-namespace --follow
[2020:11:06 13:43:10]: Acquiring lock on /opt/oracle/oradata/.ORCL.create_lck
[2020:11:06 13:43:10]: Lock acquired on /opt/oracle/oradata/.ORCL.create_lck
[2020:11:06 13:43:10]: Holding on to the lock using /tmp/.ORCL.create_lck
ORACLE EDITION: ENTERPRISE
ORACLE PASSWORD FOR SYS, SYSTEM AND PDBADMIN: Kube#2020
LSNRCTL for Linux: Version 19.0.0.0.0 - Production on 06-NOV-2020 13:43:10
Copyright (c) 1991, 2019, Oracle.  All rights reserved.
Starting /opt/oracle/product/19c/dbhome_1/bin/tnslsnr: please wait...
TNSLSNR for Linux: Version 19.0.0.0.0 - Production
System parameter file is /opt/oracle/product/19c/dbhome_1/network/admin/listener.ora
Log messages written to /opt/oracle/diag/tnslsnr/oracle19c-0/listener/alert/log.xml
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
Listening on: (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521)))
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 19.0.0.0.0 - Production
Start Date                06-NOV-2020 13:43:10
Uptime                    0 days 0 hr. 0 min. 0 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /opt/oracle/product/19c/dbhome_1/network/admin/listener.ora
Listener Log File         /opt/oracle/diag/tnslsnr/oracle19c-0/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=0.0.0.0)(PORT=1521))) The listener supports no services
The command completed successfully
Prepare for db operation
8% complete
Copying database files
31% complete
Creating and starting Oracle instance
32% complete
36% complete
..
SQL> 
Pluggable database altered.
SQL> 
PL/SQL procedure successfully completed.
SQL> Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
The Oracle base remains unchanged with value /opt/oracle
#########################
DATABASE IS READY TO USE!
#########################
The following output is now a tail of the alert.log:
...

Kubernetes Persistent Storage

OK, now we have our Oracle 19c database up and running, let's check out the Kubernetes Persistent Volumes (PVs), Persistent Volume Claims (PVCs) and Portworx storage.

Persistent Volumes

$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                         STORAGECLASS   REASON   AGE
pvc-41257b91-f51e-47a4-9ef2-6ec412959e21   1Gi        RWO            Delete           Bound    oracle-namespace/ora-startup193-oracle19c-0   px-ora-sc               22h
pvc-6045469c-75da-463f-9ba7-a2244534ba73   10Gi       RWO            Delete           Bound    oracle-namespace/ora-data193-oracle19c-0      px-ora-sc               22h
pvc-7591234a-73fc-46ee-aeaf-be3d05a39145   1Gi        RWO            Delete           Bound    oracle-namespace/ora-setup193-oracle19c-0     px-ora-sc               22h

Persistent Volume Claims

$ kubectl get pvc
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ora-data193-oracle19c-0      Bound    pvc-6045469c-75da-463f-9ba7-a2244534ba73   10Gi       RWO            px-ora-sc      22h
ora-setup193-oracle19c-0     Bound    pvc-7591234a-73fc-46ee-aeaf-be3d05a39145   1Gi        RWO            px-ora-sc      22h
ora-startup193-oracle19c-0   Bound    pvc-41257b91-f51e-47a4-9ef2-6ec412959e21   1Gi        RWO            px-ora-sc      22h

Portworx Volumes

[root@node-1-1 ~]# pxctl volume list
ID                   NAME                                       SIZE    HA  SHARED  ENCRYPTED  PROXY-VOLUME  IO_PRIORITY  STATUS                             SNAP-ENABLED
1039351034861076190  pvc-41257b91-f51e-47a4-9ef2-6ec412959e21   1 GiB   3   no      no         no            HIGH         up - attached on 192.168.101.101   no
459237757202127604   pvc-6045469c-75da-463f-9ba7-a2244534ba73   10 GiB  3   no      no         no            HIGH         up - attached on 192.168.101.101   no
380723862930309205   pvc-7591234a-73fc-46ee-aeaf-be3d05a39145   1 GiB   3   no      no         no            HIGH         up - attached on 192.168.101.101   no

Connecting Remotely

As we are connecting remotely, we now need to determine our external ports.

$ kubectl get svc -n oracle-namespace
NAME        TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)
oracle19c   NodePort   10.96.105.127   <none>        1521:30785/TCP,5500:30142/TCP

From the above we can see our Oracle listener (1521) is available externally on Port 30785 and Enterprise Manager (5500) on 30142.
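The NodePort Service created alongside the StatefulSet would look roughly like this. The selector label is an assumption, and nodePort values are normally assigned by Kubernetes from its NodePort range; they are pinned here to the ports above purely for illustration:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: oracle19c
spec:
  type: NodePort
  selector:
    app: oracle19c        # assumed pod label
  ports:
    - name: listener
      port: 1521
      nodePort: 30785     # external listener port
    - name: em-express
      port: 5500
      nodePort: 30142     # external EM Express port
```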

So using the above details I can connect from my laptop using sqlplus and Oracle SQL Developer.

rekins@rekins--MacBookPro15 ~ % sqlplus system/Kube#2020@//18.133.255.247:30785/ORCL @database_details

SQL*Plus: Release 19.0.0.0.0 - Production on Fri Nov 6 15:49:31 2020
Version 19.3.0.0.0
 
Copyright (c) 1982, 2019, Oracle.  All rights reserved.
 
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
 
Current Time   : 06/11/20 15:49:32 

Database Details
===============================================
Hostname       : oracle19c-0
Database Name  : ORCL
Date Created   : 06/11/20 13:45:16
Date Started   : 06/11/20 13:59:23
Resetlogs Date : 06/11/20 13:45:18
DB Status      : OPEN
Space Allocated: 1.72 GB
Space Used     : 1.56 GB
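The database_details.sql script is available in the repo; as a hedged sketch, queries along these lines would produce a report like the above (the actual script and its formatting may differ):

```sql
-- Instance and database details from the standard v$ views
SELECT host_name, startup_time FROM v$instance;
SELECT name, created, resetlogs_time, open_mode FROM v$database;

-- Approximate space allocated vs used
SELECT ROUND(SUM(bytes)/1024/1024/1024, 2) AS allocated_gb FROM dba_data_files;
SELECT ROUND(SUM(bytes)/1024/1024/1024, 2) AS used_gb FROM dba_segments;
```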

Or, if you prefer, from Oracle SQL Developer:

(Screenshot: Oracle SQL Developer – New Database Connection)
(Screenshot: Oracle SQL Developer – database_details.sql)

Summary

In this post I have covered a lot, so here's a quick recap:

  • Obtained the official Oracle 19c EE Docker image
  • Created a StatefulSet for our database
  • Checked Kubernetes and Portworx storage
  • Connected to our database remotely

Over the next few months I plan to create more Oracle, Kubernetes and Portworx content, so watch this space.
