Introduction
If you follow my blog or have read my post Oracle Database 19c on Kubernetes with Portworx storage, you may have an Oracle database running on Kubernetes and be wondering how to protect it.
In this post I will show you how we can use Portworx Volume Group Snapshots to complement other backup strategies.
Getting Started
Let’s start by getting the name of our Oracle 19c pod and the node it’s running on.
# kubectl get pods -o wide -n oracle-namespace
NAME          READY   STATUS    RESTARTS   AGE    IP            NODE       NOMINATED NODE
oracle19c-0   1/1     Running   0          2m5s   10.244.6.12   node-1-4   <none>
Next, describe the pod to confirm the PersistentVolumes and mount points.
# kubectl describe pods/oracle19c-0 -n oracle-namespace
Name:         oracle19c-0
Namespace:    oracle-namespace
...
Mounts:
  /dev/shm from dshm (rw)
  /opt/oracle/oradata from ora-data193 (rw)
  /opt/oracle/scripts/setup from ora-setup193 (rw)
  /opt/oracle/scripts/startup from ora-startup193 (rw)
  /var/run/secrets/kubernetes.io/serviceaccount from default-token-j8xsk (ro)
...
And finally, check the Service (svc) port mapped to the Oracle 19c listener (port 1521).
# kubectl get svc -n oracle-namespace
NAME        TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
oracle19c   NodePort   10.96.95.212   <none>        1521:30294/TCP,5500:30780/TCP   5m41s
Now that we have these details, let's test remote connectivity from a server with the Oracle client installed using SQL*Plus, and get the datafile directory for our pluggable database.
$ sqlplus system/Kube#2020@//10.225.115.120:30294/PSTGPDB1

SQL*Plus: Release 19.0.0.0.0 - Production on Mon Feb 8 13:42:31 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Last Successful login time: Thu Feb 04 2021 13:44:17 +00:00

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> set linesize 200
SQL> select FILE_NAME from DBA_DATA_FILES;

FILE_NAME
-------------------------------------------------
/opt/oracle/oradata/PSTG/PSTGPDB1/system01.dbf
/opt/oracle/oradata/PSTG/PSTGPDB1/sysaux01.dbf
/opt/oracle/oradata/PSTG/PSTGPDB1/undotbs01.dbf
/opt/oracle/oradata/PSTG/PSTGPDB1/users01.dbf
Persistent Storage
Before we create our test data set, let’s check what space we have allocated to us.
# kubectl get pvc -n oracle-namespace
NAME STATUS VOLUME CAPACITY ACCESS STORAGECLASS AGE
ora-data193-oracle19c-0 Bound pvc-6b710899-96e6-4b96-9a6b-1640f396fcd9 200Gi RWO px-ora-sc 3m10s
ora-setup193-oracle19c-0 Bound pvc-e8562900-e2cf-4c21-9d9c-17fc1400bcf5 1Gi RWO px-ora-sc 3m10s
ora-startup193-oracle19c-0 Bound pvc-bb0df994-c368-4d4f-8ace-4185f0244030 1Gi RWO px-ora-sc 3m10s
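The px-ora-sc StorageClass manifest itself is not shown in this post. A minimal sketch consistent with the replication factor (HA 3) and IO_PRIORITY (HIGH) visible in the pxctl output might look like the following; the exact parameter values here are assumptions for illustration, not the actual manifest used:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-ora-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"            # matches the HA column (3) in the pxctl volume list output
  priority_io: "high"  # matches IO_PRIORITY HIGH
  io_profile: "db"     # assumption: a database-tuned Portworx IO profile
```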
We can also see our persistent volumes using the Portworx pxctl volume list -v (volume) command.
# pxctl volume list -v
ID                   NAME                                       SIZE     HA  SHARED  ENCRYPTED  IO_PRIORITY  STATUS                            SNAP-ENABLED
804859194093411153   pvc-6b710899-96e6-4b96-9a6b-1640f396fcd9   200 GiB  3   no      no         HIGH         up - attached on 10.225.115.119   no
290544739439940153   pvc-bb0df994-c368-4d4f-8ace-4185f0244030   1 GiB    3   no      no         HIGH         up - attached on 10.225.115.119   no
777961521511897576   pvc-e8562900-e2cf-4c21-9d9c-17fc1400bcf5   1 GiB    3   no      no         HIGH         up - attached on 10.225.115.119   no
Create SOE Tablespace
Connect again with SQL*Plus and pre-create a tablespace for the Sales Order Entry schema.
$ sqlplus system/Kube#2020@//10.225.115.120:30294/PSTGPDB1
...
SQL> create BIGFILE tablespace soe datafile '/opt/oracle/oradata/PSTG/PSTGPDB1/soe.dbf' size 40G;

Tablespace created.
If you want to confirm the space used, shell into the Oracle 19c pod and check usage within the container.
# kubectl exec -it oracle19c-0 -n oracle-namespace -- /bin/bash
[oracle@oracle19c-0 PSTGPDB1]$ df -ht ext4
Filesystem                      Size  Used  Avail  Use%  Mounted on
/dev/pxd/pxd804859194093411153  197G   46G   142G   25%  /opt/oracle/oradata
/dev/pxd/pxd290544739439940153  976M  1.3M   908M    1%  /opt/oracle/scripts/startup
/dev/pxd/pxd777961521511897576  976M  1.3M   908M    1%  /opt/oracle/scripts/setup
We can also see this from Portworx, using the pxctl volume inspect command to get the Bytes used per volume.
# pxctl volume inspect pvc-6b710899-96e6-4b96-9a6b-1640f396fcd9
Volume      :  804859194093411153
Name        :  pvc-6b710899-96e6-4b96-9a6b-1640f396fcd9
Size        :  200 GiB
..
Bytes used  :  46 GiB
..
Load Test Data
Using the details above, we can create some test data using the SwingBench Sales Order Entry (soe) wizard.
$ ./oewizard -allindexes -cl -create -cs //10.225.115.120:30294/PSTGPDB1 -dba system -dbap Kube#2020 -ts soe -part -p soe -scale 2 -tc 8 -u soe -v

SwingBench Wizard
Author  :        Dominic Giles
Version :        2.6.0.1076

Running in Lights Out Mode using config file : ../wizardconfigs/oewizard.xml
Connecting to : jdbc:oracle:thin:@//10.225.115.120:30294/PSTGPDB1
Connected
Running script ../sql/soedgcreatetablespace.sql
...
Great, now let's see how many Order Items we have in the soe schema.
$ sqlplus soe/soe@//10.225.115.120:30294/PSTGPDB1
SQL*Plus: Release 19.0.0.0.0 - Production on Tue Feb 9 16:02:33 2021
Version 19.3.0.0.0
Copyright (c) 1982, 2019, Oracle. All rights reserved.
Last Successful login time: Tue Feb 09 2021 15:56:28 +00:00
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
SQL> select count(*) from ORDER_ITEMS;
COUNT(*)
----------
8570869
Create Group Volume Snapshot
Now we have created our Sales Order Entry schema, let’s take a group level snapshot as a baseline recovery point.
I have labelled all my Oracle 19c persistent volume claims with app=database, so we can use this label to easily identify them and perform group snapshots.
# kubectl get pvc -l app=database -n oracle-namespace
NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS
ora-data193-oracle19c-0      Bound    pvc-6b710899-96e6-4b96-9a6b-1640f396fcd9   200Gi      RWO            px-ora-sc
ora-setup193-oracle19c-0     Bound    pvc-e8562900-e2cf-4c21-9d9c-17fc1400bcf5   1Gi        RWO            px-ora-sc
ora-startup193-oracle19c-0   Bound    pvc-bb0df994-c368-4d4f-8ace-4185f0244030   1Gi        RWO            px-ora-sc
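For reference, one way the app=database label could have been applied is in the StatefulSet's volumeClaimTemplates, so every claim the StatefulSet creates carries the label automatically. This is a hypothetical sketch, not the actual manifest from my deployment:

```yaml
volumeClaimTemplates:
  - metadata:
      name: ora-data193
      labels:
        app: database        # matched by the GroupVolumeSnapshot pvcSelector
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: px-ora-sc
      resources:
        requests:
          storage: 200Gi
```

Alternatively, an existing claim can be labelled in place with kubectl label pvc ora-data193-oracle19c-0 app=database -n oracle-namespace.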
I can use STORK (STorage Orchestrator Runtime for Kubernetes) with a pvcSelector matching app=database to take a group-level snapshot.
apiVersion: stork.libopenstorage.org/v1alpha1
kind: GroupVolumeSnapshot
metadata:
  name: oracle-soe-snap
spec:
  pvcSelector:
    matchLabels:
      app: database
  restoreNamespaces:
    - default
Now let’s apply it to create our volume group snapshot.
# kubectl apply -f px-oracle-soe-snap.yaml
groupvolumesnapshot.stork.libopenstorage.org/oracle-soe-snap created
Check group snapshot creation with kubectl get groupvolumesnapshot <name>.
# kubectl get groupvolumesnapshot oracle-soe-snap
NAME AGE
oracle-soe-snap 69s
While the group snapshot is in progress, the Status will show as InProgress. Once complete, we should see a Stage of Final and a Status of Successful.
# kubectl describe groupvolumesnapshot oracle-soe-snap
Name: oracle-soe-snap
Namespace: oracle-namespace
...
Status:
Num Retries: 0
Stage: Final
Status: Successful
Volume Snapshots:
...
And the three snapshotted volumes can be seen using the Portworx pxctl volume list -s (snapshot) command.
# pxctl volume list -s
ID                    NAME                                                               SIZE     HA  SHARED  ENCRYPTED  IO_PRIORITY  STATUS          SNAP-ENABLED
1120927399817125922   group_snap_5_1612951202_pvc-6b710899-96e6-4b96-9a6b-1640f396fcd9   200 GiB  3   no      no         HIGH         up - detached   no
309342640791497687    group_snap_5_1612951202_pvc-bb0df994-c368-4d4f-8ace-4185f0244030   1 GiB    3   no      no         HIGH         up - detached   no
131778993473247919    group_snap_5_1612951202_pvc-e8562900-e2cf-4c21-9d9c-17fc1400bcf5   1 GiB    3   no      no         HIGH         up - detached   no
Simulate Disaster
So, let’s simulate a disaster by connecting to our PDB and dropping a table.
$ sqlplus soe/soe@//10.225.115.120:30294/PSTGPDB1
...
SQL> drop table ORDER_ITEMS;
Table dropped.
SQL> commit;
Commit complete.
SQL> select count(*) from ORDER_ITEMS;
select count(*) from ORDER_ITEMS
*
ERROR at line 1:
ORA-00942: table or view does not exist
Whoops, we have dropped the wrong table. Let's try to restore from our snapshot.
Shutdown Oracle 19c database
Shut down the Oracle database using kubectl scale with replicas set to 0.
# kubectl scale -n oracle-namespace statefulset oracle19c --replicas=0
statefulset.apps/oracle19c scaled
We can watch the database shut down using kubectl logs.
# kubectl logs pod/oracle19c-0 -n oracle-namespace
Stopping container.
SIGTERM received, shutting down database!

SQL*Plus: Release 19.0.0.0.0 - Production on Wed Feb 10 10:14:44 2021
Version 19.3.0.0.0

Copyright (c) 1982, 2019, Oracle.  All rights reserved.

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

SQL> 2021-02-10T10:14:44.287410+00:00
Shutting down ORACLE instance (immediate) (OS id: 1873)
Confirm the Oracle 19c database container is down
# kubectl get pods -n oracle-namespace
No resources found in oracle-namespace namespace.
Restore database volumes
I will now attempt to restore the database volumes from the Group Volume snapshots.
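The contents of px-oracle-soe-restore.yaml are not shown above; based on the Stork VolumeSnapshotRestore fields visible in the kubectl describe output further down, a minimal sketch might look like this:

```yaml
apiVersion: stork.libopenstorage.org/v1alpha1
kind: VolumeSnapshotRestore
metadata:
  name: oracle-soe-snap-restore
  namespace: default
spec:
  groupSnapshot: true                # restore from a GroupVolumeSnapshot
  sourceName: oracle-soe-snap        # the group snapshot taken earlier
  sourceNamespace: oracle-namespace  # namespace the snapshot was taken in
```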
# kubectl apply -f px-oracle-soe-restore.yaml
volumesnapshotrestore.stork.libopenstorage.org/oracle-soe-snap-restore created
We can use the storkctl command to see the status of the restore process.
# storkctl get volumesnapshotrestore
NAME                      SOURCE-SNAPSHOT   SOURCE-SNAPSHOT-NAMESPACE   STATUS       VOLUMES   CREATED
oracle-soe-snap-restore   oracle-soe-snap   oracle-namespace            Successful   3         10 Feb 21 10:34 UTC
We can also use kubectl describe to retrieve more detailed information about the status of the restore process.
# kubectl describe volumesnapshotrestore oracle-soe-snap-restore -n default
Name:         oracle-soe-snap-restore
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"stork.libopenstorage.org/v1alpha1","kind":"VolumeSnapshotRestore","metadata":{"annotations":{},"name":"oracle-soe-snap-rest...
API Version:  stork.libopenstorage.org/v1alpha1
Kind:         VolumeSnapshotRestore
Metadata:
  Creation Timestamp:  2021-02-10T10:34:38Z
  Finalizers:
    stork.libopenstorage.org/finalizer-cleanup
  Generation:        6
  Resource Version:  34699738
  Self Link:         /apis/stork.libopenstorage.org/v1alpha1/namespaces/default/volumesnapshotrestores/oracle-soe-snap-restore
  UID:               95d12b1f-18ad-4364-b0b3-1461a0c814c2
Spec:
  Group Snapshot:    true
  Source Name:       oracle-soe-snap
  Source Namespace:  oracle-namespace
Status:
  Status:  Successful
  Volumes:
    Namespace:  oracle-namespace
    Pvc:        ora-startup193-oracle19c-0
    Reason:     Restore is successful
    Snapshot:   oracle-soe-snap-ora-startup193-oracle19c-0-7b6b16f4-71d3-4188-8009-881506ff39df
    Status:     Successful
    Volume:     pvc-bb0df994-c368-4d4f-8ace-4185f0244030
    Namespace:  oracle-namespace
    Pvc:        ora-setup193-oracle19c-0
    Reason:     Restore is successful
    Snapshot:   oracle-soe-snap-ora-setup193-oracle19c-0-7b6b16f4-71d3-4188-8009-881506ff39df
    Status:     Successful
    Volume:     pvc-e8562900-e2cf-4c21-9d9c-17fc1400bcf5
    Namespace:  oracle-namespace
    Pvc:        ora-data193-oracle19c-0
    Reason:     Restore is successful
    Snapshot:   oracle-soe-snap-ora-data193-oracle19c-0-7b6b16f4-71d3-4188-8009-881506ff39df
    Status:     Successful
    Volume:     pvc-6b710899-96e6-4b96-9a6b-1640f396fcd9
Events:
  Type    Reason      Age  From   Message
  ----    ------      ---  ----   -------
  Normal  Successful  36m  stork  Snapshot in-Place Restore completed
Restart Oracle 19c database
Restart our Oracle 19c database container using kubectl scale with replicas set to 1.
# kubectl scale -n oracle-namespace statefulset oracle19c --replicas=1
statefulset.apps/oracle19c scaled
Check Oracle log file
# kubectl logs pod/oracle19c-0 -n oracle-namespace
...
SQL> Disconnected from Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
The Oracle base remains unchanged with value /opt/oracle
DB is in good health on startup
The Oracle base remains unchanged with value /opt/oracle
#########################
DATABASE IS READY TO USE!
#########################
And check the Order Items table again.
$ sqlplus soe/soe@//10.225.115.120:30294/PSTGPDB1
SQL> select count(*) from ORDER_ITEMS;
COUNT(*)
----------
8570869
Summary
In this post I have shared how we can use STORK (STorage Orchestrator Runtime for Kubernetes) to manage groups of PVCs, enabling snapshots and restores of a containerised Oracle database to a known point in time.
If you found this post interesting, you may also want to check out my other Kubernetes posts and have a look at my example manifest files for Oracle, which you can find here.