Kubernetes Persistent Volume Snapshot and Cloning

In this blog post I try out the new Pure Service Orchestrator (PSO) Kubernetes CSI driver volume snapshot and cloning features for the first time, using one of the FlashArray volumes I created in a previous Kubernetes blog post.

Persistent Volume Snapshot

Before we start, let's see if we have any existing persistent volume snapshots:

$ kubectl get volumesnapshot -n oracle-namespace
No resources found in oracle-namespace namespace.

Below is an example Kubernetes YAML file I will use to take a storage snapshot of my Oracle 19c database volume, ‘ora-data193’; the snapshot will use the suffix ‘snap’.

---
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: ora-data193-snap
  namespace: oracle-namespace
spec:
  snapshotClassName: pure-snapshotclass
  source:
    name: ora-data193
    kind: PersistentVolumeClaim
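The manifest above references a VolumeSnapshotClass called ‘pure-snapshotclass’. If your cluster does not already have one, a minimal sketch looks like the following — note the driver name ‘pure-csi’ is what PSO registers by default, but check your own installation before relying on it:

```yaml
---
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshotClass
metadata:
  name: pure-snapshotclass
# In the v1alpha1 API this field is 'snapshotter' (later APIs rename it to 'driver');
# it must match the CSI driver name registered by PSO.
snapshotter: pure-csi
```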

We can take a Pure Storage snapshot using the above YAML file, e.g.

$ kubectl apply -f snapshot19c.yaml 
volumesnapshot.snapshot.storage.k8s.io/ora-data193-snap created

We can see our new volume snapshot by running kubectl get volumesnapshot again:

$ kubectl get volumesnapshot -n oracle-namespace
NAME               AGE
ora-data193-snap   3m2s

You can also use kubectl describe volumesnapshot to see the full details:

kubectl describe volumesnapshot ora-data193-snap -n oracle-namespace

Ok, let’s log on to the FlashArray and see whether the Pure Service Orchestrator has taken a snapshot.

Pure Storage FlashArray Volume Snapshot details

Persistent Volume Cloning

Ok, now that we have a snapshot, let’s use it as the source of a new storage volume.

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ora-data193-clone
  namespace: oracle-namespace
spec:
  storageClassName: pure-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  dataSource:
    kind: VolumeSnapshot
    name: ora-data193-snap
    apiGroup: snapshot.storage.k8s.io
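As an aside, CSI drivers that support volume cloning also allow a live PVC to be used as the dataSource directly, skipping the snapshot step. A sketch, assuming PSO supports the clone capability in your version (the name ‘ora-data193-pvcclone’ is my own illustrative choice):

```yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ora-data193-pvcclone
  namespace: oracle-namespace
spec:
  storageClassName: pure-block
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  # Cloning directly from an existing PVC: the source PVC is in the
  # core API group, so no apiGroup field is set here.
  dataSource:
    kind: PersistentVolumeClaim
    name: ora-data193
```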

Using the above YAML file we can create our new persistent volume claim with:

$ kubectl apply -f clone19c.yaml 
persistentvolumeclaim/ora-data193-clone created

And check the status with kubectl get pvc (Persistent Volume Claim):

$ kubectl get pvc -n oracle-namespace
NAME                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS
ora-data193-clone   Bound    pvc-0a46648a-3492-4e1c-9e1f-afa1e246ff3e   20Gi       RWO            pure-block
...
Kubernetes Dashboard – Persistent Volume Claims
FlashArray Volume details
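To confirm the cloned volume is actually usable, you could mount it in a throw-away pod and inspect the data. A minimal sketch — the pod name ‘clone-test’ and the busybox image are my own illustrative choices, not from PSO:

```yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: clone-test
  namespace: oracle-namespace
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    # Mount the cloned volume so its contents can be inspected
    # with e.g. 'kubectl exec -it clone-test -n oracle-namespace -- ls /mnt/data'
    - name: data
      mountPath: /mnt/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: ora-data193-clone
```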

In my next Kubernetes blog post we will use the above, together with other recent learnings, to clone an existing physical Oracle database into my Kubernetes cluster.
