Background
Within my blog series Getting Started with Kubernetes Database Operators I have created a number of posts covering different databases.
In this Part 2, I will take a deeper look at how we can use the EDB Cloud Native PostgreSQL Operator with Portworx Storage.
PostgreSQL Cluster
Let’s start by creating a new 3-node PostgreSQL cluster with a 10GB volume per instance, using the EDB PostgreSQL Kubernetes Operator we installed in my previous PostgreSQL post and the example YAML file below.
# Example of PX PostgreSQL cluster
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: postgres-cluster
spec:
  instances: 3
  # Example of rolling update strategy:
  # - unsupervised: automated update of the primary once all
  #   replicas have been upgraded (default)
  # - supervised: requires manual supervision to perform
  #   the switchover of the primary
  primaryUpdateStrategy: unsupervised
  description: "Portworx PostgreSQL Cluster"
  storage:
    storageClass: "px-postgresql-sc"
    size: 10Gi
Create the cluster using kubectl apply -f <filename.yaml>, for example:
% kubectl apply -f px-postgreSQL-cluster.yaml
cluster.postgresql.k8s.enterprisedb.io/postgres-cluster created
Kubernetes Volumes
Using kubectl get pvc we can see that the Operator has created three 10Gi volumes using the Portworx storage class, as requested.
% kubectl get pvc -o wide
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE    VOLUMEMODE
postgres-cluster-1   Bound    pvc-fc58e44a-ea40-49d1-97f3-c9b8ce9c46b2   10Gi       RWO            px-postgresql-sc   129m   Filesystem
postgres-cluster-2   Bound    pvc-e1b023a4-0921-4e16-a314-f283ab034695   10Gi       RWO            px-postgresql-sc   129m   Filesystem
postgres-cluster-3   Bound    pvc-a631e5d6-26b2-4b3a-ae91-3d819e7659f0   10Gi       RWO            px-postgresql-sc   128m   Filesystem
Portworx Volumes
The Portworx pxctl volume list command can also be used to see the 3 volumes created, and to confirm that they have a replication factor (HA) of 2.
% pxctl volume list
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
ID                   NAME                                       SIZE    HA  SHARED  ENCRYPTED  PROXY-VOLUME  IO_PRIORITY  STATUS                           SNAP-ENABLED
524136825577830642   pvc-a631e5d6-26b2-4b3a-ae91-3d819e7659f0   10 GiB  2   no      no         no            HIGH         up - attached on 10.225.115.153  no
205303067792675678   pvc-e1b023a4-0921-4e16-a314-f283ab034695   10 GiB  2   no      no         no            HIGH         up - attached on 10.225.115.158  no
188244980759306500   pvc-fc58e44a-ea40-49d1-97f3-c9b8ce9c46b2   10 GiB  2   no      no         no            HIGH         up - attached on 10.225.115.152  no
Storage Class
Before we move on, we can also check the settings of our Kubernetes Storage Class with kubectl describe sc/<storage class>, for example:
# kubectl describe sc/px-postgresql-sc
Name:                  px-postgresql-sc
IsDefaultClass:        No
Annotations:           kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"px-postgresql-sc"},"parameters":{"io_profile":"db_remote","repl":"2"},"provisioner":"kubernetes.io/portworx-volume"}
Provisioner:           kubernetes.io/portworx-volume
Parameters:            io_profile=db_remote,repl=2
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
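For reference, the last-applied-configuration annotation above tells us that the storage class was originally created from a manifest equivalent to the following (reconstructed here purely for readability):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-postgresql-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"              # Portworx replication factor (HA) of 2
  io_profile: db_remote  # Portworx IO profile for remote database workloads
allowVolumeExpansion: true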
And with the Portworx pxctl volume inspect command we can see further information, including details of the volume's replica sets.
% pxctl volume inspect pvc-a631e5d6-26b2-4b3a-ae91-3d819e7659f0
Defaulted container "portworx" out of: portworx, csi-node-driver-registrar
Volume          :  524136825577830642
Name            :  pvc-a631e5d6-26b2-4b3a-ae91-3d819e7659f0
Size            :  10 GiB
Format          :  ext4
HA              :  2
IO Priority     :  HIGH
Creation time   :  Jan 28 09:34:50 UTC 2022
Shared          :  no
Status          :  up
State           :  Attached: 3f633a0e-3b60-46a5-a4a3-c2a0aa79eec0 (10.225.115.153)
Last Attached   :  Jan 28 09:34:54 UTC 2022
Device Path     :  /dev/pxd/pxd524136825577830642
Labels          :  k8s.enterprisedb.io/pvcStatus=initializing,namespace=default,pvc=postgres-cluster-3,repl=2,io_profile=db_remote,k8s.enterprisedb.io/cluster=postgres-cluster,k8s.enterprisedb.io/nodeSerial=3,k8s.enterprisedb.io/operatorVersion=1.11.0
Mount Options   :  discard
Reads           :  402
Reads MS        :  633
Bytes Read      :  8491008
Writes          :  4178
Writes MS       :  24002
Bytes Written   :  272429056
IOs in progress :  0
Bytes used      :  39 MiB
Replica sets on nodes:
        Set 0
          Node           : 10.225.115.153 (Pool 7e422d13-1771-44a4-913d-a1ad32211d1c )
          Node           : 10.225.115.158 (Pool fed3d62e-42db-4dbc-884b-dc02af721498 )
Replication Status       :  Up
Volume consumers         :
        - Name           : postgres-cluster-3 (0fa9ead6-8bfc-4df3-a56a-846356e12076) (Pod)
          Namespace      : default
          Running on     : node-1-7
          Controlled by  : postgres-cluster (Cluster)
Confirm the status of the PostgreSQL cluster with kubectl get cluster <cluster name>, for example:
% kubectl get cluster postgres-cluster
NAME               AGE   INSTANCES   READY   STATUS                     PRIMARY
postgres-cluster   22m   3           3       Cluster in healthy state   postgres-cluster-1
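As an aside, if you also have the EDB cnp plugin for kubectl installed (an assumption, as it is not part of this walkthrough), its status sub-command gives a more detailed view of the same cluster; the invocation would look something like this (output omitted):

% kubectl cnp status postgres-cluster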
Database Creation
Let’s create a PostgreSQL database by shelling into the primary pod (postgres-cluster-1) and using the PostgreSQL command line interface, psql.
% kubectl exec -it pod/postgres-cluster-1 -- /bin/bash
bash-4.4$ psql
psql (14.1)
Type "help" for help.

postgres=# create database demo;
CREATE DATABASE
postgres=# \l+
                                                            List of databases
   Name    |  Owner   | Encoding | Collate | Ctype |   Access privileges   |  Size   | Tablespace |                Description
-----------+----------+----------+---------+-------+-----------------------+---------+------------+--------------------------------------------
 app       | app      | UTF8     | C       | C     |                       | 8553 kB | pg_default |
 demo      | postgres | UTF8     | C       | C     |                       | 8553 kB | pg_default |
 postgres  | postgres | UTF8     | C       | C     |                       | 8553 kB | pg_default | default administrative connection database
 template0 | postgres | UTF8     | C       | C     | =c/postgres          +| 8401 kB | pg_default | unmodifiable empty database
           |          |          |         |       | postgres=CTc/postgres |         |            |
 template1 | postgres | UTF8     | C       | C     | =c/postgres          +| 8553 kB | pg_default | default template for new databases
           |          |          |         |       | postgres=CTc/postgres |         |            |
(5 rows)

postgres=# \q
And populate the database with 10 million rows using pgbench -i (initialise) and -s (scale factor) so that we can see the storage utilisation, for example:
bash-4.4$ pgbench -i -s 100 demo
dropping old tables...
NOTICE:  table "pgbench_accounts" does not exist, skipping
NOTICE:  table "pgbench_branches" does not exist, skipping
NOTICE:  table "pgbench_history" does not exist, skipping
NOTICE:  table "pgbench_tellers" does not exist, skipping
creating tables...
generating data (client-side)...
10000000 of 10000000 tuples (100%) done (elapsed 35.08 s, remaining 0.00 s)
vacuuming...
creating primary keys...
done in 92.50 s (drop tables 0.00 s, create tables 0.01 s, client-side generate 35.31 s, vacuum 48.14 s, primary keys 9.04 s).
bash-4.4$
As shown in the NOTICE messages, pgbench creates pgbench_accounts, pgbench_branches, pgbench_history, and pgbench_tellers tables to run the transactions for benchmarking.
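If you want to verify the data volumes yourself, you can check the row counts from inside the pod; with a scale factor of 100, pgbench loads 100,000 rows per scale unit into pgbench_accounts (10 million in total), 100 rows into pgbench_branches and 1,000 rows into pgbench_tellers, while pgbench_history starts empty. A minimal check might look like the query below, noting that n_live_tup is an estimate maintained by the statistics collector rather than an exact count:

bash-4.4$ psql demo -c "SELECT relname, n_live_tup FROM pg_stat_user_tables WHERE relname LIKE 'pgbench_%' ORDER BY relname;"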
And review the database size:
postgres=# \l+
                                                            List of databases
   Name    |  Owner   | Encoding | Collate | Ctype |   Access privileges   |  Size   | Tablespace |                Description
-----------+----------+----------+---------+-------+-----------------------+---------+------------+--------------------------------------------
 app       | app      | UTF8     | C       | C     |                       | 8553 kB | pg_default |
 demo      | postgres | UTF8     | C       | C     |                       | 1504 MB | pg_default |
 postgres  | postgres | UTF8     | C       | C     |                       | 8553 kB | pg_default | default administrative connection database
 template0 | postgres | UTF8     | C       | C     | =c/postgres          +| 8401 kB | pg_default | unmodifiable empty database
           |          |          |         |       | postgres=CTc/postgres |         |            |
 template1 | postgres | UTF8     | C       | C     | =c/postgres          +| 8553 kB | pg_default | default template for new databases
           |          |          |         |       | postgres=CTc/postgres |         |            |
(5 rows)
Simple Test
Let’s perform a simple TPC-B-style test using pgbench -c (number of clients) and -T (time in seconds):
bash-4.4$ pgbench -c 10 -T 300 demo
pgbench (14.1)
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 100
query mode: simple
number of clients: 10
number of threads: 1
duration: 300 s
number of transactions actually processed: 211353
latency average = 14.194 ms
initial connection time = 32.439 ms
tps = 704.537470 (without initial connection time)
In the test above we can see that it ran with 10 clients for 5 minutes (300 seconds), delivering around 704 transactions per second. The next step is to repeat the test using different storage class configurations and compare the results, as sketched below.
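As one example of such a variation, the manifest below sketches a hypothetical px-postgresql-sc-r3 storage class that raises the Portworx replication factor from 2 to 3 while keeping the same IO profile; the class name and parameter choices are illustrative only and not part of the deployment shown above:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: px-postgresql-sc-r3   # hypothetical name for an alternative storage class
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"              # three Portworx replicas instead of two
  io_profile: db_remote  # unchanged, so only replication differs between runs
allowVolumeExpansion: true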
Summary
In this post I have shared how we can deploy a PostgreSQL database on Kubernetes using the EDB PostgreSQL Operator and provide additional resilience using Portworx volume replication.
We have looked at how we can use pgbench to test the impact of database and storage configurations on performance.