Background
In my Getting Started with Kubernetes database Operators series I have previously covered the Oracle database Operator, the MySQL database Operator and the YugabyteDB Operator; now it's time for PostgreSQL.
PostgreSQL Database Operators
There are a number of PostgreSQL Kubernetes Operators available, including Crunchy PostgreSQL for Kubernetes, the Zalando Postgres Operator and Percona Distribution for PostgreSQL on Kubernetes, to name just a few. However, for this blog post I will be taking the popular EnterpriseDB (EDB) Cloud Native PostgreSQL Operator out for a spin.
If you want to explore other database Operators, visit OperatorHub.io, which provides a registry of Kubernetes Operators.
Kubernetes Environment
Below is my Kubernetes environment.
% kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
master-1   Ready    control-plane,master   36d   v1.21.3
node-1-1   Ready    <none>                 36d   v1.21.3
node-1-2   Ready    <none>                 36d   v1.21.3
node-1-3   Ready    <none>                 36d   v1.21.3
node-1-4   Ready    <none>                 36d   v1.21.3
node-1-5   Ready    <none>                 36d   v1.21.3
node-1-6   Ready    <none>                 36d   v1.21.3
node-1-7   Ready    <none>                 36d   v1.21.3
Install Cloud Native PostgreSQL
To begin, let's install the latest Cloud Native PostgreSQL Operator.
% kubectl apply -f https://get.enterprisedb.io/cnp/postgresql-operator-1.11.0.yaml
namespace/postgresql-operator-system created
customresourcedefinition.apiextensions.k8s.io/backups.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/clusters.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/poolers.postgresql.k8s.enterprisedb.io created
customresourcedefinition.apiextensions.k8s.io/scheduledbackups.postgresql.k8s.enterprisedb.io created
serviceaccount/postgresql-operator-manager created
clusterrole.rbac.authorization.k8s.io/postgresql-operator-manager created
clusterrolebinding.rbac.authorization.k8s.io/postgresql-operator-manager-rolebinding created
configmap/postgresql-operator-default-monitoring created
service/postgresql-operator-webhook-service created
deployment.apps/postgresql-operator-controller-manager created
mutatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-mutating-webhook-configuration created
validatingwebhookconfiguration.admissionregistration.k8s.io/postgresql-operator-validating-webhook-configuration created
And confirm the Operator installation.
% kubectl get deploy -n postgresql-operator-system postgresql-operator-controller-manager
NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
postgresql-operator-controller-manager   1/1     1            1           110s
Create Portworx Storage Class
For example, create a file named px-postgresql-sc.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-postgresql-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "2"
  io_profile: "db_remote"
allowVolumeExpansion: true
And create it using kubectl apply, for example.
% kubectl apply -f px-postgresql-sc.yaml
storageclass.storage.k8s.io/px-postgresql-sc created

% kubectl describe sc/px-postgresql-sc
Name:                  px-postgresql-sc
IsDefaultClass:        No
Annotations:           kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"px-postgresql-sc"},"parameters":{"io_profile":"db_remote","repl":"2"},"provisioner":"kubernetes.io/portworx-volume"}
Provisioner:           kubernetes.io/portworx-volume
Parameters:            io_profile=db_remote,repl=2
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>
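If you want to sanity-check the storage class before deploying the database, you can create a standalone Persistent Volume Claim against it and confirm it binds; the claim name px-test-pvc below is just an example.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: px-test-pvc
spec:
  storageClassName: px-postgresql-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Once you have checked the claim reaches the Bound status with kubectl get pvc, it can be deleted again; the Operator will create its own claims for the cluster.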
Deploy PostgreSQL Cluster
Create a file named px-postgresql-cluster.yaml
# Example of PX PostgreSQL cluster
apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Cluster
metadata:
  name: px-cluster
spec:
  instances: 3

  # Example of rolling update strategy:
  # - unsupervised: automated update of the primary once all
  #   replicas have been upgraded (default)
  # - supervised: requires manual supervision to perform
  #   the switchover of the primary
  primaryUpdateStrategy: unsupervised

  description: "Portworx PostgreSQL Cluster"

  storage:
    storageClass: "px-postgresql-sc"
    size: 1Gi
Use kubectl to create a three-instance PostgreSQL cluster.
% kubectl apply -f px-postgresql-cluster.yaml
cluster.postgresql.k8s.enterprisedb.io/px-cluster created
Check PostgreSQL pod status
% kubectl get pods -o wide
NAME           READY   STATUS    RESTARTS   AGE    IP           NODE       NOMINATED NODE   READINESS GATES
px-cluster-1   1/1     Running   0          3m9s   10.244.3.6   node-1-6   <none>           <none>
px-cluster-2   1/1     Running   0          88s    10.244.4.6   node-1-4   <none>           <none>
px-cluster-3   1/1     Running   0          20s    10.244.7.7   node-1-7   <none>           <none>
And the Portworx Persistent Volume Claims.
% kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
px-cluster-1   Bound    pvc-326cdba8-6a0e-417d-8106-054a92ea16d8   1Gi        RWO            px-postgresql-sc   5m54s
px-cluster-2   Bound    pvc-0667c140-479c-4fe3-bd0c-ba8c7dbcd890   1Gi        RWO            px-postgresql-sc   5m23s
px-cluster-3   Bound    pvc-968d7735-1c97-4c3a-82cf-a3e6bc49e083   1Gi        RWO            px-postgresql-sc   3m44s
Check PostgreSQL cluster status
% kubectl get cluster px-cluster
NAME         AGE     INSTANCES   READY   STATUS                     PRIMARY
px-cluster   4m49s   3           3       Cluster in healthy state   px-cluster-1
Additional details can be obtained with kubectl get cluster -o json, for example.
% kubectl get cluster px-cluster -o json
{
    "apiVersion": "postgresql.k8s.enterprisedb.io/v1",
    "kind": "Cluster",
    "metadata": {
        "annotations": {
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"postgresql.k8s.enterprisedb.io/v1\",\"kind\":\"Cluster\",\"metadata\":{\"annotations\":{},\"name\":\"px-cluster\",\"namespace\":\"default\"},\"spec\":{\"description\":\"Portworx PostgreSQL Cluster\",\"instances\":3,\"primaryUpdateStrategy\":\"unsupervised\",\"storage\":{\"size\":\"1Gi\",\"storageClass\":\"px-postgresql-sc\"}}}\n"
        },
        "creationTimestamp": "2021-12-17T10:28:17Z",
        "generation": 1,
        "managedFields": [
            {
                "apiVersion": "postgresql.k8s.enterprisedb.io/v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:metadata": {
                        "f:annotations": {
                            ".": {},
                            "f:kubectl.kubernetes.io/last-applied-configuration": {}
                        }
                    },
                    "f:spec": {
                        ".": {},
                        "f:description": {},
                        "f:enableSuperuserAccess": {},
                        "f:imagePullPolicy": {},
                        "f:instances": {},
                        "f:logLevel": {},
                        "f:maxSyncReplicas": {},
                        "f:minSyncReplicas": {},
                        "f:postgresGID": {},
                        "f:postgresUID": {},
                        "f:primaryUpdateStrategy": {},
                        "f:startDelay": {},
                        "f:stopDelay": {},
                        "f:storage": {
                            ".": {},
                            "f:resizeInUseVolumes": {},
                            "f:size": {},
                            "f:storageClass": {}
                        },
                        "f:switchoverDelay": {}
                    }
                },
                "manager": "kubectl-client-side-apply",
                "operation": "Update",
                "time": "2021-12-17T10:28:17Z"
            },
            {
                "apiVersion": "postgresql.k8s.enterprisedb.io/v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:status": {
                        ".": {},
                        "f:certificates": {
                            ".": {},
                            "f:clientCASecret": {},
                            "f:expirations": {
                                ".": {},
                                "f:px-cluster-ca": {},
                                "f:px-cluster-replication": {},
                                "f:px-cluster-server": {}
                            },
                            "f:replicationTLSSecret": {},
                            "f:serverAltDNSNames": {},
                            "f:serverCASecret": {},
                            "f:serverTLSSecret": {}
                        },
                        "f:cloudNativePostgresqlCommitHash": {},
                        "f:cloudNativePostgresqlOperatorHash": {},
                        "f:configMapResourceVersion": {
                            ".": {},
                            "f:metrics": {
                                ".": {},
                                "f:postgresql-operator-default-monitoring": {}
                            }
                        },
                        "f:currentPrimary": {},
                        "f:currentPrimaryTimestamp": {},
                        "f:healthyPVC": {},
                        "f:instances": {},
                        "f:instancesStatus": {
                            ".": {},
                            "f:healthy": {}
                        },
                        "f:latestGeneratedNode": {},
                        "f:licenseStatus": {
                            ".": {},
                            "f:isImplicit": {},
                            "f:isTrial": {},
                            "f:licenseExpiration": {},
                            "f:licenseStatus": {},
                            "f:repositoryAccess": {},
                            "f:valid": {}
                        },
                        "f:phase": {},
                        "f:poolerIntegrations": {
                            ".": {},
                            "f:pgBouncerIntegration": {}
                        },
                        "f:pvcCount": {},
                        "f:readService": {},
                        "f:readyInstances": {},
                        "f:secretsResourceVersion": {
                            ".": {},
                            "f:applicationSecretVersion": {},
                            "f:clientCaSecretVersion": {},
                            "f:replicationSecretVersion": {},
                            "f:serverCaSecretVersion": {},
                            "f:serverSecretVersion": {},
                            "f:superuserSecretVersion": {}
                        },
                        "f:targetPrimary": {},
                        "f:targetPrimaryTimestamp": {},
                        "f:writeService": {}
                    }
                },
                "manager": "manager",
                "operation": "Update",
                "time": "2021-12-17T10:31:43Z"
            }
        ],
        "name": "px-cluster",
        "namespace": "default",
        "resourceVersion": "14811923",
        "uid": "f04f16f1-2bed-42cd-b219-b00a8b43274b"
    },
    "spec": {
        "affinity": {
            "podAntiAffinityType": "preferred",
            "topologyKey": ""
        },
        "bootstrap": {
            "initdb": {
                "database": "app",
                "encoding": "UTF8",
                "localeCType": "C",
                "localeCollate": "C",
                "owner": "app"
            }
        },
        "description": "Portworx PostgreSQL Cluster",
        "enableSuperuserAccess": true,
        "imageName": "quay.io/enterprisedb/postgresql:14.1",
        "imagePullPolicy": "IfNotPresent",
        "instances": 3,
        "logLevel": "info",
        "maxSyncReplicas": 0,
        "minSyncReplicas": 0,
        "monitoring": {
            "customQueriesConfigMap": [
                {
                    "key": "queries",
                    "name": "postgresql-operator-default-monitoring"
                }
            ],
            "disableDefaultQueries": false,
            "enablePodMonitor": false
        },
        "postgresGID": 26,
        "postgresUID": 26,
        "postgresql": {
            "parameters": {
                "dynamic_shared_memory_type": "posix",
                "log_destination": "csvlog",
                "log_directory": "/controller/log",
                "log_filename": "postgres",
                "log_rotation_age": "0",
                "log_rotation_size": "0",
                "log_truncate_on_rotation": "false",
                "logging_collector": "on",
                "max_parallel_workers": "32",
                "max_replication_slots": "32",
                "max_worker_processes": "32",
                "shared_memory_type": "mmap",
                "shared_preload_libraries": "",
                "wal_keep_size": "512MB"
            }
        },
        "primaryUpdateStrategy": "unsupervised",
        "resources": {},
        "startDelay": 30,
        "stopDelay": 30,
        "storage": {
            "resizeInUseVolumes": true,
            "size": "1Gi",
            "storageClass": "px-postgresql-sc"
        },
        "switchoverDelay": 40000000
    },
    "status": {
        "certificates": {
            "clientCASecret": "px-cluster-ca",
            "expirations": {
                "px-cluster-ca": "2022-03-17 10:23:17 +0000 UTC",
                "px-cluster-replication": "2022-03-17 10:23:17 +0000 UTC",
                "px-cluster-server": "2022-03-17 10:23:17 +0000 UTC"
            },
            "replicationTLSSecret": "px-cluster-replication",
            "serverAltDNSNames": [
                "px-cluster-rw",
                "px-cluster-rw.default",
                "px-cluster-rw.default.svc",
                "px-cluster-r",
                "px-cluster-r.default",
                "px-cluster-r.default.svc",
                "px-cluster-ro",
                "px-cluster-ro.default",
                "px-cluster-ro.default.svc"
            ],
            "serverCASecret": "px-cluster-ca",
            "serverTLSSecret": "px-cluster-server"
        },
        "cloudNativePostgresqlCommitHash": "4d2cb9a",
        "cloudNativePostgresqlOperatorHash": "fa8752594a2302529cac373070311dc7e9fdf8df48def123c3d72fb6e6c94c81",
        "configMapResourceVersion": {
            "metrics": {
                "postgresql-operator-default-monitoring": "14809211"
            }
        },
        "currentPrimary": "px-cluster-1",
        "currentPrimaryTimestamp": "2021-12-17T10:28:47Z",
        "healthyPVC": [
            "px-cluster-1",
            "px-cluster-2",
            "px-cluster-3"
        ],
        "instances": 3,
        "instancesStatus": {
            "healthy": [
                "px-cluster-1",
                "px-cluster-2",
                "px-cluster-3"
            ]
        },
        "latestGeneratedNode": 3,
        "licenseStatus": {
            "isImplicit": true,
            "isTrial": true,
            "licenseExpiration": "2022-01-16T10:28:17Z",
            "licenseStatus": "Implicit trial license",
            "repositoryAccess": false,
            "valid": true
        },
        "phase": "Cluster in healthy state",
        "poolerIntegrations": {
            "pgBouncerIntegration": {}
        },
        "pvcCount": 3,
        "readService": "px-cluster-r",
        "readyInstances": 3,
        "secretsResourceVersion": {
            "applicationSecretVersion": "14809174",
            "clientCaSecretVersion": "14809170",
            "replicationSecretVersion": "14809172",
            "serverCaSecretVersion": "14809170",
            "serverSecretVersion": "14809171",
            "superuserSecretVersion": "14809173"
        },
        "targetPrimary": "px-cluster-1",
        "targetPrimaryTimestamp": "2021-12-17T10:28:18Z",
        "writeService": "px-cluster-rw"
    }
}
Use kubectl get cluster -o jsonpath to return specific items, for example.
Get cluster name
% kubectl get cluster px-cluster -o "jsonpath={.*.name}"
px-cluster
Get cluster description
% kubectl get cluster px-cluster -o "jsonpath={.spec.description}"
Portworx PostgreSQL Cluster
Get cluster current primary
% kubectl get cluster px-cluster -o "jsonpath={.status.currentPrimary}"
px-cluster-1
Get cluster status
% kubectl get cluster px-cluster -o "jsonpath={.status.phase}"
Cluster in healthy state
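As an aside, the Operator installation earlier also created a poolers custom resource definition. I have not tried it in this environment, but as a sketch, a PgBouncer connection pooler in front of the primary could be requested with a manifest along these lines; the name px-cluster-pooler-rw is just an example.

apiVersion: postgresql.k8s.enterprisedb.io/v1
kind: Pooler
metadata:
  name: px-cluster-pooler-rw
spec:
  cluster:
    name: px-cluster
  instances: 1
  type: rw
  pgbouncer:
    poolMode: session

Check the version of the EDB documentation matching your Operator release for the full Pooler specification before relying on this.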
Cloud Native PostgreSQL plug-in
EnterpriseDB provides a plugin for kubectl to manage a PostgreSQL cluster in Kubernetes; we can install it with:
$ curl -sSfL \
  https://github.com/EnterpriseDB/kubectl-cnp/raw/main/install.sh | \
  sudo sh -s -- -b /usr/local/bin
This cnp extension provides some additional features to kubectl.
% kubectl cnp
A plugin to manage your Cloud Native PostgreSQL clusters

Usage:
  kubectl-cnp [command]

Available Commands:
  certificate Create a client certificate to connect to PostgreSQL using TLS and Certificate authentication
  completion  generate the autocompletion script for the specified shell
  help        Help about any command
  promote     Promote the pod named [cluster]-[node] or [node] to primary
  reload      Reload the cluster
  restart     Restart the cluster
  status      Get the status of a PostgreSQL cluster
  version     Prints version, commit sha and date of the build

Flags:
      --as string                      Username to impersonate for the operation
      --as-group stringArray           Group to impersonate for the operation, this flag can be repeated to specify multiple groups.
      --cache-dir string               Default cache directory (default "/Users/rekins/.kube/cache")
      --certificate-authority string   Path to a cert file for the certificate authority
      --client-certificate string      Path to a client certificate file for TLS
      --client-key string              Path to a client key file for TLS
      --cluster string                 The name of the kubeconfig cluster to use
      --context string                 The name of the kubeconfig context to use
  -h, --help                           help for kubectl-cnp
      --insecure-skip-tls-verify       If true, the server's certificate will not be checked for validity. This will make your HTTPS connections insecure
      --kubeconfig string              Path to the kubeconfig file to use for CLI requests.
  -n, --namespace string               If present, the namespace scope for this CLI request
      --request-timeout string         The length of time to wait before giving up on a single server request. Non-zero values should contain a corresponding time unit (e.g. 1s, 2m, 3h). A value of zero means don't timeout requests. (default "0")
  -s, --server string                  The address and port of the Kubernetes API server
      --tls-server-name string         Server name to use for server certificate validation. If it is not provided, the hostname used to contact the server is used
      --token string                   Bearer token for authentication to the API server
      --user string                    The name of the kubeconfig user to use

Use "kubectl-cnp [command] --help" for more information about a command.
Cloud Native PostgreSQL Status
The CNP status output below shows we are running two replicas; if the primary fails for any reason, the cluster will fail over to one of them.
% kubectl cnp status px-cluster
Cluster in healthy state
Name:               px-cluster
Namespace:          default
PostgreSQL Image:   quay.io/enterprisedb/postgresql:14.1
Primary instance:   px-cluster-1
Instances:          3
Ready instances:    3

Instances status
Pod name      Current LSN  Received LSN  Replay LSN  System ID            Primary  Replicating  Replay paused  Pending restart  Status
--------      -----------  ------------  ----------  ---------            -------  -----------  -------------  ---------------  ------
px-cluster-1  0/7000060                              7042616446968066064  ✓        ✗            ✗              ✗                OK
px-cluster-2               0/7000060     0/7000060   7042616446968066064  ✗        ✓            ✗              ✗                OK
px-cluster-3               0/7000060     0/7000060   7042616446968066064  ✗        ✓            ✗              ✗                OK
Summary
In this post I have shown how we can deploy a PostgreSQL database on Kubernetes using the EDB Cloud Native PostgreSQL Operator and provide additional resilience using Portworx persistent storage. In future posts I will look at failover, backup and restore.