Background
In my post Automated Oracle Container Engine for Kubernetes (OKE) build with Terraform, I shared how we can use Terraform to automate the creation of an Oracle Kubernetes Engine (OKE) cluster on Oracle Cloud Infrastructure (OCI).
An alternative, and increasingly popular, approach is Rancher. Rancher provides both a Command Line Interface (CLI) and a Web UI, and addresses container management across many different on-premises and cloud solutions.
This got me thinking: can we also use the Rancher server to deploy an Oracle Kubernetes Engine (OKE) cluster on Oracle Cloud Infrastructure (OCI)?
Rancher Server
The Rancher Server UI runs from Docker, so before you start, check the Rancher support matrix for supported OS and Docker versions.
If you don’t have a compatible Virtual Machine (VM) and have VirtualBox and Vagrant installed, you may want to consider using my Oracle Linux 8 VM, which I have made available on GitHub: https://github.com/raekins/vagrant-ol8.
The Vagrant build uses Ansible to configure an Oracle Linux 8 VM with docker-ce (20.10) and other related packages.
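If you take the Vagrant route, bringing the VM up typically looks like this (a sketch, assuming the repository's default Vagrantfile):

% git clone https://github.com/raekins/vagrant-ol8.git
% cd vagrant-ol8
% vagrant up
% vagrant ssh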
Using a Linux VM, or the newly created Vagrant Oracle Linux 8 virtual machine, start the Rancher Server container:
[vagrant@ol8-vagrant ~]$ cat /etc/oracle-release
Oracle Linux Server release 8.4
[vagrant@ol8-vagrant ~]$ docker version --format '{{.Server.Version | printf "%.5s" }}'
20.10
[vagrant@ol8-vagrant ~]$ sudo docker run --privileged -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher
Unable to find image 'rancher/rancher:latest' locally
latest: Pulling from rancher/rancher
...
Rancher OKE creation
Point your browser at the Rancher Server UI, for example https://localhost/ (or https://localhost:8443/ if your VM forwards port 443 to host port 8443), and enter and confirm a password for the admin user.
As we will be using Rancher to create an OKE cluster, select I want to create or manage multiple clusters.

By default the Rancher server does not include the Oracle Cloud Infrastructure OKE option under With a hosted Kubernetes provider.
We can easily extend the Rancher Server to include OKE by navigating to Tools -> Drivers.
From here, select Oracle OKE and click Activate; this will initiate the download of the Oracle OKE cluster driver.

Once complete the state will change from Inactive to Active.
Return to the main menu and click Add Cluster; the Oracle OKE Kubernetes provider should now be listed.

Add Cluster
To deploy an OKE cluster using Rancher you will need the following OCI details (most of which can be found in your ~/.oci/config file, as shown below):
- Tenancy OCID
- Region
- User OCID
- User fingerprint
- User private key
- Compartment OCID
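If you have already configured the OCI CLI, most of these values can be lifted straight from your ~/.oci/config file; a sample layout with placeholder values:

[DEFAULT]
user=ocid1.user.oc1..aaaaaaaa...
fingerprint=xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx
tenancy=ocid1.tenancy.oc1..aaaaaaaa...
region=uk-london-1
key_file=~/.oci/oci_api_key.pem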
Enter a name and optional description in the Cluster Name and Description fields.
Open up the OCI Cloud Credentials area, enter the Tenancy OCID, User OCID and User fingerprint, select the required Region from the pick-list, and copy and paste the contents of your ~/.oci/oci_api_key.pem into User Private Key.
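If you are unsure of your key's fingerprint, it can be derived from the private key, for example (assuming your API signing key is at ~/.oci/oci_api_key.pem):

% openssl rsa -pubout -outform DER -in ~/.oci/oci_api_key.pem | openssl md5 -c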

Authenticate & Configure Cluster
Select the Kubernetes Version from the pick-list, provide the Compartment OCID for the compartment to be used, and enter a Nodes Per AD Count.
A Nodes Per AD Count value of 1 will create a single worker node in each of the three Availability Domains within the Region specified (note that not every Region has three ADs).
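You can confirm how many Availability Domains your Region has with the OCI CLI, for example (assuming your default profile points at the intended tenancy and region):

% oci iam availability-domain list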

Configure Virtual Cloud Network
Choose the Virtual Cloud Network (VCN) configuration; Quick Create creates two public subnets and a private subnet for the worker nodes.
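Once provisioning starts, the subnets can be cross-checked from the OCI CLI; a hedged example, substituting your own OCIDs for the placeholders:

% oci network subnet list --compartment-id <compartment-ocid> --vcn-id <vcn-ocid>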

Configure Node Instances
Select the Instance Shape and Operating System from the pick-lists; remember, not every shape is available in every Region.
I have selected VM.Standard.E3.Flex; as this is a Flex shape, the number of OCPUs is also required. As I do not plan to shell into my servers, I do not need to provide a public SSH key.
Once ready click Create.

Clusters
You should now be returned to the Clusters page, where you can monitor the state of the OKE cluster provisioning.

Provisioning
If we log on to our Oracle Cloud account and navigate to Developer Services -> Kubernetes Clusters (OKE), we can see that the OKE cluster creation has been initiated.


We now need to wait for Rancher to report the OKE cluster as Active.

Once Active, the Name, Provider and number of Nodes defined earlier should be visible.
Oracle Cloud Infrastructure
If we return to OCI and navigate to Developer Services -> Kubernetes Clusters (OKE), we can view the resources created by Rancher.
Our Rancher initiated OKE Cluster should now be Active.

Node Pool
Following the Node Pool links from the OKE Cluster, we can confirm the Kubernetes Version, Shape, Image Name and number of OCPUs assigned.

We can see that the 3 worker nodes have been created in the 3 Availability Domains (ADs) within my Region.
Local Access
If you plan to access the OKE Cluster from your laptop, return to the Cluster page.
From the OCI hamburger menu, navigate to Developer Services -> Kubernetes Clusters (OKE) and select your OKE cluster.
Select Access Cluster and then Local Access, and follow the instructions provided:
1. Confirm the OCI CLI version is higher than 2.24.0, for example:
% oci -v
2.26.2
2. Create a directory for the OKE kubeconfig file:
% mkdir -p $HOME/.kube
3. Create the kubeconfig file using the oci ce cluster create-kubeconfig --cluster-id command provided:
% oci ce cluster create-kubeconfig --cluster-id ocid1.cluster.oc1.uk-london-1.xxx --file $HOME/.kube/config --region uk-london-1 --token-version 2.0.0
4. Set KUBECONFIG to the location of the .kube/config file:
% export KUBECONFIG=$HOME/.kube/config
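You can quickly confirm kubectl is now pointing at the new OKE context with:

% kubectl config current-context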
Kubectl
Now we have our OKE cluster up and running and a kubeconfig configured, we should be able to use kubectl to manage the OKE cluster.
I am currently using kubectl version 1.20.8 on my laptop.
% kubectl version --short | awk -Fv '/Server Version: / {print $3}'
1.20.8
Let’s check that we can see our 3 Kubernetes nodes using kubectl get nodes:
% kubectl get nodes -o wide
NAME          STATUS   ROLES   AGE    VERSION   INTERNAL-IP   EXTERNAL-IP      OS-IMAGE                  KERNEL-VERSION                    CONTAINER-RUNTIME
10.0.10.154   Ready    node    119s   v1.20.8   10.0.10.154   132.XXX.XXX.59   Oracle Linux Server 7.9   5.4.17-2102.202.5.el7uek.x86_64   cri-o://1.20.2
10.0.10.179   Ready    node    106s   v1.20.8   10.0.10.179   132.XXX.XXX.87   Oracle Linux Server 7.9   5.4.17-2102.202.5.el7uek.x86_64   cri-o://1.20.2
10.0.10.95    Ready    node    69s    v1.20.8   10.0.10.95    132.XXX.XXX.13   Oracle Linux Server 7.9   5.4.17-2102.202.5.el7uek.x86_64   cri-o://1.20.2
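To double-check the Availability Domain spread, the zone can be read from the node labels; a hedged example, assuming OKE applies the standard Kubernetes topology labels to its worker nodes:

% kubectl get nodes -L topology.kubernetes.io/zone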
Do we have any pods running in the kube-system namespace?
% kubectl get pods -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-749d57869-nh897                1/1     Running   0          99m
coredns-749d57869-vx2sj                1/1     Running   0          92m
coredns-749d57869-wz5cl                1/1     Running   0          92m
csi-oci-node-2sgjm                     1/1     Running   0          94m
csi-oci-node-dzd4j                     1/1     Running   1          93m
csi-oci-node-zvtw5                     1/1     Running   1          94m
kube-dns-autoscaler-6cbdf96f6d-gdjw5   1/1     Running   0          99m
kube-flannel-ds-8dzcd                  1/1     Running   1          94m
kube-flannel-ds-qtd8t                  1/1     Running   2          93m
kube-flannel-ds-z47gz                  1/1     Running   2          94m
kube-proxy-9dpx9                       1/1     Running   0          94m
kube-proxy-hx7xh                       1/1     Running   0          93m
kube-proxy-kk9q9                       1/1     Running   0          94m
proxymux-client-5hdnc                  1/1     Running   0          94m
proxymux-client-6mgx9                  1/1     Running   0          93m
proxymux-client-q7qtg                  1/1     Running   0          94m
Also, what storage classes are available?
% kubectl get sc
NAME            PROVISIONER                       RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
oci (default)   oracle.com/oci                    Delete          Immediate              false                  96m
oci-bv          blockvolume.csi.oraclecloud.com   Delete          WaitForFirstConsumer   false                  96m
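To exercise the oci-bv storage class, we can request a block volume with a PersistentVolumeClaim; a minimal sketch (the claim name demo-pvc is illustrative, and 50Gi reflects the OCI Block Volume minimum size):

% cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-pvc   # illustrative name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: oci-bv
  resources:
    requests:
      storage: 50Gi
EOF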
Summary
In this post I have shared how Rancher can be used to provision an Oracle Container Engine for Kubernetes (OKE) cluster on Oracle Cloud Infrastructure (OCI).
In my next post I will show how we can use Rancher to install Portworx into our OKE cluster.
Hi,
It is a very nice blog on Oracle and Kubernetes.
What about DR in Kubernetes, like Oracle Data Guard?
Can we build a better DR solution using Kubernetes in an Oracle database environment without using Data Guard?
Thanks,
kwanyoung
Hi Kwanyoung,
Thank you for taking the time to share your feedback.
Yes, using a Kubernetes technology like Portworx we can implement replication to have our data written to multiple storage nodes, or implement synchronous or asynchronous DR:
https://docs.portworx.com/portworx-install-with-kubernetes/disaster-recovery/px-metro/
https://docs.portworx.com/portworx-install-with-kubernetes/disaster-recovery/async-dr/