Background
In my last post I shared how to deploy Portworx by Pure Storage on Oracle Container Engine for Kubernetes (OKE) within Oracle Cloud Infrastructure (OCI).
In this post I will share how we can automate the build of an OKE cluster using Terraform.
Terraform for Oracle Cloud Infrastructure (OCI)
If you want to get started with OCI automation using Terraform as your Infrastructure as Code (IaC) tool of choice, I have some good news: Oracle has done most of the heavy lifting for us by creating a number of Terraform modules for OCI, including one for OKE.
You can find a listing of all the available modules on the Terraform Registry.
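If you prefer to consume the module straight from the registry instead of cloning it, a minimal, hypothetical root module might look like the sketch below; the registry source address is oracle-terraform-modules/oke/oci, and the input names are assumptions based on the variables covered later in this post (check the module documentation for the full list of required inputs):

module "oke" {
  # Module published by Oracle on the Terraform Registry
  source = "oracle-terraform-modules/oke/oci"

  # Identity and access parameters (assumed input names)
  api_fingerprint      = var.api_fingerprint
  api_private_key_path = var.api_private_key_path
  region               = var.region
  tenancy_id           = var.tenancy_id
  user_id              = var.user_id

  # General OCI parameters (assumed input names)
  compartment_id = var.compartment_id
  label_prefix   = var.label_prefix

  # Remaining required inputs (SSH keys, networking, OKE settings) omitted for brevity
}

In this post, however, we will clone the module from GitHub and drive it with a terraform.tfvars file.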

Terraform for Oracle Container Engine (OKE)
In this blog we will be using the Terraform OKE Module Installer for OCI (oracle-terraform-modules/oke).

This Terraform module provisions all the necessary resources for Oracle Container Engine and supports multiple topologies to address most needs.
Let’s start by going to the documentation and clicking on Pre-requisites; note that the documentation includes links to the Oracle GitHub repo (terraform-oci-oke) for source and documentation.
Complete the pre-requisites, including downloading and installing Terraform and Git.
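As an example, on macOS both tools can be installed with Homebrew (assuming Homebrew itself is already installed; this may pull a newer Terraform release than the one shown below, and other platforms can use the downloads linked from the pre-requisites page):

% brew install terraform git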
Confirm Terraform is installed and available in your path, and check its version with:
% terraform -v
Terraform v0.14.3
Now pull the Terraform OKE module:
% git clone https://github.com/oracle-terraform-modules/terraform-oci-oke.git
Cloning into 'terraform-oci-oke'...
remote: Enumerating objects: 116, done.
remote: Counting objects: 100% (116/116), done.
remote: Compressing objects: 100% (97/97), done.
remote: Total 2347 (delta 57), reused 45 (delta 19), pack-reused 2231
Receiving objects: 100% (2347/2347), 3.47 MiB | 2.14 MiB/s, done.
Resolving deltas: 100% (1640/1640), done.
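The remaining commands are run from inside the cloned repository, so change into that directory first:

% cd terraform-oci-oke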
Create a provider.tf file and add the following:
provider "oci" { fingerprint = var.api_fingerprint private_key_path = var.api_private_key_path region = var.region tenancy_ocid = var.tenancy_id user_ocid = var.user_id }
Copy the terraform.tfvars.example file to terraform.tfvars
% cp terraform.tfvars.example terraform.tfvars
Edit the terraform.tfvars file and update the following to your required OCI / OKE values.
I have listed below the variables and sections I updated; a sketch of an example terraform.tfvars follows further down.
- Identity and access parameters
  - api_fingerprint
  - api_private_key_path
  - region
  - tenancy_id
  - user_id
- General OCI parameters
  - compartment_id
  - label_prefix
- SSH keys
  - ssh_private_key_path (required if bastion is enabled)
  - ssh_public_key_path (required if bastion is enabled)
- Networking
  - vcn_dns_label
  - vcn_name
- Bastion
  - bastion_enabled
  - bastion_shape
  - bastion_timezone
- Operator
  - operator_shape
  - operator_timezone
- OKE
  - cluster_name
  - worker_mode
Note: not all compute shapes may be available to you, due to region or licence restrictions.
For example, my region is uk-london-1, and I am using the compute shape VM.Standard.E2.1 rather than the default VM.Standard.E3.Flex, as the default is not available to me.
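For reference, here is a minimal sketch of how my terraform.tfvars might look. The OCIDs, paths and names are placeholders, the shapes simply echo the VM.Standard.E2.1 example above, and the exact types and accepted values can vary between module versions, so always check the terraform.tfvars.example in your clone:

# identity and access parameters
api_fingerprint      = "xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx:xx"  # placeholder
api_private_key_path = "~/.oci/oci_api_key.pem"                           # placeholder
region               = "uk-london-1"
tenancy_id           = "ocid1.tenancy.oc1..aaaa"                          # placeholder
user_id              = "ocid1.user.oc1..aaaa"                             # placeholder

# general oci parameters
compartment_id = "ocid1.compartment.oc1..aaaa"                            # placeholder
label_prefix   = "oke"

# ssh keys (required if bastion is enabled)
ssh_private_key_path = "~/.ssh/id_rsa"
ssh_public_key_path  = "~/.ssh/id_rsa.pub"

# networking
vcn_dns_label = "oke"
vcn_name      = "oke-vcn"

# bastion
bastion_enabled  = true
bastion_shape    = "VM.Standard.E2.1"
bastion_timezone = "Europe/London"

# operator
operator_shape    = "VM.Standard.E2.1"
operator_timezone = "Europe/London"

# oke
cluster_name = "oke"
worker_mode  = "private"   # example value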
Creating the OKE Cluster
Initialise the working directory containing the Terraform configuration files:
% terraform init
Initializing modules...
Initializing the backend...
Initializing provider plugins...
- Reusing previous version of hashicorp/oci from the dependency lock file
- Reusing previous version of hashicorp/template from the dependency lock file
- Reusing previous version of hashicorp/null from the dependency lock file
- Reusing previous version of hashicorp/local from the dependency lock file
- Installing hashicorp/oci v4.7.0...
- Installed hashicorp/oci v4.7.0 (signed by HashiCorp)
- Installing hashicorp/template v2.2.0...
- Installed hashicorp/template v2.2.0 (signed by HashiCorp)
- Installing hashicorp/null v3.0.0...
- Installed hashicorp/null v3.0.0 (signed by HashiCorp)
- Installing hashicorp/local v2.0.0...
- Installed hashicorp/local v2.0.0 (signed by HashiCorp)
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
Run the plan to see the required changes:
% terraform plan

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:
...

Plan: 31 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + bastion_public_ip   = (known after apply)
  + ig_route_id         = (known after apply)
  + kubeconfig          = "export KUBECONFIG=generated/kubeconfig"
  + nat_route_id        = (known after apply)
  + operator_private_ip = (known after apply)
  + ssh_to_bastion      = (known after apply)
  + ssh_to_operator     = (known after apply)
  + subnet_ids          = (known after apply)
  + vcn_id              = (known after apply)

------------------------------------------------------------------------

Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
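As the note at the end of the plan output suggests, you can optionally save the plan to a file and then apply exactly that saved plan (the plan file name here is arbitrary):

% terraform plan -out oke.tfplan
% terraform apply oke.tfplan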
Apply to create the OKE cluster, VCN and other components.
% terraform apply

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create
 <= read (data resources)

Terraform will perform the following actions:
...

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.oke.null_resource.create_local_kubeconfig: Creating...
module.oke.null_resource.create_local_kubeconfig: Provisioning with 'local-exec'...
module.oke.null_resource.create_local_kubeconfig (local-exec): Executing: ["/bin/sh" "-c" "rm -rf generated"]
module.oke.null_resource.create_local_kubeconfig: Provisioning with 'local-exec'...
module.oke.null_resource.create_local_kubeconfig (local-exec): Executing: ["/bin/sh" "-c" "mkdir generated"]
module.oke.null_resource.create_local_kubeconfig: Provisioning with 'local-exec'...
module.oke.null_resource.create_local_kubeconfig (local-exec): Executing: ["/bin/sh" "-c" "touch generated/kubeconfig"]
module.oke.null_resource.create_local_kubeconfig: Creation complete after 0s [id=8607499120473297965]
module.base.module.bastion.oci_ons_notification_topic.bastion_notification[0]: Creating...
module.base.module.vcn.oci_core_vcn.vcn: Creating...
...
module.oke.null_resource.create_service_account[0] (remote-exec): Error from server (AlreadyExists): clusterrolebindings.rbac.authorization.k8s.io "cluster-admin" already exists
module.oke.null_resource.create_service_account[0]: Creation complete after 12s [id=5170337420313684055]

Apply complete! Resources: 31 added, 0 changed, 0 destroyed.

Outputs:
...
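The kubeconfig output above shows that the module writes a kubeconfig under the generated directory, so once the apply completes we can point kubectl at it and check our worker nodes (assuming kubectl is installed locally):

% export KUBECONFIG=generated/kubeconfig
% kubectl get nodes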
Oracle Cloud Infrastructure (OCI)
Now let's log on to OCI and click on the hamburger menu from the main dashboard.
Kubernetes Clusters
Navigate to Developer Services -> Kubernetes Clusters to see our newly created Oracle Container Engine for Kubernetes (OKE) cluster in the compartment specified.


Compute Instances
We can also see our virtual machines by navigating to Compute -> Instances.

From here we can see our Bastion, Operator and Kubernetes worker compute nodes created by Terraform using the shapes provided.

Virtual Cloud Networks
To see our Virtual Cloud Network (VCN), navigate to Networking -> Virtual Cloud Networks.
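If you prefer the command line to the console, the same resources can also be listed with the OCI CLI, assuming it is installed and configured; substitute your own compartment OCID:

% oci ce cluster list --compartment-id <compartment-ocid>
% oci compute instance list --compartment-id <compartment-ocid>
% oci network vcn list --compartment-id <compartment-ocid>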


OKE Tear-Down
We can also use Terraform to tear down our Kubernetes cluster when it is no longer required, by running terraform destroy and confirming:
% terraform destroy
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
- destroy
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value:
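If you are scripting the tear-down and want to skip the interactive prompt, Terraform also accepts the -auto-approve flag:

% terraform destroy -auto-approve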
Summary
In this post I have shared how we can use Infrastructure as Code (IaC) to rapidly spin up and tear down a Kubernetes cluster using Terraform.
In my next post I will show how we can create OCI block volumes and attach them to our OKE instances for future use with Portworx.