Sometimes I don’t want to do things “the hard way” – I just need a testing environment. Kubernetes is one of those things when I’m developing applications. So when I needed a development cluster for my newest app, I wanted something fast, easy and without a lot of dependencies. Luckily, the latest releases of Kubernetes have made great strides in reducing the time between wanting a Kubernetes cluster and actually using a Kubernetes cluster – if you are on a public cloud, that is, not OpenStack. So I fired up my code editor and created a quick way to deploy simple Kubernetes clusters with terraform and kubeadm.
This post provides the instructions and bootstrap scripts to create a Kubernetes development cluster on an OpenStack cloud with minimal dependencies (you will only need to install terraform and kubectl, plus this repo, on your laptop) to get a fully functional cluster running. At the end of the process, you should see a network environment like this in your OpenStack Horizon dashboard.
There are some limitations to this. Most notably, it is not highly available and requires Ubuntu 16.04 as the operating system (although kubeadm supports other OSes). You can read about the other limitations of kubeadm here.
This will be a five step process:
- Install prerequisites on laptop
- Download kubernetes terraform repository
- Configure the terraform scripts for your cloud
- Run terraform to create the cluster
- Configure your laptop to use your new cluster
Install Prerequisites
To start this process, you’ll need some software on your Mac (yes, I said Mac. I’ve tailored this post to the Mac; it will work on other platforms, but there might be some minor differences). I usually install most of my software with Homebrew.
$ brew install kubectl terraform jq
I’ve installed three utilities:
- kubectl (qube-cuttle), a CLI tool to interact with your Kubernetes cluster
- terraform (a tool by Hashicorp), which provisions and manages cloud infrastructure (used here to create OpenStack instances, networks and routers)
- jq, which parses and formats JSON data on the command line (not really needed)
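If you want a quick sanity check that all three landed correctly, each tool can report its version:
$ kubectl version --client
$ terraform version
$ jq --version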
Download Repository
Once we have the requisite tools installed, we need to pull down the terra-kubeadm-on-os github repository (https://github.com/slashk/terra-kubeadm-on-os).
$ curl -o tf.zip https://github.com/slashk/terra-kubeadm-on-os/archive/master.zip
$ unzip tf.zip
$ cd terra-kubeadm-on-os-master
You can also do this with git clone https://github.com/slashk/terra-kubeadm-on-os/ if you would like.
Configure Terraform
Next, we need to configure the scripts for our OpenStack cloud. You’ll need to log in to your OpenStack Horizon dashboard, as we’ll need to look up information in our account.
This set of scripts will create everything that we need for our cluster:
- Master and worker node instances
- Tenant network and subnet to connect the instances
- Router to connect the instances to the Internet (or provider network)
To accomplish this, we need to tell Terraform a few details about our OpenStack cloud.
$ cp sample.tfvars terraform.tfvars
$ vi terraform.tfvars
The terraform.tfvars file uses a simple variable = "value" format where you can customize it for your OpenStack cloud and preferences.
# Contained in your keystonerc file
region = "RegionOne"
user_name = "ken"
tenant_name = "k8s"
password = "this.is.not.my.password"
auth_url = "http://10.0.2.201:5000/v2.0"
# use `openstack image list` to get id of Ubuntu 16.04 LTS image
image_id = "6f5981a2-2e64-4381-ba68-e25a15c220e0"
# username of that image (ubuntu)
ssh_user_name = "ubuntu"
# find with `openstack floating ip pool list`
pool = "lab"
# Use `openstack flavor list` to find an appropriate flavor
master_flavor = "kube-master"
worker_flavor = "kube-master"
# keyfile path on your laptop
ssh_key_file = "~/.ssh/terraform"
# gateway of your external network
external_gateway = "fdcb4758-44da-4d15-ad7d-d7fce1d973ce"
# customize to your cluster size
worker_count = "2"
# kube_token can be any 6 character.16 character combination of lowercase letters and digits
kube_token = "123456.0123456789012345"
# dns_nameservers is the DNS server for your new subnet
dns_nameservers = ["10.0.2.1"]
# tenant_net_cidr is whatever CIDR to use for your new subnet
tenant_net_cidr = "192.168.50.0/24"
# a valid kubernetes version from the releases page at github
kube_version = "v1.5.2"
As you can see, I’ve set this up as a cluster of 1 master and 2 worker nodes running Kubernetes version 1.5.2, on nodes sized with the kube-master flavor (that I specially created in my OpenStack cloud).
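If you don’t have these values handy, most can be looked up with the openstack CLI (assuming it is installed and your credentials are sourced), and the token can be generated rather than invented:
# look up image, flavor and network IDs
$ openstack image list
$ openstack flavor list
$ openstack network list
# generate a random token in kubeadm's 6.16 format
$ echo "$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c 6).$(LC_ALL=C tr -dc 'a-z0-9' < /dev/urandom | head -c 16)"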
Run Terraform
With your configuration set, you just need to kick this off. I usually do a terraform plan and give it a quick sanity test before letting it rip with a terraform apply.
$ terraform plan
<< plan output>>
$ terraform apply
… And wait. On my cloud, it takes about 5-7 minutes to complete (mostly updating packages).
At the end, you should see something like this:
... snip ...
openstack_compute_instance_v2.kube-worker.1: Creation complete
Apply complete! Resources: 10 added, 0 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: terraform.tfstate
Outputs:
master_private_ip = 192.168.50.3
master_public_ip = 10.0.2.138
sshkey = ~/.ssh/terraform
token = 123456.0123456789012345
username = ubuntu
worker_private_ip = [
192.168.50.5,
192.168.50.4
]
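Before moving on, the outputs give you everything needed to SSH into the master for a quick sanity check. The kubeconfig path below is kubeadm's default (/etc/kubernetes/admin.conf); that is an assumption on my part, so adjust it if your install differs:
$ ssh -i ~/.ssh/terraform ubuntu@10.0.2.138
ubuntu@kube-master:~$ sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes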
Configure Laptop
Now that the cluster is working, let’s set up our laptop to use the new cluster. I’ve created two scripts (post-install.sh and commands.alias) to automate this process. The first one finishes up some configuration tasks on the cluster (configuring networking, loading the dashboard and Weave Scope, etc.) and then transfers the kubectl configuration back to your laptop. At the end of the run, it will do a sanity check to make sure everything is working:
$ ./post-install.sh
... snip ...
Kubernetes master is running at https://10.0.2.138:6443
KubeDNS is running at https://10.0.2.138:6443/api/v1/proxy/namespaces/kube-system/services/kube-dns
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.6", GitCommit:"e569a27d02001e343cb68086bc06d47804f62af6", GitTreeState:"clean", BuildDate:"2016-11-12T05:22:15Z", GoVersion:"go1.7.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.1", GitCommit:"82450d03cb057bab0950214ef122b67c83fb11df", GitTreeState:"clean", BuildDate:"2016-12-14T00:52:01Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
SCRIPT COMPLETE.
The next script ($ source ./commands.alias) is just a set of convenience bash aliases (sketched after this list):
- kc runs kubectl with the newly transferred configuration file
- dashproxy sets up a tunnel to access the Kubernetes Dashboard from your laptop
- scopeproxy sets up a tunnel to access Weave Scope to visualize your cluster
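For the curious, here is a rough sketch of what those aliases might look like. The kubeconfig path is illustrative and the pod names are taken from my cluster listing below; yours will differ, so treat commands.alias itself as the source of truth.
# illustrative sketch only, the real definitions live in commands.alias
alias kc='kubectl --kubeconfig=./admin.conf'
alias dashproxy='kc port-forward --namespace=kube-system kubernetes-dashboard-3203831700-42qrk 9090:9090'
alias scopeproxy='kc port-forward weave-scope-app-1387651679-033vb 4040:4040'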
Use Your Cluster
Let’s check a few things on the new cluster with our kc alias (for kubectl):
$ kc get nodes
NAME STATUS AGE
kube-master Ready,master 4d
kube-worker-0 Ready 4d
kube-worker-1 Ready 4d
This shows the three nodes we’ve installed. A different form of the get subcommand shows all our pods:
$ kc get pods -o wide --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE
default weave-scope-agent-d6mkw 1/1 Running 0 4d 192.168.50.3 kube-master
default weave-scope-agent-tf28q 1/1 Running 0 4d 192.168.50.6 kube-worker-1
default weave-scope-agent-zq36h 1/1 Running 0 4d 192.168.50.5 kube-worker-0
default weave-scope-app-1387651679-033vb 1/1 Running 0 4d 10.44.0.1 kube-worker-0
kube-system dummy-2088944543-fr2d4 1/1 Running 0 4d 192.168.50.3 kube-master
kube-system etcd-kube-master 1/1 Running 0 4d 192.168.50.3 kube-master
kube-system kube-apiserver-kube-master 1/1 Running 0 4d 192.168.50.3 kube-master
kube-system kube-controller-manager-kube-master 1/1 Running 0 4d 192.168.50.3 kube-master
kube-system kube-discovery-1769846148-dn0c7 1/1 Running 0 4d 192.168.50.3 kube-master
kube-system kube-dns-2924299975-s49zg 4/4 Running 0 4d 10.32.0.2 kube-master
kube-system kube-proxy-fsm1c 1/1 Running 0 4d 192.168.50.5 kube-worker-0
kube-system kube-proxy-s9k7t 1/1 Running 0 4d 192.168.50.6 kube-worker-1
kube-system kube-proxy-xcbcp 1/1 Running 0 4d 192.168.50.3 kube-master
kube-system kube-scheduler-kube-master 1/1 Running 0 4d 192.168.50.3 kube-master
kube-system kubernetes-dashboard-3203831700-42qrk 1/1 Running 0 4d 10.36.0.1 kube-worker-1
kube-system weave-net-12gv6 2/2 Running 0 4d 192.168.50.3 kube-master
kube-system weave-net-d2k67 2/2 Running 0 4d 192.168.50.5 kube-worker-0
kube-system weave-net-pzrb5 2/2 Running 1 4d 192.168.50.6 kube-worker-1
You can continue to experiment with the kubectl tool to examine and manipulate your cluster.
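For example, a few more read-only commands worth trying:
$ kc describe node kube-worker-0
$ kc get deployments --all-namespaces
$ kc get events --namespace kube-system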
Two GUI management tools have also been installed on the cluster for you: Kubernetes Dashboard and Weave Scope. You can access either over a tunnel to your Kubernetes control plane with the aliases you just installed.
To view the Kubernetes dashboard, use the dashproxy alias:
$ dashproxy
Forwarding from 127.0.0.1:9090 -> 9090
Forwarding from [::1]:9090 -> 9090
Handling connection for 9090
Handling connection for 9090
Then navigate your browser to http://localhost:9090/ and you should see the dashboard.
To try Weave Scope, use the scopeproxy alias:
$ scopeproxy
Forwarding from 127.0.0.1:4040 -> 4040
Forwarding from [::1]:4040 -> 4040
Handling connection for 4040
Handling connection for 4040
Then navigate your browser to http://localhost:4040/ and you should see your new cluster layout.
To launch some actual applications on your new cluster, follow the “socks shop” example in the Kubernetes docs at Installing a sample application.
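If you want a faster smoke test than the full sock shop, a throwaway nginx deployment (hello-nginx here is just an example name) exercises the scheduler and the overlay network in three commands:
$ kc run hello-nginx --image=nginx --replicas=2 --port=80
$ kc expose deployment hello-nginx --port=80 --type=NodePort
$ kc get pods -o wide -l run=hello-nginx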
Cleanup
To clean up your entire cluster (i.e. delete everything you just created), use the terraform destroy command.
$ terraform destroy
Do you really want to destroy?
Terraform will delete all your managed infrastructure.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
openstack_networking_router_v2.kube: Refreshing state... (ID: 08e02e39-6601-4a96-af3a-3fb1b9d8a044)
openstack_compute_secgroup_v2.kube: Refreshing state... (ID: 9c9be179-edc3-4b13-ba38-7faa5338adbe)
openstack_compute_keypair_v2.kube: Refreshing state... (ID: SSH keypair for kube instances)
... snip ....
openstack_networking_router_v2.kube: Destruction complete
openstack_networking_network_v2.kube: Destruction complete
Destroy complete! Resources: 10 destroyed.
Everything that has been created should now be gone (subnet, routers and all). If you get an error, try running the terraform destroy command again.
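If you have the openstack CLI installed, you can also double-check that nothing was left behind:
$ openstack server list
$ openstack network list
$ openstack router list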