Kubernetes local playground alternatives

Introduction

In this article we will explore different alternatives for spinning up a local cluster for testing, practicing, or just developing an application.

The source code and/or documentation of the projects that we will be testing are listed here:

- minikube
- KIND
- Kubernetes with kubeadm using vagrant
- Kubernetes the hard way using vagrant

There are more alternatives, like MicroK8s, but I will leave those as an exercise for the reader.

If you want to give each one a try, make sure to follow its recommended installation method, or install it the way your distro/system prefers.

For the first two (minikube and KIND) we will see how to configure a CNI plugin in order to be able to use Network Policies. In the other two environments you can customize everything; they are best suited for learning rather than daily usage, although if you have enough RAM you could use them daily as well.

We will be using the following pods and network policy to test that everything works. We will create three pods: one client and two app backends. One backend will listen on port TCP/1111 and the other on port TCP/2222, and our network policy will only allow the client to connect to app1:
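The original manifests are not reproduced here, but a minimal sketch could look like the following (the pod names, labels, and busybox image are assumptions; a default-deny ingress policy plus a single allow rule gives the "only the client can reach app1" behavior described above):

```yaml
# manifests.yaml -- app1 listens on TCP/1111, app2 on TCP/2222,
# client is the pod we run the tests from.
apiVersion: v1
kind: Pod
metadata:
  name: app1
  labels:
    app: app1
spec:
  containers:
    - name: app1
      image: busybox
      command: ["sh", "-c", "while true; do nc -l -p 1111 -e /bin/echo hello-from-app1; done"]
---
apiVersion: v1
kind: Pod
metadata:
  name: app2
  labels:
    app: app2
spec:
  containers:
    - name: app2
      image: busybox
      command: ["sh", "-c", "while true; do nc -l -p 2222 -e /bin/echo hello-from-app2; done"]
---
apiVersion: v1
kind: Pod
metadata:
  name: client
  labels:
    app: client
spec:
  containers:
    - name: client
      image: busybox
      command: ["sleep", "3600"]
---
# Deny all ingress traffic by default...
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# ...then allow only client -> app1 on TCP/1111.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-client-to-app1
spec:
  podSelector:
    matchLabels:
      app: app1
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: client
      ports:
        - protocol: TCP
          port: 1111
```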

If you want to learn more about netcat and friends, go to: Cat and friends: netcat and socat.

Minikube

Minikube is heavily used, but it can be too heavy sometimes. In any case, we will see an example of making it work with network policies. The good thing is that, since a lot of people use it, it has a lot of documentation and is updated often:
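A sketch of what starting it could look like, assuming a reasonably recent minikube (Cilium is chosen here to match the docs linked below):

```bash
# New minikube versions: enable a NetworkPolicy-capable CNI at start time.
minikube start --cni=cilium

# Older versions: start with the CNI plugin enabled and install the CNI
# manifests yourself afterwards (see the Cilium docs linked below).
minikube start --network-plugin=cni
```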

Give it a couple of minutes to start. New versions of minikube can enable the CNI for you, as shown above; otherwise you can specify that you will install the CNI plugin yourself and then just install its manifests.
Then let's validate that it works:
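Something along these lines, reusing the pod names from the sketch above:

```bash
kubectl apply -f manifests.yaml
kubectl wait --for=condition=Ready pod --all --timeout=120s

# Pods aren't resolvable by name without a Service, so grab their IPs.
APP1=$(kubectl get pod app1 -o jsonpath='{.status.podIP}')
APP2=$(kubectl get pod app2 -o jsonpath='{.status.podIP}')

# Allowed by the policy: prints the app1 banner.
kubectl exec client -- timeout 5 nc "$APP1" 1111
# Blocked by the policy: gives up after 5 seconds.
kubectl exec client -- timeout 5 nc "$APP2" 2222
```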

Note that we wrap the command in timeout with a 5 second limit so we don't have to wait for nc's own timeout, which by default is none; we also tested with nc's -w timeout flag.

You can get more info about using Cilium with minikube in their docs.

Remember to clean up:
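```bash
minikube delete
```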
KIND

KIND is really lightweight and fast. I usually test and develop using KIND; the main reason is that almost everything works like in a real cluster, but with no overhead, and it's simple to install and easy to run. First we need to put this config in place to tell KIND not to use its default CNI:
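Something like this, using KIND's documented disableDefaultCNI setting:

```yaml
# kind-config.yaml -- disable kind's default CNI so we can install our own.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true
```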

Then we can create the cluster and install Calico. There is a small gotcha here: check that the calico-node pods come up; if they don't, kill them and they should come up and everything will start working normally. This is due to the environment variable that gets added after the deployment for Calico to work with KIND:
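A sketch of those steps; the Calico manifest URL/version and the FELIX_IGNORELOOSERPF variable are assumptions based on Calico's docs for KIND:

```bash
kind create cluster --config kind-config.yaml

# Install Calico (pin whatever version you need).
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# KIND gotcha: calico-node needs FELIX_IGNORELOOSERPF=true; pods created
# before the variable is set may stay NotReady, so delete them and let the
# DaemonSet recreate them with the new environment.
kubectl -n kube-system set env daemonset/calico-node FELIX_IGNORELOOSERPF=true
kubectl -n kube-system delete pods -l k8s-app=calico-node
```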

You can check more config options for KIND here.

Validation
Testing again:
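Same checks as before:

```bash
kubectl apply -f manifests.yaml
kubectl wait --for=condition=Ready pod --all --timeout=120s

# Allowed by the policy vs. blocked by the policy.
kubectl exec client -- timeout 5 nc "$(kubectl get pod app1 -o jsonpath='{.status.podIP}')" 1111
kubectl exec client -- timeout 5 nc "$(kubectl get pod app2 -o jsonpath='{.status.podIP}')" 2222
```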
Kubeadm and vagrant

This is an interesting scenario: it's great for understanding how clusters are configured using kubeadm, and for practicing things such as adding/removing/upgrading nodes, backing up and restoring etcd, etc. If you want to test this one, clone this repo: Kubernetes with kubeadm using vagrant.
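Assuming the repo follows the standard Vagrant workflow (check its README for specifics):

```bash
# From inside the cloned repo, bring up the control-plane and worker VMs.
vagrant up
```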

Next, let's copy the kubeconfig, deploy our resources, and then test (this deployment uses Weave):
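A sketch of that step; the VM name and kubeconfig path are assumptions (kubeadm's default admin.conf), and the repo's scripts may already handle this for you:

```bash
# Copy the admin kubeconfig out of the control-plane VM.
vagrant ssh controlplane -c "sudo cat /etc/kubernetes/admin.conf" > kubeconfig
export KUBECONFIG=$PWD/kubeconfig

# Deploy the test pods and the network policy (Weave is already the CNI here).
kubectl apply -f manifests.yaml
```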
Test it (wait until the pods are in the Ready state):
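For example:

```bash
kubectl wait --for=condition=Ready pod --all --timeout=180s
kubectl exec client -- timeout 5 nc "$(kubectl get pod app1 -o jsonpath='{.status.podIP}')" 1111  # allowed
kubectl exec client -- timeout 5 nc "$(kubectl get pod app2 -o jsonpath='{.status.podIP}')" 2222  # blocked
```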
For more info refer to the README in the repo and the scripts in there; it should be straightforward to follow and reproduce. Remember to clean up:
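```bash
vagrant destroy -f
```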
Kubernetes the hard way and vagrant

This is probably the most complex scenario, and it's purely educational: you basically get to generate all the certificates by hand and configure everything yourself (see the original repo for instructions on how to do that in gcloud if you are interested). If you want to test this one, clone this repo: Kubernetes the hard way using vagrant, but be patient and ready to debug if something doesn't go well.
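As before, assuming the standard Vagrant workflow:

```bash
# From inside the cloned repo; expect this one to take a while.
vagrant up
```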

Validation:
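A couple of smoke tests, assuming the repo's scripts left you with a working kubeconfig:

```bash
kubectl get componentstatuses
kubectl get nodes
```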

Install the manifests and test it:
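Same drill as in the previous environments:

```bash
kubectl apply -f manifests.yaml
kubectl wait --for=condition=Ready pod --all --timeout=180s
kubectl exec client -- timeout 5 nc "$(kubectl get pod app1 -o jsonpath='{.status.podIP}')" 1111  # allowed
kubectl exec client -- timeout 5 nc "$(kubectl get pod app2 -o jsonpath='{.status.podIP}')" 2222  # blocked
```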
Clean up
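Again, the usual Vagrant teardown:

```bash
vagrant destroy -f
```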
Wrap up

Each alternative has its use case; test each one and pick the one that best fits your needs.

Clean up

Remember to clean up to free up some resources on your machine.
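For reference, the teardown commands for each environment covered above:

```bash
minikube delete        # minikube
kind delete cluster    # KIND
vagrant destroy -f     # run inside each vagrant repo checkout
```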

Errata

If you spot any error or have any suggestion, please send me a message so it gets fixed.

Also, you can check the source code and the changes in the generated code and the sources here.