rkube: Rancher2 Kubernetes cluster on a single VM using RKE

There are many solutions for running a complete Kubernetes cluster in a VM on your machine: minikube, microk8s, or even plain kubeadm. So, embarking on what others have done before me, I wanted to do the same with RKE, mostly because I have been working with Rancher2 lately and want to experiment on VirtualBox without remorse.

Enter rkube (the name directly inspired by minikube and rke). It does not do the many things that minikube does, but it is closer to my work environments.

We use vagrant to boot an Ubuntu Bionic box with 4G RAM and 2 CPUs. The machine is provisioned with ansible_local, which installs docker from the Ubuntu archives (version 17 for Bionic). If you need a newer version, check the docker documentation and modify ansible.yml accordingly.
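
The relevant part of the Vagrantfile is along these lines (a sketch, not a verbatim copy of the repository's file):

Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"

  # 4G RAM / 2 CPUs for the single-node cluster
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096
    vb.cpus = 2
  end

  # provision from inside the guest with ansible_local
  config.vm.provision "ansible_local" do |ansible|
    ansible.playbook = "ansible.yml"
  end
end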

Once the machine boots up and is provisioned, it is ready for use. You will find the kubectl configuration file named kube_cluster_config.yml installed in the cloned repository directory. You can now run a simple echo server with:

kubectl --kubeconfig kube_cluster_config.yml apply -f echo.yml
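
The echo.yml manifest is roughly along these lines; the image, names and apiVersions here are assumptions for illustration, not necessarily what ships in the repository:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
        - name: echo
          image: k8s.gcr.io/echoserver:1.10
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: echo
spec:
  selector:
    app: echo
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo
spec:
  rules:
    - http:
        paths:
          - path: /echo
            backend:
              serviceName: echo
              servicePort: 80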

Check that the cluster is deployed with:

kubectl --kubeconfig kube_cluster_config.yml get pod
kubectl --kubeconfig kube_cluster_config.yml get deployment
kubectl --kubeconfig kube_cluster_config.yml get svc
kubectl --kubeconfig kube_cluster_config.yml get ingress

and you can visit the echo server at http://192.168.98.100/echo. Ignore the SSL error; we have not created a specific SSL certificate for the Ingress controller yet.
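
From the command line, something along these lines should answer back (-L follows a possible redirect to HTTPS, -k skips certificate verification for the default self-signed certificate):

curl -kL http://192.168.98.100/echo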

You can change the IP address used to reach the RKE VM in the Vagrantfile.
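
The line to look for is something like the following, assuming a private_network is how the address is wired up:

config.vm.network "private_network", ip: "192.168.98.100"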

Suppose you now want to upgrade the Kubernetes version. vagrant ssh into the VM, run rke config -l -s -a and pick the new version you want to install; look for the images named hyperkube. Then edit /vagrant/cluster.yml and run rke up --config /vagrant/cluster.yml.
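
Inside the VM the whole exchange looks roughly like this; the kubernetes_version value below is only an example, pick one listed by rke config:

rke config -l -s -a                   # list the system images per kubernetes version
vi /vagrant/cluster.yml               # set e.g. kubernetes_version: "v1.14.6-rancher1-1"
rke up --config /vagrant/cluster.yml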

Note that thanks to vagrant’s niceties, the /vagrant directory within the VM is the directory you cloned the repository into.

I developed the whole thing on Windows 10, so it should run just about anywhere. I hope you like it, and that you help me make it a bit better if you find it useful.

You can browse rkube here

resizing a vagrant box disk

[ I am about to do what others have done before me and blog about it one more time ]

While I do enjoy working with Windows 10, I am still not using WSL (waiting for WSL2) and work with either chocolatey or a vagrant Ubuntu box. It so happens that after pulling a few docker images the default 10G disk is full and you cannot work anymore. So, let's resize the disk:

The disk on my ubuntu/bionic64 box is a VMDK one. So before resizing, we need to convert it to VDI, which is easier for VirtualBox to handle:

VBoxManage clonehd .\ubuntu-bionic-18.04-cloudimg.vmdk .\ubuntu-bionic-18.04-cloudimg.vdi --format vdi

Now we can resize it, to say 20G:

VBoxManage modifymedium disk .\ubuntu-bionic-18.04-cloudimg.vdi --resize 20000

We're almost there. We now need to tell vagrant to boot from the VDI disk. To do so, open VirtualBox and visit the storage settings of the vagrant VM. Remove the VMDK disk(s) there and add the VDI on the SCSI0 port. That's it; we're one step closer. Close VirtualBox and vagrant up to boot from the VDI.
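
If you prefer to stay on the command line, the same swap can be done with VBoxManage storageattach; the VM name and the controller name below are placeholders, check yours with VBoxManage list vms and VBoxManage showvminfo:

VBoxManage storageattach "my-vagrant-vm" --storagectl "SCSI" --port 0 --device 0 --type hdd --medium .\ubuntu-bionic-18.04-cloudimg.vdi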

Now you have a 20G disk, but still a 10G partition. parted to the rescue:

$ sudo parted /dev/sda
(parted) resizepart 

It will ask you for the partition number; answer 1 (which is /dev/sda1). It will then ask for the end of the partition; answer -1 (which means up to the end of the disk). quit and you're out.
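
Filling in the answers, the session continues more or less like this (the prompt wording varies slightly between parted versions):

Partition number? 1
End? -1
(parted) quit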

You have changed the partition size, but the filesystem still reports the old size. resize2fs (assuming a compatible filesystem) to the rescue:

$ sudo resize2fs /dev/sda1
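
You can confirm the new size with:

$ df -h /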

Now you’re done. You may want to vagrant reload to check whether everything works fine. Once you’re sure of that you can delete the old VMDK disk.

vagrant, ansible local and docker

This is a minor annoyance for people who want to work with docker on their vagrant boxes and provision them with the ansible_local provisioner.

To have docker installed in your box, you simply need to enable the docker provisioner in your Vagrantfile:

config.vm.provision "docker", run: "once"

Since you're using the ansible_local provisioner, you might skip this and write a task that installs docker from get.docker.com or wherever suits you, but I prefer this approach, as vagrant knows best how to install docker onto its own boxes.
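
For reference, such a task could look more or less like this sketch; the script location and the creates path are assumptions:

- name: fetch the docker convenience script
  get_url:
    url: https://get.docker.com
    dest: /tmp/get-docker.sh
    mode: '0755'

- name: run the docker convenience script
  become: true
  command: /tmp/get-docker.sh
  args:
    creates: /usr/bin/docker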

Now, obviously, you could have the provisioner pull images for you too, but for whatever reason you want to hand most, if not all, of the provisioning over to ansible. And thus you want to use, among others, the docker_image module. So you write something like:

- name: install python-docker
  become: true
  apt:
    name: python-docker
    state: present

- name: install docker images
  docker_image:
    name: busybox

Well, this is going to greet you with an error message when you up the machine for the first time:

Error message

TASK [install docker images] ***************************************************
fatal: [default]: FAILED! => {"changed": false, "msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))"}
	to retry, use: --limit @/vagrant/ansible.retry

Whereas when you run vagrant provision right away afterwards, it happily succeeds:

TASK [install docker images] ***************************************************
changed: [default]

Why does this happen? Because even though the docker installation makes the vagrant user a member of the docker group, the membership only becomes effective at the next login.
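
You can see this for yourself inside the box right after the first boot: the running session does not have the group yet, while the account already does:

$ id          # docker is not in the list for the current session
$ id vagrant  # docker shows up here, and takes effect on the next login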

The quickest way to bypass this is to run that part of your first ansible provisioning as the super user:

- name: install docker images
  become: true  
  docker_image:
    name: busybox

I am using the docker_image module only as an example here, for lack of a better example with other docker modules on a Saturday morning; pulling images is of course something the vagrant docker provisioner itself does very easily. With become: true in place, provisioning now completes without complaints:

default: Running ansible-playbook…

PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
ok: [default]

TASK [install python-docker] ***************************************************
changed: [default]

TASK [install docker images] ***************************************************
changed: [default]

PLAY RECAP *********************************************************************
default : ok=3 changed=2 unreachable=0 failed=0
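
And for completeness: if you do want vagrant's docker provisioner to pull the images for you, it accepts an images list in the Vagrantfile:

config.vm.provision "docker", images: ["busybox"], run: "once"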

Vagrant, ansible_local and pip

I try to provision my Vagrant boxes with the ansible_local provisioner. The other day I was using the pip ansible module while booting a box, but kept getting errors while installing packages. It turned out that the pip version shipped with the freshly created virtual environment needed an upgrade. Sure, you can run pip install pip --upgrade from the command line, but how do you do that within a playbook? Pretty easy, it seems:

---
- hosts: all
  tasks:
    - name: create the needed virtual environment and upgrade pip
      pip:
        chdir: /home/vagrant
        virtualenv: work
        virtualenv_command: /usr/bin/python3 -mvenv
        name: pip
        extra_args: --upgrade

    - name: now install the requirements
      pip:
        chdir: /home/vagrant
        virtualenv: work
        virtualenv_command: /usr/bin/python3 -mvenv
        requirements: /vagrant/requirements.txt

(Link to pastebin here in case the YAML above does not render correctly for you.)
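
Once the box is up, a quick way to verify that the virtual environment really got the newer pip (the path follows the chdir and virtualenv values above) is:

vagrant ssh -c "/home/vagrant/work/bin/pip --version"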

I hope it helps you too.