sometimes you need to change your docker network

I was working locally with some containers running over docker-compose. Everything was OK, except I could not access a specific service within our network. Here is the issue: docker by default assigns container addresses from the 172.17.0.0/16 pool, and docker-compose carves its networks out of the same RFC 1918 space. It just so happened that what I needed to access lived in an overlapping range. So what to do to overcome the nuisance? Obviously you cannot renumber a whole network because of some temporary IP overlap. Let's abuse reserved IP space instead. Here is the relevant part of my daemon.json now:

  "default-address-pools": [
    { "base": "192.0.2.0/24", "size": 28 },
    { "base": "198.51.100.0/24", "size": 28 },
    { "base": "203.0.113.0/24", "size": 28 }
  ]

According to RFC 5737, the blocks 192.0.2.0/24 (TEST-NET-1), 198.51.100.0/24 (TEST-NET-2) and 203.0.113.0/24 (TEST-NET-3) are reserved for documentation purposes. I'd say local work is close enough to documentation to warrant the abuse, since we also adhere to the operational implication that these addresses never appear on the public Internet. Plus I wager that most people, while they always remember the classic RFC 1918 addresses, seldom take TEST-NET-1 and friends into account.
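To see why these pools avoid the clash, here is a quick sketch with Python's ipaddress module (the 172.18.x addresses below are hypothetical stand-ins for a typical compose subnet and a clashing LAN, not the actual addresses from my network):

```python
import ipaddress

# A typical subnet docker-compose would carve out of the default pools,
# versus one /28 carved out of a TEST-NET-1 based pool.
compose_default = ipaddress.ip_network("172.18.0.0/16")
corporate_lan = ipaddress.ip_network("172.18.5.0/24")  # hypothetical LAN that clashed
test_net_pool = ipaddress.ip_network("192.0.2.0/24")

print(compose_default.overlaps(corporate_lan))  # True: the clash described above
print(test_net_pool.overlaps(corporate_lan))    # False: no clash any more

# "size": 28 means each compose network gets a /28; one /24 base yields 16 of them.
subnets = list(test_net_pool.subnets(new_prefix=28))
print(len(subnets), subnets[0])  # 16 192.0.2.0/28
```

Sixteen /28 networks per reserved /24 is plenty for a local compose setup.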

Sometimes you need JDK8 in your docker image …

… and depending on the base image this may not be available, even from the ppa repositories. Moreover, you cannot download it directly from Oracle without a username / password any more, so this may break automation if you do not host your own repositories. Have no fear: sdkman can come to the rescue and fetch a supported JDK8 build for you. You need to pay some extra care to JAVA_HOME and PATH though:

FROM python:3.6-slim
RUN apt-get update && apt-get upgrade -y
RUN apt-get install -y curl zip unzip
# The local filename for the installer script is arbitrary
RUN curl -fsSL -o sdkman-install.sh "https://get.sdkman.io"
RUN sh ./sdkman-install.sh
RUN bash -c "source /root/.sdkman/bin/sdkman-init.sh && sdk install java 8.0.212-amzn"
ENV JAVA_HOME=/root/.sdkman/candidates/java/current
ENV PATH=${JAVA_HOME}/bin:${PATH}
CMD bash

Since curl | sudo bash is a topic of contention, do your homework and decide whether you’re going to use it as displayed in the Dockerfile above, or employ some extra verification mechanism of your choice.
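One such verification mechanism is pinning the downloaded script to a known checksum before executing it. A minimal sketch of the idea (the script bytes and the pinned hash below are placeholders, not sdkman's real installer or hash):

```python
import hashlib

def verify_download(data: bytes, expected_sha256: str) -> bool:
    """Compare the SHA-256 of downloaded bytes against a pinned value."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# In a real build you would fetch the script, verify it, and only then run it.
script = b"#!/bin/sh\necho install\n"        # stand-in for the downloaded installer
pinned = hashlib.sha256(script).hexdigest()  # normally hard-coded in the Dockerfile

print(verify_download(script, pinned))                # True: safe to execute
print(verify_download(script + b"tampered", pinned))  # False: refuse to run
```

In a Dockerfile the same effect is usually achieved with `sha256sum -c` between the curl and the sh steps.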

minikube instead of docker-machine

docker-machine is a cool project that allows you to work with different docker environments, especially when your machine or OS (like Windows 10 Home) does not allow for native docker. My primary use case for docker-machine was spinning up a VirtualBox host and using it to run the docker service.

It seems that docker-machine is no longer under very active development, and I decided to switch to another project for my particular use case (Windows 10 Home with VirtualBox). One solution is minikube:

minikube start
minikube docker-env | Invoke-Expression # Windows PowerShell
eval $(minikube docker-env) # bash shell
docker pull python:3
docker images

and you’re set to go.
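For the curious, `minikube docker-env` just prints export statements that point your local docker client at the VM's daemon, and eval / Invoke-Expression loads them into your shell. A rough parser shows the idea (the sample output below is illustrative, not captured from a real run):

```python
# Parse the export-style output of `minikube docker-env` into a dict,
# which is what eval/Invoke-Expression does by mutating your shell environment.
sample = """\
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.100:2376"
export DOCKER_CERT_PATH="/home/user/.minikube/certs"
"""

def parse_docker_env(text):
    env = {}
    for line in text.splitlines():
        if line.startswith("export "):
            key, _, value = line[len("export "):].partition("=")
            env[key] = value.strip('"')
    return env

env = parse_docker_env(sample)
print(env["DOCKER_HOST"])  # the VM's daemon, not a (non-existent) local one
```

Once those variables are set, every plain `docker` command in that shell talks to the minikube VM.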

ansible, docker-compose, iptables and DOCKER-USER

NOTE: manipulating DOCKER-USER is beyond anyone's sanity. The information below seemed to work sometimes (like when I wrote the post) and other times not. That is why you will find posts with similar advice on the net that may or may not work for you. I plan to revisit this and figure out what is wrong, so treat the following information as only temporarily correct.

When you want to run ZooNavigator, the recommendation to get you started is via this docker-compose.yml. However, Docker manages your iptables (unless you go the --iptables=false way) and certain ports will be left wide open. This may not be what you want. Docker provides the DOCKER-USER chain for user defined rules that are not affected by service restarts, and this is where you want to work. Most of my googling resulted in recipes that did not work, because their final rule denied everything else only after having allowed whatever was to be whitelisted. I solved this in the following example playbook, and the rules worked like a charm. Others who find themselves in the same situation may want to give it a shot:

- name: maintain the DOCKER-USER access list
  hosts: zoonavigators
  vars:
    wl_hosts:
      - ""
      - ""
    wl_ports:
      - "7070"
      - "7071"

  tasks:
  - name: check for iptables-services
    yum:
      name: iptables-services
      state: latest

  - name: enable iptables-services
    service:
      name: iptables
      enabled: yes
      state: started

  - name: flush DOCKER-USER
    iptables:
      chain: DOCKER-USER
      flush: true

  - name: whitelist for DOCKER-USER
    iptables:
      chain: DOCKER-USER
      protocol: tcp
      ctstate: NEW
      syn: match
      source: "{{ item[0] }}"
      destination_port: "{{ item[1] }}"
      jump: ACCEPT
    with_nested:
      - "{{ wl_hosts }}"
      - "{{ wl_ports }}"

  - name: drop non whitelisted connections to DOCKER-USER
    iptables:
      chain: DOCKER-USER
      protocol: tcp
      #source: ""
      destination_port: "{{ item }}"
      jump: DROP
    with_items:
      - "{{ wl_ports }}"

  - name: save new iptables
    command: /usr/libexec/iptables/iptables.init save

The commented-out source in the drop task is the key. The obvious choice would have been to set an explicit catch-all source there, but this did not work for me.

[pastebin here]
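The reason rule order matters so much here is that iptables chains are first-match: once a packet hits an ACCEPT it never reaches a later DROP. A toy simulation of that evaluation (pure Python with hypothetical rule tuples and addresses, no real netfilter involved):

```python
# Each rule: (source or None for "any", dest_port or None for "any", verdict).
# The first matching rule wins, like packet traversal of an iptables chain.
def evaluate(rules, src, dport, default="ACCEPT"):
    for rule_src, rule_port, verdict in rules:
        if (rule_src is None or rule_src == src) and \
           (rule_port is None or rule_port == dport):
            return verdict
    return default

docker_user = [
    ("10.0.0.5", 7070, "ACCEPT"),  # whitelisted host/port pairs first...
    ("10.0.0.5", 7071, "ACCEPT"),
    (None, 7070, "DROP"),          # ...then drop everyone else on those ports
    (None, 7071, "DROP"),
]

print(evaluate(docker_user, "10.0.0.5", 7070))     # ACCEPT: whitelisted
print(evaluate(docker_user, "203.0.113.9", 7070))  # DROP: not whitelisted
print(evaluate(docker_user, "203.0.113.9", 2181))  # ACCEPT: port not covered by the chain
```

Invert the order (DROP rules first) and the whitelisted host never gets through, which is exactly the failure mode of the recipes that did not work.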

vagrant, ansible local and docker

This is a minor annoyance to people who want to work with docker on their vagrant boxes and provision them with the ansible_local provisioner.

To have docker installed in your box, you simply need to enable the docker provisioner in your Vagrantfile:

config.vm.provision "docker", run: "once"

Since you're using the ansible_local provisioner, you might skip this and write a task that installs docker from wherever it suits you, but I prefer this as vagrant knows how to best install docker onto itself.

Now obviously you can have the provisioner pull images for you, but for whatever crazy reason you want to pass most, if not all, of the provisioning to ansible. And thus you want to use, among others, the docker_image module. So you write something like:

- name: install python-docker
  become: true
  package:
    name: python-docker
    state: present

- name: install docker images
  docker_image:
    name: busybox

Well, this is going to greet you with an error message when you up the machine for the first time:

Error message

TASK [install docker images] ***************************************************
fatal: [default]: FAILED! => {"changed": false, "msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))"}
	to retry, use: --limit @/vagrant/ansible.retry

Whereas when you happily run vagrant provision right away:

TASK [install docker images] ***************************************************
changed: [default]

Why does this happen? Because even though the installation of docker makes the vagrant user a member of the docker group, the membership only becomes effective with the next login.
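In other words, the failure is about the running session, not the configuration on disk. A small sketch of that distinction (a hypothetical helper, not how the docker client is actually implemented):

```python
# configured_groups mimics /etc/group after the docker install;
# session_groups mimics the groups the provisioning session actually holds.
def can_use_docker_socket(user, session_groups, configured_groups):
    # configured_groups is deliberately ignored: what /etc/group says
    # only applies to sessions started after the change.
    return "docker" in session_groups.get(user, []) or user == "root"

configured = {"vagrant": ["vagrant", "docker"]}  # membership already written...
session = {"vagrant": ["vagrant"]}               # ...but the session predates it

print(can_use_docker_socket("vagrant", session, configured))  # False: Permission denied
print(can_use_docker_socket("root", session, configured))     # True: become: true works
```

Which is why re-running `vagrant provision` (a fresh session) succeeds, and why `become: true` sidesteps the problem entirely.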

The quickest way to bypass this is to run that part of your first ansible provisioning as the super user:

- name: install docker images
  become: true
  docker_image:
    name: busybox

I am using the docker_image module only as an example here, for lack of a better example with other docker modules on a Saturday morning. Pulling images is of course something very easy to do with the vagrant docker provisioner itself.

default: Running ansible-playbook…

PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
ok: [default]

TASK [install python-docker] ***************************************************
changed: [default]

TASK [install docker images] ***************************************************
changed: [default]

PLAY RECAP *********************************************************************
default : ok=3 changed=2 unreachable=0 failed=0