ansible, docker-compose, iptables and DOCKER-USER

NOTE: manipulating DOCKER-USER is beyond anyone’s sanity. The information below seemed to work when I wrote the post and at other times not. That is why you will find posts with similar advice on the Net that may or may not work for you. I plan to revisit this and figure out what is wrong, which makes the following information only temporarily correct.

When you want to run ZooNavigator, the recommended way to get started is via this docker-compose.yml. However, Docker manages your iptables rules (unless you go the --iptables=false way) and certain ports will be left wide open, which may not be what you want. Docker provides the DOCKER-USER chain for user-defined rules that are not affected by service restarts, and this is where you want to work. Most of my googling turned up recipes that did not work, because their final rule denied everything after having allowed whatever was to be whitelisted. I solved this in the following example playbook, and the rules worked like a charm. Others who find themselves in the same situation may want to give it a shot:

- name: maintain the DOCKER-USER access list
  hosts: zoonavigators
  vars:
    wl_hosts:
      - ""
      - ""
    wl_ports:
      - "7070"
      - "7071"

  tasks:

  - name: check for iptables-services
    yum:
      name: iptables-services
      state: latest

  - name: enable iptables-services
    service:
      name: iptables
      enabled: yes
      state: started

  - name: flush DOCKER-USER
    iptables:
      chain: DOCKER-USER
      flush: true

  - name: whitelist for DOCKER-USER
    iptables:
      chain: DOCKER-USER
      protocol: tcp
      ctstate: NEW
      syn: match
      source: "{{ item[0] }}"
      destination_port: "{{ item[1] }}"
      jump: ACCEPT
    with_nested:
      - "{{ wl_hosts }}"
      - "{{ wl_ports }}"

  - name: drop non whitelisted connections to DOCKER-USER
    iptables:
      chain: DOCKER-USER
      protocol: tcp
      #source: ""
      destination_port: "{{ item }}"
      jump: DROP
    with_items:
      - "{{ wl_ports }}"

  - name: save new iptables
    command: /usr/libexec/iptables/iptables.init save

The final drop task is the key. The obvious choice would have been source: "" but this did not work for me.
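For the record, the rules such a playbook ends up installing look roughly like the following (a sketch; the hosts and exact match options depend on your variables and iptables version):

```
iptables -F DOCKER-USER
iptables -A DOCKER-USER -p tcp -m conntrack --ctstate NEW --syn -s <whitelisted host> --dport 7070 -j ACCEPT
iptables -A DOCKER-USER -p tcp -m conntrack --ctstate NEW --syn -s <whitelisted host> --dport 7071 -j ACCEPT
iptables -A DOCKER-USER -p tcp --dport 7070 -j DROP
iptables -A DOCKER-USER -p tcp --dport 7071 -j DROP
```

Traffic that matches none of these falls through to the chain’s default RETURN, so Docker’s own forwarding rules still apply to everything you did not explicitly touch.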

[pastebin here]


Let’s do a Koch snowflake

Good friend Dimitris, after reading my previous post, pointed me to Koch snowflakes. How cool is a line of infinite length that encloses a finite surface! A Koch snowflake turns out to be easily constructed with turtle, as the Wikipedia article suggests. Well, you also get to learn about Thue-Morse sequences and evil numbers in the process. To be honest, this is also a good toy case, using a real sequence, for learning how to use yield.

Koch snowflake
import turtle
import functools

# Compute the digits of the Thue-Morse sequence, one at a time.
# Learn about evil and odious numbers in the process.

def thue_morse_seq(n=0):
  while True:
    yield functools.reduce(lambda x, y: x + y, map(int, bin(n)[2:])) % 2
    n += 1

if __name__ == "__main__":

  window = turtle.Screen()
  window.bgcolor('light gray')

  pen = turtle.Turtle()
  pen.color('dark blue')
  pen.setpos(0, 0)

  n = thue_morse_seq(0)
  while True:
    if next(n) == 0:
      pen.forward(2)  # on 0: move ahead
    else:
      pen.left(60)    # on 1: rotate counterclockwise by 60 degrees

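The generator can be sanity-checked without turtle: the Thue-Morse digit for n is just the parity of the set bits of n, so the first eight terms should be 0, 1, 1, 0, 1, 0, 0, 1. A quick check:

```python
import functools
import itertools

def thue_morse_seq(n=0):
    # Yield the parity of the binary digit sum of n, n+1, ...:
    # the Thue-Morse sequence.
    while True:
        yield functools.reduce(lambda x, y: x + y, map(int, bin(n)[2:])) % 2
        n += 1

first_eight = list(itertools.islice(thue_morse_seq(), 8))
print(first_eight)  # [0, 1, 1, 0, 1, 0, 0, 1]
```

The indices where the sequence yields 1 are the odious numbers; those where it yields 0 are the evil ones.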

Rule 110 with Turtle

There is an online Python playground that I like a lot, and from time to time I use it for 10-liners instead of my command line. Especially when I do not want to install a brand new language implementation, or even create a new Python environment for five minutes.

I was thinking about Rule 110 and how most of the examples I’ve seen from hobbyists are ASCII based, as opposed to the nicer, more proper graphics by researchers in the CA area. And I was wondering whether Turtle could be used to display it a bit better in one’s spare time. That is because I still have not figured out how TkInter works with it. It turns out that you can make something nice with Turtle:

Rule 110 elementary cellular automaton

I have pasted the rather rudimentary, but ready to run, Python code here.
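The update rule itself fits in a few lines: the three-cell neighbourhood forms a 3-bit index into the number 110, whose binary digits encode the next state of the cell. A minimal sketch (the names are mine, not from the pasted code), with cells beyond the edges treated as zero:

```python
RULE = 110  # 0b01101110: maps each 3-bit neighbourhood to the next state

def step(cells):
    """Compute one generation of Rule 110 from a list of 0/1 cells."""
    out = []
    for i in range(len(cells)):
        left = cells[i - 1] if i > 0 else 0
        right = cells[i + 1] if i < len(cells) - 1 else 0
        pattern = (left << 2) | (cells[i] << 1) | right
        out.append((RULE >> pattern) & 1)
    return out

# A single 1 on the right edge grows the familiar triangular pattern:
row = [0] * 9 + [1]
for _ in range(5):
    print("".join(".#"[c] for c in row))
    row = step(row)
```

Feeding each generation to a Turtle that stamps a square per live cell is then just a matter of bookkeeping.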

The Doomsday Machine: Confessions of a Nuclear War Planner

A strange game. The only winning move is not to play. How about a nice game of chess? (Joshua)

The Doomsday Machine: Confessions of a Nuclear War Planner by Daniel Ellsberg

My rating: 4 of 5 stars

It took me a long time to finish the book. It contains too much information to digest in one go, and I needed the occasional back and forth to remind myself of things I had read in previous chapters.

If you ever wonder how one prepares for a nuclear conflict (target selection, actually starting such a war, its extent, even the tactical deployment), along with the many problems that may arise in the process, this is a book you need to read.

Fatalities after a nuclear strike according to plan

Humanity, it seems, has averted omnicide a number of times: not only during the Cuban crisis, which is detailed, but on a number of other occasions as well. You get a glimpse of the many different stakeholders during planning and deployment, the conflicting interests, the expected numbers of deaths, and presidential and other civilian and military views on the matter. A highly illuminating book, and one I came across by pure chance.

View all my reviews

The second system effect

The second system effect is the observation by Fred Brooks that, following the success of a system, the architect is doomed to fail with the next one, into which they will put all the features and whatnot that were left out of the first one. Because now they know how to do it.

I was reminded of this directly while reading Looking Back at Postgres, where Joseph M. Hellerstein makes exactly that observation:

The highest-order lesson I draw comes from the fact that Postgres defied Fred Brooks’ “Second System Effect”. Brooks argued that designers often follow up on a successful first system with a second system that fails due to being overburdened with features and ideas. Postgres was Stonebraker’s second system, and it was certainly chock full of features and ideas. Yet the system succeeded in prototyping many of the ideas, while delivering a software infrastructure that carried a number of the ideas to a successful conclusion.

Mike Stonebraker‘s first system was Ingres. He worked on both Ingres and Postgres while at Berkeley. He later moved to MIT to continue doing interesting database related stuff. Here is what Hellerstein writes at the end of the paper:

Another lesson is that a broad focus—“one size fits many”—can be a winning approach for both research and practice. To coin some names, “MIT Stonebraker” made a lot of noise in the database world in the early 2000s that “one size doesn’t fit all.” Under this banner he launched a flotilla of influential projects and startups, but none took on the scope of Postgres. It seems that “Berkeley Stonebraker” defies the later wisdom of “MIT Stonebraker,” and I have no issue with that. Of course there’s wisdom in the “one size doesn’t fit all” motto (it’s always possible to find modest markets for custom designs!), but the success of “Berkeley Stonebraker’s” signature system—well beyond its original intents—demonstrates that a broad majority of database problems can be solved well with a good general-purpose architecture. Moreover, the design of that architecture is a technical challenge and accomplishment in its own right. In the end—as in most science and engineering debates— there isn’t only one good way to do things. Both Stonebrakers have lessons to teach us. But at base, I’m still a fan of the broader agenda that “Berkeley Stonebraker” embraced.

And then it hit me: Postgres defies Brooks’s second system effect because it is not a second system. The second system is “MIT Stonebraker”.

And now I hope the database gods show mercy on me. At least I am a fan of “Berkeley Stonebraker” too.

I blog to myself

I have not taken much care of this blog these last few years: not much input, nor much to share. It is not that I am not journaling; I am. Just to myself. Every day I keep a log of what happened. This started the day I realized that I could not remember what I had done the previous week. Work related mostly, but I really could not. Had my manager asked me “why did we pay you for this week?”, I could not have told him.

So I started keeping one or two sentences per day, written at the end of the day: whatever I thought would be the most important thing to remember a month from now. And then other things crept in: outages, maybe something that made me angry, something I learned, something I enjoyed, something I am planning. They all go in there.

Sometimes I neglect scribbling even a single sentence a day, maybe for four days in a row. But then, when I sit down, I make an effort to remember. At first something comes to mind and I put it down. Then, usually while I am noting something of another day, something else pops up (“Wait, was this yesterday, or the day before?”) and the timeline of events falls into order.

These days I keep the file open while I am doing stuff, and sometimes I take my notes while things happen. Like when I had Zookeepers that refused to bind to IPv4 addresses: well, export KAFKA_OPTS="" goes into the journal, for future reference.

So there, I write to myself daily and somehow this helps me a lot: recalling things, keeping track of my day, stuff like that. But I guess there is some light version of the spoon theory for blogging, and since I blog to myself every day, I have not much left to share. Whatever few glimpses there are usually go to Facebook or Twitter.

Agility in planning (not)

I read about the inflexibilities of Gosplan while going through Red Plenty. These days I am reading Confessions of a Nuclear War Planner, and it gives me an opportunity to examine inflexibility in thinking on the Western bloc side:

The price of bringing all the theatre and component service plans into harmony with each other, into one plan, was the total elimination of any flexibility in carrying it out.

Yes, no flexibility at all, in a military machine, and at roughly the same time (1963) that “no plan survives first contact with the enemy” was already a maxim. In this case the inflexibility was due to the lack of staff and computer time available to compute alternatives. This is similar to Gosplan’s problem: they had so many inputs to their models that their planning for the current year was completed around October of said year.

Ambition in planning, lack of resources, and definite inflexibility in taking another route because of already committed resources. Wow, project management does not change at all, in any field and in any bloc.

I am 20% into the book and I am scared. It seems to me that we have survived out of pure luck.

(Random incoherent thoughts; I know.)