cursive

Figlet, a program to make large letters out of ordinary text, was featured on Lobsters, and it reminded me of cursive, a similar program that we used back when ASCII art was the only decoration one could add to an email signature. And since it seems that cursive is not carried by any Linux distribution, I set out to find the sources. Which I did, thanks to FreeBSD.

I downloaded the source and tried to compile it, and it required xstr. Yet another old program! I think the last place one can still see it is the OpenBSD 5.5 manual. Even the 5.6 changelog writes:

“mkstr was intended for the limited architecture of the PDP 11 family.” Time moves on, memory gets cheaper. There’s no need for mkstr or xstr.

Hacks like xstr and mkstr might be of interest to people on embedded, or otherwise deprived, systems as cool hacks and peculiarities, but there is really no actual need to maintain them.

So I set out to make the cursive source compilable with xstr again (it compiles just fine without it; just type make lcursive instead). And the source code, minimally changed to eradicate compiler warnings, of a program that has been happily signing email since 1985 is now on GitHub:

   __
  /  `
 /--   ____  o ____  ,
(___, / / (_/_(_) (_/___
           /       /
         -'       '

Shoutout to my good friend Panagiotis C., who showed me cursive and figlet some 25 years ago.


ansible, docker-compose, iptables and DOCKER-USER

NOTE: manipulating DOCKER-USER is beyond anyone’s sanity. The information below seems to work sometimes (like when I wrote the post) and sometimes not. That is why you will find posts with similar advice on the Net that may or may not work for you. I plan to revisit this and figure out what is wrong, so treat the following information as only temporarily correct.

When you want to run ZooNavigator, the recommendation to get you started is via this docker-compose.yml. However, Docker manages your iptables rules (unless you go the --iptables=false way) and certain ports will be left wide open. This may not be what you want. Docker provides the DOCKER-USER chain for user-defined rules that are not affected by service restarts, and this is where you want to work. Most of my googling resulted in recipes that did not work, because their final rule was to deny anything from 0.0.0.0/0 after having allowed whatever was to be whitelisted. I solved this in the following example playbook, and the rules worked like a charm. Others who find themselves in the same situation may want to give it a shot:

---
- name: maintain the DOCKER-USER access list
  hosts: zoonavigators
  vars:
    wl_hosts:
      - "172.31.0.1"
      - "172.31.0.2"
    wl_ports:
      - "7070"
      - "7071"
  tasks:

  - name: check for iptables-services
    yum:
      name: iptables-services
      state: latest

  - name: enable iptables-services
    service:
      name: iptables
      enabled: yes
      state: started

  - name: flush DOCKER-USER
    iptables:
      chain: DOCKER-USER
      flush: true

  - name: whitelist for DOCKER-USER
    iptables:
      chain: DOCKER-USER
      protocol: tcp
      ctstate: NEW
      syn: match
      source: "{{ item[0] }}"
      destination_port: "{{ item[1] }}"
      jump: ACCEPT
    with_nested:
      - "{{ wl_hosts }}"
      - "{{ wl_ports }}"

  - name: drop non whitelisted connections to DOCKER-USER
    iptables:
      chain: DOCKER-USER
      protocol: tcp
      #source: "0.0.0.0/0"
      destination_port: "{{ item }}"
      jump: DROP
    with_items:
      - "{{ wl_ports }}"

  - name: save new iptables
    command:
      /usr/libexec/iptables/iptables.init save

The commented-out source line in the final DROP task is the key. The obvious choice would have been source: "0.0.0.0/0" but this did not work for me.

[pastebin here]

Let’s do a Koch snowflake

Good friend Dimitris, after reading my previous post, pointed me to Koch snowflakes. How cool is a curve of infinite length that encloses a finite area! A Koch snowflake turns out to be easily constructed with turtle, as suggested by the Wikipedia article. Well, you also get to learn about Thue-Morse sequences and evil numbers in the process. To be honest, this is also a good toy case, using a real sequence, to learn how to use yield.

Koch snowflake
import turtle
import functools

# Yield successive terms of the Thue-Morse sequence
# https://oeis.org/A010060
# Learn about evil and odious numbers in the process.

def thue_morse_seq(n=0):
  while True:
    yield functools.reduce(lambda x, y: x + y, map(int, bin(n)[2:])) % 2
    n += 1

if __name__ == "__main__":

  window = turtle.Screen()
  window.bgcolor('light gray')

  pen = turtle.Turtle()
  pen.speed(20)
  pen.color('dark blue')
  pen.pensize(1)
  pen.shape('classic')
  pen.penup()
  pen.setpos(0, 0)
  pen.pendown()

  n = thue_morse_seq(0)
  while True:
    if next(n) == 0:
      pen.forward(2)
    else:
      pen.left(60)

[pastebin]

Rule 110 with Turtle

I like repl.it a lot and from time to time I use it for 10-liners instead of my command line. Especially when I do not want to install a brand new language implementation or even create a new Python environment for five minutes.

I was thinking about Rule 110 and how most of the examples I’ve seen from hobbyists are ASCII based, as opposed to the more proper, nicer graphics produced by researchers in the cellular automata field. And I was wondering whether Turtle could be used to display it a bit better in one’s spare time, since I still have not figured out how TkInter works with repl.it. It turns out that you can make something nice with Turtle:

Rule 110 elementary cellular automaton

I have pasted the rather rudimentary, repl.it-ready Python code here.
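
For anyone who prefers not to chase the paste, here is a minimal sketch of the same idea, written from scratch rather than taken from it: a Rule 110 lookup table, dead cells beyond the edges, and one turtle dot per live cell. The grid size, cell size, and single-cell seed on the right edge are arbitrary choices of mine.

import turtle

# Rule 110: map each 3-cell neighborhood (left, center, right) to the next state.
RULE = 110
rules = {(a, b, c): (RULE >> (a * 4 + b * 2 + c)) & 1
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}

WIDTH = 200  # cells per row
ROWS = 100   # generations to draw
CELL = 3     # pixel size of one cell

def step(row):
  # Cells outside the visible window are treated as dead (zero boundaries).
  padded = [0] + row + [0]
  return [rules[tuple(padded[i:i + 3])] for i in range(len(row))]

if __name__ == "__main__":
  screen = turtle.Screen()
  screen.tracer(0)  # draw everything at once instead of animating each dot
  pen = turtle.Turtle(visible=False)
  pen.penup()

  row = [0] * WIDTH
  row[-1] = 1       # the classic single live cell on the right edge
  for y in range(ROWS):
    for x, cell in enumerate(row):
      if cell:
        pen.goto(x * CELL - WIDTH * CELL // 2,
                 ROWS * CELL // 2 - y * CELL)
        pen.dot(CELL)
    row = step(row)

  screen.update()
  turtle.done()

Turning the tracer off and calling update() once at the end is what keeps the drawing from crawling, which matters when it runs inside repl.it.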

The Doomsday Machine: Confessions of a Nuclear War Planner

A strange game. The only winning move is not to play. How about a nice game of chess? (Joshua)

The Doomsday Machine: Confessions of a Nuclear War Planner by Daniel Ellsberg

My rating: 4 of 5 stars

It took me a long time to finish the book. It contains too much information to digest in one go, and I needed the occasional back and forth to remind myself of things I had read in previous chapters.

If you ever wonder how one prepares for a nuclear conflict (target selection, actually starting such a war, its extent, even the tactical deployment) along with the many problems that may arise in the process, this is a book you need to read.

Fatalities after a nuclear strike according to plan

Humanity, it seems, has averted omnicide a number of times. Not only in the Cuban Missile Crisis, which is detailed, but in a number of other cases too. You get a glimpse of the many different stakeholders involved in planning and deployment, their conflicting interests, the expected numbers of deaths, and the presidential and other civilian and military views on the matter. A highly illuminating book, and one I came across by pure chance.

View all my reviews

The second system effect

The second system effect is the observation by Fred Brooks that, following the success of a system, the architect is doomed to fail with the next one, into which they will put all the features and whatnot that were left out of the first one. Because now they know how to do it.

I was reminded of this directly while reading Looking Back at Postgres, where Joseph M. Hellerstein makes exactly that observation:

The highest-order lesson I draw comes from the fact that Postgres defied Fred Brooks’ “Second System Effect”. Brooks argued that designers often follow up on a successful first system with a second system that fails due to being overburdened with features and ideas. Postgres was Stonebraker’s second system, and it was certainly chock full of features and ideas. Yet the system succeeded in prototyping many of the ideas, while delivering a software infrastructure that carried a number of the ideas to a successful conclusion.

Mike Stonebraker‘s first system was Ingres. He worked on both Ingres and Postgres while at Berkeley. He later moved to MIT to continue doing interesting database related stuff. Here is what Hellerstein writes at the end of the paper:

Another lesson is that a broad focus—“one size fits many”—can be a winning approach for both research and practice. To coin some names, “MIT Stonebraker” made a lot of noise in the database world in the early 2000s that “one size doesn’t fit all.” Under this banner he launched a flotilla of influential projects and startups, but none took on the scope of Postgres. It seems that “Berkeley Stonebraker” defies the later wisdom of “MIT Stonebraker,” and I have no issue with that. Of course there’s wisdom in the “one size doesn’t fit all” motto (it’s always possible to find modest markets for custom designs!), but the success of “Berkeley Stonebraker’s” signature system—well beyond its original intents—demonstrates that a broad majority of database problems can be solved well with a good general-purpose architecture. Moreover, the design of that architecture is a technical challenge and accomplishment in its own right. In the end—as in most science and engineering debates—there isn’t only one good way to do things. Both Stonebrakers have lessons to teach us. But at base, I’m still a fan of the broader agenda that “Berkeley Stonebraker” embraced.

And then it hit me: Postgres defies Brooks’s law because it is not a second system. The second system is “MIT Stonebraker”.

And now I hope the database gods show mercy on me. At least I am a fan of “Berkeley Stonebraker” too.

I blog to myself

I have not taken much care of this blog these last few years. Not much input, or much to share. It is not that I am not journaling; I am. Just to myself. Every day. I keep a daily log of what happened. This started the day I realized that I could not remember what I had done the previous week. Work related mostly, but I really could not. Had my manager asked me “Why did we pay you for this week?” I could not have told him.

So I started keeping one or two sentences per day, at the end of the day. What I thought was the most important thing to remember a month from now. And then other things happened: outages, maybe something that got me angry, something I learned, something I enjoyed, something I am planning. They all go in there.

Sometimes I neglect scribbling even a sentence a day. Maybe for four days in a row. But then, when I sit down, I make an effort to remember. At first something comes to mind and I put it down. Then, usually when I am noting something of another day, something pops up (“Wait, was this yesterday, or the day before?”) and the timeline of events falls into order.

These days I have my file open while I am doing stuff, and sometimes I take my notes while things happen. Like when I had ZooKeepers that refused to bind to IPv4 addresses. Well, export KAFKA_OPTS="-Djava.net.preferIPv4Stack=True" goes into the journal for future reference.

So there, I write to myself daily and somehow this helps me a lot: in recalling things, keeping track of my day, stuff like that. But I guess there is some light version of the spoon theory for blogging, and since I blog to myself every day, I have not much to share. Whatever few glimpses there are usually go to Facebook or Twitter.