The Doomsday Machine: Confessions of a Nuclear War Planner

A strange game. The only winning move is not to play. How about a nice game of chess? – Joshua

The Doomsday Machine: Confessions of a Nuclear War Planner by Daniel Ellsberg

My rating: 4 of 5 stars

It took me a long time to finish this book. It contains a great deal of information, and I needed the occasional back and forth to remind myself of things I had read in previous chapters.

If you ever wonder how one prepares for a nuclear conflict: target selection, actually starting such a war, its extent, even the tactical deployment, along with the many problems that may arise in the process, then this is a book you need to read.

Fatalities after a nuclear strike according to plan

Humanity, it seems, has averted omnicide a number of times: not only during the Cuban missile crisis, which is detailed here, but on several other occasions as well. You get a glimpse of the many stakeholders involved in planning and deployment, their conflicting interests, the expected numbers of deaths, and presidential, civilian, and military views on the matter. A highly illuminating book, and one I came across by pure chance.

View all my reviews


The second system effect

The second system effect is Fred Brooks's observation that, following the success of a system, the architect is doomed to fail with the next one, into which they will put all the features and whatnot that were left out of the first one. Because now they know how to do it.

I was reminded of this directly while reading Looking Back at Postgres, where Joseph M. Hellerstein makes exactly that observation:

The highest-order lesson I draw comes from the fact that that Postgres defied Fred Brooks’ “Second System Effect”. Brooks argued that designers often follow up on a successful first system with a second system that fails due to being overburdened with features and ideas. Postgres was Stonebraker’s second system, and it was certainly chock full of features and ideas. Yet the system succeeded in prototyping many of the ideas, while delivering a software infrastructure that carried a number of the ideas to a successful conclusion.

Mike Stonebraker‘s first system was Ingres. He worked on both Ingres and Postgres while at Berkeley. He later moved to MIT to continue doing interesting database related stuff. Here is what Hellerstein writes at the end of the paper:

Another lesson is that a broad focus—“one size fits many”—can be a winning approach for both research and practice. To coin some names, “MIT Stonebraker” made a lot of noise in the database world in the early 2000s that “one size doesn’t fit all.” Under this banner he launched a flotilla of influential projects and startups, but none took on the scope of Postgres. It seems that “Berkeley Stonebraker” defies the later wisdom of “MIT Stonebraker,” and I have no issue with that. Of course there’s wisdom in the “one size doesn’t fit all” motto (it’s always possible to find modest markets for custom designs!), but the success of “Berkeley Stonebraker’s” signature system—well beyond its original intents—demonstrates that a broad majority of database problems can be solved well with a good general-purpose architecture. Moreover, the design of that architecture is a technical challenge and accomplishment in its own right. In the end—as in most science and engineering debates— there isn’t only one good way to do things. Both Stonebrakers have lessons to teach us. But at base, I’m still a fan of the broader agenda that “Berkeley Stonebraker” embraced.

And then it hit me: Postgres defies Brooks's law, because it is not a second system. The second system is “MIT Stonebraker”.

And now I hope the database gods show mercy on me. At least I am a fan of “Berkeley Stonebraker” too.

I blog to myself

I have not taken much care of this blog the last few years. Not much input, or much to share. It is not that I am not journaling; I am. Just to myself. Every day. I keep a daily log of what happened. This started when one day I realized that I could not remember what I had done the previous week. Work related mostly, but indeed I could not. Had my manager asked me “Why did we pay you for this week?” I could not have told him.

So I started keeping one or two sentences per day, at the end of the day. What I thought was the most important thing to remember a month from now. And then other things happened: outages, maybe something that got me angry, something I learned, something I enjoyed, something I am planning. They all go in there.

Sometimes I neglect scribbling even a sentence a day. Maybe for four days in a row. But then when I sit down I make an effort to remember. At first something comes to mind and I put it down. Then, usually while I am noting something from another day, something else pops up (“Wait, was this yesterday, or the day before?”) and the timeline of events falls into order.

These days I have my file open while I am doing stuff, and sometimes I keep my notes while things happen. Like when I had ZooKeepers that refused to bind to IPv4 addresses. Well, export KAFKA_OPTS="" goes into the journal for future reference.

So there, I write to myself daily and somehow this helps me a lot: recalling things, keeping track of my day, stuff like that. But I guess there is some light version of the spoon theory for blogging, and since I blog to myself every day, I have not much left to share. Whatever few glimpses there are usually go to Facebook or Twitter.

Agility in planning (not)

I’ve read about the inflexibilities of Gosplan while going through Red Plenty. These days I am reading Confessions of a Nuclear War Planner and it gives me an opportunity to examine inflexibilities in thinking on the western bloc side:

The price of bringing all the theatre and component service plans into harmony with each other, into one plan, was the total elimination of any flexibility in carrying it out.

Yes: no flexibility at all in a military machine, and this around the same time (1963) when the adage that no plan survives first contact with the enemy was already old wisdom. In this case the inflexibility was due to the lack of staff and computer time available to work out alternatives. This is similar to Gosplan's problems: they had so many inputs to their models that their planning for the current year was completed around October of said year.

Ambition in planning, lack of resources, and definite inflexibility in taking another route because of already committed resources. Wow, project management does not change at all, in any field and in any bloc.

I am 20% into the book and I am scared. It seems to me that we have survived out of pure luck.

(Random incoherent thoughts; I know.)

In sed matching \d might not be what you would expect

A friend asked me the other day whether a certain “search and replace” operation over a credit card number could be done with sed: Given a number like 5105 1051 0510 5100, replace the first three components with something and leave the last one intact.

So my first take on this was:

# echo 5105 1051 0510 5100 | sed -e 's/^\([0-9]\{4\} \)\{3\}/lala /'
lala 5100

which works, but is not very legible. So here is a version taking advantage of the -r flag, if your modern sed supports it:

# echo 5105 1051 0510 5100 | sed -re 's/^([[:digit:]]{4} ){3}/lala /' 
lala 5100

So my friend asked, why not use \d instead of [[:digit:]] (or even [0-9])?

# echo 5105 1051 0510 5100 | sed -re 's/^(\d{4} ){3}/lala /' 
5105 1051 0510 5100

Why does this not work? Because, as pointed out in the manual:

In addition, this version of sed supports several escape characters (some of which are multi-character) to insert non-printable characters in scripts (\a, \c, \d, \o, \r, \t, \v, \x). These can cause similar problems with scripts written for other seds.

There. I guess that is why I still do not make much use of the -r flag and prefer to escape parentheses when doing matches in sed.

Confessions of a Necromancer

Confessions of a Necromancer by Pieter Hintjens

I knew of Hintjens’s work (Xitami, ZeroMQ, etc) but not much more of him. The book popped up in a Slack I am a member of while discussing Torvalds’s decision to take a step back and work on himself.

Hintjens writes a technical memoir. At least that is the first part of the book. And because he writes about the era of computing I grew up in, I liked it. He reminded me of technologies, tricks, and methods I had long forgotten. I even learned some new old stuff that I had never come across.

And then there is the second part of the book. The most important and most interesting one. What can I say about it? Not much, I am afraid. I can only declare my respect for his effort to document the process and his voyage towards the end. I envy his clarity, even though I cannot begin to imagine what it cost to maintain it during the cancer treatment.

Highly touching.

View all my reviews

PS: I am trying to see whether using Goodreads to write my thoughts on books I read is a thing that I like.

vagrant, ansible local and docker

This is a minor annoyance to people who want to work with docker on their vagrant boxes and provision them with the ansible_local provisioner.

To have docker installed in your box, you simply need to enable the docker provisioner in your Vagrantfile:

config.vm.provision "docker", run: "once"

Since you’re using the ansible_local provisioner, you might skip this and write a task that installs docker from wherever suits you, but I prefer this, as vagrant knows best how to install docker onto itself.
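For context, a minimal Vagrantfile combining the two provisioners might look like this (box name and playbook path are just example values, not from any particular setup):

```ruby
Vagrant.configure("2") do |config|
  config.vm.box = "debian/stretch64"   # example box

  # let vagrant install docker on the guest itself
  config.vm.provision "docker", run: "once"

  # then hand the rest of the provisioning to ansible running on the guest
  config.vm.provision "ansible_local" do |ansible|
    ansible.playbook = "playbook.yml"  # example path
  end
end
```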

Now obviously you can have the provisioner pull images for you, but suppose that for whatever crazy reason you want to pass most, if not all, of the provisioning to ansible, and thus to use, among others, the docker_image module. So you write something like:

- name: install python-docker
  become: true
  package:            # or apt:, whichever fits your distribution
    name: python-docker
    state: present

- name: install docker images
  docker_image:
    name: busybox

Well, this is going to greet you with an error message when you up the machine for the first time:

Error message

TASK [install docker images] ***************************************************
fatal: [default]: FAILED! => {"changed": false, "msg": "Error connecting: Error while fetching server API version: ('Connection aborted.', error(13, 'Permission denied'))"}
to retry, use: --limit @/vagrant/ansible.retry

Whereas when you happily run vagrant provision right away:

TASK [install docker images] ***************************************************
changed: [default]

Why does this happen? Because even though the installation of docker makes the vagrant user a member of the docker group, this becomes effective only with the next login.

The quickest way to bypass this is to run that part of your first ansible provisioning as the superuser:

- name: install docker images
  become: true
  docker_image:
    name: busybox
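If running the whole task as root bothers you, a hypothetical alternative (untested here, and assuming sg from shadow-utils is present on the guest) is to run just the pull under the docker group, which sidesteps the stale login credentials:

```yaml
- name: pull busybox without become
  command: sg docker -c "docker pull busybox"
```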

I am using the docker_image module only as an example here, for lack of a better example with other docker modules on a Saturday morning. Pulling images is of course something very easy to do with the vagrant docker provisioner itself.

default: Running ansible-playbook…

PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
ok: [default]

TASK [install python-docker] ***************************************************
changed: [default]

TASK [install docker images] ***************************************************
changed: [default]

PLAY RECAP *********************************************************************
default : ok=3 changed=2 unreachable=0 failed=0