Greenspun’s tenth rule and mutt

Mutt is a mail client that I used probably from 199? to 2001. Prior to that I was using elm and felt that, since I could not understand its code, a switch was in order.

So what does Greenspun’s tenth rule have to do with a text mail client? Let’s recall what the rule says:

Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

Yesterday, mutt version 2.0 was announced. I only found out about it because a friend pinged me. And sure enough, the release notes mention MuttLisp, a first version of which appeared here.

Being a mail client, mutt already follows Zawinski’s Law. Now it follows Greenspun’s tenth rule too.

Today’s #soundtrack

[ I posted this originally on Facebook, but since there’s some dust collected in the blog, well … ]

Today’s #soundtrack is two albums I had not listened to for a long long time:

  • Dreams of Freedom (Ambient Translations of Bob Marley)
  • Chant Down Babylon – a hip-hop remix of Bob Marley songs

As I am writing this, I decided that it is high time I put all my CDs in boxes, store them away, and make some space. (I parted with my Rap / Hip-Hop collection in 2005; let’s part with the other half 15 years later.) For sentimental reasons I will only keep the John Coltrane and JAZZ & TZAZ magazine ones on display.

We do not even have a proper CD player anymore. We have a Tivoli Model One and its aux port is shared between a Google Audio (where we cast sound) and a portable CD player (also not so frequently used).

Streaming automation has made this almost a no-brainer choice. For now, at least. I think I own more music than I can listen to anyway. Even on streaming services I rarely look for new releases. I just look for the convenience of recalling something I already own or know about but may not own for some reason. Like the first 8 albums of Iron Maiden 🙂

The “oil lamp” law

[ Originally a Facebook post, copied here for posterity. ]

Some thirty years ago I was told the story of a server with an oil lamp by its side (the kind that Greek Orthodox people light to honor God and the Saints). It was put there as a humorous take on the situation: the server must not break under any circumstances.

Well, it has been my experience of many years, sectors and shops of different sizes, that no matter what, there is always at least one key system that “needs” an oil lamp by its side in the organization. A system that is critical enough to warrant all the attention it gets, yet so critical that nobody risks upgrading / changing / phasing it out during their tenure (the system is guaranteed to outlive them; I count three such systems that have outlived me). Untouchable systems that get replaced only when they physically die.

Seek out who needs an oil lamp. Plan accordingly.

[ There’s another “law” that follows as a result of the oil-lamp, but maybe for another time. ]

sometimes you need to change your docker network

I was working locally with some containers running under docker-compose. Everything was OK, except that I could not access a specific service within our network. Here is the issue: docker by default assigns container IP addresses from the 172.17.0.0/16 pool, and docker-compose from 172.18.0.0/16. It just so happened that what I needed to access lived in the 172.18.0.0/16 space. So what to do about the nuisance? Obviously you cannot renumber a whole network because of a temporary IP overlap. Let’s abuse reserved IP space instead. Here is the relevant part of my daemon.json now:

{
  "default-address-pools": [
    { "base": "192.0.2.0/24", "size": 28 },
    { "base": "198.51.100.0/24", "size": 28 },
    { "base": "203.0.113.0/24", "size": 28 }
  ]
}
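As a quick sanity check on the arithmetic (not part of the original post): each /24 base carved into pools of size 28 yields sixteen /28 networks of 16 addresses each, which is plenty for local compose projects. Python’s ipaddress module confirms it:

```python
import ipaddress

# Each "base" in daemon.json is carved into subnets of prefix length "size".
base = ipaddress.ip_network("192.0.2.0/24")
pools = list(base.subnets(new_prefix=28))

print(len(pools))              # 16 docker networks per /24 base
print(pools[0])                # 192.0.2.0/28
print(pools[0].num_addresses)  # 16 addresses per network
```

With three such bases, that is 48 small networks available to docker before it ever touches RFC 1918 space.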

According to RFC 5737, these networks are reserved for documentation purposes. I’d say local work is close enough to documentation to warrant the abuse, since we also adhere to its operational implications. Plus I wager that most people, while they always remember the classic RFC 1918 addresses, seldom take TEST-NET-1 and friends into account.

Happy SysAdmin Day

I have been into this for so many years that I cannot even remember when I started. root, IT guy, system administrator, go-to person, SRE, DevOps, whatever acronym life brings next. And all that because, at some point in time while still an undergraduate, I told my friend Panos:

“You see those guys? One day, we’ll be doing their work.”
“Nah”, he said, “they’re gods”.

Because that is what they looked like to us. They still do to me. Because at least one of them, with whom I’ve kept in contact, is a walking CS encyclopedia. And then it struck me: they did not really want to do the work. They needed a platform to test, in production, all the cool things they read about. They were architects before it was cool.

And that is what their “divine” power was: to know about all things CS. We did end up doing their work.

Somehow, that is what drives me. I drop things along the way (“I will not invest in learning this”), but still, I do not drop as many as the typical 9-to-5er. And that takes its toll; it is painful and once in a while rewarding.

Happy Sysadmin Day.

How to get all the indices in ElasticSearch with Python?

This seems to be a pretty common question. And the most common answer is to use the Python ElasticSearch client and the get_alias() method like this:

import elasticsearch

# ES_HOST points to your Elasticsearch instance, e.g. "http://localhost:9200"
es = elasticsearch.Elasticsearch(hosts=[ES_HOST])
idx_list = list(es.indices.get_alias("*").keys())

This is the most common answer one sees on Stack Overflow. But ElasticSearch offers us the cat API, which is better suited for such a query. So a better approach can be:

import elasticsearch

es = elasticsearch.Elasticsearch(hosts=[ES_HOST])
idx_list = es.cat.indices(index='foobar-20*', h='index', s='index:desc').split()

The above example asks an even more elaborate query: of all the indices, return those that match the pattern foobar-20*, return only the index name among the fields that the cat API offers, and, by the way, sort the returned index names in descending order.

If the database offers us a way to do things, it is best that we ask it to.

Running tinyproxy inside Kubernetes

Now why would you want to do that? Because sometimes you have to fetch data from customers, and they whitelist specific IP addresses for you to get their data from. But the whole concept of Kubernetes means that, in general, you do not care where your process runs (as long as it runs on the Kubernetes cluster), and you also get some advantage from the elasticity it offers (because, you know, an unspoken design principle is that your clusters live in the cloud on basically throwaway machines; alas, life has different plans).

So assuming that you have a somewhat elastic cluster with some nodes that never get disposed of and some nodes added and deleted for elasticity, how would you secure access to the customer’s data from a fixed point? You run a proxy service (like tinyproxy, haproxy, or whatever) on the fixed servers of your cluster (which, of course, the customer has whitelisted). You need to label them somehow in the beginning, like kubectl label nodes fixed-node-1 tinyproxy=true. You now need a docker container to run tinyproxy from. Let’s build one using the Dockerfile below:

FROM centos:7
RUN yum install -y epel-release
RUN yum install -y tinyproxy
# This is needed to allow global access to tinyproxy.
# See the comments in tinyproxy.conf and tweak to your
# needs if you want something different.
RUN sed -i.bak -e s/^Allow/#Allow/ /etc/tinyproxy/tinyproxy.conf
ENTRYPOINT [ "/usr/sbin/tinyproxy", "-d", "-c", "/etc/tinyproxy/tinyproxy.conf" ]

Let’s build and push it to docker hub (obviously you’ll want to push to your own docker repository):

docker build -t yiorgos/tinyproxy .
docker push yiorgos/tinyproxy

We can now attempt to deploy a deployment in our cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tinyproxy
  namespace: proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tinyproxy
  template:
    metadata:
      labels:
        app: tinyproxy
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: tinyproxy
                operator: In
                values:
                - "true"
      containers:
      - image: yiorgos/tinyproxy
        imagePullPolicy: Always
        name: tinyproxy
        ports:
        - containerPort: 8888
          name: 8888tcp02
          protocol: TCP

We can apply the above with `kubectl apply -f tinyproxy-deployment.yml`. The trained eye will recognize however that this YAML file is somehow [Rancher](http://rancher.com) related. Indeed it is, I deployed the above container with Rancher2 and got the YAML back with `kubectl -n proxy get deployment tinyproxy -o yaml`.  So what is left now to make it usable within the cluster? A service to expose this to other deployments within Kubernetes:

apiVersion: v1
kind: Service
metadata:
  name: tinyproxy
  namespace: proxy
spec:
  type: ClusterIP
  ports:
  - name: 8888tcp02
    port: 8888
    protocol: TCP
    targetPort: 8888

We apply this with `kubectl apply -f tinyproxy-service.yml` and we are now set. All pods within your cluster can now reach tinyproxy and access whatever they need by connecting to `tinyproxy.proxy.svc.cluster.local:8888`. I am running this in its own namespace. This may come in handy if you use [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) and want to restrict access to the proxy server for certain pods within the cluster.
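If a pod needs to use the proxy from code rather than via `http_proxy` environment variables, here is a minimal sketch using Python’s standard library (the service URL matches the namespace and service name used above; the customer URL is hypothetical):

```python
import urllib.request

# In-cluster DNS name of the Service defined above (namespace "proxy").
PROXY = "http://tinyproxy.proxy.svc.cluster.local:8888"

# Route both HTTP and HTTPS traffic through tinyproxy.
handler = urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
opener = urllib.request.build_opener(handler)

# opener.open("https://customer.example.com/data") would now reach the
# customer via the whitelisted node's IP address.
```

In practice, setting `http_proxy`/`https_proxy` in the pod spec achieves the same effect for tools that honor those variables.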

This is something you can use with [haproxy](https://hub.docker.com/_/haproxy) containers instead of tinyproxy, if you so like.

email will never die

These thoughts are triggered by the hey release, but they are otherwise unrelated.

Every now and then an application comes along, grabs a substantial userbase, and claims to have solved the “issue” with email, texting, and whatever memorabilia you want to share with your friends. Provided you’re part of their walled garden. Various pricing models apply here. Math your heart out.

However, here is the thing: each one of them requires your lock-in to that application. That is how I ended up with 16 texting (or similar) applications on my phone. Because my contacts, personal ones and customers alike, demand their favorite platform for my attention.

And that is why email will never die. It works most of the time, and it does not matter who your provider is: Gmail, Yahoo, Hotmail, Protonmail, Fastmail, a local Exchange server, some postfix running on an RPi. It does not matter. Granted, when your message does not go through there may be insurmountable hoops to jump through, but in the end email is still the ubiquitous text application that works for everyone, everywhere, between big walled gardens and even your own backyard if you care to run it yourself. And that is why, even though email is not instant messaging, people (OK, not my kids) still treat it as if it were.

I am still undecided whether to invest in Hey. The application looks good, the web client also, but the cost of switching from my current walled garden (that also offers some identity services) is big.

If you want to make a dent in the world…

I was reading the beginning of the History of Clojure (a language that I have no time to invest in learning, but still interesting to me):

I started working on Clojure in 2005, during a sabbatical I funded out of retirement savings. The purpose of the sabbatical was to give myself the opportunity to work on whatever I found interesting, without regard to outcome, commercial viability or the opinions of others. One might say these are prerequisites for working on Lisps or functional languages.

If you want to make any dent in the world, you either make it at a personal cost (not always monetary), or with other people’s money. Savings I have none. Self realizations are evident.