The “oil lamp” law

[ Originally a Facebook post, copied here for posterity. ]

Some thirty years ago I was told the story of a server with an oil lamp by its side (the kind that Greek Orthodox people light to honor God and the Saints). It was put there half in jest, half in earnest: that server must not break under any circumstance.

Well, it has been my experience over many years, sectors and shops of different sizes, that no matter what, every organization has at least one key system that “needs” an oil lamp by its side. A system critical enough to warrant all the attention it gets, yet so critical that nobody risks upgrading, changing or phasing it out during their tenure (the system is guaranteed to outlive them; I count three such systems that have outlived me). Untouchable systems that get replaced only when they physically die.

Seek out who needs an oil lamp. Plan accordingly.

[ There’s another “law” that follows as a result of the oil-lamp, but maybe for another time. ]

sometimes you need to change your docker network

I was working locally with some containers running over docker-compose. Everything was OK, except I could not access a specific service within our network. Here is the issue: by default, docker assigns container addresses from the 172.17.0.0/16 pool, and docker-compose creates its networks in neighboring 172.16.0.0/12 subnets. It just so happened that what I needed to access lived in that same address space. So what to do to overcome the nuisance? Obviously you cannot renumber a whole network over some temporary IP overlap. Let’s abuse reserved IP space instead. Here is the relevant part of my daemon.json now:

  "default-address-pools": [
    { "base": "192.0.2.0/24", "size": 28 },
    { "base": "198.51.100.0/24", "size": 28 },
    { "base": "203.0.113.0/24", "size": 28 }
  ]

According to RFC 5737, the blocks 192.0.2.0/24 (TEST-NET-1), 198.51.100.0/24 (TEST-NET-2) and 203.0.113.0/24 (TEST-NET-3) are reserved for documentation purposes. I’d say local work is close enough to documentation to warrant the abuse, since we also adhere to its operational implications. Plus I wager that most people, while they always remember the classic RFC 1918 addresses, seldom take TEST-NET-1 and friends into account.
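As a sanity check on the pool sizes, here is a quick sketch with Python’s standard ipaddress module (the pool values mirror the daemon.json above; the CIDR arithmetic is just what docker does when carving pools into per-network subnets):

```python
import ipaddress

# Each "base" pool from daemon.json is carved into networks of "size" 28,
# i.e. /28 subnets with 16 addresses (14 usable hosts) each.
pools = ["192.0.2.0/24", "198.51.100.0/24", "203.0.113.0/24"]

for base in pools:
    subnets = list(ipaddress.ip_network(base).subnets(new_prefix=28))
    print(base, "->", len(subnets), "networks, first:", subnets[0])
# Each /24 pool yields 16 /28 docker networks, e.g. 192.0.2.0/28 first.
```

So with three /24 pools split into /28s you get 48 small docker networks, which is plenty for local compose work.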

Happy SysAdmin Day

I’ve been into this for so many years that I cannot even remember when I started. root, IT guy, system administrator, go-to person, SRE, DevOps, whatever acronym life brings next. And all that because, at some point in time while still an undergraduate, I told my friend Panos:

“You see those guys? One day, we’ll be doing their work.”
“Nah”, he said, “they’re gods”.

Because that’s what they looked like to us. They still do to me. Because at least one of them, with whom I’ve kept in contact, is a walking CS encyclopedia. And then it struck me. They did not really want to do the work. They needed a platform to test, in production, all the cool things they read about. They were architects before it was cool.

And that is what their “divine” power was: to know about all things CS. We did end up doing their work.

Somehow, that’s what drives me. I drop things off along the way (“I will not invest in learning this”), but still, I do not drop as many as the typical 9-to-5er. And that takes its toll; it is painful, and once in a while rewarding.

Happy Sysadmin Day.

How to get all the indices in ElasticSearch with Python?

This seems to be a pretty common question. And the most common answer is to use the Python ElasticSearch client and the get_alias() method like this:

import elasticsearch

ES_HOST = "localhost:9200"  # adjust to your cluster

es = elasticsearch.Elasticsearch(hosts=[ES_HOST])
idx_list = list(es.indices.get_alias("*").keys())

This is the most common answer one sees on StackOverflow. But Elasticsearch also offers us the cat API, which is better suited to such a query. So a better way to approach this can be:

import elasticsearch

ES_HOST = "localhost:9200"  # adjust to your cluster

es = elasticsearch.Elasticsearch(hosts=[ES_HOST])
idx_list = es.cat.indices(index='foobar-20*', h='index', s='index:desc').split()

The above example asks an even more elaborate query: of all the indices, return those that match the pattern foobar-20*, return only the index name from the fields that the cat API offers, and, by the way, sort the returned index names in descending order.
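For comparison, the same query expressed directly against the REST endpoint (this is the raw cat API call that the client method wraps):

```
GET /_cat/indices/foobar-20*?h=index&s=index:desc
```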

If the database offers us a way to do things, it is best that we ask it to.

Running tinyproxy inside Kubernetes

Now why would you want to do that? Because sometimes you have to get data from customers, and they whitelist specific IP addresses for you to fetch it from. But the whole concept of Kubernetes means that, in general, you do not care where your process runs (as long as it runs on the Kubernetes cluster), and you also get some advantage from the general elasticity it offers (because, you know, an unspoken design principle is that your clusters live in the cloud, on basically throwaway machines; alas, life has different plans).

So assuming that you have a somewhat elastic cluster, with some nodes that never get disposed of and some nodes added and deleted for elasticity, how would you secure access to the customer’s data from a fixed point? You run a proxy service (like tinyproxy, haproxy, or whatever) on the fixed servers of your cluster (which of course the customer has whitelisted). You need to label them somehow in the beginning, like `kubectl label nodes fixed-node-1 tinyproxy=true`. You now need a docker container to run tinyproxy from. Let’s build one using the Dockerfile below:

FROM centos:7
RUN yum install -y epel-release
RUN yum install -y tinyproxy
# This is needed to allow global access to tinyproxy.
# See the comments in tinyproxy.conf and tweak to your
# needs if you want something different.
RUN sed -i.bak -e s/^Allow/#Allow/ /etc/tinyproxy/tinyproxy.conf
ENTRYPOINT [ "/usr/sbin/tinyproxy", "-d", "-c", "/etc/tinyproxy/tinyproxy.conf" ]

Let’s build and push it to docker hub (obviously you’ll want to push to your own docker repository):

docker build -t yiorgos/tinyproxy .
docker push yiorgos/tinyproxy

We can now create a Deployment in our cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tinyproxy
  namespace: proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tinyproxy
  template:
    metadata:
      labels:
        app: tinyproxy
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: tinyproxy
                operator: In
                values:
                - worker-4
                - worker-3
      containers:
      - image: yiorgos/tinyproxy
        imagePullPolicy: Always
        name: tinyproxy
        ports:
        - containerPort: 8888
          name: 8888tcp02
          protocol: TCP

We can apply the above with `kubectl apply -f tinyproxy-deployment.yml`. The trained eye will recognize, however, that this YAML file is somehow Rancher related. Indeed it is: I deployed the above container with Rancher2 and got the YAML back with `kubectl -n proxy get deployment tinyproxy -o yaml`. So what is left now to make it usable within the cluster? A Service to expose it to other deployments within Kubernetes:

apiVersion: v1
kind: Service
metadata:
  name: tinyproxy
  namespace: proxy
spec:
  type: ClusterIP
  selector:
    app: tinyproxy
  ports:
  - name: 8888tcp02
    port: 8888
    protocol: TCP
    targetPort: 8888

We apply this with `kubectl apply -f tinyproxy-service.yml` and we are now set. All pods within your cluster can now reach tinyproxy and get access to whatever they need by connecting to `tinyproxy.proxy.svc.cluster.local:8888`. I am running this in its own namespace. This may come in handy if you use network policies and want to restrict access to the proxy server for certain pods within the cluster.
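From inside a pod, using the proxy is then just a matter of pointing your HTTP client at the service DNS name. A minimal sketch with Python’s standard library (the customer URL is a made-up placeholder; only the proxy address comes from the setup above):

```python
import urllib.request

# Route HTTP(S) traffic through the in-cluster tinyproxy service.
PROXY = "http://tinyproxy.proxy.svc.cluster.local:8888"
opener = urllib.request.build_opener(
    urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
)

# Hypothetical whitelisted customer endpoint:
# data = opener.open("https://customer.example.com/export.csv").read()
```

Anything fetched through this opener egresses from the fixed, whitelisted nodes, no matter which node the pod itself landed on.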

This is something you can do with haproxy containers instead of tinyproxy, if you so like.

email will never die

These thoughts are triggered by the Hey release, but they are otherwise unrelated.

Every now and then, an application comes along, grabs a substantial userbase and claims to have solved the “issue” with email, texting and whatever memorabilia you want to share with your friends. Provided you’re part of their walled garden. Various pricing models apply here. Math your heart out.

However, here is the thing: each one of them requires your lock-in to that application. That’s how I ended up with 16 texting (or similar) applications on my phone. Because my contacts, personal ones and customers alike, demand their favorite platform for my attention.

And that is why email will never die. It works most of the time, and it does not matter who your provider is: Gmail, Yahoo, Hotmail, Protonmail, Fastmail, a local Exchange server, some postfix running on an RPi. It does not matter. Granted, when your message does not go through, there may be insurmountable hoops to jump through, but in the end email is still the ubiquitous text application that works for everyone, everywhere, between big walled gardens and even your own backyard if you care to run it yourself. And that is why, even though email is not instant messaging, people (OK, not my kids) still treat it as if it were.

I am still undecided whether to invest in Hey. The application looks good, the web client also, but the cost of switching from my current walled garden (that also offers some identity services) is big.

If you want to make a dent in the world…

I was reading the beginning of the History of Clojure (a language that I have no time to invest in learning, but still interesting to me):

I started working on Clojure in 2005, during a sabbatical I funded out of retirement savings. The purpose of the sabbatical was to give myself the opportunity to work on whatever I found interesting, without regard to outcome, commercial viability or the opinions of others. One might say these are prerequisites for working on Lisps or functional languages.

If you want to make any dent in the world, you either make it at a personal cost (not always monetary), or with other people’s money. Savings I have none. Self realizations are evident.

Work from home vs Work with home

[ Copied here from a post I made on Facebook ]

There are many people who, because they have no WFH experience, think that what we are living through now, in the emergency conditions of the past two months, is remote work. It is not.

WFH means that I am at home (or in some office not far away) and do not need to be at my employer’s premises (commuting to and from which may cost me two-plus hours every day).

It also means that I am at home and have a space that, for as long as I am working, is my workspace, and is not the bedroom that may be accessible to anyone.

It also means that I am at home while the kids (if any) are at school, and the better half is likewise at their own workplace (unless they also telework).

It also means that there is a network connection sufficient to carry the job, a fallback over mobile data if needed (with who pays for these settled from the start), and a computer on which I, and only I, do my work.

Now, when you have people who suddenly found themselves having to work from home with a single computer (shared between two workers, in the good case) that was there for recreational activities and lacks the necessary software, and on top of that kids who need to do homework, Skype with their tutor, play something and watch streaming over the cheapest connection their wallet can afford, all while you are supposed to be working, that is not called “work from home” but “work with home”.

I write all this because the random household will hardly have the computing capacity mine has (just as I do not have as many pipe wrenches or cutters as an electrician and a plumber do).

Two tricks that make life when dual booting between Windows 10 and Ubuntu easier

So you have your laptop with Windows 10 and you also need to run Ubuntu for some reason. Even if Ubuntu is the main OS, you may want to keep Windows around for the occasional system upgrade (Dell Update comes to mind, for example) and for software that runs exclusively on one of the two platforms (UCINET is such a program for me).

You are then faced with two problems:

  • Default boot operating system
  • The clock gets desynchronized when rebooting between the two operating systems.

StackExchange comes to the rescue. For the first problem you have to modify grub. I have chosen to make it so that upon reboot it boots into whichever operating system it booted previously, unless I choose otherwise via the menu. I use the saved method from this answer.
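For reference, the saved method amounts to two settings in `/etc/default/grub` (standard GRUB options), followed by regenerating the config with `sudo update-grub`:

```
GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true
```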

For the second issue, there are a number of answers that usually involve tweaking systemd or the Windows registry, but the easiest thing you can do is ensure that the Windows time service starts automatically, with a delay.
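On the Windows side, that can be done from an elevated command prompt; `sc` is the built-in service control tool and `w32time` is the Windows Time service (note the mandatory space after `start=`):

```
sc config w32time start= delayed-auto
sc start w32time
```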


newsletters that I value

Let’s break this blog’s hiatus with an uninteresting post.

I am still a heavy email user. I am subscribed to more mailing lists and newsletters than most can handle, and even I, after almost 30 years of email usage, am feeling INBOX fatigue. I unsubscribe and delete unread mail without remorse. However, there is a handful of newsletters that I think I am going to keep around for as long as they exist. These are:

  • Fermat’s Library. I won’t claim that I understand every paper, or the commentary on the paper of the week. But I sometimes get interesting, yet never pursued, ideas out of it.
  • The morning paper. More CS-centric, and again I won’t claim to always grasp the subject (or even like it). But still, good things to consider here.
  • Quanta magazine. Science concepts delivered in a way they can be understood.
  • The porcupine newsletter. Because if I had time, I’d want to post a newsletter and follow its style.

So what are yours?