The old Magic Mouse

I have an old Apple Magic Mouse that I got around 2010. It has grown so old that the notch of the aluminum cover that holds the batteries has broken off, so the cover no longer stays closed. I tape it shut to keep the cover in place.

However, as people who have used the same Magic Mouse for years have noticed, it also disconnects randomly. Some people smack it to reconnect, but it disconnects again soon enough. The solution for this is to fold a piece of paper (like a Post-it) and place it between the batteries.

You owe me a coffee now :)

PS. You may not want to tape the aluminum cover. In that case your solution is some Blu-Tack and a piece of stiff paper.

making nginx return HTTP errors randomly

It’s been a while since I last posted, so let’s write a tiny hack I did because I wanted to get random errors from an nginx server.

location / {
  return 200 "Hello $remote_addr, I am $server_addr\n";
}

location /healthz {
  set_by_lua_block $health {
    local x = math.random(1, 100)
    if x <= 20 then return 500 else return 200 end
  }

  if ($health = 200) {
    return 200 "OK\n";
  }

  return 500;
}
The above snippet greets you with your IP address when you hit any URL on the web server. However, when you hit /healthz, 20% of the time it returns Internal Server Error (HTTP 500). So your monitoring might get tricked into thinking the application is not working as expected.

Bear in mind that the Lua PRNG is not seeded here; if you plan to use this more seriously, you need to initialize the Lua module and seed the PRNG properly. Otherwise you will always get errors at the exact same sequence of calls.
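A minimal sketch of such seeding, assuming nginx is built with the lua-nginx-module (as in OpenResty), is an init_worker_by_lua_block in the http context:

```nginx
http {
    # Seed each worker's PRNG at startup with a per-worker value,
    # so the sequence of random numbers differs between workers
    # and between restarts.
    init_worker_by_lua_block {
        math.randomseed(ngx.time() + ngx.worker.pid())
    }
}
```

Mixing in the worker PID avoids all workers drawing the same sequence from an identical seed.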

VGA display connected to USB-C adapter stuck after power saving mode

I have a Philips 190SW VGA monitor which, while it cannot display at any impressive resolution, can be a decent second display for things you need to see, yet do not need to grab your immediate attention. So I got a USB-C to VGA adapter and hooked it up.

Sure enough, Windows 10 recognized the monitor, I arranged the displays to my liking, and was very happy. Only there was a problem. After power saving mode kicked in, Windows lost the monitor. It could never figure out that it should come out of power saving mode when I started working again. The quick, yet not very practical, solution was to unplug and replug the USB-C adapter.

Searching the Internet, I saw that I am not alone in this issue, and the proposed solutions vary and seem like random shots in the dark. There is even at least one adapter by StarTech offering a presentation mode that claims to never allow the (VGA) projector to go to sleep. The best hack I found (sorry, I’ve lost the reference) suggested:

> In my case, one of my VGA monitor’s was connected to a VGA to DVI connector going into my graphics card. I simply pulled PIN 12 from the VGA cable going into the VGA to DVI converter, problem solved. Monitor does not go into Power Savings mode, shown as Generic Non-PNP monitor in Device Manager. The signal for Auto Detect/PNP is not transmitted through to the VGA to DVI converter and the problem is gone!!!

Shortly before actually going through with this, I decided to boot into Linux (Ubuntu) and see whether the behavior was replicated there. It was not! Bang. So it must have something to do with how Windows handles the monitor. Rebooting back into Windows, I fired up the Device Manager and went ahead and updated the monitor driver to …Generic PnP Monitor. Success!

If you’ve scoured through the net for this very same problem with no solution, then this post might help you out too.

PS: Sometimes even the above trick does not help. In those cases it seems your only alternative is to disable turning off your screen altogether, and just set a screen saver.

tensorflow/serving exporting to Prometheus

This has pained me in the past because hardly any information was around. TensorFlow Serving allows for Prometheus metrics. However, figuring out how to write the appropriate monitoring configuration file was kind of harder than the length of the file itself:

    prometheus_config {
      enable: true,
      path: "/metrics"
    }

If you’re running many different TensorFlow Serving servers, you can make it a ConfigMap common to all of them. In such a case, part of your deployment manifest could look like:

      containers:
      - image: tensorflow/serving:1.14.0
        args:
        - --port=8500
        - --rest_api_port=8501
        - --model_config_file=/models/serving.config
        - --monitoring_config_file=/etc/monitoring.config
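As a sketch, such a ConfigMap could look like the following (the name tf-serving-monitoring is illustrative, not from my actual setup):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tf-serving-monitoring
data:
  monitoring.config: |
    prometheus_config {
      enable: true,
      path: "/metrics"
    }
```

You would then mount it as a volume in each pod so the file ends up at the path passed to --monitoring_config_file.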

Running redash on Kubernetes

Redash is a very handy tool that allows you to connect to various data sources and produce interesting graphs. Your BI people most likely love it already.

Redash makes use of Redis, Postgres and a number of services written in Django, as can be seen in this example docker-compose.yml file. However, there is very scarce information on how to run it on Kubernetes. I suspect that part of the reason is that while docker-compose.yml makes use of YAML’s merge feature, kubectl does not allow for it. So the templates that exist make a lot of redundant copies of a large block of lines. There must be a better way, right?

Since the example deployment with docker-compose runs all services on a single host, I decided to run my example deployment in a single pod with multiple containers. You can always switch to a deployment that better suits your needs afterwards.

Next came my quest of how to deal with the redundancy of the environment variables used by the different Redash containers. If only there were a template or macro language I could use. Well, the most readily available one, with the least installation hassle (if not already on your system), is m4. And you do not have to do weird stuff, as you will see. Using m4 allows us to run something like m4 redash-deployment-simple.m4 | kubectl apply -f - and be done with it:

define(redash_environment, `
        - name: PYTHONUNBUFFERED
          value: "0"
        - name: REDASH_REDIS_URL
          value: "redis://"
        - name: REDASH_MAIL_USERNAME
          value: "redash"
        - name: REDASH_MAIL_USE_TLS
          value: "true"
        - name: REDASH_MAIL_USE_SSL
          value: "false"
        - name: REDASH_MAIL_SERVER
          value: ""
        - name: REDASH_MAIL_PORT
          value: "587"
        - name: REDASH_MAIL_PASSWORD
          value: "password"
        - name: REDASH_MAIL_DEFAULT_SENDER
          value: ""
        - name: REDASH_LOG_LEVEL
          value: "INFO"
        - name: REDASH_DATABASE_URL
          value: "postgresql://redash:redash@"
        - name: REDASH_COOKIE_SECRET
          value: "not-so-secret"
        - name: REDASH_ADDITIONAL_QUERY_RUNNERS
          value: "redash.query_runner.python"')

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redash
  labels:
    app: redash
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redash
  strategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: redash
    spec:
      containers:
      - name: redis
        image: redis
        ports:
        - name: redis
          containerPort: 6379
      - name: postgres
        image: postgres:11
        env:
        - name: POSTGRES_USER
          value: redash
        - name: POSTGRES_PASSWORD
          value: redash
        - name: POSTGRES_DB
          value: redash
        ports:
        - name: postgres
          containerPort: 5432
      - name: server
        image: redash/redash
        args: [ "server" ]
        env: redash_environment
        - name: REDASH_WEB_WORKERS
          value: "2"
        ports:
        - name: redash
          containerPort: 5000
      - name: scheduler
        image: redash/redash
        args: [ "scheduler" ]
        env: redash_environment
        - name: QUEUES
          value: "celery"
        - name: WORKERS_COUNT
          value: "1"
      - name: scheduled-worker
        image: redash/redash
        args: [ "worker" ]
        env: redash_environment
        - name: QUEUES
          value: "scheduled_queries,schemas"
        - name: WORKERS_COUNT
          value: "1"
      - name: adhoc-worker
        image: redash/redash
        args: [ "worker" ]
        env: redash_environment
        - name: QUEUES
          value: "queries"
        - name: WORKERS_COUNT
          value: "1"
---
apiVersion: v1
kind: Service
metadata:
  name: redash-nodeport
spec:
  type: NodePort
  selector:
    app: redash
  ports:
  - port: 5000
    targetPort: 5000

You can grab redash-deployment.m4 from Pastebin. What we did above was define the macro redash_environment (taking care of proper indentation) and use it in the container definitions in the Pod, instead of copy-pasting that bunch of lines four times. Yes, you could have done it with any other template processor too.

You’re almost done. Postgres is not configured, so you need to connect and initialize the database:

$ kubectl exec -it redash-f8556648b-tw949 -c server -- bash
redash@redash-f8556648b-tw949:/app$ ./manage.py database create_tables
redash@redash-f8556648b-tw949:/app$ exit

I used the above configuration to quickly launch Redash on a Windows machine running the Docker Desktop Kubernetes distribution. For example, no persistent storage for Postgres is defined. In a production installation it could very well be that said Postgres lives outside the cluster, so there is no need for such a container. The same might hold true for the Redis container too.

What I wanted to demonstrate was that, in this specific circumstance, a 40+ year old tool can come to your assistance without you needing to install yet another templating tool. And also how to react in cases where you need YAML’s !!merge and it is not supported by your parser.

It turns out we do pay postage for our email

Well, not everybody, but some of us do. Let me explain myself:

12 years ago, after reading about TipJoy, a Y Combinator startup, I thought that such a scheme could be used to force mass senders to pay something in order to reach my inbox (the investment cost would ensure some legitimacy). I just thought of it kind of the wrong way, and not in terms of snail mail: I thought the recipient should be paid to read the email. But anyway, TipJoy folded, life did its own thing, and the idea was left to collect dust.

It turns out that a whole industry focused on email delivery sprang up in the meantime. IP reputation became a thing, and the small server you operated from your house was part of a dialup or DSL pool that was used by spambots. No matter your intentions and your rigor in setting up your email server, your sender reputation was next to nothing. The same held true for your ISP’s outgoing mail server too. The same is (still) true if you try to set up a VM at a cloud provider (if you’re allowed to send outgoing email at all). Once you needed to know about SMTP and POP3 only (IMAP too, if you wanted to be fancier). Now you need to learn new stuff too: SPF, DKIM, greylisting, RBL, DMARC; the list goes on. That’s how the sending industry was formed, providing guaranteed delivery, metrics like open rate and more complex analytics.
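For a taste of what some of these look like, SPF and DMARC boil down to plain DNS TXT records; the values below are illustrative only, not a recommendation:

```
example.com.         IN TXT "v=spf1 mx include:mailgun.org ~all"
_dmarc.example.com.  IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

The first says which hosts may send mail for the domain; the second says what the receiver should do when a message fails the checks, and where to send aggregate reports.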

Here I am now, some years later, running a small mailing list for the Greek database research community (I am not a database researcher, just a friend of the community). This mailing list always had reachability problems, even when I was running it on infrastructure I totally controlled while working at an ISP. Spam bots and inexperienced users always resulted in some sort of blacklisting that a single-person postmaster team struggled to handle. There were delivery problems, with a lot of personal time devoted to unblocking them.

Since I quit the postmaster business in 2014, I have been running the list using Mailgun (it could have been any other M3AAWG member; I just picked them because of Bob Metcalfe mentioning them on Twitter sometime). They used to provide a free tier, and the list was well within those limits, but they changed that. So there’s a monthly cost that varies from $1 to $10 depending on the traffic. Delivery has been stellar ever since I switched to Mailgun, and the few issues the mailing list has had were glitches on my side.

So it turns out that you do pay some postage to send your email after all, and the M3AAWG members are the couriers.

Moving from vagrant+VirtualBox to WSL2

For the past two years I had been working on Windows 10 Home. I had a setup that was really easy for me, using a vagrant box per project and minikube as my docker host. But …

… I wanted to upgrade to Windows 10 Pro. The Windows upgrade was relatively easy, and then the issues started. Hyper-V does not play nice with VirtualBox, which was the provider for my vagrant boxes. One option was to

> bcdedit /set hypervisorlaunchtype off

which allowed me to work as before, but one of the reasons to upgrade to Pro was to run Docker Desktop natively. With hypervisorlaunchtype set to off, this is not possible, since WSL2 does not run.

So I took a tarfile of each ~vagrant user per virtual machine, ran bcdedit /set hypervisorlaunchtype auto, rebooted, and had both WSL2 and Docker Desktop operational. You can also have minikube run on top of Hyper-V, and of course you can always run the Kubernetes that comes with Docker Desktop.

Because I did not have an elaborate setup in my VMs, the transition seems to have happened without many issues. I now have one user per project in my WSL and can still keep tarfile backups or whatever else is needed.

As for VMs, I am thinking of giving multipass a chance.

Greenspun’s tenth rule and mutt

Mutt is a mail client that I used probably from 199? to 2001. Prior to that I was using elm, and I felt that, because I could not understand its code, a switch was in order.

So what does Greenspun’s tenth rule have to do with a text mail client? Let’s remember what the rule says:

> Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

Yesterday, mutt version 2.0 was announced. I only found out about it because a friend pinged me. And sure enough, in the release notes MuttLisp is mentioned, a first version of which appeared here.

Being a mail client, mutt already follows Zawinski’s Law. Now it follows Greenspun’s tenth rule too.

Today’s #soundtrack

[ I posted this originally on Facebook, but since there’s some dust collected in the blog, well … ]

Today’s #soundtrack is two albums I had not listened to for a long long time:

  • Dreams of Freedom (Ambient Translations of Bob Marley)
  • Chant Down Babylon – a hip-hop remix of Bob Marley songs

As I am writing this, I decided that it is high time I put all my CDs in boxes, store them away, and make some space. (I parted with my rap / hip-hop collection in 2005; let’s part with the other half 15 years later.) For sentimental reasons I will only keep the John Coltrane and JAZZ & TZAZ magazine ones on display.

We do not even have a proper CD player anymore. We have a Tivoli Model One, and its aux port is shared between a Google Chromecast Audio (where we cast sound) and a portable CD player (also not used very often).

Streaming automation has made this almost a no-brainer choice. For now, at least. I think I own more music than I can listen to anyway. Even on streaming services I rarely look for new releases; I just look for the convenience of recalling something I already own, or know about and may not own for some reason. Like the first 8 albums of Iron Maiden 🙂