/dev/null as a service

This is kind of an obvious trick, but you never know when you’re going to need a web server that just receives stuff and responds back that all is well, while discarding its input. If you’re in such a situation:

docker run -p 8080:8080 docker.elastic.co/logstash/logstash-oss:7.15.0 -e "input { http { } } output { sink {} }"

It puts the http input plugin and the sink output plugin to good use.
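
A quick way to test it from another terminal (the payload here is just an example); you should get a 200 back while the data itself goes nowhere:

curl -i -H 'Content-Type: application/json' -XPOST -d '{"message": "into the void"}' http://localhost:8080/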

Return a blank favicon.ico with Python bottle

Bottle is a fine framework when you need to quickly start a small Python application and serve web calls from it. However, you will soon notice that browsers visiting your application also ask for favicon.ico. Of course you can ignore those calls, but what if you want to serve them too?

One choice is to use server_static() and return the icon as described here. But what if you want to return a blank icon, since this is a quick hack anyway and you do not want to spend time hunting for an icon?

Starting from the description of a blank favicon, you can add this to your code:

import base64
from bottle import get, response

@get('/favicon.ico')
def get_favicon():
    response.content_type = 'image/x-icon'
    # the string is the base64-encoded blank 16x16 PNG; decode it and return the raw bytes
    return base64.b64decode(
        "iVBORw0KGgoAAAANSUhEUgAAABAAAAAQEAYAAABPYyMiAAAABmJLR0T///////8JWPfcAAAACXBIWXMAAABIAAAASABGyWs+AAAAF0lEQVRIx2NgGAWjYBSMglEwCkbBSAcACBAAAeaR9cIAAAAASUVORK5CYII="
    )

What if my Kubernetes cluster on AWS cannot pull from ECR?

When you run an RKE cluster on AWS, pulling images from an ECR is to be expected. However, it is not the easiest thing to do, since the credentials produced by aws ecr get-login expire every 12 hours and thus you need something to refresh them. In the Kubernetes world this means a CronJob.

What we do in this post is a simple improvement on other work here and here. The biggest difference is using the Bitnami docker images for aws-cli and kubectl to achieve the same result.

So we need a CronJob that will schedule a pod to run every hour and refresh the credentials. This job will:

  • run in a specific namespace and refresh the credentials there
  • have the ability to delete and re-create the credentials
  • do its best not to leak them
  • use "well known" images outside the ECR in question

To this end our Pod needs an initContainer that runs aws ecr get-login and stores the resulting token in an ephemeral space (an emptyDir volume) for the main container to pick up. The main container, in turn, picks up the password generated by the initContainer and completes the credential refresh.

Are we done yet? No, because by default the service account the Pod runs under does not have the ability to delete and create secrets. So we need to create an appropriate Role and RoleBinding for this.

All of the above is reproduced in the below YAML. If you do not wish to hardcode the AWS region and ECR URL, you can of course make them environment variables.

---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: dns
  name: ecr-secret-role
rules:
- apiGroups: [""]
  resources:
  - secrets
  - serviceaccounts
  - serviceaccounts/token
  verbs:
  - 'create'
  - 'delete'
  - 'get'
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: dns
  name: ecr-secret-rolebinding
subjects:
- kind: ServiceAccount
  name: default
  namespace: dns
roleRef:
  kind: Role
  name: ecr-secret-role
  apiGroup: ""
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: ecr-update-login
  namespace: dns
spec:
  schedule: "38 */1 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          initContainers:
          - name: awscli
            image: bitnami/aws-cli
            command:
            - /bin/bash
            - -c
            - |-
              aws ecr get-login --region REGION_HERE | cut -d' ' -f 6 > /ecr/ecr.token
            volumeMounts:
            - mountPath: /ecr
              name: ecr-volume
          containers:
          - name: kubectl
            image: bitnami/kubectl
            command:
            - /bin/bash
            - -c
            - |-
              kubectl -n dns delete secret --ignore-not-found ecr-registry
              kubectl -n dns create secret docker-registry ecr-registry --docker-username=AWS --docker-password=$(cat /ecr/ecr.token) --docker-server=ECR_URL_HERE
            volumeMounts:
            - mountPath: /ecr
              name: ecr-volume
          volumes:
          - name: ecr-volume
            emptyDir: {}

You can now use the ECR secret to pull images by adding to your Pod spec:

  imagePullSecrets:
  - name: ecr-registry
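
For example, a (hypothetical) Pod pulling an image hosted in the registry in question would look roughly like this:

apiVersion: v1
kind: Pod
metadata:
  name: ecr-test
  namespace: dns
spec:
  containers:
  - name: app
    image: ECR_URL_HERE/myapp:latest   # hypothetical image in the ECR
  imagePullSecrets:
  - name: ecr-registry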

And yes, it is possible to solve the same issue with instance profiles, by granting the nodes the following ECR actions:

"ecr:GetAuthorizationToken",
"ecr:BatchCheckLayerAvailability",
"ecr:GetDownloadUrlForLayer",
"ecr:GetRepositoryPolicy",
"ecr:DescribeRepositories",
"ecr:ListImages",
"ecr:BatchGetImage"
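
For reference, these actions usually end up in a policy statement attached to the node instance profile. A minimal sketch of such a statement (resource scoping left to you):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage"
      ],
      "Resource": "*"
    }
  ]
}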

This is a solution for when, for whatever reason, you cannot. Plus it may give you ideas for other helpers built around the aws-cli and kubectl images. Or even for other clouds and registries.

Why I like Groovy

I am writing this following a discussion with a colleague, where he pointed out his dislike of the Jenkins pipeline language and I commented that I liked Groovy. He sounded astonished, so this gives me a chance to elaborate a bit on that:

I run systems and I am not a Software Engineer, but I do write glue code all the time. The past few years I’ve come across a number of Jenkins installations, and Groovy is a bit of a required asset for more complex Jenkins stuff. That’s how I got my working, trial and error, knowledge of Groovy.

I happen to like functional languages. But since I am not a SWE, and since there were no FP jobs in my local market, I am paid to do the stuff people know I do well, not the stuff I want to play with. To this end, Groovy is the closest thing to FP I can get paid to work with. And it is a trick in life to find something that is play for you, that other people consider work and are willing to pay you for.
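
To give a feeling of what I mean, a toy example (nothing more) of the closures-plus-collection-methods style Groovy allows:

def hosts = ['web01', 'web02', 'db01']
def webHosts = hosts.findAll { it.startsWith('web') }         // filter
def urls = webHosts.collect { "https://${it}:8443/health" }   // map
println urls.inject('') { acc, u -> acc + u + ' ' }           // reduce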

Along the same track, Groovy runs on the JVM. Why not Clojure, you say? Because I can get paid to work with Groovy, whereas I would only be considered a junior engineer when seeking Clojure work. And time is a valuable and constrained resource: I do not have infinite free time to learn Clojure, but I did happen to be allowed to learn Groovy to save the day on a system.

Even though I first worked with Java when it ran on SPARC Solaris 2.3 machines, I decided not to invest my time in the language, foreseeing that such an investment would make me a monoglot, and I very much enjoy my ability to switch languages, even for glue code and/or projects. But, again, the JVM happens to be one of the best engineered pieces of software of the past 25 years and you will find it running almost everywhere. So when you run systems you need to understand it somehow, and you will sometimes be required to write code that runs on it.

Hence my answer: Groovy and the book next to me on the desk.

It is not my first choice, but it seems to be the optimal one for me. Today Groovy sits at number 12 on the TIOBE index and Go (my next similar choice, because of Kubernetes) at number 14.

Beware of true in Pod specifications

An unquoted true is parsed as a boolean in YAML and this can bite you when you least expect it. Consider the following extremely simple Pod and livenessProbe:

apiVersion: v1
kind: Pod
metadata:
  name: true-test
spec:
  containers:
  - name: nginx
    image: nginx
    livenessProbe:
      exec:
        command:
        - true

Let’s create this, shall we?

$ kubectl create -f true.yaml 

Error from server (BadRequest): error when creating "true.yaml": Pod in version "v1" cannot be handled as a Pod: v1.Pod.Spec: v1.PodSpec.Containers: []v1.Container: v1.Container.LivenessProbe: v1.Probe.Handler: Exec: v1.ExecAction.Command: []string: ReadString: expects " or n, but found t, error found in #10 byte of ...|ommand":[true]}},"na|..., bigger context ...|age":"nginx","livenessProbe":{"exec":{"command":[true]}},"name":"nginx"}]}}
|...
$

All because in the above specification true is not quoted, as it should be:

    livenessProbe:
      exec:
        command:
        - "true"

A different take, if you do not want to mess with true, would be:

    livenessProbe:
      exec:
        command:
        - exit

Beware though, if you want to exit 0 you need to quote again:

    livenessProbe:
      exec:
        command:
        - exit
        - "0"

Oh, the many ways you can waste your time…

When will those in charge finally learn?

TL;DR: Never.

One of the best lessons I ever got was at one of the Athens ISACA Chapter Infocom conferences. The speaker opened with something like:

– How many of you here have identified issues that will be real problems if they ever "blow up", and can see that management is doing nothing about them?

The audience looked around at each other, because quite simply the answer was: all of us.

The speaker, a consultant at one of the big 4, then said: this happens because management takes a bet: "It will not blow up on my shift."

How long is that shift? Two years? Three? After that they have moved on to another post or retired. When it blows up it is somebody else's problem, whoever happens to be holding it at the time.

That is why those in charge only care about projects that come with a ribbon to cut, because those advance their career and set up their next move. A firebreak or a brush clearing, what photo opportunities do those offer? None.

Fine, you will say, but what about the employee further down, the one who actually stays in the service over time? The employee, conscientious or not, is perfectly covered by the behaviour of the political leadership. When asked "and what exactly were you doing all this time?" they will open the drawer, pull out the memos (complete with protocol numbers) sent to every administration, and say: "This."

The public sector does not encourage personal risk-taking by its employees (if things go well they do not even get a "well done", because they spoiled the soup; if things go badly, nobody backs them, because they took the risk on their own) and it diffuses responsibility masterfully.

So every time, the person in charge makes a bet: will it blow up while I am sitting in this chair? If yes, let me get ahead of it. Otherwise, who will remember in two years that I did not do anything?

Management avoids errors of commission by making errors of omission.

[Originally a FB post]

Vagrant was unable to mount VirtualBox shared folders

After upgrading my ubuntu/focal64 box I got greeted by this wonderful message:

Vagrant was unable to mount VirtualBox shared folders. This is usually
because the filesystem "vboxsf" is not available. This filesystem is
made available via the VirtualBox Guest Additions and kernel module.
Please verify that these guest additions are properly installed in the
guest. This is not a bug in Vagrant and is usually caused by a faulty
Vagrant box. For context, the command attempted was:

mount -t vboxsf -o uid=1000,gid=1000 vagrant /vagrant

The error output from the command was:

: Invalid argument

The solution was rather simple: a sudo apt-get install -y virtualbox-guest-dkms inside the VirtualBox guest, followed by a vagrant reload on the host.
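
In other words, roughly:

$ vagrant ssh -c 'sudo apt-get install -y virtualbox-guest-dkms'
$ vagrant reload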

In case it matters, the VirtualBox host machine was an Ubuntu 21.04.

F1 (random) thoughts

I had not watched an F1 championship for years, maybe the occasional race once or twice per year. My interest in the sport was renewed by Formula 1: Drive to Survive. It offered a unique (if only partly realistic) insight into the sport. So I watched the second half of last year's championship and am watching the 2021 one as well.

I started wondering about the telemetry, monitoring and observability tools the teams use. After all, using your current understanding of things to understand something new is what we humans do most of the time. I understand monitoring and analytics infrastructures, so I have an interest in how people set these up in F1. Atlas 10 was mentioned in my FB feed by a friend.

I then started paying attention to the small advertising stickers on the cars. Not to the usual suspects like Oracle, Kaspersky and Citrix. JuliaHub, what are you doing there? For those who do not know it, Julia is a programming language for scientific computing. Not everything is about Python in computational science. Fascinating.

And then there was the most interesting observation: Tezos. I have seen it on both McLaren and Red Bull. And to show that advertising works: I once had $5 invested in Ethereum; I converted it to Tezos :)

While you’re here, bored, check this WIPO decision about the f1.com domain name once upon a time.

So which Jenkins system am I running on?

It is often the case that you run a staging / test Jenkins server whose jobs are configured identically to the production one. In such cases you want your pipeline to be able to distinguish which system it is running on.

One way to do so is by checking the value of the BUILD_URL environment variable. However, this is not very helpful when you are running the master inside a container, in which case you get back the container hostname in response.

There are also a number of solutions on StackOverflow you can look at, but you may opt to utilise the fact that you can label each master accordingly and then query it for the labels it carries. Our solution depends on the httpRequest plugin in order to query the master.

import groovy.json.JsonSlurper

def get_jenkins_master_labels() {
    def response = httpRequest httpMode: 'GET', url: "http://127.0.0.1:8080/computer/(master)/api/json"
    def j = new JsonSlurper().parseText(response.content)
    return j.assignedLabels.name
}

def MASTER_NODE = get_jenkins_master_labels()

pipeline {
    agent {
        label 'docker'
    }
    stages {
        stage("test") {
            steps {
                println MASTER_NODE
            }
        }
    }
}

The trick here is that the part outside of the pipeline { ... } block runs directly on the master, so we can go ahead and call http://127.0.0.1:8080/computer/(master)/api/json to figure things out. get_jenkins_master_labels() queries the master and returns a list of all the labels assigned to it (or a single string, master, if no other labels are assigned). By checking the values of the list, one can infer which Jenkins environment they are running in and continue from there.
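
For example, assuming you have added a (hypothetical) staging label to your staging master, the pipeline can branch on it inside a step or script block:

// hypothetical label name; use whatever you assigned to each master
if (MASTER_NODE.contains('staging')) {
    println 'This is the staging Jenkins'
} else {
    println 'This is the production Jenkins'
}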