Running tinyproxy inside Kubernetes

Now why would you want to do that? Because sometimes you have to fetch data from customers, and they whitelist specific IP addresses from which you may fetch it. But the whole point of Kubernetes is that, in general, you do not care where your process runs (as long as it runs somewhere on the cluster), and you also benefit from the elasticity it offers (you know, an unspoken design principle is that your clusters live in the cloud, on basically throwaway machines; alas, life has different plans).

So assuming that you have a somewhat elastic cluster, with some nodes that never get disposed of and some nodes added and deleted for elasticity, how do you access the customer's data from a fixed point? You run a proxy service (tinyproxy, haproxy, or whatever) on the fixed nodes of your cluster (which of course the customer has whitelisted). You first need to label those nodes somehow, like `kubectl label nodes worker-3 tinyproxy=worker-3` (the deployment below matches on the value of the `tinyproxy` label, so label each fixed node accordingly). You now need a docker container to run tinyproxy from. Let's build one using the Dockerfile below:

FROM centos:7
RUN yum install -y epel-release
RUN yum install -y tinyproxy
# This is needed to allow global access to tinyproxy.
# See the comments in tinyproxy.conf and tweak to your
# needs if you want something different.
RUN sed -i.bak -e 's/^Allow/#Allow/' /etc/tinyproxy/tinyproxy.conf
ENTRYPOINT [ "/usr/sbin/tinyproxy", "-d", "-c", "/etc/tinyproxy/tinyproxy.conf" ]
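
As an aside: instead of commenting out the `Allow` lines and letting anyone who can reach the proxy use it, you could restrict it to your pod network. A sketch, assuming the stock `Allow 127.0.0.1` line in tinyproxy.conf and a pod CIDR of `10.42.0.0/16` (Rancher's default; check your cluster's actual CIDR first):

RUN sed -i.bak -e 's|^Allow 127.0.0.1|Allow 10.42.0.0/16|' /etc/tinyproxy/tinyproxy.conf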

Let’s build the image and push it to Docker Hub (obviously you’ll want to push to your own Docker repository):

docker build -t yiorgos/tinyproxy .
docker push yiorgos/tinyproxy
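
Before wiring it into the cluster, a quick local sanity check does not hurt; tinyproxy listens on port 8888 by default, and curl's `-x` flag routes a request through a proxy:

docker run --rm -p 8888:8888 yiorgos/tinyproxy
# in another terminal:
curl -x http://127.0.0.1:8888 http://example.com/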

We can now create a Deployment in our cluster:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tinyproxy
  namespace: proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tinyproxy
  template:
    metadata:
      labels:
        app: tinyproxy
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: tinyproxy
                operator: In
                values:
                - worker-4
                - worker-3
      containers:
      - image: yiorgos/tinyproxy
        imagePullPolicy: Always
        name: tinyproxy
        ports:
        - containerPort: 8888
          name: 8888tcp02
          protocol: TCP
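
If you would rather label all the fixed nodes identically (for example `tinyproxy=true`), a plain nodeSelector achieves the same placement with less ceremony than the nodeAffinity block; a sketch of just the relevant pod spec fragment:

    spec:
      nodeSelector:
        tinyproxy: "true"

(The value must be quoted in the YAML, otherwise it is parsed as a boolean rather than a string.)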

We can apply the deployment with `kubectl apply -f tinyproxy-deployment.yml`. The trained eye will recognize, however, that this YAML file is somehow [Rancher](http://rancher.com) related. Indeed it is: I deployed the container with Rancher 2 and got the YAML back with `kubectl -n proxy get deployment tinyproxy -o yaml`. So what is left now to make it usable within the cluster? A Service to expose it to other deployments within Kubernetes:

apiVersion: v1
kind: Service
metadata:
  name: tinyproxy
  namespace: proxy
spec:
  type: ClusterIP
  selector:
    app: tinyproxy
  ports:
  - name: 8888tcp02
    port: 8888
    protocol: TCP
    targetPort: 8888

We apply this with `kubectl apply -f tinyproxy-service.yml` and we are now set. All pods within your cluster can now reach tinyproxy, and through it whatever they need, by connecting to `tinyproxy.proxy.svc.cluster.local:8888`. I am running this in its own namespace; this may come in handy if you use [network policies](https://kubernetes.io/docs/concepts/services-networking/network-policies/) and want to restrict which pods within the cluster can access the proxy server.
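
A quick smoke test from inside the cluster; `curlimages/curl` and the pod name are arbitrary choices here, and `--command` makes kubectl run exactly what follows the double dash:

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -x http://tinyproxy.proxy.svc.cluster.local:8888 http://example.com/

And if you do go the network policy route, a minimal sketch of the idea, assuming a hypothetical `proxy-access: "true"` label on the pods that are allowed to talk to the proxy (adjust the selectors to your own scheme):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: tinyproxy-ingress
  namespace: proxy
spec:
  podSelector:
    matchLabels:
      app: tinyproxy
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector: {}
      podSelector:
        matchLabels:
          proxy-access: "true"
    ports:
    - protocol: TCP
      port: 8888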

You can do the same with [haproxy](https://hub.docker.com/_/haproxy) containers instead of tinyproxy, if you so like.
