February 24, 2017
09:56 -0500
Hubert Chathi: Well, that's just embarrassing.
0 Comments
February 23, 2017
22:12 -0500
Hubert Chathi: Anyone proxied by @cloudflare.com or using sites proxied by them: your private data may have been leaked #
0 Comments
11:18 -0500
Hubert Chathi: SHA-1 is officially broken #
0 Comments
February 15, 2017
19:02 -0500
Hubert Chathi: RIP Stuart McLean
0 Comments
February 11, 2017
10:00 -0500
Hubert Chathi: Sign the petition to ask the government to honour their promise to fix our electoral system #
0 Comments
January 31, 2017
08:54 -0500
Hubert Chathi: Got our free Parks Canada Discovery Pass yesterday. Get yours at http://www.parksorders.ca/
0 Comments
January 25, 2017

On transparency

21:01 -0500

I've written briefly before about the value of companies being open and transparent. Back then, I wrote that the way companies react when things go wrong is a good way to differentiate between them. No matter what company you deal with, things will go wrong at one point or another. Some companies try to avoid responsibility, or only tell you that something has happened if you ask them. Other companies are much more open about what happened.

Matrix.org (and the associated Riot.im) is a team that falls into the latter category, and last night's incident is a good example. Their post-mortem blog post is a model for others to follow: it gives a detailed timeline of what happened and why the outage occurred, and it finishes off with the steps they will take to prevent future incidents.

Kudos to the Matrix.org team for their transparency.

0 Comments
December 25, 2016
09:30 -0500
Hubert Chathi: Merry Christmas
0 Comments
December 1, 2016

Let's Encrypt for Kubernetes

21:08 -0500

A while ago, I blogged about automatic Let's Encrypt certificate renewal with nginx. Since then, I've also set up renewal in our Kubernetes cluster.

As with nginx, I'm using acme-tiny to do the renewals. For Kubernetes, I created a Docker image. It reads the Let's Encrypt account key from /etc/acme-tiny/secrets/account.key, and CSR files from /etc/acme-tiny/csrs/{name}.csr. In Kubernetes, these can be set up by mounting a Secret and a ConfigMap, respectively. It also reads the current certificates from /etc/acme-tiny/certs/{name}, which should also be set up by mounting a ConfigMap (called certificates), since that is where the container will put the new certificates.
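The volume layout in the pod spec might look something like this (the image and volume names here are assumptions; only the mount paths come from the description above):

```yaml
# Sketch of the acme-tiny pod's volumes (illustrative only)
containers:
- name: acme-tiny
  image: example/acme-tiny        # hypothetical image name
  volumeMounts:
  - name: account-key
    mountPath: /etc/acme-tiny/secrets
    readOnly: true
  - name: csrs
    mountPath: /etc/acme-tiny/csrs
    readOnly: true
  - name: certificates
    mountPath: /etc/acme-tiny/certs
volumes:
- name: account-key
  secret:
    secretName: acme-account-key  # assumed Secret name
- name: csrs
  configMap:
    name: acme-csrs               # assumed ConfigMap name
- name: certificates
  configMap:
    name: certificates
```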

Starting an acme-tiny pod starts an nginx server to serve the .well-known directory for the Acme challenge. Running /opt/acme-tiny-utils/renew in the pod will renew a certificate if it will expire within 20 days (running it with the -f option disables the check). Of course, we want the renewal to be automated, so we need some sort of cron task. Kubernetes has had cron jobs since 1.4, but at the time I was setting this up, we were still on 1.3. Kubernetes also runs cron jobs by creating a new pod, whereas I want this to run a program in an existing pod (though it could be made to work the other way too). So I have another cron Docker image, which I have set up to run

kubectl exec `kubectl get pods --namespace=lb -l role=acme -o name | cut -d / -f 2` --namespace=lb ./renew sbscalculus.com

every day. That command finds the acme-tiny pod and runs the renew command, telling it to renew the sbscalculus.com certificate.
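The 20-day check in the renew script could be implemented along these lines (a sketch only; the real script's details may differ):

```python
# Sketch of the "renew only if expiring within 20 days" check (assumed logic)
from datetime import datetime, timedelta

RENEW_WINDOW = timedelta(days=20)

def needs_renewal(not_after: datetime, now: datetime, force: bool = False) -> bool:
    """Return True if the certificate expires within the renewal window.

    not_after is the certificate's expiry time, e.g. parsed from the output of
    `openssl x509 -enddate -noout -in cert.pem`. Passing force=True mirrors
    the -f option described above.
    """
    return force or (not_after - now) <= RENEW_WINDOW

now = datetime(2016, 12, 1)
print(needs_renewal(now + timedelta(days=15), now))              # True: renew
print(needs_renewal(now + timedelta(days=60), now))              # False: skip
print(needs_renewal(now + timedelta(days=60), now, force=True))  # True: forced
```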

Now, in order for the Acme challenge to work, HTTP requests to /.well-known/acme-challenge/ must be redirected to acme-tiny rather than to the regular pods serving those services. Our services are behind our HAProxy image, so I have a 0acmetiny entry (the 0 causes it to be sorted before all other entries) in the services ConfigMap for HAProxy that reads:

    {
      "namespace": "lb",
      "selector": {
        "role": "acme"
      },
      "hostnames": ["^.*"],
      "path": "^/\\.well-known/acme-challenge/.*$",
      "ports": [80]
    }

This causes HAProxy to route all the Acme challenges to the acme-tiny pod, while leaving all other requests alone.

And that's how we have our certificates automatically renewed from Let's Encrypt.

0 Comments
November 21, 2016
21:31 -0500
Hubert Chathi: Congratulations to @riot.im and @matrix.org for the beta release of cross-platform end-to-end encryption
0 Comments
November 8, 2016
08:09 -0500
Hubert Chathi: Americans: hold your nose and vote today. And may the lesser of two evils win.
0 Comments
November 3, 2016
16:34 -0400
Hubert Chathi: My solution to people not changing passwords on their devices: set the default password to "IAmAnIdiotForNotChangingThePassword"
0 Comments
October 25, 2016
09:27 -0400
Hubert Chathi: Amanda's picture was in our local newspaper
0 Comments
September 7, 2016
16:50 -0400
Hubert Chathi: Domain owners, beware of the Chinese Domain Scam. I've gotten some of these emails too.
0 Comments
September 6, 2016

Buildbot latent build slaves

12:28 -0400

I've blogged before about using Buildbot to build our application server. One problem with it is that the build (and testing) process can be memory-intensive, sometimes exceeding the memory we have available in our Kubernetes cluster. I could add another worker node, but that would be a waste, since we do builds infrequently.

Fortunately, the Buildbot developers have already built a solution to this: latent buildslaves. A latent buildslave is a virtual server that is provisioned on demand: when no build is active, we don't have to pay for an extra server; we only pay for the compute time we actually need for builds (plus a bit of storage space).

I chose to use AWS EC2 as the basis of our buildslave. Buildbot also supports OpenStack, so I could have used DreamCompute, which we already use for our Kubernetes cluster, but with AWS EC2 we can take advantage of spot instances and save even more money if we need to. In any event, the setup would have been pretty much the same.

Setting up a latent buildslave on AWS is pretty straightforward. First, I created an EC2 instance to build a base image for the buildslave, starting from a Debian image. Then I installed the necessary software: the buildslave itself (Debian package buildbot-slave), git, tinc, npm, and Docker. Most of our build process happens inside Docker containers, so we don't need anything else. We use tinc to build a virtual network with our Kubernetes cluster, so that we can push Docker images to our own private Docker repository.

After installing the necessary software, we need to configure it, just as we would for a normal buildslave: I configured tinc, added an ssh key so that the slave could check out our source code, configured Docker so that it could push to our repository, and of course configured the Buildbot slave itself. Once everything was configured, I cleaned up the image a bit (truncated logs, cleared the bash history, etc.) and took a snapshot in the AWS control panel, giving it a name so that it would show up as an AMI.

Finally, I added the latent buildslave to our Buildbot master configuration, giving it the name of the AMI that was created. Once set up, it ran pretty much as expected: I pushed out a change; Buildbot master created a new EC2 instance, built our application server, pushed and deployed it to our Kubernetes cluster, and, after a short delay (to make sure there were no other builds), deleted the EC2 instance. In all, the EC2 instance ran for about 20 minutes. Timings will vary, of course, but each run will take less than an hour. If we were paying full price for a t2.micro instance in us-east-1, each build would cost just over 1 cent. We also need to add in the storage cost for the AMI, which, given that I started with an 8GB image, will cost us at most 80 cents per month (since EBS snapshots don't store empty blocks, it should be less than that). We probably average about two builds a month, giving us an average monthly cost of at most 83 cents, which is not too bad.
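As a rough sanity check on those numbers (the prices below are approximate 2016 figures and upper bounds, not exact quotes):

```python
# Back-of-the-envelope cost check (assumed approximate 2016 prices)
T2_MICRO_PER_HOUR = 0.013          # USD, us-east-1 on-demand (assumed)
AMI_STORAGE_PER_GB_MONTH = 0.10    # USD/GB-month upper bound (assumed)

per_build = T2_MICRO_PER_HOUR * 1  # each build runs under an hour, rounded up
ami_storage = 8 * AMI_STORAGE_PER_GB_MONTH  # 8 GB base image, at most
monthly = 2 * per_build + ami_storage       # about two builds a month

print(round(per_build * 100, 1))   # cents per build: just over 1 cent
print(round(monthly * 100, 1))     # cents per month: under 83 cents
```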

0 Comments
August 16, 2016
12:55 -0400
Hubert Chathi: Sad to hear about the fire at the @cloverleaffarms.ca meat processing plant
0 Comments
August 8, 2016
02:05 -0400
Hubert Chathi: Success! Congrats Jesse and Patricia
0 Comments
July 8, 2016
13:58 -0400
Hubert Chathi: wants one: http://www.robotical.io/
0 Comments
June 22, 2016

Load balancing Kubernetes pods

10:10 -0400

At work, we recently switched from Tutum (now Docker Cloud) to Kubernetes. Part of that work was building up a load balancer. Kubernetes has built-in load-balancing capabilities, but they only work with Google Compute or AWS, which we are not using. They also require a public IP address for each service, which usually means extra (unnecessary) cost.

Having previously worked with Tutum's HAProxy image, I figured I could do the same thing with Kubernetes. A quick web search didn't turn up any existing project, so I quickly wrote my own. Basically, we have HAProxy handling all incoming HTTP(S) connections and passing them off to different services based on the Host header. There's a watcher that watches Kubernetes for relevant changes (such as new or deleted pods) and updates the HAProxy configuration so that requests always go to the right place. I also improved the setup by adding a Varnish cache for some of our services. Here's how it all works.

We have two sets of pods: a set of HAProxy pods and a set of Varnish pods. Each pod has a Python process that watches etcd for Kubernetes changes, updates the appropriate (HAProxy or Varnish) configuration, and tells HAProxy/Varnish about the new configuration. Why watch etcd instead of using the Kubernetes API directly? Because, as far as I can tell, the Kubernetes API only lets you watch one type of object (pods, ConfigMaps, Secrets, etc.) at a time, whereas we need to watch several types at once. Using the Kubernetes API would therefore mean making multiple simultaneous API requests, which would just complicate things.
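The dispatch side of such a watcher might look roughly like this (a sketch only; the key prefixes are assumptions based on Kubernetes' default etcd layout, and `regenerate` stands in for the real config-rebuild-and-reload step):

```python
# Sketch: decide which etcd changes matter and trigger a config rebuild
WATCHED_PREFIXES = {
    "/registry/pods/": "pods",
    "/registry/configmaps/": "configmaps",
    "/registry/secrets/": "secrets",
}

def affected_kind(key: str):
    """Return which kind of watched object an etcd change touches, or None."""
    for prefix, kind in WATCHED_PREFIXES.items():
        if key.startswith(prefix):
            return kind
    return None

def handle_event(key, regenerate):
    """Call `regenerate(kind)` only for changes we care about.

    `regenerate` is a callback that would rebuild the HAProxy/Varnish
    configuration and tell the proxy to reload it.
    """
    kind = affected_kind(key)
    if kind is not None:
        regenerate(kind)

changes = []
handle_event("/registry/pods/lb/acme-tiny-abc12", changes.append)
handle_event("/registry/events/lb/something-else", changes.append)
print(changes)  # ['pods'] -- the irrelevant event was ignored
```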

Unlike Tutum's HAProxy image, which only lets you change certain settings using environment variables, our entire configuration template is customizable using Jinja2 templates. This gives us a lot more flexibility, including being able to plug in Varnish fairly easily without making any code changes to the HAProxy configurator. Also, configuration variables for services are stored in their own ConfigMap, rather than as environment variables in the target pods, which allows us to make configuration changes without restarting the pods.

When combining HAProxy and Varnish, one question to ask is how to arrange them: HAProxy in front of Varnish, or Varnish in front of HAProxy? We are using a setup similar to the one recommended in the HAProxy blog. In that setup, HAProxy handles all requests and passes non-cacheable requests directly to the backend servers. Cacheable requests are, of course, passed to Varnish. If Varnish has a cache miss, it passes the request back to HAProxy, which then hands it off to the backend server. As the article points out, a cache miss involves a lot of hops, but cache misses should be very infrequent since Varnish only sees cacheable content. One main difference between our setup and the one in the article is that in the article, HAProxy listens on two IP addresses: one for requests coming from the public, and one for requests coming from Varnish. In our setup, we don't have two IP addresses for HAProxy to use; instead, Varnish adds a request header indicating that the request came from it, and HAProxy checks for that header.
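In HAProxy terms, that header check might look something like this (a sketch only; the header name, ACL names, and backend names are assumptions):

```
# Sketch of the relevant frontend rules (names are illustrative)
frontend http-in
    bind *:80
    # Requests Varnish passes back to us carry a marker header
    acl from_varnish req.hdr(X-From-Varnish) -m found
    # Example of a cacheable-traffic ACL; the real rules are generated
    acl cacheable path_beg /static/
    # Cacheable traffic that has not already been through Varnish goes to Varnish
    use_backend varnish if cacheable !from_varnish
    default_backend app-servers
```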

At first, I set the Python process as the pod's command (the pod's PID 1), but ran into a slight issue. HAProxy reloads its configuration by, well, not reloading its configuration; it starts a new set of processes with the new configuration, which means that we ended up with a lot of zombie processes. To fix this, I could have changed the Python process to reap the zombies, but it was easier to just use Yelp's dumb-init instead.
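With dumb-init, the image's entrypoint simply wraps the watcher so that PID 1 reaps the orphaned HAProxy processes (a sketch; the paths are assumptions):

```dockerfile
# Sketch: run the watcher under dumb-init so zombies get reaped (paths assumed)
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["python", "/opt/watcher/watch.py"]
```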

We have the HAProxy pods managed as a DaemonSet, so one runs on every node, and the pods use host networking for better performance. HAProxy itself is light enough that, at least with our current traffic, it doesn't affect the nodes much, so running it on every node isn't a problem for us right now. If we get enough traffic that it does make a difference, we can dedicate a node to it without much trouble. One nice thing about this setup is that, even though it uses Kubernetes, HAProxy and Varnish don't need to be managed by Kubernetes; they just need to be able to talk to etcd. So if we ever need a dedicated load balancer, we can spin up a node (or nodes) that just runs HAProxy and/or Varnish, say, using a DaemonSet and a nodeSelector. Varnish is managed as a normal Kubernetes deployment and uses the normal container networking, so there's a bit of overhead there, but it's fine for now. Again, if performance becomes more of a concern, we can change our configuration easily enough.

It all seems to be working fairly well so far. There are some configuration tweaks I'll still have to make, and there's one strange issue where Varnish doesn't like one of our services and just returns an empty response. But other than that, Varnish and HAProxy are doing what they're supposed to do.

All the code is available on GitHub (HAProxy, Varnish).

0 Comments
June 21, 2016
16:03 -0400
Hubert Chathi: At 14 months, # had 14 teeth. He's now 15 months and has 15 teeth, and is on track to have 16 teeth by the time he's 16 months. If current trends continue, by the time he's 4 years old, he'll have 48 teeth. #
0 Comments