Note: this is not a comprehensive comparison of Kubernetes and Docker Cloud.
It is just based on my own experiences. I am also using Tutum and Docker Cloud
more or less interchangeably, since Tutum became Docker Cloud.
At work, we used to use Tutum for
orchestrating the Docker containers for our
Calculus practice problems site. While it was in
beta, Tutum was free, but Tutum has since become
Docker Cloud and costs about $15 per
managed node per month, on top of server costs. Although we got three free
nodes since we were Tutum beta testers, we still felt the pricing was a bit
steep, since the management costs would have been more than the hosting costs —
even more so since we would have needed more private Docker repositories than
what was included.
So I started looking for self-hosted alternatives. The one I settled on was
Kubernetes, which originated at Google. Obviously,
if you go self-hosted, you need to have enough system administration knowledge
to do it, whereas with Docker Cloud, you don't need to know anything about
system administration. It's also a bit more time consuming to set up — it
took me about a week to set up Kubernetes (though most of that time was
scripting the process so that we could do it again more quickly next time),
whereas with Tutum, it took less than a day to get up and running.
Kubernetes will require at least one server for itself — if you want to ensure
high availability, you'll want to run multiple masters. We're running on top
of CoreOS, and a 512MB node seems a bit tight for the master for our setup. A
1GB node was big enough that, although it is not recommended, I allowed the
master to schedule other pods as well.
Kubernetes seems to have a large-ish overhead on the worker nodes
(a.k.a. minions). Running top, the system processes take up at least 200MB,
which means that on a 512MB node, you'd only have about 300MB to run your own
pods unless you have swap space. I have no idea what the overhead on a
Tutum/Docker Cloud node was, since I didn't have access to check.
Previously, under Tutum, we were running on 5*512MB nodes, each of which had
512MB swap space. Currently, we're running on 3*1GB worker nodes plus 1*1GB
master node (which also serves as a worker), no swap. (We'll probably need to
add another worker, or perhaps another combined master/worker, in the near
future, though under Tutum, we would probably have needed another node anyway
with the changes that I'm planning.) Since we also moved from DigitalOcean to
DreamHost (affiliate link) and
their new DreamCompute service (which just came out of Beta as we were looking
into self-hosting), our new setup ended up costing $1 less per month.
Under Tutum, the only way to pass in configuration (other than baking it into
your Docker image, or unless you run your own configuration server) is through
environment variables. With Kubernetes, you have more options, such as
ConfigMaps and Secrets. That gives you more flexibility and (depending on your
setup) allows changing configuration on the fly. For example, I created an
auto-updating HAProxy configuration
that allows you to specify a configuration template via a ConfigMap. When you
update the ConfigMap, HAProxy gets immediately reconfigured with almost no
downtime. This is in contrast to the
Tutum equivalent, in which
a configuration change (via environment variables) would require a restart and
hence more downtime.
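As a rough sketch of what that looks like (the names and template contents here are illustrative, not our actual configuration), the HAProxy configuration template can live in a ConfigMap like this:

```yaml
# Hypothetical ConfigMap holding an HAProxy configuration template.
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-config
data:
  haproxy.cfg.tmpl: |
    frontend www
        bind *:80
        default_backend app
    backend app
        server app1 app-service:8080 check
```

When the ConfigMap is updated (e.g. with "kubectl apply -f"), the new template eventually becomes visible inside the HAProxy pod's mounted volume, and the pod can regenerate its configuration and reload HAProxy without being restarted.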
The other configuration methods also allow the configuration to be more
decoupled. For example, with Tutum's HAProxy, the configuration for a service,
such as its virtual host names, is specified using the target container's
environment variables, which means that if you want to change the set of
virtual hosts or the SSL certificate, you would need to restart your
application containers. Since our application server takes a little while to
restart, we want to avoid having to do that. On the other hand, if the
configuration were set in HAProxy's environment, then it would be lost to other
services that might want to use it (such as monitoring software that
might use the HTTP_CHECK variable). With a ConfigMap, however, the
configuration does not need to belong to one side or the other; it can stand on
its own, and so it doesn't interfere with the application container, and can be
accessed by other pods.
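To make that concrete (again with illustrative names), any pod that needs the shared configuration can mount the same ConfigMap as a read-only volume, so the settings live in neither the application's environment nor HAProxy's:

```yaml
# Hypothetical pod spec mounting a shared ConfigMap as a volume.
apiVersion: v1
kind: Pod
metadata:
  name: monitor
spec:
  containers:
    - name: monitor
      image: example/monitor:latest   # illustrative image name
      volumeMounts:
        - name: shared-config
          mountPath: /etc/shared      # config files appear here
          readOnly: true
  volumes:
    - name: shared-config
      configMap:
        name: haproxy-config          # the same ConfigMap other pods use
```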
Kubernetes can be configured entirely using YAML (or JSON) files, which means
that everything can be version controlled. Under Tutum, things are primarily
configured via the web interface, though they do have a command-line tool that
you could use as well. However, the command-line tool uses a different syntax
for creating versus updating, whereas with Kubernetes, you can just "kubectl
apply -f", so even if you use the Tutum CLI and keep a script under version
control for creating your services, it's easy to forget to update your script
after you've changed a service.
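For example (with hypothetical file names), the same command handles both the first deployment and every later update, so the version-controlled manifests stay the single source of truth:

```shell
# First deployment and all subsequent updates use the same command;
# kubectl apply creates the resource if it doesn't exist and
# updates it if it does.
kubectl apply -f deployment.yaml

# It also works on a whole directory of manifests:
kubectl apply -f manifests/
```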
There are a few things that Tutum does that Kubernetes doesn't do. For
example, Tutum has built-in node management (if you use AWS, DigitalOcean, or
one of the other providers that it is made to work with), whereas with
Kubernetes, you're responsible for setting up your own nodes. Though there are
apparently tools built on top of Kubernetes that do similar things, I never
really looked into them, since we currently don't need to bring up/take down
nodes very frequently. Tutum also has more deployment strategies (such as
"emptiest node" and "high availability"), which was not that important for us,
but might be more important for others.
Based on my experience so far, Kubernetes seems to be a better fit for us. For
people who are unable/unwilling to administer their own servers, Docker Cloud
would definitely be the better choice, and starting with Tutum definitely gave
me time to look around in the Docker ecosystem before diving into a self-hosted
solution.