May 24, 2015

First steps in CI/CD with Buildbot

15:59 -0400

At work, I've been looking into Continuous Integration and Continuous Delivery/Deployment. So far, our procedures have been mostly manual, which means that some things take longer than necessary, and sometimes things get missed. The more that can be automated, the less developer time has to be spent on mundane tasks, and the less brain power is needed to remember all the steps.

There are many CI solutions out there, and after investigating a bunch of them, I settled on using Buildbot for a few reasons:

  • it can manage multiple codebases for the same project, unlike many of the simpler CI tools. This is important since the back end for the next iteration of our product is based on plugins that live in individual git repositories.
  • it is lightweight enough to run on our low-powered VPS.
  • it has a flexible configuration language (its configuration file is Python code) and is easily extendable.

Right now, we're in development mode for our product, and I want to make sure that our development test site is always running the latest available code. That means combining plugins together, running unit tests, and if everything checks out, deploying. Eventually, my hope is to be able to tag a branch and have our production site update automatically.

The setup

Our code has one main tree, with plugins each in their own directory within a special plugins directory. The development test site should track the master branch of the main tree and of all relevant plugins. For ease of deployment (especially for new development environments), we want to use git submodules to pull in all the relevant plugins. However, the master branch is the basis of all deployments, which may use different plugins or different versions of plugins, so it should not specify any plugins itself. Instead, we have one branch for each deployed version, which includes as submodules the plugins that are used for that build.

The builds

Managing git submodules can be a bit of a pain, especially since we're not developing on the branch that actually contains the submodules: keeping them up to date manually would require switching branches, pulling the correct versions of the submodules, and pushing.

The first step in automation, then, is to automatically update the deployment branch whenever a plugin or the main tree is updated. Buildbot is configured with a list of the plugins used in a deployment branch, along with the branch that each follows. Each plugin is associated with a repository, and we use Buildbot's codebase setting to keep the plugins separate. A scheduler listens on the appropriate codebases and triggers a build whenever anything is pushed. A Buildbot slave then checks out the latest version of the deployment branch, merges in the changes to the main tree and the submodules, and pushes out a new version of the deployment branch.
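
As a rough illustration, the relevant parts of master.cfg look something like the sketch below. The repository URLs, branch names, and step commands are placeholders rather than our actual configuration, the import paths are from the Buildbot 0.8 series, and c is the usual BuildmasterConfig dictionary.

    # Hypothetical repositories: the main tree plus two plugins.
    from buildbot.changes.gitpoller import GitPoller
    from buildbot.process.factory import BuildFactory
    from buildbot.steps.source.git import Git
    from buildbot.steps.shell import ShellCommand

    REPOS = {
        'main':     'git@example.com:product/main.git',
        'plugin-a': 'git@example.com:product/plugin-a.git',
        'plugin-b': 'git@example.com:product/plugin-b.git',
    }

    # Give each repository its own codebase so Buildbot keeps their
    # changes, branches, and revisions separate.
    REPO_TO_CODEBASE = {url: name for name, url in REPOS.items()}
    c['codebaseGenerator'] = lambda chdict: REPO_TO_CODEBASE[chdict['repository']]

    # Watch the master branch of every repository.
    c['change_source'] = [
        GitPoller(repourl=url, branches=['master'], pollInterval=60)
        for url in REPOS.values()
    ]

    # The merge build: check out the deployment branch, merge in the
    # latest main tree, update the plugin submodules, and push.
    merge = BuildFactory()
    merge.addStep(Git(repourl=REPOS['main'], codebase='main',
                      branch='deploy-dev', alwaysUseLatest=True,
                      mode='full', method='fresh', haltOnFailure=True))
    merge.addStep(ShellCommand(name='merge main tree',
                               command='git fetch origin && git merge origin/master',
                               haltOnFailure=True))
    merge.addStep(ShellCommand(name='update plugin submodules',
                               command=['git', 'submodule', 'update',
                                        '--remote', '--merge'],
                               haltOnFailure=True))
    merge.addStep(ShellCommand(name='commit and push',
                               command='git commit -am "Update plugins" ; '
                                       'git push origin deploy-dev',
                               haltOnFailure=True))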

Naturally, pushes to plugins and to the main tree are generally grouped. For example, changes in one plugin may require, or enable, changes in other plugins. We don't want a separate commit in our deployment branch for each change in each plugin, so we take advantage of Buildbot's ability to merge changes. We also wait for the git repositories to be stable for two minutes before running the build, to make sure that all the changes are caught. This reduces the number of commits we have in our deployment branch, making things a bit cleaner.
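
The scheduler side of this is just a matter of listing the codebases and setting a tree-stable timer. Building on the hypothetical REPOS dictionary from the sketch above, it looks roughly like this:

    # One scheduler covering all codebases; treeStableTimer gives the
    # two-minute quiet period so grouped pushes yield a single build.
    from buildbot.changes.filter import ChangeFilter
    from buildbot.schedulers.basic import SingleBranchScheduler

    c['schedulers'] = [
        SingleBranchScheduler(
            name='update-deploy-dev',
            change_filter=ChangeFilter(branch='master'),
            treeStableTimer=120,
            codebases={name: {'repository': url, 'branch': 'master'}
                       for name, url in REPOS.items()},
            builderNames=['merge-deploy-dev'],
        ),
    ]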

When Buildbot pushes out a new version of the deployment branch, this in turn triggers another build in Buildbot. Buildbot checks out the full sources, including submodules, installs the required node modules, installs configuration files for testing, and then runs the unit tests. If the tests all pass, then this triggers yet another build.
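
In outline, the test builder's factory is along the lines of the sketch below. The npm commands and the config file path are stand-ins for whatever the project actually uses, and the final step hands off to the deployment build via a Triggerable scheduler, which is one way to express "only deploy if the tests passed".

    # Test builder: full checkout with submodules, install dependencies,
    # put test configuration in place, run the tests, then trigger deploy.
    from buildbot.process.factory import BuildFactory
    from buildbot.steps.source.git import Git
    from buildbot.steps.shell import ShellCommand
    from buildbot.steps.trigger import Trigger

    test = BuildFactory()
    test.addStep(Git(repourl=REPOS['main'], branch='deploy-dev',
                     alwaysUseLatest=True, submodules=True,
                     mode='full', method='fresh', haltOnFailure=True))
    test.addStep(ShellCommand(name='install node modules',
                              command=['npm', 'install'],
                              haltOnFailure=True))
    test.addStep(ShellCommand(name='install test config',
                              command=['cp', 'config/test.json', 'config.json'],
                              haltOnFailure=True))
    test.addStep(ShellCommand(name='unit tests',
                              command=['npm', 'test'],
                              haltOnFailure=True))
    # 'trigger-deploy' is a Triggerable scheduler attached to the deploy
    # builder; it only fires because every earlier step halts on failure.
    test.addStep(Trigger(schedulerNames=['trigger-deploy'],
                         waitForFinish=False))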

The final build checks out the latest sources into the web application directory for the test site, and then notifies the web server (currently using Passenger) to restart the web application.
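
That last build is the simplest of the three. Here is a sketch, assuming the test site lives in a fixed directory on the slave's machine and that Passenger's usual restart-file mechanism is used; the path is a placeholder:

    # Deploy builder: refresh the checkout that the web server serves
    # from, then touch tmp/restart.txt so Passenger restarts the app.
    from buildbot.process.factory import BuildFactory
    from buildbot.steps.shell import ShellCommand

    SITE_DIR = '/srv/www/dev-site'  # hypothetical web application directory

    deploy = BuildFactory()
    deploy.addStep(ShellCommand(
        name='update sources',
        command='git pull && git submodule update --init --recursive',
        workdir=SITE_DIR, haltOnFailure=True))
    deploy.addStep(ShellCommand(
        name='restart application',
        command=['touch', 'tmp/restart.txt'],
        workdir=SITE_DIR, haltOnFailure=True))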

Next steps

This setup seems to be working fairly well so far, but it isn't complete yet. As a first attempt, it surely has room for improvement, both in flexibility and in performance. And since we only have one site being updated, the configuration works fine for now, but it could probably be made more general so that deploying multiple sites is easier.

One major issue in the current setup, though, is the lack of notifications. Currently, in order to check the state of the build, I need to view Buildbot's web UI, which is inconvenient. Buildbot has email notification built in, but I just haven't had the chance to set it up yet. When I do set it up, I will likely set it to notify on any failure, as well as whenever a deployment is made.
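
When I do, it should just be a matter of adding a couple of MailNotifier status targets, roughly like this sketch (the addresses and builder name are placeholders, and the module path is for the 0.8.x series):

    # Email on any failed build, plus a note whenever the deploy builder
    # finishes successfully. c['status'] is the usual status-target list.
    from buildbot.status.mail import MailNotifier

    c['status'].append(MailNotifier(
        fromaddr='buildbot@example.com',
        sendToInterestedUsers=False,
        extraRecipients=['dev-team@example.com'],
        mode=('failing',)))

    c['status'].append(MailNotifier(
        fromaddr='buildbot@example.com',
        sendToInterestedUsers=False,
        extraRecipients=['dev-team@example.com'],
        builders=['deploy-dev-site'],
        mode=('passing',)))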

I'd also like to get XMPP notifications, which aren't built into Buildbot, so that's something I would have to write myself. Buildbot is based on Twisted, which has an XMPP module built in, so it should be doable. I think the XMPP module is a bit outdated, but we don't need any fancy functionality, so hopefully it will work well enough.

I'm looking into using Docker for deployments once we're ready to push this project to production, which means creating build steps for Docker. The VPS that we're currently using for Buildbot is OpenVZ-based and does not support Docker, so we'd need to put a Buildbot slave on a Docker-capable host for building and testing the Docker images, or, even better, use a Docker container as a Buildbot slave.

There's probably a lot that can be done to improve the output in the UI too. For example, when the unit tests are run, Buildbot only reports whether the test step passed or failed. It should be possible to create a custom build step that reports how many tests failed.
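
One plausible approach is to subclass the standard Test step and parse the test runner's summary line; something like the sketch below, where the summary format in the regular expression is hypothetical and the module paths are again from the 0.8.x series:

    # Custom step: run the tests and report pass/fail counts in the UI
    # by parsing a hypothetical "N passing, M failing" summary line.
    import re

    from buildbot.status.results import SUCCESS, FAILURE
    from buildbot.steps.shell import Test

    class CountedTest(Test):
        def evaluateCommand(self, cmd):
            stdio = cmd.logs['stdio'].getText() if 'stdio' in cmd.logs else ''
            match = re.search(r'(\d+) passing, (\d+) failing', stdio)
            if match:
                passed, failed = int(match.group(1)), int(match.group(2))
                self.setTestResults(total=passed + failed,
                                    passed=passed, failed=failed)
                return SUCCESS if failed == 0 else FAILURE
            # Summary line not found: fall back to the exit code.
            return Test.evaluateCommand(self, cmd)

    # It would replace the plain test step in the factory, e.g.:
    #   test.addStep(CountedTest(command=['npm', 'test'], haltOnFailure=True))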

Assessment

Although Buildbot seems like the best fit for our setup, it isn't perfect. The main thing that I'd like is better project support. Buildbot allows you to set projects on change sets, but I'd like to be able to set projects on builds as well, in order to filter by projects in the waterfall view.

All in all, Buildbot seems like a worthwhile tool that is flexible, yet easy enough to configure. It's no-nonsense and just does what it claims to do. The documentation is well done, and for simple projects, you should be able to just dive right in without any issues. For more complex projects, it's helpful to understand what's going on before charging right in. Of course, I just charged right in without understanding certain concepts, so I had to redo some things to make them work better, but the fact that I was able to get it working in the first place, even doing it the wrong way, gives some indication of its power.

April 15, 2015

Switching to nginx

09:17 -0400

I think that I've been running lighttpd for almost as long as I've had a VPS, but I've recently decided to switch to nginx.

The main reason that I've decided to switch is that lighttpd no longer seems to be actively developed. They still do bug-fix releases, but aside from that, development seems to have stalled. They have been working on their 1.5 branch for years without marking it as stable. In fact, they even started working on a 2.0 branch without first releasing 1.5, which was a warning sign that development was losing focus.

Nginx has some weirdnesses and unexpected design decisions of its own, though.

One feature that I will miss from lighttpd is its ability to automatically split SCRIPT_NAME and PATH_INFO based on which files actually exist on the filesystem. I depend on that feature in my own CMS, so I'll have to implement it myself; slightly inconvenient, but not a big deal.
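
The idea itself is simple: find the longest leading portion of the request path that exists as a file under the document root, call that SCRIPT_NAME, and treat the remainder as PATH_INFO. A sketch of what I'll need to write, in Python purely for illustration (the document root is a placeholder):

    # Split a request URI into SCRIPT_NAME and PATH_INFO by finding the
    # longest prefix that exists as a file under the document root,
    # roughly what lighttpd does automatically.
    import os.path

    DOCROOT = '/srv/www/site'  # placeholder document root

    def split_script_path(uri):
        parts = uri.strip('/').split('/')
        # Try the longest candidate first, dropping one component at a time.
        for i in range(len(parts), 0, -1):
            if os.path.isfile(os.path.join(DOCROOT, *parts[:i])):
                script_name = '/' + '/'.join(parts[:i])
                path_info = '/' + '/'.join(parts[i:]) if i < len(parts) else ''
                return script_name, path_info
        return '', uri  # nothing matched; let the application sort it out

    # With /srv/www/site/app.fcgi on disk:
    #   split_script_path('/app.fcgi/posts/42') == ('/app.fcgi', '/posts/42')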

I slightly prefer the lighttpd configuration file format, but that could be just a matter of what I'm used to.

Switching to nginx means that I'll be able to try out Passenger, which seems like a very interesting application server.

I've already switched my dev machine. Next I'll switch our home server, and once I have the CMS changes done, I'll switch my VPS.

March 29, 2015

New album: 2015-03 (20 pictures)

15:28 -0400
View entire album.

Photos of Gareth from March 2015.

December 30, 2013

Random testing

17:55 -0500

My current project at work requires implementing non-trivial data structures and algorithms, and despite my best efforts (a unit test suite of over 600 assertions), I don't have everything unit tested. In order to find bugs in my code, I've created a randomized tester.

First of all, the code is structured so that all operations are decoupled from the interface, which means that it can be scripted; anything that a user can do from the interface can also be done programmatically. Of course, this is a requirement for any testable code.

I want to make sure that the code is tested in a variety of scenarios, but without having to create the tests manually. So I let the computer generate them (pseudo)randomly. Basically, my tester starts with a document (which, for now, is hard-coded). The program then creates a series of random operations to apply to the document: it randomly selects a type of operation, and then randomly generates an operation of that type. It then runs some tests on the resulting document, and checks for errors.

Most of the time, when doing random things, you don't want things to be repeatable; if you write a program to generate coin flips, you don't want the same results every time you run the program. In this case, however, I need to be able to re-run the exact same tests over and over; if the tests find a bug, I need to be able to recreate the conditions leading to the bug, so that I can find the cause. Unfortunately, JavaScript's default random number generator (unlike those in many other programming languages) is automatically seeded, and provides no way of setting the seed. That isn't a major problem, though; we just need to use an alternate random number generator. In this case, I used an implementation of the Mersenne Twister. Now, I just hard-code the seed, and every time I run the tester, I get the same results. And if I want a different test, I just need to change the seed.
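
The overall shape of the tester is roughly as follows, sketched in Python for brevity rather than the JavaScript it's actually written in (conveniently, Python's random module is itself a Mersenne Twister, so seeding it gives the same kind of repeatability). The operations and the invariant check are trivial stand-ins for the real document operations under test:

    # Randomized tester sketch: a hard-coded seed makes every run
    # reproducible, so any failure can be replayed and investigated.
    import random

    SEED = 12345  # change the seed to generate a different test run

    def apply_operation(doc, op):
        # Stand-in for the real, decoupled document operations under test.
        if op[0] == 'insert':
            _, pos, char = op
            return doc[:pos] + [char] + doc[pos:]
        if op[0] == 'delete' and doc:
            return doc[:op[1]] + doc[op[1] + 1:]
        return doc

    def random_operation(rng, doc):
        # Pick an operation type at random, then random parameters for it.
        if rng.choice(['insert', 'delete']) == 'insert':
            return ('insert', rng.randrange(len(doc) + 1), rng.choice('abcdefgh'))
        return ('delete', rng.randrange(len(doc)) if doc else 0)

    def run(steps=10000):
        rng = random.Random(SEED)    # seeded Mersenne Twister
        doc = list('hello world')    # hard-coded starting document
        for _ in range(steps):
            doc = apply_operation(doc, random_operation(rng, doc))
            assert all(len(ch) == 1 for ch in doc)  # invariant checks go here
        return doc

    if __name__ == '__main__':
        print(''.join(run()))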

It seems to be working well so far. I've managed to squish some bugs, uncover some logic errors, and, of course, fix some silly mistakes too. The main downside is that I can't be sure that the random tests cover all possible scenarios, but the sheer number of tests that are generated far exceeds what I could reasonably write by hand, and my hand-written tests weren't covering all possible scenarios anyways.

Addendum: I should add that when the randomized tester finds a bug, I try to distill it to a minimal test case and create a new unit test based on the randomized tester result.

December 12, 2013

LinkedIn RSS feed retirement

14:06 -0500

Dear LinkedIn,

Since you are retiring the LinkedIn Network RSS Feed as of December 19, I will be visiting LinkedIn even less. Removal of the RSS feed makes it less convenient for me to follow my network activity, and as a result, I will not be using LinkedIn as much as I used to.

December 3, 2013

Jungle Gym

10:23 -0500
Date: 2013-12-02
Place: home
People: Ivan

The jungle gym that we built for the kids is a hit with our little fire fighter.

September 29, 2013

Switching to RamNode

00:13 -0400

After many years at UltraHosting, I've switched my VPS hosting to RamNode (affiliate link). The switch should have been fairly transparent, except that my Jabber server would have been unreachable for a little while as DNS records propagated.

Why did I switch?

First of all, RamNode is much less expensive. For all that technology has been getting cheaper and better, I had been paying the same for my VPS as when I had first ordered it, with no improvements. With RamNode, I'm paying less and getting a better system (twice the RAM, twice the CPU, and supposedly faster thanks to SSD caching).

Secondly, RamNode supports IPv6. Each VPS gets 16 IPv6 addresses (which IMHO is overkill, but I'm not complaining). Native, not tunnelled. I find it surprising that most providers still don't support IPv6. UltraHosting didn't even support TUN/TAP, so I couldn't even get a tunnelled IPv6.

Finally, from the reviews that I've read, RamNode has really great support, and they are very open about maintenance issues. I would rank my experience with UltraHosting's support as "mediocre": they dealt with my tickets fine, but it was still lacking. I had ongoing time synchronization issues, and I wasn't provided with information regarding their outgoing SMTP setup (necessary for proper SPF records). I haven't had to deal with RamNode support yet, but the reviews indicate that they are very responsive, and the fact that they publicize maintenance issues is promising.

But aren't you worried about the NSA?

RamNode is a US-based company, and the servers are located in the US or Netherlands, whereas UltraHosting is a Canadian company, and the servers are located in Canada. With all the noise about the NSA recently, it might seem risky to move my data to the US.

However, my server doesn't store any private information — aside from my SSL key (which the NSA can already spoof, due to the nature of SSL), and a limited SSH key to back up my data to my home server. Everything else that's stored on my server, the NSA already has access to. (Pretty much all the email that I send and receive passes through the US already.)

I am, however, taking some extra precautions, and avoiding transmitting any sensitive data to or through my server unprotected. But those are good security practices anyways.

So, overall, the NSA and surveillance in general are a concern, but they don't really affect my VPS.

September 6, 2013

The chief virtues of permaculture: a tongue-in-cheek introduction to permaculture

15:57 -0400

Note: This article was originally written for the Beaver Creek Dam News.

Larry Wall, the creator of the Perl computer programming language, identifies the chief virtues of a programmer as laziness, impatience, and hubris (and I, as a programmer, have plenty of all three). I would say that laziness is also one of the chief virtues of permaculture. The term "permaculture" comes from shortening "permanent agriculture," or "permanent culture," with the aim of creating an ecosystem that will outlast the designer. (I can think of no greater state of laziness than being dead.) The ideal permaculture design is one in which the only work that needs to be done is harvesting. While the reality is that no design will completely eliminate human work, even if only to guide the progression of the ecosystem, the goal is to avoid as much work as possible. Most of the heavy lifting, once the design is established, is done by nature. For example, rather than spraying insecticides to kill crop-eating bugs, permaculture lures in beneficial organisms to control pests. Rather than heavily fertilizing and tilling soil, permaculture uses the plants themselves, along with bacteria, fungi, worms, and bugs to build soil fertility. Permaculturists also often enlist the aid of animals in order to till and fertilize the soil, to control weeds, and sometimes even to help harvest. While traditional gardening is dominated by annuals, which must be planted (and in some cases transplanted) every year, permaculture places a greater emphasis on perennial and self-sowing plants.

Although the end goal is to avoid doing work, the necessity is that a lot of work must be done in planning and developing a permaculture design. One must determine which plants are needed in order to fulfil the required functions. The soil may need a jump start (especially if the soil had previously been abused), often through a technique known as sheet mulching, which can be very labour intensive. However, the ultimate payoff is a garden that mostly cares for itself and that requires much less labour than conventional gardening (and maybe even less labour than going to the grocery store).

Another virtue of permaculture is greediness. Permaculture tries to squeeze as much productivity out of the land as it can. Conventional gardens grow individual crops in each location. Meanwhile, forest gardening, one of the keystones of permaculture, aims to have seven layers of plants, all growing together (or possibly eight layers, if you take Stamets’ advice of growing mushrooms). Forest gardening attempts to mimic the way that forests grow in nature. Forests are highly productive areas, requiring no input from humans to achieve such a high level of productivity. While most people will think about the trees when thinking about a forest, the forest would not be as productive or as healthy without the shrubs, ground cover, flowers, and vines. Permaculture also aims to reduce "wasted" space, such as paths for accessing the plants, using patterns such as keyhole beds. Keyhole beds also encourage laziness: you can sit in the middle of a keyhole bed and harvest an array of crops around you.

Not satisfied with just demanding much from the land, permaculture expects much from plants as well. Most gardeners will grow a plant for a single purpose, such as ornamental value or edibility. However, a single plant can perform multiple functions, such as providing food, attracting or sustaining beneficial organisms, providing shade during hot summer days, providing shelter from cold winds, improving soil, providing beautiful flowers, providing fragrance, or providing wood for fuel or for crafts. Permaculturists try to use plants for as many functions as they can provide.

A third virtue of permaculture is attention deficit. Modern farming's monoculture is a permaculturist's nightmare. Boooooring! For permaculture, variety is king. I think that Amanda has lost count of the number of times that I've exclaimed, "Hey, we should grow this plant too!" It is not uncommon for permaculturists to cultivate over a hundred species of plants in a single garden. Diversity improves the chances of survival. While a monoculture can be wiped out by a single type of pest or by unusual weather, a diverse ecosystem is more resilient. If one crop fails one year, other crops can take its place. A diverse ecosystem is also less likely to attract devastating quantities of pests in the first place — a monoculture looks like an all-you-can-eat buffet, while a garden with interplanted crops requires more work for the pests to travel between their favourite meals. Furthermore, pest eaters may be lying in wait, having been initially attracted by their favourite snacks. Having a diversity of plants can ensure that the pest eaters are around year-round: when one plant's flowering season is over, another plant can take over.

Similar to how hard work is required before laziness is allowed, a permaculture design must start with careful observation before the mind is allowed to wander. Permaculture design starts with observing different aspects of the site for factors such as soil composition, sun and wind patterns, wildlife, water, et cetera, at various times of the day, and throughout the year. Some permaculture experts suggest observing for a full year before doing any planting. Observation informs the design, indicating, for example, where different plants need to go in order to take advantage of sun and shade, where barriers are needed, what soil alterations are needed, or what plant functions should be sought out.

I believe, then, that the chief virtues of permaculture are: laziness, greediness, and attention deficit. Amanda suggested rephrasing them as: good time management, good stewardship of the land, and diversity, but I don't mind being called a lazy, greedy guy with a short attenti... Hey, we should grow this plant too!

Further reading

If you want to learn more about permaculture, I would recommend two books as a good starting point. Toby Hemenway's Gaia's Garden is a very down-to-earth (pardon the pun), easy to read book with many helpful drawings, tables, and examples. It outlines all the basic permaculture principles, and explains how to create a permaculture design.

For those who would rather read a story than a manual, Eric Toensmeier and Jonathan Bates’ Paradise Lot is a book about how two friends developed a thriving permaculture garden on a tenth-of-an-acre lot in the middle of the city. With humour and romance (who knew that silkworm caterpillars would make such a wonderful gift for your sweetheart?), the book gives a flavour of the process of designing and establishing a permaculture garden. Although it is primarily the story of a single permaculture garden, and does not go into as much detail about different techniques as Gaia's Garden, Paradise Lot still contains a lot of helpful information.

September 3, 2013

St. Jacobs Farmers Market fire roundup

16:38 -0400

We went to see the St. Jacobs Farmers Market yesterday evening. They had the whole site taped off, but it's the busiest that I've seen the Market area on a non-Market day.

#SJFMFire was trending in Canada yesterday.

Waterloo Region Fire has their album of the fire online.

The un-burned parts of the market (outdoor vendors and Peddlar's Village) will be open on Thursday and Saturday.

There is an (unofficial) listing of vendors and alternate locations.

There's already talk of rebuilding, but nothing concrete yet.

May 21, 2013

Google's walled garden

15:12 -0400

If you have me as a contact in Google Talk, you may no longer be able to chat with me, because Google seems to be dropping support for chatting with non-Google accounts.

Google is dropping XMPP (Jabber) federation from their new chat system, which is more or less the same as preventing you from emailing non-GMail users from your GMail account, except with instant messaging. The instant messaging space is already a fragmented mess, and Jabber was the only possibility for unification.

Google was the first major company to provide public, federated Jabber accounts outside of jabber.org (though they didn't support federation at the beginning). They even contributed to the XMPP standards through their Jingle protocol (though Google had their own incompatible version and AFAIK never fully supported the final official Jingle protocol).

But it looks like Google is taking steps to becoming its own walled garden. It dropped CalDAV support recently (except for certain whitelisted applications). Google+ (and before that, Google Buzz) doesn't interoperate with any other system, nor does it seem to be built with interoperation in mind. They completely dropped Reader. They killed Wave before they released it. Now, with XMPP federation gone, Google's only interoperable products are GMail, Groups, and Blogger.
