Building and testing on multiple platforms – introducing minicoin

Published Tuesday June 4th, 2019
Posted in Build system, C++, CI, Compilers, cross-platform, Dev Loop, Test

When working on Qt, we need to write code that builds and runs on multiple platforms, with various compiler versions and platform SDKs, all the time. Building code, running tests, reproducing reported bugs, or testing packages is at best cumbersome and time-consuming without easy access to the various machines locally. Keeping actual hardware around is an option that doesn’t scale particularly well. Maintaining a bunch of virtual machines is often a better option – but we still need to set those machines up, and find an efficient way to build and run our local code on them.

Building my local Qt 5 clone on different platforms to see if my latest local changes work (or at least compile) should be as simple as running “make”, perhaps with a few extra options. Something like

qt5 $ minicoin run windows10 macos1014 ubuntu1804 build-qt

should bring up three machines, configure them using the same steps that we ask Qt developers to follow when they set up their local machines (or that we use in our CI system Coin – hence the name), and then run the build job for the code in the local directory.

This (and a few other things) is possible now with minicoin. We can define virtual machines in code that we can share with each other like any other piece of source code. Setting up a well-defined virtual machine within which we can build our code takes just a few minutes.

minicoin is a set of scripts and conventions on top of Vagrant, with the goal to make building and testing cross-platform code easy. It is now available under the MIT license at https://git.qt.io/vohilshe/minicoin.

A small detour through engineering of large-scale and distributed systems

While working with large-scale (thousands of hosts), distributed (globally) systems, one of my favourite, albeit somewhat gruesome, metaphors was that of “servers as cattle” vs “servers as pets”. Pet-servers are those we groom manually, we keep them alive, and we give them nice names by which to remember and call (i.e. ssh into) them. However, once you are dealing with hundreds of machines, manually managing their configuration is no longer an option. And once you have thousands of machines, something will break all the time, and you need to be able to provision new machines quickly, and automatically, without having to manually follow a list of complicated instructions.

When working with such systems, we use configuration management systems such as CFEngine, Chef, Puppet, or Ansible, to automate the provisioning and configuration of machines. When working in the cloud, the entire machine definition becomes “infrastructure as code”. With these tools, servers become cattle which – so goes the rather unvegetarian idea – are simply “taken behind the barn and shot” when they don’t behave as they should. We can simply bring up a new machine, or an entire environment, by running the code that defines it. We can use the same code to bring production, development, and testing environments up, and we can look at the code to see exactly what the differences between those environments are. The tooling in this space is fairly complex, but even so there is little focus on developers writing native code targeting multiple platforms.

For us as developers, the machine we write our code on is most likely a pet. Our primary workstation dying is the stuff of nightmares, and setting up a new machine will probably keep us busy for many days. But this amount of love and care is perhaps not required for those machines that we only need for checking whether our code builds and runs correctly. We don’t need our test machines to be around for a long time, and we want to know exactly how they are set up so that we can compare things. Applying the concepts from cloud computing and systems engineering to this problem led me (back) to Vagrant, which is a popular tool for managing virtual machines locally and sharing development environments.

Vagrant basics

Vagrant gives us all the mechanisms to define and manage virtual machines. It knows how to talk to a local hypervisor (such as VirtualBox or VMware) to manage the life-cycle of a machine, and how to apply machine-specific configurations. Vagrant is written in Ruby, and the way to define a virtual machine is to write a Vagrantfile, using Ruby code in a pseudo-declarative way:

Vagrant.configure("2") do |config|
    config.vm.box = "generic/ubuntu1804"
    config.vm.provision "shell",
        inline: "echo Hello, World!"
end

Running “vagrant up” in a directory with that Vagrantfile will launch a new machine based on Ubuntu 18.04 (downloading the machine image from the vagrantcloud first), and then run “echo Hello, World!” within that machine. Once the machine is up, you can ssh into it and mess it up; when done, just kill it with “vagrant destroy”, leaving no traces.
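
The entire life-cycle of such a machine fits into a handful of commands:

$ vagrant up        # download the box if necessary, then boot and provision the machine
$ vagrant ssh       # log into the running guest
$ vagrant destroy   # shut the machine down and delete it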

For provisioning, Vagrant can run scripts on the guest, execute configuration management tools to apply policies and run playbooks, upload files, build and run docker containers, etc. Other configuration, such as networking, file sharing, or machine parameters like RAM, can be defined as well, in a more or less hypervisor-independent format. A single Vagrantfile can define multiple machines, and each machine can be based on a different OS image.
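
For example, a single Vagrantfile along these lines defines a Linux and a Windows machine side by side (a minimal sketch; the box names are merely examples of images published on the vagrantcloud):

Vagrant.configure("2") do |config|
    # a Linux guest, provisioned with an inline shell command
    config.vm.define "linux" do |linux|
        linux.vm.box = "generic/ubuntu1804"
        linux.vm.provision "shell", inline: "apt-get update"
    end
    # a Windows guest; Windows machines talk to Vagrant via WinRM
    config.vm.define "windows" do |windows|
        windows.vm.box = "gusztavvargadr/windows-10"
        windows.vm.communicator = "winrm"
    end
end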

However, Vagrant works on a fairly low level, and each platform requires different provisioning steps, which makes it cumbersome and repetitive to do essentially the same thing in several different ways. Also, each guest OS has slightly different behaviours (for instance, where uploaded files end up, or where shared folders are located). Some OSes don’t fully support all the capabilities (hello macOS), and of course running actual tasks is done differently on each OS. Finally, Vagrant assumes that the current working directory is where the Vagrantfile lives, which is not practical for developing native code.

minicoin status

minicoin provides abstractions that hide many of these platform-specific details, works around some of the guest OS limitations, and makes the definition of virtual machines fully declarative (using a YAML file; I’m by no means the first one with that idea, so shout-out to Scott Lowe). It defines a structure for providing standard provisioning steps (which I call “roles”) for configuring machines, and for jobs that can be executed on a machine. I hope the documentation gets you going, and I’d definitely like to hear your feedback. Implementing roles and jobs that support multiple platforms and distributions is sometimes just as complicated as writing cross-platform C++ code, but it’s still a bit less complex than hacking on Qt.
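
To illustrate the idea, a machine definition might look something like the following. This is a purely hypothetical sketch – the actual boxes.yml schema is documented in the repository, and the role names here are made up:

# hypothetical sketch of a declarative machine definition;
# see the minicoin documentation for the real boxes.yml schema
- name: ubuntu1804
  box: generic/ubuntu1804
  roles:                  # provisioning steps to apply to the machine
    - base
    - qt-builddeps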

We can’t give access to our ready-made machine images for Windows and macOS, but there are some scripts in “basebox” that I collected while setting up the various base boxes, and I’m happy to share my experiences if you want to set up your own (it’s mostly a matter of following Vagrant’s general instructions for setting up base boxes).
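
With VirtualBox, for example, turning a manually prepared VM into a box that Vagrant can consume boils down to two commands (“windows10” is just a placeholder for whatever the VM is called in VirtualBox):

$ vagrant package --base windows10 --output windows10.box   # export the VM into a box file
$ vagrant box add myboxes/windows10 windows10.box           # register the box with Vagrant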

Of course, this is far from done. Building Qt and Qt applications with the various compilers and toolchains works quite well, and saves me a fair bit of time when touching platform-specific code. Working within the machines is still somewhat clunky, but it should become easier as more jobs are defined. On the provisioning side, there is still a fair bit of work to be done before we can run our auto-tests reliably within a minicoin machine. I’ve experimented with different ways of setting up the build environments: from a simple shell script that installs things, to “insert CD with installed software”, to docker images (for example for setting up a box that builds for WebAssembly, building on Maurice’s excellent work in Using Docker to test WebAssembly).

Given the amount of discussion we have on the mailing list about “how to build things” (including documentation, where my journey into this rabbit hole started), perhaps this provides a mechanism for us to share our environments with each other. Ultimately, I’d like Coin and minicoin to converge, at least for the definition of the environments – there are already “coin nodes” defined as boxes, but I’m not sure if this is the right approach. In the end, anyone who wants to work with or contribute to Qt should be able to build and run their code in a way that is fairly close to how the CI system does things.



8 comments

isaac_Techno says:

Great article, just what I needed. The content on this site is awesome; keep up the good work, fellows.

Richard Weickelt says:

I am surprised that this blog post hasn’t received any comments yet. It’s such an important topic. Thanks for this informative post.

1. I have used Vagrant + a VM backend with Ansible on top for quite some time. I finally gave up on it, because it was often flaky and unreliable. Provisioning was slow, and switching between different VM configurations (branches) was almost unbearable. What’s your experience? How is that different with minicoin?
2. Could you clarify your usage of Docker in your setup?

I am using only Docker these days, on both Windows and Linux hosts. I found this to be the most reliable and scalable approach:
– a Docker image to have a standardized build & test environment
– docker-compose to make multiple containers easily manageable and to avoid the cumbersome docker command line
– Conan (conan.io) to manage source code / library dependencies

This also makes it very easy to switch between branches with different configurations. I am not planning to go back to Vagrant. Unfortunately that doesn’t work on macOS though.

Volker Hilsheimer says:

Hi Richard,

Thanks for your comment! My experience with the Vagrant + VM base box + provisioning script approach is that it’s really reliable; the quality of the base box makes a big difference of course, and part of the idea is that the VMs are rather short-lived anyway, so that they don’t accumulate state over time that would influence the results of builds and tests.

I don’t ever switch between VM configurations; the VM operates on the code that lives on my host machine. IO performance isn’t ideal, depending on the provider you use. As a general observation, VMware is significantly faster than VirtualBox for most things.

As for Docker: the “wasm-builder” box in the repo uses docker provisioning. The entry in boxes.yml points at a role that has a Dockerfile, so provisioning builds that Dockerfile into an image. The Dockerfile includes building upstream Qt using the wasm toolchain; it’s mostly just a copy of what Maurice showed in his blog post.

* https://git.qt.io/vohilshe/minicoin/blob/master/minicoin/boxes.yml#L88
* https://git.qt.io/vohilshe/minicoin/tree/master/minicoin/roles/wasm-builder

The provisioning script then defines a “make” that uses that docker image to run a build job, which the build-project job script respects. So,

$ minicoin run wasm-builder build-project

in one of the Qt examples should result in a web assembly build of that example, hosted by the VM so that you can point your browser at it.

Of course, one could just do that with docker directly on the host, but then I’d have different workflows depending on the target platform.

The overall reasons for not using docker are two-fold:

* I don’t just want a build server, I want a complete local test environment; so even if I could set up a docker image with a cross-compiling toolchain for Windows etc, I wouldn’t be able to run the resulting binaries. With many bugs only happening on e.g. macOS 10.12 or 10.13, I need VMs anyway.

* I want something that’s close to what developers using Qt use. Developers develop, build, and test their code on a local machine, install Qt packages on a local workstation, etc. If I get a bug report that something fails for someone on Windows 7 with VC++ 2005, then me answering “works with my docker image” isn’t really helping anyone, even if the docker image could be made available.

In short, a VM gives me a setup that’s closer to “production” than a docker container would be. It’s slower and heavier, but acceptably so. With Mac as host and a local Linux and Windows VM (snapshotted after provisioning) to build and run tests on, I can be fairly confident that my stuff works before I throw it at Coin, and spinning up a dedicated machine to investigate a specific bug or failure takes just a few minutes.

isaac_Techno says:

Thank you for your article, I am very interested.

Tehnick says:

Wow! Qt developers finally wrote their own build server for conveniently building for multiple platforms.

Thanks for sharing your work! This is a really interesting implementation.

I wrote my own simple build server (Sibuserv) a few years ago for building my projects for GNU/Linux, MS Windows, and Android. It has saved a lot of my time, and the time of my colleagues at work.

But due to the hardware limitations of our build PCs at that time (mid-range CPUs, little RAM, small and slow HDDs), my build server was based on the idea of cross-compilation for the target platforms. This way it required far fewer resources than systems which use containers and virtualization to launch full OSes.

In my case the implementation of the build scripts was much simpler than in yours, but I faced completely different difficulties – like automating test runs on the target systems, for example.

> to build and run their code in a way that is fairly close to how the CI system does things.

Exactly!

When I had finished implementing my simple build scripts, I thought that it would be nice to automate the build of each commit from our VCS and to store the produced executables and build logs. That was done. Next I added automated runs of a static code analysis tool. That was done too.

Finally I wanted to make a nice web UI to simplify interaction with this build server… and bang! It now looks almost like a CI system, it works almost like a CI system, and it may well become a full-featured CI system!

It is quite convenient that the same build scripts can be used to test the build of projects on a local PC before pushing commits to the VCS. And the build results will be identical, because the same build environments are used.

But these days my simple build server, and the CI based on it, have become less relevant:
1) developers’ PCs are much more powerful now (top CPUs, NVMe SSDs, lots of RAM);
2) we now have separate computers for server tasks;
3) there are now a lot of popular CI systems for every taste; they are widely used, very flexible to configure, and have a lot of convenient features.

For example, we are deploying GitLab CI at work now. (Earlier attempts to use Jenkins failed, but with GitLab CI everything works quite smoothly.) As for hobby FOSS projects, they actively use Travis CI, CircleCI, AppVeyor, and similar services.

Unfortunately, work on my CI is stuck, as I develop it in my spare time and all the features that were necessary for my hobby and work are already implemented. I still hope to add the missing features to the build server and web UI one day and make a stable release, but the chances of that are quite small.

Even though my project is free (under the MIT license) and anyone interested may adopt it, that most likely will never happen, because Sibuserv CI is currently in a pre-alpha state, and reworking and supporting it would require additional time and skills.

Sorry for such a long comment.

Volker Hilsheimer says:

Thanks! Looks like you have done some impressive engineering as well for your own system 🙂

FWIW, minicoin is not meant to be a build server; Coin (https://blog.qt.io/blog/2016/08/08/coin-continuous-integration-for-qt/) is perhaps closer to that, although we don’t usually grab the binary artefacts that Coin produces to develop locally.

Qt developers (as far as I have seen; the chaps working on Qt Creator etc might of course do things differently) generally build Qt from their local Git clone. minicoin hopefully helps with doing that across multiple platforms, but always locally. No server involved – at least not so far. Using minicoin to run a build in some cloud is possible in theory, but since the design is based on building the code on your local host, the data transfer would be messy (and probably costly). I generally don’t want to have to commit and push a change to a git remote just to see if my code compiles with e.g. MinGW.

Anyway, the reasons why we moved away from Jenkins and developed Coin instead are explained elsewhere (to some extent in Frederik’s post); using a fully hosted CI solution like Travis or GitLab CI has always been problematic because of Mac, mobile, and embedded platforms, and because compiling and testing Qt is perhaps more complex than what those systems usually deal with. But no matter which CI system has the final say about whether a change is good or not, I needed a local way to test my stuff before handing it over to the CI.

jason says:

I thought this was about a new blockchain currency.

Volker Hilsheimer says:

Sorry not sorry to disappoint! Perhaps scaling out your miners via VMs on a local laptop isn’t the best way to get rich quickly 🙂

