Maurice Kalinowski

Using Docker to test Qt for WebAssembly

Published Tuesday March 5th, 2019

There has been a lot of excitement around WebAssembly, and more specifically Qt for WebAssembly, recently. Unfortunately, there are no snapshots available yet. And even once there are, you still need to install a couple of requirements locally to set up your development environment.
I wanted to try it out, and the purpose of this post is to create a developer environment to test a project against the current state of this port. This is where Docker comes in.
Historically, Docker has been used to create web apps in the cloud: it scales easily, provides implicit sandboxing, and is lightweight. Well, at least more lightweight than a whole virtual machine.
These days, usage covers many more cases:

  • Build Environment (standalone)
  • Development Environment (to create SDKs for other users)
  • Continuous Integration (run tests inside a container)
  • Embedded runtime

Containers as an embedded runtime are not part of this post, but the highlights of this approach are:

  • Usage for application development
  • Resources can be controlled
  • Again, sandboxing and other security-related items
  • Cloud features like App Deployment management, OTA, etc…

We will probably get more into this in a later post. In the meantime, you can also check what our partners at Toradex are up to with their Torizon project. Also, Burkard wrote an interesting article about using Docker in conjunction with OpenEmbedded.
Let’s get back to Qt for WebAssembly and how to tackle the goal. The assumption is that we have a working project for another platform written with Qt.
The target is to create a container capable of compiling the above project against Qt for WebAssembly. Next, we want to test the application in the browser.

1. Creating a dev environment

Morten has written a superb introduction on how to compile Qt for WebAssembly. Long term, we will create binaries for you to download via the Qt Installer. But for now, we aim for something minimal, where we do not need to take care of setting up dependencies ourselves. Another advantage of using Docker Hub is that the resulting image can be shared with anyone.
The first part of the Dockerfile looks like this:

FROM trzeci/emscripten AS qtbuilder

RUN mkdir -p /development
WORKDIR /development

RUN git clone --branch=5.13 git://code.qt.io/qt/qt5.git

WORKDIR /development/qt5

RUN ./init-repository

RUN mkdir -p /development/qt5_build
WORKDIR /development/qt5_build

RUN /development/qt5/configure -xplatform wasm-emscripten -nomake examples -nomake tests -opensource --confirm-license
RUN make -j `grep -c '^processor' /proc/cpuinfo`
RUN make install

Browsing through Docker Hub shows a lot of potential starting points. In this case, I’ve selected a base image which has Emscripten installed and can be used directly in the follow-up steps.
The next steps are essentially a one-to-one copy of the build instructions.
For now, we have one huge container with all build artifacts (object files, generated mocs, …), which is too big to be shared, and those artifacts are not needed to move on. Some people use volume sharing for this: the build happens on a mount from the host system into the container, and make install copies the results into the image. Personally, I prefer not to clobber my host system for this part.
Later versions of Docker support multi-stage builds, which allow you to create a new image and copy content from a previous one into it. To achieve this, the remaining Dockerfile looks like this:

FROM trzeci/emscripten AS userbuild

COPY --from=qtbuilder /usr/local/Qt-5.13.0/ /usr/local/Qt-5.13.0/
ENV PATH="/usr/local/Qt-5.13.0/bin:${PATH}"
WORKDIR /project/build
CMD qmake /project/source && make

Again, we use the same base image to have em++ and friends available, and copy the installation content of the Qt build into the new image. Next, we add it to the PATH and change the working directory. The location will be important later. CMD specifies the command to execute when the container is launched non-interactively.

 

2. Using the dev environment / Compile your app

 

The image to use for testing an application is now created. To test the build of a project, create a build directory and invoke Docker as follows:

 

docker run --rm -v <project_source>:/project/source -v <build_directory>:/project/build maukalinow/qtwasm_builder:latest

This will launch the container, call qmake and make, and leave the build artifacts in your build directory. Inside the container, /project/build is the build directory, which is the reason for setting the working directory above.
To reduce typing this each time, I created a minimal batch script for myself (yes, I am a Windows person 🙂 ). You can find it here.
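For Linux/macOS users, a rough shell equivalent of such a wrapper might look like this (a sketch only; the actual batch script linked above is not reproduced here):

```shell
#!/bin/sh
# Wrap the docker invocation from above so only the two directories
# need to be passed on the command line.
qtwasm_build() {
    if [ "$#" -ne 2 ]; then
        echo "usage: qtwasm_build <project_source> <build_directory>" >&2
        return 1
    fi
    docker run --rm \
        -v "$1":/project/source \
        -v "$2":/project/build \
        maukalinow/qtwasm_builder:latest
}
```

Called as, for example, `qtwasm_build ~/src/myapp ~/build/myapp-wasm`, it leaves the generated files in the given build directory, exactly like the plain docker command.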

 

3. Test the app / Containers again

Hopefully, you have been able to compile your project, possibly with some adjustments, and now it is time to verify correct behavior at runtime in the browser. What we need is a web server to serve the generated content. Again, Docker can be of help here. With no specific preference (mostly just taking the first hit on the hub), you can launch a server by calling:

 

docker run --rm -p 8090:8080 -v <project_dir>:/app/public:ro netresearch/node-webserver

This will create a web server which you can reach from your local browser. Here’s a screenshot of the animated tiles example:

[Screenshot: animated tiles example running in the browser]

 

Also, are you aware that Qt HTTP Server was introduced recently? It might be an interesting idea to encapsulate it into a container and check whether additional files can be removed to minimize the image size. For more information, check our posts here.
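As a rough sketch of that idea, one could package the generated files into a small static-file server image. Here nginx is a stand-in for an actual Qt HTTP Server container; the base image and paths are assumptions for illustration, not a tested recipe:

```dockerfile
# Illustrative only: serve the qmake/make output with a stock nginx image.
FROM nginx:alpine
# Copy the generated .html, .js, and .wasm files into the web root.
COPY build/ /usr/share/nginx/html/
EXPOSE 80
```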

If you want to try out the image itself, you can find and pull it from here.

 

I’d like to close this post by asking for some feedback from you readers and getting a discussion going.

  • Are you using containers in conjunction with Qt already?
  • What are your use-cases?
  • Did you try out Qt in conjunction with containers already? What’s your experience?
  • Would you expect Qt to provide “something” out-of-the-box? What would that be?

 



22 comments

Richard says:

I am using Docker mostly for CI purposes. It simplifies building Qt projects with multiple Qt versions for multiple target architectures on Linux and Windows hosts (I use Linux as a run-time for both). Writing Dockerfiles for Windows containers feels a bit awkward because in PowerShell things seem to be not as straightforward as they could be, but that’s maybe just my limited skills.

Instead of hand-crafting batch files to simplify the docker command line, I use docker-compose because it’s more portable between Windows and Linux hosts and makes it simpler to use multiple images in the same project.

When running on different Linux hosts and mounting the current directory as the working directory, you may also want to add an entrypoint script that adjusts the uid/gid of the container user to match the owner of the mounted directory. Docker often runs as root on Linux and files created during build would have root as the owner which is not what most people want.
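A sketch of such an entrypoint idea (the helper name, the “developer” user, and the usermod/gosu approach are assumptions for illustration, not taken from a real image):

```shell
#!/bin/sh
# Read the uid/gid of the mounted project directory, so the container
# user could be adjusted to match before any build artifacts are written.
owner_ids() {
    uid=$(stat -c %u "$1")
    gid=$(stat -c %g "$1")
    echo "$uid:$gid"
}

# In a real entrypoint running as root, one would then do roughly:
#   groupmod -o -g "$gid" developer
#   usermod  -o -u "$uid" developer
#   exec gosu developer "$@"
```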

BTW: The Qt installer tool really sucks in unattended mode because it’s missing even essential command-line options like the install directory. The scripting interface is hardly documented, and the interface keeps changing. I don’t understand why the Qt company pays so little attention to this. Therefore, and because I need Qt for different target architectures, I prefer to build Qt on my own for every Docker image.

The Qt company could simplify that by providing ready-to-use Docker images for all actively maintained Qt versions and for different target architectures.

Cheers.

Maurice Kalinowski says:

Writing Dockerfiles for Windows containers feels a bit awkward because in PowerShell things seem to be not as straightforward as they could be, but that’s maybe just my limited skills.

I can totally see that. Also, the minimum size for a Core image is pretty tough. I’ve played around with Nano Server as well, however that is a completely different beast. It uses (yet again) a stripped-down version of the Win32 API, making it yet another Windows derivative that Qt would have to be ported to.

Instead of hand-crafting batch files to simplify the docker command line, I use docker-compose because it’s more portable between Windows and Linux hosts and makes it simpler to use multiple images in the same project.

Interesting. So far my experience with docker-compose was rather to use it for having multiple containers interact as “one system”. The creation of the images themselves was still done in Dockerfiles. Could you elaborate a bit more on this?

Docker often runs as root on Linux and files created during build would have root as the owner which is not what most people want.

Good point, I should take that into account. Thx for the pointer.

I cannot comment much on the IFW items, as I haven’t been using it for a while now. The JS interface only took care of some items, but at least the install location should work. Did you create a bug report for this?

The Qt company could simplify that by providing ready-to-use Docker images for all actively maintained Qt versions and for different target architectures.

I’ve been thinking about this and initially considered the same. However, what should the base image be? Are you expecting it for all existing Linux distributions as well? If not, what could be considered the best distribution to use as a base? Many use Ubuntu, some Alpine, etc. I guess you understand what I mean.
In regards to target architectures, that is also possible. Then again, for optimal performance you might still want images tailored towards dedicated hardware, which we could hardly provide for all.
Some documentation on how to achieve this might be beneficial though, and could be the bare minimum.

Thanks for the feedback.

Richard says:

It’s not clear to me how citing and formatting work in your blog system, and there is neither a preview nor an edit function, so please forgive my probably messy post.

> Could you elaborate a bit more on this (docker-compose)
I guess the intention behind docker-compose was to orchestrate multiple containers. But it is also helpful for avoiding longish docker command lines. This is an example docker-compose file for a project that contains a single Dockerfile and defines a single service (debian):

version: "3.7"
services:
  debian:
    build:
      context: ./docker/debian/
      dockerfile: Dockerfile
    volumes:
      - ./:/project
      - ~/.conan/data:/home/developer/.conan/data
    working_dir: /project

networks:
  default:
    external:
      name: bridge

When the service is built from a Dockerfile like the example above, I need to do this once:
docker-compose build debian

And then, whenever I want to spin up an interactive bash session in that container, I type:
docker-compose run --rm debian

My entrypoint script starts a login shell if I don’t provide any additional arguments.

> However, what should the base image be? Are you expecting it for all existing Linux distributions as well? If not, what could be considered the best distribution to use as base? Many use ubuntu, some alpine, etc.. I guess you understand what I mean with this.

It’s not that important, especially when cross-building for embedded. I guess that Debian or Ubuntu would fit the needs of >80% of Linux devs. It’s more important to provide recipes that actually work and that developers can tailor according to their needs. And for this purpose, Dockerfiles based upon whatever distribution are handy. It would already be a big advantage for embedded folks to have example configure command lines for building Qt.

> Some documentation on how to achieve this might be beneficial though and could be the bare minimum.
I’d rather vote for executable recipes, including some explanatory comments. Keep them in a repo, connect them to a CI system, and you solve two problems at once: providing Docker images AND documentation (the code). You can still include those recipes as code snippets in your Qt docs, of course.

Maurice Kalinowski says:

Ah I see, you’re using docker-compose on one specific project. In that case that makes perfect sense.
My purpose was to keep it as generic as possible, meaning you can use it for any kind of project as long as you specify source and build dirs as arguments.

It’s more important to provide recipes that actually work and that developers can tailor according to their needs. And for this purpose, Dockerfiles based upon whatever distribution are handy. It would already be a big advantage for embedded folks to have example configure command lines for building Qt.

Right. Maybe it is just my naive view to assume that the existing documentation is sufficient. What basically happens is that I take the “local execution” documentation and put it into a Dockerfile, prepending RUN as a first step. From my experience that works fairly well.
Starting from there, the modifications you make usually depend on your project, hardware, and specific requirements.

Alex says:

I have to chime in here.

We also started to use Docker recently as a development environment, since our setup (ROS + custom-compiled Qt) is very complicated and it would be hard to match the setup between multiple developers. Now we can even match the system setup (Docker image) with the VCS version.

I agree that the Qt SDK should provide command-line options for installations; so far I have to prepare my own zip files for Docker scripts and CI.

We also run Docker matching the current user’s UID and GID; moreover, we have to run in privileged mode and mount some file systems, since we match the developers’ environment. And yes, we run Qt with a GUI in it, which is somewhat tricky across Nvidia, Intel, and AMD users.

Here is a project that pretty much lists everything that needs to be done to achieve this: https://github.com/mviereck/x11docker

In my opinion, Docker is the way to go forward with future embedded and system developer setups.

Just wanted to share this in case you are thinking about adopting Docker in the Qt environment.

Eli says:

Haha, no way! Just yesterday I was testing WebAssembly on my Ubuntu virtual machine via the 5.13 alpha but was unable to compile even a simple hello-world QML app, so I searched the web for ready-to-use Docker images but could not find any… I will give this a try as soon as I can.

Thanks!

Fredrik Orderud says:

Thank you for a really great article, Maurice! I’m looking forward to testing the WASM support myself.

One open question for me is CMake support. Do you have an overview of to what extent CMake-based projects are supported when targeting WASM?

Maurice Kalinowski says:

Unfortunately, no. Maybe someone with more in-depth knowledge of the topic can chime in.

Fabio says:

I have never used Docker (though I may give it a try once I am able to).

I would prefer that for the Qt 5.13 Beta 1 you provide pre-compiled binaries for WebAssembly (so we don’t need to compile Qt manually), and that these work on Windows directly, without the Linux subsystem.

Also, provide us with complete instructions for what we need to install (Emscripten etc.) and the specific versions, so it compiles out of the box.

I guess this would allow many more people to give Qt for WebAssembly a try (later on, Qt Creator integration will improve this “easy to try” aspect even further).

Maurice Kalinowski says:

Pre-compiled binaries will happen, no doubt about that.
I just took the current status as a motivation to do something with Docker. So far it had felt like something I could not get my hands on, and this has been a fun experiment.

In regards to “complete instructions” I pointed to the previous blog post, which contains all the steps. Is there anything missing from there?

Fabio says:

Maurice, I haven’t tried the instructions yet. I was waiting for the pre-compiled libs.

I think this is nice work that you have done here, and it will allow others to test Qt for WebAssembly. I just wanted to point out that pre-compiled libs and native Windows support will probably have a bigger impact on getting people to try this new platform.

But of course, anything at this stage that helps with building / deploying / testing is very welcome, so thanks for your work in creating this image. I may even try it myself.

Vincas says:

> RUN make -j `grep -c '^processor' /proc/cpuinfo`

Maybe you could use -j`nproc` here instead, for simplicity? It’s in the coreutils package (in Debian at least), so I believe this utility should be available by default.

Alex says:

Or make -j$(nproc) which is safer.
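For reference, the two spellings side by side (Linux only; note that `nproc` also honors CPU affinity masks, so the values can differ in restricted environments):

```shell
# Count available processors the two ways discussed above.
cpus_cpuinfo=$(grep -c '^processor' /proc/cpuinfo)
cpus_nproc=$(nproc)
echo "cpuinfo: $cpus_cpuinfo, nproc: $cpus_nproc"
```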

nugai says:

Let me add a couple of general comments:

I agree that “make -j$(nproc)” is probably the best of the options presented, but running jobs at the number of CPU cores is not entirely free of risk (i.e. “safer” is not necessarily “safe”).

No matter which of the commands suggested above is used to derive the number of cores available, the inherent risk is that running concurrent processes on the full number of cores may lead to system crashes. For example, I have two Linux systems (FC29 and Ubuntu 18.10) with AMD 8-core CPUs where running line #16 in the Dockerfile above consistently and repeatedly leads to system crashes. There’s nothing wrong with the Dockerfile, nor with the Qt code gcc needs to compile, nor with my systems; it’s just an accumulation of circumstances that lets such crashes occur.

In essence, it is a question of how many and what processes (foreground and background tasks) are running at any given time, how much of the resources they occupy in terms of run time on a CPU core, what priority the processes have, and how their priority conflicts with the priority of other processes running on the system. The core issue is one of task scheduling and task priority. It exists on any computer and in any operating system. In most cases, end users and developers do not need to care about them, but I believe developers should be at least somewhat aware of them.

When one googles how many cores should be allocated to a “make -j” command, one cannot find hard facts. Some people go as far as recommending the number of jobs should be $(nproc) * 1.5. I find such one-size-fits-all recommendations ludicrous. The fact of the matter is that the optimal value is subject to experimentation and varies, on any given system, at any given time of day, for any given code base. Refactoring the source code may and will lead to different results, so be aware that optimal values for task concurrency will probably change over time.

On the systems I mentioned above, how many concurrent threads I can have running very much depends on whether I have a desktop GUI running or not, and on whether I start a job in a terminal window or compile remotely from the CLI via SSH. The optimal number of concurrent threads also depends on whether the files to be processed are on a hard disk or on an SSD (both attached to the same disk controller). Why? My guess is that the latency and speed of getting the files from storage into RAM and then to the CPU cores vary according to disk throughput, and therefore give the kernel more time to switch and/or prioritize tasks.

Also, there can be a huge difference with respect to whether certain commands are issued at OS level or inside a Docker container. Commands issued at OS level compete with OS housekeeping tasks. Since Docker uses a mechanism for resource allocation among containers, commands issued inside a container may first compete with commands issued in other containers before they compete with commands at OS level. The difference can be between a system crash, a container crash, or a container being throttled to perform slowly and steadily, allowing all tasks to complete successfully without any crashes.
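One hypothetical shape for such throttling is a Compose fragment; the values here are purely illustrative, with `cpus` and `mem_limit` as defined by the Compose file format:

```yaml
# Illustrative only: cap the build container so a full `make -j`
# cannot starve the host or neighbouring containers.
services:
  qtwasm:
    image: maukalinow/qtwasm_builder:latest
    cpus: 2          # at most two CPUs' worth of time
    mem_limit: 4g    # hard memory cap
```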

The whole topic is a subject matter that is not very well documented and, because of its specificity, probably only understood by very few people (I’m not one of them; I accept the limitations of what I encounter and work around them). However, for those readers who might want to learn more about the underlying subject matter, I’d suggest having a glimpse at the following two web pages, describing how task scheduling happens in Linux (see https://www.tecmint.com/set-linux-process-priority-using-nice-and-renice-commands/ ) and the section on CPU shares in Docker (see https://goldmann.pl/blog/2014/09/11/resource-management-in-docker/ ).

Now, I don’t want to blow this issue out of proportion. Not every conceivable problem…

Maurice Kalinowski says:

It is true that this might lead to some issues. So far I was expecting problems only in case you have assigned far fewer CPUs to your Docker machine than you try to use for compilation.
However, the fact that this might cause system-level issues is worrisome and total news to me.

Alex says:

If your system crashes under heavy CPU load, you should check your hardware. I have seen this happen with insufficient cooling, for example on some high-end notebooks (Alienware) and barebone PCs with passive cooling.

Before I do something like a Qt or ROS compilation I usually run
sudo cpupower frequency-set --governor performance
echo level 7 | sudo tee /proc/acpi/ibm/fan

To ensure my CPU runs at full speed and cooling is always fully active.

Of course, any kind of high load will make your system relatively unresponsive. But in general, I think using nproc as a basis for the number of build jobs is a good first assumption for most people; in my opinion, a better one than just one, two, or four hardcoded jobs.

And yes, there is no such thing as an optimal number of build jobs. Your overall compilation speed might be limited by your I/O or memory performance. One could also start to think about things like turbo stepping, which sometimes makes a single-job compilation faster than a multi-threaded one. There is just no right value, but a sane default is still valuable.
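One way to encode such a sane default with a safety margin is to cap the job count; the cap of 8 below is arbitrary and purely illustrative:

```shell
# Use all cores reported by nproc, but never more than an arbitrary cap,
# as a hedge against the overload scenarios discussed above.
jobs=$(nproc)
if [ "$jobs" -gt 8 ]; then
    jobs=8
fi
echo "make -j$jobs"
```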

Johan Helsing says:

We’ve been using Docker containers for testing new patch sets and doing regular health checks in Qt Wayland for a while. I’ve been meaning to write a full blog post about it for some time, but here are the cliff notes:

– It uses a shared ccache volume between containers to speed up compilation.
– Pretty recent git sources are included in the image, so checkout of new patch sets is almost instant
– It runs tests on a headless weston instance and uses mesa’s llvmpipe for opengl rendering, so even graphical tests can be run on a server without a gpu
– It runs qtwayland tests and a subset of qtbase tests
– Total time for compilation and tests is about 12 minutes
– A bot watches gerrit, starts new containers and posts reviews.

Image: https://git.qt.io/qtwaylandtests/docker-qt-tests
Bot: https://git.qt.io/qtwaylandtests/qtwayland-gerrit-watcher

qplace says:

What is the size of the run-time that needs to be downloaded to support Qt for WebAssembly?

nugai says:

I think it’s worth pointing out that using “docker -v” for attaching directories (as used in section #2 above) is outdated and has been replaced with the “--mount type,source,target” notation since Docker 17.06. It may still run on some systems, but it might not be supported for much longer. So, the command:

docker run --rm -v <project_source>:/project/source -v <build_directory>:/project/build maukalinow/qtwasm_builder:latest

eventually may have to be replaced with something like this:

docker run --rm --mount type=bind,source=<project_source>,target=/project/source --mount type=bind,source=<build_directory>,target=/project/build maukalinow/qtwasm_builder:latest

Maurice Kalinowski says:

Oh nice, that’s useful information. Thanks for sharing.

Alejandro Exojo says:

I guess I’m getting very, very old (and grumpy), but I facepalm each time I see developers using Docker for something as simple as this.

I had zero knowledge of Emscripten and WebAssembly, and I just wanted to give users a quick taste of an app I’m working on. I did a WebAssembly build of Qt, and a build of my app, in minutes, really; less than an hour. It is perfectly well documented, and you don’t need anything as big as downloading a whole OS from scratch, I think.

When it started out, I understood the convenience, or even the need of Docker for really complex setups, where a JavaScript developer needs to test some design against a frontend that needs an application server, a message queue, etc. OK. That sounds difficult because there are plenty of different small things that have to work at runtime, and communicate, so there is configuration, ports to set up, etc.

But for getting a binary in $PATH, and some library in $LD_LIBRARY_PATH? Nah. 🙂

Maurice Kalinowski says:

While I wouldn’t necessarily use the same wording, I can absolutely see your point.

What I’ve seen recently, though, are the benefits of such an approach, for two reasons:
1. Embedded. Using OpenEmbedded can be cumbersome with all its dependencies etc. Especially in the context of application development, as opposed to platform development, it tends to feel like overkill.
2. More sophisticated build environments; well, embedded again 🙂 In case you are forced to use a setup which is not reflected in a “standardized” way like OpenEmbedded, Docker can be very useful to simplify the deployment of a developer environment. The example in this article does not have many dependencies, nor does it require a specific ordering of tasks, agreed. But I wanted to highlight the general process rather than go into details on a specific use-case.

Furthermore, using containers for developer environments has its limitations as well. For instance, your application project may have dependencies which are not easily reflected via volume sharing, or you may require more shares from various locations. Generally, I believe there is a sweet spot (an area where this approach is beneficial). The question is whether it is worth providing something “generic”, or whether it is almost always going to be a custom solution.

