Qt Graphics with Multiple Displays on Embedded Linux

Creating devices with multiple screens is not new to Qt. Those who used Qt for Embedded Linux back in the Qt 4 days may remember the configuration steps this involved. The story got significantly more complicated with Qt 5's focus on hardware-accelerated rendering, so it is time to take a look at where things stand today with the upcoming Qt 5.8.

Windowing System Options on Embedded

The most common ways to run Qt applications on an embedded board with accelerated graphics (typically EGL + OpenGL ES) are the following:

  • eglfs on top of fbdev, a proprietary compositor API, or Kernel Mode Setting + the Direct Rendering Manager (KMS/DRM)
  • Wayland: Weston or a compositor implemented with the Qt Wayland Compositor framework + one or more Qt client applications
  • X11: Qt applications here run with the same xcb platform plugin that is used in a typical desktop Linux setup

We are now going to take a look at the status of eglfs, because it is the most common option and because some of the other approaches rely on it as well.

Eglfs Backends and Support Levels

eglfs has a number of backends for various devices and stacks. For each of these, the level of support for multiple screens falls into one of the following three categories:

  • [1] Output management is available.
  • [2] Qt applications can choose at launch time which single screen to output to, but apart from this static setting no other configuration option is provided.
  • [3] No output-related configuration is provided.

Note that some of these, in particular [2], may require additional kernel configuration via a video argument or similar. This is out of Qt's domain.
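
For instance, on an i.MX6-based board the kernel command line might carry something along these lines (the exact syntax varies by kernel and driver, so treat this merely as an illustrative sketch):

video=mxcfb0:dev=hdmi,1920x1080M@60,if=RGB24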

Now let's look at the available backends and the level of multi-display support for each:

  • KMS/DRM with GBM buffers (Mesa, for example with Intel GPUs, plus modern PowerVR and some other systems) [1]
  • KMS/DRM with EGLDevice/EGLOutput/EGLStream (NVIDIA) [1]
  • Vivante fbdev (NXP i.MX6) [2]
  • Broadcom Dispmanx (Raspberry Pi) [2]
  • Mali fbdev (ODROID and others) [3]
  • (X11 fullscreen window - targeted mainly for testing and development) [3]

Unsurprisingly, it is the backends using the DRM framework that come out best. This is expected: DRM provides a proper connector, encoder and CRTC enumeration API, whereas the others have to resort to vendor-specific solutions that are often a lot more limited.

We will now focus on the two DRM-based backends.

Short History of KMS/DRM in Qt

Qt 5.0 - 5.4

Qt 5 featured a kms platform plugin right from the start. It was fairly usable but limited in features, and was seen more as a proof of concept. As eglfs improved, it became clear that a more unified approach was needed, which led to the introduction of the eglfs_kms backend for eglfs in Qt 5.5.

Qt 5.5

While originally developed for a PowerVR-based embedded system, the new backend proved immensely useful for all Linux systems running Mesa, the open-source stack, in particular on Intel hardware. It also featured a plane-based mouse cursor, and basic support for multiple screens was added soon afterwards.

Qt 5.6

With the rise of NVIDIA's somewhat different approach to buffer management - see this presentation for an introduction - an additional backend had to be introduced. This backend, eglfs_kms_egldevice, allows running on the automotive-oriented Jetson Pro, DRIVE CX and DRIVE PX systems.

The initial version of this plugin was standalone and independent of the existing DRM code, which led to certain deficiencies, most notably the lack of multi-display support.

Qt 5.7

Fortunately, these problems were addressed soon after. Qt 5.7 features proper code sharing between the backends, making most of the multi-display support and its JSON-based configuration system available to the EGLStream-based backend as well.

Meanwhile the GBM-based backend got a number of fixes, in particular related to the hardware mouse cursor and the virtual desktop.

Qt 5.8

The upcoming release features two important improvements: it closes the gaps between the GBM and EGLStream backends and introduces support for advanced configurability. The former covers mainly the handling of the virtual desktop and the default, non-plane-based OpenGL mouse cursor, which was unable to "move" between screens in previous releases.

The documentation is already browsable at the doc snapshots page.

Besides the ability to specify the virtual desktop layout, the introduction of the touchDevice property is particularly important when building systems where one or more of the screens is made interactive via a touchscreen. Let's take a quick look at this.

Touch Input

Let's say you are creating a digital instrument cluster with Qt that involves multiple touch-enabled displays. Given that touchscreens report absolute coordinates in their events, how can Qt tell which screen's virtual geometry an event should be translated to? Well, on its own it cannot.

From Qt 5.8 on it will be possible to help the framework out. Setting QT_LOGGING_RULES=qt.qpa.*=true enables logging that lets us figure out the touchscreen's device node. We can then create a small JSON configuration file on the device:

{
    "device": "drm-nvdc",
    "outputs": [
        {
            "name": "HDMI1",
            "touchDevice": "/dev/input/event5"
        }
    ]
}

This file will come in handy in any case, since the configuration of screen resolution, virtual desktop layout, and so on all happens in the same place.

Now, when a Qt application is launched with the QT_QPA_EGLFS_KMS_CONFIG environment variable pointing to our file, Qt will know that the display connected to the first HDMI port has a touchscreen that shows up at /dev/input/event5. Any touch event from that device will then get correctly associated with that screen.
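
For example, assuming the file was saved on the device as /etc/qt_kms_config.json (a path chosen here purely for illustration), a Qt application could be launched like this:

QT_QPA_EGLFS_KMS_CONFIG=/etc/qt_kms_config.json ./myapp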

Qt on the DRIVE CX

Let's see something in action. In the following example we will use an NVIDIA DRIVE CX board, with two monitors connected via HDMI and DisplayPort. The software stack is the default Vibrante Linux image, with Qt 5.8 deployed on top. Qt applications run with the eglfs platform plugin and its eglfs_kms_egldevice backend.

[Image: the NVIDIA DRIVE CX board]

Our little test environment looks like this:

[Image: the test environment, with two monitors driven by the same Qt Quick application]

This already looks impressive, and not just because we found such good use for the Windows 95, MFC, ActiveX and COM books hanging around in the office from previous decades. The two monitors on the sides are showing a Qt Quick application that apparently picks up both screens automatically and can drive both at the same time. Excellent.

The application we are using is available here. It follows the standard multi-display application model for embedded (eglfs): creating a dedicated QQuickWindow (or QQuickView) on each of the available screens. For an example of this, check the code in the GitHub repository, or take a look at the documentation pages, which also contain example code snippets.
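
To illustrate the pattern, here is a minimal sketch, with main.qml standing in for the application's actual scene:

#include <QGuiApplication>
#include <QQuickView>
#include <QScreen>
#include <QUrl>

int main(int argc, char **argv)
{
    QGuiApplication app(argc, argv);

    // Create one QQuickView per connected screen. With eglfs each view
    // becomes a fullscreen window on its associated screen.
    QList<QQuickView *> views;
    const auto screens = QGuiApplication::screens();
    for (QScreen *screen : screens) {
        QQuickView *view = new QQuickView;
        view->setScreen(screen); // associate the window with this screen before showing it
        view->setResizeMode(QQuickView::SizeRootObjectToView);
        view->setSource(QUrl(QStringLiteral("qrc:/main.qml")));
        view->showFullScreen();
        views.append(view);
    }

    int ret = app.exec();
    qDeleteAll(views);
    return ret;
}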

A closer look reveals our desktop configuration:

[Image: close-up of one monitor, showing the desktop configuration]

The gray MouseArea is used to test mouse and touch input handling. Hooking up a USB touch-enabled display immediately reveals the problems of pre-5.8 Qt versions: touching that area would only deliver events to it when the screen happened to be the first one. In Qt 5.8 this can now be handled as described above.

[Image: close-up of the other monitor]

It is important to understand the screen geometry concepts in QScreen. When the screens form a virtual desktop (which is the default for eglfs), the interpretation is the following (a small query snippet follows the list):

  • geometry() - the screen's position and size in the virtual desktop
  • availableGeometry() - without a windowing system this is the same as geometry()
  • virtualGeometry() - the geometry of the entire virtual desktop to which the screen belongs
  • availableVirtualGeometry() - same as virtualGeometry()
  • virtualSiblings() - the list of all screens belonging to the same virtual desktop
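
These values can be inspected at runtime. Here is a small sketch that prints the virtual desktop as Qt sees it, using only public QScreen APIs:

#include <QGuiApplication>
#include <QScreen>
#include <QDebug>

int main(int argc, char **argv)
{
    QGuiApplication app(argc, argv);

    // Print position and size information for every screen that is
    // part of the virtual desktop.
    for (const QScreen *screen : QGuiApplication::screens()) {
        qDebug() << screen->name()
                 << "geometry:" << screen->geometry()
                 << "virtual geometry:" << screen->virtualGeometry()
                 << "virtual siblings:" << screen->virtualSiblings().count();
    }
    return 0;
}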

Configuration

How does the virtual desktop get formed? It may seem fairly random by default; in fact it simply follows the order in which the DRM connectors are reported. This is often not ideal. Fortunately, it is configurable starting with Qt 5.8. For instance, to ensure that the monitor on the first HDMI port gets the top-left position of (0, 0), we could add something like the following to the configuration file specified in QT_QPA_EGLFS_KMS_CONFIG:

{
    "device": "drm-nvdc",
    "outputs": [
        {
            "name": "HDMI1",
            "virtualIndex": 0
        },
        {
            "name": "DP1",
            "virtualIndex": 1
        }
    ]
}

If we wanted to create a vertical layout instead of a horizontal one (think of an instrument cluster demo with three or more screens stacked on top of each other), we could have added:

{
    "device": "drm-nvdc",
    "virtualDesktopLayout": "vertical",
    ...
}

More complex layouts, for example a T-shaped setup with four screens, are also possible via the virtualPos property:

{
    ...
    "outputs": [
        { "name": "HDMI1", "virtualIndex": 0 },
        { "name": "HDMI2", "virtualIndex": 1 },
        { "name": "DP1", "virtualIndex": 2 },
        { "name": "DP2", "virtualPos": "1920, 1080" }
    ]
}

Here the fourth screen's virtual position is specified explicitly.

In addition to virtualIndex and virtualPos, the other commonly used properties are mode, physicalWidth and physicalHeight. mode sets the desired mode for the screen and is typically a resolution, e.g. "1920x1080", but can also be set to "off", "current", or "preferred" (which is the default).

For example:

{
    "device": "drm-nvdc",
    "outputs": [
        {
            "name": "HDMI1",
            "mode": "1024x768"
        },
        {
            "name": "DP1",
            "mode": "off"
        }
    ]
}

The physical sizes of the displays become quite important when working with text and components from Qt Quick Controls, because these base their size calculations on the logical DPI, which is in turn derived from the physical width and height. On the desktop, queries for these sizes usually just work, so no further action is needed. On embedded, however, it has often been necessary to provide the sizes in millimeters via the environment variables QT_QPA_EGLFS_PHYSICAL_WIDTH and QT_QPA_EGLFS_PHYSICAL_HEIGHT. A single global pair of values is not suitable in a multi-display environment, so Qt 5.8 introduces an alternative: the physicalWidth and physicalHeight properties (values in millimeters) in the JSON configuration file. For example, a 1920x1080 screen that is 531 mm wide works out to a logical DPI of roughly 1920 / (531 / 25.4) ≈ 92.

As witnessed in the second screenshot above, the physical sizes did not get reported correctly in our demo setup. This can be corrected, as was done for the monitor in the first screenshot, with something like:

{
    "device": "drm-nvdc",
    "outputs": [
        {
            "name": "HDMI1",
            "physicalWidth": 531,
            "physicalHeight": 298
        },
        ...
    ]
}
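
Whether such a configuration took effect can be verified from within the application. A minimal sketch, again using only public QScreen APIs:

#include <QGuiApplication>
#include <QScreen>
#include <QDebug>

int main(int argc, char **argv)
{
    QGuiApplication app(argc, argv);

    // physicalSize() is reported in millimeters; the logical DPI that
    // Qt Quick Controls ultimately relies on is derived from it.
    for (const QScreen *screen : QGuiApplication::screens()) {
        qDebug() << screen->name()
                 << "physical size (mm):" << screen->physicalSize()
                 << "logical DPI:" << screen->logicalDotsPerInch();
    }
    return 0;
}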

As always, enabling logging can be a tremendous help when troubleshooting. There are a number of logging categories for eglfs, its backends and input, so the easiest approach is often to enable everything under qt.qpa by running export QT_LOGGING_RULES=qt.qpa.*=true before starting the Qt application.

What About Wayland?

What about systems using multiple GUI processes and compositing them via a Qt-based Wayland compositor? Given that the compositor application still needs a platform plugin to run with, and that is typically eglfs, everything described above applies to most Wayland-based systems as well.

Once the displays are configured correctly, the compositor can create multiple QQuickWindow instances (QML scenes) targeting each of the connected screens. These can then be assigned to the corresponding WaylandOutput items. Check the multi output example for a simple compositor with multiple outputs.

The rest, meaning how the client applications' windows are placed, whether the scenes on the different displays act as one big virtual scene, how client "windows" move between screens, and so on, is all in QtWayland's domain.

What's Missing and Future Plans

The QML side of screen management could benefit from some minor improvements: unlike C++, where QScreen, QWindow and QWindow::setScreen() are first-class citizens, Qt Quick currently has no simple way to associate a Window with a QScreen, mainly because QScreen instances are only partially exposed to the QML world. While this is not fatal and can, as usual, be worked around with some C++ code, the story here will have to be enhanced a bit.
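
A minimal sketch of such a workaround, where main.qml and the choice of target screen are placeholders: load the scene as usual, then assign the resulting window to the desired QScreen from C++.

#include <QGuiApplication>
#include <QQmlApplicationEngine>
#include <QQuickWindow>
#include <QScreen>
#include <QUrl>

int main(int argc, char **argv)
{
    QGuiApplication app(argc, argv);
    QQmlApplicationEngine engine(QUrl(QStringLiteral("qrc:/main.qml")));

    // Move the QML-created Window onto the second screen, if there is one.
    const auto roots = engine.rootObjects();
    const auto screens = QGuiApplication::screens();
    if (!roots.isEmpty() && screens.count() > 1) {
        if (QQuickWindow *w = qobject_cast<QQuickWindow *>(roots.first()))
            w->setScreen(screens.at(1));
    }

    return app.exec();
}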

Another missing feature is the ability to connect and disconnect screens at runtime. Currently such hotplugging is not supported by any of the backends. It is worth noting that on embedded systems the urgency is probably a lot lower than on ordinary desktop PCs or laptops, since the need to change screens this way is less common. Nevertheless, this is on the roadmap for future releases.

That's it for now. As we know, more screens are better than one, so why not just let Qt power them all?

