Say hello to Qt Quick Pointer Handlers

We've known for several years that our multi-touch support in Qt Quick has been inadequate for many use cases.  We have PinchArea, to handle two-finger scaling, rotation and dragging; and MultiPointTouchArea, which can at least be used to show some sort of interactive feedback for the touchpoints, or maybe you could write a little state machine in JavaScript to recognize some kind of gesture.  As for the rest of Qt Quick though, the main problems are 1) support for mouse events came first; 2) Qt assumes there is only one mouse (the "core pointer"); 3) QMouseEvent and QTouchEvent (and a few more) have no suitable intermediate base class, so they end up being delivered independently; 4) because that was hard, shortcuts were taken early on: touch events are treated as mouse events and delivered the same way.  The result is that you cannot interact with two MouseAreas or Flickables at the same time, which means you cannot press two Buttons at the same time, or drag two Sliders at the same time, if they are implemented with MouseArea.

At first I hoped to fix that by making MouseArea and Flickable each handle touch events separately.  The patches to do that were quite complex, adding a lot of duplicated logic for the full parallel delivery path: a QMouseEvent would take one path and a QTouchEvent another, in the hope that the interaction would work as nearly the same as possible.  It was months of work, and at the end it mostly worked... but it was hard to keep all the existing autotests passing, and colleagues worried about the behavior change.  MouseArea proclaims by its name that it handles mouse events, so as soon as it begins to handle touch events separately, it becomes a misnomer.  Suddenly you would be able to press two Buttons or Tabs or Radio Buttons at the same time, in applications and sets of controls which weren't designed for it.  (So we tried adding a bool property to opt in, but needing to set that in every MouseArea would be ugly.)  MouseArea and Flickable also need to cooperate a lot, so the changes had to be done together to keep everything working.  It was possible, but because of that uncertainty it narrowly missed shipping in Qt 5.5.

So eventually we took a different route, after we found a reasonable combination of ideas that had been proposed.

One idea was that since we cannot refactor the QEvent hierarchy (yet) due to the binary compatibility mandate, we could instead create wrapper classes which make the events look like we want them to, complete with properties for the benefit of QML, and deliver those instead, using a mostly-unified event delivery path in QQuickWindow and QQuickItem.

Another idea was the realization that dynamically creating and destroying these wrapper events is silly: instead we use a pool of instances, as we have been doing in other cases where an "event" object is emitted by a signal (for example the object emitted by MouseArea.positionChanged is always the same instance, since Qt 5.8).  With that optimization, wrapping one event with another is no longer a big performance hit.

Another idea was suggested: maybe it would be nice if handling an event from a pointing device were as easy as using the Keys attached property: for example, Mouse.onClicked: { ... } or PointingDevice.onTapped: { ... }.  But soon after came the realization that there can be only one instance of an attached property per Item to which it is attached.  One of the problems with MouseArea is that it tries to do too much, so it didn't make sense to simply re-implement all its functionality in a monolithic MouseAttached.  We wanted the ability to handle a click or a tap without caring which device it comes from, because that's what every Button control needs to do; it's the same with any gesture that can be performed with either the mouse or a single finger.  Perhaps there could be one attached property per gesture rather than one per device type, then?

Since QML is a declarative language, it's nice to be able to declare constraints rather than writing if/else statements in JavaScript signal callbacks.  If an object which handles events is designed not to care which device a gesture comes from, there will nevertheless be cases when your application does care: you want to perform a different action depending on whether it is tapped on the touchscreen or clicked with the right mouse button, or you want to do something different if the control key is held down while dragging an object.  By allowing multiple instances of these handler objects, we can declare the constraints on them by setting properties.  It should be OK to have as many instances as you like.  Each instance should be lightweight so that you don't fear to have too many instances.  The implementation should be in C++, and it should be simple and understandable.  Each handler should do one, or at most a few very closely-related things, and do them well.

Those concerns have taken us away from the idea of using attached properties, for now.  What we have instead is a new family of classes in Qt Quick: Pointer Handlers.

A Pointer Handler is a type of object which you can declare inside any Item, which handles events from pointing devices on behalf of that Item.  You can declare as many of them as you need: typically, one per interaction scenario.  All the common constraints that we could think of are declarable: you can make the handler react only if it's a touch event, only to certain mouse buttons, only if the correct number of fingers are pressed within the bounds of the Item, only if a particular modifier key is held, etc.
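For example, here is a minimal sketch of such constraints (assuming the Qt 5.10 tech-preview import Qt.labs.handlers 1.0 and the acceptedDevices, acceptedButtons and acceptedModifiers properties; the later sketches on this page assume the same imports): one Item that reacts differently to a touchscreen tap and a control-key right-click.

import QtQuick 2.10
import Qt.labs.handlers 1.0

Rectangle {
    width: 120; height: 40; color: "lightgray"
    TapHandler {
        // react only to taps on a touchscreen
        acceptedDevices: PointerDevice.TouchScreen
        onTapped: console.log("tapped with a finger")
    }
    TapHandler {
        // react only to right-button clicks while Control is held
        acceptedDevices: PointerDevice.Mouse
        acceptedButtons: Qt.RightButton
        acceptedModifiers: Qt.ControlModifier
        onTapped: console.log("control-right-clicked with the mouse")
    }
}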

Draggin' balls

Common gestures are represented by their own handler types.  For example, if you declare

Rectangle {
    width: 50; height: 50; color: "green"
    DragHandler { }
}

then you have a Rectangle which can be dragged around the scene via either mouse or touch, without writing any JavaScript, and without even needing to bind the handler to its parent in any way.  The handler has a target property, which defaults to its parent Item.  (But by setting the target to a different Item, you can handle events within one Item while manipulating the other.)
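For instance, a sketch of that last point (the ids here are just for illustration): drags that begin on one Rectangle move a different one.

Item {
    width: 320; height: 240

    Rectangle {
        id: ball
        width: 50; height: 50; radius: 25; color: "red"
    }

    Rectangle {
        x: 200; y: 80; width: 80; height: 80; color: "lightsteelblue"
        // events are handled within this Rectangle's bounds,
        // but it's the ball that gets dragged around
        DragHandler { target: ball }
    }
}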

Of course, if you have two of those green rectangles with DragHandlers in them, you can drag them both at the same time with different fingers.

Every Pointer Handler is a QObject, but it's not a QQuickItem, and it doesn't have too many of its own variables, so each instance is about as small as a practical QObject subclass can be.

Every single-point handler has the point property, to expose all the details about the touchpoint or mouse point that we can find. There are properties for pressure and ellipseDiameters: some devices may have force sensors, while others may measure the size of the contact patch (but many devices don't provide either). It has a velocity property which is guaranteed to be defined: we calculate and average the velocity over the last few movements for a slightly smoother reaction. Having the velocity available could enable velocity-sensitive gestures: perhaps a flick should be done at a certain minimum speed. (Is that the right way to distinguish a flick from a drag? It hasn't been easy to make that distinction before.) Or, if you have the velocity at the time of release, a drag gesture can end with momentum: the object keeps moving a short distance in the same direction. This makes your UI feel more alive. So far we have not formalized MomentumAnimation into a supported animation type, but there is a pure-QML prototype of it in tests/manual/pointer/content/MomentumAnimation.qml.

two red balls being dragged simultaneously with DragHandler
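Here is a rough sketch of the release-velocity idea, in the same pattern the MomentumAnimation prototype uses: when a DragHandler's active property becomes false, point.velocity still holds the averaged velocity at the moment of release.

Rectangle {
    width: 50; height: 50; radius: 25; color: "orange"
    DragHandler {
        onActiveChanged: {
            if (!active) {
                // the drag just ended; point.velocity could be handed
                // to an animation to keep the object moving for a moment
                console.log("released at velocity", point.velocity)
            }
        }
    }
}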

Tap dancing

TapHandler handles all the press-and-release gestures: single quick taps or clicks, double-taps, any other number of taps, and long presses held for a configurable period of time, each distinguishable so you can react to it in its own way. (When you touch a touchscreen, it often doesn't make any sound, but you can tap a mouse button; so we thought "tap" is a more future-proof name for this gesture than "click".)  You can show feedback proportional to how long the point has been held (an expanding circle, a progress bar or something like that).

TapHandler detecting a triple-tap and then a long press
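A sketch of both of those (assuming TapHandler's tapCount, timeHeld and pressed properties and its longPressed signal):

Rectangle {
    width: 100; height: 100; color: "beige"
    TapHandler {
        id: tapHandler
        onTapped: if (tapCount == 3) console.log("triple-tapped")
        onLongPressed: console.log("long-pressed")
    }
    Rectangle {
        // an expanding circle as feedback while the press is held
        anchors.centerIn: parent
        width: tapHandler.timeHeld * 100; height: width; radius: width / 2
        color: "transparent"; border.color: "grey"
        visible: tapHandler.pressed
    }
}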

Pinch me if this is for real

There is a PinchHandler.  If you declare it inside an Item, you will be able to scale, rotate and drag that Item using the pinch gesture.  You can zoom into any part of the Item that you like (an improvement on PinchArea).  It can handle larger numbers of fingers, too: you can declare PinchHandler { minimumTouchPoints: 3 } to require a 3-finger pinch gesture.  All the transformations then occur relative to the center point between the three fingers, and the scaling is relative to the average increase or decrease in spread between them.  The idea came from the way that some versions of Ubuntu use the 3-finger pinch for window management: apparently they thought the content in the window may have some use for a 2-finger gesture, but most applications don't use 3 fingers for anything, so it's OK to reserve the 3-finger pinch to scale and move windows around the desktop.  Now since you can write a Wayland compositor in QML, you can easily recreate this experience.

zooming into a map with PinchHandler
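A sketch (assuming PinchHandler's minimum/maximum limit properties, which constrain how far the gesture can transform the target):

Rectangle {
    width: 200; height: 200; color: "cornflowerblue"
    PinchHandler {
        minimumScale: 0.5     // don't let it shrink below half size
        maximumScale: 4       // or grow past 4x
        minimumRotation: -90  // and allow at most a quarter turn
        maximumRotation: 90   // in either direction
    }
}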

Getting to the points

Finally, there is PointHandler.  Unlike the others, it doesn't manipulate its target Item: it exists only to expose the point property.  It's similar to an individual TouchPoint in MultiPointTouchArea, and can be used for the same purpose: to provide interactive feedback as touchpoints and the mouse cursor move around the scene.  Unlike MultiPointTouchArea, it does not exclusively grab the touchpoints or the mouse, so having this interactive feedback does not prevent interacting with other handlers in other Items at the same time.  In the animations on this page, it's used to get the finger sprites to follow my fingers around.
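For example, a sketch of that kind of feedback (assuming PointHandler's active property and the point.position coordinates):

Item {
    width: 320; height: 240
    PointHandler {
        id: handler
    }
    Rectangle {
        // follows the touchpoint or mouse while it's pressed, without
        // grabbing, so handlers in other Items can react at the same time
        visible: handler.active
        x: handler.point.position.x - width / 2
        y: handler.point.position.y - height / 2
        width: 20; height: width; radius: width / 2
        color: "transparent"; border.color: "red"
    }
}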

Are we there yet?

So now I'll get to the reasons why this stuff is in Tech Preview in 5.10.  One reason is that it's incomplete: we are still missing support for mouse hovering, the mouse wheel, and tablet stylus devices (a stylus is still treated as a mouse for now).  None of the handlers have velocity-sensitive behavior.  We can imagine a few more Handlers that could be written.  There should be public C++ API so that you can create your own, too.  Handlers and Flickable are getting along somewhat, but Flickable is a complex monolithic Item, and we think maybe it can be refactored later on.  There is a FakeFlickable manual test which shows how it's possible to re-create a lot of its functionality in QML with two ordinary Items, plus a DragHandler and a few animations.

FakeFlickable: a componentized Flickable
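To give a taste of the approach, a much-simplified sketch (not the actual FakeFlickable, and without the momentum and overshoot animations; assuming DragHandler's xAxis and yAxis limit properties):

Item {
    id: viewport
    width: 200; height: 300; clip: true
    Column {
        id: content
        width: parent.width
        Repeater {
            model: 30
            Text { text: "row " + index; height: 30 }
        }
        // drag the content vertically, but not beyond its ends
        DragHandler {
            xAxis.enabled: false
            yAxis.minimum: viewport.height - content.height
            yAxis.maximum: 0
        }
    }
}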

Another reason is the naming.  "Pointer Handlers" sounds OK in isolation, but there is pre-existing terminology which makes it confusing: a pointer might be a variable which points to a memory location (but that's not what we mean here), and a handler might be a sort of callback function that you write in JavaScript.  If you write TapHandler { onActiveChanged: ... }, do you then say that your handler has a handler?  We could start using the word "callback" instead, but it's an anachronism in some circles, and in QML our habits are hard to change now.

Another reason is that QtLocation has some complex use cases which we want to use as a case study, to prove that it's possible to navigate a map (and any interactive content upon it) with small amounts of readable code.

Perhaps I'll continue this later with another post about another way of doing gesture recognition, and how that might affect how we think about Pointer Handlers in the future.  I haven't explained the passive grab concept yet.  There's also more to be defined about how to build components which internally use Pointer Handlers, but give the application author freedom to override behavior when necessary.

So to wrap up for now, Pointer Handlers are in Tech Preview in 5.10.  We hope you will play with them a lot.  Especially if you have some old frustrations about interaction scenarios that weren't possible before.  Even more so if you ever wanted to create a touch-intensive UI (maybe even multi-user?) without writing a C++ event-forwarding and QObject-event-filtering mess.  We need to start getting feedback from everyone with unique use cases while we still have the freedom to make big changes if necessary to make it easy for you.

So far the usage examples are mostly in tests/manual/pointer. Manual test source code isn't included in release build packages, so to try those out, you'll need to download Qt source packages, or get them from the git repo. Some of those can be turned into examples for upcoming releases.

Since it's in tech preview, the implementation will continue to be refined during the 5.10 series, so follow along on the 5.10 git branch to keep up with the latest features and bug fixes.

There's an upcoming webinar in a couple of weeks.

