Shawn Rutledge

Say hello to Qt Quick Pointer Handlers

Published Thursday November 23rd, 2017
Posted in Declarative UI, Dev Loop, Gesture Recognizers, Labs, Qt Quick

We’ve known for several years that our multi-touch support in Qt Quick has been inadequate for many use cases.  We have PinchArea, to handle two-finger scaling, rotation and dragging; and MultiPointTouchArea, which can at least be used to show some sort of interactive feedback for the touchpoints, or maybe you could write a little state machine in JavaScript to recognize some kind of gesture.  As for the rest of Qt Quick, though, the main problems are: 1) support for mouse events came first; 2) Qt assumes there is only one mouse (the “core pointer”); 3) QMouseEvent and QTouchEvent (and a few others) have no suitable intermediate base class, so they end up being delivered independently; 4) since that was hard, shortcuts were taken early on to treat touch events as mouse events and deliver them the same way.  The result is that you cannot interact with two MouseAreas or Flickables at the same time, for example.  This means you cannot press two Buttons at the same time, or drag two Sliders at the same time, if they are implemented with MouseArea.

At first I hoped to fix that by making MouseArea and Flickable both handle touch events separately.  The patches to do that were quite complex, adding a lot of duplicated logic for the full parallel delivery path: a QMouseEvent would take one path and a QTouchEvent would take another, in the hope that the interaction would work as much the same as possible.  It was months of work, and at the end it mostly worked… but it was hard to keep all the existing autotests passing, and colleagues worried about it being a behavior change.  MouseArea proclaims by its name that it handles mouse events, so as soon as it begins to handle touch events separately, it becomes a misnomer.  Suddenly you would be able to press two Buttons or Tabs or Radio Buttons at the same time, in applications and sets of controls which weren’t designed for it.  (So we tried adding a bool property to opt in, but needing to set that in every MouseArea would be ugly.)  MouseArea and Flickable also need to cooperate a lot, so the changes would have to be done together to keep everything working.  It was possible, but narrowly missed shipping in Qt 5.5 due to uncertainty.

So eventually we took a different route, after we found a reasonable combination of ideas that had been proposed.

One idea was that since we cannot refactor the QEvent hierarchy (yet) due to the binary compatibility mandate, we could instead create wrapper classes which make the events look like we want them to, complete with properties for the benefit of QML, and deliver those instead, using a mostly-unified event delivery path in QQuickWindow and QQuickItem.

Another idea was the realization that dynamically creating and destroying these wrapper events is silly: instead we use a pool of instances, as we have been doing in other cases where an “event” object is emitted by a signal (for example the object emitted by MouseArea.positionChanged is always the same instance, since Qt 5.8).  With that optimization, wrapping one event with another is no longer a big performance hit.

Another idea was suggested: maybe it would be nice if handling an event from a pointing device were as easy as using the Keys attached property: for example, Mouse.onClicked: { ... } or PointingDevice.onTapped: { ... }.  But soon after came the realization that there can be only one instance of an attached property per Item to which it is attached.  One of the problems with MouseArea is that it tries to do too much, so it didn’t make sense to simply re-implement all the functionality in a monolithic MouseAttached.  We wanted the ability to handle a click or a tap without caring which device it comes from, for example, because that’s what every Button control needs to do.  It’s the same with any gesture that can be performed with either the mouse or a single finger.  Perhaps there could be one attached property per gesture rather than one per device type, then?

Since QML is a declarative language, it’s nice to be able to declare constraints rather than writing if/else statements in JavaScript signal callbacks.  If an object which handles events is designed not to care which device a gesture comes from, there will nevertheless be cases when your application does care: you want to perform a different action depending on whether it is tapped on the touchscreen or clicked with the right mouse button, or you want to do something different if the control key is held down while dragging an object.  By allowing multiple instances of these handler objects, we can declare the constraints on them by setting properties.  It should be OK to have as many instances as you like.  Each instance should be lightweight so that you don’t fear to have too many instances.  The implementation should be in C++, and it should be simple and understandable.  Each handler should do one, or at most a few very closely-related things, and do them well.

Those concerns have taken us away from the idea of using attached properties, for now.  What we have instead is a new family of classes in Qt Quick: Pointer Handlers.

A Pointer Handler is a type of object which you can declare inside any Item, which handles events from pointing devices on behalf of that Item.  You can declare as many of them as you need: typically, one per interaction scenario.  All the common constraints that we could think of are declarable: you can make the handler react only if it’s a touch event, only to certain mouse buttons, only if the correct number of fingers are pressed within the bounds of the Item, only if a particular modifier key is held, etc.
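For example, here is a rough sketch of what those declarations could look like, assuming the Qt.labs.handlers 1.0 import used by the Tech Preview (the exact property names may still change while it is in preview):

import QtQuick 2.10
import Qt.labs.handlers 1.0

Rectangle {
    width: 200; height: 200; color: "lightgray"

    // react only to taps from a touchscreen
    TapHandler {
        acceptedDevices: PointerDevice.TouchScreen
        onTapped: console.log("tapped with a finger")
    }
    // react only to right-button clicks from a mouse
    TapHandler {
        acceptedDevices: PointerDevice.Mouse
        acceptedButtons: Qt.RightButton
        onTapped: console.log("right-clicked with the mouse")
    }
    // drag only while the Control key is held down
    DragHandler {
        acceptedModifiers: Qt.ControlModifier
    }
}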

Draggin’ balls

Common gestures are represented by their own handler types.  For example, if you declare

Rectangle {
    width: 50; height: 50; color: "green"
    DragHandler { }
}

then you have a Rectangle which can be dragged around the scene via either mouse or touch, without writing any JavaScript, and without even needing to bind the handler to its parent in any way.  It has a target property, which defaults to its parent Item.  (By setting the target to a different Item, you can capture events within one item but manipulate another.)
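For instance, here is a sketch (the item names are just for illustration) in which a larger hit area captures the events but a smaller item is the one that moves:

import QtQuick 2.10
import Qt.labs.handlers 1.0

Item {
    width: 320; height: 240

    Rectangle {
        id: knob   // the item that actually gets dragged
        width: 50; height: 50; color: "green"
    }
    Rectangle {
        // a larger hit area that captures the press, but moves the knob
        width: 100; height: 100
        color: "transparent"; border.color: "gray"
        anchors.right: parent.right; anchors.bottom: parent.bottom
        DragHandler { target: knob }
    }
}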

Of course, if you have two of those green rectangles with DragHandlers in them, you can drag them both at the same time with different fingers.

Every Pointer Handler is a QObject, but it’s not a QQuickItem, and it doesn’t have too many of its own variables, so each instance is about as small as a practical QObject subclass can be.

Every single-point handler has the point property, to expose all the details about the touchpoint or mouse point that we can find. There are properties for pressure and ellipseDiameters: some devices may have force sensors, while others may measure the size of the contact patch (but many devices don’t provide either). It has a velocity property which is guaranteed to be defined: we calculate and average the velocity over the last few movements for a slightly smoother reaction. Having the velocity available could potentially enable velocity-sensitive gestures: perhaps a flick should be done at a certain minimum speed. (Is that the right way to distinguish a flick from a drag? It hasn’t been easy to make that distinction before.) Or, if you have the velocity at the time of release, a drag gesture can end with momentum: the object keeps moving a short distance in the same direction. This makes your UI feel more alive. So far we have not formalized MomentumAnimation into a supported animation type, but there is a pure-QML prototype of it in tests/manual/pointer/content/MomentumAnimation.qml.
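For example, a sketch of checking the velocity at release time to decide whether the drag was fast enough to count as a flick (the threshold here is arbitrary, and this assumes the point property behaves as described above):

import QtQuick 2.10
import Qt.labs.handlers 1.0

Rectangle {
    width: 50; height: 50; radius: 25; color: "red"
    DragHandler {
        onActiveChanged: {
            if (!active) {
                // point.velocity is the averaged velocity vector
                var speed = Math.sqrt(point.velocity.x * point.velocity.x
                                      + point.velocity.y * point.velocity.y)
                if (speed > 1000) // arbitrary threshold
                    console.log("that looked like a flick:", speed)
            }
        }
    }
}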

two red balls being dragged simultaneously with DragHandler

Tap dancing

TapHandler handles all the press-and-release gestures: single quick taps or clicks, double-taps, other numbers of taps in sequence, and presses held for a configurable period of time, each distinguishable from the others. (When you touch a touchscreen, it often doesn’t make any sound, but you can tap a mouse button; so we thought “tap” is a more future-proof name for this gesture than “click”.)  You can show feedback proportional to how long it has been held (an expanding circle, progress bar or something like that).
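A sketch of how that could look, assuming the tapCount, timeHeld and pressed properties work as they do in the manual tests (the progress bar is just one possible visualization):

import QtQuick 2.10
import Qt.labs.handlers 1.0

Rectangle {
    width: 100; height: 100; color: "lightsteelblue"

    TapHandler {
        id: tapHandler
        onTapped: if (tapCount == 3) console.log("triple tap!")
        onLongPressed: console.log("held for", timeHeld.toFixed(1), "seconds")
    }
    // feedback proportional to how long the point has been held
    Rectangle {
        color: "goldenrod"
        anchors.bottom: parent.bottom
        width: parent.width
        height: parent.height * Math.max(0, Math.min(tapHandler.timeHeld, 1))
        visible: tapHandler.pressed
    }
}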

TapHandler detecting a triple-tap and then a long press

Pinch me if this is for real

There is a PinchHandler.  If you declare it inside an Item, you will be able to scale, rotate and drag that Item using the pinch gesture.  You can zoom into any part of the Item that you like (an improvement on PinchArea).  It can handle larger numbers of fingers, too: you can declare PinchHandler { minimumTouchPoints: 3 } to require a 3-finger pinch gesture.  All the transformations then occur relative to the center point between the three fingers, and the scaling is relative to the average increase or decrease in spread between them.  The idea came from the way that some versions of Ubuntu use the 3-finger pinch for window management: apparently they thought the content in the window may have some use for a 2-finger gesture, but most applications don’t use 3 fingers for anything, so it’s OK to reserve the 3-finger pinch to scale and move windows around the desktop.  Now since you can write a Wayland compositor in QML, you can easily recreate this experience.
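A minimal sketch (the image source is hypothetical):

import QtQuick 2.10
import Qt.labs.handlers 1.0

Image {
    source: "map.png"   // hypothetical content to zoom, rotate and drag
    PinchHandler {
        minimumTouchPoints: 3   // require a 3-finger pinch
        minimumScale: 0.5
        maximumScale: 8
    }
}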

zooming into a map with PinchHandler

Getting to the points

Finally, there is PointHandler.  Unlike the others, it doesn’t manipulate its target Item: it exists only to expose the point property.  It’s similar to an individual TouchPoint in MultiPointTouchArea, and can be used for the same purpose: to provide interactive feedback as touchpoints and the mouse cursor move around the scene.  Unlike MultiPointTouchArea, it does not exclusively grab the touchpoints or the mouse, so having this interactive feedback does not prevent interacting with other handlers in other Items at the same time.  In the animations on this page, it’s used to get the finger sprites to follow my fingers around.
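Roughly like this (a sketch; the sprite here is just a translucent circle):

import QtQuick 2.10
import Qt.labs.handlers 1.0

Item {
    width: 480; height: 320

    PointHandler {
        id: handler
        acceptedDevices: PointerDevice.TouchScreen
    }
    Rectangle {
        // follows the touchpoint without grabbing it
        width: 30; height: 30; radius: 15
        color: "slateblue"; opacity: 0.5
        visible: handler.active
        x: handler.point.position.x - radius
        y: handler.point.position.y - radius
    }
}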

Are we there yet?

So now I’ll get to the reasons why this stuff is in Tech Preview in 5.10.  One reason is that it’s incomplete: we are still missing support for mouse hovering, the mouse wheel, and tablet stylus devices (a stylus is still treated as a mouse for now).  None of the handlers have velocity-sensitive behavior.  We can imagine a few more Handlers that could be written.  There should be public C++ API so that you can create your own, too.  Handlers and Flickable are getting along somewhat, but Flickable is a complex monolithic Item, and we think maybe it can be refactored later on.  There is a FakeFlickable manual test which shows how it’s possible to re-create a lot of its functionality in QML with two ordinary Items, plus a DragHandler and a few animations.
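A much-simplified sketch of that idea, just to show the principle (the real FakeFlickable adds animations, overshoot and more on top of this):

import QtQuick 2.10
import Qt.labs.handlers 1.0

Item {
    id: viewport
    width: 320; height: 240
    clip: true

    Column {
        id: content   // the item that gets dragged vertically
        width: viewport.width
        Repeater {
            model: 50
            Text { text: "line " + index }
        }
        DragHandler {
            xAxis.enabled: false
            yAxis.minimum: Math.min(0, viewport.height - content.height)
            yAxis.maximum: 0
        }
    }
}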

FakeFlickable: a componentized Flickable

Another reason is the naming.  “Pointer Handlers” sounds OK in isolation, but there is pre-existing terminology which makes it confusing: a pointer might be a variable which points to a memory location (but that’s not what we mean here), and a handler might be a sort of callback function that you write in JavaScript.  If you write TapHandler { onActiveChanged: ... } do you then say that your handler has a handler?  We could start using the word “callback” instead, but it’s an anachronism in some circles, and in QML our habits are hard to change now.

Another reason is that QtLocation has some complex use cases which we want to use as a case study, to prove that it’s possible to navigate a map (and any interactive content upon it) with small amounts of readable code.

Perhaps I’ll continue this later with another post about another way of doing gesture recognition, and how that might affect how we think about Pointer Handlers in the future.  I haven’t explained the passive grab concept yet.  There’s also more to be defined about how to build components which internally use Pointer Handlers, but give the application author freedom to override behavior when necessary.

So to wrap up for now, Pointer Handlers are in Tech Preview in 5.10.  We hope you will play with them a lot.  Especially if you have some old frustrations about interaction scenarios that weren’t possible before.  Even more so if you ever wanted to create a touch-intensive UI (maybe even multi-user?) without writing a C++ event-forwarding and QObject-event-filtering mess.  We need to start getting feedback from everyone with unique use cases while we still have the freedom to make big changes if necessary to make it easy for you.

So far the usage examples are mostly in tests/manual/pointer. Manual test source code isn’t included in release build packages, so to try those out, you’ll need to download Qt source packages, or get them from the git repo. Some of those can be turned into examples for upcoming releases.

Since it’s in tech preview, the implementation will continue to be refined during the 5.10 series, so follow along on the 5.10 git branch to keep up with the latest features and bug fixes.

There’s an upcoming webinar in a couple of weeks.



20 comments

Jakob Petsovits says:

Yeah, when I read “QML Pointer Handlers”, my first thought was memory pointers, my second thought was, “but QML is JavaScript, what does this do?”.

So here’s a suggestion: While “Pointer” is ambiguous, “Pointer Events” is not. Given the second, I would immediately think of unified mouse and touch events. “Pointer Event Handlers” might be a bit longer, but it may well pay off in terms of increased clarity.

Ilya says:

“Pointer” is indeed ambiguous, and “Handler” shares that problem. “UserAction” ? “Interaction” …?

J-P Nurmi says:

Tap, Pinch etc. are commonly known as “gestures”. Nobody would ever think to search the docs for “pointer handlers” when trying to find out how to implement gestures.

Drew says:

“GestureHandler” then.

Alexander says:

To be honest, I wouldn’t search for Gesture either. I would search for Touch or Multi-Touch.

J-P Nurmi says:

Android, AppKit, UIKit, UWP, GTK+ call them gestures. Even Qt has previously called them gestures.

Searching for touch input handling should take one to the docs for touch input handling, which should link to touch gestures. These are related, but two different things.

Shawn Rutledge says:

But only if you were thinking about touch that day, right? If you were writing a desktop app, you’d be thinking more about the mouse I suppose. The point is we have to stop thinking about each device as having completely independent functionality, because devices are proliferating, and people always expect to be able to use them to do the basic GUI interactions.

J-P Nurmi says:

In order to make controls behave well with mouse and touch, being able to distinguish between mouse and touch is the single most important bit of information. QQC1 tries to provide touch-oriented behavior when it detects a system that has a touch screen attached. Obviously this falls short when you interact with mouse on a system that has a touch screen attached. Given that QQC2 moved event handling to C++, we can handle mouse and touch events separately. Thus, we are finally able to make the very much needed distinction. In controls, the behavioral differences between mouse and touch interaction are often subtle, but make a huge difference in terms of usability. It is not something you can let a gesture framework abstract away, because the desired behavior varies between controls.

vladest says:

Is it possible to backport it to Qt 5.9.x?

J-P Nurmi says:

> it didn’t make sense to simply re-implement all the functionality in a monolithic MouseAttached

The idea was not to reimplement all of MouseArea as an attached property. The idea was to simply deliver mouse and touch events via attached event handlers, which can be configured to handle events before or after the item they are attached to. This would make it possible to intercept events in order to override an item’s default behavior, or act as a fallback for events not handled by the item. This is semantically very close to overriding an event handler in C++, where you can choose to call the base class implementation before, after, or not at all. This would offer a whole different level of interoperability between C++ and QML, and would play well together with Qt Quick Controls 2, where event handling is done on the root item. I don’t understand why we can’t have access to the low-level events. Do you know any other event-driven UI toolkit that doesn’t let you override event handlers, but bases all touch and mouse input handling on a gesture framework?

Petya Kolbaskin says:

I completely agree.

Shawn Rutledge says:

“being able to distinguish between mouse and touch is the single most important bit of information”

The idea is to have multiple handler instances, and set the acceptedDevices on each one: https://doc-snapshots.qt.io/qt5-5.10/qml-qt-labs-handlers-pointerdevicehandler.html#acceptedDevices-prop

The reason is so that it can be declarative, so you don’t have to write if/else or case statements in Javascript or C++.

And of course, handling events in C++ like you are doing in Controls 2 was always possible, and continues to be possible.

Because of the API review (which you participated in), we have omitted events as parameters from signals, such as TapHandler’s tapped signal. I think maybe we should have left them there.

“I don’t understand why can’t we have access to the low-level events.”

Remember when the PointerHandler base class was available to QML? It had a nice low-level API: you just wrote onPressed, onUpdated, onReleased, and onCanceled. You could do anything you wanted in JavaScript. (But that’s not very declarative, is it?) But people didn’t like it, during API review. I had someone at the Contributors’ Summit in 2016 tell me not to give too-powerful weapons to QML users, because they will abuse them. Such signals could still be added back later though, as soon as consensus allows.

Anyway, QEvents are not suitable for exposing to QML, because they are opaque to QML. I wanted to make them Q_GADGETs, but there is still the hierarchy problem (not having a common base class for events that have a QPointF position and related stuff), and during API review, people preferred the object-wrapping solution that we have now with QQuickPointerEvent. We have always had wrapper “events” in QtQuick, just to be able to give them properties and invokables right before we expose them to QML. But now that we use them for delivery, they aren’t just afterthoughts: they are meant to make the C++ code more straightforward too. But I hesitate to repeat the MouseArea.onPressed: if (blueMoon) mouse.reject() pattern, because it’s imperative Javascript to which MouseArea provides no alternative. The alternative we have now is to declare what you want in advance instead of deciding imperatively on the fly.

As for attached properties, I’m not opposed to having some sort of them, but it’s still not clear how they and Pointer Handlers will fit together, and which unique problems are still going to remain when Pointer Handlers are a bit more complete: the kind of problems that attached properties are the only nice way to solve. Maybe what you should do is try out Pointer Handlers, and then write up bugs about specific use cases that you can’t solve. Then we can discuss solutions for those use cases. It’s the same invitation I’m extending to the whole world to do with 5.10.

I think it’s fine to keep using C++ when you need to, and you have lots of “access to low-level events” there. What I think might be better in C++ though is to handle the QQuickPointerEvents rather than the QEvents, because then you can write fewer event-handling functions; you can choose when to care about whether it’s mouse or touch, and when not to care; and for the first time, you even have enough information to make those decisions: you can see exactly which device instance it came from, with all known information about that device; and you have velocity. You could make your Drawer flick open only if the motion occurs within certain ranges of speed and direction on a touchscreen, for example, while behaving differently with a mouse or a trackpad. And if you use the passive grab mechanism effectively (not explained yet: there will be another blog post about it later), you won’t need to write QObject event filters for that kind of use case. You even have more control over the exclusive grab transition (I’ll blog about that too).

As soon as we make the Pointer Handler headers public, you’ll be able to do at least two things: 1) instantiate them in C++; 2) subclass them and define your own behavior. Maybe we could also add: 3) reuse some code by calling static functions. I think Controls 2 will be able to migrate towards one or more of those approaches, when they are mature enough.

Yet another option is to subclass QQuickItemPrivate to be able to override virtual bool handlePointerEvent(QQuickPointerEvent *). But we can’t make public API out of that until Qt 6, because of BC, unless somebody comes up with a way around that problem.

I see Pointer Handlers as event-handling components which can interpret the events in whatever way you code them to, in C++. If you want to define every kind of event-handling as gesture recognition, maybe you have a point: but if we agree to adopt that definition, it doesn’t make sense to immediately turn around and say that gestures are too limited and something has gone missing. The OSes may drive us towards using pre-digested gestures more (again, upcoming in another blog post) but for now, you can rejoice that the actual touch and mouse events are still there, although mainly in C++ at the moment.

Pointer Handlers put the “declarative” back into QML when it comes to handling events. More declarations, less Javascript, and less caring about event delivery order. Just declare what you want to happen in different scenarios and we’ll make it happen. Because we have gone so far with getting the events out of the signal parameters, you don’t get to see the events in QML, but it doesn’t have to be that way if you think seeing the events solves problems which cannot otherwise be solved.

Patricia Gulotta says:

Shawn, thanks for your interesting talk at QTWS 2017: https://www.youtube.com/watch?v=W8SBSCf8eE0. I do agree with Jakob Petsovits’ “QML Pointer Handlers” comment. I associate pointers with C++, not users pointing on a device. So I have a suggestion: how about “Input Handlers”? Just a thought.

Shawn Rutledge says:

If we are that vague, then someone might think it’s for all input devices, but we aren’t planning to change the way key events are handled, as far as I know. (Not that it couldn’t ever happen, but now would be a bad time, obviously.) This concerns events that have points associated with them. So we have objects which handle events from pointing devices, but not keyboards, and we need a good name for that class of objects. Yes, they can be GestureSomethings as long as we define gesture to include even the simple stuff like clicking and dragging and hovering.

Sandro F says:

Well, it seems that Mouse/Touch Handling is completely broken in Qt 5.9.x because of bug https://bugreports.qt.io/browse/QTBUG-61144

It is not possible to use the pressDelay property anymore, because a “normal” MouseArea onClicked() is then only fired after the pressDelay.
This is completely different behaviour from Qt 5.7.1 (where it works as intended).

Using MultiPointTouchArea as a workaround does not work either.

So it seems the work that was already done in Qt 5.9.x for multi-touch support is, in the end, a mess with a lot of bugs.

The blog post confirms this.

Shawn Rutledge says:

That is simply a bug, and being a regression, it will have to be taken seriously. It does not mean that everything is “completely broken”, however it does illustrate the pre-existing “mess” we are in with needing to keep synthesizing mouse events to keep the legacy Items (such as Flickable and MouseArea) satisfied, and yet interoperate with Items and PointerHandlers which do handle touch events properly. But if we never did implement proper multi-touch support, for fear of the occasional bug, we’d continue to be rightfully subject to ridicule for claiming that we had it.

Sandro F says:

@Shawn: Sorry for the “radical” words. I mean that this behaviour still stops us from updating to Qt 5.9, which would bring a lot of nice features, of course.

Anyway… many thanks for digging into this issue!

jiangcaiyang says:

BTW, how do you record the animated GIF images, like the second picture? It is so fluid.

Shawn Rutledge says:

Maybe I’ll blog about that separately, but the summary is: use one of the signals in Window to save each rendered frame to a png file (I used onAfterRendering: contentItem.grabToImage(…)). Then use mogrify (from imagemagick) to convert them to gif, then use gifsicle to make the animated gif. (Qt does not have support for writing GIFs of any kind, AFAIK. I wrote a giflib imageformat plugin a few years ago: https://github.com/ec1oud/qt-gif-plugin but I wasn’t trying to animate them then.)
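Roughly, the frame-grabbing part looked something like this (a sketch; the file naming is just for illustration):

import QtQuick 2.10
import QtQuick.Window 2.2

Window {
    id: win
    width: 480; height: 320; visible: true
    property int frame: 0

    // save each rendered frame as a numbered png
    onAfterRendering:
        contentItem.grabToImage(function (result) {
            result.saveToFile("frame" + (win.frame++) + ".png")
        })
}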

Joona Petrell says:

I feel your pain. Touch applications heavily overload the interactions user can perform on the screen, putting a lot of demands on the touch framework. On a quick glance the new touch handler APIs you have created look quite nice. 🙂

Still, I would be careful before introducing new touch primitives and fragmenting the developer experience. Qt Quick already has dedicated APIs for tapping (MouseArea.clicked()), scrolling (Flickable), dragging (MouseArea.drag) and pinching (PinchArea). Back in the days when the first Qt gesture and multi-touch support was developed, both trolls and internal Nokia UI framework teams implemented many touch framework prototypes. Most of them failed to graduate from isolated lab demos to serve non-trivial real-world apps that supported the gesture UX the designers wanted. Nowadays nobody in their right mind would use QGestures to implement a scrolling gesture when Flickable does that for you implicitly.

Most applications just use MouseAreas for tapping and Flickables for scrolling (more complex gesture handling belongs to UI toolkits like Qt Quick Controls and Silica UI components). To serve the majority of Qt Quick projects I would introduce a simple flag for enabling tapping and scrolling of multiple items simultaneously in the app. If maintaining two code paths inside existing primitives is too difficult then provide new elements with near identical API to the existing MouseArea and Flickable to make the migration as easy as possible. Most developers don’t care how the gesture negotiations have been implemented, so you should have some leeway under the hood.

In the long run I wouldn’t mind seeing Qt Quick touch handling partially rewritten. While the APIs are rather nice, there is a lot of unnecessary complexity, coupling and small (but annoying) inconsistencies between MouseArea, Flickable, ListView, GridView, PathView and the like that have accumulated over the years, making the stack difficult to maintain and extend. A proper rewrite would probably cause behavior breaks, so the new touch primitives couldn’t co-exist with the old.

> MouseArea proclaims by its name that it handles mouse events

I don’t think that was ever the intention; it supported TouchEvents quite early on. MouseArea supports both desktop mouse and touch screen interaction. Agree the name is bad.

PS. The link to PointHandler documentation was broken and the generated API documentation is missing some properties and signals.

