Layered rendering part 2, it helps solve many problems… :-)

Published Thursday April 23rd, 2009
Posted in Graphics, Graphics View, KDE, Kinetic, Qt, Uncategorized

As part of Qt 4.5, we added QGraphicsItem::opacity. Which is great! But it doesn't work as well as it could, and we're receiving a few comments about how this implementation could work better. Trouble is, in Qt 4.5 we only have two ways of rendering: direct and indirect ("cached", e.g., ItemCoordinateCache). And to apply opacity properly, you really need full composition support… :-/

Here’s a rundown of the trouble with today’s opacity support:

  • The current behavior modifies the input opacity of the painter
    • …so you can override this opacity on a per-item basis (intentionally or not!)
    • …and each primitive (line, rect, pixmap) you draw with QPainter will be composed one at a time inside the item (which is bad!)
  • The behavior is slightly different depending on whether you cache an item or not.
    • …if you use caching, the offscreen pixmap is rendered with an opacity, whereas the item itself doesn’t have any opacity set on its painter.
  • Each item applies opacity locally, and even though opacity propagates to children, each item is still rendered by itself, causing each item to be transparent (as opposed to rendering one subtree as a whole with a single opacity)

All these problems could have been solved if we had treated opacity as an effect that you apply to one layer as a whole, instead of to each item. I think this is how opacity should have worked in the first place… but that's how it goes sometimes. But fear not, we can fix this in a later release! 🙂

To illustrate the current behavior, here’s what you get if you construct a scene with four items, each a child of the previous item, in colors red, then green, then blue, then yellow. Logical structure in hypothetical markup:

Rect {
    color: red;
    rect: QRectF(0, 0, 100, 100);

    child: Rect {
        color: green;
        pos: QPointF(25, 25);
        rect: QRectF(0, 0, 100, 100);
        opacity: 0.5;

        child: Rect {
            color: blue;
            pos: QPointF(25, 25);
            rect: QRectF(0, 0, 100, 100);
            rotation: 45;

            child: Rect {
                color: yellow;
                pos: QPointF(25, 25);
                rect: QRectF(0, 0, 100, 100);
                scale: 2×2;
            }
        }
    }
}

In Qt 4.5, this renders the following output:

Opacity, in Qt 4.5

What’s important to notice here is how all elements are transparent; so you can see the blue through the yellow item, the green through the blue item, and the red item through the green item. But the yellow item doesn’t actually have any opacity assigned. It inherits opacity from the green item (opacity: 0.5).

By rendering the “green sub-tree” into a separate layer, we can combine all items and apply one uniform opacity as part of composing these items together. In my last blog I wrote about off-screen rendering. This work has progressed and is in quite a usable state (although the code is really ugly). It works! The rendering output for the same application as the above looks like this:

Opacity, in Qt 4.6

The essential difference is how the "green sub-tree" is treated as if it had one _combined_ opacity. The yellow item, for example, isn't transparent by itself at all: you can't see the blue through the yellow. And by "collapsing" the subtree into a single layer we also avoid unnecessary re-rendering of items (i.e., if you move the green item around, the children are not repainted, not even from cache). Which is pretty cool!

(We get this at the cost of allocating and spending an extra pixmap, and the first time we render there’s an extra level of indirection.)

But there are even more applications of this technique… 🙂 With code for handling explicit composition of layers, we can now use pixmap filters and shaders to compose one layer onto another. And this is very useful for any future effects API we might add to Qt in general. Imagine applying a Gaussian blur effect to the layer represented by the green item. The result is below:

Opacity w/blur, in Qt 4.6

The result is very neat; the whole layer is blurred before it’s rendered. You don’t get funny artifacts that might have occurred in overlapping regions if each of the items was blurred individually.

None of this gives any mind-blowing screenshots (at least not yet), but it's an important step: it provides faster rendering of subtrees (sliding and transforming a complex graph of items becomes just matrix operations on a single pixmap/texture, and in software this prevents overdraw), more accurate processing of QGraphicsItem's opacity property, and it makes it very easy to apply fancy composition effects to groups of items.

Questions: should layers be explicitly defined, or implicitly handled by logic inside Graphics View? In the example above, I've used the opacity property to detect whether an item's subtree should be rendered into a separate layer or not. This is easy to do and very clear/unsurprising. But what if you want to collapse a subtree for performance reasons? An alternative, or possibly additional, approach would be an explicit setting: QGraphicsItem::setLayer(true) or setCacheMode(QGraphicsItem::DeepItemCoordinateCache). I'm torn. Experimentation and feedback will tell (yes, I know I need to push this code out; once our SCM goes public this will be much easier!).

There’s also the problem with these darn flags QGraphicsItem::ItemIgnoresParentOpacity and QGraphicsItem::ItemDoesntPropagateOpacityToChildren. There’s no way to make these flags work with a layered rendering approach. But are these flags very important, or could we just (*cough* *cough*) disable/deprecate them? 😉

Finally, I still haven't figured out how to handle ItemIgnoresTransformations. The most probable solution is that these items are automatically rendered into a separate layer.

Happy hacking! 🙂




QtDude says:

Great stuff. As for the question of whether layers should be explicitly defined or handled implicitly, I vote for explicitly. For the particular application I am working on (a graphical editor), items are displayed on different layers, so that they can be dealt with as a whole (some layers hidden, some gray, etc.) As it is, I’m having to simulate layers in the program, but if this was supported directly by Qt, that would be awesome!

Really nice to see this.
Your post describes exactly the problem I have seen in QtSvg, which lacks the ability to apply an opacity to a group of elements. See these images for instance:

Currently it simply ignores the group opacity; to get the right results, something like the layered rendering you describe would be required (or similar behavior implemented by hand in the SVG renderer).

Nice work!

hugo says:

I use QGraphicsView to create simple dynamic widgets with few items. For more complex widgets I use the OpenGL API, because its lower-level API is less restrictive.
That said, in my opinion QGraphicsView should aim for simplicity, with layers handled implicitly, to make life easier when creating customized QTreeViews or QTableViews.

JarJarThomas says:

I'm coming from the visual FX industry, and my opinion is: allow defining your layers manually (having an optional way to use automatic layers is OK).

I'm thinking of a kind of layer object where I can add items with or without their subtree:
myQLayerObject->addItem(myItem, true);  // adds graphicsItem myItem WITH subtree
myQLayerObject->addItem(myItem, false); // adds without subtree

Manual layers should, whenever possible, be independent of the object hierarchy; otherwise they can become useless very fast.
To get the full potential, it must be possible to have, e.g., the root of a tree and its deepest item in layer one, and a child of the root in layer two.
(In the last compositing app I wrote, that was an integral feature.)

Automatic Layers can be combined with the object hierarchy.

Once you have layers, many more features become useful:
a pre- and a post-draw step for each layer;
enable/disable for each layer.

That way you can easily have different “states” of the user interface by blending/disabling layers, or render pre- and post-pixel effects on a layer (blur, and so on).

Also, if you do layers, another thing comes to mind: a kind of HUD layer/item, i.e. an on-screen object that always has the size of the screen and also a depth. Currently I fake this, but that would be a real timesaver (e.g. for HUD effects).

Scorp1us says:

Huh? Why is the yellow corner being blended? Seems that it should occlude the red, thereby not producing orange.

Andreas says:

JarJarThomas: I like the idea of making it explicit as you say, and the idea of having a special layer object is appealing, but I wonder if it’s necessary. Right now I’m working on making it an option of the items. It’s hard to define a layering system that doesn’t follow the object hierarchy, especially when the QGV stacking order (which affects propagation of properties and key/mouse input) follows the object hierarchy. HUD items are very interesting, I agree. Today there’s one way you can do HUD items: leave the QGraphicsView transformation unchanged and layer items in the scene instead (applying zoom/rotate to the items in the scene rather than on the view). Maybe this is what you’re doing already :-).

Scorp1us: It’s blended because it’s in the same layer as green and blue… As the whole layer is 50% transparent, but the yellow item itself is opaque to its layer, the result is orange.

Remi says:

I know it’s a little off-topic, but I have roughly the same needs as JarJarThomas, although in a different domain. We develop GIS applications in the transportation domain, so we use graphics scenes to draw maps with several hundred thousand vector items organized in layers. In such GIS applications, the layers typically hold some features common to their items: graphic attributes, zValue, a visibility range (all items in the layer appear and/or disappear when you reach given zoom levels), …. And on top of that, I have a set of “HUD items” that are supposed to have fixed coordinates in the view: an embedded navigation widget, scale/compass indicators, ….
Obviously, it would be easier to develop this kind of GIS application if some kind of item layers were implemented in QGV. I also think it could improve overall performance, as it would be possible to reduce iteration through the item collection by skipping non-visible layers.
Concerning HUD items, I didn’t try to zoom/rotate items individually as you suggest in your answer (wouldn’t it be time-consuming?), so I chose to reposition my HUD items after each view transformation. Maybe an easier approach would be a layout of semi-transparent widgets in front of the graphics view?

