As mentioned in earlier posts, by far the most important aspect of the DWM is that application windows are redirected to render offscreen, and the DWM is then responsible for compositing those windows to the screen.  So how exactly does that happen?  That's what this post is all about.  Redirection is a fairly complex topic, but it is completely central to the composited desktop.  Thanks to Jevan Saks and Greg Swedberg for reviewing this post and answering some of my own questions along the way.

Before diving into this, I should clarify something that hasn't been brought up in earlier posts: the DWM only redirects top-level HWNDs.  Thus, a Multiple Document Interface (MDI) application (Microsoft Management Console, mmc.exe, is a good example) will have its overall top-level HWND, along with its internal child HWNDs, composited as a single entity.  The application process draws the child HWNDs, and their non-client areas, as it always has. 

For the purposes of a discussion on redirection, there are really three types of windows that are of interest: GDI-rendered windows, DirectX-rendered windows, and windows rendered by a mix of DirectX and GDI.  Let's discuss these in turn.

GDI-rendered windows

Today, and for the near future, most applications use GDI to render their content.  Traditionally, a GDI application was notified when a part of its window became unoccluded, and was asked to repaint that portion of the window.  Under the DWM, the window is redirected, and the following happens:

  • A system memory surface the size of the window is allocated and associated with that window.
  • A video memory surface, in the target DirectX pixel format, is allocated, also the size of the window.
  • When an application retrieves the GDI DC of an HWND, it no longer receives a DC onto the primary video buffer, as it does on the non-composited, pre-DWM desktop.  Instead, the DC targets the allocated system memory surface.
  • GDI operations on that DC then populate the system memory surface.
  • The system, based on a number of variables, decides to update the video memory surface from the system memory surface at the "right times".
  • The video memory surface is now up-to-date with the application's rendering, and the compositor then uses it to composite the desktop.
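
The steps above can be sketched as a toy model.  This is purely conceptual; the class and surface names are invented for illustration and are not real Win32 or DWM types:

```python
# Conceptual model of GDI window redirection under the DWM.
# All names here are invented for illustration.

class RedirectedGdiWindow:
    def __init__(self, width, height):
        # System memory surface: the target of the app's GDI DC.
        self.system_surface = [[0] * width for _ in range(height)]
        # Video memory surface: what the compositor actually reads.
        self.video_surface = [[0] * width for _ in range(height)]

    def gdi_draw(self, x, y, value):
        # GDI operations land in system memory only -- read-modify-write
        # operations are cheap for the CPU here.
        self.system_surface[y][x] = value

    def flush_to_video(self):
        # At the "right times", the system copies the system memory
        # surface into the video memory surface.
        self.video_surface = [row[:] for row in self.system_surface]

    def composite(self):
        # The compositor only ever uses the video memory surface.
        return self.video_surface

window = RedirectedGdiWindow(4, 4)
window.gdi_draw(1, 1, 255)            # app paints via its GDI DC
assert window.composite()[1][1] == 0  # compositor hasn't seen it yet
window.flush_to_video()
assert window.composite()[1][1] == 255  # now visible to the compositor
```

The key design point the model captures is that the application and the compositor never touch the same surface directly; the flush step decouples them.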

There are a few implications of the above that are worth calling out.

  • Dual buffers per window - yes, it's true that GDI windows have both a system memory and a video memory representation.  There is without doubt a memory cost to doing this.  One obvious alternative is to simply have a video memory representation and have the GDI redirection mechanism render to that format.  There are two primary problems with this.  The first is that the formats are not the same, and GDI doesn't support rendering into the DirectX format.  Even if that were resolved, the more fundamental issue remains.  Many GDI operations (XORs, alpha blending, and text are examples) are read-modify-write operations.  To do that to a native video memory surface would involve reading back from video memory into the CPU (and thus into system memory), performing the operation, and then writing back.  This is typically a horribly slow and pipeline-stalling operation.
     
  • Minimized windows present a special issue.  Typically, when a window is minimized, the surface the application is asked to paint is a nominal size, like 130x30, just enough for a remnant of the non-client area.  If the application updates the system memory surface at this point, and we continue copying to the video memory surface, then any surface we may have had available for Flip3D or for thumbnail rendering is suddenly gone.  Instead, we maintain the video memory surface in its last known state, and thus those "secondary window representations" remain far more useful when windows are minimized. 
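
That minimized-window behavior can be sketched as follows.  Again, this is a conceptual model with invented names, not real DWM code:

```python
# Conceptual sketch of the minimized-window behavior: while a window
# is minimized, the DWM stops propagating updates to the video memory
# surface, preserving its last known full-window state.

class RedirectedWindow:
    def __init__(self):
        self.system_surface = "full window content"  # app's GDI target
        self.video_surface = "full window content"   # compositor's copy
        self.minimized = False

    def minimize(self):
        self.minimized = True
        # The app may now repaint only a nominal-size area.
        self.system_surface = "tiny non-client remnant"

    def flush_to_video(self):
        # Key point: while minimized, the nominal-size repaint is NOT
        # copied forward, so the video surface keeps its last known
        # full-window state...
        if not self.minimized:
            self.video_surface = self.system_surface

    def thumbnail(self):
        # ...which is what Flip3D and taskbar thumbnails read from.
        return self.video_surface

w = RedirectedWindow()
w.minimize()
w.flush_to_video()
assert w.thumbnail() == "full window content"
```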

DirectX-rendered windows

Unlike GDI applications, DirectX applications can, of course, natively render into the DirectX pixel format that the DWM expects.  They also give a very clear indication of when they're done rendering, due to the requirement that they call Present().  As such, DirectX applications need only a single window buffer for their redirection.  DirectX window redirection works like this: when the DirectX system determines which surface to hand the application to render into, it calls into the DWM to share that surface between the DirectX client application's process and the DWM process.  This "shared surface" support is unique to DirectX atop the WDDM, and is another key reason why WDDM is an absolute requirement for running the DWM.

When a Present() happens to such a surface, the DWM is notified that there are dirtied surfaces that need to be composited to form the desktop, and that serves as an indication to perform a composition.  (It's actually a fair bit more complicated than that, but this description certainly provides the gist of it.)
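
The gist of that flow can be sketched with a toy model.  The class names and the dirty-set mechanism here are invented for illustration; the real notification path is considerably more involved:

```python
# Conceptual sketch: a Present() on a shared DirectX surface marks it
# dirty and signals the compositor, which composes only when needed.

class Compositor:
    def __init__(self):
        self.dirty_surfaces = set()
        self.frames_composed = 0

    def notify_dirty(self, surface_id):
        self.dirty_surfaces.add(surface_id)

    def compose_if_needed(self):
        # Composition runs only when something has changed.
        if self.dirty_surfaces:
            self.frames_composed += 1
            self.dirty_surfaces.clear()

class DxApplication:
    def __init__(self, compositor, surface_id):
        self.compositor = compositor
        self.surface_id = surface_id  # surface shared with the DWM process

    def present(self):
        # Present() is the unambiguous "I'm done rendering" signal,
        # which is why a single shared buffer per window suffices.
        self.compositor.notify_dirty(self.surface_id)

dwm = Compositor()
app = DxApplication(dwm, surface_id=1)
dwm.compose_if_needed()           # nothing dirty: no composition
assert dwm.frames_composed == 0
app.present()
dwm.compose_if_needed()
assert dwm.frames_composed == 1
```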

Certain DirectX-based applications have much more stringent scheduling requirements (for instance, video applications), and there are public APIs provided that allow the application to get a lot more information, and more control, over when they should render based upon the rendering schedule of the desktop compositor.  That will be covered more in a future topic.

Finally, WPF (Avalon) applications are DirectX applications, so they render just as the DX applications described above render.

Mixed DirectX and GDI Windows

The other reasonably common way of rendering to a top-level window involves mixing DirectX and GDI.  There are two forms of "mixing" here: one is perfectly fine, and the other is problematic. 

The form of mixing that is fine is when the top-level HWND has a tree of child HWNDs (and further children, etc.), where each individual HWND is rendered either by DirectX or by GDI.  In this situation, the redirection component of the DWM forms its own "composition tree", where each node represents a node or a set of "homogeneously rendered nodes" in the window tree rooted at the top-level HWND.  Rendering occurs by having each of these render to its own surface, and then compositing this tree of surfaces to the desktop.  Thus, mixed DirectX and GDI rendering works well, so long as the boundary between them is at least at the child HWND level.
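
The composition-tree idea can be sketched like this.  The node and surface names are invented for illustration; the real structure and traversal are more sophisticated:

```python
# Conceptual sketch of the "composition tree": each homogeneously
# rendered HWND (all-GDI or all-DirectX) gets its own surface, and the
# tree of surfaces is composited back to front.

class CompositionNode:
    def __init__(self, name, renderer):  # renderer: "GDI" or "DirectX"
        self.name = name
        self.renderer = renderer
        self.surface = f"{name} ({renderer})"  # each node's own surface
        self.children = []

def composite(node, out=None):
    # Parent surfaces are composited before (i.e., underneath) their
    # children, so children appear on top.
    out = out if out is not None else []
    out.append(node.surface)
    for child in node.children:
        composite(child, out)
    return out

top = CompositionNode("top-level", "GDI")
top.children.append(CompositionNode("child video pane", "DirectX"))
top.children.append(CompositionNode("child toolbar", "GDI"))

assert composite(top) == [
    "top-level (GDI)",
    "child video pane (DirectX)",
    "child toolbar (GDI)",
]
```

Because each surface is rendered by exactly one technology, the DWM never has to interleave GDI and DirectX operations within a single surface.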

The form of mixing that doesn't work well is when an application uses DirectX and GDI to target the same HWND.  This has never been a supported scenario with DirectX, though there are cases where it has happened to work.  Under the DWM it is much more problematic, because there can be no guarantee of ordering between the DirectX and the GDI rendering.  This is most troublesome when GDI and DirectX render not only to the same HWND, but to overlapping areas of it.  As such, this usage pattern is not supported.  Note that there is an alternative that can often work for an application -- DirectX is capable of handing back a DC to a DirectX surface, and the application can perform its GDI rendering to that DC.  From the DWM's perspective, that DirectX surface remains purely DirectX-rendered, and all is well.

Drawing To and Reading From the Screen -- Baaaad!

Lastly, since we're on the redirection topic, one particularly dangerous practice is writing to the screen, either by using GetDC(NULL) and writing to that DC, or by attempting XOR rubber-band lines, and so on.  There are two big reasons that writing to the screen is bad:

  1. It's expensive... writing to the screen itself isn't expensive, but it is almost always accompanied by reading from the screen because one typically does read-modify-write operations like XOR when writing to the screen.  Reading from the video memory surface is very expensive, requires synchronization with the DWM, and stalls the entire GPU pipe, as well as the DWM application pipe.
     
  2. It's unpredictable... if you somehow manage to get to the actual primary and write to it, there is no way to predict how long what you wrote will remain on screen.  Since the UCE doesn't know about it, it may get cleared on the next frame refresh, or it may persist for a very long time, depending on what else needs to be updated on the screen.  (We really don't allow direct writing to the primary anyhow, for that very reason... if you try to access the DirectDraw primary, for instance, the DWM will turn off until the accessing application exits.)