Combining shaders

"So, I have this shader that does normalmapping, and this other shader that does skinned animation. How can I use them both to render an animated normalmapped character?"

Welcome, my friend, to one of the fundamental unsolved problems of graphics programming...

It seems a reasonable expectation that if you have two effects which work separately, it should be easy to combine them, no? After all, that's exactly what happens in a program like Photoshop. I can add a drop shadow, then a blur, then apply a contrast adjustment, and everything Just Works™. Why not the same for GPU shaders?

To understand the problem, we need to understand the underlying programming model. An app like Photoshop has a very simple model:

  • Data is a bitmap image (a 2D array of color values)
  • Filters are functions which take in one bitmap and output a different bitmap
  • Any number of filters can be stacked by passing the output from one filter as the input of another

This programming model is conceptually simple, mostly because it only deals with one data type. It is trivial to chain filters together when their input and output are the same type.

The GPU shader programming model is more flexible, and thus more complex. A slightly simplified diagram:

[Diagram: the GPU programmable pipeline, from vertex data and indices through the vertex shader, rasterization and interpolation, the pixel shader, and blending, to the output rendertarget.]

The blue boxes represent input data. Red are customizable processing operations, and yellow is the final output. Let's walk through what happens each time you draw something, starting at the left of the diagram and working to the right:

  • The GPU reads vertex data and indices
  • These values are combined to form one or more triangles
  • Your vertex shader program runs once for each vertex
    • Inputs: vertex data + effect parameter values
    • Outputs: position + colors + texture coordinate values
  • The GPU takes the position values output by the vertex shader, and works out what screen pixels are covered by each triangle
  • It interpolates color and texture coordinate values over the surface of the triangle, generating smooth gradients between the three corner vertices
  • Your pixel shader program runs once for each pixel covered by the triangle
    • Inputs: colors and texture coordinates (produced by interpolation of the vertex shader output values) + effect parameter values + textures
    • Outputs: color
  • The color produced by the pixel shader is combined with the previous color at that location in the rendertarget, by applying a user-specified blend function
  • The resulting color is stored into the output rendertarget

Yikes! Note that although the final output is a 2D bitmap (same as for a Photoshop filter), the input is a combination of vertex data, indices, effect parameters, and textures. The input and output types are not the same, which means there is no generalized way to pass the output from one shader as the input of another, and thus no way to automatically combine multiple shaders.
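To make that mismatch concrete, here is roughly what the two programmable stages look like in HLSL. This is a deliberately minimal sketch rather than code from any particular sample, and the parameter, structure, and function names are made up for illustration:

    // Effect parameters, set from game code before drawing.
    float4x4 WorldViewProjection;

    texture DiffuseTexture;
    sampler DiffuseSampler = sampler_state
    {
        Texture = (DiffuseTexture);
        MinFilter = Linear;
        MagFilter = Linear;
    };

    // Input read from the vertex buffer: one of these per vertex.
    struct VertexInput
    {
        float4 Position : POSITION0;
        float2 TexCoord : TEXCOORD0;
    };

    // Output of the vertex shader. The position goes to the rasterizer;
    // everything else is interpolated and fed to the pixel shader.
    struct VertexOutput
    {
        float4 Position : POSITION0;
        float2 TexCoord : TEXCOORD0;
    };

    // Runs once per vertex.
    VertexOutput BasicVS(VertexInput input)
    {
        VertexOutput output;
        output.Position = mul(input.Position, WorldViewProjection);
        output.TexCoord = input.TexCoord;
        return output;
    }

    // Runs once per covered pixel. Its only output is a single color.
    float4 BasicPS(float2 texCoord : TEXCOORD0) : COLOR0
    {
        return tex2D(DiffuseSampler, texCoord);
    }

    technique Basic
    {
        pass P0
        {
            VertexShader = compile vs_2_0 BasicVS();
            PixelShader  = compile ps_2_0 BasicPS();
        }
    }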

In fact, the only universal way to combine two shaders is to understand how they work individually, then write a new shader that contains all the functionality you are interested in. This is the price we pay for flexibility. Because shader programs can do so many different things in so many different ways, the right way to merge them is different for every pair of shaders. For instance, to use animation alongside normalmapping, it is necessary to animate the tangent vectors used by the normalmap computation, which requires changes to both the animation and normalmapping shader code.
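As an illustration of what that hand-merging looks like, here is a rough sketch of a combined vertex shader. The skinning setup (a palette of bone matrices with four weighted bone influences per vertex) and every name in it are hypothetical; a real implementation would follow whatever conventions your existing animation and normalmapping code already use.

    #define MAX_BONES 59    // arbitrary palette size, chosen to fit in vertex shader constants

    float4x4 Bones[MAX_BONES];    // animated skinning palette, uploaded each frame
                                  // (assumed here to include the object's world transform)
    float4x4 ViewProjection;

    struct VertexInput
    {
        float4 Position    : POSITION0;
        float3 Normal      : NORMAL0;
        float3 Tangent     : TANGENT0;
        float3 Binormal    : BINORMAL0;
        float2 TexCoord    : TEXCOORD0;
        int4   BoneIndices : BLENDINDICES0;
        float4 BoneWeights : BLENDWEIGHT0;
    };

    struct VertexOutput
    {
        float4   Position       : POSITION0;
        float2   TexCoord       : TEXCOORD0;
        float3x3 TangentToWorld : TEXCOORD1;   // consumed by the normalmapping pixel shader
    };

    VertexOutput SkinnedNormalMappingVS(VertexInput input)
    {
        // Blend the four bone transforms for this vertex (the skinning part).
        float4x4 skin = Bones[input.BoneIndices.x] * input.BoneWeights.x;
        skin += Bones[input.BoneIndices.y] * input.BoneWeights.y;
        skin += Bones[input.BoneIndices.z] * input.BoneWeights.z;
        skin += Bones[input.BoneIndices.w] * input.BoneWeights.w;

        VertexOutput output;

        // Animate the position.
        float4 worldPosition = mul(input.Position, skin);
        output.Position = mul(worldPosition, ViewProjection);

        // Animate the tangent basis with the *same* skinning transform. This is
        // the coupling between the two techniques that forces a hand-written
        // merge: neither a stock skinning shader nor a stock normalmapping
        // shader contains this code on its own.
        output.TangentToWorld[0] = normalize(mul(input.Tangent,  (float3x3)skin));
        output.TangentToWorld[1] = normalize(mul(input.Binormal, (float3x3)skin));
        output.TangentToWorld[2] = normalize(mul(input.Normal,   (float3x3)skin));

        output.TexCoord = input.TexCoord;
        return output;
    }

The matching pixel shader would then be the normalmapping one, unchanged apart from reading its tangent basis from the interpolated TangentToWorld output.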

However, there are specific cases in which Photoshop style layering is possible, if you impose extra constraints on the programming model by restricting all your shaders to work in a similar way:

  • When processing rectangular 2D regions (most often fullscreen), you can feed the output rendertarget from one drawing operation as an input texture to another. This is only possible when all the interesting work is done in the pixel shader, with SpriteBatch often used to provide the vertex data, indices, and vertex shader. Restricting the geometry pipeline to 2D quads enables Photoshop style composition using a separate shader per layer (a sketch of one such layer follows this list). Check out this sample for an example.
  • If you have several pixel shaders which produce different color values for the same model, and the desired way to combine these colors is a simple arithmetic operation (typically addition, multiplication, or interpolation), you can draw several times directly to the backbuffer, one pass per shader, and use alpha blending to combine the shader outputs. The first pass will typically use opaque blending with standard depth buffer states, while subsequent passes use some other blend function, depth compare set to equal, and depth writes disabled (these states are sketched in the second example below). This can be a good solution for scenes with large numbers of lights, where each pass adds the contribution of a single light to the color already in the backbuffer. See this sample.
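Here is a sketch of the first approach: a single post-processing layer written as a pixel shader only, with SpriteBatch supplying the 2D quad and the vertex shader, and binding the texture passed to Draw to sampler 0. The effect and parameter names are invented for illustration, and the filter itself (a simple desaturation) is just a stand-in for whatever your layer does:

    // The texture SpriteBatch.Draw was called with: either the original scene
    // or the rendertarget produced by the previous layer.
    sampler SourceSampler : register(s0);

    float Desaturation;    // 0 = untouched colors, 1 = fully grayscale

    float4 DesaturatePS(float2 texCoord : TEXCOORD0) : COLOR0
    {
        float4 color = tex2D(SourceSampler, texCoord);

        // Weighted average approximating perceived brightness.
        float gray = dot(color.rgb, float3(0.3, 0.59, 0.11));

        return float4(lerp(color.rgb, float3(gray, gray, gray), Desaturation), color.a);
    }

    technique Desaturate
    {
        pass P0
        {
            // No vertex shader here: SpriteBatch provides it along with the quad.
            PixelShader = compile ps_2_0 DesaturatePS();
        }
    }

Because every layer has the same shape (one texture in, one rendertarget out), any number of layers can be stacked by ping-ponging between rendertargets, which recovers the Photoshop model for this restricted case.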

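And a sketch of the second approach: one extra pass per light, with the combining done by the blend unit rather than by shader code. The state assignments use the standard D3D9-style effect states, the lighting math is reduced to a bare diffuse term, and all names are hypothetical:

    float4x4 World;
    float4x4 WorldViewProjection;

    float3 LightDirection;    // direction the light is shining, in world space
    float3 LightColor;

    struct VertexOutput
    {
        float4 Position : POSITION0;
        float3 Normal   : TEXCOORD0;
    };

    VertexOutput LightVS(float4 position : POSITION0, float3 normal : NORMAL0)
    {
        VertexOutput output;
        output.Position = mul(position, WorldViewProjection);
        output.Normal = mul(normal, (float3x3)World);
        return output;
    }

    // Outputs the contribution of this one light, and nothing else.
    float4 LightPS(float3 normal : TEXCOORD0) : COLOR0
    {
        float diffuse = saturate(dot(normalize(normal), -LightDirection));
        return float4(LightColor * diffuse, 1);
    }

    technique AdditiveLightPass
    {
        pass P0
        {
            // Add this light on top of whatever the previous passes drew.
            AlphaBlendEnable = true;
            SrcBlend         = One;
            DestBlend        = One;

            // Reuse the depth laid down by the first, opaque pass.
            ZEnable      = true;
            ZWriteEnable = false;
            ZFunc        = Equal;

            VertexShader = compile vs_2_0 LightVS();
            PixelShader  = compile ps_2_0 LightPS();
        }
    }

The first, opaque pass would use an ordinary technique without these blend overrides; each additional light is then just one more draw call using this one.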
In a previous job I designed a system for automatically combining shader fragments in more flexible ways than are possible using rendertargets or alpha blending. This worked well, but was complex both to implement and when adding new shader fragments, and there were still many things it could not handle.

So there you have it. Combining shaders turns out to be harder than you might expect, and is usually a manual process. But on the plus side, I guess this makes good job security for us shader programmers :-)

  • Shawn, this is a great and timely post. Seems like this is the wild west and everyone has their own scheme of doing things. Will we ever get to a stage where routines and concepts are standardised, like the STL for C++? I don't know, given how fragmented the shader languages are.

    And there have been many attempts at solving it with graph-based editors, but none of those has become a de facto standard either.

  • Helpful and clear as ever Shawn, thank you so much!

    Now, where can I start a petition to get this sort of info included in the XNA documentation?? ;¬)

  • Yeah, this is the well-known shader permutation problem. People usually write an uber shader, generate all variants of the shader, or nowadays use a deferred shading/lighting approach.

    Shawn, I wonder if you have experimented with deferred lighting in XNA, and how well it performs?

    I also wonder if you are a proponent of deferred approaches (either shading or lighting)?

  • tep: I did some early deferred shading research in my previous job, but never tried this using the XNA Framework.

    I have mixed feelings about deferred rendering - it has some major advantages, but also significant disadvantages that can't really be worked around (e.g. it doesn't work at all for alpha blended geometry). I think it really depends on your game whether the tradeoffs are worth it.

    I also don't think deferred shading is that great a fit for the 360 GPU hardware (limited EDRAM, predicated tiling, etc).

  • Funny how the flexible nature of shaders encourages us to want to make them generalized, but makes them difficult to generalize.

  • tep, instead of Deferred, you should go with the Light Pre-Pass method by Wolfgang Engel. Not only does it handle transparency well, it's also compatible with MSAA and doesn't limit the different materials you can use :)
