Shawn Hargreaves Blog
The way rendertargets work in XNA is somewhat different to regular DirectX, and although we've already discussed most of the details in various forum threads, I recently noticed there doesn't seem to be any one place that explains everything all together. So here it is!
First, the basics. XNA provides just three GraphicsDevice methods relating to rendertargets:
- SetRenderTarget
- ResolveRenderTarget
- ResolveBackBuffer
And here is how you use them:
RenderTarget2D myRenderTarget = new RenderTarget2D(...);
device.SetRenderTarget(0, myRenderTarget);
// draw some stuff
device.ResolveRenderTarget(0);
// use myRenderTarget.GetTexture to look up the data you just rendered
You can also resolve from the backbuffer, which you can think of as the default rendertarget used when you haven't specified a custom one:
Texture2D resolveTexture = new Texture2D(device, w, h, 1, ResourceUsage.ResolveTarget, format, ResourceManagementMode.Manual);
// draw some stuff
device.ResolveBackBuffer(resolveTexture);
Unfortunately, there are three special rules to complicate the picture:
- You must resolve a rendertarget before GetTexture will return valid data
- When you set a rendertarget onto the device, its existing contents become undefined
- When you resolve a rendertarget, its contents become undefined
Note that "undefined" is a tricky word. It might mean "is set to zero", or it could mean "contains random values", or perhaps "hasn't changed at all, and still contains exactly what you would expect".
In XNA, the exact meaning of "undefined" is different on Xbox and Windows, and also depends on whether your rendertarget uses multisampling. This is unfortunate, because it makes it easy to write code that assumes one particular behavior, which may happen to work on one platform but will then behave unexpectedly on another.
If you want your code to be robust and portable, you have to avoid making any assumptions about the contents of things that have become undefined. That means whenever you change rendertarget or call resolve, you must assume all existing rendertarget data has been lost. If you need to preserve the contents of a rendertarget across such an event, the only reliable way to do that is by drawing the texture you resolved into back over the top of the now undefined rendertarget:
spriteBatch.Begin();
spriteBatch.Draw(resolveTexture, Vector2.Zero, Color.White);
spriteBatch.End();
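Putting that save/restore pattern together, the whole sequence might look something like this (a sketch against the XNA 1.0 API; `device`, `spriteBatch`, `myRenderTarget`, and the `Draw...` helpers are assumed placeholders for your own objects and drawing code):

```csharp
// Render to the rendertarget, then resolve so GetTexture becomes valid.
device.SetRenderTarget(0, myRenderTarget);
DrawScene();                        // placeholder for your drawing code
device.ResolveRenderTarget(0);

// Do other rendering: this may trash the rendertarget contents.
device.SetRenderTarget(0, null);
DrawSomethingElse();                // placeholder

// Later: restore the saved image over the now-undefined rendertarget
// by drawing the resolved texture back over the top of it.
device.SetRenderTarget(0, myRenderTarget);
spriteBatch.Begin();
spriteBatch.Draw(myRenderTarget.GetTexture(), Vector2.Zero, Color.White);
spriteBatch.End();
// ...carry on rendering over the restored contents
```

Because this assumes nothing about what SetRenderTarget or Resolve did to the buffer, it behaves the same on both Windows and Xbox.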
You may be wondering why XNA imposes these peculiar rules. The reason is that there are three fundamentally different ways in which rendertargets can be implemented.
On Windows, if you are not multisampling, a single area of video memory can be used both for rendering and as a texture. This means resolve is essentially a no-op, and the contents of rendertargets are never actually lost.
On Windows, if you are multisampling, a rendertarget requires two areas of video memory: one large buffer for rendering into, plus a smaller texture for holding the downsampled result. The resolve call copies from one to the other, taking care of the multisampling as it goes, but each buffer still has its own area of video memory so the contents are never lost.
On Xbox, the GPU actually only has one physical rendertarget, which is stored in a special area of incredibly fast memory (this crazy silicon is one of the main reasons the Xbox GPU is so fast). You can't texture from this special memory, and you can't render into normal memory, so when you are done rendering to the special memory, you have to copy the results back into a normal texture before you can do anything with them (fortunately, there is more crazy silicon devoted to making that copy operation pleasingly quick).

Because all rendertargets are really just pointers to the same special memory, there is fundamentally no way to persistently store an image in one while rendering something different to another, so the contents will always be destroyed as soon as you switch rendertarget. It is less obvious why the resolve call needs to clear the special memory, but apparently there is a performance gain from doing this: I don't pretend to understand why, but I'm not going to complain as long as this keeps my Xbox running as fast as it does!
This table might help clarify exactly what happens on each platform:

                                 SetRenderTarget       Resolve
    Windows, no multisampling    contents preserved    contents preserved (no-op)
    Windows, multisampling       contents preserved    contents preserved (copied to texture)
    Xbox                         contents destroyed    contents destroyed (copied to texture, then cleared)
But remember, you only need to care about these details if you are planning on breaking the rules! If you play it safe by always assuming your buffer contents will be lost when you call SetRenderTarget or Resolve, that code will work consistently on all platforms.
ahh, good deal ... I was curious about using this for some post processing, but couldn't really find any coherent data in one place (as you alluded to). thanks Shawn
Thanks! This clears up some things.
I was especially curious about the cost of resolving the render target.
This blog post is kind of troublesome, because I got some post-processing effects going in what I think is the very same way that you say should destroy the textures on the Xbox360, yet they work fine!
I render the scene to target A, resolve A, switch to target B, render A to B with an effect, resolve B, switch to C, render A to C with an effect, resolve C, then render A, B, and C to the backbuffer.
This should be destroying everything, but it's not. Works flawlessly on Windows and Xbox360. What gives? Should I repost this on the XNA forum for the general edification of the masses?
Thanks, love your blog!
Sorry, unclear wording on my part there! Your approach seems fine. The thing that is lost when you call resolve is the contents of the buffer that is being rendered onto: the texture that it resolves into (which is returned by GetTexture) will never be lost.
This is fine:
- Render to A
- Resolve A
- Render to B
- Resolve B
- Render A.GetTexture and B.GetTexture to C
But this is not:
- Render to A
- Resolve A
- Carry on rendering more stuff to A

No good, because the resolve cleared the current contents of the rendertarget!
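The "fine" sequence above could be sketched like this (assuming the XNA 1.0 API; `targetA`, `targetB`, `targetC`, and the `Draw...` helpers are placeholder names):

```csharp
device.SetRenderTarget(0, targetA);
DrawScene();                          // placeholder drawing code
device.ResolveRenderTarget(0);        // targetA.GetTexture() is now valid

device.SetRenderTarget(0, targetB);
DrawWithEffect(targetA.GetTexture()); // placeholder post-process pass
device.ResolveRenderTarget(0);        // targetB.GetTexture() is now valid

// Composite both resolved textures into C: only the resolved
// textures are read here, never the undefined buffer contents.
device.SetRenderTarget(0, targetC);
spriteBatch.Begin(SpriteBlendMode.Additive);
spriteBatch.Draw(targetA.GetTexture(), Vector2.Zero, Color.White);
spriteBatch.Draw(targetB.GetTexture(), Vector2.Zero, Color.White);
spriteBatch.End();
device.ResolveRenderTarget(0);
```

The key point is that once a target has been resolved, all further reads go through GetTexture, and nothing is ever drawn back into a target whose contents may have been cleared.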
Now all is clear!
I like simple diagrams, preferably with lots of pictures, arrows, and colors, and anything beyond that is difficult to fit into my brain...
Thank you very much!
I love your blog doubly now!
btw, there is no simple way to do such a save/restore for the RenderTargetCube. What should we do in that case?
You should be able to render any individual face of the cubemap texture to the backbuffer, in just the same way you would for a 2D rendertarget. You need a custom pixel shader to do that, but it should be simple enough to write a shader that just grabs pixels from one face of the cubemap.
Remember that only the texture portion of the RenderTargetCube is truly a cubemap: the current rendertarget is still just a 2D surface, so you are only ever rendering to (and thus only need to worry about restoring) one face of that cubemap at a time.
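A restore of a single face might look something like this (a sketch only: `restoreCubeFaceEffect` is a hypothetical effect containing your own pixel shader that samples one face of the cubemap, and `DrawFullscreenQuad` is a placeholder helper):

```csharp
// Select one face of the cube rendertarget, just like a 2D target.
device.SetRenderTarget(0, renderTargetCube, CubeMapFace.PositiveX);

// Draw a fullscreen quad whose pixel shader samples the matching
// face of the previously resolved cubemap texture.
restoreCubeFaceEffect.Parameters["CubeTexture"].SetValue(renderTargetCube.GetTexture());
DrawFullscreenQuad(restoreCubeFaceEffect);   // placeholder helper
```

Repeating this per face (with the shader's lookup direction changed to match) would restore the whole cubemap, one 2D surface at a time.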
XNA should probably have more capable rendertargets, and support features like 'RT stacking'. I hope that feature will appear in future releases.
BTW, what is the lifetime of the resolved texture? I do some rendering into a RT, then resolve it, then do some other rendering into another RT, then set the previous RT again and try to get its resolved texture, but I get an InvalidOperationException at that point. That was my simple attempt to implement RT stacking myself, and it fails =). Can you give some explanation, please?