Compressed vertex data

Compressed data is a great thing, especially when the GPU hardware is able to decompress it for free while rendering. But what exactly does vertex data compression mean, and how do we enable it?

XNA defaults to 32 bit floating point for most vertex data. For instance the VertexPositionNormalTexture struct is 32 bytes in size, storing Position and Normal as Vector3 (12 bytes) and TextureCoordinate as Vector2 (8 bytes). 32 bit floats are great for precision and range, but not all data needs so much accuracy! There are many other options to choose from:

    enum VertexElementFormat
    {
        Single,
        Vector2,
        Vector3,
        Vector4,
        Color,
        Byte4,
        Short2,
        Short4,
        NormalizedShort2,
        NormalizedShort4,
        HalfVector2,
        HalfVector4
    }

The HalfVector formats are only available in the HiDef profile, but all the others are supported by Reach hardware as well.

Generating packed values from C# code is easy thanks to the types in the Microsoft.Xna.Framework.Graphics.PackedVector namespace. For instance we could easily make our own PackedVertexPositionNormalTexture struct that would use HalfVector2 or NormalizedShort2 instead of Vector2 for its TextureCoordinate field.
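
For instance, here is a rough sketch of such a struct (following the XNA 4.0 IVertexType pattern; the name PackedVertexPositionNormalTexture and the choice of HalfVector2 are just one option), which shrinks each vertex from 32 to 28 bytes:

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;
    using Microsoft.Xna.Framework.Graphics.PackedVector;

    // Hypothetical 28 byte replacement for VertexPositionNormalTexture,
    // packing the texture coordinate into a 4 byte HalfVector2.
    public struct PackedVertexPositionNormalTexture : IVertexType
    {
        public Vector3 Position;
        public Vector3 Normal;
        public HalfVector2 TextureCoordinate;

        public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration
        (
            new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
            new VertexElement(12, VertexElementFormat.Vector3, VertexElementUsage.Normal, 0),
            new VertexElement(24, VertexElementFormat.HalfVector2, VertexElementUsage.TextureCoordinate, 0)
        );

        VertexDeclaration IVertexType.VertexDeclaration
        {
            get { return VertexDeclaration; }
        }
    }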

To compress vertex data that is built as part of a model, we must use a custom content processor. This example customizes the built-in ModelProcessor, automatically converting normal data to NormalizedShort4 format, and texture coordinates to NormalizedShort2. This is an 8 byte saving, reducing 32 byte uncompressed vertices to 24 bytes:

    using Microsoft.Xna.Framework.Content.Pipeline;
    using Microsoft.Xna.Framework.Content.Pipeline.Graphics;
    using Microsoft.Xna.Framework.Content.Pipeline.Processors;
    using Microsoft.Xna.Framework.Graphics.PackedVector;
 
    [ContentProcessor]
    public class PackedVertexDataModelProcessor : ModelProcessor
    {
        protected override void ProcessVertexChannel(GeometryContent geometry, int vertexChannelIndex, ContentProcessorContext context)
        {
            VertexChannelCollection channels = geometry.Vertices.Channels;
            string name = channels[vertexChannelIndex].Name;

            if (name == VertexChannelNames.Normal())
            {
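                // Normals are unit length, so they pack neatly into
                // signed, normalized 16 bit components.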
                channels.ConvertChannelContent<NormalizedShort4>(vertexChannelIndex);
            }
            else if (name == VertexChannelNames.TextureCoordinate(0))
            {
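                // Texture coordinates also fit in 16 bits, as long as
                // they stay inside the -1 to 1 range (see below).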
                channels.ConvertChannelContent<NormalizedShort2>(vertexChannelIndex);
            }
            else
            {
                base.ProcessVertexChannel(geometry, vertexChannelIndex, context);
            }
        }
    }

Note that we had to choose NormalizedShort4 format for our normals, even though these values only actually have three components, because there is no NormalizedShort3 format. That's because GPU vertex data must always be 4 byte aligned. We could avoid this wastage by merging multiple vertex channels. For instance, if we had two three-component data channels, a and b, we could store (a.x, a.y, a.z, b.x) in one NormalizedShort4 channel, plus (b.y, b.z) in a second NormalizedShort2 channel. We would then have to change our vertex shader to extract this data back into the original separate channels, so this approach is more intrusive than just changing the format of existing data channels.
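
On the CPU side, that merge might look something like this (a minimal sketch, assuming a and b are Vector3 values that already fit the -1 to 1 range):

    // Hypothetical example: merge two three-component channels, a and b,
    // into one NormalizedShort4 plus one NormalizedShort2, with no
    // padding wasted. The vertex shader must split them apart again.
    Vector3 a = new Vector3(0.5f, -0.25f, 1.0f);
    Vector3 b = new Vector3(0.0f, 0.75f, -1.0f);

    NormalizedShort4 packed0 = new NormalizedShort4(a.X, a.Y, a.Z, b.X);
    NormalizedShort2 packed1 = new NormalizedShort2(b.Y, b.Z);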

Vertex compression often works better if you adjust the range of the input data before changing its format. For instance, NormalizedShort2 is great for texture coordinates, but only if the texture does not wrap. If you have any texture coordinate values outside the range -1 to 1, these will overflow the packed range. This can be avoided by scanning the entire set of texture coordinates to find the largest value, then dividing every texture coordinate by this maximum. The resulting data will now compress into NormalizedShort format with no risk of overflow. To render the model, you must store the value by which everything was divided, pass it into your vertex shader, and have the shader multiply all texture coordinates by this scale factor.
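
Here is a minimal sketch of that rescaling pass, written as a hypothetical helper that could slot into the processor above (in addition to the usings shown earlier, it needs using System; and using Microsoft.Xna.Framework;):

    // Hypothetical helper: rescale a Vector2 texture coordinate channel
    // into the -1 to 1 range, so it can safely be converted to
    // NormalizedShort2. Returns the scale factor that the vertex shader
    // must multiply unpacked coordinates by to restore them.
    static float NormalizeTextureCoordinates(VertexChannel<Vector2> texCoords)
    {
        float maxCoord = 0;

        foreach (Vector2 uv in texCoords)
        {
            maxCoord = Math.Max(maxCoord, Math.Max(Math.Abs(uv.X), Math.Abs(uv.Y)));
        }

        // Guard against a model whose coordinates are all (0, 0).
        if (maxCoord > 0)
        {
            for (int i = 0; i < texCoords.Count; i++)
            {
                texCoords[i] /= maxCoord;
            }
        }

        return maxCoord;
    }

You might call this from ProcessVertexChannel just before the ConvertChannelContent call, then feed the returned scale into your effect as a shader parameter.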

How much you win by compressing vertex data obviously depends on how many vertices you are dealing with. For many games the gain may be too small to be worth bothering with. But when using detailed models or terrains that contain millions of vertices, the memory and bandwidth savings can be significant.

  • I miss the old Normalized101010 format on the 360, and there's no normalized 32-bit format with XNA 4 (due to, of course, a general lack of support on the PC side of things, I'm sure). I ended up using Byte4 and doing the rescaling manually, which is less free than otherwise...though it's still a massive speed win over uncompressed vertex data for my terrain.

    Moral is: even if you don't have a perfect vertex format that gets you exactly the right number, using the packed format that's the right size and doing a little work in the shader is still the way to go if you need that extra vertex bandwidth :)

  • This is actually the main reason I miss having custom shaders for XNA on Windows Phone 7. For example, my preferred implementation of terrain geomipmapping uses Short2 for [x, y] coordinates (that vertex data is shared across all patches, and rendered with an offset), and Single for the (non-shared) [z] coordinate. The vertex data gets put back together in the vertex shader. But without custom effects, this kind of compression is of course not possible. Which is ironic, as WP7 is the one place where you'd want it the most :)

  • Thanks Shawn, this may be a benefit for me.

    You see, in my engine I use the VertexPositionTexture format for all my models.

    As a post effect I calculate the normal from the depth buffer, and transparent rendering is also done as a post effect, so I simply pass the vertex and texture through the pipeline.

    And yes: "Which is ironic, as WP7 is the one place where you'd want it the most :)"

    Is there some way to expose the depth buffer and allow us to create post effects on the phone?

    I know it's all about battery, but perhaps you could add a label to the marketplace, "this game consumes battery", and let the user decide whether or not they want to spend the battery.

    Now back...

    I think I will try the packed thing out.

    Thanks, as always you are full of great tricks.

    Michael

  • Hi Shawn,

    Clearly I'm late to the party here, but I can't find an answer to this anywhere online and you seem to be the man to ask.  Can you use Short2 for vertex position data?  I'm trying to do this at the moment, but I get literally no output.  If I switch to using the same values as floats in a Vector3 (as x,y,0), it works.  Are the WP7 shaders not configured to accept short positions, or maybe 2D positions?

    Cheers,

    Bob

  • You can use Short2 in vertex data, but this is an integer type, so your vertex shader must be written to accept integer rather than float inputs.  BasicEffect takes floats, so Short2 will not work with it.  NormalizedShort2 might be a better choice?
