Vertex data in XNA Game Studio 4.0


In previous XNA versions, a VertexBuffer was just a loosely typed bag of bytes. A separate VertexDeclaration object specified how to interpret these bytes.

As of Game Studio 4.0, every VertexBuffer now has an associated VertexDeclaration, which is specified at creation time. VertexBuffer thus becomes a strongly typed container, providing all the information necessary to interpret its contents.



Consider this typical 3.1 rendering code:

    // Create
    VertexDeclaration decl = new VertexDeclaration(device, VertexPositionColor.VertexElements);

    int size = VertexPositionColor.SizeInBytes * vertices.Length;
    VertexBuffer vb = new VertexBuffer(device, size, BufferUsage.None);


    // Draw
    device.VertexDeclaration = decl;
    device.Vertices[0].SetSource(vb, 0, VertexPositionColor.SizeInBytes);


The problem should jump out at you: see how many times I had to repeat which vertex format I am dealing with? The more places this is repeated, the more chances to make a mistake. I cannot be the only one who ever forgot to set the right VertexDeclaration, or specified the wrong stride when calling Vertices.SetSource!

This loosely typed design also presented challenges for our runtime implementation. Because a VertexBuffer did not specify the format of its contents, the framework was unable to tweak this data for different platforms or hardware (at least not until the actual draw call, at which point it is too late for efficient fixups).

For example, although we generally prefer to perform Xbox endian conversion as late as possible in the Content Pipeline build process, our ContentTypeWriter<VertexBufferContent> did not have enough information to do this correctly. Instead, we had to specify the TargetPlatform when calling VertexContent.CreateVertexBuffer, because the necessary type information was discarded from that point on.

As we add new platforms, we need more flexibility to adjust for the needs of each, and this requires a deeper understanding of the data we are dealing with. Strongly typed vertex buffers simplify the API, which reduces the chance of error, at the same time as increasing framework implementation flexibility. A classic win-win.


VertexDeclaration changes

The VertexElement and VertexDeclaration types still exist, but are used somewhat differently:

  • VertexDeclaration constructor no longer requires a GraphicsDevice
  • Vertex stride is now specified as part of the VertexDeclaration
  • Removed VertexElement.Stream property (see below)
  • Removed VertexElementMethod enum (because no hardware actually supported it)

And the important one:

  • No more GraphicsDevice.VertexDeclaration property

This is no longer necessary, because whenever you set a VertexBuffer, the device can automatically look up its associated declaration. I find it great fun porting from Game Studio 3.1 to 4.0, because I can simply delete everywhere that used to set GraphicsDevice.VertexDeclaration, and everything still magically "just works" ™.
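To illustrate the new constructor, here is a sketch of a hand-built declaration; the element layout below mirrors the built-in VertexPositionColor format (a 12-byte Vector3 position followed by a 4-byte Color), assuming you wanted to write it out yourself:

```csharp
// XNA 4.0: no GraphicsDevice argument, and the stride is implied by the
// elements (an explicit-stride constructor overload also exists).
// This hand-built declaration mirrors the VertexPositionColor layout:
// a Vector3 position at offset 0, then a Color at offset 12.
VertexDeclaration decl = new VertexDeclaration(
    new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
    new VertexElement(12, VertexElementFormat.Color, VertexElementUsage.Color, 0));
```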


VertexBuffer changes

VertexBuffer has different constructor arguments:

  • The vertex format can be specified by passing either a Type or a VertexDeclaration
  • The buffer size is now specified in vertices rather than bytes

There are two other changes:

  • New VertexBuffer.VertexDeclaration property
  • GraphicsDevice.SetVertexBuffer replaces GraphicsDevice.Vertices[n].SetSource

With 4.0, the code example from the top of this article becomes:

    // Create
    VertexBuffer vb = new VertexBuffer(device, typeof(VertexPositionColor), vertices.Length, BufferUsage.None);


    // Draw
    device.SetVertexBuffer(vb);

Note how the vertex format is only specified in one place. Less code, and less potential for error.

These changes mean it is no longer possible to store vertices of different formats at different offsets within a single vertex buffer. This was once a common pattern in graphics programming, because changing vertex buffer used to be incredibly expensive (back in DirectX 7). Developers learned to optimize by merging many models into a single giant vertex buffer, and some have been doing that ever since. But changing vertex buffer is cheap these days, so this is no longer a sensible optimization.


IVertexType interface

In the previous code example, notice how I pass typeof(VertexPositionColor) when creating my VertexBuffer? How can the VertexBuffer constructor get from this type to its associated VertexDeclaration?

We added a new interface:

    public interface IVertexType
    {
        VertexDeclaration VertexDeclaration { get; }
    }

This is implemented by all the built-in vertex structures, so anyone who has one of these types can look up its associated VertexDeclaration.

If you try to create a VertexBuffer using a type that does not implement the IVertexType interface, you will get an exception. This is a common situation when loading model data, as you may wish to support arbitrary vertex layouts that do not match any existing .NET types, so your vertex data is loaded as a simple byte array. In such cases you should use the alternative VertexBuffer constructor overload which takes a VertexDeclaration instead of a Type.
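As a sketch of that pattern (the 20-byte position-plus-texture-coordinate layout and the rawBytes array are hypothetical, standing in for whatever your file format describes):

```csharp
// Hypothetical example: the file header told us each vertex is a Vector3
// position followed by a Vector2 texture coordinate (20 bytes total),
// and rawBytes is the byte[] read from the file.
VertexDeclaration decl = new VertexDeclaration(
    new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
    new VertexElement(12, VertexElementFormat.Vector2, VertexElementUsage.TextureCoordinate, 0));

int vertexCount = rawBytes.Length / decl.VertexStride;

// Use the overload taking a VertexDeclaration rather than a Type.
VertexBuffer vb = new VertexBuffer(device, decl, vertexCount, BufferUsage.None);
vb.SetData(rawBytes);
```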


Custom vertex structures

When creating custom vertex structures, it is a good idea to implement the IVertexType interface, so your custom types can be used the same way as the built-in ones. A simple example:

    struct MyVertexThatHasNothingButPosition : IVertexType
    {
        public Vector3 Position;

        public static readonly VertexDeclaration VertexDeclaration = new VertexDeclaration
        (
            new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0)
        );

        VertexDeclaration IVertexType.VertexDeclaration { get { return VertexDeclaration; } }
    }
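Once the interface is implemented, the custom type works with the Type-based constructor overload just like the built-in structures. A minimal usage sketch:

```csharp
// The VertexBuffer constructor can find the declaration via IVertexType.
MyVertexThatHasNothingButPosition[] verts = new MyVertexThatHasNothingButPosition[3];
verts[0].Position = new Vector3(0, 0, 0);
verts[1].Position = new Vector3(1, 0, 0);
verts[2].Position = new Vector3(0, 1, 0);

VertexBuffer vb = new VertexBuffer(
    device, typeof(MyVertexThatHasNothingButPosition), verts.Length, BufferUsage.None);

vb.SetData(verts);
```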


DrawUserPrimitives changes

DrawUserPrimitives<T> and DrawUserIndexedPrimitives<T> get their vertex data from a managed array, as opposed to a vertex buffer object. With no VertexBuffer, how can they find the necessary VertexDeclaration?

  • If T implements the IVertexType interface, the declaration can be queried from that.

  • If T does not implement IVertexType, you can use new overloads of DrawUserPrimitives and DrawUserIndexedPrimitives, explicitly passing the VertexDeclaration as the last draw call argument.
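In sketch form (the array names and primitive count are placeholders):

```csharp
// T implements IVertexType: the declaration is found automatically.
device.DrawUserPrimitives(
    PrimitiveType.TriangleList, typedVertices, 0, primitiveCount);

// T does not implement IVertexType: pass the declaration explicitly
// as the final argument, using the new overload.
device.DrawUserPrimitives(
    PrimitiveType.TriangleList, untypedVertices, 0, primitiveCount, myDeclaration);
```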


Multiple vertex streams

Prior to version 4.0, a single VertexDeclaration could combine inputs from many vertex streams, using the VertexElement.Stream property to specify the source stream for each element. Multiple vertex buffers were then set onto the device like so:

    device.Vertices[0].SetSource(vb1, 0, stride1);
    device.Vertices[1].SetSource(vb2, 0, stride2);

As of 4.0, each VertexDeclaration now describes the format of a single VertexBuffer, so if you are using multiple buffers, there are also multiple declarations. This made the VertexElement.Stream property unnecessary, so it was removed. Multiple buffers are set onto the device with a single atomic call, similar to (and for the same reasons as) the new SetRenderTargets API:

    device.SetVertexBuffers(vb1, vb2);
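For example, a position stream and a color stream could each get their own single-element declaration (a sketch; buffer names are illustrative):

```csharp
// Each buffer describes only its own contents.
VertexBuffer positions = new VertexBuffer(
    device,
    new VertexDeclaration(
        new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0)),
    vertexCount, BufferUsage.None);

VertexBuffer colors = new VertexBuffer(
    device,
    new VertexDeclaration(
        new VertexElement(0, VertexElementFormat.Color, VertexElementUsage.Color, 0)),
    vertexCount, BufferUsage.None);

// Both streams are then bound with a single atomic call.
device.SetVertexBuffers(positions, colors);
```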


Content Pipeline changes

We made some minor Content Pipeline tweaks to match the new vertex buffer API:

  • Added a new VertexDeclarationContent type
  • VertexBufferContent now has an associated VertexDeclarationContent
  • VertexBufferContent.Write and VertexContent.CreateVertexBuffer no longer need a TargetPlatform parameter
  • Moved the VertexBuffer and IndexBuffer properties from ModelMesh to ModelMeshPart
  • Removed the ModelMeshPart VertexDeclaration and VertexStride properties
Comments

  • > device.SetVertexBuffers(vb1, vb2);

    So under the hood how is that working?  Is it maintaining an internal cache of VertexDeclarations built from the set VBs?  Any interesting technique used to make the lookup of the real declaration fast?

  • > So under the hood how is that working?

    It uses a crazy cross-linked tree structure to do the mapping in linear time (proportional to how many VB's are set simultaneously). Details not particularly interesting to go into here, but yeah, it obviously requires some sensible data structure choices to make such a thing fast.

  • Thank you! You are definitely not the only one who forgot to set the right VertexDeclaration - I've wasted hours tracking that mistake down.

  • So how do I cast the declaration of a buffer?

    Specifically, when I select from a large set of morph targets that all have "position" data, and I can't know the usageindex up front?

  • I'm getting old and grumpy I guess, but I'm still not sure I like this general direction of abstracting stuff further and further from D3D. Abstraction is all neat and dandy, but I can't help but feel like we're losing flexibility and transparency with each iteration.

    Especially abstracting away vertex declarations seems like a bad idea. I agree that it's not something you want to deal with if it can be avoided, but for some interesting usecases (like indeed morph targets) I really think it should remain exposed somehow.

    If you promise to fix my latest issue on Connect though (552653, which happens to deal with an unforeseen usecase), I'll promise to stop being grumpy :)

  • Well, the way D3D does things is itself an abstraction... It is just that the XNA team is trying to find better abstractions. For the most part they seem to be succeeding.

    I would, however, also be interested in more info about Jon's use case. Multiple blended morph targets does seem like an interesting example.

  • > when I select from a large set of morph targets that all have "position" data, and I can't know the usageindex up front?

    Do you mean an approach where all the morph targets are embedded in a single giant VB, then you just change the decl to select one or more from a larger set of possible positions?

    That kind of type-punning is no longer possible in GS4.

    The best way to do morph targets would be to use multiple streams, with a separate smaller VB for each position set. This approach has a couple of other benefits:

    - More compact data format makes more efficient use of GPU vertex fetch caches (which is rarely a bottleneck, so in practice the perf impact is usually somewhere between zero and minimal, but a more cache efficient data format can't hurt and might help)

    - Allows an arbitrary number of morph targets. Many GPUs have a max vertex stride of 255 bytes, so the all-in-one-giant-VB approach imposes a max of somewhere between 10 and 15 position sets.

  • > I'm still not sure I like this general direction of abstracting stuff further and further from D3D

    But which "D3D"? There is D3D9 on Windows, and D3D10+ on Windows, and D3D on Xbox, and D3D on Windows Phone. All of these are significantly different, in ways that change how application code must be written.

    In the past, XNA has generally followed the Windows D3D9 model, which caused much complexity and performance overhead when we tried to map the resulting API to other D3D variants (plus, D3D9 isn't exactly the future, so if we had to pick just one to optimize for, that's not really the most logical choice). A major goal of the 4.0 changes is to find abstractions that do a better job of fitting over all these various back-end implementation layers, so no one platform is left having to jump through hoops trying to conform to an API that was optimized for something different.

  • I'm wondering about setting data into a vertex buffer. Will I still be able to pass an array of floats into the vertex buffer? It'd suck to have to come up with a serialization mechanism that transforms an array of floats into a typed array that conforms with the vertex elements.

  • > Will I still be able to pass an array of floats into the vertex buffer?

    Absolutely. This is crucially important, for instance when loading model data that could use many different vertex formats, so you want to just read and set the bits as a byte array.

    The important thing for the framework is that when you call SetData, passing a byte[] or float[], we know what the actual format of this data is, so if we need to do things like endian swapping or fixing up formats that aren't supported on a particular graphics card, we have enough info to do that correctly.
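    For instance (a sketch; the three-float position-only layout is just an example):

```csharp
// A plain float array, three floats (one Vector3 position) per vertex.
float[] raw =
{
    0f, 0f, 0f,
    1f, 0f, 0f,
    0f, 1f, 0f,
};

// The buffer still knows its real format from its declaration, so the
// framework has enough info for endian swaps or format fixups.
VertexBuffer vb = new VertexBuffer(
    device,
    new VertexDeclaration(
        new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0)),
    raw.Length / 3, BufferUsage.None);

vb.SetData(raw);
```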

  • I think what is unclear with the morph targets example is matching the usage indices specified in the vertex declaration to shader inputs.

    E.g. a shader might have as inputs:

    posA: POSITION0

    posB: POSITION1

    posC: POSITION2

    Then we have some morph target position streams, but we want to assign these vertex buffers to any of posA, posB or posC depending on animation settings.

    Is this possible, or am I misunderstanding something here?

  • > Is this possible, or am I misunderstanding something here?

    This is not possible in the CTP release, but the plan is to enable this before RTM. Rather than just giving an error if multiple streams try to bind the same usage index, we'll just offset the usage of subsequent streams so you can bind whatever Position0 inputs you like (in any order), and the shader sees this data as Position0, Position1, etc.

  • Nice API!

    Multiple vertex streams

    Shawn, is it possible to have a different world matrix for each vertex stream?


    Transform this position with the first vertex stream...


    ...transform this position with the second vertex stream...

    with the mesh of course sharing the same index/face data.

    Is this possible?

    Then we could set up a deferred render of shadow maps and render out three different shadow maps using only one draw call per mesh.

  • Sorry, not the deferred render of shadow maps, I mean a G-buffer.


    Hope you understand.

  • > hope you understand

    I don't, sorry :-)

    Multiple vertex streams are not the same thing as instancing or geometry shaders. The vertex shader still runs in the same way as ever: using multiple streams just sets up a gather operation to read vertex shader input data from several arrays of smaller structures rather than just one array of bigger structures.
