Collapsing caps

Optional features with hundreds of caps bits do not a great developer experience make. How do I know which caps I need to check on which hardware? What do I do if they are not set?

Some of the caps we have today are just silly. How is a 3D game supposed to react if RasterCaps.SupportsDepthBufferTest comes back false? I don't know of any hardware that doesn't set this, but in theory some could, and what then?
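
For context, checking an individual cap today looks something like this (a sketch based on the XNA 3.x GraphicsDeviceCapabilities API; exact member names may vary slightly between versions):

    using Microsoft.Xna.Framework.Graphics;

    // Query the caps for the default adapter (XNA 3.x era API).
    GraphicsDeviceCapabilities caps =
        GraphicsAdapter.DefaultAdapter.GetCapabilities(DeviceType.Hardware);

    if (!caps.RasterCapabilities.SupportsDepthBufferTest)
    {
        // In theory we have to handle this case... but how? Sort every
        // triangle back to front? Refuse to run? There is no good answer.
        throw new NotSupportedException("No depth buffer test. Now what?");
    }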

An interesting exercise (well, interesting to me anyway :-) is to compare the caps found on actual hardware out in the wild:

[Venn diagram: caps supported by Xbox 360, by recent/better Windows cards, and by Windows laptop GPUs]

If you compare the full set of caps from many graphics cards, several things become clear:

  • This is a multi-dimensional problem, far more complex than the above Venn diagram

  • Collating that much data is REALLY REALLY BORING

  • The distribution of caps is far from random

  • Many caps are supported by all hardware

  • Surprisingly, some caps are never supported by any hardware at all!

  • There is a large bucket of caps that are supported by all "HiDef" hardware (Xbox and recent/better Windows cards), but not by older/cheaper Windows parts such as integrated laptop GPUs

  • Thanks to DX10, Windows is rapidly converging toward the "HiDef" feature set (even cheap laptops support DX10 these days)

  • There are remarkably few important caps that are supported by just random subsets of hardware

These findings suggest how we could simplify caps management:

  • Remove all the stupid stuff that no hardware actually supports

  • Collapse the remaining flags into a single enum with two values:

    • Reach = "I want my game to run on as many machines as possible, so will limit myself to the features that are reliably available everywhere"

    • HiDef = "I want to use more advanced things like vertex textures, MRT, and floating point surface formats, and am ok with this limiting my ability to run on less powerful hardware"

It could take just one line of code to check whether a machine supports Reach or HiDef, which makes it easier to understand what features are safe to use. This also helps when trying to create portable games. If you specify the Reach profile, the framework could enforce that you don't accidentally use any HiDef functionality, thus avoiding unpleasant surprises of the "huh, that worked on my machine..." variety.
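
As a sketch, such a check might look something like this (the enum and method names here are illustrative, not a final API):

    using Microsoft.Xna.Framework.Graphics;

    // Hypothetical one-line profile check (illustrative names only).
    if (GraphicsAdapter.DefaultAdapter.IsProfileSupported(GraphicsProfile.HiDef))
    {
        // Safe to use vertex textures, MRT, floating point surfaces, etc.
    }
    else
    {
        // Stick to the Reach feature set, which is reliably available
        // everywhere.
    }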

  • So when will XNA support DX11? :-)

  • I'm torn about whether I agree or not. I agree checking caps is no fun at all and often overlooked anyway, but just a two-way split might not allow for enough granularity.

    For example, HiDef in your example would not include any ATI SM3 hardware because of vertex textures. That would still be a big part of the target audience to cut out of the HiDef experience, at least at the time of writing. It's the obvious problem case, but the point is that it forces your (the API's) hardware requirements for HiDef upon us (the game), while we might only be using a single hardware feature from the list you consider mandatory.

    With DX10 we might not have to worry about the feature set anymore a few years down the line. The problem that remains, however, is judging the actual performance of the hardware, which might be harder to solve. My laptop's Intel card might support all the HiDef features, but I have no way of telling whether it will run like a slideshow.

  • Isn't that the idea behind the feature levels in DX11? Also, doesn't DX10 guarantee certain features?

  • Sounds wonderful, as long as you document in detail what constitutes each profile.  I'm desperately curious to know the specific contents of that "Windows laptop" circle.  It would be incredibly helpful if you could share this information.  Pretty please?

  • It's nice if the problem is solved in DX10/11, but obviously anyone interested in "reach" is still going to be limited to DX9 for a while.

    http://en.wikipedia.org/wiki/Usage_share_of_operating_systems

  • > Isn't that the idea behind the feature levels in DX11?

    Absolutely! The DX11 feature levels (9_1, 9_3, etc) are a very similar concept.

  • > I agree checking caps is no fun at all and often overlooked anyway, but just a two-way split might not allow for enough granularity.

    Believe me, I spent a LOT of time going back and forth between different granularity levels!

    What if we had three buckets, or four, or five? It's interesting to look at which features would end up in each bucket, what kind of things can be achieved using just those features but none of the ones from the next higher bucket, and also what is the current and projected future install base of hardware that fits into each such bucket.

    This is obviously a sliding scale. At one extreme you have the current DX9 approach of a separate bucket for each unique piece of hardware. At the other end is the lowest common denominator approach: "cut everything that isn't supported everywhere".

    I believe the sweet spot lies somewhere in between. Much agonizing can be done over exactly where this lies, but when I gathered all this data, I was surprised just how few interesting caps fell into the third and fourth buckets, and also how small the install base of those cards is today (and not just small, but also rapidly shrinking).

    > My laptop Intel card might be supporting all the HiDef features but I have no way of telling if it won't run like a slideshow.

    True that. Performance differences are in some ways a harder problem than functional differences (it's not so easy to sum up performance characteristics in a concise way), but they are also somewhat softer in that games will still run, just too slowly.

    That gives both the user and the developer more options (reduce resolution, reduce draw distance, use lower detail models, or just conclude "huh, my machine obviously isn't up to running this game") that wouldn't necessarily apply if they just got a crash on startup. Speeding up something that runs too slow is an incremental problem, while fixing something that doesn't run at all is an absolute one.

    Which is not to say that performance deltas between hardware with the same feature set aren't an interesting and worthwhile challenge for us to be thinking about. Just not something I have much clue what we could do about at this point in time :-)

  • > It's nice if the problem is solved in DX10/11, but obviously anyone interested in "reach" is still going to be limited to DX9 for awhile.

    The DX9 API, yes, but not entirely the confusion of DX9 caps.

    It is amazing just how many people are running Windows XP, which uses the DX9 API, on top of DX10 GPU hardware today! And the DX10 install base is growing all the time.

    The interesting thing when you put DX10 hardware in a DX9 machine is that all the caps required by DX10 are therefore guaranteed to be turned on for DX9 as well. You basically end up with a "turned up to 11" DX9 part. In fact I'd go as far as to say that this "turning up to 11" of existing DX9 caps is more valuable for most games than the specific new features added by DX10!

    Of course, it will be a while yet before DX10 hardware becomes so ubiquitous that we could require that for a "reach" profile. But things are definitely moving in a good direction. It makes me happy when I see things gradually becoming more consistent, as opposed to the more common trend of splintering and just getting more and more confusing the more time passes :-)

  • This is an interesting idea, but one thing I learned from Unsigned is that if you are developing a PC game for wide distribution, you want to have as much on a slider as possible.  There are a lot of users (in my experience) who only had SM1.1, and it was pretty difficult and time consuming to make the shaders work the whole way down from 3.0 to 1.1.  With this bucket system, you would really lose that kind of runtime flexibility, right?  Yeah, you could go to the lowest bucket and everyone could play, but then you can't add in those mind-blowing effects for the people who have the hardware to support it.  Caps are inconvenient, but sometimes it is nice to have that granularity.  So what I'm saying is: if you don't completely obfuscate caps, and allow the developer to disable the bucket system for full control, that would seem to be the best bet.
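
    A minimal sketch of the kind of runtime fallback being described, assuming the XNA 3.x GraphicsDeviceCapabilities API (the effect and technique names are hypothetical):

        using System;
        using Microsoft.Xna.Framework.Graphics;

        // Pick the most capable technique the hardware can run, stepping
        // down from SM 3.0 to SM 1.1. 'effect' is an Effect loaded elsewhere.
        GraphicsDeviceCapabilities caps =
            GraphicsAdapter.DefaultAdapter.GetCapabilities(DeviceType.Hardware);

        Version ps = caps.PixelShaderVersion;

        if (ps.Major >= 3)
            effect.CurrentTechnique = effect.Techniques["Lighting_SM30"];
        else if (ps.Major >= 2)
            effect.CurrentTechnique = effect.Techniques["Lighting_SM20"];
        else
            effect.CurrentTechnique = effect.Techniques["Lighting_SM11"];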

  • Sounds like a good idea, generally. Though are you sure about the name? "HiDef" won't be HiDef in two years... will you add a new value to the enum, "ReallyHiDef"?

  • > Though are you sure about the name? "HiDef" won't be HiDef in two years.

    Names are tricky, especially names for somewhat abstract concepts like this.

    "HiDef" actually has quite a specific meaning, though: it refers to the ability to display graphics at resolutions greater than 640x480 (typically 1280x720 or 1920x1080).

    When we were trying to think of a name that could sum up what is unique about Xbox 360 level graphics capabilities (and roughly equivalent Windows GPUs), it occurred to us that Xbox 360 was the first console capable of consistently rendering to an HD resolution output.

    Whatever awesome features the next generation of graphics hardware may bring to the table, I think it will always remain true that Xbox 360 was the first to render at HD resolution.

  • Regarding the performance deltas, perhaps you could have a chat with the WinSAT guys (the Windows Experience Index hardware score)?

    If that gives reliable measures for doing typical DX/XNA things (or even measures per featureset), it would be nice information to have. If nothing else it could provide an indicative value to use for 'graphics quality', instead of games running their own benchmarking tests.

    If it's already being used for games, I guess this use case could use some more exposure :)
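
    For what it's worth, the overall score is already queryable from managed code via WMI; here is a sketch, assuming the standard Win32_WinSAT class (Vista and later):

        using System;
        using System.Management;  // reference System.Management.dll

        // Read Windows Experience Index scores via WMI.
        var searcher = new ManagementObjectSearcher(
            @"root\CIMV2", "SELECT WinSPRLevel, D3DScore FROM Win32_WinSAT");

        foreach (ManagementObject sat in searcher.Get())
        {
            Console.WriteLine("Base score: {0}", sat["WinSPRLevel"]);
            Console.WriteLine("Gaming graphics score: {0}", sat["D3DScore"]);
        }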

  • It's all good until a few years down the road, when there is an even newer feature set that the latest cards support and the older ones don't.  Then what, add a third value to the enum - 'HiHiDef'?

    It'd be nice if all cards of a certain level supported a certain set of features (without any hacks/fudging to get things working).  For example, any card that supports DirectX 11 must support floating point surface formats, MRT, and vertex textures.  If it can't do those things, then it doesn't get the 'DX11' enum.

    We are definitely closer to this than we were years ago, but I think we still have a little way to go.

  • While we don't get XACT on the Zune, we should at least get visualization data. I'm trying to make a music game with interactive backgrounds, and having access to that along with other things would be a nicety :)

  • Hey CobaltHex,

    Visualization data works fine on Zune. This is part of the Song API, not XACT.

    In fact, Zune was the main reason we originally added that functionality!
