Color constructors in XNA Game Studio 4.0

Here’s a subtle improvement I bet you would never notice if I didn’t point it out, but which should save much pulling-out-of-hair for those who previously ran into this issue…

XNA supports two different color formats: byte values ranging from 0 to 255, or floating point values ranging from 0 to 1. The Color struct had constructor overloads accepting either format:

    Color(byte r, byte g, byte b);
    Color(float r, float g, float b);

Well and good, until someone tries something like:

    Color x, y;
    Color z = new Color(x.R + y.R, x.G + y.G, x.B + y.B);

That seems straightforward, but falls foul of an unfortunate interaction between two parts of the C# type system:

  • When you do math on 8 or 16 bit types, C# automatically promotes the result to a 32 bit integer. Although x.R and y.R are bytes, the result of x.R + y.R is an int.

  • When you pass int values to a method that has byte and float overloads, C# overload resolution chooses the float version: int -> float is an implicit conversion, while int -> byte would require an explicit cast, so the byte overload is not even applicable.

Result: values mysteriously end up 255 times larger than you intended. Colors saturate to pure white. Attempted alpha fades saturate to fully opaque. Confusion ensues.
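The old behavior is easy to reproduce with a stand-in struct that mirrors the 3.1-era overloads (a sketch for illustration only — `MyColor` is hypothetical, not the real XNA type):

```csharp
using System;

// Stand-in for the XNA 3.1 Color overloads, just to show which
// constructor C# picks; not the real XNA implementation.
struct MyColor
{
    public byte R, G, B;

    public MyColor(byte r, byte g, byte b)
    {
        R = r; G = g; B = b;
        Console.WriteLine("byte overload");
    }

    public MyColor(float r, float g, float b)
    {
        // Float inputs are on the 0-1 scale, so scale up and clamp to 0-255.
        R = (byte)Math.Max(0f, Math.Min(r * 255f, 255f));
        G = (byte)Math.Max(0f, Math.Min(g * 255f, 255f));
        B = (byte)Math.Max(0f, Math.Min(b * 255f, 255f));
        Console.WriteLine("float overload");
    }

    static void Main()
    {
        var x = new MyColor((byte)100, (byte)100, (byte)100);
        var y = new MyColor((byte)100, (byte)100, (byte)100);

        // byte + byte is int, so this silently picks the float overload:
        // 200 is read as 200.0 on the 0-1 scale and saturates to white.
        var wrong = new MyColor(x.R + y.R, x.G + y.G, x.B + y.B);
        Console.WriteLine($"{wrong.R} {wrong.G} {wrong.B}");   // 255 255 255

        // The 3.1-era workaround: cast each sum back to byte explicitly.
        var right = new MyColor((byte)(x.R + y.R), (byte)(x.G + y.G), (byte)(x.B + y.B));
        Console.WriteLine($"{right.R} {right.G} {right.B}");   // 200 200 200
    }
}
```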

In Game Studio 4.0, we changed the Color constructor overloads to take ints rather than bytes:

    Color(int r, int g, int b);
    Color(float r, float g, float b);

Result:

  • If you pass bytes, you get the int version, which gives the same result as before
  • If you pass floats, you get the float version
  • If you pass the result of doing math on bytes, you get the int version, which was what you wanted all along!
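The 4.0 behavior, including the clamping of out-of-range ints to 0-255 described later in the comments, can be sketched like this (again a hypothetical stand-in, not the real implementation):

```csharp
using System;

// Sketch of the 4.0-style int overload with 0-255 clamping;
// not the real XNA implementation.
struct MyColor
{
    public byte R, G, B;

    public MyColor(int r, int g, int b)
    {
        R = Clamp(r); G = Clamp(g); B = Clamp(b);
    }

    static byte Clamp(int v)
    {
        return (byte)Math.Max(0, Math.Min(v, 255));
    }

    static void Main()
    {
        var x = new MyColor(200, 10, 10);
        var y = new MyColor(100, 10, 10);

        // byte + byte promotes to int, which now matches the int overload,
        // so the sum behaves as intended and 300 clamps to 255:
        var sum = new MyColor(x.R + y.R, x.G + y.G, x.B + y.B);
        Console.WriteLine($"{sum.R} {sum.G} {sum.B}");   // 255 20 20
    }
}
```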
  • Would you stop doing this? I just watched your talk about 3D games on Windows Phone 7 at http://live.visitmix.com/MIX10/Sessions/CL22, it's close to 1am and I just wanted to go to bed. Now you're posting another post? Give me a rest... at least make it a little less interesting.

    It's short though, so thank you.

    Oh yeah, great articles on XNA 4, and great talk at MIX! You're getting my all hyped up about Windows Phone 7 even though I had no interest in it at all in the beginning.

    One question on this though: if passing 255 in the byte overload is the maximum value for a color, what is the maximum value in the new int overload? It's not int.MaxValue, is it? That would make summing the byte values of other colors useless since the desired effect would be way off.

    I hope I got this all right... it's pretty late, and I got a lot of BizTalk to do tomorrow ^^ good night.

  • > One question on this though: if passing 255 in the byte overload is the maximum value for a color, what is the maximum value in the new int overload? It's not int.MaxValue, is it?

    Nope, still 255. So code that used the old byte overload still has exactly the same behavior as before. If you pass a value larger than 255 or less than 0, we clamp to the 0-255 range.

  • Great improvement.

    Just one more question...

    Why did you remove the "new Color(Color, byte)"?

    I was at a presentation, showing how easy it would be to go from XNA3.1 to XNA4.0, then the ONLY thing that did not work was that... It was a pretty simple example, but that was the only thing that didn't work, and it was a great Constructor! xD

    Why did you decide to remove it?

  • > Why did you remove the "new Color(Color, byte)"?

    That's my next-but-one topic :-)

    (it's part of the changes to use premultiplied alpha blending)

  • I noticed this change. And just like all the other changes I like it!!!

  • This was one of the most annoying gripes I had with the XNA framework (which, in the grand scheme of things, tells you that XNA is pretty good). Glad to see it's getting fixed!

  • Nice, I ran into that exact problem yesterday (generating a random color, with Random.Next)

  • Good change, although I'd have to say it's slightly less intuitive when writing new code. People are familiar with bytes being used for colours and using their max range. People are also familiar with using floats between 0 and 1 for ranges. However, if I saw an int overload and hadn't seen this blog post, I wouldn't know what to put in there.

    (However this is probably tackled by the description of the method, and I always use floats anyway :P ).

  • I ran into a very curious problem yesterday, and since it has to do with Color I might as well ask you.

    List<Color> colorList = new List<Color>();
    colorList.Add(new Color(0, 0, 0));
    colorList[0].R = 255;

    This, for some reason, doesn't work. My workaround was to temporarily store colorList[0] in a new Color, change it, and then overwrite colorList[0] with that.

    Any idea why this is happening?

    > List<Color> colorList = new List<Color>();
    > colorList.Add(new Color(0, 0, 0));
    > colorList[0].R = 255;

    > This, for some reason, doesn't work. Any idea why this is happening?

    Because Color is a struct, so colorList[0] returns a copy and you try to change R of a temporary object.

  • >This was one of the most annoying gripes I had with the XNA framework

    I would say this is one of "the most annoying gripes" I have with C#. The promotion from byte to int whenever doing math operations with two bytes is truly asinine. If I were to use the logic behind this, doing math with two ints should result in a long.

    Of course, this has nothing to do with XNA beyond the fact that XNA (and everyone else who uses bytes in C#) had to do additional work, both in design and construction of code, to work around a stupidity that exists in the chosen language. *sigh*

  • Nice change, should help out quite a few people new to the framework.

  • @Lucas: try this instead:

    List<Color> colorList = new List<Color>();
    colorList.Add(new Color(0, 0, 0));

    Color temp = colorList[0];
    temp.R = 255;
    colorList[0] = temp;

    [To avoid using copies you will need unsafe code, which won't work on WP7]

  • > The promotion from byte to int whenever doing math operations with two bytes is truly asinine. If I were to use the logic behind this, doing math with two ints should result in a long.

    I thought this at first too, but I've come to like the C# way.

    The difference is that int is the default numeric type (native machine word size, etc), so is used all over the place in situations where the data is nowhere near the limits of the data type. There are certainly times when you do need to check for integer overflows (and can use the checked keyword to do this), but for the vast majority of integer math operations, the input data is known to be constrained in such a way that overflow can never occur (as you are nowhere near to using all 32 bits), so checks or size promotion are not necessary.

    Byte and short, on the other hand, are only used for specific reasons, most often because you actually want their overflow behavior, or because you are packing data. The data stored in these types is almost always using the entire numeric range, so overflow can occur almost any time you do math on them.

    The nice thing about the C# semantics is they make you think about what you want to happen. If you forget about overflow and just do math on bytes, you get a compile error. Having to introduce a cast on the result makes it clear exactly where you want the rounding to occur (just the final result? Or do intermediates also need to be cast?) and forces you to consider whether a simple cast is appropriate, or whether there should be more careful range checks and clamping instead. The code also then documents these decisions for any future readers.

    There is also a practical implementation consideration: promoting 8 and 16 bit types to 32 is free, but promoting 32 to 64 would be expensive on 32 bit hardware.
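The cast-versus-checked distinction described above can be shown in a few lines (illustrative only):

```csharp
using System;

class ByteMathDemo
{
    static void Main()
    {
        byte a = 200, b = 100;

        // byte sum = a + b;          // compile error: a + b is an int

        // Casting the result documents exactly where the wrap-around happens:
        byte wrapped = (byte)(a + b); // 300 wraps to 44 in an unchecked cast
        Console.WriteLine(wrapped);   // 44

        // 'checked' turns silent wrap-around into an OverflowException:
        try
        {
            byte strict = checked((byte)(a + b));
            Console.WriteLine(strict);
        }
        catch (OverflowException)
        {
            Console.WriteLine("overflow detected");
        }
    }
}
```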

  • So, what you're saying is that the XNA team saw this asinine behavior, figured out that the logic behind it is correct (make sure we bounds check in the place that the user intends), and then worked around it anyway? The fact that your framework had to make changes due to user feedback that resulted not from a direct fault in your framework, but from a (mis-)feature of the language used, says a lot about that feature in C#. At the very least, it's not intuitive. Even understanding the logic, I still think it was the wrong decision.

    Personally, I think that the compiler shouldn't change the type without me explicitly telling it to do so, or making the change back on its own. If 150 + 150 - 60 doesn't cause an overflow on the (byte)300, it should at least convert back to a byte at the end. Forcing me to cast whenever I do an operation on two bytes is stupid. Tell me the logic behind SomeByte & SomeOtherByte returning an int, besides "consistency". There is absolutely no way in this reality that that can overflow or in any way justify a promotion to an int. Yet C# promotes it to an int.

    I think the only reason why C# does this is because the "int" is the "natural" integer on 32 bit processors, so any work done on something smaller must be converted to an int first. But rather than make the compiler smart, C# instead takes the lazy route of making us, the user, handle the conversion back from an int. Sure, there might be a plethora of business reasons why they took the lazy route, but it really looks like simple laziness from here outside of Microsoft.

    Sorry for the rant.
