Tom Miller's Blog

The Ramblings of Miller. These postings are provided "AS IS" with no warranties, and confer no rights.

Direct3D and the FPU..

  • Comments 13

I had an email this morning about Managed Direct3D 'breaking' the math functions in the CLR.  The person who wrote in discovered that this method:

public void AssertMath()
{
  double dMin = 0.54797677334988781;
  double dMax = 4.61816551621179;
  double dScale = 1/(dMax - dMin);
  double dNewMax = 1/dScale + dMin;
  System.Diagnostics.Debug.Assert(
    dMax == dNewMax);
}

Behaved differently depending on whether or not a Direct3D device had been created.  It worked before the device was created, and failed afterwards.  Naturally, he assumed this was a bug, and was concerned.  Since I've had to answer questions like this multiple times now, that pretty much assures the topic needs its own blog entry.

The short of it is that this is caused by the floating point unit (FPU).  When a Direct3D device is created, the runtime will change the FPU state to suit its needs: by default it switches the FPU to single precision, while the CLR's default is double precision.  This is done because single precision has better performance than double precision (naturally).

Now, the code above works before the device is created because the CLR is running in double precision.  Once you create a Direct3D device, the FPU is switched to single precision, and there are no longer enough digits of precision to accurately calculate the result above.  Thus the 'failure'.
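To see why roughly 7 significant decimal digits aren't enough here, you can repeat the same computation using C#'s float type, which is single precision regardless of the FPU mode. This is an illustrative sketch of the precision loss, not the FPU switch itself:

```csharp
using System;

class PrecisionDemo
{
    static void Main()
    {
        double dMin = 0.54797677334988781;
        double dMax = 4.61816551621179;

        // Double precision: ~15-16 significant decimal digits.
        // Per the post above, the round trip recovers dMax exactly.
        double dScale = 1 / (dMax - dMin);
        double dNewMax = 1 / dScale + dMin;
        Console.WriteLine(dMax == dNewMax);

        // Single precision: only ~7 significant decimal digits, so
        // the narrowing casts and the round trip typically do not
        // recover dMax exactly.
        float fMin = (float)dMin;
        float fMax = (float)dMax;
        float fScale = 1f / (fMax - fMin);
        float fNewMax = 1f / fScale + fMin;
        Console.WriteLine((double)fNewMax == dMax);
    }
}
```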

Luckily, you can avoid all of this by simply telling Direct3D not to mess with the FPU at all.  When creating the device, use the CreateFlags.FpuPreserve flag to keep the CLR's double precision, and your code will function as you expect.
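A minimal sketch of what that looks like in Managed DirectX (this assumes a reference to the Microsoft.DirectX.Direct3D assembly, and the form parameter and present parameters here are illustrative):

```csharp
using System.Windows.Forms;
using Microsoft.DirectX.Direct3D;

class FpuPreserveSample
{
    static Device CreateDevice(Form renderForm)
    {
        PresentParameters presentParams = new PresentParameters();
        presentParams.Windowed = true;
        presentParams.SwapEffect = SwapEffect.Discard;

        // FpuPreserve keeps the FPU in the CLR's default
        // double-precision mode, at some performance cost.
        return new Device(
            0,                              // default adapter
            DeviceType.Hardware,
            renderForm,
            CreateFlags.SoftwareVertexProcessing | CreateFlags.FpuPreserve,
            presentParams);
    }
}
```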

  • What are the performance and quality ramifications of using FpuPreserve when creating a D3D device?
  • Well, naturally, since you're using double precision rather than single precision, there will be a performance hit (memory usage, etc.).

    Not sure what you mean by quality, though.
  • I'd noticed that all the DirectX SDK works with singles and there is no single datatype in the CLR. My assumption is therefore that in the interop layer you have to convert every CLR double into a single before calling the native code.

    So when the FPU is in single precision mode, does that mean you can pass the doubles directly through to the native functions, or do you still have to do a conversion on all of them?

    Does this change when you are in double precision mode?
  • 'float' is the equivalent of a single in C# (System.Single is the CLS 'class' name).

    You can cast any double to float (or System.Single) before passing them to the MDX runtime.
  • Well, slap me with a big stick... Sometimes you even forget the obvious stuff. I had to go back and look at why I thought there was no single type, and it's because all of the System.Math stuff only accepts doubles, so I got into the habit of never using float; I was fed up doing all the casts whenever I wanted to use anything from System.Math.

    So now my question is almost unrelated to your original subject, but I will ask anyway. Given that we know the FPU is in single precision mode, is there any way the CLR can 'know' this and stop me having to cast everything to/from double just to use the System.Math library?

    Or will System.Math ever get overloads for single?

    Also discussed here http://www.gamedev.net/community/forums/topic.asp?topic_id=131054
  • On a similar topic, how does the MDX runtime switch the FPU to single precision? What's the method if you wanted to do this yourself in .NET?

    My research so far has only turned up native FPU intrinsics to do this. I would like to profile some of my FPU-intensive .NET apps in double and single precision. I also assume that using single precision floats explicitly will achieve similar results.
  • This should definitely be mentioned in the remarks section of the device constructor's documentation :-)
  • Isn't messing with the FPU of the machine serious side-effect-no-no juju? What if other processes started doing this themselves?
  • This has had very negative effects on our application, since it was relying on double precision math. Shouldn't the default for a new device be FpuPreserve!?!? Who knows what calculations you might be affecting on the system, especially in other processes, without it.
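The cast-heavy pattern one commenter describes above comes from the fact that System.Math only takes and returns doubles. A small sketch of what that looks like in practice:

```csharp
using System;

class MathCasts
{
    static void Main()
    {
        float angle = 1.25f;

        // System.Math has no float overloads: the float argument
        // widens to double implicitly, but the double result must
        // be explicitly cast back down to float.
        float s = (float)Math.Sin(angle);
        float r = (float)Math.Sqrt(angle * 0.5f);

        Console.WriteLine("{0} {1}", s, r);
    }
}
```

Only the return value needs a cast; float-to-double widening on the way in is implicit.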
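For the commenter asking how to switch the FPU precision yourself: native code does this with the CRT's _controlfp. A hypothetical P/Invoke sketch (the constants come from the MSVC float.h header; this affects only the x87 FPU in a 32-bit x86 process, and is shown for illustration, not as a recommendation):

```csharp
using System.Runtime.InteropServices;

class FpuControl
{
    // Constants from the MSVC <float.h> header.
    const uint _MCW_PC = 0x00030000;  // precision control mask
    const uint _PC_24  = 0x00020000;  // 24-bit (single) precision
    const uint _PC_53  = 0x00010000;  // 53-bit (double) precision

    [DllImport("msvcrt.dll")]
    static extern uint _controlfp(uint newControl, uint mask);

    public static void UseSinglePrecision()
    {
        // Switch the x87 FPU to single precision, as Direct3D
        // does by default when a device is created.
        _controlfp(_PC_24, _MCW_PC);
    }
}
```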