Direct3D and the FPU
I had an email this morning about Managed Direct3D 'breaking' the math functions in the CLR. The person who wrote in discovered that this method:
public void AssertMath()
{
    double dMin = 0.54797677334988781;
    double dMax = 4.61816551621179;
    double dScale = 1 / (dMax - dMin);
    double dNewMax = 1 / dScale + dMin;

    // Round-tripping through dScale reproduces dMax exactly while the
    // FPU runs in double precision, but not in single precision.
    System.Diagnostics.Debug.Assert(dMax == dNewMax);
}
behaved differently depending on whether or not a Direct3D device had been created. It worked before the device was created and failed afterward. Naturally, he assumed this was a bug and was concerned. Since I've had to answer questions like this multiple times now, that pretty much assures it needs its own blog entry.
The short of it is that this is caused by the floating point unit (FPU). When a Direct3D device is created, the runtime will change the FPU to suit its needs: by default it switches to single precision, while the default for the CLR is double precision. This is done because single precision has better performance than double precision (naturally).
Now, the code above works before the device is created because the CLR is running in double precision. Then you create a Direct3D device, the FPU is switched to single precision, and there are no longer enough digits of precision to calculate the code above accurately. Thus the 'failure'.
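You can get a feel for the digits you're losing without Direct3D at all by forcing the intermediates down to single precision yourself. This isn't the same mechanism (the FPU mode change affects double variables too); it's just a sketch of the idea:

public void AssertMathSingle()
{
    // Only about 7 significant decimal digits survive in a float.
    float fMin = 0.54797677334988781f;
    float fMax = 4.61816551621179f;
    float fScale = 1f / (fMax - fMin);
    float fNewMax = 1f / fScale + fMin;

    // With single precision intermediates, the round trip typically
    // no longer reproduces fMax exactly, so this assert can fail.
    System.Diagnostics.Debug.Assert(fMax == fNewMax);
}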
Luckily, you can avoid all of this by simply telling Direct3D not to mess with the FPU at all. When creating the device, use the CreateFlags.FpuPreserve flag to keep the CLR's double precision and have your code function as you expect.
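For example, device creation might look something like this (a minimal sketch that assumes a windowed device and a form named renderForm):

using Microsoft.DirectX.Direct3D;

PresentParameters presentParams = new PresentParameters();
presentParams.Windowed = true;
presentParams.SwapEffect = SwapEffect.Discard;

// FpuPreserve keeps the FPU in the CLR's default double precision
// instead of letting Direct3D switch it to single precision.
Device device = new Device(0, DeviceType.Hardware, renderForm,
    CreateFlags.SoftwareVertexProcessing | CreateFlags.FpuPreserve,
    presentParams);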
Comments
Anonymous
June 01, 2004
What are the performance and quality ramifications of using FpuPreserve when creating a D3D device?
Anonymous
June 01, 2004
Well, naturally, since you're using double precision rather than single precision, there will be a performance hit (memory usage, etc.).
Not sure what you mean by quality though.
Anonymous
June 01, 2004
I'd noticed that all of the DirectX SDK works with singles, and there is no single datatype in the CLR. My assumption is therefore that in the interop layer you have to convert every CLR double into a single before calling the native code.
So when the FPU is in single precision mode, does that mean you can pass the doubles directly through to the native functions, or do you still have to convert all of them?
Does this change when you are in double precision mode?
Anonymous
June 01, 2004
'float' is the equivalent of a single in C# (System.Single is the CLS 'class' name).
You can cast any double to float (or System.Single) before passing it to the MDX runtime.
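For example (a trivial sketch):

double d = Math.Sqrt(2.0);  // System.Math works in doubles
float f = (float)d;         // explicit narrowing cast to System.Single

Anonymous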
June 01, 2004
Well, slap me with a big stick... sometimes you even forget the obvious stuff. I had to go back and look at why I thought there was no single type, and it's because all of the System.Math stuff only accepts doubles. I got into the habit of never using float because I was fed up doing all the casts whenever I wanted to use anything from System.Math.
So now my question is almost unrelated to your original subject, but I will ask anyway. Given that we know the FPU is in single precision mode, is there any way the CLR can 'know' this and stop me having to cast everything to/from double just to use the System.Math library?
Or will System.Math ever get overloads for single?
Also discussed here: http://www.gamedev.net/community/forums/topic.asp?topic_id=131054
Anonymous
June 01, 2004
On a similar topic, how does the MDX runtime switch the FPU to single precision? What's the method if you wanted to do this yourself in .NET?
My research so far has only turned up native FPU intrinsics to do this. I would like to profile some of my FPU-intensive .NET apps in double and single precision. I also assume that using single precision floats explicitly will achieve similar results.
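(For what it's worth, one way to flip the precision bits from managed code is a P/Invoke into the CRT's _controlfp. The sketch below assumes an x86 process and msvcrt.dll, with the constants copied from the native float.h; it only affects the calling thread's x87 control word.)

using System.Runtime.InteropServices;

static class FpuControl
{
    // Precision control constants from the x86 CRT's float.h.
    const uint _MCW_PC = 0x00030000; // precision control mask
    const uint _PC_24  = 0x00020000; // 24-bit mantissa (single precision)
    const uint _PC_53  = 0x00010000; // 53-bit mantissa (double precision)

    [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern uint _controlfp(uint newControl, uint mask);

    // Drop the FPU to single precision, as Direct3D does by default.
    public static void SetSinglePrecision() { _controlfp(_PC_24, _MCW_PC); }

    // Restore the CLR's usual double precision.
    public static void SetDoublePrecision() { _controlfp(_PC_53, _MCW_PC); }
}

Anonymous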
June 03, 2004
This should definitely be mentioned in the remarks section of the device constructor's documentation :-)
Anonymous
June 10, 2004
Isn't messing with the FPU of the machine serious side-effect-no-no juju? What if other processes started doing this themselves?
Anonymous
August 02, 2004
This has had very negative effects on our application, since our application was relying on double precision math. Shouldn't the default for a new device be FpuPreserve?! Who knows what calculations you might be affecting on the system, especially in other processes, without it.