Answer to Question: Converting RenderMonkey to FX
I received a post from Brandon Furtwangler asking a few general questions about RenderMonkey and converting RM files to FX files. Since it was a long post, I thought I'd post my answer as a blog entry...
I've been learning about shaders and HLSL, but until now I've never tried to write a demo outside of RenderMonkey. I recently wrote the Parallax shader by modifying a sample shader, but... Now that I'm trying to convert it to .fx format, I realize that I have an incomplete understanding of what I'm doing, and I was hoping you could clear some things up.
For one thing, it appears that RenderMonkey uses transposed matrices with respect to the matrices that D3DX uses, and it seems as if RenderMonkey moves the camera around the model, rather than ever changing the World matrix. This is all fine, because I think I can compensate easily enough, but I can't seem to get tangent space lighting working.
I myself have not verified RenderMonkey's output to see whether its matrices are row- or column-major. The reality is that DirectX and OpenGL use different conventions in that regard, and RenderMonkey supports both, so it had to pick some native format. However, I would assume that if matrices are exported in the wrong form in the FX file, it is a bug they should address. As for the camera movement: yes, I have noticed that RM does not manage a "world" transform for objects; it simply moves the "view" camera around the object. If you wish to move an object by hand, you will likely have to add a matrix or vector variable yourself and apply it to your model in the shader.
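As a rough illustration, here is a minimal sketch of what that could look like (assuming your workspace exposes a view-projection variable named matViewProjection; the matWorld variable is something you would add by hand, not anything RenderMonkey generates for you):

float4x4 matViewProjection; // bound to RM's view-projection
float4x4 matWorld;          // added by hand to move the object around

float4 vs_main(float4 inPosition : POSITION) : POSITION
{
    // Apply your own world transform first, then the view-projection.
    float4 worldPos = mul(inPosition, matWorld);
    return mul(worldPos, matViewProjection);
}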
For one thing, I'm not sure what exactly a normal map (like the one in the paper I reference in my Parallax post) encodes. I assumed it encodes the normal, already in tangent space, but I can't tell for sure, because I can't think of a way to verify this. Any ideas? Is this common, or would I generally have to transform it in the pixel shader (sounds like a bad/slow idea)?
Yes, normal maps are generally encoded as the normal in tangent space. Since a texture can be arbitrarily mapped to any shape, you need a coordinate system for normal maps which is independent of the object itself. So the best solution is to treat each pixel as a little "patch" of its own and create a coordinate system relative to this patch (i.e., the tangent space).
Also, I'm not totally clear on how to get my light direction vector in tangent space. I initially thought it would work like:
<from vertex shader>
float3 N = inNormal;
float3 T = inTangent;
float3 B = cross(N,T);
float3x3 tangentmat = float3x3(T,B,N);
and then tangentmat would transform object-to-tangent or tangent-to-object depending on the order of the mat and vec in the mul instruction. In this case, I would transform my light from world space to object space, and then into tangent space. Don't be afraid to tell me I have it all mixed up, because I'm sure I do.
At a quick glance, your approach seems to be valid. Assuming you have your light vector in object space, converting it to tangent space should be as follows:
float3x3 TangentSpace;
TangentSpace[0] = inTangent;
TangentSpace[1] = inBinormal; // you can use the cross product (e.g. cross(inNormal, inTangent)) to find this.
TangentSpace[2] = inNormal;
TanSpaceLightDir = mul(TangentSpace,LightDir);
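Putting the pieces together, here is a minimal vertex shader sketch (the input semantics, the variable names, and the assumption that the light direction is already supplied in object space are mine, so adapt them to your own declaration):

float4x4 matWorldViewProjection;
float3   LightDirObj; // light direction already in object space

struct VS_OUTPUT
{
    float4 Position         : POSITION;
    float2 TexCoord         : TEXCOORD0;
    float3 TanSpaceLightDir : TEXCOORD1;
};

VS_OUTPUT vs_main(float4 inPosition : POSITION,
                  float2 inTexCoord : TEXCOORD0,
                  float3 inNormal   : NORMAL,
                  float3 inTangent  : TANGENT)
{
    VS_OUTPUT Out;
    Out.Position = mul(inPosition, matWorldViewProjection);
    Out.TexCoord = inTexCoord;

    // Object-to-tangent-space matrix: rows are tangent, binormal, normal.
    float3 binormal = cross(inNormal, inTangent);
    float3x3 TangentSpace = float3x3(inTangent, binormal, inNormal);

    // Rotate the object-space light direction into tangent space.
    Out.TanSpaceLightDir = mul(TangentSpace, LightDirObj);
    return Out;
}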
Then to use it in your pixel shader, all you need to do is sample your normal map; since it is already in tangent space and represents the normal for this "fragment", you can use the result directly as the normal in your per-pixel lighting calculations. Keep in mind that the output of the texture fetch for the normal map will be in the [0..1] range, so you will need to scale and bias it into the [-1..1] range.
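A matching pixel shader sketch could look like this (again, the sampler and variable names are just placeholders):

sampler NormalMap;

float4 ps_main(float2 TexCoord         : TEXCOORD0,
               float3 TanSpaceLightDir : TEXCOORD1) : COLOR
{
    // The normal map stores components in [0..1]; scale and bias them
    // back into the [-1..1] range.
    float3 normal = tex2D(NormalMap, TexCoord).rgb * 2.0f - 1.0f;

    // Standard diffuse term, everything now in tangent space.
    float3 L = normalize(TanSpaceLightDir);
    float diffuse = saturate(dot(normalize(normal), L));

    return float4(diffuse, diffuse, diffuse, 1.0f);
}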
One problem I see is that inNormal is in object space, and inTangent is generated by Mesh.ComputeTangent (so I don't know what space they are in). Do you have any ideas of what I could try? Do I need to first transform the inTangent vec into Object space? Or maybe transform them both into world or view space? How is this stuff usually done? I'm pretty good at putting together what I need from samples, but I think it's time I actually understand it better.
Both the tangent and binormal generated by the D3DXMesh class should be in object space (the module doesn't have the information to define another space). With this, you can use your normals to generate the tangent space matrix as mentioned above. The best way to visualize this is to imagine you are zooming in really close on the surface of your object. The tangent and binormal represent two vectors which are tangent to the surface at this particular point (and are generally aligned along the U and V texture coordinates to make them easier to compute). The matrix that you generate allows you to take any vector (in object space) and convert it into this local tangent space. From there, the normal map gives you the normal of the object in tangent space, meaning that if you convert your other vectors (like the light) into tangent space, the rest of your per-pixel calculations should be exactly as they used to be.
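If your light starts out in world space instead, a sketch of the extra step could look like this (matWorldInverse and LightDirWorld are assumed to be set by the application, for example with D3DXMatrixInverse; they are not built-in variables):

float4x4 matWorldInverse; // inverse of the object's world matrix
float3   LightDirWorld;   // light direction in world space

float3 WorldLightToTangentSpace(float3 inNormal, float3 inTangent)
{
    // World -> object: rotate the direction by the inverse world matrix
    // (w = 0 so the translation part is ignored).
    float3 lightDirObj = mul(float4(LightDirWorld, 0.0f), matWorldInverse).xyz;

    // Object -> tangent: rows are tangent, binormal, normal.
    float3 binormal = cross(inNormal, inTangent);
    float3x3 TangentSpace = float3x3(inTangent, binormal, inNormal);
    return mul(TangentSpace, lightDirObj);
}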
Hope I helped. :)
Comments
- Anonymous
September 02, 2004
Here is my .fx file so far; maybe there is some glaring mistake.
http://www.brandonfurtwangler.com/parallax/sexydemo.fx
- Anonymous
September 02, 2004
Brandon,
You are correct concerning the tangent space matrix; I swapped the binormal and normal by mistake. My bad... Looking at the .fx file, I don't notice anything obviously wrong. Maybe you should get regular bump mapping working first and add the parallax once the bumps work.
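Once the plain bump mapping works, the parallax part is essentially just a small texture coordinate offset driven by a height map and the tangent-space eye vector. A rough sketch (the sampler name and the scale/bias constants here are assumptions, not values taken from your .fx file):

sampler HeightMap;

float2 ParallaxTexCoord(float2 texCoord, float3 tanSpaceEyeDir)
{
    // Offset the texture coordinates along the eye direction, scaled by
    // the height sampled at this texel.
    float  height = tex2D(HeightMap, texCoord).r;
    float  offset = height * 0.04f - 0.02f; // typical scale/bias values
    float3 eye    = normalize(tanSpaceEyeDir);
    return texCoord + eye.xy * offset;
}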
As for .X files with tangents/binormals, I don't think I have any which are usable. Anyway, I think RenderMonkey will regenerate them even if they are in the file, so it might be moot.
Finally, to debug shaders: if you have Visual Studio .NET, there is an extension which will allow you to step through shaders as they are executed. However, I think this will only work if your app is set for software vertex processing. If you don't have VS.NET, then the way to debug shaders is pretty much by using the shader itself (i.e., outputting intermediate results as colors so you can see if they seem to be working).
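For instance, a tiny sketch of that output-as-color trick, remapping a [-1..1] vector into something you can eyeball on screen (plug in whichever intermediate value you want to inspect):

float4 ps_debug(float3 TanSpaceLightDir : TEXCOORD1) : COLOR
{
    // Remap [-1..1] into the visible [0..1] range and output it as a color.
    float3 v = normalize(TanSpaceLightDir);
    return float4(v * 0.5f + 0.5f, 1.0f);
}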
Personally, I can't wait for the new DirectX under Longhorn. At that point, if the PIX tool can behave closely to what it can currently do on Xbox, debugging shaders should be so easy (it will essentially capture all the intermediate shader results for all vertices and pixels, and you can browse through them).
- Anonymous
September 04, 2004
Thanks for clearing all that up for me. I think I'm starting to get the hang of it now. My demo is finally finished!
http://www.brandonfurtwangler.com/index.php?p=34
It comes with source and requires the .NET Framework, DX9c, and PS 2.0 hardware.
Thanks again, couldn't have done it without you.