How to code my shaders for Advanced Lighting
by Edward Rotberg · in Torque 3D Professional · 01/28/2010 (2:33 pm) · 22 replies
Hi,
Let me start out by stating, for those of you that have not seen my other posts, that for the time being my project is only concerned with high-end PCs. This may change in the future, but for now this holds true.
I have been working on a custom renderDelegate which uses a completely custom set of shaders. My shaders are doing a lot of custom code that obviously is not supported anywhere by Torque, but in terms of lighting, there is really nothing special going on. The shaders are doing simple point light calculations supporting ambient, specular, and bump-mapping for the time being. I would like to take advantage of Torque's deferred lighting/advanced lighting (AL) if there is any way to accomplish this.
I know that Torque dynamically builds vertex/pixel shaders based upon material properties and lighting options. I am also familiar with how deferred lighting is accomplished by rendering various pixel data to multiple, parallel Render Targets (RTs). I know that if I had the specifications of the formats of these RTs, and of how to access them independently, I could modify my shaders to render to them appropriately. I am aware that Torque dynamically changes some of the shaders based upon the settings and hardware capabilities (for example, using the Console method addGlobalShaderMacro()), and I would have to find ways to provide the same capabilities within my own shader.
Ideally I could somehow leverage Torque's dynamic shader construction with my shader by coding my shader to conform with Torque's shader paradigm and having Torque dynamically add appropriate pieces to my vertex and pixel shaders as needed. I imagine that this might be difficult as we definitely do not use Torque materials for our renderDelegate.
However, there is very little documentation on this, and it is beyond tedious to follow the code that accomplishes all of this in order to simply determine which way to go. So I am hoping that someone from this august group can provide links, pointers to documentation or code, a shoulder to cry on, or anything that might help.
Thanks in advance for any help that you can offer.
= Ed =
#2
01/28/2010 (5:45 pm)
Thanks for getting back to me Pat.

I am not at liberty to disclose too much of what we are doing at this point. Hopefully that situation will change later.
I was afraid that you were going to tell me that this was a two-pass operation. Our vertex shader is computationally expensive. I can tell you that part of what it does involves instancing a large number of objects, which is definitely not something that we want to be doing multiple passes on. This is why I was really hoping to hear that we could do this all in one pass. I was also hoping that by conforming to Torque's AL system we could get shadowing for "free". And finally, I was hoping that by doing this we could render something that would look decent under HDR, which my simple lighting does not.
From what you are saying, it sounds like the "forward pass" shader might not need to deal with much of a vertex shader at all. It also sounds like the lighting pass is just creating a light map of sorts and would not need to run anything for our objects? If this is true, and we could use a very simple vertex shader for the "forward pass" that doesn't need to do any instancing, then maybe we can do this after all. Obviously, I doubt that I fully understand your multi-pass technique, so this last paragraph is pretty much just wishful thinking.
I don't know what other information I can tell you about our shaders that would help. Does any of my conjecture make sense?
= Ed =
#3
01/28/2010 (6:32 pm)
The "prepass" collects the depth and normals from anything that is to receive shadows and lighting from the deferred renderer. So if your custom objects need to receive light and shadows, you need a shader which generates the proper per-pixel depth and normal. I'm not sure of all you're doing, but if the expensive part is just the shading, then you should be OK. You can also compromise some by using only vertex normals and not pixel normals.
Yes, you do get the draw call hit of rendering everything at least twice. I'm still not happy with that, as it's a frequent source of CPU bottlenecks, but it's a huge fillrate savings over fully deferred rendering.
We are working on hardware instancing to reduce the draw call overhead in the near future which should help people with that case.
#4
01/28/2010 (6:57 pm)
Thanks for weighing in Tom.

Quote:
The "prepass" collects the depth and normals from anything that is to recieve shadows and lighting from the deferred renderer.
I assume that you are referring to per-pixel data here. This can all be supplied easily from my current shader, assuming I know what RT to output them to and what format they need to be in. I would think that at some pass I would need to also output diffuse RGB information per pixel, and perhaps specular/emissive/whatever else is supported, if desired. My problem is that if this other data needs to come from a second pass, that would require another run of my vertex shader, which is the expensive part of the operation, and it could easily be calculated during the "prepass", thus saving me that time.
I'm just not clear on exactly what is needed at which pass and what formats the render targets take. From what Pat says, there is a "forward pass" that reads the light map and generates diffuse colors per-pixel. Since I can see no way to generate that information for the "forward pass" without a second run of our expensive vertex shader, I don't see how I can fit into this scheme.
While I'm sure that your hardware instancing will address the needs of 99% of your customers, it seems unlikely that it will do all of what we are doing in our shader. Even if it did, it would still become very expensive for our case to have to execute that vertex shader twice per frame.

Please let me know if I am misunderstanding anything at this point.
= Ed =
#5
01/28/2010 (9:16 pm)
I think I am picking up what you are putting down, and no worries on not being able to share details.

How dense are the instanced objects? Since you are targeting high-end, you could trade space for time and basically do an MRT solution for just those objects. There is code which does something similar to this right now for the "Advanced Lightmap Support". Basically, the "pre-pass" draws to MRTs in this case, and draws depth/normal as usual, but also draws the lightmaps into the light-buffer at the same time. You could basically set up your geometry to draw once to MRT during the pre-pass: output depth/normal, and then also output the diffuse color to a buffer. The lighting step runs as normal, and then instead of doing the "forward pass", which requires the re-render of the geometry, just do a combine step with the lighting results and the diffuse output buffer.
How does that solution sound?
#6
01/29/2010 (2:17 am)
Pat,

First of all, thanks for the awesome responses from both you and Tom!
The instanced objects are moderately dense - depending, of course, on the camera view. Your solution sounds just like what I was hoping for! Where can I find format information for the MRTs? Is there an example of this that you can point me to, especially an example of the "forward pass" combine step of the lighting results with the diffuse output buffer? I'm pretty certain that I understand what needs to be done there, but an example is always a time saver.
Also, can I get Torque to set up the extra buffer that I will need, or should I plan to do that myself?
Thanks again for all your help. I'm dying to be able to show this stuff to you guys. I think you'll like it.
= Ed =
#7
01/29/2010 (5:02 pm)
Edward,

No problem. It's nice to have interesting questions come up. Your questions always seem to go right for the guts of the rendering code, and that's one of the parts of the code I like best ;)
The best example is the MRT lightmap code, but it won't be as complicated as that implementation. If you search in the engine for "MRTLightmap" it will turn up hits in AdvancedLightBinManager, RenderPrePassMgr and HLSL shader features.
I will get to a full reply on this later; I need to get some things done first today.

The overall plan is that your objects will render after the pre-pass bin finishes. Your objects will use your custom shader, and they will write out data to two render targets, #prepass and #your_target_name. They'll use the 'prepassCondition' method to properly format depth/normal to write to #prepass, and then you will sample your diffuse texture and write out that diffuse color to #your_target_name.

The engine will do its thing like normal, and after the forward pass renders, you will use a PostEffect to take the contents of #your_target_name and #lightinfo and combine them into a lit, textured pixel representing your custom geometry solution.
Check out game/core/scripts/client/renderManager.cs for a rough idea of the render-order and how to add render-bins (at run-time). The pre-pass and lighting passes do not show up in that script file because they are added dynamically if the engine is using Advanced Lighting.
I will try and get back to this soon with a more detailed description.
#8
01/29/2010 (5:53 pm)
Sounds good, Pat. I've started doing the searches you indicate. One more quick question - can I encode a Specular amount somewhere, maybe in the Diffuse .w field or Normal .w, to get a specular component?

Thanks again,
= Ed =
#9
01/29/2010 (6:20 pm)
Ah, specular. Yes, you can do that. I would encode the specular value in the Diffuse.w field when you write out the diffuse color buffer, but you will need to mess with things slightly in that Post-Process which combines #lightinfo and your diffuse target. Take a look at the DeferredRTLightingFeatHLSL::processPix method and the DeferredPixelSpecularHLSL::processPix method. Basically, pre-pass lighting does not have an exact specular value during the light step. Instead, it calculates specular with a fixed exponent during the lighting step, and then during the forward pass (in your case, the combine post-effect) it reconstructs a specular approximation using the actual per-material specular exponent. It does this by using the identity: (a^m)^n = a^(m*n)
This approximation gets more inaccurate the more lights get layered on top of the same pixel (because the identity is no longer true once values get summed). The reason that this approximation is even needed is because the pre-pass stores only depth and normal, whereas a traditional deferred renderer will store many buffers of information.
#10
01/29/2010 (6:34 pm)
This all looks do-able, Pat. Of course this begets many more questions, and I know you guys are under the gun at the moment. These questions (and the inevitable ones to follow) can certainly wait until you have a breather.

Where can I find the 'prepassCondition' method you refer to? I can see it referenced in various shader snippets, but I haven't been able to locate the actual method to see what it is doing. Also, how do I get access to the #prepass RT? Is it oC0? And again, should I create my own RT for the Diffuse colors, or get Torque to manage that for me?
I'm sure I will have more questions as I get on with this, but as I said, I understand your current deadlines.
Thanks again,
= Ed =
#11
01/31/2010 (4:06 pm)
So, just a tiny bit of background to answer that question. The Conditioner class is a specialized shader feature that is used by both the ShaderGen system and custom shaders. A dynamic file called 'autogenConditioners.h' gets generated at runtime (in memory or as an actual file; on consoles it is generated in memory), and it contains different methods depending on the needs of each buffer. A conditioner is associated with a named buffer, such as "prepass" or "lightinfo". Whenever a shader uses a buffer which has a conditioner feature attached, the shader system ensures that the needed methods are exposed to that shader. If you open up 'autogenConditioners.h', you will see a bunch of hashed function names. The shader system will programmatically #define the methods that each shader needs based on the encoding and format the buffer uses. The best example to look at, and arguably the most important conditioner, is GBufferConditionerHLSL. This is from the constructor:
// Figure out how we should store the normal data. These are the defaults.
mCanWriteNegativeValues = false;
mNormalStorageType = CartesianXYZ;
// Note: We clear to a depth 1 (the w component) so
// that the unrendered parts of the scene end up
// farthest from the camera.
const NormalStorage &twoCmpNrmStorageType = ( nrmSpace == WorldSpace ? Spherical : LambertAzimuthal );
switch(bufferFormat)
{
case GFXFormatR8G8B8A8:
mNormalStorageType = twoCmpNrmStorageType;
mBitsPerChannel = 8;
break;
case GFXFormatR16G16B16A16F:
// Floating point buffers don't need to encode negative values
mCanWriteNegativeValues = true;
mNormalStorageType = twoCmpNrmStorageType;
mBitsPerChannel = 16;
break;
// Store a 32bit depth with a spherical normal in the
// integer 16 format. This gives us perfect depth
// precision and high quality normals within a 64bit
// buffer format.
case GFXFormatR16G16B16A16:
mNormalStorageType = twoCmpNrmStorageType;
mBitsPerChannel = 16;
break;
case GFXFormatR32G32B32A32F:
mCanWriteNegativeValues = true;
mNormalStorageType = CartesianXYZ;
mBitsPerChannel = 32;
break;
default:
AssertFatal(false, "Unsupported G-Buffer format");
}

So any time that a material uses '#prepass' as a sampler source or output target, the system uses a #define to make available a 'prepassCondition' and 'prepassUncondition'. You must #include "shadergen:/autogenConditioners.h" in your shader, as well. So that is why conditioners exist and how they get generated and linked to functions in shaders.
To gain access to #prepass as a render target, all you need to do is assign 'prepass' as the target. For example, this is the Point Light material:
// Point Light Material
new ShaderData( AL_PointLightShader )
{
DXVertexShaderFile = "shaders/common/lighting/advanced/convexGeometryV.hlsl";
DXPixelShaderFile = "shaders/common/lighting/advanced/pointLightP.hlsl";
OGLVertexShaderFile = "shaders/common/lighting/advanced/gl/convexGeometryV.glsl";
OGLPixelShaderFile = "shaders/common/lighting/advanced/gl/pointLightP.glsl";
pixVersion = 3.0;
};
new CustomMaterial( AL_PointLightMaterial )
{
shader = AL_PointLightShader;
stateBlock = AL_ConvexLightState;
sampler["prePassBuffer"] = "#prepass";
sampler["shadowMap"] = "$dynamiclight";
sampler["cookieTex"] = "$dynamiclightmask";
target = "lightinfo";
pixVersion = 3.0;
};

(From: core/scripts/client/lighting/advanced/shaders.cs)

This material uses the light buffer as the target, and the prepass buffer as an input. This means that the shader system will make available to it 'lightinfoCondition' and 'prepassUncondition'. These two functions are used inside the shaders, and #include "shadergen:/autogenConditioners.h" takes care of the rest.
#12
01/31/2010 (4:06 pm)
For your case, what you want is something like this, for the render pass of your objects:
new CustomMaterial( Eds_Awesome_Material )
{
shader = Eds_Awesome_Shader;
stateBlock = Eds_Awesome_Stateblock;
// Your samplers...
target[0] = "prepass";
target[1] = "awesomeDiffuseBuffer";
pixVersion = 3.0;
};

NOTE: This syntax does not exist yet. Right now, there is only one target (bound to oC0) for CustomMaterials. If you look at the renderPrePassMgr.cpp file, it has special-case code for binding a secondary render target to oC1. The solution for your application is to modify CustomMaterial (and the associated ProcessedCustomMaterial) to take multiple targets so that you can write out depth/normal to oC0 as well as your diffuse target.

Then you are going to need something to combine the lighting results with your diffuse target. The PostEffect will look something like this, probably:
singleton PostEffect( Eds_Awesome_Recombination )
{
shader = Eds_Awesome_Recombine_Shader;
stateBlock = PFX_DefaultStateBlock;
texture[0] = "#lightinfo";
texture[1] = "#awesomeDiffuseBuffer";
targetClear = "PFXTargetClear_OnDraw";
targetClearColor = "0 0 0 0";
target = "$backBuffer";
};

It's worth noting that you don't have to do anything special at all to create a named buffer to use as an input/output in the material system; all you have to do is use the name, and the system does the rest. Take a look at 'core/scripts/client/postFx/edgeAA.cs'. The named buffer "#edge" is created dynamically by the material system. All it needed to do was use the name "#edge", and the buffer was created. So simply by using "#awesomeDiffuseBuffer" in this example, that target gets created and made available to the entire render system: post effects, materials, etc.

I think this is quite possibly the most complete discussion of this process ever written down. Not many people get this close to the renderer. It's kind of unfortunate, because it's really reasonable code to work with.
#13
01/31/2010 (4:27 pm)
Pat,

This has a lot of great stuff to digest. Right now our code does not use the shader/shaderGen system at all, and likewise does not use the Material system for this special shader. That said, I'm anxious to more closely couple our code to the underlying Torque subsystems, and thereby get a leg up if/when we branch out from PC-only.
I'm headed out of town for a couple of days, but I'll dig in to this as soon as I get back, assuming that nothing more pressing is thrown at me in the meantime. ;)
Thanks again for this really detailed tour through the guts of the shader and material system. No doubt as I actually start to implement some of this, other questions will arise, but this should keep me going for quite some time!
My very deepest thanks again!
= Ed =
#14
01/31/2010 (10:02 pm)
Are you sure your code doesn't use CustomMaterial? How are you feeding shader code to DirectX? That is still part of the material system, and can still work with named buffers.
#15
01/31/2010 (10:06 pm)
Pat,

We are calling D3DXCreateEffectFromFileA(). Works every time! ;) If I can manage to do something equivalent with Torque, I will happily make the change, especially if it makes it easier to use some of the shader functionality that you guys have written to support the AL system.
= Ed =
#16
02/01/2010 (12:29 pm)
It's CustomMaterial. It's a direct interface with HLSL/GLSL code + the Torque texture hooks.
Look at the code 4 posts up for 'AL_PointLightMaterial'. It's an HLSL shader, a state block, and information about how to manage the render-target and sampler inputs. That is what you want.
#17
02/02/2010 (8:46 pm)
Pat,

I'm back in town now. Thanks for the information, but I'm not sure I'm decoding your reference properly. You state:

Quote: "Look at the code 4 posts up for 'AL_PointLightMaterial'"

I've done a search for 'AL_PointLightMaterial' and traced the code, and I have not found what you are referring to - just a string table.
If you are referring to posts on the forums, I've done that search as well and turned up nothing. A general search of the site turns up nothing as well. I'm anxious to follow your references, but I'm just not sure where to look.
= Ed =
#18
02/03/2010 (1:28 am)
Edward, the code block that I pasted above is what I am referring to. It is found in the scripts, not in the engine C++ code. The site-wide search is very poor, unfortunately :(

This is the code block:
// Point Light Material
new ShaderData( AL_PointLightShader )
{
DXVertexShaderFile = "shaders/common/lighting/advanced/convexGeometryV.hlsl";
DXPixelShaderFile = "shaders/common/lighting/advanced/pointLightP.hlsl";
OGLVertexShaderFile = "shaders/common/lighting/advanced/gl/convexGeometryV.glsl";
OGLPixelShaderFile = "shaders/common/lighting/advanced/gl/pointLightP.glsl";
pixVersion = 3.0;
};
new CustomMaterial( AL_PointLightMaterial )
{
shader = AL_PointLightShader;
stateBlock = AL_ConvexLightState;
sampler["prePassBuffer"] = "#prepass";
sampler["shadowMap"] = "$dynamiclight";
sampler["cookieTex"] = "$dynamiclightmask";
target = "lightinfo";
pixVersion = 3.0;
};

(From core/scripts/client/lighting/advanced/shaders.cs)

The first bit defines the shader to use, for both HLSL and GLSL. This is why the CustomMaterial class is what you want to use for your shaders. The definition is a straight-up wiring-in of custom HLSL/GLSL shaders into Torque.
The second is the CustomMaterial definition. It references the shader that was defined above it, and it also references a state block (which I did not include because it is lengthy, but it can be found in that script file). It then defines samplers. The format for these is:
sampler["nameInHLSL"] = "file_or_dynamic_texture";

In the point light shader (the code above), it samples from #prepass, which is a named target (designated by '#<name>') and is valid as a source at all times, plus two dynamic sources (designated by '$<name>') which are only valid under certain circumstances. What I mean by this is that you can wire #prepass into a post-process effect with no issues, but $dynamiclight will not be valid there.
The last, and possibly most important, bit is the "target" parameter. This material outputs to the lightinfo target. Your shader will need to output to the 'prepass' target, as well as to a diffuse color target on oC1, which will take the texture samples for your custom geometry solution.
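To make that concrete, here is a rough sketch of a ShaderData/CustomMaterial pair for a custom geometry pass that writes into the prepass target, following the format above. Every name and file path here is hypothetical, and the actual sampler set depends on your shader:

new ShaderData( MyGeometryShader )
{
   // hypothetical shader files for a custom depth/normal pass
   DXVertexShaderFile = "shaders/custom/myGeometryV.hlsl";
   DXPixelShaderFile = "shaders/custom/myGeometryP.hlsl";
   pixVersion = 3.0;
};

new CustomMaterial( MyGeometryMaterial )
{
   shader = MyGeometryShader;
   // plain file names are static textures; '#<name>' is a named render
   // target, and '$<name>' is a dynamic source
   sampler["diffuseMap"] = "art/textures/myDiffuse.png";
   sampler["bumpMap"] = "art/textures/myNormal.png";
   // write the depth/normal data out to the prepass target
   target = "prepass";
   pixVersion = 3.0;
};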
#19
02/03/2010 (1:11 pm)
Pat,

Thanks again for the clarification. I'm starting to get a feel for what you are proposing. At present I am not using Torque to load any "Material". I imagine that when I switch over to this approach, we would have to assign a new "material" to our object in the World Editor for things to hook up correctly. Is this correct, or can I manage this from our custom RenderDelegate?
Also, we use multiple textures - which raises another issue. I do use Torque's ResourceManager to load our multiple meshes, and I parse the mesh file for our geometry. But we load our textures (multiple diffuse/normal texture pairs used at different times during our rendering) using the DirectX D3DXCreateTextureFromFileA(), getting the file names we need via some rules that we established for where to store these textures and what to name them.
That said, we currently use 2 samplers/texture-buffers during our rendering for traditional uses, one for diffuse color and one for a normal map. However, we also use other samplers/texture-buffers for non-traditional uses. Currently we define our Texture2D buffers in the shader (HLSL) and use GetParameterBySemantic() to get handles to these and change them as needed. The sampler definitions are also in the shader file and are very straightforward. Using the handle to the texture buffer we can change the texture buffers as needed. I'm not sure how I would handle that under the Torque scheme as it appears that the actual texture buffer name is hidden from me and I would not be able to access it via the DirectX Semantic call. I assume that Torque has another method for gaining access to the actual texture buffer, but I don't know what that is. So if I want to set a particular texture into one of these named samplers dynamically, how would I go about doing that from our rendering code using your sampler naming scheme? Can I use such a method for setting arbitrary data into a texture buffer as opposed to data from an actual texture?
This raises yet another question concerning shader constants. Does Torque support a scheme for dynamically changing shader constants from the C++ code? For example we do our own LOD management, and need to change shaders based upon LOD level. We also have other constants that need to change during our render pass.
Finally, we have only addressed HLSL for this project. Would I need to write GLSL equivalents for our shaders as well? Can I just not name them in the ShaderData declaration, or name them to bogus files (FWIW, I am not a huge fan of OpenGL to begin with, but we can discuss that another time ;) ).
I'm sorry to keep coming up with more and more questions, but, as usual, one answer ends up leading to more problems.
Thanks again for all of your help.
= Ed =
#20
02/04/2010 (9:40 pm)
Ed,

You should be able to take care of managing the material from your custom render delegate. Check out Engine/source/lighting/advanced/advancedLightBinManager.cpp
I made a helper struct called 'LightMaterialInfo' for this. The struct itself is not needed; basically, what it does is hold a collection of 'MaterialParameterHandle' objects and manage setting those parameters. Here are some of the important bits from it, and what they do:
// From .h
MaterialParameterHandle *lightPosition;

// Assignment
lightPosition = matInstance->getMaterialParameterHandle("$lightPosition");

What this does is create a link to the "lightPosition" shader constant (this is essentially GetParameterBySemantic). This is what will allow you to dynamically assign values to shader constants from C++.

MaterialParameters *matParams = matInstance->getMaterialParameters();
// ...
Point3F lightPos;
worldViewOnly.mulP(lightInfo->getPosition(), &lightPos);
matParams->setSafe( lightPosition, lightPos );

This is assigning a dynamic value to a shader constant.
You can also assign textures dynamically to your materials. So where you will first want to add code is Engine/source/materials/processedCustomMaterial.cpp in _setStageData(). You will want to add a block like this:
if(filename.equal(String("$dynamiclight"), String::NoCase))
{
   rpd->mTexType[i] = Material::DynamicLight;
   mMaxTex = i+1;
   continue;
}

You will need to add to the same enum that Material::DynamicLight is in, to add a type for your texture.

Then take a look at ProcessedCustomMaterial::setTextureStages. This is where it does a switch based on that enum. Take a look at how it handles Material::SGCube. The texture itself gets passed in from the SceneGraphData struct (it really has not much to do with a scene graph at all). There are several ways of stuffing textures into materials, but I think this is one of the easier methods to follow.
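To sketch the shape of that switch case: 'Material::MyCustomTex' and 'sgData.myCustomTex' are hypothetical names for the enum value and SceneGraphData field you would add yourself, so treat this as pseudocode rather than the engine's actual code.

case Material::MyCustomTex:
   // texture handed in from your render delegate via SceneGraphData,
   // mirroring how Material::SGCube is handled
   if ( sgData.myCustomTex )
      GFX->setTexture( i, sgData.myCustomTex );
   break;

Check how the existing cases in setTextureStages bind their textures for the exact call to use in your engine version.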
You do *not* need to create GLSL files so don't worry about it. Just don't even define the fields, it will work just fine. The only fans of OpenGL are those who have not used it sufficiently to know better ;)
This system is significantly easier to use than it seems. It makes a ton of sense once you figure it out, but until then... it's some craziness.
Torque 3D Owner Pat Wilson
Can you say just a bit more about what you are trying to do? Torque3D actually exposes the deferred buffers directly to custom shaders.
First a tiny bit of background: Torque3D uses pre-pass lighting which is different than traditional deferred lighting. This uses two passes over the geometry instead of just one. The first pass writes out the linear depth, and normal to a buffer. Then the lighting pass happens, which draws lighting information to a light-buffer. Finally the forward-pass happens which gets the lighting results from the light-buffer, samples the diffuse texture(s) and calculates a final pixel color.
Both the depth/normal, and light-buffer are exposed to custom materials.
For the best example, take a look at the advanced lighting shaders themselves. The shaders used for the deferred light step are found in 'Game/shaders/common/lighting/advanced/'. The Material definitions are found in 'game/core/scripts/client/lighting/advanced/shaders.cs'
This is the definition for the Point Light material (the material applied to the sphere which represents a point light during deferred lighting)
new CustomMaterial( AL_PointLightMaterial )
{
   shader = AL_PointLightShader;
   stateBlock = AL_ConvexLightState;
   sampler["prePassBuffer"] = "#prepass";
   sampler["shadowMap"] = "$dynamiclight";
   sampler["cookieTex"] = "$dynamiclightmask";
   target = "lightinfo";
   pixVersion = 3.0;
};

The important bits here are:
- Sampling from the pre-pass buffer, which contains normals/linear depth
- Writing out data to the light-buffer ("lightinfo")
Now if you take a look at the 'pointLightP.hlsl' shader, you can see that (among other things) it uses 'prepassUncondition'. This method is auto-generated by Torque3D, and it will unpack the packed depth/normal buffer into an XYZ normal and a linear eye-space depth (there is a corresponding 'prepassCondition' function which does the opposite).
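As a sketch of what that unpacking looks like inside a pixel shader - the exact signature of the auto-generated prepassUncondition depends on the prepass format in use, so check the generated shader source rather than relying on this:

// assumed form: unpack normal + linear depth from the prepass buffer
float4 prepassSample = prepassUncondition( prePassBuffer, uv );
float3 normal = prepassSample.xyz; // unpacked normal
float depth = prepassSample.w;     // linear eye-space depth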
If you take a look at the Post Effect shaders, they do some of the same stuff, only using #lightinfo as a sampler, not as a target, and using 'lightinfoUncondition' to pull out light color/specular.
If you give a rough idea of what you are trying to do, I can help more (i.e. write to the light-buffer, or to the depth/normal buffer, etc.).