Shaders
by Martin Wedvich · in Technical Issues · 01/20/2003 (5:00 am) · 37 replies
How do I code pixel shaders into my game? The models will have quite a high polygon count, but I've seen Doom, which is without shaders, and it looks great, though the lighting is quite weird on some of the models. So I want to use pixel shaders. Again, how do I code this? Do I have to get the DirectX 8.1 or 9.0 SDK, or...?
#2
01/20/2003 (9:16 am)
First off, simple is relative. Doing shaders is simple in very much the same way that saying, "Hey, I live near a beach, I think I'll start a glass-blowing business," is simple. If you are willing to do the work, download the DX9 SDK and look around at the samples. Grab RenderMonkey or something and start messing around with that. This will let you write the shader without worrying about the rest of the junk; then after you get that, you have to do the rest of the junk. As was said before, Torque is written in OpenGL and you cannot put DirectX shaders in it without doing a massive amount of work. OpenGL shaders are currently vendor-specific extensions, so you'll have to write one for NVidia and one for ATI cards (and Matrox/3DLabs if you want to support them). Getting familiar with shaders is great and all, but I think chances are you do not want to mess with this stuff.
#3
01/20/2003 (9:39 am)
Pat: The OpenGL situation isn't as bad as that. There are now ARB versions of both vertex and fragment programs, so you can pretty much write one implementation for multiple hardware vendors now.
#4
01/20/2003 (2:35 pm)
Last I knew that was GL2. I know that ATI uses the ARB extensions; however (as far as I know) NVidia still uses NV_etc_etc for their extensions. GL2 does have a unified shader interface though, so that will make life much easier.
#5
01/20/2003 (4:43 pm)
Shaders are definitely something you can tackle, but it's a steep learning curve.. ;)
For development:
Cg is a good way to start learning, but it locks you into NV. And I don't know if anyone has built it to target other cards or platforms, or will.
ATI's RenderMonkey is kinda cool in that it'll generate code for all sorts of different target APIs. And it has a plug-in interface to extend it further.
DX8/9 give you a standard interface for querying and constructing shaders. If you only ever want to ship on a PC platform (no mac/linux), DX is currently the way to go. With the DX 'effects' files (I think that's what they're called), you actually design a series of 'fallback' implementations, which allows you to hit the entire range of shader and shader-less hardware with lesser and lesser implementations, but still have things work and run.
The DX HLSL (high level shading language) definitely simplifies things, but you can always do 'assembly level' shader coding that is supported for a given class of boards or better.
The OpenGL ARB fragment shader extension is, I believe, only going to work with next gen boards like the 9500/9700 or the nv30 whenever that shows up. At least, that was my impression. GL2's implementation is likely to base off of that further -- but I haven't seen a clear description.
So far as I understand, DX HLSL and Cg can at the moment compile much the same basic language (I think Cg made a point of saying they'd be able to compile HLSL shaders). Everything will start looking like C code, basically.
d
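To make the 'effects files' fallback idea above concrete, here is a rough sketch of what a DX9 effect with a shader technique and a fixed-function fallback can look like; the runtime validates techniques against the card and you pick the first one that works. All names here are made up for illustration, not taken from any real effect:

```hlsl
// Hypothetical DX9 effect sketch: one technique per hardware class.
float4x4 worldViewProj;
texture diffuseTex;

sampler diffuseSampler = sampler_state { Texture = <diffuseTex>; };

struct VS_OUT {
    float4 pos : POSITION;
    float2 uv  : TEXCOORD0;
};

VS_OUT MainVS(float4 pos : POSITION, float2 uv : TEXCOORD0) {
    VS_OUT o;
    o.pos = mul(pos, worldViewProj);
    o.uv  = uv;
    return o;
}

float4 MainPS(VS_OUT i) : COLOR {
    return tex2D(diffuseSampler, i.uv);
}

technique Shader11   // needs vs_1_1 / ps_1_1 class hardware
{
    pass P0 {
        VertexShader = compile vs_1_1 MainVS();
        PixelShader  = compile ps_1_1 MainPS();
    }
}

technique FixedFunction   // fallback for shader-less cards
{
    pass P0 {
        VertexShader = NULL;
        PixelShader  = NULL;
        Texture[0]   = <diffuseTex>;
        ColorOp[0]   = SelectArg1;
        ColorArg1[0] = Texture;
    }
}
```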
#6
01/20/2003 (4:54 pm)
Guys, shaders are more of an architectural change than necessarily a platform issue. When I say shaders I mean using a description file of some sort (normally termed a shader) to determine surface attributes (textures, combiners, etc.).
It's quite possible, and possibly quite likely, that shaders CAN be implemented in a cross-platform manner. Essentially taking a high-level description (let's say HLSL) and either
a: parsing it yourself on mac/linux
b: passing it to DX under DX9
c: parsing it and emitting fragment programs under GL on PC.
Most of the parsing could actually be done at install time (essentially the install program detects the shader library to install, which can read its own intermediate format; let's say it's like a bytecode version of a shader file).
If you write a shader library that can take in a high-level description and write out HLSL, GL vertex/fragment programs AND fixed-function pipeline calls, please let me know! :))
Phil.
#7
01/20/2003 (7:07 pm)
I can't tell if I'm a purist or just crazy (John Quigly claims I am "hardcore") but honestly, when dealing with:
sum over k = 0..n of [vertices * codeSize(k)]
instructions, where n is the number of rendering passes, I would much rather do it in assembly than Cg or whatever. It really is not that hard at all. You can translate almost directly from shader asm -> OpenGL calls; the only pain-in-the-ass thing is having to do one NV_vertex_program and one ARB_fragment_program (? syntax), but that will all be fixed in GL2.
The real problem, in my mind, is that Torque is not set up to do shader-based rendering. Sure you can hack it in, but the code is ugly as sin; ask the guys who did that water shader that was ITOD a while back, I'm sure they have many thoughts on the subject. Ideally you want to get the most bang for your buck out of shaders, and the way Torque seems to do things you would have to store the shader in the object, and each time the object renders you would have to load the shader into the GPU, run it, then unload it, since Torque isn't really an organized multi-pass setup. The other option is to do a global shader, so each object would make a call to a point-light shader routine or something, but you'd have to do all of it in one pass, which kind of screws over the whole versatility thing.
I dunno. It's possible. But it's probably ugly, and the time is better spent in other places for now.
#8
01/20/2003 (9:52 pm)
Ignoring completely what Pat just said (*grin* - excellent points though) and assuming you do want to go ahead and use shaders.. NVIDIA does now have ARB-standard vertex and fragment shaders. Well, standard vertex shaders now, standard fragment in NV30.
No word on whether Cg will be ported to the Mac. I've lost track of whether Apple are tight with NVIDIA anymore. There is a Cg runtime for Linux, however.
There's been a lot of arguing about whether Cg is good or not recently, and why you should bother when there's HLSL in DX9. Well, HLSL is no good if you're not running DX, and who knows when GL2 (and good drivers) will surface?
#9
01/21/2003 (5:53 am)
Phil: I've been working on a shading language in Torque for a while now. You can see some early shots of it in my last plan (it's come a ways along since then). I'll probably release it (or at least portions of it) to the community when it's done. Look for it in a couple of months.
I'm still debating with myself whether I'll explicitly write code to get hardware vertex/fragment programs in DX, since I'm perfectly happy with a GL-only implementation.
Shouldn't be tough for someone else to plug DX in if they want it, though.
Pat: I've been handling the shaders as material replacements. It works pretty well, especially since objects like interiors are already trying to sort by material. It's not as fast, state-change-wise, as sorting _all_ objects by shader, though. But hey, I'll look at doing that once I've got the basics working.
#10
01/21/2003 (9:31 am)
@Simon: Awesome about the ARB stuff. Just goes to show I should keep more on top of this sort of thing.
@Mark: Now that's an idea I really didn't think about. Nice.
#11
01/21/2003 (11:36 pm)
I've added support for NV_vertex_program
ARB_vertex_program
NV_fragment_program
NV_register_combiners
NV_register_combiners2
and have Cg runtime integrated into TGE.
But, this is not my field - I'm unable to get vertex programs to work with TGE data. Maybe someone with OpenGL background can help me?
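For the "can't get vertex programs to work" part: a useful first test is a do-nothing ARB_vertex_program that just transforms the position and passes the color through. If this renders, the data binding is fine and the problem is in the actual program. A sketch (untested here, using the standard state.matrix bindings):

```
!!ARBvp1.0
# Transform the incoming vertex by the tracked modelview-projection
# matrix and pass the vertex color straight through unchanged.
ATTRIB pos = vertex.position;
ATTRIB col = vertex.color;
PARAM  mvp[4] = { state.matrix.mvp };
TEMP   r;
DP4 r.x, mvp[0], pos;
DP4 r.y, mvp[1], pos;
DP4 r.z, mvp[2], pos;
DP4 r.w, mvp[3], pos;
MOV result.position, r;
MOV result.color, col;
END
```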
#12
01/22/2003 (8:25 am)
Robin, what do you mean you are unable to get them to work? What are you having trouble with? If you post some specifics then we can help. (Potentially at least =P)
#13
01/23/2003 (7:01 am)
I actually got vertex programs working! Wrote a simple sincos wave program for the fluid renderer (hardware waves!!).
This is easier than expected; I'm moving on to fragment programs.
#14
01/23/2003 (10:08 am)
Way to go! I love it when stuff works. The geek-rush I call it.
#15
01/26/2003 (8:57 am)
Hey, I know I'm late, but about the second post up at the top... Doom 3 is actually NOT using pixel shaders. Not sure about vertex shaders, but I wouldn't be surprised if Doom 3 doesn't use them.
#16
01/27/2003 (2:34 am)
Are you sure Doom III doesn't use 'pixel shaders'? I'm confused; on NVIDIA cards (<= GF4) 'fragment programs' get translated into register combiner / texture shader instructions (the Cg compiler generates TS/RC code when using the fp20 profile). Therefore nothing really uses fragment programs.
Doom must use a lot of TS/RC instructions.
#17
01/27/2003 (4:35 am)
I've seen the term fragment program used interchangeably with pixel shader in most stuff I've read (as in the Cg docs). As for Doom 3 not using either... it would be mighty strange to target GF3-level hardware (the first card that really supported either) and not use them.
If the effects in Doom aren't using them then they're being done in software... which I find even more unlikely. I can't say I've followed Doom III particularly much, but no doubt we're just getting some words mixed up rather than it not using them.
And without Vertex Shaders, Pixel shaders are really rather limited... so you'd use both.
#18
01/27/2003 (4:57 am)
Software-based rendering can solve problems. Would it be difficult to migrate a software-based shading system to DX or OGL?
#19
01/27/2003 (9:54 am)
I can't believe that Doom 3 is not using shaders out the wazoo. I know that Carmack is a fan of GL 2.0 and rewrote stuff to support GL2. The core thing of GL2 is redoing the rendering pipeline to put in shader support; since GL2 is backwards compatible with GL1, the only reason to redo stuff to support GL2 is to put in shaders.
Also, as was previously stated, vertex shaders are designed to be used in conjunction with pixel shaders. Don't get me wrong, vertex shaders are very, very cool, and you can do a lot of stuff with them, but if you want to actually use the whole shader thing to its potential you will use them in conjunction. Here's what you can output with a vertex shader:
position, color, texture coordinates
Depending upon the kind of effects you want to create, this isn't going to be enough at all, unless you want to have an offensive number of triangles.
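In DX9 HLSL terms, that output set is literally the struct a vertex shader returns: position, colors, texture coordinates, and a couple of scalars. A sketch (the struct and member names are made up; the semantics are the standard ones):

```hlsl
// Roughly everything a vs_1_1-class vertex shader can hand downstream.
struct VS_OUTPUT {
    float4 position : POSITION;   // clip-space position (required)
    float4 diffuse  : COLOR0;     // interpolated vertex color
    float4 specular : COLOR1;     // secondary color
    float2 uv0      : TEXCOORD0;  // texture coordinate sets
    float2 uv1      : TEXCOORD1;
    float  fog      : FOG;        // fog factor
    float  size     : PSIZE;      // point size
};
```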
#20
02/09/2003 (9:30 am)
Doom 3 isn't using shaders... but even without 'em, the visuals look astonishing... The GF3 thing would be that Doom 3 does fully support T&L, but a GF3 or better card is necessary to gain full advantage of it.
Torque 3D Owner Gareth Davies
Pixel shaders (interchangeably called fragment programs, etc.) and their cousins, vertex shaders, change the fixed rendering pipeline into a programmable one. I'd be mighty surprised if Doom (3?) isn't using both.
If you're starting from scratch you could use something like the HLSL built into DirectX 9, which allows you to implement them very simply. It's just a case of setting the VertexShader / PixelShader properties of the device before rendering your object (and the various register constants). Alternatively you could use HLSL techniques or integrate something like the Cg runtime (which compiles NVIDIA's Cg language to code for other target types). A good start is to get the DirectX 9 SDK (or even the Cg SDK) and at least read the articles it has about it.
It's more 'interesting' on OpenGL as there isn't really a set way of doing it.
If you mean how do you get them in Torque... a few people are working on it. But don't hold your breath; Torque is basically OpenGL with a DirectX wrapper... which means unless you mess with the wrapper directly and toast cross-platform support it's going to be very hard.