High Polygon Character Theory
by Joel Hargarten · in Torque Game Engine · 08/31/2005 (12:57 pm) · 7 replies
Hey. I've been working on player models recently, and I've been able to get some really high-quality models that look great. The only problem is that Torque only allows a DTS shape up to a certain number of polygons; the best I've been able to get into the game is around 10,000, but my models are in the 30,000 range. I've been thinking about ways around this dilemma, and I want to run an idea by everyone to see if it will work:
What if we took the player model and broke it up into several smaller models? For example, detach the head so it's separate, then the hands, legs, upper body, and lower body. Once they're all separate, a 30,000-polygon model can be broken up into pieces that can each be loaded into Torque on their own. All that would remain is to reconnect them in Torque, using code, so that they appear to be one single model.
This is where I run into some trouble. I've done something like this in a 2D Java applet: to attach the head, I gave it the right x, y coordinates. In 2D you can determine the coordinates using formulas such as body.height - head.height, or whatever. When you use formulas based on variables instead of exact coordinates, the parts always line up exactly, even if the proportions change. I believe the same system can be used here. It will be more complex because it's 3D and there is now a z-axis, but that's OK. My problem is that I'm still new to Torque and don't know where, or exactly how, to set this up. One idea is to place mounts in each separate model, like one at the base of the head and one at the top of the neck, so that those two could be used like magnets to connect the body and head. I'm not sure if that would work, but it's just an idea.
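The variable-driven idea carries over to 3D with one more axis. Here is a toy sketch (Python purely for illustration; the part names and the z-up convention are my assumptions, nothing Torque-specific): the head's position is derived from the body's measurements rather than hard-coded, so the parts stay lined up even if the proportions change.

```python
def attach_point(base, height, forward_offset=0.0):
    """Return the (x, y, z) point where one part mounts on top of another.

    base:           (x, y, z) origin of the lower part
    height:         vertical size of the lower part
    forward_offset: optional shift along the y (depth) axis,
                    e.g. a neck that leans slightly forward
    """
    x, y, z = base
    return (x, y + forward_offset, z + height)

# The head lands on top of the torso no matter how tall the torso is:
head_pos = attach_point((0.0, 0.0, 1.0), 0.5)       # -> (0.0, 0.0, 1.5)
# Scale the proportions up and the mount point moves with them:
head_pos_tall = attach_point((0.0, 0.0, 2.0), 1.0)  # -> (0.0, 0.0, 3.0)
```

This is exactly the body.height - head.height trick from the 2D applet, just with one extra coordinate.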
I know what most of you are thinking now: "Even if you could assemble these separate models to look like one model, what about animation? How are the other models going to use the skeleton of the root model?" This is probably the biggest problem in my theory, but there may be ways around it too. In my setup, the "root model" would be the upper body without the head, with the lower body cut off just below the knees. Those are the main parts of the body that the skeleton would move. The legs, for example, would still bend at the knees, and the calves attached below wouldn't miss a beat, because they'd be connected to the rest of the leg and don't bend anyway. I believe that as long as you have the main joints connected to the skeleton, movement would still be possible and would look just about as good. The only things I think would be lost are movement in the fingers and certain bending in the ankles and wrists. Personally, I don't use those movements much, but maybe someone else has a way around that.
Another benefit to all of this is customization. You could let the player choose all the parts of the character. Say they wanted a different face? Just replace the head model with another one. Or different boots, by swapping in another foot model. This would be an easy way to allow custom player models.
Well, that is my proposal. Tell me what I'm missing or what problems would result from this. I just quickly went through this off the top of my head, so maybe I haven't explained it very well. If you have questions, ask, and I can also post some drawn diagrams of what I mean. Let me know. Thanks.
#2
08/31/2005 (1:35 pm)
Not to be cynical, but WHY?
If it's for purely academic reasons, I can understand that...trying to figure it out for intellectual satisfaction,
but otherwise I can't really understand that for a PC game.
Recently I have begun to subscribe to the Texturing Excellence school of thought, which basically says that it makes much more sense to invest the time and energy into learning how to make better and better character skins, and to take advantage of shader hardware if absolutely necessary to achieve certain effects with regard to realism.
One 30,000-poly character in a game seems like such a waste of resources. I would argue that a 10,000-poly character WITH KICK-A** TEXTURING could rival the quality of a 30,000-poly character with an average skin. Valve did an excellent job with the characters in HL2 in terms of realistic facial expressions.
These are all philosophical questions about how and where to spend your time and energy, and purely subjective, so this rant will end here.
#3
08/31/2005 (1:36 pm)
No multiplayer game (next-gen or otherwise) can push multiple 30k players... I'm assuming you're looking for this kind of quality. The actual mesh is only a little over 5k polygons, with a normal map baked from a source model of over 2 million.
Here's a repost of a repost out of the private areas that explains everything:
originally posted by Erik Madision -
Quote:
While looking for info, I ran across a forum post of interest. I'm copying it here in case the original doesn't last. Credit goes to someone registered as HeliosDoubleSix.
//--------------------------------------------------
Normal Maps like in Unreal Engine 3
I have collected a bunch of resources on normal mapping as used in Unreal Engine 3, Doom 3, and Far Cry.
For those curious about normal mapping
For those that haven't seen Unreal Engine 3 yet
unrealtechnology.com/html/technology/ue30.shtml
Read this one before doing any of the other tutorials; it will help you understand what a normal map is
www.monitorstudios.com/bcloward/tutorials_normal_maps1.html
Zbrush 2 and Normal Maps
209.132.69.82/zbrush/zbrush2/displacement.html
ZBrush Central forum normal map posts
www.pixolator.com/zbc-bin/ultimatebb.cgi?ubb=get_topic&f=1&t=011260
www.pixolator.com/zbc-bin/ultimatebb.cgi?ubb=get_topic&f=1&t=015376
www.pixolator.com/zbc-bin/ultimatebb.cgi?ubb=get_topic&f=1&t=011757
www.pixolator.com/zbc-bin/ultimatebb.cgi?ubb=get_topic&f=1&t=011894
www.pixolator.com/zbc-bin/ultimatebb.cgi?ubb=get_topic&f=4&t=000789
ATI Tools
www.ati.com/developer/tools.html
www.ati.com/developer/sdk/radeonSDK/html/Tools/ToolsPlugIns.html
NVidia Tools
developer.nvidia.com/page/tools.html
Drones tutorials
www.drone.org/tutorials/normal_maps.html
www.drone.org/tutorials/displacement_maps.html
www.drone.org/tutorials/displacement_maps_mental.html
www.drone.org/tutorials/rayDisplace_workarounds.html
www.drone.org/tutorials/rayDisplace_mental.html
Tutorial on creating Normal Maps in 3DS Max
www.pinwire.com/article82.html
Generating Normal maps for wall textures with Cinema
members.shaw.ca/jimht03/normal.html
Lightwave and normal maps
amber.rc.arizona.edu/lw/normalmaps.html
FarCry game that uses normal maps
www.crytek.com/screenshots/index.php?sx=polybump&px=poly_02.jpg
Maps made using a program called PolyBump which generates normal maps from geometry, optionally including height-map bump maps. Includes 3ds max and Maya plugins. Includes code for integrating the effect into your real-time 3D engine. Includes standalone viewer.
www.crytek.de/polybump/index.php?sx=polybump
#4
08/31/2005 (1:37 pm)
Continued... Quote:
A PC Program for dealing with them
www.mankua.com/
Mike Bunnell's modification of ATI's tool that uses OBJ files instead of ATI's NMF format. Optionally creates a subdivision surface for you. Creates displacement maps. Supports 16-bit TIFF. Etc.
subd.4t.com/normalmapper/
ORB is another normal map generator, converts 3D models into normal maps. Also generates displacement maps, diffuse maps, vertex-color maps. Imports ASE/OBJ/LWO formats. Previewer included.
www.soclab.bth.se/practices/orb.html
Texporter can create a normal map from high-res geometry, as long as the UVs are there. Although I should point out it is a world-space normal map, so you shouldn't rotate or deform the final model that has the normal map on it, because the shading will be horrible. World-space normals are best for static objects in your game.
www.cuneytozdas.com/software/3dsmax/#Texporter
Gary Pate (a.k.a. Ionized) has a great tutorial using 3ds max and ATI's normal mapper.
www.ionization.net/tutsnorm1.htm
XSI normal mapping
be3d.republika.pl/howto_d3_normalmap.html
Blender normal mapping
reblended.com/www/alien-xmp/Tutorials/NormalMap/NormalMap.html
Discreet's utility plugin Normal Render works OK, but requires a similar UV layout between the low-res and high-res objects. This can be quite limiting. Not many options in the tool. Works with max4 and max5. Free registration is required to download the file.
sparks.discreet.com/download...fm?f=2&wf_id=83
Ben Lipman's gNormal plugin goes in the bump channel of a material, allowing you to use your normal map in the 3ds max renderer.
www.maxplugins.de/max5.php?search=gnormal
NVIDIA has a tool they're about to release called Melody. It can create the low-res model automatically (it seems pretty good as far as auto-LOD is concerned, though of course never as good as manual work) and they wrap the whole thing in a GUI.
Some opinions here:
dynamic.gamespy.com/~polycount/ubb/Forum8/HTML/002595.html
developer.nvidia.com/docs/IO/4449/SUPP/GDC2003_AllThePolygonsYouCanEat.ppt
developer.nvidia.com/docs/IO/4449/SUPP/GDC2003_AllThePolygonsYouCanEat.pdf
ATI and NVIDIA each use different normal map formats with their graphics chips.
Basically ATI expects the green channel to point the normal upwards, while NVIDIA expects it to point downwards.
The MetalBump shader in 3ds max uses the NVIDIA method.
Mankua's Kaldera has the option to output either the ATI or the NVIDIA format; I'm not sure about the other tools.
One sure-fire method to fix a map that's incompatible with your viewer is to simply invert the green channel in your image editor of choice. By inverting I mean the black pixels should become white, and the white pixels should become black.
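That green-channel flip is simple enough to sketch in a few lines. A minimal pure-Python version (no image library; pixels are represented here as (r, g, b) tuples purely for illustration):

```python
def flip_green(pixels):
    """Convert a normal map between the green-up and green-down
    conventions by inverting the green channel: g -> 255 - g.

    pixels: a list of (r, g, b) tuples with 8-bit channel values.
    """
    return [(r, 255 - g, b) for (r, g, b) in pixels]

# Flipping twice restores the original map:
sample = [(128, 255, 128), (200, 64, 255)]
once = flip_green(sample)    # greens 255 and 64 become 0 and 191
twice = flip_green(once)     # back to the original values
```

In an image editor this is just "invert" applied to the green channel only; red and blue are left untouched.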
Polycount thread containing tips about painting/editing normal maps.
dynamic.gamespy.com/~polycount/ubb/Forum8/HTML/002497.html
Polycount thread explaining World Space vs. Object Space vs. Tangent Space.
dynamic.gamespy.com/~polycount/ubb/Forum8/HTML/001876.html
Polycount thread dissecting Doom IIIs use of normal mapping.
dynamic.gamespy.com/~polycount/ubb/Forum8/HTML/000441.html
Digital Sculpting Forum threads about normal mapping/displacement extraction.
cube.phlatt.net/forums/spiraloid/viewtopic.php?TopicID=9
cube.phlatt.net/forums/spiraloid/viewtopic.php?TopicID=395
cube.phlatt.net/forums/spiraloid/viewtopic.php?TopicID=581
These links and info have been gathered from all over, take them and repost if you like.
#5
08/31/2005 (1:50 pm)
Wow... great info on normal maps. That's the other side of the whole optimization piece: why push real-time polys when they aren't necessary?
#6
08/31/2005 (3:28 pm)
Torque / DTS already supports what you're talking about through the use of mount points. It doesn't support weighted vertices, as you've already indicated, but I think it's a good idea. One thing you'll find, though, is that you're going to spend a lot of time on this "workaround." I think the MDL file format is a lot closer to what you need than the DTS format. It has the notion of supermodels and submodels, including the ability to assign vertex weights to bones in submodels when the bones actually reside in the supermodel.
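The mount-point approach boils down to a parent/child transform: each frame, the mounted part's world position is the parent's mount-node position plus a fixed local offset rotated into the parent's orientation. A rough sketch of that math (Python for illustration only, reduced to a single yaw rotation; a real engine would carry full 4x4 node transforms):

```python
import math

def mount_position(parent_pos, parent_yaw, local_offset):
    """World position of a part mounted on a parent node.

    parent_pos:   (x, y, z) of the parent's mount node in world space
    parent_yaw:   parent's rotation about the vertical axis, in radians
    local_offset: the child's fixed offset in the parent's local space
    """
    ox, oy, oz = local_offset
    c, s = math.cos(parent_yaw), math.sin(parent_yaw)
    # Rotate the local offset by the parent's yaw, then translate.
    wx = parent_pos[0] + c * ox - s * oy
    wy = parent_pos[1] + s * ox + c * oy
    wz = parent_pos[2] + oz
    return (wx, wy, wz)

# A head mounted 0.2 units above the neck node follows the body
# wherever it moves or turns:
head = mount_position((5.0, 3.0, 1.6), math.pi / 2, (0.0, 0.0, 0.2))
```

Recomputing this every frame from the animated parent node is what makes the separate pieces appear to be one model.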
If you don't have a solution, keep an eye on my blog. I've taken the initial steps of making MDL models work natively in TSE, and I plan to eventually support all of the features of the MDL file format, including dangly meshes (a.k.a. skins and cloth) and animated submodels.
Now, back to the philosophical question of "Why create a mesh with so many polygons?"
Short answer... it's easier.
Coding originally started with programmers trying to squeeze every millisecond out of an application through tedious optimization, extensive use of assembly language, and all sorts of other tricks and wizardry. Today's game artists do the same thing with automated mesh decimation and the painstaking process of polygon reduction. In the future I'm certain you'll be able to take a Catmull-Clark subdivided surface of a generated (or scanned!) mesh of over 100k polygons, import it directly into a game engine, and get automatic subsurface scattering, etc. done in real time... and I expect that fantasy to become a reality in under 5 years; you know, about the time when video cards are up to 1+ GB of memory, most of the game engine runs on the GPU, and video cards come with physics co-processors (or at least the GPU has enhanced shader-like instructions tailored for physics calculations, a.k.a. GPU+PHX)...
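To make "automated mesh decimation" concrete, the crudest form is vertex clustering: snap every vertex to a coarse grid, merge vertices that land in the same cell, and drop triangles that collapse. A toy sketch of that idea (Python for illustration; real tools use smarter error metrics such as quadric edge collapse):

```python
def decimate(vertices, triangles, cell=1.0):
    """Crude vertex-clustering decimation.

    vertices:  list of (x, y, z) positions
    triangles: list of (i, j, k) vertex indices
    cell:      grid size; larger values merge more aggressively
    """
    cell_of = {}    # grid cell -> index of its representative vertex
    remap = []      # old vertex index -> new vertex index
    new_verts = []
    for v in vertices:
        key = tuple(int(c // cell) for c in v)
        if key not in cell_of:
            cell_of[key] = len(new_verts)
            new_verts.append(v)   # first vertex in the cell represents it
        remap.append(cell_of[key])
    # Keep only triangles whose three corners remain distinct.
    new_tris = []
    for i, j, k in triangles:
        a, b, c = remap[i], remap[j], remap[k]
        if a != b and b != c and a != c:
            new_tris.append((a, b, c))
    return new_verts, new_tris
```

For example, two vertices 0.1 units apart fall into the same 1.0 cell and merge, and any triangle that used both of them degenerates and is discarded, shrinking the poly count.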
Ok, so games push the envelope a bit, and until you have a GPU that can do 100 million ray-traced polygons a frame at 30 frames per second with multiple hardware shaders, soft-shadows from multiple light sources, you're still going to have to be counting your polys... for now.
So... the answer to the "Why?" question is simply "It's easier."
#7
08/31/2005 (6:47 pm)
Quote:
So... the answer to the "Why?" question is simply "It's easier."
I suppose since I come from a web design background, I've been trained from the ground up to think about optimization in every aspect. So when I see what looks like "bloating" in any form, it looks like fat that needs to be trimmed, mainly because a JPEG at 100% quality doesn't look much better than the same JPEG at 82%, and the 100% version is often almost twice as big. I know that's a bad example, but in actual modeling I've seen high-poly models waste hundreds of polys hand over fist without much visual difference from an optimized version. And building the low-poly model is not significantly harder than the high-poly version; it's more a matter of technique and awareness, IMO.
Torque Owner Adrian Tysoe
I might be wrong, but in my experience this isn't feasible with any usable performance. I'm surprised you need so many polys; I rarely see anything modeled, character-wise, that couldn't be reduced to 5,000 polys and look just as good, so long as you're careful with placing your tris and turning the edges to get the desired smoothing. I think your first problem will be performance:
both poly count and CPU.