Idea: Edge Clipping
by Will Harrison · in General Discussion · 08/27/2004 (5:38 pm) · 22 replies
(EDIT - Please jump down to about my 10th post below for the newer idea)
I had this idea a while back, but I thought I would present it to all of you now and get some feedback.
I call it "Smooth Mask"... here it is in action:



Basically this extends the idea of normal maps (where you are only making the shaded surface look more like the way it was originally modeled)... now the silhouette can look smooth, the way it was made, thus eliminating that "low-poly" look we see too often. And, of course, the big benefit is that you're not adding more polygons to do it.
I'm not an expert coder, so I have no idea really how to make this work in a game.... but the principle is this: take the original shape and get its silhouette/alpha-channel, then apply a Gaussian Blur with a 17.0-or-whatever-looks-good radius, then run that through a threshold filter (set at 128), and finally mask/stencil-buffer the render of the model.
What do you all think? Is this doable? What would the performance hit be for this added calculation/process? Anyone want to take up the challenge of doing this? Why hasn't this been done already?
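To make the pipeline concrete, here is a rough offline sketch in Python/NumPy of the mask step described above: binary silhouette in, Gaussian blur, then a 50% threshold (the "128" level) back out. The function names and the radius/sigma defaults are my own illustration values, not anything from an engine.

```python
import numpy as np

def gaussian_kernel(radius, sigma):
    """1-D normalized Gaussian kernel of length 2*radius + 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def smooth_mask(alpha, radius=8, sigma=None, threshold=0.5):
    """Blur a binary silhouette, then re-threshold it at 50%.

    alpha: 2-D float array, 1.0 inside the shape, 0.0 outside."""
    sigma = sigma if sigma is not None else radius / 2.0
    k = gaussian_kernel(radius, sigma)
    # Separable Gaussian: blur rows, then columns.
    blurred = np.apply_along_axis(np.convolve, 1, alpha, k, mode="same")
    blurred = np.apply_along_axis(np.convolve, 0, blurred, k, mode="same")
    return (blurred >= threshold).astype(np.uint8)
```

On a blocky silhouette this rounds off convex corners while leaving long straight edges in place, which is the whole point of the trick.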
#2
08/27/2004 (10:05 pm)
Interesting! Thanks! It's a good technique in that the silhouette/contours are pre-calculated for different views and are therefore moderately faster than dynamic calculation... however, this requires a finite set of possible views rendered from the POV of a limited number of vertices that compose the sphere/silhouette map that they use. This means A. you have a lot of extra data, and B. you can't apply the silhouette for every possible view without interpolating, which could be imperfect at best I imagine, and finally, C. worst of all, it can only be used with static, non-animated shapes!
It seems an over-complicated way to do the same thing... but it's nice to know some minds are or were at work on this.
....
I would be blown away if I saw this working... imagine you come right up close to an object or character in a game... and the edges are completely smoothed out, I mean not so much as a hint of a polygon... if you compare this to LOD, you are getting a much better final image I think without adding any polygons to achieve the effect.
EDIT- Would anyone like to collaborate on doing this in code?
#3
08/27/2004 (10:39 pm)
Sounds good. One thing I can think of though would be that you would still see the polys when an object is obscuring itself (like the ear in your above picture). It would make a remarkable improvement though.
#4
08/27/2004 (10:44 pm)
"It would make a remarkable improvement though."
Thanks.
"One thing I can think of though would be that you would still see the polys when an object is obscuring itself (like the ear in your above picture)."
Yes, someone in another forum mentioned this... it's true, there's not much you can do for shapes within the silhouette, but as the Harvard people said, "the appearance of the silhouette of an object is one of the strongest visual cues as to the shape of the object." i.e. the edge is the most obvious cue in defining an object's shape.
Anyone have any ideas on how to do this in code? ...We could make Torque famous with this feature, hehe ; )
#5
08/28/2004 (12:38 am)
This has already been implemented, at least for static objects: see http://research.microsoft.com/~hhoppe/ and go hunt for the paper and video on silhouette clipping. Hoppe's method takes a low resolution and a high resolution mesh as source data and then uses the high resolution mesh to generate a list of "high resolution edges." The edges of the low polygon model are then clipped using some interesting stencil buffer tricks.
The biggest problem I have with this is the issue of skeletally animated objects. Hoppe's method uses some unusual data structures to speed up the problem of calculating the edges of the mesh (a tree of anchored cones, I believe) and the way this is set up does not mix well with animated meshes. We could of course calculate silhouette edges by normal means (back-face culling all faces of the high resolution mesh versus the camera, and then testing which edges of the mesh have exactly one attached face visible. This also makes assumptions about 2-manifold geometry...) but that would be intolerably slow. There are probably other issues involved here as well; I seem to recall that the way the winding is generated for meshes is somewhat peculiar.
The idea of procedurally generating smoothed "silhouette maps" at run-time is an interesting one, and a novel approach to the problem. If you could render a skeletally animated image into a texture and then somehow extract a smooth (read: pixel-accurate) outline from that image, that would be promising. This idea is interesting enough that I will try to find some time to research it, but any implementation would likely require PS 2.0.
Nicholas
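For what it's worth, the "normal means" described above (back-face culling the faces against the camera, then keeping edges with exactly one front-facing attached face) can be sketched in a few lines of Python. This is exactly the brute-force CPU version the post warns would be intolerably slow per frame, and it assumes a closed 2-manifold mesh with consistent outward winding and an orthographic view direction; the function name is my own.

```python
import numpy as np
from collections import defaultdict

def silhouette_edges(vertices, faces, view_dir):
    """Silhouette edges of a closed 2-manifold triangle mesh with
    consistent outward winding, for an orthographic view direction."""
    v = np.asarray(vertices, dtype=float)
    count = defaultdict(int)
    for a, b, c in faces:
        n = np.cross(v[b] - v[a], v[c] - v[a])   # face normal from winding
        if np.dot(n, view_dir) < 0:              # front-facing w.r.t. camera
            for e in ((a, b), (b, c), (c, a)):
                count[tuple(sorted(e))] += 1
    # Every edge of a closed mesh borders two faces; a silhouette edge has
    # exactly one of them front-facing, so it is counted exactly once.
    return [e for e, c in count.items() if c == 1]
```

Even this toy version touches every face every frame, which is why Hoppe goes to the trouble of building the anchored-cone hierarchy.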
#6
08/28/2004 (1:01 am)
Thanks for your interest in this Nicholas.
I watched the video, it was interesting... and really complicated, I mean the way they did it. As you say, the drawback is that you would have a hard time making this approach work with animations.
As far as I can foresee, my suggested technique can be done on the fly with fully animated shapes with no extra calculations required.
"but any implementation would likely require PS 2.0."
I see.
#7
08/28/2004 (2:22 am)
If it's any help Nicholas, I just heard that Tron 2.0 (the game) used the Gaussian blur filter to make glow effects.
____
Also, as an alternative solution, I believe the same effect can be achieved by back-face culling / edge-detecting, taking that polygonal edge or loop, interpolating curvature between the vertices of that loop, then using that to clip the shape before rendering. There is an added benefit here of being able to account for spikes (i.e. not undesirably curving out spikes that should remain spiky)... this can be done by calculating the angle between edges within the loop; if it's very acute then you know it's a sharp angle and you can ignore it, etc. Also, I believe this makes anti-aliasing the outline of a shape easier, another added benefit.
If this could be done on lower-end hardware, it would be all the more impressive.
#8
08/28/2004 (3:29 am)
You would need to put a flag on pieces of the model that you would want to remain angular and retain their edges (boxy pieces).
#9
08/28/2004 (9:51 am)
Isn't a 17px Gaussian blur an extremely expensive operation? I know that Tron 2.0 had to strictly limit both the blur radius and the resolution of the screen objects are being blurred against (I seem to recall 256x256 being what they settled upon).
Also, there are some more obvious problems with this technique:
- Small and/or distant objects will be "smoothed" out of existence
- Very large objects that extend significantly in the Z direction will be mutilated.
- How would level geometry be handled?
#10
08/28/2004 (10:37 am)
This would be extremely difficult on today's graphics hardware. They're just not geared for that sort of compositing. You might have better luck with the SM3.0 era of hardware, though. Or maybe when DXNext comes out...
#11
08/28/2004 (12:11 pm)
@Scott: I suppose that is one way, but there are two alternatives: either have a "smart" Gaussian blur filter that can discern sharp edges (that should remain) from small, but obvious, angles (that should be rounded out)... or use the aforementioned technique:
"Also, as an alternative solution, I believe the same effect can be achieved by back-face culling / edge-detecting, taking that polygonal edge or loop, interpolating curvature between the vertices of that loop, then using that to clip the shape before rendering. There is an added benefit here of being able to account for spikes..."
@Alan: yes, it would be, and unlike a blur or glow effect, it would have to be at a high resolution too... if done without some form of hardware acceleration, I think it's next to impossible to maintain an acceptable level of performance... but if you use PS 2.0, as mentioned, I think it's quite possible. However I'm leaning more now towards my alternate solution (see above).
Quote:
- Small and/or distant objects will be "smoothed" out of existence
- Very large objects that extend significantly in the Z direction will be mutilated.
- How would level geometry be handled?
- If using the suggested Gaussian blur technique, then the radius would be inversely proportional to the distance from the object; e.g. if the object is far away, you don't need any filter applied at all.
- Not sure what you mean by this one...
- I think level geometry could be ignored for the most part... I mean, buildings tend not to need "smoothing out" if you know what I mean, but mechanical things like piping would benefit from this... so they could be treated as separate objects, I guess.
@Ben: this is probably true. There's only two ways that I can see: use newer technology, OR, more promising IMO, use the latter technique that I have mentioned. It's quite similar to the silhouette clipping idea seen at that Microsoft Research website.
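A trivial sketch of the distance-scaled radius idea mentioned above (radius inversely proportional to distance, no blur at all when far away). Every constant here is a made-up illustration value:

```python
def blur_radius(distance, base_radius=17.0, ref_distance=10.0):
    """Blur radius shrinks inversely with distance to the object.

    Inside ref_distance the radius is clamped, so close-ups get the
    full base_radius; far away it falls toward zero (no blur needed).
    base_radius and ref_distance are illustrative, not engine values."""
    return base_radius * ref_distance / max(distance, ref_distance)
```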
#12
08/28/2004 (2:42 pm)
This is to illustrate the newer idea:


The method of determining the edge is the same as that used for cel-shading. However, unlike cel-shading, instead of drawing a thick line on the edge, you use the interpolated edge to clip the image before rendering.
Again, I'm not much of a coder, so can anyone offer specifics on how this would be done?
#13
08/29/2004 (11:10 am)
Do you think this is feasible in Torque? (the above technique)
I am honestly clueless as to why this hasn't been done already....
#14
08/29/2004 (1:55 pm)
Feasible? Not especially, no.
It's view dependent. With the current generation of hardware, that means that you'd either have to precalculate a bunch of clipping meshes for different views as you mentioned (too much RAM, most likely) - or you'd be doing it at render time. I can think of a few ways I could code it, but they'd all cost more than it's worth.
You might be able to get away with it for a demo or something, but I'd guess the requirements are a little high for a real game that's also worrying about physics and AI and such.
It would be a good feature for the future, but you'd probably need nVidia or ATI to add support for it in the hardware and drivers.
Whole-scene blur is pretty easy to do, but you already get most of the benefit of that just from anti-aliasing. Besides, you'd end up blurring things you don't want blur on.
You can also find other approaches to this same kind of problem in any paper on non-heightmap view dependent level of detail. Most often applied to terrain rendering, but there are some general case algorithms.
#15
08/29/2004 (3:37 pm)
Hard-edged shadows that are made with vertex shaders are calculated on a frame by frame basis.
Similarly, cel-shading is calculated per frame.
...and they are both view dependent.
...and they are both based on determining the vertex edges of the mesh.
Is this not a natural extension of these existing concepts?
#16
08/30/2004 (12:47 am)
I would like to post here my theory on how to implement this idea... if I could get some feedback, that'd be great.
First, some shadow volume pictures from DX9:


Quoting from my post from another board:
The first image is showing the edge-detection at work.
A series of interconnected vertices are created around the very edge of the mesh. This is an entirely separate set of vertices, based on the original model. These vertices are then extruded, parallel to the direction of the light source....
Below that you see where the shadow volume is stenciled into the ground (also, you can see some self-shadowing happening too).
These vertices that are extruded to create the shadow volume are pulled out according to the direction of the light source, as mentioned. However, with a few changes in code, the vertices can be extruded based on the camera's direction (i.e. the player's POV)... So what you have now is this set of connected vertices, a kind of contour, tracing around the model always at the exact edge according to your view... it will be there no matter how you move around it, or if it rotates, or animates, or whatever....
So what this gives you is the first stage of this: (prior to interpolating)

The red contour line you see above will be there in real-time, calculated every frame, with a constant frame rate of 30+fps or so.... we know this is not a problem, as the same process is already used to do real-time shadowing and cel-shading in many existing commercial games.
Then, the next step is to take that contour and re-create it, but this time, add vertices in between the existing vertices... (there will be some logic here to control where to add vertices and where not to... e.g. where you don't need them at really sharp points and concave areas).... Now you have the extra vertices to make the contour look smoother, but it's still not smooth. You would have to take the original vertices (i.e. not the newly added ones) and project them negatively (inward) along their normals, thus yielding the concept in the above image... a kind of "interpolated clipping edge."
This illustrates the idea of "projecting negatively" or inward the original vertices (from the first constructed contour):

After that, I'm unsure exactly what to do.... but basically, I guess, you would take the newly made clipping edge and use that to stencil-buffer the model, so as to "clip off" the unwanted sharp areas of the silhouette.
That is my theory in a nutshell.
Does it make sense?
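The "add in-between vertices, then pull the originals inward" step reads very close to classic corner-cutting subdivision (Chaikin's algorithm), so here is a 2-D sketch of that as a stand-in for the contour-smoothing stage. Note it does not do the spike-preservation angle test described earlier; that would be an extra filter on top.

```python
import numpy as np

def chaikin(loop, iterations=1):
    """Corner-cutting smoothing of a closed 2-D loop.

    Each edge (p, q) is replaced by the two points 0.75*p + 0.25*q and
    0.25*p + 0.75*q. The original corner vertices are discarded, which
    pulls the contour inward at convex corners -- the same effect as the
    "interpolated clipping edge" described above."""
    pts = np.asarray(loop, dtype=float)
    for _ in range(iterations):
        nxt = np.roll(pts, -1, axis=0)           # each vertex's successor
        q = 0.75 * pts + 0.25 * nxt
        r = 0.25 * pts + 0.75 * nxt
        out = np.empty((2 * len(pts), pts.shape[1]))
        out[0::2], out[1::2] = q, r              # interleave the cut points
        pts = out
    return pts
```

A couple of iterations per frame on a silhouette loop of a few hundred vertices is cheap; the expensive part remains extracting the loop and feeding the result to the stencil pass.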
#17
08/30/2004 (2:14 pm)
Quote: "add vertices in between the existing vertices... (there will be some logic here to control where to add vertices and where not to... e.g. where you don't need them at really sharp points and concave areas)"
This is where you start to get shaky. Cel-shading and Shadowing tricks don't add vertices on the fly - that's the essential difference.
You can do something like this with a low poly model and displacement mapping. But at that point, you might as well apply displacement mapping to the whole model and forget about the silhouette detection.
#18
08/30/2004 (2:56 pm)
...from my above post:
Quote: "A series of interconnected vertices are created around the very edge of the mesh. This is an entirely separate set of vertices, based on the original model. These vertices are then extruded, parallel to the direction of the light source...."
Cel-shading doesn't add vertices on the fly (in OpenGL at least; in DirectX, it depends on your approach), but the shadow volume process does.
As can be seen in the above picture (where there's rays of green lines coming off that dwarf character).
The vertices don't have to be added to a whole new contour... rather, they can be added as the edge is being created initially. I suggested making a new set of vertices for clarity, but really, it can be done when the first set is already being created.
Re displace maps
Displacement mapping involves tessellating the entire mesh, or tessellating adaptively wherever there should be detail.... this, on the other hand, only traces and smooths the edge... no extra polygons are ever added to the mesh... the mesh stays untouched... just the corners are clipped off with this "interpolated clipping edge".
This approach doesn't require any special hardware with support for displacement mapping... or the use of n-patches or whatever... it's a relatively low-cost way to do more or less the same thing.
#19
09/04/2004 (11:50 am)
What you're doing seems to be essentially a 2d composite operation, unless you plan on dynamically generating a _lot_ of edge geometry. Stencil shadows only match up to the existing geometry. Trying to do the sort of edge manipulation you're talking about is a much more complex clipping operation, which in most cases would involve modifying multiple faces.
Which means you'd have to do some pretty complex geometric manipulation, per-frame, and upload it.
And then once you've got the geometry up there, how are you going to blur it? The ability of modern hardware to blur is VERY minimal. It _is_ possible but not in a generic way. It usually involves one or more render to texture passes. Doing a full screen blur for the glow buffer in TSE, even on SM2.0 hardware, is a bit of a hit, and mostly because of the render to texture operations. Trying to do gaussian passes on arbitrary screen sections is likely to increase the number of ops.
Obviously, there's a lot of room for optimization, so I won't say it's impossible, but it wouldn't be a feature to add lightly. It's one of those things you'd end up structuring your engine around.
#20
09/04/2004 (2:16 pm)
Thanks Ben...
If you read my last and second to last posts above, you will see I'm talking about using edge detection and polygon-based clipping, rather than Gaussian blur... using conventional methods, that is, the same methods used for volumetric shadows and cel-shading.
To clarify...
Existing Shadow Volume Process:
- detect backfacing polygons to find edges
- create an edge loop around the edge of the model and put into vertex buffer
- extrude vertices of said loop parallel to direction of light source
- stencil buffer intersecting region (in ground) with dark quad to create shadow
- repeat process every frame
and....
Hypothetical Edge Clipping Process:
- detect backfacing polygons to find edges
- create an edge loop around the edge of the model and put into vertex buffer... as vertices are being created, add in-between vertices
- contract in-between vertices along normal to smooth edge loop (amount of contraction depends on distance to camera) (See illustration in above posts)
- use the edge loop to clip the original model via the stencil buffer, resulting in smoothed edges
- repeat process every frame
(The gaussian blur approach was not a very good idea really, from a technical standpoint...)
EDIT -
I would like to add...
The result of this smoothing process can create completely smoothed-out shapes at any distance... also any size... just like in most CG movies you never see poly edges... this can create the same effect... while you're playing the game.
If anyone wants to try coding this, please let me know! :)
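As a 2-D stand-in for the final "clip via stencil buffer" step in the list above, here is a sketch that rasterizes a clipping loop into a boolean mask, which is effectively what the stencil buffer would hold. `polygon_mask` is a name I made up; real code would do this on the GPU with stencil ops rather than on the CPU.

```python
import numpy as np

def polygon_mask(loop, h, w):
    """Rasterize a closed 2-D loop into a boolean (h, w) mask using the
    even-odd (ray-crossing) rule, sampled at pixel centers."""
    ys, xs = np.mgrid[0:h, 0:w]
    px = xs.ravel() + 0.5                         # pixel-center x coords
    py = ys.ravel() + 0.5                         # pixel-center y coords
    inside = np.zeros(px.shape, dtype=bool)
    n = len(loop)
    for i in range(n):
        x1, y1 = loop[i]
        x2, y2 = loop[(i + 1) % n]
        crosses = (y1 > py) != (y2 > py)          # edge spans this scanline
        # x where the edge crosses each scanline (guard horizontal edges)
        xcross = x1 + (py - y1) * (x2 - x1) / ((y2 - y1) or 1e-12)
        inside ^= crosses & (px < xcross)         # toggle parity per crossing
    return inside.reshape(h, w)
```

Feed the smoothed edge loop in here and the resulting mask is what you'd stencil the model's pixels against, so everything outside the interpolated contour gets clipped off.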
Peter Kojesta:
http://people.deas.harvard.edu/~xgu/paper/Silhouette_Map/silhouette_intro.html