Game Development Community

T3D performance issue

by Yuejun Zhang · in Torque 3D Professional · 07/08/2009 (4:50 pm) · 5 replies

I tested the T3D beta2 and beta3 on an ATI HD 2400 video card, 3.2 GHz CPU, 2 GB RAM. It runs at 7-8 fps. TGEA 1.7.1 runs at 30-40 fps on this computer with similar maps, and I can run Crysis 2 and Battlefield 2 smoothly at about 20-30 fps. Some of my friends have also reported the performance issue when they tested my T3D game. Does anybody out here have a similar issue?

#1
07/08/2009 (6:56 pm)
I had the same issue. I downloaded the DirectX SDK and new drivers for my graphics card. After I restarted my computer, my framerate improved from 5-6 fps to 17-18 fps in the FPS Genre Kit. Running my own game, my fps improved from around 10-11 to 29-30. It isn't as high as I would like, but it was a major improvement.
#2
07/09/2009 (7:48 pm)
Realize we are still in beta. A lot of the demo kits do not have LOD set up, and it has been stated before that advanced lighting still has a lot of optimization in store.
#3
07/09/2009 (8:02 pm)
The FPS Genre Kit in particular does nothing in relation to scaling (no configurable shadow distance in the options and the like), so the performance gets even worse.
But that's actually good, as it potentially offers GG the opportunity to create a "how to optimize your game with T3D / how to scale visuals smoothly depending on the hardware" guide that goes a bit into the details, since T3D finally has the capability to scale thanks to a far lower amount of hardcoded stuff than TGEA.

Also, it's important to keep in mind that you are using Advanced Lighting, which requires advanced hardware.
You have a low-end card (when it was new it was mid-range, but that's 18 months back), so you will have to disable or reduce advanced features to get useful performance, and I would expect that you will have to do this with the final release too.

Crysis 2: it does a lot of the automatic scaling I mentioned above behind the scenes. It detects that you have a restricted card and simply disables or scales down things internally that your card would be too slow for anyway. Also, comparing $1,000 tech to $500,000-per-title tech is a bit out of scale. I would definitely expect it to run better; otherwise T3D would be significantly underpriced :D
#4
07/10/2009 (12:21 am)
I would like to point out that, as far as Advanced Lighting is concerned, even a "mid-range" card is actually no good. Lower-end cards tend to be light on memory bandwidth, and Advanced Lighting is a GPU bandwidth hog as well as an ALU-operation hog. This is why a low/mid-range card, which may be able to run pixel/vertex shaders quickly, can't run Advanced Lighting as quickly.

Now, that being said: you will run that scene at 30-40 fps, and if you add, say, 50 dynamic lights (without shadows) to the scene, you will still get 30-40 fps. Add another 50 dynamic lights (without shadows) and you will still get 30-40 fps. Performance will start to degrade linearly as you add more lights (well, dependent on light size as well, technically), but very, very slightly.
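The behavior described above can be sketched with a toy cost model: deferred ("advanced") lighting pays a large fixed per-frame cost, while each additional small, unshadowed light adds only a tiny marginal cost. The numbers below are invented for illustration, not measured from T3D.

```python
# Hypothetical cost model for deferred lighting.
# GBUFFER_MS and PER_LIGHT_MS are made-up illustrative figures.

GBUFFER_MS = 28.0      # fixed per-frame cost: filling and reading the G-buffer
PER_LIGHT_MS = 0.02    # marginal cost of one small dynamic light, no shadows

def frame_ms(num_lights):
    """Total frame time in milliseconds under this toy model."""
    return GBUFFER_MS + num_lights * PER_LIGHT_MS

def fps(num_lights):
    return 1000.0 / frame_ms(num_lights)

for n in (0, 50, 100, 500):
    print(f"{n:4d} lights -> {fps(n):5.1f} fps")
```

With these assumed constants, going from 0 to 100 lights costs only about 2 fps, which matches the "you will still get 30-40 fps" observation; the fixed G-buffer term dominates.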

The take-away from this post is:

Advanced Lighting has a somewhat significant, one-time performance penalty, meaning it takes a certain chunk of time to run no matter what you do in the scene. That time is dependent on the bandwidth of the GPU, the screen resolution, and the G-buffer bit depth (32 or 64 bits; right now it defaults to 64, but this is only to prevent people from doing stupid things like setting a visible distance of 20,000 with only 16 bits of z-precision).
#5
07/10/2009 (6:48 am)
@Pat: the problem is the shadows. Rendering the G-buffer isn't all that bad, but the shadows use a *lot* of samples from an R32F texture.

Anyone having performance issues: try turning the sun/scattersky shadow casting off before falling back to basic lighting.

The lightRays effect has the same issue. The shader uses 100 samples, which completely kills anything below a modern mid-range or an old high-end card.
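To see why 100 samples per pixel hurts, here is a rough estimate of the texture fetch traffic for a full-screen effect sampling an R32F texture (4 bytes per texel), as described above. The resolution is an assumption, and this ignores texture cache hits, so it is an upper bound on raw fetch volume:

```python
# Illustrative fetch-traffic estimate for a 100-sample full-screen shader.
# Resolution is assumed; cache effects are ignored.

width, height = 1280, 1024
samples_per_pixel = 100           # sample count quoted for the lightRays shader
bytes_per_sample = 4              # R32F: one 32-bit float per texel

fetch_bytes = width * height * samples_per_pixel * bytes_per_sample
print(f"{fetch_bytes / 1e9:.2f} GB of texture fetches per frame")
```

At 30 fps that would be over 15 GB/s of fetches for this one effect alone, which is in the same ballpark as the entire memory bandwidth of a low-end card, so the effect alone can saturate it.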