Memory + System requirements consideration
by Michael Cordner · in Torque Game Builder · 07/13/2005 (9:04 am) · 6 replies
Hello everyone,
I'm a professional developer, very new to Torque 2D, but am currently considering using T2D for my start-up company's next game project.
One of the things that I'm having difficulty finding information on is how T2D manages assets internally, and how to organise a game to best control when and where resources are loaded. Our game requires a lot of large scrolling backgrounds (not built up of tiles, but screen-sized 'paintings', if you like) with several layers, and many detailed animations superimposed on top of them. One of our requirements is that the game should run smoothly on lower-end machines (not quite sure how low-end yet, but think sub-1 GHz, 128 MB RAM, with some kind of low-end 3D acceleration... say 32 MB of video memory). I have many questions around the implications using T2D would have on the game's design.
As an example of the kinds of issues I'm looking at... in our game design, one 'room' is a large scrolling area, say 8 physical screen widths at 1024x768. How well does T2D handle this, and more specifically, how do I set up the game script to manage the loading and unloading of several of these rooms? There will be several parallax layers on each of these scenes, so would each layer in this scenario take up a similar amount of memory (assuming, of course, the main viewable layer is always the last rendered layer)? There will be many background animations associated with each room, so I kind of assume that the requirements for an animation would follow the simple width*height*colour depth + any texture "padding" that's required (in the worst case, ignoring the eventual hardware representation).
Or perhaps not, myself being a relative noob to the technology. Does anyone have any experience with these kinds of issues, or know of a thread or document somewhere that outlines these considerations when designing a commercial quality game within t2d?
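The worst-case estimate described above (width * height * colour depth, plus padding) is easy to sketch. This is just back-of-envelope arithmetic, assuming textures get padded up to power-of-two dimensions, which many cards of that era required; the example numbers are illustrative, not from any real asset:

```python
def next_pow2(n):
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def anim_bytes(frames, w, h, bpp=4, pad_pow2=True):
    """Worst-case RAM for an animation: frames * (padded) w*h * bytes per pixel."""
    if pad_pow2:
        w, h = next_pow2(w), next_pow2(h)
    return frames * w * h * bpp

# e.g. a hypothetical 10-frame 200x150 animation at 32-bit colour:
print(anim_bytes(10, 200, 150))                  # 2621440 (padded to 256x256)
print(anim_bytes(10, 200, 150, pad_pow2=False))  # 1200000 (raw pixel data)
```

The gap between the two figures is the "padding" overhead the post mentions; oddly sized frames can more than double the worst-case footprint.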
#2
07/13/2005 (10:23 am)
My game, Little Gods, uses a lot of layers and some scrolling, but I found that I had performance problems with scrolling layers (and, in fact, under D3D scrolling doesn't work at all for fxScroller2D objects, although I'm pretty sure it does work for tile maps). For instance, the sky layer scrolls and I had a water layer that also moved, but adding the water layer was a 25% FPS hit, and the sky layer is almost as much. I'm certain this will be optimized in the future, so it won't be as much of an issue then.
Having said that, I do still have about 4-6 layers (only 1 of which scrolls, however) with acceptable speed (30-40 fps at 800x600 fullscreen). I have an older GeForce2 card, so newer cards will give better performance, but I'm also concerned about older systems running the game acceptably. As Matthew said, there are going to be tradeoffs.
The asset management is kind of a black box for me, but it seems well done. I haven't had any issues of running out of video ram yet, and I don't explicitly manage the art very much. I will be more certain once I get Little Gods on some lower end machines.
#3
07/13/2005 (12:27 pm)
Quote:In our game design, one room is a large scrolling area, say 8 physical screen widths at 1024x768. How well does t2d handle this, and more specifically, how do I set up the game script to manage the loading and unloading of several of these rooms?
At that point, you should be less worried about how T2D handles it and more worried about the underlying hardware.
1024*768 is 786432 pixels. Not that much, but you said that this room was 8 physical screen widths, so now it jumps to 6,291,456 pixels. Since each pixel takes 4 bytes, you get a grand total of 25,165,824 bytes, or 25MB to round off. I would not expect a 32MB graphics card to be able to store all of that in video memory at once, so there's probably going to be a bit of thrashing. Depending on how low-end you're thinking (pre-AGP), the thrashing penalty can get severe. Or OpenGL will fail to allocate the textures entirely (unlikely, but possible for non-AGP cards).
As it turns out, it's probably not too bad, since the textures that this large image will be broken up into will be manageable, so the driver will probably only store the few that you are currently using on the card. But if you're really serious about running something like this on 32MB cards, I would seriously consider hunting one down and seeing how it performs. And consider significantly reducing the resolution (and the resolution of the sprites/backgrounds) to 640x480. You're less likely to kill the fillrate of older hardware that way.
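The arithmetic above can be reproduced in a few lines, assuming 4 bytes per pixel for a single uncompressed layer:

```python
def room_bytes(screen_w, screen_h, screens, bpp=4):
    """Uncompressed memory for one layer spanning `screens` screen widths."""
    return screen_w * screen_h * screens * bpp

total = room_bytes(1024, 768, 8)
print(total)                  # 25165824 bytes, matching the figure above
print(total / (1024 * 1024))  # 24.0 MiB, per layer, before any padding
```

Multiply by the number of similarly sized parallax layers and it is clear why a 32 MB card would have to swap textures in and out.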
#4
07/13/2005 (2:25 pm)
Thanks, guys, those are all helpful responses.
I guess most of my questions are stemming from a general ignorance of how T2D works. I'd assume that ultimately the precise memory requirement for a game would be dependent on the card, driver, and rendering pipeline (assuming T2D itself doesn't go any further, and sends primitive and texture data into some virtual class implementation that handles the OGL or DX specifics).
Quote: As it turns out, it's probably not too bad, since the textures that this large image will be broken up into will be managable
That's more along the lines that I was getting at... in my last engine implementation, a large, arbitrarily sized image would be split up into 256x256 chunks (a texture size limitation on some older cards) with extra chunks to hold overflow from oddly sized images... one question I might have there would be around the specifics of T2D's implementation... does it handle large textures in a similar fashion, and if so, is there any control over the chunk sizes... what would happen with 'blank' chunks?
For example, if I have a parallax layer that has the same physical height as the main background image, yet is mostly made up of transparent pixels, do the completely transparent chunks still get drawn, rendered, and blended, or does T2D do some optimisation there before sending it to the underlying pipeline?
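The chunking scheme described above (split an arbitrarily sized image into fixed-size textures, with partially filled chunks padded out) can be sketched like this; the 256-pixel chunk size is the old-card limit the post mentions, and the waste calculation assumes 32-bit colour:

```python
import math

def chunk_grid(img_w, img_h, chunk=256):
    """Number of chunk columns/rows an image splits into, plus the bytes
    wasted by padding partial edge chunks out to full chunk size."""
    cols = math.ceil(img_w / chunk)
    rows = math.ceil(img_h / chunk)
    used = img_w * img_h * 4
    allocated = cols * rows * chunk * chunk * 4
    return cols, rows, allocated - used

# an 8-screen 1024x768 room is 8192x768 pixels, an exact multiple of 256:
print(chunk_grid(8192, 768))  # (32, 3, 0) -- no padding waste at all
print(chunk_grid(1000, 700))  # (4, 3, 345728) -- odd sizes pay for overflow
```

Sizing backgrounds to multiples of the chunk size keeps the "overflow chunks" from costing anything.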
Also, some questions surrounding the actual physical loading of a datablock... when is the underlying system call made to get the image off the disk, is there any way of controlling it from script, how does the threading behind it work, does T2D hold an intermediate version of the raw image data before sending it to the pipeline, and when is it freed?
I apologise for rambling, but these are the kinds of questions that come to mind as I try and figure out whether T2D is suited to our purposes. Again, I could just be talking nonsense out of ignorance, but I'm sure I'll have more specific and relevant questions as time goes on. I tried to have a look at the torque core docs to get a better sense of the underlying engine, but, only having bought T2D and not 3D, I was unable to access it.
One specific idea I am thinking about at the moment, though, is the concept of using the 2D equivalent of 'texture packs' for the game. Our artists work exclusively in Flash, and the plan is to export from Flash to PNG by merely setting the Flash stage area to the target resolution. I'm thinking, given T2D's use of its own, non-pixel-driven coordinate transform (applause), it should be possible to export, say, three different versions of every background and/or animation, and essentially present the option to use the lower-res versions on older systems.
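The texture-pack idea sketched above could look something like this. Everything here is hypothetical (the tier names, thresholds, and `art/<tier>/` layout are invented for illustration); the point is just that, because T2D positions objects in world units rather than pixels, the same scene can reference any of the exported resolutions:

```python
# Hypothetical asset-tier picker: each background/animation is exported
# from Flash at three scales, and a tier is chosen from detected hardware.
TIERS = {"high": 1.0, "medium": 0.5, "low": 0.25}

def pick_tier(video_mem_mb, system_mem_mb):
    """Crude heuristic; real detection would also consider CPU and driver."""
    if video_mem_mb >= 64 and system_mem_mb >= 256:
        return "high"
    if video_mem_mb >= 32 and system_mem_mb >= 128:
        return "medium"
    return "low"

def asset_path(name, tier):
    # assumed layout: same art exported once per tier, same world-unit sizes
    return f"art/{tier}/{name}.png"

tier = pick_tier(32, 128)   # the low-end target from the original post
print(tier, asset_path("room1_bg", tier))  # medium art/medium/room1_bg.png
```

An exposed user option could override the detected tier, as suggested later in the thread.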
#5
07/13/2005 (5:31 pm)
I'd be interested too when datablock data actually gets loaded.
Quote:
That's more along the lines that I was getting at... in my last engine implementation, a large, arbitrarily sized image would be split up into 256x256 chunks (a texture size limitation on some older cards) with extra chunks to hold overflow from oddly sized images... one question I might have there would be around the specifics of T2D's implementation... does it handle large textures in a similar fashion, and if so, is there any control over the chunk sizes... what would happen with 'blank' chunks?
That's exactly what fxChunkedImageDatablock2D does. At least as far as I know.
As for optimizations of blank chunks - my logfile says this:
OpenGL Init: Enabled Extensions
...
ARB_texture_compression
EXT_texture_compression_s3tc
So, if available, maybe texture compression is used (if not, it would be easy to code this feature yourself). And blank images should compress *very* well.
However, I would use a tilemap with very large tiles instead (~256x256). They are easy to set up, and you could easily just leave out the blank tiles, not just saving memory by not storing blank images but also saving rendering time (although there may be some optimisation ignoring fully transparent primitives). Writing a tool that analyses a PNG image and generates a Torque 2D tile layer from it doesn't sound too complicated.
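The core of such a tool is just scanning the alpha channel tile by tile and keeping only the tiles that contain visible pixels. A minimal sketch (operating on a plain 2D array of alpha values so it stays self-contained; a real tool would load the PNG, e.g. with Pillow, and use ~256-pixel tiles):

```python
def build_tile_layer(alpha, tile=2):
    """Split an image's alpha channel (list of rows) into tiles and return
    the (column, row) coordinates of tiles with any non-transparent pixel.
    Fully blank tiles are simply omitted from the layer."""
    h, w = len(alpha), len(alpha[0])
    kept = []
    for ty in range(0, h, tile):
        for tx in range(0, w, tile):
            if any(alpha[y][x]
                   for y in range(ty, min(ty + tile, h))
                   for x in range(tx, min(tx + tile, w))):
                kept.append((tx // tile, ty // tile))
    return kept

# tiny 4x4 example with 2x2 tiles: only the top-left tile has a visible pixel
img = [[255, 0, 0, 0],
       [0,   0, 0, 0],
       [0,   0, 0, 0],
       [0,   0, 0, 0]]
print(build_tile_layer(img))  # [(0, 0)]
```

A second pass could also deduplicate identical tiles, which for mostly-blank parallax layers saves even more memory.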
#6
07/14/2005 (11:44 am)
Quote:For example, if I have a parallax layer that has the same physical height of the main background image, yet is mostly made up of transparent pixels do the completely transparent chunks still get drawn, rendered, and blended, or does T2D do some optimisation there before sending it to the underlying pipeline?
I seriously doubt it. What constitutes "transparent" is entirely dependent on the blending parameters; just because the alpha is 0 doesn't mean that it is transparent.
Quote:And blank images should compress *very* well.
Texture compression doesn't work like JPEG, PNG, or any compression format that you're used to. If you have a 256x256 image and you want to compress the color and alpha, you will get an image of 65,536 bytes in size, guaranteed.
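That fixed size follows directly from how S3TC/DXT works: every 4x4 pixel block compresses to the same number of bytes regardless of content, so a blank image is exactly as large as a busy one. As a quick check of the figure above:

```python
def dxt_bytes(w, h, block_bytes=16):
    """S3TC/DXT compressed size: each 4x4 pixel block is a fixed size,
    8 bytes for DXT1 (colour only) or 16 bytes for DXT5 (colour + alpha).
    The ratio is content-independent."""
    return (w // 4) * (h // 4) * block_bytes

print(dxt_bytes(256, 256))     # 65536 -- the guaranteed size quoted above
print(dxt_bytes(256, 256, 8))  # 32768 -- the same image as DXT1, no alpha
```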
Matthew Langley (Torque 3D Owner)
The rest comes down to "level design." Just like in 3D level design, it's all about where you place things and how you place things. For example, with a highly graphical water system you want to keep any area from showing too much water, at least in areas where a lot of 3D assets will need to be loaded. To avoid this you make more hills, or more obstacles that keep too much water out of view, at least for areas where a lot of animated meshes might need to be rendered and collision needs to be processed.
The same goes for 2D level design (I'm sure I'm preaching to the choir, just answering to the best of my ability :). You want to really make sure you keep your particles highly optimized. Optimized in the sense that you don't have a huge amount of dense particles in one area. That can kill the performance fairly quickly. You may want to set up different detail levels that the user can choose (for lower- and higher-end systems). There are a lot of ways you can set up options to add/remove visual effects that may kill performance. The biggest performance killer that I've found is particles; if you aren't careful you can really hurt your performance badly with one very inefficient particle effect. So I'd suggest designing them to be as sparse as possible while still getting the effect you want :)
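The detail-level suggestion above might be wired up like this. This is a hypothetical sketch (the option names and scale factors are invented): expose a detail setting and scale each emitter's particle count by it, so low-end systems render a sparser, cheaper version of the same effect.

```python
# Hypothetical detail scaling for particle effects.
DETAIL_SCALE = {"low": 0.25, "medium": 0.5, "high": 1.0}

def scaled_particle_count(base_count, detail):
    """Scale an emitter's particle budget by the chosen detail level,
    keeping at least one particle so the effect never vanishes entirely."""
    return max(1, int(base_count * DETAIL_SCALE[detail]))

print(scaled_particle_count(200, "low"))   # 50
print(scaled_particle_count(200, "high"))  # 200
```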