Game Development Community

dev|Pro Game Development Curriculum

Simulation: Scaling Processing Power

by Demolishun · 12/08/2011 (8:16 pm) · 8 comments

When we think of simulation within a game, it is usually within the confines of a single computer system. But what if your processing needs exceed the user's machine, and you still want to create the detail of a much bigger set of processing power?

I was flipping through the book "Tricks of the Game Programming Gurus". It is an older book and most of the information is outdated, but one thing in it sparked a thought. The subject was AI: in the really old computer game days, programmers would come up with "patterns" for the computer AI, because there was just not enough processing power to display the game and do deep AI "thinking" at the same time. Later it hit me: Why can't we offload the processing power for much deeper AI decisions to a machine on the internet? This is what an MMORPG is doing at some level.

Now, for practicality and speed, some things will not easily go off the machine; with, say, a vehicle simulation there would be a speed issue. For AI, however, a significant amount of depth could be achieved. Speed would not be critical beyond having enough bandwidth to send the information needed to make decent decisions.

Where am I going with this? For the current project I am cooking up, I am wanting to do a simulation with no human interaction except for decisions as to characters and equipment. The simulation will be run weekly, with results posted for users to check out. One thing I wanted was to allow a lot of AI in the simulation, but the engine will most likely be loaded just trying to display the large number of bots in the game. The video side is important to me because I intend to make this a spectator sport. So, wanting really sophisticated AI group thinking and individual thinking, I started considering options. That is when I found this: www.parallelpython.com/ It was the result of a conversation I had with someone here at GG about threads, and it really intrigued me.

I have come to understand that a lot of supercomputing power is controlled and synced through a Python interface. Not that the language itself is important, but it would allow a quicker development time to leverage those resources. Then I thought: if you have a dedicated process running in a thread next to the desktop or server Torque simulation, you could use it to coordinate external computing sources: programs on other cores or on other machines. That means you could hook in an Amazon supercomputing cluster for specific simulations, paying only for the time you use the cluster. This may not be really practical for an MMORPG to use constantly, so purchasing a cluster and maintaining it might be better in that case. However, who knows; in the future it may be less costly to rent that kind of computing power even when using it constantly. It certainly would be more conducive to scaling.
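As a rough sketch of the coordination idea, here is the same pattern using only Python's standard library. Parallel Python's own API differs, the worker pool here is local threads rather than remote machines, and the AI function is a toy placeholder:

```python
# Sketch: a coordinator farming "deep thinking" AI jobs out to
# workers. ThreadPoolExecutor is a local stand-in; Parallel Python
# (or remote workers) would distribute the same pattern across
# cores and machines. The AI logic is a toy placeholder.
from concurrent.futures import ThreadPoolExecutor

def deep_ai_decision(bot_id, world_snapshot):
    """Placeholder for an expensive AI evaluation."""
    score = sum(world_snapshot) + bot_id
    # With this toy snapshot the sum is even, so even-numbered
    # bots "hold" and odd-numbered bots "advance".
    return bot_id, "advance" if score % 2 else "hold"

snapshot = [3, 1, 4, 1, 5]  # toy world state shipped to the workers
with ThreadPoolExecutor(max_workers=4) as pool:
    jobs = [pool.submit(deep_ai_decision, b, snapshot) for b in range(8)]
    decisions = dict(job.result() for job in jobs)
print(decisions)
```

The coordinator only ships a compact snapshot and gets a compact decision back, which is why bandwidth, not raw speed, is the limiting factor.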

Now, thinking about my own project, renting some high-end computing resources each week to do highly resource-intensive simulations sounds like a practical thing for my goals. I hope this inspiration has the effect of getting people thinking about a potentially useful resource for game simulation. It certainly triggered some cool ideas in my mind as to what I could do with it.

About the author

I love programming, I love programming things that go click, whirr, boom. For organized T3D Links visit: http://demolishun.com/?page_id=67


#1
12/09/2011 (6:56 am)
I don't know about anyone else, but I would love to see you continue to write up blogs or threads about your progress with this. I've been doing work on cloud-based networks and separating out functionality from T3D for gaming for the last few months, and so far the SaaS-type designs show a lot of promise for bringing the quality of gaming up.

The more information the community has on using T3D with AWS and the cloud in general for simulations or gaming, the better!
#2
12/09/2011 (7:40 am)
Quote:
Why can't we offload the processing power for much deeper AI decisions to a machine on the internet?
...
I am wanting to do a simulation with no human interaction except for decisions as to characters and equipment.

That's a neat idea ...

In a panic, they try to pull the plug ... *cough* couldn't resist ;)



#3
12/09/2011 (11:00 am)
@Ted,
I will keep posting as I move forward. I am pretty close to figuring out some things with my SWIG work. I have been reading through the console code to determine how things work, and I am trying to decide whether to keep the current interface design or take a different direction. One of the things I am looking at is whether to allow a separate instance of T3D in each thread. If I go that route, you would be able to launch multiple servers in one Python app on a server, for instance. I figured that would be an interesting way to do multiple zones. Not sure if it will be a "good" implementation, but mostly I am doing it because I want to see what I can do with it.
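If separate threads don't pan out, a fallback is one process per zone, launched from a single Python app. A sketch of that shape, where run_zone_server is a hypothetical stand-in for booting a T3D dedicated server:

```python
# Sketch: one Python app launching one server process per zone,
# sidestepping any "one engine per process" restriction by giving
# each zone its own process. run_zone_server is a hypothetical
# stand-in for booting a T3D dedicated server.
import multiprocessing as mp

def run_zone_server(zone_name, result_queue):
    # A real version would initialize the engine here and enter
    # its main loop; this stub just reports which zone it owns.
    result_queue.put((zone_name, "running"))

ctx = mp.get_context("fork")  # POSIX-only; use "spawn" elsewhere
queue = ctx.Queue()
zones = ["forest", "desert", "harbor"]
procs = [ctx.Process(target=run_zone_server, args=(z, queue))
         for z in zones]
for p in procs:
    p.start()
status = dict(queue.get() for _ in zones)
for p in procs:
    p.join()
print(status)
```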

@Steve,
I will definitely put in a detection feature for anyone who disconnects thinking they can affect the outcome. Then when they log back in I will play the doom tones, "dun, duun, duuun", with the message "you were too late".
#4
12/09/2011 (11:21 am)
Interesting ideas.

I'm part of the AI Game Programming Guild, and we've talked about offloading AI from the CPU to a co-processor and/or a server farm, but, so far as I've heard, nobody has come up with an AI-related problem that could benefit from it. That's mostly because we tend to need "real-time" calculations.

I'm really interested in seeing which AI tasks you manage to offload.
#5
12/09/2011 (2:07 pm)
You could do wonders with access to a powerful planner, something like Drools. Again, that's possibly not a real-time solution, but for something like a strategic controller, planning in the domain of a half-hour to an hour of action, you could create procedural stories for groups of characters, creating game logic such as quests and encounters.

I reckon the down-and-dirty of per-character AI won't necessarily benefit that much from asynchronous AI processing; but once you start to talk about an entire town's worth of characters, over a significant stretch of time, the idea of a separate AI manager (similar to the concept used to manage games of Left 4 Dead, but more pervasive) becomes very interesting.
#6
12/09/2011 (2:31 pm)
I would think you could use it for a singleplayer/coop game with a *large* world that you wanted to be very dynamic.

Once the player(s) leave a certain area, hand the AI interactions off to a cloud service that can calculate very complex changes to that world in an "offline" state (marriages, alliances, assassinations, natural disasters, new businesses opening, older businesses being sold or going bankrupt, new buildings being built, crops being grown and harvested, NPCs moving out of the area or immigrating into it, cities growing in size, etc.). Allow the web service to talk to other zone managers and to the current player's zone so that changes in other parts of the world can affect the calculations, and I think you could have something pretty cool.
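The offline hand-off could be sketched roughly like this; the event rules and zone fields are made up for illustration, and a real version would run as a web service rather than a local function:

```python
# Sketch: an "offline" zone simulation stepped while no player is
# present. The rules here are toy placeholders for the kinds of
# changes named above (businesses opening, population drift, etc.).
def simulate_offline(zone, weeks):
    for _ in range(weeks):
        zone["population"] += zone["population"] // 10   # slow growth
        if zone["population"] > 120:
            zone["businesses"] += 1                      # town expands
    return zone

town = {"population": 100, "businesses": 4}
print(simulate_offline(town, weeks=3))  # → {'population': 133, 'businesses': 6}
```

When the player returns, the zone manager would hand this updated state back to the live server instead of re-simulating it locally.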
#7
12/09/2011 (4:25 pm)
FWIW:
I really have no clue as to what I am talking about. Keep that in mind.

I was thinking several stages to this:
- Local processes (on the server or client) that maximize the core usage of the machine. This would be fairly fast and could handle lower-level AI functions. So if you had two or four 8-core processors, that could give you some significant processing power: 16 to 32 cores depending upon the server, maybe only 8 for a single-CPU machine. I don't know what hyper-threading would do to this. Double? So let's say you have 5 AI per core (they are really deep thinkers); that would give you 16*5 or 32*5 AI.
- Be able to link into a cluster for higher-level functions such as strategy. You would probably need to abstract the data to represent something like chess pieces. "Group" thinking is more likely to happen here.
- However, I would think that with some care in the design even the cluster could handle some of the low-level stuff. Even if it had a 1 to 2 second delay on input, every AI in the system would have the same 1 to 2 second delay. For a pure simulation they would be equally handicapped.
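The tiers above could be sketched as a dispatcher that routes each AI request by how stale an answer is allowed to be. The tier names and thresholds here are invented for illustration:

```python
# Sketch: route AI requests to a tier based on how much delay the
# answer can tolerate. Tier names and thresholds are illustrative.
def pick_tier(max_delay_seconds):
    if max_delay_seconds < 0.1:
        return "local-core"   # low-level AI on a spare server core
    if max_delay_seconds < 2.0:
        return "cluster"      # group strategy on a rented cluster
    return "batch"            # weekly offline simulation runs

requests = {"dodge": 0.05, "squad-plan": 1.5, "world-history": 3600}
routing = {name: pick_tier(d) for name, d in requests.items()}
print(routing)  # {'dodge': 'local-core', 'squad-plan': 'cluster', 'world-history': 'batch'}
```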

Now, I have no clue what to put in the "think" process at this point; I have not gotten that far. I would think that an AI with 1/5 of the power of a 2GHz processor could work out some very significant pathfinding solutions. At the same time, an upper-level team-driven AI could coordinate goal objectives; if that had one processor's worth of power for each team, some really deep strategy could be done. It might even be enough to beat a human opponent without cheating! Okay, maybe not.
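As a toy example of the pathfinding workload in question, here is a breadth-first search on a tiny grid. The grid and the helper are invented for illustration; the offloaded version would grind through far larger maps:

```python
# Toy sketch of the kind of pathfinding an offloaded AI could chew
# on: breadth-first search on a small grid (0 = open, 1 = wall).
from collections import deque

def shortest_path_length(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (r, c), dist = frontier.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))
    return None  # unreachable

grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(shortest_path_length(grid, (0, 0), (0, 2)))  # → 6
```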

Hmmm, I wonder if some real-time pattern matching could be done. The hard part will be representing the data to the AI to give it an accurate world view from which to make decisions. Maybe it will be like the Matrix: version 1 will be too perfect and I will lose an entire batch of crops...
#8
12/10/2011 (1:25 am)
Nooooooooooooooo!!!!!!

Thwarted by "singleton already created"! Okay, that means more than one Torque in one app is OUT! Maybe for the server version, but probably not. So I am going to completely discard that idea, as I don't have time to pursue it anymore.

So one engine per app. It always comes down to this monogamous thing doesn't it?

Edit:
But I can run two instances of T3D on my machine at ONCE! Ha, take that you one timin' biotch!!!

I know, I know, no more sugar or caffeine for me...hey, I spelled caffeine right! Yessssssss!!!