Game Development Community


Earthshire (Working Title) April Development Diary

by Ves · 05/03/2007 (4:46 pm)

Well, it appears that I need to make monthly blog entries as part of the requirements for the 1-year MMO contest. This is fine; I have wanted to blog some of my progress and thinking even at this really early stage of development (at the very least as a reference for me to come back and look at later on).

You can read my contest entry here. It's not much, and it certainly is not an indication of how fleshed out the concepts I am working towards are. I have a really good sense of where I am heading even though it may "appear" a little unfocused initially. In the coming months I suspect it may be surprising how this starts coming together.

Since this is my first blog entry on GG, it is probably prudent to at least discuss the engines I am using. After quite a bit of research, the platform I am focusing on for the contest entry is TGEA modified using the soon-to-be-released Torque MMO Kit (this is based off of Minions of Mirth; I am using the community edition of that kit right now). I expect to make quite a few modifications to the Torque engine (C++ side) to get things the way I want them, but most of those will need to wait until the pro edition of the MMO Kit is released (the community edition does not include engine source, and Josh/PG has made quite a few modifications to the source that I need to see first to understand what I am dealing with server side). I am hoping that the server setup for MoM (login, character, world, zone clusters, etc.) will save me a lot of time and allow me to get a prototype up quickly.

I am taking an Agile approach to development. Clearly my requirements are very fluid, and I am not nearly familiar enough with Torque or the MoM kit to make hard decisions at this point on very specific game design elements. But I need to start somewhere (which is fitting for the Agile process as well), and the contest requires a design document. I am in the process of transcribing this (from head to paper), and it looks like I have a good month to complete it (so next month's blog should have something about this... probably me complaining that it took too much time and then admitting how great it was for me to do it anyway).

In any case, the Agile approach dictates a consistent feedback loop, which means that I am shooting for a very feature-stripped prototype in the short term. I originally had set an internal deadline of early May (for something... anything... playable). But that did not happen; it was too aggressive, and I just don't know enough about the tools I am working with to deliver that yet.

Anyway I need to set some short term goals for May so here we go...

My goals for May revolve around:

- a playable version of the game for one challenge area (even using MoM as-is, just something playable)
- a working auto-generated AI pathfinding system
- a completed design document



MoM and Python

Now, at first I had some real reservations about using MoM (Minions of Mirth) as my MMO engine framework. I played MoM a little and there were parts that I didn't like, and looking at the engine documentation it seemed needlessly difficult in spots. And I really didn't want to learn another language. To be honest, I am just getting over the first few humps of TorqueScript. Throwing Python into the mix was the last thing I wanted. I never really saw the advantage of Python, and having met many PHP/Perl fanatics in my time, I "thought" this was simply another hopelessly monogamous scripting-language zealot pushing "their style" onto me without a really good reason.

This has turned out to be a great mistake in thinking on my part. Sometimes you need to drop pride and sweeping generalizations and evaluate things professionally. In any case, I see the light now. If Josh is a Python-peddling zealot, he has won another convert in me.

After spending time going through the tutorials that come with Python (Windows version), I started to see how easy it was to implement certain things right out of the box (database support, remote server communication, data manipulation, etc.). It makes a remarkable amount of sense as an MMO server engine language. I was so impressed that I started writing Python for certain tasks relating to my real job (web/database/server development).
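As an illustration of what I mean by "right out of the box": this hypothetical snippet (the table and names are made up for the example) stands up a working database with nothing but the standard library.

```python
import sqlite3

# In-memory database purely for illustration; a real server would use a file.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE characters (name TEXT, level INTEGER)")
conn.execute("INSERT INTO characters VALUES (?, ?)", ("Ves", 1))
conn.commit()

row = conn.execute("SELECT name, level FROM characters").fetchone()
print(row)  # ('Ves', 1)
```

Doing the equivalent from TorqueScript means wiring up an external library; here it is a handful of lines with no dependencies at all.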

I still hate the interfacing of Python with Torque and TorqueScript, as it seems very rough, but I love the language. I really wish Torque natively used Python instead of TorqueScript. But I am done discriminating against technology; there is likely a good reason TorqueScript makes sense too.

World Building

This past month I was able to flesh out a first rough draft of a zone, at least in terms of layout and the challenges that will be present in it. This particular zone is a single poor district of a large castle complex that will eventually grow to encompass many zones. I first drew the entire castle outline in Photoshop, which as it turns out was a great idea. I was able to take this rough outline and form selection areas in Photoshop that were easily used to modify a heightmap that I auto-generated in L3DT. I then overlaid the outline in parts onto the heightmap so that eventually I was able to basically "carve" the castle foundation structure into the heightmap. Once I imported this back into L3DT, I was able to generate the terrain that will be used for the entire castle complex (spanning many zones). I then exported the unique texture from L3DT and loaded it up in Photoshop.

The outline I created initially for the castle complex could then be overlaid as a layer in Photoshop. This proved very valuable, as I was able to add many other helpful layers (such as a grid; subzone area markers, i.e. run-down houses in this area, monastery here, challenge area here; wall and tower placement points; etc.). I could then save the texture with just the appropriate layers I wanted to work on for the day. I had modified a main.cs that was prepped and ready to simply take the geometry I exported from L3DT and the texture I exported from Photoshop (with the appropriate layer overlays) and recreate the .atlas file on the command line before starting up the mission editor. This worked wonders for me: when I was working in the mission editor, I could see and use the zone design information baked right into the terrain texture.

Using Atlas terrain also gives me the advantage of creating a huge map that spans multiple zones. Using one map gives me a better sense of spatial awareness in terms of the zone lines. When in the mission editor, I am using the same terrain, just starting at a different location on that terrain and setting the mission area to the area that the zone encompasses (which is far smaller than the whole Atlas map). My guess is that by doing it this way the world will seem more seamless even if it is broken into separate zones. Looking beyond the zone lines lets you see the exact Atlas terrain for the next zone over. For objects that are shared between two zones (walls separating zones at zone lines, large towers in the distance, etc.), you need only copy the raw script object defining it from one mission file to the other (the positional data is the same). The real truth is that it was just easier to design and work with one Atlas map (even if it is much, much bigger than a normal map). The only downsides are that I am working with 100+ MB texture files in Photoshop (which honestly isn't that bad) and that the L3DT data takes a lot longer to generate.

Pathfinding in TGEA

Well, I am waiting a bit for the actual pro version of the MoM MMO Kit for Torque to be released before I start building a lot more content, but that does give me some time to familiarize myself with raw Torque a bit more (outside of the MMO Kit) and to work with the MoM community edition a bit more. One of the things I dislike about Torque is the pathfinding.

I just don't like putting in pathfinding waypoints. It seems like a thankless job that you get to tweak on a level-by-level basis (stupid bot getting stuck... sigh, add another waypoint). It's a completely logical job too, meaning you will always be adding waypoints at logically appropriate places (at corners, doorways, etc.). It doesn't make any sense to leave this process unautomated.

I am also starting to come to grips with just how much I need to do in a small amount of time. Content will be a large part of the game I finally deliver, and tweaking NPCs is a large part of MMOs. It will be well worth the time invested to have an automated tool for setting up NPCs to navigate from one place to another.

So I have decided to try and come up with a "solution" for automatically adding pathfinding data to mission maps.

I started from the premise that I would do something similar to the other solutions you have seen here (most of them unreleased). I tried building a grid of waypoints every X increment on the map and then using A* to navigate. On a 1000x1000 map this gives you roughly 1 million waypoints (assuming you want a waypoint every meter, and that's assuming 2D...). Clearly, navigating 1 million waypoints is unreasonable even with A*, so I thought to go down a path similar to what others have done and group the waypoints.
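To make the grid idea concrete, here is a minimal sketch (my own illustration, not the contest code) of A* over a small walkable/blocked grid with 8-way movement, where every step, diagonal included, costs 1:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 2D grid of 0 (walkable) / 1 (blocked) cells, 8-way movement.
    grid is indexed grid[y][x]; returns a list of (x, y) cells or None."""
    def h(a, b):
        # Chebyshev distance: admissible when diagonal steps also cost 1.
        return max(abs(a[0] - b[0]), abs(a[1] - b[1]))

    open_heap = [(h(start, goal), start)]
    parent = {start: None}
    g_cost = {start: 0}
    closed = set()
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:
            # Walk the parent chain back to the start.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        x, y = node
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == 0 and dy == 0:
                    continue
                nx, ny = x + dx, y + dy
                if not (0 <= ny < len(grid) and 0 <= nx < len(grid[0])):
                    continue
                if grid[ny][nx] or (nx, ny) in closed:
                    continue
                ng = g_cost[node] + 1
                if ng < g_cost.get((nx, ny), float("inf")):
                    g_cost[(nx, ny)] = ng
                    parent[(nx, ny)] = node
                    heapq.heappush(open_heap, (ng + h((nx, ny), goal), (nx, ny)))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (0, 2))
print(path)  # a 5-cell detour around the blocked middle row
```

This works fine at toy scale; the point of the paragraph above is that it falls over when the open/closed sets have to cover a million cells.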

My thinking was that there would be a large amount of space where each waypoint could see all of the other waypoints around it (waypoints in this grid-like structure only connect to their 8 closest neighbors: NW, N, NE, E, etc.). We would have huge blocks of points that can basically all see and navigate to each other in a straight line (so we don't need A* for navigation within such a block). I wanted to group all of these types of points and make it seem as if each group of points were a single node in the A* navigation map. This required me to take a map of waypoints and generate a list of as few rectangles as I could manage that encompassed the greatest number of points. I would then take these rectangles and A* between them for normal navigation. This (in theory) made the possibility of successfully navigating that many points in real time a reality.

My biggest fear was constructing rectangles that encompassed the largest number of points possible. I did solve this, though, by writing some code offline to find the largest rectangle in the area, then the second largest, and so on. I was even able to do it in a reasonable amount of time (brute-force checking was a ridiculous O(N^3)+ operation; on a million possible points that was basically a no-go).
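For the record, the largest-rectangle search doesn't have to be brute force. This sketch (again my own illustration, not the offline C# code mentioned above) uses the classic trick of reducing it to a "largest rectangle in a histogram" pass per row:

```python
def largest_walkable_rect(grid):
    """Area of the largest all-walkable (0) rectangle in a binary grid.

    One histogram pass per row makes this O(rows * cols) overall,
    versus the O(N^3)+ of checking every candidate rectangle."""
    if not grid:
        return 0
    width = len(grid[0])
    heights = [0] * width  # run of consecutive walkable cells per column
    best = 0
    for row in grid:
        for x in range(width):
            heights[x] = heights[x] + 1 if row[x] == 0 else 0
        # Largest rectangle under the current histogram (monotonic stack).
        stack = []  # (start_index, height)
        for x, hgt in enumerate(heights + [0]):  # trailing 0 flushes the stack
            start = x
            while stack and stack[-1][1] >= hgt:
                s, sh = stack.pop()
                best = max(best, sh * (x - s))
                start = s
            stack.append((start, hgt))
    return best

grid = [[0, 0, 1],
        [0, 0, 0],
        [0, 0, 0]]
print(largest_walkable_rect(grid))  # the 3x2 (or 2x3) block of 0s: area 6
```

Finding the second-largest, third-largest, and so on would mean marking the found rectangle as blocked and repeating, but the per-pass cost stays linear in the number of cells.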

I got pretty far into constructing this navigation style offline (meaning developing it in C# so I could test the logic outside the encumbrance of the other Torque code) before I had one of those shower-time epiphanies.

It occurred to me that storing 1 million points marking where a bot can go is inefficient. This data is simply not nearly as significant as where a bot "can't" go. This isn't intuitively obvious at first glance, since the actual act of navigating is done by telling the bot to "start at this specific place and go to some other specific place" and not "don't start at all of these places and then don't go to all of these places". The reversed thinking fails at first glance. But once we get beyond the first steps of starting at a specific place and heading to another specific place, we end up with the actual engine reality of "we can't go to these places on our way to our destination". That is the actual problem we are trying to solve. We really don't care where our bot goes (as long as it reaches its destination); what we really care about is where it does not go (through walls, up cliffs, etc.).

This reversal of thinking was critical for me. It has quite possibly been intuitively obvious to a lot of others who have worked on this before, but I honestly haven't stumbled on anyone writing about this problem and framing it like that.

With that in mind, it occurred to me that the most efficient way of constructing an AI grid map would be to construct "edges" marking where the bot can't go. Hang with me for a second. These edges are simply lines that a bot can't cross. If I could construct a map of these lines, I could then generate a series of convex polygons where, for each convex polygon, at least one of its sides is a no-go edge (the other sides would connect it to neighboring polygons). These convex polygons would then be used as A* nodes for navigation. The bot would then get a path to navigate along just as before. And the amount of data and processing used to "generate" this path should be far less than in any other approach because (and here is the core assumption that this "theory" rests upon) there are far more places you can go than places you "can't" go, so testing places you can't go is the more efficient option.

I've talked about this theory in 2D up to this point, but in reality we are navigating in a 3D world. However, understand that from the point of view of a bot, in terms of moving from one place to some other place, and assuming the bots do not fly, we are simply moving in 2D (it moves in x,y; it falls or climbs in z based on map data outside of the bot's control). A better way to think about it is to understand that bots are confined to a surface, and while surfaces can overlap, movement is still confined to a single surface at any given time. Movement across any surface can be reduced to 2D movement. To be honest, I do need to determine which "surface" a point lies on. If we consider a 10-story building, each inside floor surface of that building has a point that is, say, 10,10 (x,y), and the identifier for the actual surface (ground floor, floor 1, floor 10, etc.) is what uniquely identifies a point in real 3D space. As long as the surfaces are distinguished in 3D space (floor 1 can't be in the exact same 3D physical location as floor 2; it must be offset in z, say -10), we should be able to identify them uniquely and translate between the two coordinate systems (i.e. 10,10,30 to 10,10,surface1).
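A toy sketch of that coordinate translation might look like this. The flat, fixed-height floors and the names are simplifying assumptions purely for the example; real surfaces would carry actual height data:

```python
# Hypothetical flat floor surfaces at known heights (illustration only).
SURFACE_HEIGHTS = {"ground": 0.0, "floor1": 10.0, "floor2": 20.0}

def to_surface_coords(x, y, z, tolerance=0.5):
    """Map a world-space point to (x, y, surface_id), or None if it
    lies on no known surface."""
    for name, height in SURFACE_HEIGHTS.items():
        if abs(z - height) <= tolerance:
            return (x, y, name)
    return None

def to_world_coords(x, y, surface_id):
    """Inverse mapping back to world-space (x, y, z)."""
    return (x, y, SURFACE_HEIGHTS[surface_id])

print(to_surface_coords(10, 10, 10.2))  # (10, 10, 'floor1')
```

The point of the exercise: once a point is tagged with its surface, everything downstream (edges, polygons, A*) can stay 2D.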

Thinking about 3D movement as 2D movement across a surface, it becomes a little more obvious how "edge" avoidance can make some sense. If we needed to list every polygon used to define every structure on the map, then we would need to use 3D polygons oriented in 3D space, not 2D lines oriented on a surface plane, and the amount of data required to represent and process a path for navigation increases dramatically (this is a bit like the difference between an O(n^3) operation and an O(n^2) one). We should already have been thinking this way if we were generating waypoints in a grid-like fashion. When we reduce a map in this fashion, collision testing breaks down to testing whether a line segment crosses another line segment, which is a very simple and relatively CPU-insignificant test.
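That segment-crossing test really is cheap; a sketch (my own illustration) using 2D cross products:

```python
def cross(o, a, b):
    """2D cross product of vectors OA and OB; the sign gives the turn direction."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_cross(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2.

    Each segment's endpoints must lie on opposite sides of the other
    segment. Collinear/touching cases are ignored for simplicity."""
    d1, d2 = cross(q1, q2, p1), cross(q1, q2, p2)
    d3, d4 = cross(p1, p2, q1), cross(p1, p2, q2)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

print(segments_cross((0, 0), (2, 2), (0, 2), (2, 0)))  # True: an X shape
print(segments_cross((0, 0), (1, 0), (0, 1), (1, 1)))  # False: parallel
```

Four multiplications and a few comparisons per edge: that is the whole per-edge cost of testing a proposed movement line against a no-go edge.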

The question still remains as to how to generate this "edge" map and how to generate the convex polygon list that we will use for A* navigation. I am currently building in Torque a TestPath player, which is simply a normal Torque AIPlayer that starts at a point and tries to move in the 8 standard directions (45-degree angles starting at NW), then starts at another point (usually where the move left us after the first one) and tries the same operation all over again. This is similar to what had to be done to generate the grid map of waypoints (though I think most implementations used a form of ray casting from each location to determine if movement was possible, and didn't spawn a specialized bot). By doing this (using a bot and moving it around), we can use the current style of player movement code to eliminate all of the insignificant polygon data on a map. Meaning that if I move east and there is a step there, the normal player movement makes me simply step up and start moving on top of the step surface. From a surface standpoint, in terms of "where I can't move", that step is completely insignificant; I never even need to know about it.

The player movement code needs to be heavily modified to test the way I am attempting to, since there is a ton of physics in this block of code that we really don't want done (i.e. while we want to navigate up and down across terrain and steps, we do not want to slide horizontally across vertical surfaces, since we really want to mark the hitting of that surface as a point on an "edge"). After this test player bot tries moving from a series of locations, we can then use the data generated from where it stopped (hit a wall, etc.) to generate the edge map. I can collapse these points into lines with some tolerance for error (if a point is 0.1 off of a line, then it likely lies on that line, etc.).
To avoid too many small convex polygons, it might be useful to find a way to decrease the detail of the edges by forming bounding boxes around groups of them.
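The "collapse points into lines with some tolerance" step might look something like this sketch (a naive greedy pass over an ordered trail of hit points; the real implementation would need to handle unordered points and corners more carefully):

```python
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from p to the infinite line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return num / math.hypot(bx - ax, by - ay)

def collapse_to_segments(points, tolerance=0.1):
    """Greedily merge an ordered trail of edge hit points into segments:
    keep extending the current segment while every interior point stays
    within tolerance of the line through the segment's endpoints."""
    segments = []
    i = 0
    while i < len(points) - 1:
        j = i + 1
        while j + 1 < len(points) and all(
            point_line_dist(points[k], points[i], points[j + 1]) <= tolerance
            for k in range(i + 1, j + 1)
        ):
            j += 1
        segments.append((points[i], points[j]))
        i = j
    return segments

# A noisy straight wall followed by a 90-degree corner collapses to 2 segments.
hits = [(0, 0), (1, 0.05), (2, 0), (2, 1)]
print(collapse_to_segments(hits))  # [((0, 0), (2, 0)), ((2, 0), (2, 1))]
```

The tolerance here plays the same role as the 0.1 figure above: near-collinear hit points get absorbed into a single edge rather than spawning a polygon vertex each.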

This method of scanning is clearly not the most accurate way to determine where the edges are. But the problem is that I am not just scanning for edges of a certain slope (it's easy enough to get that information from the normal of the collision polygon); I am scanning for edges of a certain slope that exist on a given surface. The caveat is that there will be many surfaces I don't care about (a rooftop we can't reach, gaps where a bot can't fit, etc.). Giving the bot a starting location and only storing info on areas it can get to seems to be the only viable way to eliminate all of these extraneous surfaces and associate the found no-go edges with the proper surface. I am going to think more on this, though. Mathematically manipulating the raw data to generate these surfaces would seem to be the most accurate approach (collapse z to 0 for polys whose normals we can walk across and collapse z to 1 for polys whose normals we can't get past). Handling steps would of course be an exception that creates complications.

Defining the dividing lines of these surfaces will also be troublesome, and determining where the surfaces overlap will be an issue (since I am storing nothing but edges, I don't have any indicator of where I actually am in z space, just what surface I am on). Including z data to define the space between edges could be a solution, but then we are back to storing 1 million vectors for a 1000x1000 map, since I would need to store z data at a regular interval to be accurate (more than that, since we have overlapping surfaces). At that point it's better to use the real 3D data. Though in truth, I only need to store the actual z data of a surface until all of the surfaces that the bot could reach are built; after that I can throw it away. And navigation (which is the big concern in terms of execution speed) doesn't need this data.

I am hacking through this in any case. It's taking longer than I want it to, but I am gaining valuable insight into the inner workings of the engine. Knowledge is often undervalued in this industry in terms of project development. I would bet that hardly any project management documents include time for the implementers to learn and understand the problem domain before they implement; even more importantly, the designers need to be complete experts to make good design decisions. Everyone just assumes that everyone designing or implementing is already an "expert" in the problem domain or can get to expert level in an insignificant amount of time. I am especially bad about this. You will never catch me admitting that it takes a long time to learn anything, but even after working with the same code base for literally years at work, I still stumble across something every now and then that I didn't know. Or I find that something works in a slightly different way than I was expecting (with profound engineering consequences, of course).

Well that's enough for a first blog post. If you made it this far I hope you got something out of it.