Economics: a zero sum game?

Recently, while replaying some old favourites of mine such as Mount and Blade and the Total War series, I got to thinking about one of my favourite topics at university. It came up in a number of different projects, and today I decided to revisit the latest and most advanced of them. In its simplest form, it was an economic simulator set in space, with multiple interstellar colonies and traders shipping goods between the colonies while travelling faster than the speed of light. The end result had a couple of bugs (including one fairly interesting one which made it into the final submission), and the UI was fairly crude and non-functional in places.

Here is the post mortem I wrote at the time (1055 words). To summarise: I didn’t set exact goals for what I wanted to encapsulate and test in the simulator, I made an over-ambitious attempt to include facets of a previous (similar) project, and I had difficulty developing the UI to a level that could satisfactorily display the reams of data being processed behind the scenes (also, haha, I blame the libraries for something at one point). I did end up building a very nice demonstration of node traversal, although most Comp Sci grads would probably say what I chose to demonstrate is fairly simple. I like to think the setting more than made up for that.

The PM also lays out how the simulator actually functions, which I’ll be referencing later when I dig into some things I want to revisit with it specifically.

IND302 Space Sim – An attempt to simulate a real world economy in a fictional framework
Miles Whiticker

Approaching the end of the project, I looked back over my progress to see how it compared to what I had initially envisioned. Although going into the project I was quite sure what it would involve (if not exactly what the end result would look like), I definitely hadn’t pinned down the specifics – and an extremely enthusiastic week 3 spent designing did not help to produce an achievable goal. Mentor feedback on the learning contract also reflected this, although I did not take it on board until later.

Although it’s said that proper planning is essential to producing a good end result, I think the lack of concrete plans (while hindering me in many ways) did not ultimately stop me from getting there. The one thing I knew I wanted the simulation to have was ships carrying goods between planets, and the ability to slow down and speed up time to see those goods having an effect (as well as the ships zooming around). That provided a solid core around which to base my early design considerations (some of which, as I mentioned above, went quite overboard). As well as this, I wanted the project to showcase my skills in developing AI and complex, realistic systems – and what better way to do that than to develop a [limited] economic simulator?

The economic simulator would be the focus. I decided that colonies (the primary ‘agents’) would require a functioning industrial backbone in order to provide the primary supply and demand. Therefore, colonies would need to be able to build factories to turn materials into goods. Like any economy, though, it would be affected quite strongly by its labour force; I represented this by applying a percentage modifier to productivity for each factory that did not have a full complement of employees. Employees (in the form of colony population) were the next most complex subsystem and would need to be carefully maintained by the colony AI as well. The population of a colony required housing to grow, as well as hourly supplies of food and water (and oxygen if the atmosphere wasn’t breathable). Power, which consumed fuel, would also be required to keep everything running. It was a delicately constructed balance – just like the real thing.
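For the curious, the staffing penalty boils down to something like the following sketch. The names and numbers here are illustrative, not the original code – the point is the linear productivity modifier for under-staffed factories.

```cpp
#include <algorithm>

// Hypothetical, stripped-down factory: output scales linearly with staffing.
struct Factory
{
    int requiredWorkers = 10;      // full staffing level
    int assignedWorkers = 0;       // filled in by the colony AI
    float baseOutputPerHour = 5.0f;

    float HourlyOutput() const
    {
        if (requiredWorkers <= 0)
            return baseOutputPerHour;
        // Clamp to 1.0 so overstaffing gives no bonus.
        float staffing = std::min(1.0f, static_cast<float>(assignedWorkers) / requiredWorkers);
        return baseOutputPerHour * staffing;
    }
};
```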

Working through development, a major pitfall became apparent relatively early on in the development of the UI. Because the simulation was driven by a series of complex systems, a large amount of rapidly changing data would need to be presented to the user in several different forms. This data came in 17 different resource types and 18 different infrastructure types, and these were combined or used to calculate secondary and tertiary data which was also output to the user. A mockup for the UI was developed in week 5, but it was redesigned several times over the next few weeks, and each time the entire UI system had to be overhauled (see weekly reports).

Early in UI development, I took the shortcut of mass-outputting all the primary data (and largely ignoring the secondary and tertiary data). This took the form of looping over all resource/industry types and outputting them in a uniform table. The later designs were much more user-friendly and presented secondary and tertiary data as well, but conversely took significantly longer to implement and were one of the major contributing factors to the mid-project delay.

The other notable failure of the project was the trader AI, which never became as realistic or complex as planned. As I mentioned above, major delays during the middle of the project reduced the time available for the intermediate and later goals. In its final implementation, the trader AI was crudely represented by simply buying cheap goods, selling expensive goods and moving between random destinations. The initial concept documents called for destinations to be chosen dynamically based on profitability and distance as primary factors (with various personality characteristics affecting other choices as secondary factors). As in previous projects, I had planned to develop a probability system allowing for random decisions across a weighted tree of possible actions, using concepts such as Bayesian probability and fuzzy logic.
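For contrast, the shipped behaviour amounted to little more than this kind of sketch – hypothetical, stripped-down types rather than the simulator’s actual classes, which track many goods per colony:

```cpp
#include <cstdlib>
#include <vector>

// Hypothetical one-good colony with a single price.
struct Colony { float price = 15.0f; };

struct Trader
{
    int cargo = 0;
    Colony* destination = nullptr;

    // Roughly the behaviour that shipped: buy where goods are cheap,
    // sell where they are expensive, then pick a random next stop.
    // Assumes 'colonies' is non-empty.
    void TradeAndPickDestination(Colony& here, std::vector<Colony>& colonies)
    {
        if (cargo == 0 && here.price < 10.0f)
            cargo = 100;                    // buy cheap goods
        else if (cargo > 0 && here.price > 20.0f)
            cargo = 0;                      // sell expensive goods

        // No profitability or distance weighting: just a random destination.
        destination = &colonies[std::rand() % colonies.size()];
    }
};
```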

Fortunately (or not), another system turned out to be crucial for trader navigation. As the original concept relied heavily on trader movement across the various ‘views’ or ‘levels’ (representing nodes in a multi-branched tree), I realised it called for a tree traversal algorithm custom-tailored to this specific situation. Research into viable algorithms to base it on showed that I would need what is effectively an iterative, reversed depth-first traversal (based on a sample implementation I had prepared as the prototype). The specifics of the traversal were as follows:

– Starting from the destination node (a colony somewhere), loop upwards until you either find a node that has the same parent as the current location OR you reach the root node.
– For each successive iteration that does not find a matching parent, move the current location up one level.

This algorithm resulted in a target level ‘above’ or on the same level as the ship’s location (represented by moving up in scale, from planet -> star system -> stellar group, etc). Once the ship reaches that level, it travels ‘across’ the level in 2D space until it reaches the destination, then travels ‘down’ the tree to the destination (moving down in scale, e.g. from stellar group -> star system -> planet). The way I set up travelling ‘up’ and ‘down’ the nodes also meant ships only went up a level when they reached the edges of the screen, and when moving down a level they ‘arrived’ at the edges of the screen, creating the effect of the ship coming in from some distant, offscreen location.
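Roughly, the lockstep climb up the tree looks like the sketch below. Node is a hypothetical stand-in for the actual level/view classes, and this is only the shape of the idea, not the original implementation:

```cpp
// Hypothetical node type: the real tree stores planets, star systems,
// stellar groups and so on, each pointing at its parent 'level'.
struct Node { Node* parent = nullptr; };

// Climb the destination and the ship's current location up the tree in
// lockstep until they share a parent (or one side hits the root). The node
// returned is the level the ship climbs up to before travelling 'across'
// and then back 'down' to the destination.
Node* FindClimbTarget(Node* current, Node* destination)
{
    while (current->parent && destination->parent
        && current->parent != destination->parent)
    {
        current = current->parent;          // no matching parent yet: move the ship's side up a level
        destination = destination->parent;  // ...and the destination side too
    }
    return destination;
}
```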

Reflecting on the goals of this project, I never fully determined exactly what hypothesis I was testing, or whether I even had one to test. Ultimately, I think the interlocking systems and subsystems of the project were a worthwhile goal in themselves, and combined with the learning experience I underwent, I believe the project more than achieved its aims.

After updating various libraries (SFML and SFGUI, two codebases I’ve been doting on for almost three years now) to more recent versions, I was quickly reintroduced to the one significant bug I was unable to fix before submission. Somewhere in the route planner for my traders I’d failed to catch a default case (no destination, heading, or something similar), and as a result, after a few years of simulated time, they would inevitably set off to seek their fortune in a galaxy far, far away – which made the app crash when their ships reached the edge of the screen on the most “zoomed out” display mode. I say this bug was fun because at the time I was certain that, with a few more days, I could have cracked it. I’d quickly and clearly pinpointed the issue (and it would have been the quickest fix ever), it’s just… ah.

Putting aside that bug, and the bugbear I had awoken while building the UI for that project, the other thing that jumped out at me was the course of development of the colonies. Although I’m fairly sure there was a bug in displaying some data for certain colonies (towards the end I was getting incredibly frustrated with rebuilding the UI from scratch so many times), most colonies would eventually stabilise at a population of a few hundred people, while one or two colonies sometimes managed to overcome the initial challenge of settlement and grow steadily for as long as I ran the simulation (the lucky ones, I guess).

Going over the algorithm that decided which types of infrastructure to upgrade in the colonies, I think I’ve picked out what caused most colonies to eventually stall in development (and thus growth). First off, all development required construction resources (circuitry, components, sheet metal, girders). First priority went to maintaining existing infrastructure; whatever was left over could be used to build more. So naturally, once construction resources flatlined, so did development. As a direct consequence of having no construction resources, all infrastructure built by that point would start falling to pieces because there were no resources to maintain it, which would eventually result in the death of the colony. As a safeguard, I introduced a control measure to ensure that colonies would immediately build production facilities so that a minimum of the necessary resources was constantly being produced (for the sake of simplification, the necessary raw materials were always available everywhere). The colonies that managed to thrive seemed to be the ones that upgraded their resource production facilities beyond the minimum mandatory level I had instructed all colonies to immediately upgrade to.
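The monthly construction pass, reduced to a hedged sketch with made-up names and costs (the maintenance-first ordering is the important part), would look something like this:

```cpp
#include <vector>

// Hypothetical, simplified infrastructure record.
struct Infrastructure { int maintenanceCost = 1; int buildCost = 10; int count = 0; };

// Each month the colony spends its construction-resource stockpile on upkeep
// first, and only builds with whatever is left over.
void MonthlyConstruction(int& constructionResources, std::vector<Infrastructure>& infra)
{
    // First priority: keep existing infrastructure standing.
    for (Infrastructure& building : infra)
    {
        int upkeep = building.maintenanceCost * building.count;
        if (constructionResources >= upkeep)
            constructionResources -= upkeep;
        else
            building.count -= 1;   // unmaintained infrastructure starts falling apart
    }

    // Whatever is left over goes into new construction, so once the
    // construction-resource stockpile flatlines, so does development.
    for (Infrastructure& building : infra)
    {
        if (constructionResources >= building.buildCost)
        {
            constructionResources -= building.buildCost;
            building.count += 1;
        }
    }
}
```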

This is in contrast to a potential cause I had been tossing around a few days ago: development and maintenance required employees to be set aside in order to actually happen, but I hadn’t got around to coding that in, so the vital labouring crews were just pulled from the unemployed pool – which, due to the job assignment and colony development AI, was almost always at 0%! This turned out not to be the case, as I had actually told the colony AI to reassign the jobs of the entire population every month, and 10% of the population went to the construction industry before anything else. Talk about job security.

A caveat to this situation is that a better trader AI probably could have offered a much more “in universe” solution – shipping the necessary resources from those lucky colonies that had plenty to the colonies struggling to produce enough materials to maintain their existence. It’s just that the traders were effectively “dumb” agents who randomly chose which planets to fly to 😦 – the best part of the simulation, and I was too busy programming the back end the entire time!

Regarding the colonies’ growth plateau: in all cases enough food and water was being produced, and the power and oxygen systems had enough capacity to cover the entire population – the thing curbing population growth was a lack of housing. I mercilessly decided to simulate the death of homeless people from environmental exposure by stopping any babies being born once the colony ran out of room. At least a regular monthly loss of 1–5% of the population to natural causes kept things in a little bit of flux.
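As a rough sketch with made-up rates (not the actual colony code), the monthly population tick would amount to something like:

```cpp
#include <algorithm>

// Hypothetical monthly population tick: births stop once housing is full,
// while a small fraction of the population is always lost to natural causes.
void MonthlyPopulationTick(int& population, int housingCapacity, float deathRate /* 0.01 to 0.05 */)
{
    int deaths = static_cast<int>(population * deathRate);
    population -= deaths;

    // Births only happen while there is room to house the newcomers.
    if (population < housingCapacity)
    {
        int births = population / 20;   // illustrative birth rate
        population = std::min(housingCapacity, population + births);
    }
}
```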

One of the conclusions that has most stuck with me since finishing the project was the potential depth sink of simulating an economy. Dwarf Fortress is the only thing I can compare it to, in that there’s probably an infinite amount of complexity I could eventually build into the sim, but each time I do I’ll need to rebalance the existing conditions to make sure the world is a) reasonably realistic and b) self-sustaining (making it a wicked problem). The name of this post (zero sum game) is a reference to the competition between the different types of buildable infrastructure for the available resources in the colony AI.

Now that I’ve updated the relevant libraries to more recent versions, I’m planning to mess around with it over the coming days or weeks and I might post a progress update if there are any interesting developments. Anyone interested in the simulation can find the code for it at https://github.com/mileswhiticker/ind302-spacesim (minor recompilation may be needed).

Inaugural GameMaker Club 5/11/13

A week ago I ran the first after-school session of a new club at a local primary school, teaching the fundamentals of making games to students from grades five, six and seven. It was a great success, and as I now have a degree in making games I can confidently say I taught them some great stuff (in theory). The program I’m basing the club on uses GameMaker 7, which features a drag-and-drop system of event–action scripting, whereby a couple of dozen preset events trigger reasonably complex chains of actions, with conditional logic also possible. Events include things such as keys being pressed, collisions occurring, timers running out and the like. The areas I want to cover are:

– Basic programming and associated maths
– Mechanics design
– Story, world and character design
– Level design
– Asset creation

Last week we built an extremely basic “almost game” where students could make a character they drew in MS Paint walk around the screen, with static objects such as trees or walls onscreen as well. Some students managed to enable collision on their doodads too, and we’re wrapping that up this week. I’ll write about each successive week in more detail, but the plan is that each week we add new features to our games while introducing theory to back them up, complemented by simple, easy-to-remember homework that involves working on the game and experimenting with new ideas.

This week (tomorrow) we’ll be looking at what makes a game “a Game” and how we make our games fun. An important thing to get across is that making games is not easy or quick, and requires just as much dedication as any other skill or career. Hopefully by the end of the course I’ll have managed to instil in the participating students initiative, creativity, logical thinking and the ability to approach problems from a different angle – all values I hold very highly.

Shuriken, a look at the AI

A year ago I built a ninja game about throwing stars, which you can read about here. Thinking back on my initial inspiration for the game, it was exciting to see the things that Overgrowth was producing and to think about the possibilities inherent in realistic and believable AI. I read an article back then that linked AI difficulty with perceived intelligence, and discussed how developers could use tricks that let the AI cheat enough to always present a [ramping] difficulty, but not enough to overwhelm the player. In RTS games (a genre I particularly favour) it was an old and established trope to give computer players resource and unit detection/tracking bonuses to increase the difficulty level, as an alternative to actually increasing the complexity and efficacy of the computer’s strategy. While this definitely increased the difficulty, it did not leave the player feeling like they had achieved something when they beat the computer, as opposed to defeating actual human beings. All the good RTS players would eventually be able to defeat multiple computer players simultaneously anyway, so multiplayer is where the upper echelons of skill always occurred.

In games where single player is the main format, however, such as RPGs or adventure/stealth games like Shuriken, it’s a very different kind of AI you are building. Immersion is a major factor, and it’s not about building an AI that can defeat the player so much as building an AI that seems like it could defeat the player, or at least fits into the world of the game. My major inspiration here was Left 4 Dead with its AI Director, which would spawn more zombies if the players were doing well or fewer if they were doing badly. The spawns were also timed to come in waves, creating peaks of activity followed by rest periods where the players could resupply and prepare for the next wave.

The initial AI design for Shuriken would emulate this, with the player coming across areas heavily populated by guards that they had to get through. Upon killing a guard, the body would be discovered and there’d be a peak of activity as the player evaded the other guards searching nearby. With an emphasis on realism, my initial idea was to have the guards choose random points throughout the area to walk to, then resume patrolling. This resulted in situations like the player hiding right around the corner from a body, while all the guards randomly chose to go and search in the opposite direction instead of the obvious hiding spot adjacent to the murder scene. Similar to escaping guards in Assassin’s Creed, you could hide in plain sight – but unlike in Assassin’s Creed, there was no clear reason for the guards to call off the hunt, since they had obviously done a terrible job of searching. By making the player’s job easier, the AI seemed dumb or broken.

The logical follow-on is to have “hide” nodes nearby which the guards always check and, by extension, to build the AI around context-sensitive nodes placed around the area. I wanted to build the perfect system though, one where the guards would examine their surroundings and decide what to do based on their own internal decision-making process. Big ask. The navmesh was too finicky and the system built to handle geometry too delicate – one thing out of alignment and the pathing would be completely broken.
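The hide-node idea, as a rough sketch with hypothetical types (not code from the actual project), would look something like this: designers scatter likely hiding spots around the level, and alerted guards check the ones near the body before giving up, instead of wandering to random points.

```cpp
#include <vector>

// Hypothetical minimal vector type.
struct Vec3 { float x = 0, y = 0, z = 0; };

float DistSq(const Vec3& a, const Vec3& b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Every designer-placed hiding spot within the search radius of the body
// gets added to the guards' search list.
std::vector<Vec3> HideSpotsToSearch(const Vec3& body,
                                    const std::vector<Vec3>& hideNodes,
                                    float searchRadius)
{
    std::vector<Vec3> toSearch;
    for (const Vec3& node : hideNodes)
    {
        if (DistSq(node, body) <= searchRadius * searchRadius)
            toSearch.push_back(node);
    }
    return toSearch;
}
```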

Fallout 3’s level design tool allowed designers to manually place navmeshes, and I think this would have been the ideal way to approach the situation. Manually placed navmeshes designed to map out the geometry according to human sensibility, or even pathing nodes with spline paths worked out between them, would make AI movement seem natural despite it effectively following squares on a chessboard. I didn’t want to recreate the SWAT games, but what I ended up doing was taking tools out of the hands of the designers without building an adequate replacement.

This goes to support my earlier conclusion of always starting with the end in mind, I guess. I wanted to make a game where the AI would seem realistic and smart, searching for places the player was likely to be but still looking like it was making decisions based on the world around it. In future, I’ll build a system of manually placed nodes and context-sensitive data for the AI to interact with before I try to build the system that automatically generates it from a static level that has already been built.

Quirks of Low Level Coding

As seems to be becoming a habit, I just tackled and beat a significant bug (which I’d actually noticed much earlier and had put off fixing until now), so I decided to write about it. The “moral” of this one seems to be that low-level coding will invariably have interesting little quirks and anti-features lurking in the wings to knock experienced and novice coders alike off their feet. This bug (like the last one) was in the grid cell code. As I’ve mentioned previously, this project involves a 3D grid map with various things moving freely around it while still interacting with the grid for some things.

In my mind map of future progress, the interaction between freeform objects and grid cells will mostly come about through environmental effects like airflow, gravity and toxic gasses. Without going into too much detail on gameplay (that’s for later), I’m still playing around with how best to handle it. My current implementation is a holdover from my old habit of premature optimisation: each object that needs to check the grid will periodically (or after moving) recheck with the MapSuite to see whether its latest position has resulted in a cell change – if it has, it informs its old cell that it’s leaving and its new cell that it’s entering.
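A stripped-down sketch of that check might look like the following. Apart from MapSuite, the type and function names here are mine rather than the project’s, and the lookup itself is stubbed out:

```cpp
// Hypothetical minimal types for illustration.
struct Vec3 { float x = 0, y = 0, z = 0; };

struct Cell
{
    void OnEnter() { /* start forwarding gravity/atmosphere events to the object */ }
    void OnLeave() { /* stop forwarding them */ }
};

struct MapSuite
{
    // Real lookup omitted; in the project it goes through the hashed cell
    // container described a little further down.
    Cell* CellAtPosition(const Vec3& /*pos*/) { return nullptr; }
};

struct GridObject
{
    Vec3 position;
    Cell* currentCell = nullptr;

    // Called after moving, and periodically as a failsafe.
    void RecheckCell(MapSuite& map)
    {
        Cell* newCell = map.CellAtPosition(position);
        if (newCell == currentCell)
            return;                         // no cell change, nothing to do
        if (currentCell) currentCell->OnLeave();
        if (newCell)     newCell->OnEnter();
        currentCell = newCell;
    }
};
```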

This replaces a system where objects simply grabbed and reapplied the environmental settings every update, regardless of cell changes or the like. The advantages of the new system are that (obviously) there are fewer lookups, but also that there’s a clearer chain of communication for events such as gravity or atmospheric changes to be passed down from a cell to its contents. The only appreciable disadvantage I can see is that if I don’t account for every circumstance where something could move, it won’t update its cell location, which will have weird effects on the aforementioned events system. I’ve kept the periodic recheck as a failsafe in case the delayed cell changes miss something, but that’s still a symptom of the old design: a redundant system which may or may not be necessary. Tradeoffs, I guess.

Just quickly, while I’m on the subject of the cell map, I thought I’d lay out how it works under the hood. It’s relatively simple: all the cells are stored in an STL unordered_map and indexed by a hash of the cell’s co-ordinates (which are normalised to integers, though I’m still a little wary of possible floating point error). I’ve also got a plan in the back of my mind to keep additional collections of cells sorted by relevant attributes, e.g. cells that temporarily need regular environmental updates.
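As a rough illustration only – the packing scheme and names below are assumptions, not the project’s actual hash – the container might look like:

```cpp
#include <cstdint>
#include <unordered_map>

// Hypothetical integer cell co-ordinate.
struct CellCoord { int32_t x, y, z; };

// Pack three co-ordinates into one 64-bit key: 21 bits per axis. Negative
// values wrap into the 21-bit field, so this holds as long as the world
// stays within roughly +/- 2^20 cells per axis.
inline uint64_t HashCoord(const CellCoord& c)
{
    return (uint64_t(uint32_t(c.x) & 0x1FFFFF))
         | (uint64_t(uint32_t(c.y) & 0x1FFFFF) << 21)
         | (uint64_t(uint32_t(c.z) & 0x1FFFFF) << 42);
}

struct Cell { /* atmosphere, gravity, contents... */ };

std::unordered_map<uint64_t, Cell> g_cells;

Cell& GetOrCreateCell(const CellCoord& coord)
{
    return g_cells[HashCoord(coord)];   // creates an empty cell on first access
}
```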

But the bug! What was the bug? Well, I mentioned it was a somewhat low-level quirk of C++ – specifically, in how it handles numbers. As I mentioned above, freeform objects determine their current cells by taking their exact position as a float vector and normalising it to an integer vector, with an offset of (0.5, 0.5, 0.5) to account for rounding. For simplicity’s sake, the way I was doing the rounding was just casting the float to an integer. This set off warning bells even before I had written the system the first time, but I decided to try it out and see how it went.

Turns out that when casting from a float to an integer, values above 0 round down but values below 0 round up (the cast truncates towards zero) – which results in a whole extra anomalous tile on each axis when trying to find the current cell that objects are in! The fix is just to apply a correction of -1 when casting values below 0 from float to integer, but I was getting some weird bugs with gravity before I found it, almost indirectly, while working on the player code.
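The conversion, with the fix applied, boils down to something like the sketch below. It uses a floor-based version rather than the literal -1 correction, but the effect is the same:

```cpp
#include <cmath>

// A plain cast truncates towards zero, so with the naive version
//     int(coordinate + 0.5f)
// both -0.4f and +0.4f land in cell 0 and a phantom extra tile appears
// around the origin. Flooring keeps every cell the same width on both
// sides of zero.
int WorldToCell(float coordinate)
{
    return static_cast<int>(std::floor(coordinate + 0.5f));
}
```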

Here’s hoping the bug where rays of artificial gravity were projected from a tile adjacent to the one they were supposed to originate from will now disappear! Speaking of improbable bugs, maybe my next entry will cover how I found and fixed the null pointer error while assigning bullet quaternions… better not. I might never write an entry here again 😉

Overly engineered redundancy

I just had a fairly textbook bug pop up, so I thought I’d write about it while it was still fresh in my mind. Unlike most “interesting” bugs, this one was both easy to locate and easy to fix – but it came about as a direct result of the way I had set up a system.

In this project, I have a 3D world divided up into cells. Each cell can hold a single structural frame or be empty, and in order to place a frame in a cell, there has to be a frame in a cardinally adjacent cell. I have a global manager which handles creation and deletion of these frames, and I was investigating why creating or deleting frames wasn’t properly updating the creatability/deletability of frames in adjacent cells. It turned out that a reference to the cell itself wasn’t being passed around, and in fixing that issue I realised there was something I had forgotten – I hadn’t actually set up the system to ensure that there could only ever be a single frame in a cell.

So what I did was add a check to the manager’s creation function for any other frames in the cell, and if there were any, delete them. Unfortunately, I placed the check after the new frame was created but before it was initialised (I’d set up delayed initialisation as an option to make larger or frequently resizing worlds lighter on the processor). Obviously that was completely the wrong place to put it: not only did it attempt to delete the frame I’d just created, but that frame hadn’t been fully created yet, so it crashed every time.
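The corrected ordering is roughly the following sketch – hypothetical, stripped-down types rather than the project’s real manager, but the point is that the old frame is removed before the new (possibly not-yet-initialised) one exists:

```cpp
// Hypothetical minimal types for illustration.
struct Frame { bool initialised = false; };

struct Cell
{
    Frame* frame = nullptr;   // at most one structural frame per cell
};

struct FrameManager
{
    Frame* CreateFrame(Cell& cell)
    {
        // Remove any existing frame first: it is fully built, so deleting
        // it here is safe (unlike deleting the new, uninitialised one).
        if (cell.frame)
        {
            delete cell.frame;
            cell.frame = nullptr;
        }

        cell.frame = new Frame();   // may be initialised later (delayed init)
        return cell.frame;
    }
};
```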

Easy find, easy fix, simple mistake – but I see it as a direct symptom of a system I engineered to be capable not only of everything I wanted it to do now, but also of everything I thought I would want years down the track, while still keeping it simple and modular enough to be expandable and adhering semi-strictly to OOP (or my garden-hedge understanding of it).

After about 12 hours over the past few days spent restructuring and cleaning up how frames and cells are handled, I’ve been reconsidering my policy of rewriting systems to be better under the hood. As a self-analysis process, this has been ongoing ever since I discovered Unity and WinForms coding over a year ago (quote: “You don’t need to make the code good, because this is scrub coding land”), but I’m still on the fence about my conclusion – that a combination of the end product and the goals for making it is what justifies the approaches, procedures and technology used.

In the case of this project, I wanted to make a well-designed and robust system to support complex gameplay, and in doing so improve the quality and speed with which I design and implement systems… while working on it in my spare time. In other projects, if I want to create a game with specific gameplay and a timeframe to work in, then I have clear deliverables which override my desire to learn and work at my own pace. Where this has been an issue for me in the past is with university projects – it’s a learning environment where I want to maximise the convenience of being able to make frequent mistakes and effectively analyse them, but still succeed (the latter was essentially my approach for the final project).

In a commercial environment it strikes me that the deliverables would be the goal, but for the viability of long-term usage and expansion, the system under the hood needs to be clean and maintainable. Anyway, I guess it’s just a line which everyone needs to work out before they start.

Ludum Dare, all over bar the shouting

It’s now a few hours past the submission cutoff, and I’ve had a chance to play several of the other submissions and see the quality of people’s work. Now, as before we started, I’m impressed by the number of people willing to try weird and outlandish combinations of technology (I’ve seen several games deployed solely to Linux, at least initially). My favourite would have to be from one of the afternet #ludumdare halfops, who made some kind of simple platformer in assembly – I think it was for a TRS Arm.

As of writing, there are over 1700 entries, so I couldn’t possibly hope to sample more than a tiny selection, but of the ones I have played I was quite impressed by Undercolor Agents (pick up weapons, shoot colour blocks to return the world to monochromey goodness – check it out here) and one I think was called Minima, where you had to flip perspective and move a cube around (can’t find a link).

My own humble submission can be found here, and although the end result is not something I’m 100% happy with, I don’t think it’s really possible to do a game like that in 48 hours. Following a few desperate hours of testing before the submission cutoff, I managed to get some valuable feedback – the gist of it was that my game was simply too confusing. My “interpretation” of the theme was to minimise player teaching and attempt to get players to teach themselves. I definitely overestimated my ability to convey that, but the testers who stuck with the game long enough did figure out how to get far enough to win. Also of note, I think, was stumbling across http://www.bfxr.net/ (before I found the list of suggested tools, where it features prominently). BFXR speedily gave me some neat retro sound effects which I could slot into my game, and coupled with additional HUD feedback and the two tooltips, I was able to overcome some of the drawbacks.

I think the main thing LD48 did for me was open my eyes to the wider world of indie gamedev. I wasn’t expecting so many people, so many wild ideas and so much crazy technology. It was inspiring to hear people talk about using OpenGL/DirectX/LWJGL/SFML/Low-level Graphics Library #279, so for the next compo I’m definitely going to attempt C++, most likely with SFML and one or two other libs (game enough to try Chipmunk again? maybe). Judging is on for the next three weeks, so I’m definitely going to be exploring more of the submissions over that time.

Ludum Dare 3/4 done!

This post is actually a few hours late, but I got sidetracked by various things. Progress hasn’t been as good as I’d like, but I’m still fairly happy with where the game is at. Gameplay has been refined further, GUI/menu/level handling have been improved and some new (less horrible) textures have been put in.

My plan from here is to grab a few hours sleep, then come back and really hammer out level generation/difficulty. At this rate I’m not going to have any audio, unless it’s something I can track down from the internet in 5-10 minutes (is that allowed? have to check the rules).

Build 3 is available from dropbox here, and there’s an embedded web player available here. From here I’m going to grab a nap, then spend the last few hours trying to work in some feedback from testers – my next post will be after I’ve submitted, I guess.