Inaugural GameMaker Club 5/11/13

A week ago I had the first after-school session of a new club I’m running at a local primary school, teaching the fundamentals of making games to students from grades five, six and seven. It was a great success, and as I now have a degree in making games I can confidently say I taught them some great stuff (in theory). The program I’m basing the club on uses GameMaker 7, which features a drag-and-drop system of event-action scripting, where a couple of dozen preset events can trigger reasonably complex chains of actions, with conditional logic also possible. Events include things such as keys being pressed, collisions occurring, timers running out and the like. The areas I want to cover are:

– Basic programming and associated maths
– Mechanics design
– Story, world and character design
– Level design
– Asset creation

Last week we built an extremely basic “almost game” where students can make a character they drew in MS Paint walk around the screen, alongside static objects such as trees or walls. Some students managed to enable collision on their doodads as well, but we’re wrapping that up this week. I’ll write about each successive week in more detail, but the general plan is that each week we add new features to our games, introduce theory to back them up, and set simple, easy-to-remember homework that involves working on the game and experimenting with new ideas.
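For anyone who hasn’t used GameMaker, the drag-and-drop blocks map fairly directly onto an event → action model. Here’s a rough sketch of last week’s exercise in Python; the class and method names are made up purely for illustration, since GameMaker doesn’t expose an API like this:

```python
# Rough sketch of the event -> action model behind GameMaker's drag-and-drop
# blocks. All names here are illustrative, not GameMaker's own.

class Wall:
    solid = True  # a static object the player shouldn't walk through

class Player:
    def __init__(self, x, y, speed=4):
        self.x, self.y = x, y
        self.prev_x, self.prev_y = x, y
        self.speed = speed

    def on_step(self):
        # "Step" event: runs every frame; remember the last safe position.
        self.prev_x, self.prev_y = self.x, self.y

    def on_key(self, key):
        # "Keyboard" events: each arrow key triggers a movement action.
        if key == "left":
            self.x -= self.speed
        elif key == "right":
            self.x += self.speed
        elif key == "up":
            self.y -= self.speed
        elif key == "down":
            self.y += self.speed

    def on_collision(self, other):
        # "Collision" event: hitting a solid object snaps the player back,
        # much like GameMaker's "move to previous position" action.
        if getattr(other, "solid", False):
            self.x, self.y = self.prev_x, self.prev_y
```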

This week (tomorrow) we’ll be looking at what makes a game “a Game” and how we make our games fun. An important thing to get across is that making games is not easy or quick, and requires just as much dedication as any other skill or career. Hopefully by the end of the course I’ll have managed to instill in the participating students initiative, creativity, logical thinking and the ability to approach problems from a different angle – all values I hold very highly.

Shuriken, a look at the AI

A year ago I built a ninja game about throwing stars, which you can read about here. Thinking back on my initial inspiration for the game, I remember how exciting it was to see what Overgrowth was producing and to think about the possibilities of realistic, believable AI. I read an article back then that linked AI difficulty with perceived intelligence, and discussed how developers could use tricks that let the AI cheat enough to always present a ramping difficulty, but not enough to overwhelm the player. In RTS games (a genre I particularly favour) it was an old and established trope to give computer players resource and unit detection/tracking bonuses as the difficulty level rose, as an alternative to actually increasing the complexity and efficacy of the computer’s strategy. While this definitely increased the difficulty, it didn’t leave the player feeling like they had really achieved something when they beat the computer, the way defeating actual human beings did. All the good RTS players would eventually be able to defeat multiple computer players simultaneously anyway, so the upper echelons of skill were always found in multiplayer.
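To make that kind of “cheating” concrete, here is a minimal sketch of what those bonuses boil down to. It isn’t taken from any particular RTS, and the names and numbers are purely illustrative:

```python
# Minimal sketch of the classic RTS "cheating AI" approach: difficulty is
# raised by scaling the computer's economy and vision, not by improving
# its strategy. All names and numbers are illustrative.

DIFFICULTY_BONUS = {
    "easy":   {"income": 0.75, "full_map_vision": False},
    "normal": {"income": 1.0,  "full_map_vision": False},
    "hard":   {"income": 1.5,  "full_map_vision": True},   # ignores fog of war
}

def ai_income(base_gather_rate: float, difficulty: str) -> float:
    """Resources the AI banks per tick, inflated by its difficulty bonus."""
    return base_gather_rate * DIFFICULTY_BONUS[difficulty]["income"]

def ai_can_see(unit_is_scouted: bool, difficulty: str) -> bool:
    """Hard AI tracks every unit; lower difficulties play by the rules."""
    return unit_is_scouted or DIFFICULTY_BONUS[difficulty]["full_map_vision"]
```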

In games where single player is the main format, however, such as RPGs or the adventure/stealth genre Shuriken belonged to, you’re building a very different kind of AI. Immersion is a major factor, and it’s not about building an AI that can defeat the player so much as building an AI that seems like it can defeat the player, or at least fits into the world of the game. My major inspiration here was Left 4 Dead with its AI Director, which would spawn more zombies if the players were doing well and fewer if they were doing badly. The spawns were also timed to come in waves, producing peaks of activity followed by rest periods where the players could resupply and prepare for the next wave.
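A toy version of that pacing loop might look something like this. To be clear, this is not Valve’s actual system, just a sketch of the idea of ramping spawn pressure while the players are coping and then backing off into a rest period:

```python
# Toy sketch of a director-style pacing loop: spawn pressure builds while the
# players are doing well, peaks, then drops into a rest period.

import random

class Director:
    def __init__(self):
        self.intensity = 0.0      # rough estimate of how stressed the players are
        self.resting = False
        self.rest_timer = 0.0

    def update(self, dt, players_doing_well):
        if self.resting:
            self.rest_timer -= dt
            if self.rest_timer <= 0:
                self.resting = False
                self.intensity = 0.0
            return []  # no spawns during the rest period

        # Build pressure faster when the players are coping easily.
        self.intensity += dt * (2.0 if players_doing_well else 0.5)

        if self.intensity >= 10.0:
            # Peak reached: stop spawning for a while so they can resupply.
            self.resting = True
            self.rest_timer = 20.0
            return []

        # Spawn chance per tick scales with how well the players are doing.
        spawn_chance = 0.2 if players_doing_well else 0.05
        return ["zombie"] if random.random() < spawn_chance else []
```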

The initial AI design for Shuriken would emulate this, with the player coming across areas heavily populated by guards that they had to get through. When the player killed a guard, the body would eventually be discovered and a peak of activity would follow as the player evaded the other guards searching nearby. With an emphasis on realism, my initial idea was to have each guard choose random points throughout the area to walk to, then resume patrolling. This resulted in situations like the player hiding right around the corner from a body while all the guards randomly chose to search in the opposite direction, ignoring the obvious hiding spot adjacent to the murder scene. As with escaping guards in Assassin’s Creed, you could hide in plain sight; unlike Assassin’s Creed, though, there was no clear reason for the guards to call off the hunt, as they had obviously done a terrible job of searching. By making the player’s job too easy, the AI seemed dumb or broken.
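In pseudocode the naive behaviour was roughly the following. This is simplified, and the guard object and helper names are made up for illustration:

```python
# The naive search behaviour, roughly: on discovering a body, each guard just
# walks to random points in the area. Nothing ties the chosen points to where
# the player could plausibly hide, so guards happily search away from the body.

import random

def choose_search_points(area_bounds, count=3):
    """Pick random points in the area; plausibility isn't considered at all."""
    (min_x, min_y), (max_x, max_y) = area_bounds
    return [(random.uniform(min_x, max_x), random.uniform(min_y, max_y))
            for _ in range(count)]

def on_body_discovered(guard, area_bounds):
    guard.state = "searching"
    guard.search_points = choose_search_points(area_bounds)
    # After visiting its points the guard simply resumes patrolling, whether
    # or not anything near the body was ever checked.
```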

The logical follow-on to this is to place “hide” nodes nearby which the guards would always check, and by extension to build the AI around context-sensitive nodes placed throughout the area. I wanted to build the perfect system though, one where the guards would examine their surroundings and decide what to do based on their internal decision-making process. Big ask. The navmesh was too finicky and the system built to handle geometry too delicate – one thing out of alignment and the pathing would be completely broken.
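The node-based version is much simpler to get right. Something like the following sketch (the data structures are illustrative only), where a guard who finds a body checks the nearest hide nodes before wandering off:

```python
# Sketch of the node-based fix: designers place "hide" nodes around the level,
# and a guard who finds a body checks the nearest ones first before falling
# back to a wider sweep.

import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def plan_search(body_pos, hide_nodes, max_checks=4):
    """Return hide nodes ordered by distance to the body, nearest first."""
    return sorted(hide_nodes, key=lambda node: dist(node, body_pos))[:max_checks]

# e.g. a guard finding a body at (10, 4) checks the alcove and the crates
# around the corner before anything across the map:
hide_nodes = [(12, 5), (30, 22), (8, 2), (18, 9)]
print(plan_search((10, 4), hide_nodes))
```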

Fallout 3’s level design tool allowed designers to manually place navmeshes, and I think this would have been the ideal way to approach the situation. Manually placed navmeshes, designed to map out the geometry according to human sensibility, would have done the job; even simple pathing nodes with spline paths worked out between them would make AI movement seem natural, despite the movement effectively following squares on a chessboard. I didn’t want to recreate the SWAT games, but what I ended up doing was taking tools out of the hands of the designers and not building an adequate replacement.
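The spline idea is straightforward to sketch. Assuming a list of waypoint coordinates, running a Catmull-Rom curve through them gives the guard a smooth path instead of chessboard movement (purely illustrative code):

```python
# Smoothing a grid-like node path with Catmull-Rom splines so the AI doesn't
# visibly walk "squares on a chessboard".

def catmull_rom(p0, p1, p2, p3, t):
    """Interpolate between p1 and p2 at parameter t in [0, 1]."""
    t2, t3 = t * t, t * t * t
    return tuple(
        0.5 * ((2 * b) + (-a + c) * t
               + (2 * a - 5 * b + 4 * c - d) * t2
               + (-a + 3 * b - 3 * c + d) * t3)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

def smooth_path(nodes, samples_per_segment=8):
    """Turn a sparse list of waypoints into a denser, curved path."""
    if len(nodes) < 4:
        return list(nodes)
    path = []
    # Endpoints are skipped here for simplicity; interior segments get curved.
    for i in range(1, len(nodes) - 2):
        for s in range(samples_per_segment):
            t = s / samples_per_segment
            path.append(catmull_rom(nodes[i - 1], nodes[i],
                                    nodes[i + 1], nodes[i + 2], t))
    path.append(nodes[-2])
    return path

waypoints = [(0, 0), (4, 0), (4, 4), (8, 4), (8, 8)]
print(smooth_path(waypoints)[:5])
```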

This goes to support my earlier conclusion of always starting with the end in mind, I guess. I wanted to make a game where the AI would seem realistic and smart, searching for places the player was likely to be while still looking like it was making decisions based on the world around it. In future, I’ll build a system of manually placed nodes and context-sensitive data for the AI to interact with before I try to build the system that automatically generates it from a static level that has already been built.