Shuriken, a look at the AI

A year ago I built a ninja game about throwing stars, which you can read about here. Thinking back on my initial inspiration for the game, it was exciting to see the things that Overgrowth was producing and to think about the possibilities inherent in realistic, believable AI. I read an article back then that linked AI difficulty with perceived intelligence, and with how developers could use tricks that let the AI cheat enough to always present a ramping difficulty, but not enough to overwhelm the player. In RTS games (a genre I particularly favour) it was an old and established trope to grant computer players resource bonuses and unit detection/tracking bonuses as an alternative to actually increasing the complexity and efficacy of the computer’s strategy. While this definitely increased the difficulty, beating the computer never left the player feeling like they had achieved something the way defeating actual human beings did. All the good RTS players would eventually be able to defeat multiple computer players simultaneously anyway, so the upper echelons of skill always lived in multiplayer.

In games where single player is the main format, however, such as RPGs or adventure/stealth games like Shuriken, it’s a very different kind of AI you are building. Immersion is a major factor, and it’s not about building an AI that can defeat the player so much as building an AI that seems like it can defeat the player, or at least fits into the world of the game. My major inspiration here was Left 4 Dead with its AI Director, which would spawn more zombies if the players were doing well and fewer if they were doing badly. The spawns were also timed to come in waves, producing peaks of activity followed by rest periods where the players could resupply and prepare for the next wave.
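To make that pacing concrete, here’s a minimal sketch of Director-style spawn logic (all names hypothetical – an illustration of the idea, not L4D’s actual implementation): an intensity value climbs while the player does well, spawns come faster as it climbs, and each peak is followed by a forced lull.

```cpp
#include <algorithm>

// Hypothetical Director-style pacing: intensity tracks player performance,
// spawn rate scales with intensity, and a rest period follows each peak.
class SpawnDirector {
public:
    void onGuardKilled()   { intensity_ += 0.2f; }  // player doing well
    void onPlayerDamaged() { intensity_ -= 0.3f; }  // player struggling

    void update(float dt) {
        intensity_ = std::clamp(intensity_, 0.0f, 1.0f);
        if (resting_) {                       // lull: no spawns at all
            restTimer_ -= dt;
            if (restTimer_ <= 0.0f) resting_ = false;
            return;
        }
        if (intensity_ >= 1.0f) {             // peak reached: force a lull
            resting_ = true;
            restTimer_ = 20.0f;
            intensity_ = 0.0f;
            return;
        }
        spawnTimer_ -= dt;
        if (spawnTimer_ <= 0.0f) {
            spawnGuard();                     // higher intensity = faster spawns
            spawnTimer_ = 10.0f - 7.0f * intensity_;
        }
    }

private:
    void spawnGuard() { /* pick a spawn point out of the player's view */ }

    float intensity_  = 0.0f;  // 0 = calm, 1 = peak activity
    float spawnTimer_ = 5.0f;
    float restTimer_  = 0.0f;
    bool  resting_    = false;
};
```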

The initial AI design for Shuriken would emulate this, with the player coming across areas heavily populated by guards that they had to get through. Upon killing a guard, the body would be discovered and there’d be a peak of activity as the player evaded the other guards searching nearby. With an emphasis on realism, my initial idea was to have the guards choose random points throughout the area to walk to, then resume patrolling. This led to situations like the player hiding right around the corner from a body while every guard randomly chose to search in the opposite direction, ignoring the obvious hiding spot adjacent to the murder scene. As when escaping guards in Assassin’s Creed, you could hide in plain sight, but unlike in Assassin’s Creed there was no clear reason for the guards to call off the hunt, since they had obviously done a terrible job of searching. By making the player’s job easier, the AI seemed dumb or broken.

The logical follow-on is to have “hide” nodes nearby which the guards would always check, and by extension to build the AI around context-sensitive nodes placed throughout the area. I wanted to build the perfect system though, one where the guards would examine their surroundings and decide what to do based on their own internal decision-making process. Big ask. The navmesh was too finicky and the system built to handle geometry too delicate – one thing out of alignment and the pathing would be completely broken.
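For illustration, here’s roughly what the node-based approach might look like (a hypothetical sketch, not code from the project): guards sweep the designer-placed hide nodes closest to the body first, and only fall back to the original random wandering once those are exhausted.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <vector>

struct Vec3 { float x, y, z; };

static float distSq(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Pick the next point for a guard to search after a body is found.
Vec3 pickSearchPoint(const Vec3& body,
                     std::vector<Vec3>& hideNodes,  // placed by the designer
                     float searchRadius) {
    // Prefer the closest unchecked hide node in range, so the obvious
    // hiding spots next to the murder scene get swept first.
    auto it = std::min_element(hideNodes.begin(), hideNodes.end(),
        [&](const Vec3& a, const Vec3& b) {
            return distSq(a, body) < distSq(b, body);
        });
    if (it != hideNodes.end() && distSq(*it, body) < searchRadius * searchRadius) {
        Vec3 node = *it;
        hideNodes.erase(it);  // mark as checked so the guards fan out
        return node;
    }
    // Fallback: the original behaviour, a random point in the area.
    float angle = (std::rand() / (float)RAND_MAX) * 6.28318f;
    float dist  = (std::rand() / (float)RAND_MAX) * searchRadius;
    return { body.x + std::cos(angle) * dist, body.y,
             body.z + std::sin(angle) * dist };
}
```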

Fallout 3’s level design tool allowed designers to manually place navmeshes, and I think this would have been the ideal way to approach the situation: manually placed navmeshes mapping out the geometry according to human sensibility, or even pathing nodes with spline paths worked out between them, so that AI movement would seem natural despite effectively following squares on a chessboard. I didn’t want to recreate the SWAT games, but what I ended up doing was taking tools out of the hands of the designers without building an adequate replacement.

This goes to support my earlier conclusion of always starting with the end in mind, I guess. I wanted to make a game where the AI would seem realistic and smart, searching the places the player was likely to be while still looking like it was making decisions according to the world around it. In future, I’ll build a system of manually placed nodes and context-sensitive data for the AI to interact with before I try to build the system that automatically generates it all from a static, already-built level.


Shuriken wrap-up

The project has concluded now, and the final handover has been finished. At some point I might write up a proper post-mortem, but I think the main thing I took away from the game is that not enough prep work was done to familiarise myself and the other coder with the technology – and thus to determine whether we had made the right choices early in development (or even before it started!).

I say this because, looking back on the project, the main pitfall was that milestones were consistently delayed or missed because the underlying technology wasn’t behaving as expected, or there simply wasn’t time to familiarise ourselves with some minutiae of its functionality to achieve the desired result. The main culprit for “not behaving as expected” was the physics engine – or rather the physics wrapper. We were using the OgreBullet wrapper around an (eventually) out-of-date version of Bullet, the initial idea being to smooth out and speed up development of the core gameplay: just “slot in” Bullet physics to get character control fairly polished early on, then focus on pathfinding / AI / graphics for the majority of the rest of the project.

The main problem with OgreBullet (apart from its outdatedness quickly becoming a problem) was that some functions (raycasting) simply didn’t work, or behaved completely unintuitively (rigid bodies locked their parent scene node to their position, and didn’t gracefully handle any form of manually applied transform). The icing on the cake was that documentation was non-existent, so the only way to make real progress was to dig through the OgreBullet source and constantly test different implementations. Bullet itself, on the other hand, was nicely featured and I’ll definitely use it again in future (in fact, I am using it right now).
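For anyone hitting the same wall: the usual pattern with raw Bullet is a custom btMotionState that mirrors simulation results onto the scene node, with manual moves applied to body and node together. A minimal sketch of that pattern (my own code, not OgreBullet’s):

```cpp
#include <btBulletDynamicsCommon.h>
#include <OgreSceneNode.h>

// Bullet calls setWorldTransform() whenever the body moves; we mirror the
// result onto the Ogre::SceneNode instead of letting a wrapper own the node.
class NodeMotionState : public btMotionState {
public:
    NodeMotionState(Ogre::SceneNode* node, const btTransform& start)
        : node_(node), transform_(start) {}

    // Bullet reads the body's initial pose from here.
    void getWorldTransform(btTransform& worldTrans) const override {
        worldTrans = transform_;
    }

    // Bullet pushes simulation results here every step.
    void setWorldTransform(const btTransform& worldTrans) override {
        transform_ = worldTrans;
        const btVector3& p = worldTrans.getOrigin();
        btQuaternion q = worldTrans.getRotation();
        node_->setPosition(p.x(), p.y(), p.z());
        node_->setOrientation(q.w(), q.x(), q.y(), q.z());
    }

private:
    Ogre::SceneNode* node_;
    btTransform transform_;
};

// Manual transforms have to be applied to the rigid body as well, or the
// simulation will snap the node straight back on the next step.
void teleport(btRigidBody* body, const btTransform& to) {
    body->setWorldTransform(to);
    body->getMotionState()->setWorldTransform(to);
    body->activate(true);
}
```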

It should be noted that a lot of time was also spent mucking around with Recast, and getting NPC movement to mesh nicely with the character controller of the week. This one was more a fault of “not enough time familiarising ourselves with the technology” than “in hindsight this tech was not the right choice.” Dynamic navmesh generation is pretty complex, but we managed to get it working and tweaked a couple of months into the project (as evidenced by various demo pictures I’ve posted earlier).
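Recast’s actual API is too involved to reproduce here, but the general shape of incremental regeneration is worth sketching (hypothetical code, not our implementation): changed geometry marks the navmesh tiles it touches as dirty, and a budgeted number of tiles are rebuilt each frame so regeneration never stalls the game.

```cpp
#include <cmath>
#include <set>
#include <utility>

// Hypothetical tile-dirtying scheduler around a per-tile rebuild step.
class TiledNavmesh {
public:
    explicit TiledNavmesh(float tileSize) : tileSize_(tileSize) {}

    // Call with the world-space AABB of any object that moved or was destroyed.
    void markDirty(float minX, float minZ, float maxX, float maxZ) {
        for (int tx = tileOf(minX); tx <= tileOf(maxX); ++tx)
            for (int tz = tileOf(minZ); tz <= tileOf(maxZ); ++tz)
                dirty_.insert({tx, tz});
    }

    // Rebuild at most `budget` tiles per frame to bound the cost.
    void update(int budget) {
        while (budget-- > 0 && !dirty_.empty()) {
            auto tile = *dirty_.begin();
            dirty_.erase(dirty_.begin());
            rebuildTile(tile.first, tile.second);  // per-tile Recast pipeline
        }
    }

private:
    int tileOf(float coord) const {
        return (int)std::floor(coord / tileSize_);
    }
    void rebuildTile(int tx, int tz) { /* rasterise + contour just this tile */ }

    float tileSize_;
    std::set<std::pair<int, int>> dirty_;
};
```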

I’ll discuss the different implementations we tried, and the particular methods we ended up using, in a later post, but this project really drives home the mantra “You learn more from your failures than your successes.” No-one would say that this project was a complete success, but I don’t think it was a failure either. We produced a sharp, good-looking game with mostly smooth gameplay, and given that we were absolved of the requirement to produce a commercial success (Shuriken was a final-year university project), at the end of the day the goal was to produce a game and target it for commercial release.

0.100000 != 0.100000

Or: screenshots of some of the notable, broken or just plain weird things we’ve encountered using the Ogre engine, in (approximate) chronological order:

The Windmill bug:
Windmills for everyone!
We’re using Unity to lay out level geometry and objects instead of developing a level editor from scratch, and it’s worked reasonably well so far. The one major hurdle we’ve had to overcome, however, was 3DS Max’s up-axis – Z, as opposed to the almost universal Y. FBX files exported from Max came in on their sides in Unity. Compensations were applied at various places in the pipeline (using various different methods), and the image above was the result of multiple such corrections having been applied simultaneously to the various structure sections – half of which had been added early on and forgotten about later!
We eventually resolved the axis issue by using an exporter which auto-corrected the mesh, then removing the other corrections, but for a while this caused a lot of grief.
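For reference, the node-level version of that correction is a one-liner in Ogre (the helper function is hypothetical; baking the fix into the exporter, as we eventually did, keeps it out of runtime code entirely):

```cpp
#include <Ogre.h>

// Attach a mesh authored in a Z-up package (e.g. 3DS Max) under a Y-up scene:
// a -90 degree rotation about X maps Max's +Z-up onto Ogre's +Y-up.
Ogre::SceneNode* attachMaxMesh(Ogre::SceneManager* sceneMgr,
                               const Ogre::String& meshName) {
    Ogre::Entity* entity = sceneMgr->createEntity(meshName + "Entity", meshName);
    Ogre::SceneNode* node =
        sceneMgr->getRootSceneNode()->createChildSceneNode();
    node->setOrientation(
        Ogre::Quaternion(Ogre::Degree(-90), Ogre::Vector3::UNIT_X));
    node->attachObject(entity);
    return node;
}
```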

The myth of ‘acceptable’ placeholder assets:
March of the fireants
As seen in the previous image, the twisted green creature on screen was the player mesh, v2. Funnily enough, v1 actually looked much better – a shop dummy in T-pose instead of a gremlin from the depths of hell. It’s too bad I can’t upload the walk animation it was using, because that thing was ungodly. The image above is a screenshot of the animated v3, which is v1 with the animation of v2. Not sure if it’s an improvement, but at least it looked more human. I’d been constantly asking for placeholder assets to replace the ones I’d used to proof-of-concept various features, so in future, when you ask for placeholder assets, don’t expect them to be better than programmer art 😛 Also seen above is the ground texture used for an early test of parallax mapping.

Skeletal minions
Setting up instancing on the meshes resulted in some interesting things. Apparently the player and guard meshes had been exported from Max without material IDs being set properly, which resulted in various, seemingly random submeshes disconnecting and flying around the map. They grouped up into two separate “groups”: one composed almost entirely of joints (as well as parts of the wrists, the soles of the feet and the top of the head), which behaved approximately correctly according to physics and AI, and another composed of the rest of the mesh, which flew in a zig-zagging pattern around the centre of the map… in a flock.

88 miles-per-hour
Unfortunately, I don’t have a screenshot of this one as it was quite intermittent, and we still haven’t exactly pinned down the issue (although it’s popped up with a couple of different models, so it’s probably something in the way the models were created). Basically, random vertices were detaching themselves from the model and… moving away, resulting in hugely stretched faces intersecting and moving randomly around the map. On one memorable occasion, part of a guard’s fingers suddenly jumped twenty metres into the air and hovered above him as he patrolled the map – like a gigantic piece of cloth suspended from an invisible balloon above his head. Another time, a line of vertices detached themselves from several buildings and moved in sync with the player across the map. That one was weird.

Finally, on a lighthearted note, here’s a stark reminder we received not long ago of why you shouldn’t use direct comparisons on floating point numbers.

0.100000 != 0.100000
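For illustration, here’s one classic way this trap manifests (a reconstruction, not our exact bug): a float and a double both print as 0.100000 under %f, yet compare unequal, because 0.1 has no exact binary representation and the two types round it differently.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    float  f = 0.1f;
    double d = 0.1;

    printf("%f != %f\n", f, d);      // prints: 0.100000 != 0.100000
    printf("equal? %d\n", f == d);   // 0 -- f is promoted to double first,
                                     // and the two roundings of 0.1 differ

    // Compare against a tolerance instead of using operator==.
    const double eps = 1e-6;
    printf("close? %d\n", std::fabs(f - d) < eps);  // 1
    return 0;
}
```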

Maybe next post I’ll show off some more of our experiments with parallax mapping… or maybe not.

Shuriken, a Ninja Game (day one keenness)

One of the people who saw the pitch yesterday described it as ‘Batman, but with only the batarang.’ I think that’s a pretty apt description, considering you’re a ninja who throws an uber-shuriken around killing mooks and big bosses. Day zero preparation was nerve-wracking: a combination of technical problems and everything coming together late meant the prototype was barely ready in time, and in the end all the frantic last-minute code-wrangling to get it to compile came to nought, as we pitched without it but got green-lit anyway. Moral of the story: get code-related sections of presentations running in advance. A long time in advance. Then don’t touch them. Or something. We’ve had gameplay, art and mechanics discussions going constantly since then, and some very cool things have been coming out of them.

As to the prototype, maybe it’s better if we just… forget about that code.

Here’s some background: at my university, the students doing my degree spend the final two-thirds of their final year developing a full, commercial-scale video game. My team concepted up a stealth-assassination game where the player is a ninja who has to sneak to the ultimate sniping spot, then throw a shuriken to kill the target – and the shuriken can be steered to avoid obstacles (and even to avoid being seen, since a spotted shuriken reveals the presence of the player!). The game is a 3D, third-person RPG-shooter combination, both styled after and inspired by PS1 and PS2 games such as Shinobido and Tenchu, but it draws gameplay elements from more recent games such as Batman (Arkham City and Asylum), Hitman and Assassin’s Creed.

The team pitches their concept on day one of the project to a panel of senior lecturers and local industry representatives, who (throughout the course of the project) act as a combination of assessor, teacher and ‘client’: they alternately give us feedback and assistance, tell us whether we should cut back or have room to upscope, and demand that we have features implemented by a certain date. That last one is a fun one, according to previous years’ graduates.

Our progress has been great so far – steaming ahead with greyboxing and prototyping from the get-go – so only the next 26 weeks will tell whether we can maintain this pace 😀

One thing was for sure though: after a long day, our level editor was working and more than capable of greyboxing. It’s a great feeling when your level editor works from day one (see my teammate’s blog for his more in-depth post about the editor).