
Tuesday, 10 December 2013

Post Game Engines: Gamma Gears Analysis


This semester, we had one class in which we as students would be learning more in-depth ways to program using engines.  This class was Game Engine Design and Implementation.  At the start of this class, we were introduced to Ogre and Havok and were given full use of these tools in the TwoLOC engine provided by our teaching assistant Saad.  Now, my team had created a great engine last year and was slightly upset that we were not able to transfer it over to this year's GDW project.  After coming to terms with that fact, however, we approached our professor, Dr. Hogue, about possibly taking on something more challenging and unique in comparison to everyone else.  We did not dislike the great rendering abilities of Ogre or the incredible physics provided by Havok; we just wanted to take a different approach.  In response, Dr. Hogue provided us with the Phyre engine that Sony uses on their systems.  Needless to say, we were very excited at the possibilities provided by this engine.

After examining the code structure of the sources provided and playing around with the included level editor, our team started to formulate ideas as to what type of game we hoped to create.  I personally suggested a survival horror game, but the team discarded the idea rather quickly.  Then we stumbled upon the idea of creating an isometric 3D arena brawler.  After some ideas were thrown around and we came up with a general concept, Gamma Gears was born.



Now we knew as a team that creating this game with the Phyre engine would not be an easy task.  Since the engine is very focused towards those who helped design it, we found it difficult to find documentation as we searched through the assets.  However, we were fairly confident that our programmer James would be able to dissect the assets and get us to a working prototype.  While James was working on figuring out how we would use the engine, Bobby, Vincent and I got to work on creating the assets needed in the game.  This required many stages, from concept art to modelling, then to rigging and texturing the models.  By this point, James was able to get models loading into the engine, so it was time to start animating them.

The concept of animating for the Phyre engine was very similar to how we would have accomplished the task with TwoLOC; however, there were some slight differences worth noting.  In the same fashion as previous years, a character's animations can be set using keyframe animation in Maya.  What's different about getting these animations into Phyre, however, is how the animated model is exported.  Included with our version of Phyre was a plug-in for Maya called the COLLADA exporter.  Instead of exporting a standard .obj or Maya binary file, the COLLADA exporter outputs our animated models in a Phyre-compatible .dae file format.  From here, we needed to ensure that our models were being exported to a location that Phyre could work with.  We found that the easiest location for our purposes was the Media folder contained in our Phyre folders.  Once this had been discovered, we were able to bring our main character Alex into the scene.  There was an issue, however: Alex would only sit in his idle position, as we had no character controller set up for our asset.  This is when we as a team realized how useful the Lua scripts included in the Phyre engine would be.

Alex in his idle stance in our scene
After the group spent some time reviewing the Lua scripts included in the Phyre subfolders, we realized that we had a choice to make when it came to making our character move around the game space.  Our first option was the included Character Controller Lua script, which provided all of the functionality we were looking for in terms of movement; however, the script was lacking collision detection.  After searching for some sort of collision script to use, we were unable to locate anything that might assist us, so we attempted the raycasting techniques we had implemented in our game last year.  These also failed as we tried to implement them.  That is when we stumbled across the Physics Character Controller in the media scripts.  This controller allowed us to control everything from velocity to jump height, as well as gravity and the size of the collision capsule that was now attached to the character.  We were very excited, as we now had a fully animating character!  However, the character was animating across an empty game world, so we as a team got to work putting together our scene.
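To give a concrete sense of what a controller like this manages each frame, here is a minimal Lua sketch of a physics-driven character controller.  The field and function names are my own assumptions for illustration; they are not the actual API of Phyre's Physics Character Controller script.

-- Hypothetical controller settings, loosely modelled on the options above.
local controllerConfig = {
    walkSpeed     = 4.0,   -- metres per second
    jumpHeight    = 1.2,   -- peak height of a jump, in metres
    gravity       = -9.8,  -- downward acceleration
    capsuleRadius = 0.4,   -- size of the collision capsule around the model
    capsuleHeight = 1.8,
}

-- A minimal per-frame update: apply gravity, handle jumping, then move.
local function updateController(state, input, dt)
    state.velocity.y = state.velocity.y + controllerConfig.gravity * dt
    if input.jump and state.onGround then
        -- v = sqrt(2 * g * h) gives the launch speed for the desired jump height
        state.velocity.y = math.sqrt(2 * -controllerConfig.gravity * controllerConfig.jumpHeight)
        state.onGround = false
    end
    state.position.x = state.position.x + input.moveX * controllerConfig.walkSpeed * dt
    state.position.z = state.position.z + input.moveZ * controllerConfig.walkSpeed * dt
    state.position.y = state.position.y + state.velocity.y * dt
    if state.position.y <= 0 then  -- assume a flat ground plane at y = 0
        state.position.y, state.velocity.y, state.onGround = 0, 0, true
    end
end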

Alex with the physics controller implemented

The Phyre level editor is one of the most fantastic tools for constructing the scenes for our levels, as it works in a similar fashion to Maya.  Although we constructed the bulk of the scene in Maya, after bringing it into the Phyre level editor we realized that micro-changes needed to be made to allow for level fluidity and function.  In other engines, this would require moving back and forth between Maya and Phyre, but the level editor allowed us to change and test these options on the fly.  After some hard work, we now had our character animating within a scene!

A screen shot of our scene in the early stages of development!

Our next gameplay focus could not have been more perfectly timed, as it tied directly in with the homework assigned for the class.  This was of course the trigger system that we had needed since production began.  Since our game has different power-ups that the character needs to collect in order to become stronger and faster, we needed some form of trigger to signify that an item should disappear from the screen.  In the engine, this involves the included Lua trigger receiver and quarry scripts, as well as trigger bounding boxes that are also provided.  The first step is to link the trigger receiver to the bounding box created around the object we want Alex to collect.  From this point, we are able to add the quarry to Alex's physical collision body.  This allows the engine to recognize not only when something has collided with the trigger, but also to check that it is Alex's physical body doing the colliding.  The item in question then disappears when the proper bounding box makes contact with the proper trigger.  It was a very difficult task to figure out, but it helped not only with the game's progress, but with our class workload as well.
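As a rough illustration of that flow, here is a small Lua sketch of a trigger receiver paired with a quarry.  The names (onTriggerEnter, quarryId, the two helper functions) are hypothetical stand-ins, not the actual Phyre script interfaces.

-- Hypothetical gameplay hooks, stubbed out so the sketch is self-contained.
local function applyPowerUp(entity) print("power-up applied to", entity) end
local function hideEntity(id)       print("hiding", id) end

local powerUpTrigger = {
    boundsId = "PowerUp_Speed_01",  -- bounding box around the collectible
    quarryId = "Alex_PhysicsBody",  -- only this body should fire the trigger
}

function powerUpTrigger:onTriggerEnter(collidingBody)
    -- Ignore anything that is not the quarry attached to Alex's physics body
    if collidingBody.id ~= self.quarryId then return end
    applyPowerUp(collidingBody.owner)
    hideEntity(self.boundsId)  -- make the collected item disappear
end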

Our final stages for this semester were to implement an attack animation and then transfer all of this data to the Phyre project that we had designed and set up in Visual Studio 2012.  To implement the attack animation for our character, we used the same method as all of our other animations, but with a special twist thrown in at the end.  Since we needed to ensure that the player makes contact with the NPCs in the game to do damage, we added a separate trigger to the game space occupied in front of Alex when he attacks.  Once this trigger was set, we added quarries to all of the NPCs so they could recognize when they are getting hit and taking damage.  Since we had already figured out how the trigger systems in the Phyre engine functioned, this task was accomplished very quickly and efficiently.

From this point, the team just needed to export the Phyre level editor information into the base Phyre project we had set up to accommodate the incoming files.  Although our first several attempts had some issues with animation and triggers, we were able to make the adjustments needed to have a fully functioning prototype for the due date.

Gamma Gears in its current stage is in no way a completed title ready for release.  We as a team are aware of the work that remains before we can successfully call our title a game.  With the base framework set and our newly gained knowledge of the Phyre engine, however, we all feel that the new semester will be a productive period for everyone working on Gamma Gears.


Sunday, 8 December 2013

The Camera Systems of God of War III!


God of War III is a hit title from Santa Monica Studio and Sony Computer Entertainment.  The third title in the God of War series, GOW III was released back in 2010 for the PlayStation 3.  As a third-person action adventure game, GOW III falls into a category filled with great titles from various AAA developers.  For this reason, the developers of the GOW series needed to differentiate themselves in a way that players had not yet seen in the market.  The main way they chose to do this was through the cinematic feel they imbued in the franchise via their in-game camera systems.  Since I have already covered the basics of camera systems in games in my previous blog post, this post will focus on how the GOW III team designed, tweaked and implemented their cameras to create the breathtaking cinematic experiences that have helped cement this series as one of the greatest third-person action titles of all time.


At the beginning of the development of the GOW III camera system, Sony Santa Monica realized that one of the main reasons their games had been so successful was the camera systems implemented in their two previous titles.  For this reason, SSM wanted their third title to deliver the most epic player experience possible.  To accomplish this, four of the 120 team members working on the title were given the sole task of creating and implementing the intricate scripted camera system for GOW III.  The team's theory behind a completely scripted camera system tied in with their already outstanding gameplay mechanics, which had been working for many years.  They felt that if the gameplay in a game is solid and fluid, why should the camera focus solely on the character's close vicinity when it could pan across a beautiful scene while the player hacks and slashes their enemies?

This scripted camera system is unlike most traditional third-person games, where the player is given control of camera movement; in GOW III the player has no influence on camera position.  For this approach to succeed in game, however, a significant amount of time must be put into each camera's exact position.  For the GOW III team, the whole process started with simple level design sheets being provided to the camera team.  With these design sheets in hand, the team could figure out the best locations for camera placement based on enemy locations, where the player needs to be moving towards, and unlockables that need to be visible to the player.  Once all the cameras have been implemented and playtested, the camera team sends their work to the art department.  This is where the majority of the level and camera design time is spent, as the artists are required to create these massive and epic scenes around the camera design of the level.  Before the completion of a level, the art department sends the level back to the camera team, as many cameras need micro-adjustments to work within the level being designed.  It is a very tedious process for the team, but it is necessary to accomplish the cinematic feel that everyone looks for in a GOW title.

What the camera team sees while designing a level!

Now, since a scene can have anywhere from a few to dozens of camera locations, the design team needed a way to keep everything within the frame while simultaneously showing the landscapes and epic designs that the artists had spent so long working on.  The first system I discovered while learning about the GOW III camera system was the rail/dolly system.  This system was mainly introduced in GOW III, as the technology had advanced enough to allow for it, and it was usually used in long, linear sections of the game.  The designers set a NURBS curve along the path they want the character to take.  A camera is attached to this curve, along with many points that the camera can smoothly interpolate towards.  The animations are mapped to the rail as well, and during each frame the system calculates the nearest point on the rail in relation to the character model and interpolates the camera to that point, making for a movie-style dolly effect.  Now, this works great for following the character, but the designers also needed to implement what they called the boom of the camera.  The boom was used for calculating the look vector of the camera, starting at the dolly and ending at the target position (generally the character model).  The boom allowed for panning in and out of the scene, as well as up-and-down movement of the camera, to showcase the beautiful environments of the game while still keeping the player in the viewport.

The Rail Driven system used in GOW III
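To make the idea more concrete, here is a minimal Lua sketch of a rail/dolly update, under the assumption that the NURBS rail has been sampled into a simple list of points.  None of this is GOW III's actual code; it just mirrors the behaviour described above.

local function distSq(a, b)
    local dx, dy, dz = a.x - b.x, a.y - b.y, a.z - b.z
    return dx*dx + dy*dy + dz*dz
end

-- Find the sampled rail point nearest to the character each frame.
local function nearestRailPoint(rail, heroPos)
    local best, bestD = rail[1], math.huge
    for _, p in ipairs(rail) do
        local d = distSq(p, heroPos)
        if d < bestD then best, bestD = p, d end
    end
    return best
end

local function updateDollyCamera(camera, rail, heroPos, dt)
    local target = nearestRailPoint(rail, heroPos)
    local t = math.min(1, 5.0 * dt)  -- 5.0 is an assumed smoothing rate
    -- Smoothly interpolate the dolly toward the nearest rail point...
    camera.pos.x = camera.pos.x + (target.x - camera.pos.x) * t
    camera.pos.y = camera.pos.y + (target.y - camera.pos.y) * t
    camera.pos.z = camera.pos.z + (target.z - camera.pos.z) * t
    -- ...then the boom is the look vector from the dolly to the target.
    camera.look = { x = heroPos.x - camera.pos.x,
                    y = heroPos.y - camera.pos.y,
                    z = heroPos.z - camera.pos.z }
end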
 

Now, the system shown above works great in many of the level traversal aspects of the game that are critical to the gameplay.  However, the team needed to design a system that would work when the main character Kratos is fighting multiple enemies in a large, wide-open setting.  For the camera system to work well, Kratos needs to always be in frame and the focal point, but the team also needed to keep all enemies visible to the player.  To deal with this dilemma, the team implemented a weighted average camera system.  Each character is given a weighting of "importance" in the current scene.  Generally, Kratos is weighted highest, as the team wanted to keep him as the central anchor of the camera.  Each character's position is scaled by its weight; these weighted positions are all added together and then divided by the sum of the weightings of each character.  This gives the team a value they can feed to the boom system as the boom target, effectively keeping every character within the viewport of the scene.  An important note is how the team gets these weights for each character (ranging from 0 to 1).  We already mentioned the base weight of the character (how important the character is), but the team also takes into account the distance weight (distance from the hero, not the camera), as well as the activation weight of each character.  The activation weight is used to smoothly transition the camera when an NPC is spawned into the scene (a high weight) or killed off (a low weight).  This value helped to remove the jerking that occurred when an enemy was killed and the camera's boom target would change drastically from one frame to the next.
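In code, the core of the weighted average is only a few lines.  Here is a Lua sketch; note that combining the base, distance and activation weights by multiplication is my own assumption, as the talk did not spell out exactly how they are blended.

-- Sum each character's position scaled by its weight, divide by total weight.
local function boomTarget(characters)
    local sx, sy, sz, totalW = 0, 0, 0, 0
    for _, c in ipairs(characters) do
        local w = c.baseWeight * c.distanceWeight * c.activationWeight
        sx, sy, sz = sx + c.pos.x * w, sy + c.pos.y * w, sz + c.pos.z * w
        totalW = totalW + w
    end
    if totalW == 0 then return nil end  -- no active characters to frame
    return { x = sx / totalW, y = sy / totalW, z = sz / totalW }
end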


This system worked great when the team was testing large battles with a plethora of enemies, but when testing one-on-one fights, they stumbled across an issue with the weighted average system.


As seen in the image above, the camera sits farther away from the action than the designers had originally planned.  So they decided that instead of using the weighted average system throughout the entirety of the fight scenes in the game, they would also implement a new system which they called prioritized framing.  Along with the weighted average system, the prioritized framing system is used in the majority of the large-scale battle sequences in GOW III, so I felt it was important to cover the basics of this system as well.
The main focus of this new system was to ensure that the highest-priority entity (in this case Kratos) always remains in the frame of the scene.  From there, the system works outwards by priority level, doing its best to include as many of the less important entities as possible while still keeping an acceptable frame distance.  The algorithm functions in five steps, the first being centering the camera on the hero's location.  From here, the algorithm calculates the extents of the lowest of the priority levels.  With these extents, it can minimally track the camera to frame the extents of the current priority level.  Step four is variable in the time it takes to complete, as the algorithm must work its way through each priority level, repeating steps two and three.  Once all the priority levels have been completed, the final step calculates the delta to the azimuth and the elevation needed for the boom's target location.  For a visual representation, the framing would begin as this:


And end up as this, where the red is the out-of-frame area with NPCs still inhabiting the space:


It was a truly ingenious fix to the problem the team was having, as they came to the realization that not all of the enemies are important enough to be on screen at all times.  And if we remember the zoomed-out screenshot shown above, the fix implemented by the team showed through, as the scene now looked like this:
Zoomed in more and framed correctly!
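For a rough idea of how the five steps above might look in code, here is a Lua sketch of the outward-growing loop.  The extentFrom helper is hypothetical; it stands in for whatever measurement the real system uses to frame one priority level.

-- Grow the frame outward one priority level at a time, hero first.
local function prioritizedFrame(hero, priorityLevels, maxFrameSize)
    local frame = { center = hero.pos, size = 0 }
    for _, level in ipairs(priorityLevels) do  -- highest priority first
        local needed = level.extentFrom(frame.center)  -- hypothetical helper
        if needed > maxFrameSize then
            break  -- lower-priority NPCs are allowed to fall out of frame
        end
        frame.size = math.max(frame.size, needed)
    end
    return frame  -- fed to the boom as its target framing
end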


The cameras implemented in God of War III are truly works of art that shine through in the gameplay.  Creating these epic cinematic sequences is no small feat for a team, and the work is definitely evident when a player takes control of Kratos.  Not only does the camera work as a viewing mechanism in the game, but it also becomes the player's ally at times, giving hints about different unlockables, as well as panning slightly to reveal where the player must go next.  It's a truly remarkable system that I would personally love to see carried into more games in the future.  Hopefully the God of War team can bring this technology to a next-generation game and further prove to us how versatile and amazing a camera system can make a game!

God of War IV would be pretty nice!






Saturday, 30 November 2013

Stop, Camera Time!


So for today's little blogging adventure, I have decided that I am going to be covering the topic of cameras in games!  Cameras play such an important role, not just in allowing the player to see what is occurring, but also in setting the mood of the game and adding another layer to the immersion.  Now, there are a few different types of cameras that can be used for different games and I will be getting to those shortly, but first I am going to give a little explanation of what a camera is and how it fits into our game space!

By now, you should know that a camera in a game does not really function in the same way that a camera in real life does.  It does not record images onto film or digitally and allow the player to pull them up for their enjoyment at a later time.  Instead, a camera creates a technical window through which our player can view the action that is occurring on the screen.  Now, even though many of the games that we play are classified as 3D, we still are not able to play them in a three-dimensional space (unless of course you can afford a 3D TV!).  It is the camera's job to capture the 3D action occurring in the game world and display it to the player on the screen in the form of a two-dimensional image.  In a way, this makes current-generation 3D games feel as though they should be called 2D games with very nice 3D models!

This is done by creating a viewport that the player can look through to view the game in front of them.  In a perspective camera, much like the ones we use in Maya while modeling our characters, we view our images within what is called the frustum.  The frustum of a camera is the region between the near clipping plane and the far clipping plane.  For those who may be confused, a far clipping plane will not load any objects past its threshold (a set distance away from the camera), while a near clipping plane will not load any objects in the viewport that are closer than its threshold.  Now, a renderable scene must also have an aspect ratio.  This ratio is determined by taking the width of the scene and dividing it by the height.  Since cameras work with a near and far clipping plane, objects that are closer to the far clipping plane will appear smaller than objects closer to the near plane.  This is how we create distance on screen and a sense of three dimensions!  The other form of camera generally used is called an orthographic camera.  This camera does not create a sense of distance and is therefore much less common in 3D games.
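Two of the ideas above fit in a few lines of Lua: the aspect ratio is just width divided by height, and under perspective projection an object's on-screen size shrinks in proportion to its distance from the camera.  The focalLength parameter here is an assumed stand-in for the camera's field-of-view term.

local function aspectRatio(width, height)
    return width / height
end

-- Returns nil when the object is outside the frustum's depth range,
-- otherwise an approximate projected size.
local function projectedSize(objectSize, distance, nearPlane, farPlane, focalLength)
    if distance < nearPlane or distance > farPlane then
        return nil  -- clipped: closer than the near plane or past the far plane
    end
    return objectSize * focalLength / distance  -- farther away = smaller on screen
end

print(aspectRatio(1920, 1080))                     --> 1.7777... (16:9)
print(projectedSize(2.0, 10.0, 0.1, 1000.0, 1.0))  --> 0.2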

A classic perspective camera
Now that we know a camera is necessary for 3D images to be displayed with distance on screen, we can understand why it's so important for games!  When it comes to cameras, a developer has a very important decision to make: how will my camera system work?  There are more options than one would think, from a simple single-camera system that follows the player from A to B throughout the entirety of the game, to a game with many cameras, making the player feel as though they are immersed in a movie.  This important decision impacts not only how the player will play the game, but also the feelings and emotions they will have.  For this reason, camera choice cannot be taken lightly.  So, before a designer can choose how they want their camera to function, they must first choose which camera type they would like to use: fixed or dynamic.

When I think of fixed cameras, there is really only one game series that stands out in my mind, and that is the God of War franchise.  I am not sure if this is my own personal preference, or the result of three years of lectures with Dr. Hogue.  Even if it is not by my own choice, I cannot argue with my professor on his pick, as the GOW series has a fantastic camera system.  This is because the camera is not meant to be focused only on the main character constantly.  In my opinion, this camera system is best used for creating a very cinematic and epic gaming experience for the player.  The camera does not follow the player on a 1:1 character-to-camera movement basis; instead, the player works inside the frame while the camera interpolates around the scene at its own pace.  The character is always kept in the frame, while still giving the player the sense that the camera is mounted on a swivel in another area, capturing the most cinematic of shots.  The cutscenes follow this trend, as they are laid out in a way that makes the player feel as though they are watching a movie in which they are the main protagonist.

This can be seen here, as Kratos is not kept perfectly centered, unlike in most camera systems.
These systems are beautifully executed in the game, but they are not a hard concept to grasp when some thought is put into the idea.  A camera such as this could be set up in Maya with the use of locators in different positions around the current level.  These could then be exported, and simple interpolations could be implemented between each of the locators.  As the player moves about the level, the camera will smoothly pan from one location to the next, giving a very cinematic, cutscene-like feeling to the levels.  Although fixed cameras do have their perks, many games have been implementing the second form of camera system: dynamic cameras.
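A minimal Lua sketch of that locator idea might look like the following, where progress is an assumed 0-to-1 measure of how far through the level the player is.  The locator positions are made-up sample values, as if exported from Maya.

local locators = {
    { x = 0,  y = 5, z = -10 },
    { x = 20, y = 6, z = -12 },
    { x = 40, y = 8, z = -8  },
}

local function lerp(a, b, t) return a + (b - a) * t end

-- Pick the pair of locators around the player's progress and blend them.
local function fixedCameraPos(progress)
    local span = (#locators - 1) * progress
    local i = math.min(math.floor(span) + 1, #locators - 1)
    local t = span - (i - 1)
    local a, b = locators[i], locators[i + 1]
    return { x = lerp(a.x, b.x, t), y = lerp(a.y, b.y, t), z = lerp(a.z, b.z, t) }
end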

These cameras, unlike their counterparts, are definitely harder to understand when it comes to implementing them in games.  As discussed earlier, dynamic cameras are those that follow the player in a 1:1 manner.  If the player moves forward on the X axis, the camera will also move forward on this axis.  Since this is the case, the designer of a game has to ensure that this camera style fits the mood of the game they are trying to portray.  If a cinematic feel is what they are after, they should probably stick with the fixed system.  This camera style does have the advantage of putting the player directly into the action, however, through the use of the first-person camera.  The first-person camera is great for games where the player wants to take on the role of an entity in the game, as the camera view is through the eyes of that character.  This is relatively easy to set up, as the camera's position is centered on the head area of the character, looking in the same direction that the eyes would be.

Through the eyes of the hero - First Person View


Setting up a camera location for the other form of dynamic camera, third person, is another story.  The third-person camera shows the player from an over-the-shoulder perspective.  Most or all of the character's body is in view, which generally gives the player a better sense of the space the character occupies in the world.  This can be difficult to implement, however, as the designers have to be very aware that the camera is actually slightly behind the player's physical entity in the worldspace.  This can cause problems when a character runs toward the camera while simultaneously running towards a wall, causing the camera to malfunction or show the character through the wall.  These are the challenges facing those who decide to implement a third-person camera.

Many fixes have been implemented into games over time, however, and some work much better than others.  One approach that has always thrown me off personally is having the camera reset to the behind-the-back position upon coming into contact with a wall, while still allowing the character to keep running.  This always disoriented me and made for a very unenjoyable gaming experience.  A more successful fix for this problem involves having the camera travel upwards along the wall so that the player is looking down at the character.  This doesn't have the disorienting effect of the other method, and it still allows the player to keep their thumbstick pointed in the same direction.
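Here is a Lua sketch of that climbing fix, assuming the engine provides some raycast helper that reports whether the line from the desired camera spot to the character is blocked.  Both the helper and the offsets are illustrative assumptions.

-- Place the camera behind the character, then climb while the view is blocked.
local function thirdPersonCamera(character, backOffset, raycastBlocked)
    local desired = {
        x = character.pos.x - character.forward.x * backOffset,
        y = character.pos.y + 2.0,  -- assumed over-the-shoulder height
        z = character.pos.z - character.forward.z * backOffset,
    }
    -- Rise along the wall in small steps until the character is visible again.
    while raycastBlocked(desired, character.pos) and desired.y < character.pos.y + 10 do
        desired.y = desired.y + 0.5
    end
    return desired
end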

Hopefully this blog has helped to outline the importance that cameras carry when being implemented into a game.  Although a bad camera system may just show the player what is occurring on the screen, I believe that a great camera system can involve the player on a whole new level, making them feel a part of the action taking place in front of their eyes.










Thursday, 28 November 2013

Scripting: The What and the Why.



Recently in one of our lectures presented by Dr. Hogue, we went over an exciting topic that is sure to benefit our game creation dramatically.  This topic was, of course, scripting!  He went over the various uses of scripting, the different types of languages used for scripting, and how we as students will be able to incorporate scripting into our games to save time and effort.  For this blog post, I will first be outlining what scripting is, then covering some of the scripting languages currently in use, and finally finishing off with how our GDW group will be using scripting to our benefit!

Up until this point in our education, coding in C++ has been a pretty straightforward task.  We write the code in our .cpp and .h files, then compile the program and run it to see if it functions.  It is a system that has worked for all these years, even though it is inefficient.  If a small change is made in the code and the programmer wishes to test it, the project must be re-compiled all over again to see if the change was a success.  For smaller projects, this time does not seem like a large burden to the programmer; however, when a team is assembling a large project (such as our games), having to re-compile after each change can mean substantial amounts of down time.  To help combat this problem, scripting was introduced to the mainstream programming scene.

Scripts in themselves are generally small text files written separately from the main code base.  These small files are then loaded and run at run time, producing some result for the user.  This allows for the rapid development of minor changes in the project, while also keeping the engine's core behaviours simpler for the programmer.  For a non-programming type like myself, this also meant that I could write code in a scripting language, which I found much easier to understand and interpret for my small tasks.
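As a small example of the workflow, imagine the engine keeps its tunable gameplay numbers in a Lua file; the file name and values here are purely illustrative.  A designer edits the file, the engine re-runs it, and no C++ recompile is involved.

-- tuning.lua might contain nothing more than:
--   jumpHeight = 1.2
--   walkSpeed  = 4.0

-- Engine side: re-run the script whenever the designer saves a change.
local function reloadTuning()
    local chunk = assert(loadfile("tuning.lua"))  -- parse the script file
    chunk()  -- execute it, refreshing the global tuning values
end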

Now before a script can be written, the user must realize that there are two different types of scripts that can be written.  These scripts are:

Interpreted

Interpreted scripts are usually intended for smaller tasks, as the process of "sticking together" a succession of compiled programs is not very practical for larger projects.  An interpreted script is a program in which a logically sequenced series of operating system commands is handled piece by piece by a command interpreter.  The interpreter goes through the scripting code line by line, examining what is occurring at each stage.  Each line is parsed, interpreted using the operating system, and executed as it is passed over.  This is why larger systems generally stray away from the interpreted scripting method.

Compiled

Compiled scripts are what programmers use when they need to handle a larger task, but still do not wish to compile the project as a whole because of time constraints.  Similar to how C++ works, this style of scripting needs a compiler that creates bytecode from the written script.  This bytecode is written in a special format that the interpreter can load, allowing it to understand what is occurring in the script.  From this stage, the bytecode is loaded into a virtual machine residing in the scripting application.  While this bytecode is being loaded in, the project can run piece by piece, saving the programmer their most valuable resource: time!
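Lua itself makes this pipeline easy to see.  The snippet below compiles a source string into a chunk, dumps it to bytecode, and then loads the bytecode straight back into the virtual machine without re-parsing the source.

local source = "return 2 + 3"
local chunk = assert(load(source))           -- compile (loadstring in Lua 5.1)
local bytecode = string.dump(chunk)          -- serialize the compiled chunk
local fromBytecode = assert(load(bytecode))  -- the VM loads bytecode directly
print(fromBytecode())                        --> 5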

Since we had all of the benefits of scripting sitting in front of us, we as a studio decided that using scripting in our program would be the most efficient method of completing this project on time.  Without the fear of needing to make large changes to the code to justify re-compiling, we as a group figured we could have a semi-flexible game idea that allowed for constant modification and testing whenever a new idea was formed.  The only issue we had was deciding which scripting language we would want to use.

Now, we planned on using scripting to cover the majority of our workload.  This would include using scripting for things such as previewing character animations, testing level design layouts and concepts, as well as previewing different mechanics for our game and how they would function in code.  Since we knew this all had to fit within the abilities of our scripting language, we needed to make the smartest decision.  In class, Dr. Hogue outlined many of the different languages used today.  The list was as follows:
  • Lua
  • Python
  • GameMonkey
  • Squirrel
  • AngelScript
  • SpiderMonkey
  • Ruby
  • Game Maker Language
  • UnrealScript
Needless to say, we had a tough choice to make.  Since our programmer James was going to be using the language the most, we first and foremost wanted to be sure that he was comfortable working with the language we chose.  So, after careful deliberation and some time looking into each language, we all settled on the decision to use the Lua programming language.
   
Even though it is not the most exciting logo
Since we were starting with a fresh new language that we had little to no skill in, we decided that our first task should be to find suitable knowledge sources.  As it would turn out, the Lua website provided many of the resources we needed.  In its documentation section, the website provides many source documents, including topics such as the implementation of Lua and the design of the programming language as a whole.  These sources were a great benefit to us, as they gave the team a starting point to work from.  When it came to specific problems we ran into during production involving Lua, YouTube came in to save the day.  Many users have uploaded visual demos that helped us work through our problems when the documentation did not cover the issue, or was too difficult for us beginners.

An example of an issue our team came across very early on in the use of Lua was the lack of libraries included in the Lua pack itself.  Without these libraries, there would really be no use in using Lua, as our scripts would not properly compile and run.  So, the team was faced with yet another choice: do we attempt to create our own libraries, or do we seek out a third-party pack to download?  While we were discussing this among the group, James was able to put the issue to rest by locating a pack of Lua libraries that would successfully compile and run our scripts.

Although it seems like a lot of work to learn and implement scripts into our games for the year, it has truly proven to be an invaluable resource.  Not only has it saved the team a substantial amount of compilation time, but it also allows us to rapidly implement, test and either scrap or keep new ideas for our game.  The benefits have far outweighed the cons, and I am grateful that this knowledge was included in the teachings for our Game Engines class.

Tuesday, 26 November 2013

My Favorite Game Engine... The Source Engine!


For as long as I can recall in my gaming history, I never really took an interest in how game engines worked.  As a child, if I could throw a game into my computer and have it run at 20 FPS, I was a happy camper.  Since I grew up with a sub-par PC at home, I found it difficult to run many of the higher-end games that always looked so great on my friends' gaming PCs.  I had lost all hope of running one of these graphically intense games until, in 2005, I picked up Half-Life 2 from a developer called Valve.  When I popped the game into my CD drive, I was stunned at the look of the game and how well it was running on my PC!  At this point, I wanted to know what was so special about this game that it could run so smoothly on my not-so-great system.  This hunt led me to an article on the then year-old Source engine.  I instantly fell in love and began picking up any game I could find that was using this great technology.

As you may be able to guess, I still love the Source engine today, so I found it fitting to do a blog post on an engine that I still use to this day.  In this post, I am going to be outlining a bit of the history behind the Source engine, some features relating to its design and use, as well as what Valve has planned for this incredible technology in the future.

I still cannot believe I could run this on my machine!
A HISTORY OF THE SOURCE ENGINE

So before I get into the technical aspects of the Source engine and the incredible features contained within it, I thought it would be fitting to give some background on the engine and its creation.  The first deployment of the Source engine was in 2004, when it was released in tandem with Counter-Strike: Source.  Players were amazed at the optimization of the engine and how well the game performed, even on lower-end systems.  Released shortly after CS:S was Half-Life 2, which featured even more realistic character models, animation, facial movements and ragdoll physics.  Valve was praised for this new Source engine; however, it would not have been possible without the GoldSrc engine, which is itself a very heavily modified version of the Quake engine from years past.  This use of the GoldSrc engine can even be seen in code snippets from both Half-Life 2 and CS:S.  Although Valve did give credit to the GoldSrc engine, initially referring to their engines as GoldSrc and Source, it was at an E3 conference in 2003 where the name "Source" became the one officially used by the public and Valve employees.  Over the years, the designers and programmers at Valve have taken on the task of replacing the original GoldSrc engine code with their own better-optimized code for current consoles and PC hardware.

Since the Source engine is an ever-changing staple of Valve's company, they needed a way to constantly update its features for the players, to always improve on their gaming experiences.  To accomplish this task, Valve decided to use their already successful Steam digital distribution platform to constantly feed players new updates, as well as fixes that may be required for certain games.  This has allowed the Source engine to constantly evolve, and even be used in upcoming next-generation game titles such as Titanfall, set to be released in 2014.  Now that you know a nice chunk of the history behind the engine, let's get into more of the technical aspects and what this engine is capable of.

Not a bad looking Logo either!

THE MEAT AND POTATOES OF THE ENGINE!



Since the creation of the Source engine back in 2004, many different technologies have been developed for implementation in the engine, as well as removed when they have become outdated.  What I am going to be discussing is what is currently present in the Source engine, thanks to an information licensing sheet released about the engine.  To start, I thought I would touch on some of the assets included in the Source renderer.  These features include Direct3D rendering on PCs and Xbox systems, with support for the multiple CPU cores included in these systems.  HDR, or High Dynamic Range, rendering is also included, as well as bump mapping on all models in the game.  For rendering the lighting in their games, the Source engine uses pre-computed radiosity lighting, as well as dynamic shadow maps on character and object models.  My favorite part of the engine, the physics engine, was derived from the Havok physics engine that has been in use for years.  This physics engine is network-enabled and very bandwidth-efficient, so as not to disrupt players.  Among these major technologies, the Source engine also contains water flow effects, dynamic 3D wounds, cloth simulation, an advanced particle system, as well as a whole host of other features that are exciting to discover.

One of the key areas of the engine that I really wanted to focus on was the animation of characters' faces and actions, as well as the material system included.  To start, though, I am going to outline some of the key ways that Valve created the lifelike characters that we all fell in love with.  Using the Source engine, the designers were able to create characters with believable character traits, as well as interactive and intelligent responses to character movements and actions.  The Source engine has a realistic eye system included, which allows NPCs to focus on either the character or different objects, not just parallel views, while also adding the realistic eye bulge that is present in humans.  The simulated muscularity of the characters conveys emotion in speech and body language, while the included Faceposer can be used to craft more dramatic emotions and lip syncing.  Another animation asset included is the skeletal and bone system, along with the layered animation system.  When used together, these two features can synthesize complex animations out of several pieces of data.  Now we come to the material system used in the Source engine.  Valve really took everything into account when creating this system, since instead of traditional textures, each object is given a set of materials to define what the object is made of and what texture to use.  These materials are important, as they define what the object will do when fractured or dragged along a surface, as well as its mass and buoyancy.  It is a very interactive system to use and can even have effects on NPCs or objects, such as mud slowing a character, or ice causing them to slide.  I was really intrigued by how this system works, and the games show how well it works.
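To illustrate the spirit of that material idea (not Source's actual .vmt material format), here is a tiny Lua table where each material carries gameplay-relevant properties, with movement effects like the mud and ice examples above.

-- Illustrative material definitions; field names are assumptions.
local materials = {
    mud = { friction = 0.9,  density = 1.6, moveSpeedScale = 0.6 },
    ice = { friction = 0.05, density = 0.9, moveSpeedScale = 1.1 },
}

local function surfaceSpeed(baseSpeed, materialName)
    local m = materials[materialName]
    return baseSpeed * (m and m.moveSpeedScale or 1.0)
end

print(surfaceSpeed(4.0, "mud"))  --> 2.4 (mud slows the character down)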



Since we have discussed some of the background elements of the engine that players know are there but don't give a second thought to, let's discuss some of the features we interact with all the time.  These features, of course, are the multiplayer, the AI, the sound, and the UI implemented in the games using the Source engine.  Since we are touching on the multiplayer first, I thought it would be fitting to discuss how robust the networking code of the Source engine is, as it allows support for up to 32 players in both LAN and internet-based play.  Not only this, but a complete toolset is included for use by level designers and character animators, and for the creation of demos for players to experiment with.  Since collision can sometimes run into faults in multiplayer play, the Source engine includes a prediction analysis system that is used for interpolating collisions and hit detection of players and objects.  This makes for a much smoother online play experience for all players.  Having tested it myself many times, I can say that the multiplayer in the Source engine has worked flawlessly for me over many different games and consoles.

The AI in the Source engine is what truly helps to bring many of Valve's cult classics to life.  This is evident in games such as the Half-Life series, where the player is able to form strong, meaningful bonds with characters such as Dr. Kleiner and Alyx Vance.  This is thanks to the advanced AI included in the Source engine that allows for these meaningful interactions.  The AI includes features like graphical entity placement, letting level designers quickly control the game environment, and sophisticated navigation that allows characters to run, jump, climb, and take cover.  The AI also uses senses to see, hear, and smell, and can determine whether an entity is friend, foe, or just an object.  The one feature of the AI that truly stuck out to me, however, was the squad AI.  Upon finding survivors, they will join your party and behave like a real military unit.  The characters operate together, knowing when to advance, take cover, suppress the enemy and retreat to different cover.  It is a really immersive feature of the Source engine and helps to further the realism of the game.

The Source engine fully supports 5.1 surround sound, as well as four-speaker surround.  A high-quality 3D spatialization effect is also included to give the player a sense of the distance and direction of enemies.  The engine also has a pre-authored Doppler effect, as well as support for streaming audio on any wave.  The audio really helps to bring the fights to life, especially when alien weaponry is involved in games such as Half-Life 2!

Finally, we come to the user interface of the Source engine.  The Source engine uses a very simplistic approach when it comes to its UI; however, it successfully gets the job done.  The main area where the UI's simplistic nature works well is the multiplayer server browser.  The server browser displays all of the currently active servers for the player to choose from, as well as information pertaining to each.  From this screen, the player is able to filter different settings, pick favourites, and see past servers to help with their choice.  The messenger in the Source engine can also be used here for chatting with friends, as well as joining servers that friends are currently playing in.  It's a great interface, and although I struggled with it at first, I now know why the Source engine includes this UI in all of its games.



WHERE IS THE SOURCE ENGINE HEADED

Now, as I have previously mentioned, the Source engine is constantly being updated with new content to help further the gamer's experience; however, there are a few different technologies Valve is working on to improve Source.  The first of these is the development of new content authoring toolsets.  Since Valve has received a heavy amount of criticism over their Source SDK toolset, in regards to it being outdated and too difficult to use, Valve has invested heavily in creating these new sets.  The new toolsets will allow content to be created faster and more efficiently, as even Gabe Newell himself has called the current tools "sluggish and very painful to use".

There has also been confirmation that Valve is creating a brand new engine, which they have named the "Source 2" engine.  The engine has been in development for some time now, and currently Valve is waiting for an appropriate title to launch the engine with, similar to how they launched Source with Counter-Strike: Source.

The final piece of technology that Valve has been attempting to integrate into the Source engine is an image-based rendering system.  The system was due to release back in 2004 with Half-Life 2; however, it was struck off the list before the release date was set.  Currently, Valve is attempting to integrate this feature into the Source engine, as it would allow support for very large scenes that are not currently possible with polygonal objects.



As I stated before, Valve's Source engine is still my favorite engine after so many years of gaming.  The ease of access, the modding capabilities and the optimization are what give players like myself such joy in using it.  I cannot wait to see what Valve has in store for this engine and what exciting technologies and games they will be able to release in the coming years.

Half-Life 3 perhaps? 




Sunday, 24 November 2013

Insomniac and Navigation, an Analysis Blog


Since the beginning of the video game revolution, one aspect of games has almost always been necessary to create an enjoyable experience for the player.  This game component is, of course, artificial intelligence.  Without this crucial element, current-generation games would not be possible, as non-player characters would have no concept of what action to initiate to counter the player.  What I feel is at the core of video game AI is how NPCs navigate throughout the game space provided.  In one of our lectures with Dr. Hogue, we discovered how navigation is handled in the Insomniac Engine from Insomniac Games.  The main focus of the presentation was how the navigation of NPCs (mainly enemies) can influence the immersion presented to the player.  It was a very informative talk that covered many aspects of why level navigation is becoming increasingly necessary, as well as how it has improved as our knowledge of AI deepens.

The lecture presented to our class was designed by one of the employees of Insomniac Games.  This employee went into great detail about the navigation systems used by the engine, and which games from Insomniac's AAA lineup they applied to.  In this blog, I will attempt to give my understanding of the talk given by Reddy Sambavaram and discuss how I feel these systems work and what sort of impact they have on the player's gaming experience.  To analyse this, I will begin with the earlier works of Insomniac, Ratchet and Clank, and slowly progress through their timeline of navigation development with games such as Resistance: Fall of Man, Resistance 2 and Resistance 3.  Before any of this, however, I will first touch on why navigation is necessary in games and what players can take away from a system that performs well.  So without further delay, let's begin!


We as gamers are always looking for the next best game to be released, whether it be an FPS, RTS, RPG, or even a simple puzzle game.  We are always looking for games that will make us feel immersed in the world, games that will transform these character models into living, breathing entities.  For this realism to occur and attachments to be formed, the game must have solid AI.  For this AI to work, however, the NPCs must move and act in a way that brings their characters to life.  When this navigation functions well, the player will feel involved and will feel as though the NPCs belong in the world.  However, when glitches and errors occur in the navigation systems, the player's immersion can be broken, causing the game to lose its magic appeal.  This is one of the main reasons that game developers such as Insomniac are investing a large portion of their time and energy into creating a navigation system that feels very natural to the player and operates without them realizing what is taking place.

In some cases, AI is a large and cumbersome task for level designers and developers.  Having to input commands for an NPC to move from point A to point B, all the while avoiding obstacles C, D and E, is very expensive in the time it takes to implement from a designer's standpoint.  Even when this is completed, the designers must rigorously test the system to ensure that the animations flow realistically while following the path laid out for the NPC.  Noticing this problem, many companies, including Insomniac, are trying to move in a more designer-friendly direction by making the movement of NPCs a more dynamic action, instead of being heavily scripted.  Many variations have come and gone in the industry, including sphere- and box-based approaches, point-and-connection placement by the level designer, a 2D grid approach, as well as the widely used A* pathfinding algorithm.  Although these methods all have their pros and cons, Insomniac has moved towards a mesh-based system, which allows a mesh to be placed over the current game geometry to outline where NPCs can and cannot move.  It is not a perfect system (sadly, no perfect system has been discovered yet), but it does offer many benefits for Insomniac's titles.

An Enemy that would need to navigate in Ratchet and Clank: Deadlocked
RATCHET AND CLANK: DEADLOCKED

Now, before I get into the navigation mesh method touched upon above, I felt it fitting to include a short summary of how Insomniac handled NPC navigation before the introduction of these meshes.  In Ratchet and Clank: Deadlocked, the enemies did not use any of the navigation methods currently used by Insomniac.  Instead, they used a way-volume representation system with connection nodes in between.  The way this system works, while simplistic, did the job at the time.  NPCs would inhabit these way-volume spaces, performing basic animations such as idling, seeking the player, and, when necessary, moving.  To move from area to area, however, the enemy would need to choose one of several adjacent connections.  These connections were all previously laid out and designed by the team to give the most realistic movements for the NPCs, while still restricting where they could move.  The system used the volumes as nodes in an A* graph to determine which connection should be used to move in a direction to face the player.
An outline of the way-volume connection system.
Although it worked, at times the movement did not feel natural and enemies would end up in positions that were clearly not advantageous to their goals, breaking the immersion.  Insomniac decided to combat this issue with the introduction of their Navigation Mesh (nav-mesh) system in their new PS3 title, Resistance: Fall of Man.






RESISTANCE: FALL OF MAN

Since Resistance: FOM was released on the PS3, the opportunities for improvement pertaining to navigation had grown in size for the team at Insomniac.  The decision to implement the nav-mesh system in this title truly led them on a path towards more intelligent AI and a more immersive gaming experience.  For the production of this title, the designers certainly had a tough task ahead of them.  For a level to be designed for NPC navigation, the level designers were tasked with taking each area NPCs could inhabit and laying out a nav-mesh in Maya.  At runtime, tools in the engine would take these meshes and convert them to convex poly meshes.  After much time in the designing stage of the levels, the system actually functioned pretty well.  This was because the polys of the mesh were treated as though they were nodes in the A* pathfinding algorithm, allowing NPCs to move between separate nodes based on their needs and behaviours.  It was a great revolution for Insomniac's nav system; however, the team ran into issues occurring on the PPU when navigating eight NPCs, as well as with their distance-based AI LOD restriction.  This sent the team back to the drawing board to think of ways to fix these issues for their upcoming title, Resistance 2.
Nav-Mesh being implemented in Maya
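Since the polys act as A* nodes, a compact Lua sketch of the search looks like the following.  It assumes each poly exposes a center position and a neighbors list, and uses a simple linear scan instead of the priority queue a production engine would use.

local function heuristic(a, b)
    local dx, dz = a.center.x - b.center.x, a.center.z - b.center.z
    return math.sqrt(dx*dx + dz*dz)  -- straight-line distance between poly centers
end

local function astar(start, goal)
    local open, cameFrom = { [start] = true }, {}
    local g = { [start] = 0 }
    local f = { [start] = heuristic(start, goal) }
    while next(open) do
        local current
        for node in pairs(open) do  -- pick the open node with the lowest f score
            if not current or f[node] < f[current] then current = node end
        end
        if current == goal then
            local path = { current }  -- walk back through cameFrom to build the path
            while cameFrom[current] do
                current = cameFrom[current]
                table.insert(path, 1, current)
            end
            return path
        end
        open[current] = nil
        for _, n in ipairs(current.neighbors) do
            local tentative = g[current] + heuristic(current, n)
            if g[n] == nil or tentative < g[n] then
                cameFrom[n], g[n] = current, tentative
                f[n] = tentative + heuristic(n, goal)
                open[n] = true
            end
        end
    end
    return nil  -- no path exists between the two polys
end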

RESISTANCE 2

Moving into their new Resistance title, Insomniac had a few goals they hoped to achieve to alleviate the issues that plagued Fall of Man.  The team hoped to fix the PPU bottleneck that occurred with multiple NPCs, as well as fix or remove the AI LOD restriction, while simultaneously adding in support for a 9x nav-mesh poly load.  To start, Insomniac decided that by moving the processing of the nav-mesh system to the SPUs, they would be able to cure many of the problems associated with the PPU hitting a bottleneck.  With this issue solved, the team came to the conclusion that the convex poly design of their mesh system was not well suited to A* pathfinding.  To resolve this, the team implemented a triangulated system, giving the navigation system fewer paths to worry about when moving NPCs.  It was also necessary to use tri-meshes and tri-edges as A* nodes, to ensure that the shortest path was always being discovered.  The final change made to the mesh system was a jump-up-and-down parameterization.  This would allow enemies the ability to climb certain objects to access more advantageous cover against the player.  To achieve this, the team added different coloring of meshes based on height and accessibility.

Nav-mesh with the height addition


With all of these changes being made to the mesh system, the team at Insomniac needed to develop a new pathing system for the NPCs.  At first, the team attempted to introduce a hierarchical pathfinding system.  This system, however, came with its issues, as at times a high-level path would be established without a low-level path being present.  To fight this issue, the team implemented a path caching system to try to determine if there was a start and end point within the "successful" path.  Although the system did eventually function as the team had predicted, it just did not save enough time during pathfinding to justify being added into the system.  For this reason, the team scrapped the idea of hierarchical pathfinding and instead stuck with the A* method, making slight changes such as parameterizing path queries to allow the use of selective nav-meshes by NPCs.


Another key element fixed during the development of Resistance 2 was pathing around corners, or the "bend point" of a turn.  The issue during production was that NPCs generally had to be running around a corner for the animation to play in a semi-smooth-looking manner.  The enemies were not slowing down realistically during their turns, as the string-pull approach implemented at the time did not allow for it.  To combat this aesthetically unappealing look in the animation, the designers decided to add a Bezier curve at the bend point.  This allowed the enemies to arrive at the turn, target the midpoint of the Bezier curve (in comparison to the bend point in the old method) and slow their actions so the animation appeared realistic and smooth.  To allow many NPCs to complete these animations and turns at the same time, the team made sure to keep the steering system very lightweight, so as not to bog down the engine.
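The Bezier trick itself is simple enough to show directly.  A quadratic curve uses the old bend point as its control point, and the NPC steers toward the curve's midpoint instead of the sharp corner; the coordinates below are made-up sample values.

-- Quadratic Bezier: blend entry point, control (the old bend point), and exit.
local function bezier(p0, control, p2, t)
    local u = 1 - t
    return {
        x = u*u*p0.x + 2*u*t*control.x + t*t*p2.x,
        z = u*u*p0.z + 2*u*t*control.z + t*t*p2.z,
    }
end

local entry, corner, exit = { x = 0, z = 0 }, { x = 5, z = 0 }, { x = 5, z = 5 }
local mid = bezier(entry, corner, exit, 0.5)  -- the midpoint the NPC targets
print(mid.x, mid.z)  --> 3.75  1.25 (inside the corner, so the turn is rounded)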



The final change the team made in Resistance 2 was the use of the mesh to avoid obstacles in the environment while NPCs travel along their paths.  Each NPC was given a set of escape tangents for the objects in the game, which, when used, modify the path traveled to avoid collision with the object in the worldspace.  As shown in the image below, when a capsule and a box object are loaded into the world, each object is given two different tangents to follow so as not to collide.  Based on the path of the NPC discovered using A*, the system makes a decision on which tangent path to use to avoid these obstacles.  This system came with a couple of flaws, however, as enemies were still getting stuck in objects, and enemies were being perceived as objects in the queries of other enemies, causing 80% of the time to be spent on object avoidance alone.  To combat enemies becoming stuck in objects, the system performed a sweep check during the course of every collision check.  If the enemy would still collide with an object, the system would add 90 degrees to the path to avoid the collision.  To help with the time burden of enemies seeing each other as obstacles, the designers introduced a special cache they called the "Grim Cache" (Grims being in-game monsters).  Since the Grims all shared the same threshold tolerance they had to maintain for boundaries, they could function as a unit instead of individual entities, sharing information between each other on obstacles and which paths to take.  It was an ingenious system that ended up fixing the issue.
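A simplified Lua take on the escape-tangent choice might look like this: each obstacle offers a left and right detour point, the NPC picks whichever is closer to its planned path, and the 90-degree rotation from the text serves as a last resort when the sweep check still predicts a collision.  The obstacle fields and the sweepCollides helper are assumptions.

local function chooseTangent(obstacle, pathPoint, sweepCollides)
    local function d2(p)
        local dx, dz = p.x - pathPoint.x, p.z - pathPoint.z
        return dx*dx + dz*dz
    end
    -- Prefer whichever escape tangent lies closer to the A* path point.
    local pick = d2(obstacle.leftTangent) < d2(obstacle.rightTangent)
                 and obstacle.leftTangent or obstacle.rightTangent
    if sweepCollides(pick) then
        -- Last resort: rotate the detour 90 degrees around the obstacle.
        local ox, oz = pick.x - obstacle.pos.x, pick.z - obstacle.pos.z
        pick = { x = obstacle.pos.x - oz, z = obstacle.pos.z + ox }
    end
    return pick
end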






RESISTANCE 3

Since so much was changed during the course of Resistance 2, there was not a lot of work to be done on the navigation system when it came to Resistance 3.  However, the team at Insomniac was very adamant about releasing the caches developed at the end of Resistance 2 in a more formal fashion.  To accomplish this task, the team split the SPU work into three different passes.  The first of these passes was responsible for accumulating a list of all of the objects that would be in the path of moving NPCs.  Next was for the SPU to gather all of the boundary information of the obstacles and ensure that no NPC would get stuck in an object, or avoid it in an unrealistic fashion.  Finally, the last pass would review the second step of the process and ensure that all of the tangents for the NPCs were set correctly and avoiding all the objects.

One of the more exciting and visible features that the team included in Resistance 3 was the use of custom links to further support the vertical movement of enemies.  To allow the designers to add these custom links for things like jumps, ladders, etc., the team implemented a box system for links instead of points on the mesh.  These boxes were more adaptable when it came to level changes, and allowed the designers to place a link and not have to worry about it at a later point in time.  This worked really well with the A* approach, as these edges (or clue edges) allowed the NPCs a more dynamic movement style instead of linear, on-the-ground movement.  In Resistance 3, we can see enemies jump from one rooftop, across a clearing, to another, all because of these custom links implemented by Insomniac.  It helped to really enforce the personality of the NPCs, as it gave the illusion that they were thinking and communicating while trying to get to the best position to thwart the player's efforts.


An NPC uses the custom link to climb the wall!
CONCLUSION

Before learning about how Insomniac handles navigation, I was unaware of the challenges that come along with moving an NPC from point A to point B.  With the evolution of AI and navigation, we as players are starting to realize the potential for games and how NPCs can be given real emotion through their actions.  I cannot see this area of the industry slowing down, and I cannot wait to see what the big-name companies will bring to the table in the coming years.