Saturday, March 05, 2011

GDC 2011 - Friday

Friday was the last day of the show. I started out the day by attending a session about the LOD generation system in Halo: Reach. For characters that are very far away from the camera, they automatically generate a mesh that's only about 12% of the poly count of the original. The generation process involves creating a voxel version of the model to ensure uniform tessellation, and then simplifying that. They bake material properties such as diffuse color and specular brightness into the vertices, which means the low-res version doesn't use any textures and doesn't need a pixel shader.

Next, Chris Tchou spoke about some of the effects tech that he created for Halo: Reach. First he showed the lightweight particle system he wrote that allows for a high number of small particles. It all runs on the GPU, so it's really fast, and they're even able to calculate particle collisions using the depth buffer. Next, he talked about the energy shield effect and showed several different variations that could be created with it. The effect is created using hull meshes extruded out from the base mesh. They then use the depth buffer to make the parts of the hull that cover the character more transparent and the parts around the silhouette more opaque. Finally, Chris showed how they were able to create atmospheric effects by sorting the smoke and other large particles into buckets. Particles closer to the camera (that filled a large part of the screen) were rendered to a lower-resolution buffer to save on fill rate, and then composited back into the scene. The take-away that I got from this talk is that the depth buffer is really useful for all kinds of effects.
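If I were to sketch the depth-buffer collision trick on the CPU in Python, it might look something like this. Keep in mind the real system runs entirely on the GPU, and the names and the simple kill-on-contact response here are my own assumptions:

```python
# CPU sketch of depth-buffer particle collision: a particle whose depth is
# greater than the scene depth at its screen pixel has moved behind geometry.

def collide_particles(particles, depth_buffer, width, height):
    """Remove any particle that has moved behind the scene geometry,
    as read from a screen-space depth buffer."""
    survivors = []
    for p in particles:
        x, y = int(p["sx"]), int(p["sy"])      # screen-space position
        if 0 <= x < width and 0 <= y < height:
            scene_depth = depth_buffer[y][x]   # view-space depth at that pixel
            if p["depth"] > scene_depth:       # behind a surface: collision
                continue                       # simplest response: kill it
        survivors.append(p)
    return survivors

# Tiny example: a 2x2 depth buffer where the right column is a wall at depth 5
depth = [[100.0, 5.0],
         [100.0, 5.0]]
parts = [{"sx": 0, "sy": 0, "depth": 10.0},   # in front of the far wall: lives
         {"sx": 1, "sy": 1, "depth": 10.0}]   # behind the near wall: collides
print(len(collide_particles(parts, depth, 2, 2)))  # -> 1
```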

Next I went to a talk by the team at Nexon in Korea. They created a system that controls the behavior of helper bones in real-time in the game engine instead of baking the behavior of these bones to keyframes. These bones are used to help with deformation problems - wrists, shoulders, pelvis, etc. They wrote code in the engine to mimic the effects of all of the different types of constraints that could be used inside 3ds Max. Then they exported an XML file from Max for each character that described which bones were controlled by constraints instead of with keyframes. Finally, the game engine would read the XML file and apply real-time constraints to these bones. Using this technique, they were able to significantly reduce the size of their animation data. It also made it easy to adjust the behavior of these bones, since they could make a change once instead of having to re-export all of their animation data after every adjustment.
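Here's my rough guess at what that pipeline might look like in Python. The XML layout and the simplified single-axis "orient constraint" are invented for illustration:

```python
# Sketch of the Nexon approach: an XML file exported from Max describes which
# helper bones are constraint-driven, and the engine evaluates the constraints
# at runtime instead of reading baked keyframes for those bones.
import xml.etree.ElementTree as ET

RIG_XML = """
<character name="soldier">
  <constraint type="orient" bone="wrist_helper" source="hand" weight="0.5"/>
  <constraint type="orient" bone="elbow_helper" source="forearm" weight="0.3"/>
</character>
"""

def load_constraints(xml_text):
    """Read the per-character constraint descriptions exported from Max."""
    root = ET.fromstring(xml_text)
    return [{"bone": c.get("bone"),
             "source": c.get("source"),
             "weight": float(c.get("weight"))}
            for c in root.findall("constraint")]

def apply_constraints(constraints, pose):
    """pose maps bone name -> rotation angle (degrees, one axis for brevity).
    Each constrained helper takes a weighted copy of its source's rotation."""
    for c in constraints:
        pose[c["bone"]] = pose[c["source"]] * c["weight"]
    return pose

# Animation data only carries keys for the "real" bones; helpers are derived.
pose = {"hand": 90.0, "forearm": 40.0}
apply_constraints(load_constraints(RIG_XML), pose)
print(pose["wrist_helper"])  # -> 45.0
```

The data-size win falls out naturally: the exported clips never store curves for the helper bones at all.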

After lunch, I attended a talk by Andrew Gordon from Pixar. Andrew talked about traits that make a good animator and the process of creating successful animation. The talk felt a bit rushed since he was working from notes that he usually used for an all-day presentation. However, his material was fantastic. He talked about having a good attitude, taking critiques, doing lots of preparation, and having a good knowledge of design, weight, and physicality. He talked about noticing all of the little things that people do - subtle gestures and weight shifts - and incorporating those things into your work.

I ended the conference by attending the tech artist round table run by Jeff Hanna. I really love attending the round table since it's a room full of people who think like I do and face similar challenges every day. They're a great group to talk to and it's fun to discuss all of the issues we're dealing with.

This year's GDC was amazing. I can't wait to get home and start experimenting with some of the ideas I picked up during various talks.

Friday, March 04, 2011

GDC 2011 - Thursday

I started out the day by attending a session about creating great characters. It was presented by Matthew Lund from Pixar. The first part of the talk was about how to "find" the character. Matthew suggested that in order to create a good story, you first have to develop the character and define who he is. Once this is done, you can create a story around things that happen as a result of his personality traits. In the story, the character's fears and passions should be what drive his decisions, and supporting characters should be designed so that they help bring out the character traits in the main character. Over the course of the story, there should be an inner conflict in the character and an outer conflict which is the main plot. Matthew stressed that the crux of the story should be the inner conflict rather than what's happening around the character. In the end, the way that the character is changed as a result of the inner and outer conflict is what illustrates the theme of the story. This was an amazing session and I was impressed by how well Matthew was able to boil down and define exactly what it is that creates a meaningful character and a strong story.

Next, I attended a talk by Jeremy Ernst on the facial rigs he developed for Gears of War 3. The facial rigs have several layers. The first is a low-resolution cage mesh that roughly fits the shape of the character's face. Morph targets are created for this cage for each of the major muscle groups in the face. The next rig layer consists of helper points that are pinned to key locations on the cage mesh and move with it as it is deformed by the morphs. The next layer is called the offset rig. It's a set of control shapes that go along for the ride as the helpers move with the morphs. This layer exists so that the animators can go in and fine-tune things after they've created the general pose with the morphs. Finally, the actual face bones that are used in the game are driven by the offset shapes. The powerful thing about this rig setup is that the whole system can be shared with any character by simply creating a new morph that fits that character's face. When that morph is dialed in, the rest of the rig goes right along with it and fits itself to the new character. Since the animation data is stored as curves on the morph targets, it can be easily transferred from one character to another.
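To make sure I understood the layering, I sketched the morph-driven helper-point idea in Python. All the data and names here are invented, and I've left out the offset rig and bone layers:

```python
# One layer of the idea: morph targets deform a low-res cage, and helper
# points pinned to cage vertices simply ride along with it.

def pose_cage(base, morphs, weights):
    """base: list of (x, y, z) cage vertices.
    morphs: {name: per-vertex (dx, dy, dz) deltas}.
    weights: {name: morph dial value in 0..1}."""
    posed = []
    for i, (x, y, z) in enumerate(base):
        for name, w in weights.items():
            dx, dy, dz = morphs[name][i]
            x, y, z = x + dx * w, y + dy * w, z + dz * w
        posed.append((x, y, z))
    return posed

def pin_helpers(posed_cage, pins):
    """pins: {helper_name: cage vertex index}. Helpers copy their vertex."""
    return {name: posed_cage[i] for name, i in pins.items()}

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
morphs = {"smile": [(0.0, 0.2, 0.0), (0.0, 0.4, 0.0)]}
helpers = pin_helpers(pose_cage(base, morphs, {"smile": 0.5}),
                      {"mouth_corner": 1})
print(helpers["mouth_corner"])  # -> (1.0, 0.2, 0.0)
```

The portability falls out of the structure: swap in a new "fit" morph for a new face, and everything pinned to the cage follows.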

After lunch, I attended a talk by Mike Flaven - a graphics/engine programmer at Volition. Mike talked about a new rendering technique they used for Red Faction: Armageddon called Inferred Lighting. This technique is similar to deferred lighting. In the first pass, several textures are written to g-buffers including z-depth, normal, and specular power. In the second pass, the lighting is computed based on the light sources in the scene and the g-buffer data. Finally, in the third pass, non-light information is rendered, such as diffuse color, reflections, emissive, etc. The advantage of this technique is that lighting complexity is decoupled from scene complexity - so the system is able to handle over 100 light sources without issues. Mike also discussed some clever techniques they developed to render the lighting pass at a lower resolution to gain performance and also correctly handle several layers of transparent objects.
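Here's a toy CPU sketch of that three-pass split. All the data is invented, and I've boiled the lighting down to a single diffuse term per pixel:

```python
# Deferred-style pass separation: geometry attributes first, lighting from
# the g-buffer second, material color last. Two "pixels" stand in for a frame.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Pass 1: "g-buffer" - one normal per pixel (depth/spec power omitted here)
gbuffer_normals = [(0.0, 0.0, 1.0), (0.0, 1.0, 0.0)]

# Pass 2: accumulate lighting per pixel using only the lights + g-buffer,
# so cost scales with lights x pixels, not with scene complexity
lights = [{"dir": (0.0, 0.0, 1.0), "intensity": 1.0},
          {"dir": (0.0, 1.0, 0.0), "intensity": 0.5}]
light_buffer = [sum(max(0.0, dot(n, l["dir"])) * l["intensity"]
                    for l in lights)
                for n in gbuffer_normals]

# Pass 3: combine with non-light material data (diffuse color, emissive, ...)
albedo = [0.8, 0.4]
final = [a * li for a, li in zip(albedo, light_buffer)]
print(final)  # -> [0.8, 0.2]
```

Adding a hundred more lights only touches pass 2, which is the decoupling Mike was describing.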

Next I attended a talk by John Bellomy from Naughty Dog about the structure and format of their animation blend trees and state graphs. The most interesting point I took from this talk is that they define a main motion graph for an NPC, but then on top of that, they're able to define a smaller set of override animations so that they can make an individual character look unique without having to create a whole new graph. Each character can have multiple override sets.
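The override lookup itself is simple to sketch - something like this, with hypothetical state and animation names:

```python
# Shared base motion graph, with per-character override sets consulted first.

BASE_GRAPH = {"idle": "npc_idle_01.anim", "walk": "npc_walk_01.anim"}

def resolve_anim(state, override_sets):
    """Check each override set in order, then fall back to the base graph."""
    for overrides in override_sets:
        if state in overrides:
            return overrides[state]
    return BASE_GRAPH[state]

# A character can stack multiple override sets without a new graph
pirate_overrides = {"idle": "pirate_idle.anim"}
limping_overrides = {"walk": "limp_walk.anim"}

print(resolve_anim("idle", [pirate_overrides, limping_overrides]))  # -> pirate_idle.anim
print(resolve_anim("walk", [pirate_overrides, limping_overrides]))  # -> limp_walk.anim
print(resolve_anim("idle", []))                                     # -> npc_idle_01.anim
```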

The last talk I attended was given by Donald and Geremy Mustard about the art that they created at Chair for Infinity Blade. It was pretty cool to see all of the little tricks they used to squeeze as much graphical polish as possible out of the iPhone and still maintain the frame rate. I was surprised to hear that the iPhone has a ton of graphics memory but is weak on draw calls and fill rate. This meant that the team had to be very careful about the number of objects on screen and particle counts had to be kept low, but they were free to make high-res textures and light maps.

I finished up the day by attending the speakers' reception. It was great to meet several of the people whose talks I have really enjoyed or whose talks I'll be attending on Friday.

Thursday, March 03, 2011

GDC 2011 - Wednesday

I started out on Wednesday by attending the keynote by Iwata-san from Nintendo. The main take-away that I got from this talk is that successful games are creative and unique, not necessarily technically superior. Also, Iwata gave a pretty powerful warning about the number of games being created for mobile devices and how many developers are stressing quantity over quality - a problem that he considers to be dangerous for the industry.

Next I attended a session about some of the animation techniques used in Halo: Reach. I was really impressed with the system they developed for blending between walk and run and turning in place. Their foot fitting system was fantastic for making sure that the feet were always planted nicely on the ground - even when walking on various slopes. And their system for blending between jump animations of various heights and distances to best fit the current jump distance was beautiful.

After attending these two talks, I had lunch and hooked up with Aaron Otstott to prepare for our talk. We went through our notes one last time and put in a few last-minute comments - and we were ready. I felt like our talk went really well. We were able to confidently deliver the material that we had prepared. Aaron and I presented 5 elements from our lighting system that we have added in order to achieve our goals of realizing the artistic vision, separating the characters from the background, and automating the process of quality lighting. We presented in a very large room that was more than 75% full, and when the talk was over, we received lots of insightful questions. Overall I was happy with the experience.

Next, I attended a talk by Dan Baker from Firaxis on a method he helped to develop for creating better specular highlights. It's called LEAN mapping. It's something that I'd really like to investigate further for reducing specular sparkles.

My final talk of the day was given by the art director at Cryptic Studios on the systems they developed for character customization in Champions Online and the Star Trek MMO. This was pretty impressive stuff, and I was blown away by the vast array of options that they provide to users for customizing their characters.

To finish off the day, I hooked up with the other guys from BioWare and we all went out to eat at The Empress of China. This has become a bit of a tradition for me at GDC, and since my old friends from Vicious Cycle weren't here to share it with me, it was fun to take a new group out for the experience.

Tuesday, March 01, 2011

GDC 2011 - Tuesday

The tech artist all-day session today was amazing. I've got my head all full of ideas I want to try and things I want to learn. This is what I love about GDC - it's a major recharge for my creative batteries.

Keith Self-Ballard from Volition started off the day with a talk about why the industry needs tech artists. This was great information to hear - especially since Keith is an art director, not a tech artist himself. Next, Scott Goffman from Blizzard spoke about what tech artists should do to make sure that the tools that we write get used by the artists. He talked about getting the word out that there is a new tool, simplifying the UI so it doesn't look too intimidating, providing documentation, deploying the tool in stages so you can make sure that it's working as intended before expanding the functionality, creating the tool with a clearly defined goal in mind, and knowing and understanding the needs and aptitudes of the tool's target audience.

At that point, I took a detour from tech art and headed over to the mobile games summit to listen to my friend, Donald Mustard, talk about his work on the iPhone game - Infinity Blade. Donald did a great job of presenting his topic and was really fun to listen to. The guidelines that he created when making the game were: 1. You have to be able to play the whole game with just one finger. 2. It has to be designed for super short play sessions but still feel fun and meaningful. 3. Have an original and unique device-specific design. 4. Be easy to learn but hard to master - truly skill based.

After lunch I was back with the tech artists. Seth Gibson gave his talk on personality profiles. He had the audience take a short profile test and then talked to us about the strengths and weaknesses of each personality - what type of work they might enjoy, and what may feel like drudgery to them. This was a pretty insightful topic and got me thinking about the kinds of work I enjoy and what makes me feel successful at work.

Next, Steve Theodore gave his talk about what happens when tools fail. He talked about designing tools compartmentally so that things are broken into small, manageable functions instead of long, mega-scripts. He stressed the importance of documenting your code, and went over various methods for debugging problems.

Adam Pletcher gave a talk about using databases to store and retrieve information. He showed some sample python scripts for writing some information to a database, and gave some examples of uses for the stored information - such as graphing how long it takes to open 3ds Max for the artists over several weeks and months, and keeping track of which artists are using what tools, and which artists are having the most problems with tools.
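Here's a minimal sketch of that kind of logging using Python's built-in sqlite3 module, standing in for whatever database server a studio would actually use:

```python
# Each tool invocation writes a row; the stored rows can later be queried to
# graph things like 3ds Max startup times or per-artist tool usage.
import sqlite3
import time

conn = sqlite3.connect(":memory:")  # a real setup would use a shared server
conn.execute("""CREATE TABLE tool_usage (
                  user TEXT, tool TEXT, seconds REAL, timestamp REAL)""")

def log_tool_run(user, tool, seconds):
    """Call this at the end of every tool run (invented helper name)."""
    conn.execute("INSERT INTO tool_usage VALUES (?, ?, ?, ?)",
                 (user, tool, seconds, time.time()))

log_tool_run("alice", "max_startup", 42.5)
log_tool_run("bob", "max_startup", 61.0)
log_tool_run("alice", "exporter", 3.2)

# e.g. average 3ds Max startup time across all artists
avg = conn.execute("""SELECT AVG(seconds) FROM tool_usage
                      WHERE tool = 'max_startup'""").fetchone()[0]
print(avg)  # -> 51.75
```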

Bryan Moss talked about using video footage as a texture map. For his motorcycle game, he set up a high resolution cloth simulation inside Max, and then used that to render out a series of normal maps. Then he used After Effects to combine these normal maps into a video and apply some post-process touch-ups. Then he applied this "animated normal map" to his character models in the game to make them look like their clothing was getting blown by the wind. This is a really clever technique that could be used for all sorts of things.
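Playing the baked sequence back in the game boils down to mapping time to a looping frame index - something like this, with an invented frame count and rate:

```python
# Pick the current frame of the pre-rendered normal-map sequence from time.

def current_frame(t, num_frames=64, fps=30.0):
    """Map a time in seconds to a looping frame index in the baked sequence."""
    return int(t * fps) % num_frames

print(current_frame(0.0))  # -> 0
print(current_frame(1.0))  # -> 30
print(current_frame(2.5))  # -> 11
```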

Finally, Bronwen Grimes gave her talk on a couple of techniques that she developed for Portal 2 at Valve. First, she talked about how they used Houdini to create flow direction maps that defined the direction that the water would flow in each map. These direction maps are pretty much impossible to create by hand, but Houdini seems to have made it pretty easy. I'd really like to learn how to use this software. It seems like it could open up a whole new world of possibilities for me. Then Bronwen talked about the shader that she wrote that defines the appearance of the gel material that the player can paint on the levels. She came up with a clever method of making the material appear to have bubbles suspended in it - so it looks like real gel with volume and thickness - even though it's really just flat. Brilliant.
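The core flow-map idea - offsetting the normal-map UVs along a per-texel direction over time - can be sketched like this. This is a bare-bones version of my own; the shipping shader presumably does more, like blending two offset phases to hide the wrap-around:

```python
# Each flow-map texel stores a 2D direction; the water shader pushes its
# normal-map UVs along that direction, wrapping the phase every cycle.

def flowed_uv(uv, flow_dir, t, cycle=1.0, strength=0.1):
    """Offset a UV along the flow direction; phase wraps every `cycle` secs."""
    phase = (t % cycle) / cycle           # 0..1, then snaps back to 0
    return (uv[0] + flow_dir[0] * phase * strength,
            uv[1] + flow_dir[1] * phase * strength)

# water flowing in +x, sampled 0.5s into a 1s cycle
print(flowed_uv((0.25, 0.25), (1.0, 0.0), 0.5))  # -> (0.3, 0.25)
```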

I ended the day at the technical artist get-together. It was great to get a chance to talk to several tech artist friends and catch up on what people are working on.

GDC 2011 - Monday

I'm at GDC again this year. On Wednesday I'll be giving a talk about the character lighting in our game. If you're attending GDC, I'd love to have you come. Here's a summary of the talk from the GDC web site:


Monday was the first day of the conference for me. After getting up way too early for a flight to San Francisco, I attended an all-day session on DirectX 11. Speakers were from AMD, Nvidia, Firaxis, BioWare, and Dice. It's exciting to see how much more flexibility the new DirectX offers. There's room to move operations around to the different shaders (geometry shader, hull shader, etc) to fit the needs of the specific application instead of always having to do everything in the vertex shader or the pixel shader. I saw several examples of how moving things out of the pixel shader to earlier in the pipeline can yield pretty significant performance gains. DirectCompute was also a hot topic. It's a way to use the graphics hardware as a more general purpose, multi-threaded compute engine as opposed to just something to put pixels on the screen. A couple of the talks covered ways to optimize techniques like filtering and summed area tables using the compute shader instead of shoe-horning them into the pixel shader as has been done with DirectX 9. Several talks focused on using tessellation - a topic that I'd like to become more familiar with. And the guys from Dice, BioWare, and Firaxis talked about how they're using DirectX 11 in Battlefield 3, Dragon Age 2, and Civ 5 respectively.
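As a refresher for myself, here's the summed-area-table idea on the CPU in Python. On DX11 hardware this kind of build maps naturally onto compute-shader prefix sums over rows and then columns:

```python
# sat[y][x] holds the sum of all pixels at or above-left of (x, y), so any
# axis-aligned rectangle sum becomes at most four table lookups.

def build_sat(img):
    h, w = len(img), len(img[0])
    sat = [[0.0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0.0
        for x in range(w):
            row_sum += img[y][x]                       # prefix sum along the row
            sat[y][x] = row_sum + (sat[y - 1][x] if y > 0 else 0.0)
    return sat

def rect_sum(sat, x0, y0, x1, y1):
    """Inclusive rectangle sum via four table lookups."""
    total = sat[y1][x1]
    if x0 > 0: total -= sat[y1][x0 - 1]
    if y0 > 0: total -= sat[y0 - 1][x1]
    if x0 > 0 and y0 > 0: total += sat[y0 - 1][x0 - 1]
    return total

img = [[1, 2],
       [3, 4]]
sat = build_sat(img)
print(rect_sum(sat, 0, 0, 1, 1))  # -> 10.0  (whole image)
print(rect_sum(sat, 1, 0, 1, 1))  # -> 6.0   (right column: 2 + 4)
```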

This is going to be a great week. I'm looking forward to Tuesday when a group of tech artists is putting on an all day training session on being a technical artist. Should be great! I'll type up my impressions tomorrow.