GDC 2011 - Friday
Friday was the last day of the show. I started out the day by attending a session about the LOD generation system in Halo: Reach. For characters that are very far away from the camera, they automatically generate a mesh that's only about 12% of the poly count of the original. The generation process involves creating a voxel version of the model to ensure uniform tessellation, and then simplifying that. They bake material properties such as diffuse color and specular brightness into the vertices, which means the low-res version doesn't use any textures and doesn't need a pixel shader.

Next, Chris Tchou spoke about some of the effects tech he created for Halo: Reach. First he showed the lightweight particle system he wrote, which allows for a high number of small particles. It runs entirely on the GPU, so it's really fast, and they're even able to calculate particle collisions using the depth buffer. He then talked about the energy shield effect and showed several variations that could be created with it. The effect is built from hull meshes extruded out from the base mesh; they then use the depth buffer to make the parts of the hull that cover the character more transparent and the parts around the silhouette more opaque. Finally, Chris showed how they created atmospheric effects by sorting smoke and other large particles into buckets. Particles close to the camera (which filled a large part of the screen) were rendered into a lower-resolution buffer to save fill-rate, then composited back into the scene. My big take-away from this talk is that the depth buffer is useful for all kinds of effects.
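The shield trick can be sketched in a few lines. This is a hypothetical illustration in plain Python rather than the actual shader code from the talk, and the `fade_range` parameter is an assumption of mine: the idea is just that a shield pixel's opacity is driven by how far the scene behind it sits from the hull surface.

```python
# Hypothetical sketch of the depth-based shield opacity described above.
# In the real game this logic would live in a pixel shader sampling the
# depth buffer; here it operates on single depth values to show the idea.

def shield_opacity(hull_depth, scene_depth, fade_range=0.5):
    """Fade the shield hull based on how far behind it the scene is.

    Where the hull sits just in front of the character, the depth gap is
    small and the shield is nearly transparent; around the silhouette the
    hull is far in front of the background, so the gap is large and the
    shield renders opaque.
    """
    gap = max(0.0, scene_depth - hull_depth)  # distance to whatever is behind the hull
    t = min(gap / fade_range, 1.0)            # normalize into [0, 1]
    return t                                  # 0 = invisible, 1 = fully opaque
```

With the hull hugging the character (`shield_opacity(10.0, 10.05)`) the result is close to zero, while at the silhouette against a distant background (`shield_opacity(10.0, 100.0)`) it saturates at 1.0, which is what produces the bright rim look.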
Next I went to a talk by the team at Nexon in Korea. They created a system that controls the behavior of helper bones in real time in the game engine instead of baking that behavior down to keyframes. These bones help with deformation problem areas - wrists, shoulders, pelvis, etc. They wrote code in the engine to mimic the effects of all of the different constraint types that could be used inside 3ds Max. Then they exported an XML file from Max for each character that described which bones were controlled by constraints instead of keyframes. Finally, the game engine would read the XML file and apply the constraints to those bones in real time. Using this technique, they were able to significantly reduce the size of their animation data. It also made the behavior of these bones easy to adjust, since a change only had to be made once instead of requiring all of the animation data to be re-exported.
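The pipeline they described might look something like the following minimal sketch. The XML layout, bone names, and the single half-twist `orient` rule are all illustrative assumptions on my part (their system replicated many of Max's constraint types), and the rig here is reduced to one angle per bone:

```python
# Hypothetical sketch: an XML file exported from 3ds Max names which helper
# bones are constraint-driven, and the engine evaluates those constraints
# at runtime instead of reading baked keyframes for them.

import xml.etree.ElementTree as ET

# Illustrative export - a forearm twist bone that takes half the wrist's rotation.
RIG_XML = """
<rig>
  <bone name="forearm_twist" constraint="orient" source="wrist" weight="0.5"/>
</rig>
"""

def load_constraints(xml_text):
    """Parse the exported description into (bone, constraint, source, weight) rules."""
    root = ET.fromstring(xml_text)
    return [
        (b.get("name"), b.get("constraint"), b.get("source"), float(b.get("weight")))
        for b in root.findall("bone")
    ]

def evaluate(rules, pose):
    """Apply each rule to the animated pose (angles in degrees, one axis per bone)."""
    for bone, kind, source, weight in rules:
        if kind == "orient":  # mimic Max's orientation constraint
            pose[bone] = pose[source] * weight
    return pose
```

With a pose of `{"wrist": 80.0, "forearm_twist": 0.0}`, the twist bone ends up at 40 degrees - computed at runtime, with no keyframes ever stored for it, which is where the data savings come from.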
After lunch, I attended a talk by Andrew Gordon from Pixar about the traits that make a good animator and the process of creating successful animation. The talk felt a bit rushed since he was working from notes he usually uses for an all-day presentation, but his material was fantastic. He talked about having a good attitude, taking critiques well, doing lots of preparation, and building a solid knowledge of design, weight, and physicality. He also talked about noticing all of the little things that people do - subtle gestures and weight shifts - and incorporating them into your work.
I ended the conference by attending the tech artist round table run by Jeff Hanna. I really love attending the round table since it's a room full of people who think like I do and face similar challenges every day. They're a great group to talk to and it's fun to discuss all of the issues we're dealing with.
This year's GDC was amazing. I can't wait to get home and start experimenting with some of the ideas I picked up during various talks.