Sunday, August 24, 2008

Siggraph 08 Thursday

My last day at Siggraph was interesting. The first session I attended was a panel discussion entitled "Games: Evolving on An Order of Magnitude." Several prominent people from various game companies discussed how advances in video game hardware are driving major changes within the industry. Since games are so much more complex to create, larger teams and more complex management structures are required. The problem is that budgets are not increasing in proportion to the demand for content, which means that teams have to find ways to work more efficiently - writing tools to automate tasks, and so on. I took lots of notes on this discussion. Let me know if you're interested in reading more.

Next I attended a session on hair and cloth. I thought that I'd be able to learn some things that would help with the projects I'm working on now, but all of the papers presented were on very expensive techniques - like simulating every single strand of hair, or simulating cloth by mimicking the behaviour of the individual yarns. The results were super cool, but compute times were very high - nothing near real-time, which is what we need.

Next I attended a session on physics. There was a paper on doing hair in real-time on the GPU, but I missed it because I got there late. I'm hoping that Nvidia will post it on their developer web site. It's pretty close to what I'm going to need to work on soon. The second paper was on using lots of GPUs in parallel to run particle systems with very high particle counts. By using 6 or 7 GPUs together, the presenter was able to do simulations with a million particles in real-time. Very impressive. Now if we could just get game consoles to ship with 7 GPUs we'd be all set! :P The final paper in the session was on the bent and broken metal effects in the Hulk movie. They used an interesting technique for allowing the Hulk to destroy metal objects: they turned on cloth simulation with the cloth stiffness set very high - but only at the moment of impact. This allowed the surfaces to deform, and then they'd turn the sim off when the impact was done so the surfaces would keep their deformed shape.
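If I were to sketch that impact trick in code, it might look something like this - a toy, per-vertex 1D version that I made up myself (the function name and all the constants are my guesses, not anything from the actual presentation). The key point is that the stiff sim only runs while the impact force is applied, and whatever shape it ends with is baked in as the new permanent rest shape:

```python
def stamp_impact(rest, impact_force, stiffness=400.0, damping=20.0,
                 dt=1.0 / 60.0, impact_steps=8):
    """Run a very stiff spring sim only during the impact window; the
    positions it leaves behind become the permanently deformed shape."""
    pos = list(rest)
    vel = [0.0] * len(rest)
    for _ in range(impact_steps):
        for i in range(len(pos)):
            # stiff pull back toward the rest shape, plus the impact force
            a = impact_force[i] - stiffness * (pos[i] - rest[i]) - damping * vel[i]
            vel[i] += a * dt
            pos[i] += vel[i] * dt
    # sim is switched off here - the deformed pos becomes the new rest shape
    return pos
```

Because the sim never runs again after the impact window, the dent never springs back - which seems to be exactly the effect they were after.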

Finally, a session that I didn't get to attend (because I had to catch my flight back home) but that looks very interesting was presented by Jon Olick from id Software. You can grab the paper here:

http://s08.idav.ucdavis.edu/olick-current-and-next-generation-parallelism-in-games.pdf

The first half of the paper talks about their usage of the PS3 hardware - which is nice in itself, but the second half is the really interesting part. He talks about a new way of rendering meshes that's kind of like what they're doing with mega texture - only it's for geometry instead of texture data. Basically you'd be able to create environments with an interface similar to ZBrush - where you could just create as much detail as you wanted with no concern for polygon counts. Then at run time, the software would do all of the LODing automatically - so you'd get an environment that looked super high detailed, but the detail would only be exactly where it was needed. It's very much like the way mega texture works - but this is really taking it to the next level. Exciting stuff!
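To get a feel for how that kind of automatic LODing might work, here's a tiny sketch of my own (not from Olick's paper - I haven't read it yet): pick the coarsest level of detail for a mesh chunk whose geometric error, projected into screen pixels, stays under a threshold. I'm assuming the error roughly doubles at each coarser level:

```python
import math

def select_lod(distance, finest_error, num_levels,
               screen_h=1080, fov=math.radians(60), max_px_error=1.0):
    """Pick the coarsest LOD whose projected geometric error stays under
    a pixel threshold; error is assumed to double per coarser level."""
    # how many pixels one world unit covers at this distance
    px_per_unit = screen_h / (2.0 * distance * math.tan(fov / 2.0))
    level, err = 0, finest_error
    while level + 1 < num_levels and (err * 2.0) * px_per_unit <= max_px_error:
        err *= 2.0
        level += 1
    return level
```

Run per chunk every frame and nearby geometry stays at full detail while distant geometry drops levels automatically - detail only exactly where it's needed.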

Thursday, August 14, 2008

Siggraph 08 Wednesday

I attended lots of paper sessions today. The first session I attended was on real-time techniques. There were papers presented on shadow mapping techniques, real-time refraction with caustics, real-time smoke rendering, and meshless hierarchical light transport. The refraction paper produced beautiful results but was quite limited since it used a voxel grid of only 128x128x128 and only ran at between 2 and 7 fps. The smoke paper also had beautiful results, lighting the smoke with a diffusely convolved cube map. Its limitation was that the smoke had to be pre-processed, so you couldn't change it dynamically. I did find the shadow mapping paper and the light transport paper interesting, though. I'll probably look into them some more.

Next I attended several sessions hosted by Nvidia. They presented a paper on hair rendering, a paper on terrain rendering and LODing on the GPU, and a discussion of their PhysX system. The hair paper was pretty amazing. They do a real-time simulation on around 160 guide hairs and then instance those to make it appear that there are many tens of thousands of hairs. Once the verts have been simulated, they're connected with B-splines, converted to camera-facing triangle strips, and rendered using the Kajiya-Kay lighting model. The results are really beautiful, but I wonder if the performance requirements are just a bit too high for current hardware. The terrain paper was pretty straightforward. Their LOD system basically just reduced the tessellation of the quads based on distance from the camera. They also biased the distance based on the topology of each quad - so if a quad had larger height changes it would be LODed less. They did the same thing for quads that contained a silhouette edge. Pretty good ideas! The PhysX paper was mainly about things to think about when adding physics sims to your projects. I was pretty disappointed that the presentation was so short on actual implementation details.
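The Kajiya-Kay model itself is simple enough to sketch. The idea is that a hair strand is shaded from the angle between its tangent and the light (and view) directions, instead of a surface normal. Something like this (the constants here are placeholders, not values from the Nvidia talk):

```python
import math

def _normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def kajiya_kay(tangent, light_dir, view_dir, kd=1.0, ks=0.5, shininess=40.0):
    """Diffuse and specular intensity for a hair strand lit along its tangent."""
    t, l, v = _normalize(tangent), _normalize(light_dir), _normalize(view_dir)
    tl, tv = _dot(t, l), _dot(t, v)
    sin_tl = math.sqrt(max(0.0, 1.0 - tl * tl))  # sin of angle(tangent, light)
    sin_tv = math.sqrt(max(0.0, 1.0 - tv * tv))  # sin of angle(tangent, view)
    diffuse = kd * sin_tl
    specular = ks * max(0.0, tl * tv + sin_tl * sin_tv) ** shininess
    return diffuse, specular
```

In the real pipeline this would run in a pixel shader over the camera-facing strips, with the strand tangent interpolated along each B-spline.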

After the paper session and some lunch, I attended a session on the special effects on Cloverfield and Iron Man. It's always very interesting to see these talks where the special effects artists break down their work on films and talk about how they achieved their results. While it's not directly applicable to my own work, it is very inspiring.

I left the session a little early so that I could keep an appointment with the guys at Image Metrics. They're a company dedicated to facial animation and they do really great work. It was fun to talk with them about their work.

My last paper session of the day was entitled "Many Things." Some guys from Pixar talked about how they created the shaders and textures for all of the robots in Wall-E. They used some really cool material layering techniques blended with geometry-specific maps including ambient occlusion, blurred edge maps, up-facing maps, and fractal noise patterns. Their material system also allowed them to add specific details like decals. The results were impressive and they were able to create the surfaces for all of the robots in under three months. The second paper in the session was on the foliage system used in Madagascar 2. They did a good job of giving the artist control as well as automating the repetitive tasks. They enabled the artists to create a branch and then mark growth source points. Then the user could click on a growth point and the branch would be cloned to that location. Doing this over and over would create the full set of branches for the tree. If I were going to write some software to do trees, I'd probably do it this way. The next paper was on AI-driven cars for Speed Racer. The author did lots of work using Massive to create an AI system to drive the cars at over 300 miles an hour around the crazy Speed Racer tracks. The funny thing about this presentation is that the presenter told us that none of his work was actually used in the final movie. Oh well! :) It was pretty cool driving AI anyway. The final presentation was also from Pixar. The presenter explained a method called Brain Springs that allowed them to automate the motion of the robots and imitate physics simulations without actually running any sim at all. This was a very interesting idea to me - translating a character's changes in velocity into automatic secondary movement.
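That "changes in velocity become secondary movement" idea is easy to play with. Here's how I imagine it working - the class name and constants are entirely mine, since the talk didn't give implementation details: a damped spring gets kicked in the opposite direction whenever the parent's velocity changes, so an appendage (antenna, eyelid, whatever) lags behind and then settles, with no physics solver in the loop:

```python
class SecondarySpring:
    """Fake secondary motion: an appendage offset reacts to changes in
    its parent's velocity, then settles back to the rest pose."""

    def __init__(self, stiffness=40.0, damping=6.0):
        self.k = stiffness
        self.c = damping
        self.offset = 0.0          # displacement from the rest pose
        self.velocity = 0.0
        self.prev_parent_vel = 0.0

    def update(self, parent_vel, dt):
        # a sudden change in parent velocity kicks the spring the other
        # way, so the appendage lags behind the motion
        parent_accel = (parent_vel - self.prev_parent_vel) / dt
        self.prev_parent_vel = parent_vel
        accel = -self.k * self.offset - self.c * self.velocity - parent_accel
        self.velocity += accel * dt
        self.offset += self.velocity * dt
        return self.offset
```

You'd run one of these per appendage axis and add the offset on top of the keyframed animation - overshoot, follow-through, and settle all come out for free.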

I finished out the day by attending the Computer Animation Festival screenings as well as a tribute to Stan Winston. Today was really full! Tomorrow should be great too. I'm looking forward to it.

Wednesday, August 13, 2008

Siggraph 08 Tuesday

Today was interesting. The first thing I attended was a papers session on animation. The first paper was on motion graphs. The idea behind motion graphs is that you give a character a set of animations and then give him a goal to achieve (i.e. reach a specific location). Then the character figures out how to reach that goal with the animation set that he has. This paper presented a way to figure out how likely it would be that the character could achieve that goal based on the characteristics of the environment. The second paper was on transitions between motions. The presenter made it clear that the length of a blend from one motion to the next is very important, and showed several case studies demonstrating that better results can be achieved in motion blending if you figure out the best blend length for each transition instead of using a fixed blend length for everything. The next paper presented methods for processing free-form motion - that is, motion on an object or character that's done without a skeleton. The author developed a method that would allow him to manipulate and blend free-form animation while preserving volume. The most useful case study that he showed was that a cloth simulation could be adjusted and corrected easily after running the simulation and without needing to re-run it. The last paper in this set was on dual quaternion skinning. It's a better method for moving verts around with bones than standard skinning or spherical skinning. The results were good and so was the performance. I'd like to look into this idea some more. Lots of information is available here:

http://isg.cs.tcd.ie/projects/DualQuaternions/
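Here's my own minimal sketch of the core idea as I understood it (helper names are mine): each bone transform becomes a dual quaternion, the per-vertex blend is just a sign-corrected weighted sum followed by one normalization, and the blended result is applied as a rigid transform - which is what avoids the candy-wrapper collapse you get from linearly blending matrices:

```python
import math

def qmul(a, b):
    # Hamilton product of two quaternions (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def to_dq(rot_q, t):
    # real part = rotation; dual part encodes the translation
    qd = qmul((0.0, t[0], t[1], t[2]), rot_q)
    return (rot_q, tuple(0.5 * c for c in qd))

def blend_dq(dqs, weights):
    # weighted sum (flipping quaternions on the far hemisphere), then
    # a single normalization - the cheap linear-blend approximation
    ref = dqs[0][0]
    acc_r, acc_d = [0.0] * 4, [0.0] * 4
    for (qr, qd), w in zip(dqs, weights):
        if sum(a * b for a, b in zip(qr, ref)) < 0.0:
            w = -w
        for i in range(4):
            acc_r[i] += w * qr[i]
            acc_d[i] += w * qd[i]
    n = math.sqrt(sum(c * c for c in acc_r))
    return (tuple(c / n for c in acc_r), tuple(c / n for c in acc_d))

def transform_point(dq, p):
    # apply the blended dual quaternion as a rigid rotation + translation
    qr, qd = dq
    rp = qmul(qmul(qr, (0.0, p[0], p[1], p[2])), qconj(qr))
    tq = qmul(qd, qconj(qr))
    return (rp[1] + 2*tq[1], rp[2] + 2*tq[2], rp[3] + 2*tq[3])
```

Blending an identity bone with a 180-degree twist at half weight gives a clean 90-degree twist of the vertex; a linear matrix blend would pull the vertex in toward the bone axis instead.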

Next I hit the show floor. It was fun to run into some people that I know - Chris Evans from Crytek, Kevin Bjorke from Nvidia, and Bobo from Frantic Films - all great guys and fun to talk with. I was disappointed to find that Natural Motion didn't have a booth as I was hoping to get some more information on their upcoming version of Morpheme.

In the afternoon I attended another papers session on characters. The first paper presented was on creating crowds. The goal was to see how many character clones you could get away with before the viewer noticed that two characters were identical. They changed things like clothing color and animation to see what the most important factors were in hiding the fact that you have a limited set of characters. Next, Chris Hecker from Maxis presented his work on the character animation system of Spore. The presentation was short on details, but Chris had a fun time showing off a lot of the creatures that have been generated by the community. The third paper was by Michael Kass from Pixar. His talk was on Wiggly Splines. He presented a method for quantizing soft body dynamics in characters and giving artists control over the amount of secondary motion that a character has. The final paper also showed a method for soft body simulation - but it was focused on real-time performance. The results were very nice. This is also something I'd like to look into more.

I finished up the day by heading over to the Nokia Theater for the Computer Animation Festival. I also stayed for Studio Night. John Lasseter from Pixar showed "The Man Who Planted Trees" and invited the animator - Frederic Back, who is now 84 years old - to come and talk about his film. It was very touching to hear him talk about how important the earth is and that we should take care of it. John Lasseter said that there were lots of elements in Pixar's films that were inspired by Back's work. After this brief interview, Lasseter screened a documentary on Pixar Animation Studios. It started way back when John went to school at CalArts and covered the history of the development of the studio all the way up through the Disney purchase. It was very cool to see and really inspired me to want to do my very best work in the projects I'm involved in.

Tuesday, August 12, 2008

Siggraph 08 Monday

I'm in Los Angeles this week attending Siggraph so I decided to post my comments on the show each day here on my blog. I arrived in LA around noon after an uneventful flight (the best kind). I was hoping to get over to the show in time to catch Ed Catmull's talk, but travel time, hotel check-in, etc, prevented that, so I missed the first talk I was planning to attend. Once I got my credentials, I met up with a friend, Josh Stratton, and we talked about the week and the sessions we were each planning to attend.

Next I headed to a session on real-time rendering hosted by ATi/AMD. I won't go into too much detail, since you can download the talks online, but I will say that I was pretty impressed with the effects that the guys at Blizzard talked about. You can grab all the talk notes here:

http://ati.amd.com/developer/techreports.html

After the session was over, I headed down to the Shrine Theater to attend the Autodesk User Group meeting. It was a good show with Autodesk showing off some of the new features that they're adding to the latest versions of their various software packages. I was especially impressed by the additions that they showed off in the new Mudbox software, and I also enjoyed seeing all of the stereoscopic footage that they showed - clips from lots of recent movies in 3D. They also showed a bunch of new tools that they're adding to Maya and Toxik to make it easier to work with and tune stereo movie footage. Neat stuff!