Normal Map Compression
I've added support for DXT5 "swizzled" normal map compression to the HLSL shaders on my shaders page. I got the idea to do it while I was working on finishing up the normal mapping tutorial. I did some tests and was really amazed by how high the quality was on the compressed normal maps - even after 4 to 1 DXT5 compression. The guy that came up with the idea should get a medal. It may have been one of the guys that wrote these papers on normal map compression:
NVIDIA - Bump_Map_Compression.pdf
ATI - NormalMapCompression.pdf
Normal map compression is going to be extremely important. This is especially true for next gen consoles, which are rumored to have 10 times the processor power of current consoles but only 2 times as much texture memory.
I'm probably going to need to write another tutorial that details the steps that need to be taken to get the compression to work correctly. It's not very hard to do - and I even created an Action for Photoshop that does it almost instantly. I'll create that tutorial and post the Action along with it in the next few months.
11 Comments:
Hi Ben, me (Daniel) again. Thanks for adding support for swizzling in your shaders section; this is something I've been meaning to look at. I agree that under DXT5 compression, the increased resolution of the green channel and the accuracy of the alpha channel are definitely worth making use of.
From what I can see, you're using the alpha channel to store the normal's x component, the green channel to store the normal's y component, and deriving the normal's z component within the shader. This makes sense and yields good results.
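To make sure we're talking about the same thing, here's a minimal HLSL sketch of that reconstruction (the sampler name is my own invention, not from your shaders): x comes from alpha, y from green, and z is derived on the assumption that the normal is unit length:

sampler2D normalMapSampler; // hypothetical name

// Unpack a "swizzled" DXT5 normal: x in alpha, y in green, z derived.
float3 UnpackSwizzledNormal(float2 uv)
{
    float4 packed = tex2D(normalMapSampler, uv);
    float2 xy = packed.ag * 2.0f - 1.0f;          // expand [0,1] to [-1,1]
    float z = sqrt(saturate(1.0f - dot(xy, xy))); // assumes unit length
    return float3(xy, z);
}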
But you're also doing something extra there too: you're actually using the red channel to store the offset map. This is... interesting. I agree that the precision of the offset map is less important than that of the normal map's components, so it's better to have it in the red channel than in the alpha channel.
But should the offset map be here at all? Based on my understanding of DXT5 compression, there are 2 RGB colors per 4x4 pixel block. All RGB pixels within this block are an interpolation between these two colors. So if, in the original image, a block was made up of shades of green pixels, then the first RGB color might be a Dark Green (DG), and the second one a Light Green (LG). The pixels in the block would all be 2-bit interpolations between the two greens (DG & LG):
00 - 100% DG
01 - 67% DG + 33% LG
10 - 33% DG + 67% LG
11 - 100% LG
This works well.
But by using the red channel for your offset map you are introducing red into the mix. So when the Photoshop plugin (or other software) is choosing the two colors for a block, it's now going to have to start examining the red channel as well, because the pixels in the block don't all have the same value in the red channel. I believe this must result in a loss of precision in the green channel's interpolated values. I know it seems like it shouldn't make a difference, because the red component of the two color values is independent of the green and blue components. But the problem is the interpolation: producing optimal interpolation for the green channel will, I think, yield a different set of 16 2-bit index values than producing optimal interpolation for the red channel.
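To put that in symbols (my own notation, not from either paper): each texel in a block gets a single 2-bit index $i$, and its decoded color is

$c = (1 - w_i)\,c_0 + w_i\,c_1, \qquad w_i \in \{0, \tfrac{1}{3}, \tfrac{2}{3}, 1\}$

with the same weight $w_i$ applied to red, green, and blue at once. The compressor has to choose $c_0$, $c_1$, and the 16 indices to minimize error summed over all three channels, so non-uniform red data can pull an index away from the value that would be optimal for green alone.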
I think this can be proven. If you look at the red channel of an image in Photoshop, it's always different from the other channels, unless it's some kind of sepia-esque mono image. This means that if the red channel were compressed as a greyscale using DXT1 compression, it would come out different from the same image's DXT1-compressed green channel. DXT5 compression is the same as DXT1 but with the addition of an independent alpha channel.
So I think that by using the red channel to store the offset map, the accuracy of the normal map compression is reduced. The RGB channels are not independently compressed the way the alpha channel is.
Sorry if that was long winded, but I wanted to be clear in my own head that what I was saying made sense :)
Now it could be argued that the artist can avoid this problem by simply not choosing to include anything in the red channel. Obviously this precludes the use of an offset map.
I suggest that the offset map could be placed in the color map's alpha channel. I know that this is currently used for the specular map, but I think it's often useful to have the same map for both the specular and the offset map. For example: metal grates, engravings, polished wood, etc.
Using the alpha channel will give more precision to the offset map than using one of the RGB channels.
It will still be possible to have a specular map only, an offset map only, or a map that is used for both.
And if someone *really* wants specular and offset mapping using independent channels, they can add a third map channel.
Another reason not to put the offset map with the normal map is ATI's 3Dc compression. This looks a lot better than DXT5, as I'm sure you've noticed. It's going to be big on the PC and in the Xbox 2, since that's using X800-style architecture. It uses 8 bits per pixel to compress 2 of the components of the normal, in a similar way to how swizzled DXT5 works. It takes up the same amount of space, but is much better suited to compressing normal maps. It only has 2 channels, so you can't store an offset map there.
I think Nvidia will be forced to either support 3Dc compression or launch something very similar.
You're right - normal map compression is very important.
When you say that the next gen consoles will have 2 times the texture memory, that's a typo, right? I mean, Xbox 2 is 256MB + 10MB. (Might go to 512MB, but I guess we won't know until E3.)
Thanks for maintaining this site; it's really useful, and all the people I've shown it to have liked it. The way your information and tutorials are presented is superior to many books I've read.
Hi Ben, there is something else I've been meaning to address. It relates directly to your site so I figured why not open the topic here:
Parallax/Offset map capable plugins.
ATI's Normal Mapper will create a normal map expressing the details of a high poly model. But what about the heights?
I think the next step is to express the high poly's geometric 'comparative altitudes' in the form of a parallax/offset map.
So a modern brick would have an offset map generated for it that expressed the concave area in the centre of its 'top'.
It seems this would add a lot - normal maps combined with parallax/offset mapping look great. The high polygon head you use as an example would benefit from this.
But I'm not aware of any plugins that can generate parallax/offset maps in this way yet. Are you?
It seems like the way forward.
- Daniel
Daniel,
Once again, thanks a ton for your insightful comments. It's great to be able to have a dialog with someone else who's interested in the same topics that I am.
You're perfectly correct. Putting the offset map in the red channel results in lower normal map quality than if the red channel was blank.
(Thanks a lot for your detailed description of how compression works. I didn't know the specifics of it and learned them from your post.)
The only shader that is coded to use the red channel in the DXT5 compressed normal maps is the offset map one. All the rest assume that it's blank and don't use it at all for just that reason - better quality.
In the case of the offset mapping shader, I would have made the offset map a separate texture instead of putting it in the red channel if normal map quality were the only factor in the decision. However, since I'd already determined that in my shaders the normal map stores both the normal and the offset, since I wanted the DXT5 compressed technique to work the same as the uncompressed technique, and since I wanted the number of textures per shader to be as low as possible, I opted to do it this way instead. I don't get the maximum quality benefit (if I wanted that I would just use uncompressed), but the image quality is still much better than using standard DXT1, for example.
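If it helps make the trade-off concrete, here's a rough sketch of how that one packed texture does three jobs (this isn't my actual shader code, and the names and constants are made up for illustration):

sampler2D packedMapSampler; // height in R, normal y in G, normal x in A
float offsetScale = 0.04f;  // illustrative values only
float offsetBias  = -0.02f;

// eyeTS is the normalized tangent-space eye vector.
float3 PackedOffsetNormal(float2 uv, float3 eyeTS)
{
    float  height  = tex2D(packedMapSampler, uv).r;                       // offset map from red
    float2 shifted = uv + (height * offsetScale + offsetBias) * eyeTS.xy; // shift the uv
    float4 packed  = tex2D(packedMapSampler, shifted);                    // resample at shifted uv
    float2 xy      = packed.ag * 2.0f - 1.0f;
    return float3(xy, sqrt(saturate(1.0f - dot(xy, xy))));
}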
In the end, since HLSL is so totally flexible, each studio will have to determine for itself a set of standards for their project - such as what goes into the alpha channels, etc.
Another DXT5 normal map compression option that I've been considering since I added it to my shaders involves the Z component. If I left the Z in the blue channel, the shader would be less expensive because I wouldn't have to derive that value. The main reason I've been considering this is that with my current scheme, the parts of the normal map that represent near vertical surfaces (normals perpendicular to the face normal) get flattened and rounded out a bit. I can see this happening when I switch back and forth between the two techniques. (Have you switched back and forth between the techniques using my shader?) I haven't tested it yet, but I believe it's happening because I'm deriving the Z value instead of using the pre-calculated one. If I used the blue channel Z value instead of deriving it, the normal map compression would lose some quality, but I might get normals that looked more like the original. What do you think?
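For comparison, the stored-z path would be something like this sketch (not what's on my shaders page; the name is invented): x stays in alpha, y in green, and z is read from blue instead of derived:

sampler2D normalMapSampler; // hypothetical name

float3 StoredZNormal(float2 uv)
{
    // Read x from alpha, y from green, z from blue - no sqrt needed.
    float3 n = tex2D(normalMapSampler, uv).agb * 2.0f - 1.0f;
    return normalize(n); // DXT quantization denormalizes, so renormalize
}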
Oh, I almost forgot. Do you know how to write a texture in 3Dc DDS format? I'm using the Nvidia Photoshop plugin to create my DDS format textures and it doesn't have support for 3Dc.
In reply to your question about a program that creates the offset values, ATI's NormalMapper does something sort of like that, although it's not that useful unless you know how to process the data. One of the flags that you can set when you use NormalMapper to generate a normal map is "y." Here's what the readme.txt file says about it:
"The NormalMapper can keep track of the displacement along the normal
while it is generating the map. It will write this to a .arg file if
the following option is set."
"y - keep track of displacement values"
Like I said, that's not that great unless you can write another app that processes the .arg file to create a height map.
Another option is to use Mike Bunnell's modified version of normal mapper. It writes out a displacement map along with a normal map when you use the -d flag. You can find it here:
http://www.seanomlor.com/mikeb/
The displacement map that it writes uses both the red and green channels for more precision. If you just want an 8 bit height map (like the ones my shaders use), you'll need to scale the values of the red and green channels by half and then add them together. This could be done quickly with a Photoshop action. Another option would be to change the offset mapping shader so it accepts a red/green-style height map.
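If you went the shader route, the change is tiny - something like this sketch (the sampler name is invented):

sampler2D dispMapSampler; // the red/green displacement map from Bunnell's tool

float SampleRedGreenHeight(float2 uv)
{
    // Halve red and green and add them, same as the Photoshop action would.
    return dot(tex2D(dispMapSampler, uv).rg, float2(0.5f, 0.5f));
}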
For a third option, 3DS Max 7 also does what you're looking for. When you create a normal map with "Render to Texture" Projection Mapping, one of the options for the normal map is "Render Height Map into Alpha Channel." I've used that feature before and it works pretty well. If you want to try it and need more details, send me an email and I'll describe it to you.
Hope this stuff helps!
Hi Ben, thanks for the replies. I have indeed tried swapping between the two techniques (compressed & uncompressed) in your normal mapping shader. It's true that leaving the z component in the blue channel would 'appear' to make the shader run faster. But if you did it this way, you'd pretty much have to use uncompressed textures, because otherwise DXT compression on all 3 channels would give horrible compression on the majority of normal maps. Now, because a DXT1 texture is 4 bits per pixel and a DXT5 texture is 8 bits per pixel, you save a lot of bandwidth on the graphics card by using them. So I would estimate that even if you used 16-bit uncompressed textures (Red=5 bits, Green=6 bits, Blue=5 bits), overall things would still be slower because the graphics card has to fetch twice as much memory. Also, graphics cards have texture caches, so smaller textures are going to be faster. Precomputing things is good, but storing the z component doesn't seem to be worth it. The fact that 3Dc compression exists, and that Doom 3 used the swizzled alpha channel approach, suggests that storing two channels is faster, even though you have to derive the z component.
I think that storing the z component will make 90% of DXT compressed normal maps look worse.
With uncompressed maps... well, I'd actually still not bother with the z component, and save the x and y components in one of the 16-bit 8:8 .DDS formats. This should provide exactly the same results, assuming that the normals in the map really are normalized (unit length), so that z can be derived exactly.
Writing a texture in the 3Dc format? Hmm, in ATI's example code they load the texture as a TGA and then convert it in the program itself. I suspect there must be a way to write the texture out as a 3Dc map, but I haven't found it yet. If I do, I'll post about it. Ideally 3Dc would be added as one of the .DDS formats, but I guess this depends on what nVidia do.
- Daniel
Thanks for the link to Michael Bunnell's NormalMapper extension. I still haven't had a chance to play with it yet, because I've been re-writing my DirectX code to use shaders instead of the Fixed Function Pipeline.
But what he's done seems to be exactly what I was looking for.
I was aware that Max 7 could do a Z-render because you mentioned it, but I didn't know about the "Render Height Map into Alpha Channel" option. It seems useful for things like cobblestone floor offset maps, and anything that is essentially sitting on a plane.
But for the majority of objects you'd want to be able to actually write out more than what the camera can see, just like you're doing for the normal map of your head :)
I thought that the results were very impressive for just 632 polygons. What I'd really like to see is an eight (yes, eight!) vertex low polygon keyboard, with normal mapping plus offset mapping at the same time, now that Michael Bunnell has made this possible. I think it would work well, and really prove the power of these techniques. You up for the challenge? :)
I think the results would be dramatic, and the presence of an offset map would be useful for the future too. Displacement mapping and certain self shadowing techniques both require an offset map.
I know that displacement mapping seems expensive, because of all the vertices that get added, but in a game you optimize around this based on distance. For example:
At 0-8 meters, some objects could be displacement and normal mapped
At 8-20 meters they'd be offset and normal mapped.
At 20-40 meters they'd be just normal mapped.
At anything beyond 40 meters you could often turn normal mapping off completely, and just have either per pixel lighting or even interpolated vertex lighting, I would think.
The great thing about the techniques you're using is that they are all map based, so it's much easier to apply Level of Detail schemes.
Like I said I'm very curious to see an offset + normal mapped eight vertex keyboard. There must be high polygon keyboard meshes lying around the internet...
- Daniel
Daniel -
Just so I make sure you understand: the "Render Height Map into Alpha Channel" feature of Max 7 works together with the Render to Texture and Projection Mapping features, and it's all done in tangent space, projected from the low res to the high res model. This feature does do what you're looking for. It works on full 3D models, not just flat planes. It does the same thing that Mike Bunnell's additions to NormalMapper do, except it's just 8 bit. When the rays are cast to measure the normals from the high res model to put in the normal map for the low res model, the lengths of the rays are also recorded and stored in the alpha channel. I've used it and it works pretty well.
That being said, I've found that offset mapping on 3D characters doesn't work very well in general. This is mostly because the effect breaks down if the surface you're viewing is more than about 30 degrees away from perpendicular to the view vector. So, for example, if you apply my cobblestone normal map and offset map to a sphere using the offset mapping shader, the effect looks decent in the middle of the sphere, but the farther you go toward the edge of the sphere, the worse it looks. This is why we've mostly seen offset mapping used on stone walls in games like FarCry and in the Unreal3 tech demos - with a flat wall there is a smaller chance that the viewer will see the effect from a more parallel angle. I think offset mapping is best used on environments and that other techniques should be used on characters.
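A rough way to see why the angle matters (just an illustration, not a line from my shaders): the geometrically 'correct' single-step shift divides by the z of the tangent-space eye vector, which goes to zero at grazing angles, so the uv offset explodes. The offset-limited form drops the divide, which keeps the shift bounded but makes the illusion flatten out instead:

float2 ParallaxShift(float height, float scale, float3 eyeTS, bool limited)
{
    float2 shift = height * scale * eyeTS.xy; // offset-limited: bounded, degrades gracefully
    return limited ? shift : shift / eyeTS.z; // 'true' shift: blows up as eyeTS.z -> 0
}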
I've done some experiments with normal mapping on objects that have only 8 verts. (Referring to your keyboard challenge.) They all turned out looking really bad. The reason is that you also only have eight normals. The normals point at very extreme angles because each is an average of the three face normals that come together at 90 degree angles. This is a very bad case for a normal map and will always result in poor lighting.
If you use 24 normals instead, so that each face of the box has its own set of four, it looks a little better, but you end up with hard seams at the edges of the faces, which totally gives away the illusion of normal mapping.
It's better to bevel the edges of your box slightly so that the normals don't average to such extreme angles.
Hey :)
Thanks for the info about parallax maps on characters; I had wondered about this. I'd tried the sphere test myself, but I put the offset quite low, 0.04. Another problem with the sphere is of course the fact that a square texture can't be mapped well onto a sphere. On a cylinder it looks better.
Thanks for the info about Max 7's "Render Height Map into Alpha Channel"; when I get a bit more time I'll try it out. I'd misunderstood its capabilities. What you're saying about offset mapping being used for environments more than characters makes sense - it's good advice.
As far as the 8 vertex cube problem goes, I know what you mean :). I had my programmer hat on when I wrote that: in the exporter I use from Max, an 8 vertex cube will automatically become 24 vertices. So when I said 8 vertices, I meant 8 'Max' vertices. This sort of thing first started manifesting itself in the days of Gouraud shading, causing exactly the problems you've described.
But all is not lost. In terms of Object space normal mapping you don't actually need to store the vertex normals - all the normals in the map are in the correct place already.
With tangent space normal mapping it's a little more complex. Yes, the game can store one normal per vertex and then average them to find the 'face normal'. You also need a tangent and a bi-tangent (often known as the bi-normal), although the latter can be derived from the first two. But storing these things per vertex is actually unnecessary, since for tangent space normal mapping all we are interested in is 'face data'. To be efficient, the vertices would only store their positions and uv coordinate sets. The face normal, face tangent, and face bi-tangent would be stored in a separate 'vertex stream'. So for a cube, you'd have eight vertices, 6*2 = 12 face normals, 12 face tangents, and 12 face bi-tangents.
Overall in memory this would be: vertex positions + face normals + face tangents + face bi-tangents = (8 * 3) + (12 * 3) + (12 * 3) + (12 * 3) = 132 floats = 528 bytes.
Whereas the 24 vertex approach would be: vertex positions + vertex normals + vertex tangents = (24 * 3) + (24 * 3) + (24 * 3) = 216 floats = 864 bytes.
Also, the 24 vertex approach would require the bi-tangent (bi-normal) to be derived in the vertex shader, making it slower as well as bigger.
The face-stream approach definitely benefits from a Shader Model 3.0 feature - being able to set vertex frequency. Under Shader Model 2.0, it would be more involved :)
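As a sketch, the vertex shader input for the face-stream layout might look like this (the semantics and names are just my guesses, and the per-face stream setup itself would live on the API side):

struct VS_INPUT
{
    // Stream 0: the 8 shared corner vertices.
    float3 position : POSITION;
    float2 uv       : TEXCOORD0;
    // Stream 1: per-face basis, advanced once per face.
    float3 faceNormal   : TEXCOORD1;
    float3 faceTangent  : TEXCOORD2;
    float3 faceBinormal : TEXCOORD3;
};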
The key concept here is that if the code can be made to store face normals instead of vertex normals, then 8 vertices is enough, and you wouldn't need to bevel the edges of the cube either because the normal and offset maps should take care of that - they deal with the internal filled area. No matter how bevelled the edges of the cube are, the sides are always going to have a 'flat' outline. The exception to this is of course the corners.
To be fair, though, there are situations where storing vertex normals instead of face normals would be preferable. On a cobblestone beach, face normals would actually make things look more faceted, whereas single vertex normals deliberately placed halfway between where the face normals of their adjacent polygons would be would make things smoother. This is because in the pixel shader the vertex values of the three corners of the triangle get interpolated, so the normal maps and offset maps would 'curve' into each other at the edges of the polygons.
This is all getting very theoretical isn't it? I'll shut up now LOL
- Daniel
Wow, Ben. I didn't know you had this blog going. Joel Styles pointed me here, from a shader thread of his on polycount (cool translucency shader, you should check it out).
This is all extremely informative, makes me wish I had a lot more time to play with this stuff.
I recently played with nm compression, and came to the conclusion that DXT5nm has too many artifacts in places like creases or smooth gradients. Per some guidance from Doug Rogers over at Nvidia, I copied the norm.Y data into all three RGB channels, and stuck norm.X in the alpha.
Here's an example I made; you might have seen it around on CGChat or Polycount. I'm going to try leaving R and B blank, to see if that improves things.
But I'm steering now towards using the L8A8 DDS format, which stores 2 8-bit channels instead of using 4x4-block DXT compression. A little bigger, but no nastiness.
Also I've been checking out Nvidia's Mipster JScript for Photoshop... they have a really cool normal map mip tool that helps fix some of the aliasing problems I was seeing. The PDF that comes with it is a good read too.
Thanks again. And thanks to the other commenters too, good reads.
Hi Eric! Good to hear from you again. Thanks for stopping by my blog. I haven't really "advertised" it anywhere. I just started blogging because sometimes typing things out helps me organize my own thoughts about a subject. It's cool that others come here to read my stuff.
Thanks for the info on normal map compression. I implemented the method that Doug Rogers explained in all of the HLSL shaders on my web site. In the tests that I did, the DXT5 "swizzled" normal maps had no obvious artifacts and looked very nice when compared with any other form of DX or palettized compression. I haven't used the L8A8 format. That sounds like a good alternative if you're willing to spend a little more memory for a little more image quality. In the end it all comes down to priorities and each company has to decide if more texture memory to play with or better image quality is more important.
I haven't tried Mipster yet. My demo version of Photoshop CS2 expired, so now I need to pony up the $600 to get the full version. CS2 is definitely worth it, especially with native support for the raw format that my Canon digital EOS uses, and also support for OpenEXR, HDR, etc. It's now possible to use the Nvidia plugin to create 32-bit floating point cube maps in DDS format! Wow! Anyway - yeah, I want to use Mipster. I just gotta get the software first.