Cubemaps: the salt of computer graphics

"The eating was one of the most horrible things. Excepting the first two days after I entered the asylum, there was no salt for the food." - Nellie Bly

Salt is a strange thing. In minute quantities, it makes almost any food taste better. In medium quantities, it adds moreishness to snacks and makes people want to consume beer. In large quantities, it is a deadly poison.

Cubemaps serve the same role in the land of 3D graphics. They are a cheap and easy way to make almost any model look richer and more interesting, but I don't think enough people are aware of how to use them.

 

What is reflection mapping?

Imagine if you took a picture of the environment surrounding an object. You could then map this image onto the object, dynamically calculating texture coordinates based on the camera position, so your object appears to be reflecting the scene around it.

Simple.

And yet incredibly powerful.

If you use a crisp, high resolution reflection image, you get a shiny, polished, chrome surface. Great as a special effect, but easily overdone, like those oversalted pretzels they sell at baseball games.

The magic happens if you choose a blurry reflection image, and only blend a small percentage of this with the existing material color. Here lies the recipe for subtle visual complexity and deliciousness.
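
To make that concrete, here is a rough sketch of the calculation in plain C#. In a real game this happens per-pixel in a shader, and LookUpCubemap is just a hypothetical stand-in for sampling the cubemap texture along a direction:

    // Rough sketch only: the real work happens per-pixel in a shader.
    // LookUpCubemap is a hypothetical stand-in for sampling the cubemap.
    Vector3 EnvironmentMappedColor(Vector3 cameraPosition,
                                   Vector3 surfacePosition,
                                   Vector3 surfaceNormal,
                                   Vector3 materialColor,
                                   float reflectionAmount)
    {
        // Direction from the camera to the point on the surface...
        Vector3 view = Vector3.Normalize(surfacePosition - cameraPosition);

        // ...mirrored about the surface normal. This reflected direction
        // is the "texture coordinate" used to look up the cubemap.
        Vector3 reflected = Vector3.Reflect(view, surfaceNormal);

        Vector3 reflectionColor = LookUpCubemap(reflected);

        // Blend only a small fraction of the reflection into the material
        // color. Something like reflectionAmount = 0.1f to 0.2f gives the
        // subtle effect described above; 1.0f gives full-on chrome.
        return Vector3.Lerp(materialColor, reflectionColor, reflectionAmount);
    }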

 

What is a cubemap?

The trouble with a statement like "take a picture of the environment surrounding an object" is the environment stretches out in many different directions. How can you encode all these possible directions into a single texture? Mapmakers have been struggling with this problem for years.

In the world of realtime graphics, we use cubemaps instead of regular 2D textures. You can think of a cubemap as a collection of six square images. They are often drawn like this:

but the six images (called "faces") are actually separate.

To see how this provides a complete environment, imagine if you printed out this picture, then cut out the white corner areas. You could now fold it into a box, making three vertical folds, then folding the top and bottom flaps over to make a sealed cube. This cubemap was created in such a way that the edges of each face will join up seamlessly.

 

How can I create cubemaps?

There are three main ways: one expensive, one difficult, and one that nobody knows about.

The expensive way is to render them out at runtime. Construct a RenderTargetCube, then draw your environment six times, with the camera facing along each axis in turn. This produces a very accurate and dynamic reflection map (often used in racing games where you can see the car windows reflecting the environment as it scrolls past), but drawing the environment so many extra times doesn't come cheap!
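
If you want to go that route, the structure looks roughly like this. This is a sketch only, assuming the XNA 4.0 style RenderTargetCube API, sitting inside your Game class, with DrawEnvironment being a hypothetical method that draws your scene using the supplied view and projection matrices:

    RenderTargetCube reflectionTarget;

    protected override void LoadContent()
    {
        reflectionTarget = new RenderTargetCube(GraphicsDevice, 256, false,
                                                SurfaceFormat.Color,
                                                DepthFormat.Depth24);
    }

    void DrawReflectionMap(Vector3 objectPosition)
    {
        // 90 degree field of view and a square aspect ratio, so the six
        // faces join up into a complete environment.
        Matrix projection = Matrix.CreatePerspectiveFieldOfView(
            MathHelper.PiOver2, 1, 0.1f, 1000);

        foreach (CubeMapFace face in Enum.GetValues(typeof(CubeMapFace)))
        {
            // Point the camera along this face's axis. The exact up vectors
            // depend on your cubemap convention: adjust if faces come out flipped.
            Vector3 forward, up;

            switch (face)
            {
                case CubeMapFace.PositiveX: forward = Vector3.Right;    up = Vector3.Up;       break;
                case CubeMapFace.NegativeX: forward = Vector3.Left;     up = Vector3.Up;       break;
                case CubeMapFace.PositiveY: forward = Vector3.Up;       up = Vector3.Forward;  break;
                case CubeMapFace.NegativeY: forward = Vector3.Down;     up = Vector3.Backward; break;
                case CubeMapFace.PositiveZ: forward = Vector3.Backward; up = Vector3.Up;       break;
                default:                    forward = Vector3.Forward;  up = Vector3.Up;       break;
            }

            Matrix view = Matrix.CreateLookAt(objectPosition,
                                              objectPosition + forward, up);

            GraphicsDevice.SetRenderTarget(reflectionTarget, face);
            GraphicsDevice.Clear(Color.CornflowerBlue);

            // DrawEnvironment (hypothetical) draws the scene with this camera.
            DrawEnvironment(view, projection);
        }

        GraphicsDevice.SetRenderTarget(null);
    }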

The difficult way is to manually construct a set of six face images, then use the DirectX Texture Tool to combine them into a cubemap DDS file. Good luck getting the edges of each face to join up seamlessly...

The way I recommend is a custom content processor which warps a regular 2D image into a cubemap. For instance the cubemap I showed above was created from this photo of downtown Seattle:

Purists are probably recoiling in horror at this point. "But that photo doesn't contain a complete environment! There simply isn't enough info there to make this work!"

Sure. But reflection maps don't need to be exact. They should be subtle and complex and interesting, but most importantly, easy to create, so you can put them on all your objects and try out lots of different images to see what works best.

Once you have an automated cubemap generation processor, it is trivial to experiment with different reflection maps. For instance it is the work of a moment to turn this photo of a glacier on Mount Rainier:

into this cubemap:

 

I'm sold. Where do I get this incredible processor?

You already have it.

To reuse this in your own projects, you just need to copy the CubemapProcessor.cs file from the CustomModelEffect sample.
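
If you are curious what such a processor boils down to, here is a minimal standalone sketch of the warping idea. The actual CubemapProcessor may map the photo differently; the point is simply to turn each cube face texel into a 3D direction, then into a coordinate in the source photo:

    using System;

    static class CubemapWarp
    {
        // faceSize: width/height of each cube face in texels.
        // sampleSource: returns the source photo color at (u, v), both in [0, 1].
        // Returns six faceSize x faceSize grids, one per cube face.
        public static T[][,] Warp<T>(int faceSize, Func<float, float, T> sampleSource)
        {
            var faces = new T[6][,];

            for (int face = 0; face < 6; face++)
            {
                faces[face] = new T[faceSize, faceSize];

                for (int y = 0; y < faceSize; y++)
                {
                    for (int x = 0; x < faceSize; x++)
                    {
                        // Map this texel to a direction on the unit cube
                        // (one common face convention; swap axes if your
                        // renderer expects a different orientation).
                        float s = 2f * (x + 0.5f) / faceSize - 1;
                        float t = 2f * (y + 0.5f) / faceSize - 1;

                        float dx, dy, dz;

                        switch (face)
                        {
                            case 0:  dx =  1; dy = -t; dz = -s; break;  // +X
                            case 1:  dx = -1; dy = -t; dz =  s; break;  // -X
                            case 2:  dx =  s; dy =  1; dz =  t; break;  // +Y
                            case 3:  dx =  s; dy = -1; dz = -t; break;  // -Y
                            case 4:  dx =  s; dy = -t; dz =  1; break;  // +Z
                            default: dx = -s; dy = -t; dz = -1; break;  // -Z
                        }

                        // Convert the direction to spherical coordinates, then
                        // stretch the single photo around the whole sphere. Not
                        // physically accurate, but cheap and plausible, which is
                        // exactly the trade-off described above.
                        float length = (float)Math.Sqrt(dx * dx + dy * dy + dz * dz);
                        float u = 0.5f + (float)(Math.Atan2(dz, dx) / (2 * Math.PI));
                        float v = (float)(Math.Acos(dy / length) / Math.PI);

                        faces[face][x, y] = sampleSource(u, v);
                    }
                }
            }

            return faces;
        }
    }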

 

What should I use as the source image?

Experiment!

It doesn't have to be an actual picture of the actual scene that surrounds your actual object.

For instance a dark image dappled with bright blobs will give the impression of a complex environment with many small light sources, far more cheaply than you could properly calculate that many lights.

An image that is mostly dark but has a couple of thin bright lines creates an underwater caustic effect.

Blurring the reflection image makes the effect more subtle and prevents objects from looking too shiny.

In MotoGP, the artists created one static reflection map per level, by the simple technique of taking a screenshot of that level! People would see sky and trees reflected in the exhaust pipe of the bikes, and notice that the color of these trees always properly matched the level. Because this was just a static reflection map, the position of the reflected trees was never right, but this was too subtle for anyone to care.

Half-Life 2 takes the same idea one step further. They also precalculate reflection maps using screenshots of each level, but they create many cubemaps per level, and render using the closest one to your current position, so the reflection map can change as you move around the world.
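
The selection logic for that is nothing fancy. A sketch, assuming you keep hypothetical lists of the cubemaps and the positions they were captured from (filled in at load time; needs System.Collections.Generic):

    // Pick whichever precomputed cubemap was captured closest to the object.
    static TextureCube ChooseNearestCubemap(Vector3 objectPosition,
                                            IList<Vector3> capturePositions,
                                            IList<TextureCube> cubemaps)
    {
        int nearest = 0;
        float nearestDistance = float.MaxValue;

        for (int i = 0; i < capturePositions.Count; i++)
        {
            float distance = Vector3.DistanceSquared(objectPosition,
                                                     capturePositions[i]);
            if (distance < nearestDistance)
            {
                nearestDistance = distance;
                nearest = i;
            }
        }

        return cubemaps[nearest];
    }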

For the bike selection screen in the MotoGP menu system, we wanted the bikes to look as good as possible, but there wasn't any particular environment for them to reflect. A cubemap containing trees and sky looked silly in that context. I tried maybe 20 different photos to see what worked best, and if I remember right, eventually went with a blurry shot of the keyboard and mouse from my desk! I have no idea why that looked good, but for some reason it did.

  • Hi mate,

    what's the status on using the CubemapProcessor with GS 3.0? I'm getting a whole heap of errors resulting from the .Pipeline namespace not existing in Microsoft.Xna.Framework.Content.

  • Ah. I've worked out what's going on. Pretty straightforward. I need to add a reference to Content.Pipeline because it isn't referenced by default for game projects, since it's meant to be for custom importers and the like.

  • Hello

    It would be nice if you could explain for a total noob how to use the CubemapProcessor source code to generate cubemaps from your own photo/texture.

    I can't even figure out where to find the generated cubemaps after building and running the code.

    I'm only interested in using this to generate cubemaps for a game I'm working on (not XNA).

    I've tried to google for instructions on how to use it for this purpose but I'm lost.

    Any help you could offer would be appreciated.

  • When you build using the Content Pipeline, the output data is stored in a .xnb file which can be read into your game using ContentManager.Load.

    If you want to output the data in some other format, you could take the image manipulation code from that sample, but change it to save the data out in whatever other format you want, rather than running as part of the Content Pipeline.
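
    For example, if you built the Seattle photo under the hypothetical asset name "seattle", loading the result is just:

        TextureCube cubemap = Content.Load<TextureCube>("seattle");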

  • I'm kinda lost here. For the project on the XNA Creators Club: http://creators.xna.com/en-US/sample/custommodeleffect

    Can you explain the following question a bit?

    How are the two projects related together?

    (I couldn't find how you used the CustomModelEffectPipeline project in the CustomModelEffect project; there's no reference or anything. However, if I delete the pipeline project, a build error pops up saying "Cannot find content processor 'EnvironmentMappedModelProcessor'" in saucer.fbx.)

    Thanks very much!

  • sunjinchao1: the Content sub project (which is nested inside the main CustomModelEffect game project) has its own set of references. These references are used at build time for creating content, as opposed to the main game references which are used at runtime while the game is executing. If you look in the Content project references, you will see that it references the CustomModelEffectPipeline project.

  • Oh, I got it! That's why all the classes inside the CustomModelEffectPipeline project inherit from the ContentProcessor classes, so that they can get picked up automatically.

    One more question :)

    How can we do a cubemap when the environment is changing? e.g. in a racing game, if the car moves from block to block in the street, or there is another car right beside it.

    Shouldn't the texture be changed on the fly?

    In this case, should we generate the cube texture from the scene dynamically rather than from a single picture?

  • I do not understand how they are connected either.

  • If you want the reflections to change in realtime, then yes, in that case you need to generate the cubemap in realtime.

    In many cases this is not necessary, so a static cubemap can give good results. But there are certainly situations where doing true dynamic reflections is important.

  • Hi, I'm new to XNA. I need a tool to generate a png/jpg/dds cubemap from a jpg file. How do I achieve that using CubemapProcessor? How do I save the result to a file?

  • codo: the Content Pipeline is designed to output data in .xnb format. If you want your output to be in some other format, you should not use the Content Pipeline.

    You could certainly use the same image manipulation algorithm as the CubemapProcessor in our sample (you have the source code so you can see how it works) but you would need to make your own system for loading and saving the data either side of this transformation.

  • Sorry, what is this .xnb format, and why can the pipeline not produce other output?

    > what is this .xnb format

    It is the output format from the XNA Framework Content Pipeline. You can read an overview of the Content Pipeline architecture here: http://msdn.microsoft.com/en-us/library/bb447745.aspx

    > why can the pipeline not produce other output?

    Because the pipeline is only designed to create .xnb files.
