MIP Mapping

What is MIP Mapping?

"MIP mapping" refers to the technique of precomputing anti-aliased texture bitmaps as seen from far away, and using them in a texture mapped renderer. Each subsequent image in the MIP map is one quarter of the size of the previous (e.g. if the original image is 64x64, then the second image in the MIP map is 32x32, the third 16x16, and so on). The name "MIP map" comes from the Latin multum in parvo -- many things in a small place. A MIP mapped image requires 4/3 of the storage of the original (1 + 1/4 + 1/16 + ... = 4/3).
For example, take a close look at Doom sometime. In particular, pay close attention to the textures at the far end of a brightly lit room. Notice how they shift and sparkle as you move? That shifting is the result of aliasing -- erroneous images generated by sampling at too low a frequency. What is happening is that each screen pixel is much larger than a distant texture map pixel -- so much larger that it covers dozens or hundreds of texture map pixels. In real life, the analog optics of your eyes tend to blur images that are very far away, so the colors blend together. But in a standard (non-MIP-mapped) texture mapper, a single texture pixel is used to color in each screen pixel, with the result that some colors are accentuated when they should softly fade away. Rendering the same scene twice, once without MIP mapping and once with it, makes the difference obvious.
A MIP map is built by repeatedly shrinking the image to half its width and height. A routine like the following produces the next image in the chain by filtering each 2x2 block of texels down to a single texel:

typedef unsigned char BYTE;

BYTE SumPix(BYTE* src, int w, int h, int n);   /* defined below */

/* Shrink a w x h image into destination at half the width and height. */
void map2(BYTE* original, BYTE* destination, int w, int h)
{
    BYTE* dst = destination;
    for (int y = 0; y < h; y += 2)
        for (int x = 0; x < w; x += 2)
            *dst++ = SumPix(original + y * w + x, w, h, 2);
}

The filtering function can perform whatever kind of filtering you like; SumPix here is a simple box filter that averages the texels in each block:

/* Average an n x n block of texels starting at src, in an image
   w texels wide (h, the image height, is not needed here). */
BYTE SumPix(BYTE* src, int w, int h, int n)
{
    int sum = 0;
    for (int y = 0; y < n; y++)
        for (int x = 0; x < n; x++)
            sum += src[y * w + x];
    return (BYTE)(sum / (n * n));
}
Sample MIP Map class code:
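A minimal sketch of what such a class might look like, built on the map2() routine above; the class name MipMap, its members, and the MAX_LEVELS limit are illustrative choices rather than a fixed API:

#include <string.h>   // memcpy

// Holds the chain of reduced images; level 0 is the full-size image.
class MipMap
{
public:
    MipMap(const BYTE* image, int w, int h)
    {
        count = 0;
        for (int lw = w, lh = h; lw > 0 && lh > 0 && count < MAX_LEVELS;
             lw /= 2, lh /= 2, ++count)
        {
            levels[count]  = new BYTE[lw * lh];
            widths[count]  = lw;
            heights[count] = lh;
            if (count == 0)
                memcpy(levels[0], image, lw * lh);      // copy the original
            else
                map2(levels[count - 1], levels[count],  // halve the previous level
                     widths[count - 1], heights[count - 1]);
        }
    }
    ~MipMap() { for (int i = 0; i < count; ++i) delete[] levels[i]; }

    BYTE* Image(int level) const  { return levels[level]; }
    int   Width(int level) const  { return widths[level]; }
    int   Height(int level) const { return heights[level]; }
    int   Count() const           { return count; }

private:
    enum { MAX_LEVELS = 16 };
    BYTE* levels[MAX_LEVELS];
    int   widths[MAX_LEVELS];
    int   heights[MAX_LEVELS];
    int   count;
};

Level 0 holds the full-size image and each later level holds the next half-size reduction, so at render time the only decision left is which level index to use for a given polygon.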
The idea is to select the MIP map image whose texel size most closely matches the screen pixel size for the current polygon. For example, if for a given polygon a single screen pixel covers five texels, you should select the quarter-size (second) MIP map image, whose texels each cover four of the original texels -- the closest match to five.
In order to calculate which image to use, we must first calculate the size of a texel with respect to a screen pixel. Consider the standard 3D to 2D perspective projection, which maps a point (x, y, z) to screen coordinates (x*d/z, y*d/z), where d is the distance from the eye to the projection plane: a texel of world-space size t at depth z projects to roughly t*d/z screen pixels, so a single screen pixel covers about z/(d*t) texels along each axis.
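As a rough sketch of turning that ratio into a level choice (the function name, the parameters, and the rounding rule are illustrative rather than taken from any particular renderer):

#include <math.h>

// Rough per-polygon MIP level selection.  Assumes the usual perspective
// projection sx = x*d/z, sy = y*d/z, where d is the distance from the
// eye to the projection plane, so a texel of world-space size texelSize
// at depth z spans about texelSize*d/z screen pixels.
int SelectMipLevel(float texelSize, float z, float d, int levelCount)
{
    // How many texels one screen pixel covers along each axis.
    float texelsPerPixel = z / (d * texelSize);
    if (texelsPerPixel <= 1.0f)
        return 0;                          // the full-size image is fine
    // Each level's texels are twice as wide as the previous level's,
    // so the closest match is the nearest power of two.
    int level = (int)(log(texelsPerPixel) / log(2.0) + 0.5);
    if (level > levelCount - 1)
        level = levelCount - 1;
    return level;
}

For the example above of a screen pixel covering five texels (about 2.2 per axis), this picks level 1, the quarter-size image.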
An even more accurate method for
rendering with MIP maps is to select two MIP map images for each
polygon and interpolate between the two images at each pixel. If you are
writing a ray tracer you probably want to explore this technique, but it
is too expensive for games without hardware support. The Nintendo Ultra
64 is reputed to support this feature in hardware.
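The blend itself is simple; here is a sketch, assuming the renderer has already fetched one texel from each of the two chosen levels (the names and the source of the fractional level of detail are illustrative):

// Blend between texels fetched from two adjacent MIP map images -- the
// per-pixel interpolation described above.  'fine' and 'coarse' are the
// texels sampled from the two chosen levels, and lod is the fractional
// level of detail (for example, log2 of the texel-to-pixel ratio).
BYTE BlendLevels(BYTE fine, BYTE coarse, float lod)
{
    float frac = lod - (int)lod;    // how far we are toward the coarser level
    return (BYTE)(fine + frac * (coarse - fine) + 0.5f);
}

The per-pixel blend is what makes the technique expensive in software: two texture fetches and an interpolation for every pixel drawn.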
Although this is a wasteful layout of the images in a single file (a MIP map needs only 1.333 times the storage of the original image, while this layout uses 2.0 times), run-length encoding in the PCX file format makes the wasted space almost negligible.
One interesting side effect of MIP mapping
that is visible in this example is the blurring of apparent light sources
in the reduced images. As more and more pixels are averaged together, the
"light bar" column in the center of this wall texture gets smeared with
the color of the frame and the wall itself. In practice, it's not all that
bad, but it doesn't look very realistic.
In real life, light sources exhibit an
effect called "halation" (from the word "halo") which makes them appear
bigger than they really are. If desired, this effect can be approximated
in the filter function by giving more weight to bright source texels than
dark ones.
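One way to sketch that weighting is as a variant of the SumPix box filter above; the particular weight curve used here (1 + texel/64) is invented purely for illustration, and it assumes texel values behave like intensities rather than arbitrary palette indices:

/* Box filter that gives brighter texels extra weight, roughly
   approximating halation so that light sources stay bright in the
   reduced images.  (h is unused, kept only to match SumPix.) */
BYTE SumPixHalation(BYTE* src, int w, int h, int n)
{
    float sum = 0.0f, total = 0.0f;
    for (int y = 0; y < n; y++)
        for (int x = 0; x < n; x++)
        {
            BYTE  p   = src[y * w + x];
            float wgt = 1.0f + p / 64.0f;   /* brighter texels count more */
            sum   += wgt * p;
            total += wgt;
        }
    return (BYTE)(sum / total + 0.5f);      /* weighted average, rounded */
}

You would substitute it for SumPix when building the reduced images for textures that contain light sources.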