I used to be a Computer Science major but I've pretty much decided that it's not for me, even though I'm still pretty into computers. Now time for some more Russian Calculus :lol
Not gonna do AA/AF because Schafer's got some nice pictures up there.
Bump vs Normal Mapping:
Once again they aren't the same or trying to do the same thing.
Bump Mapping uses a grayscale texture to go along with some geometry. Each pixel in the texture (called a Bump Map) stores a height: how far out from the actual surface the simulated surface is supposed to be. So when lighting calculations are done, instead of using the real position on the geometry, you use a new position based on where the light hit the geometry, pushed outward by the distance stored in the corresponding pixel of the bump map.
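To make that concrete, here's a minimal sketch of the idea, assuming a flat surface facing +Z so "outward" is just the Z axis. The names (`bump_map`, `shade_point`) are made up for illustration, not from any real API:

```python
# Grayscale bump map: one height value per texel, telling the renderer
# how far out from the real surface this spot should pretend to be.
bump_map = [
    [0.0, 0.1, 0.0],
    [0.1, 0.5, 0.1],
    [0.0, 0.1, 0.0],
]

def shade_point(x, y, surface_z=0.0):
    """Return the position used for lighting: the real surface position
    pushed outward (along +Z here) by the height in the bump map."""
    height = bump_map[y][x]            # texture lookup
    return (x, y, surface_z + height)  # offset along the surface normal

print(shade_point(1, 1))  # → (1, 1, 0.5): lit as if the surface bulged out
```

The actual mesh never changes; only the position fed into the lighting math does, which is why the trick is cheap and also why it falls apart if you look at the silhouette.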
The end result is the illusion of more geometry than there actually is. However, the illusion breaks down the further out the virtual geometry is supposed to be, so the technique works best for simulating detail close to the actual surface. Also, since you only have one height value per pixel to work with, it is subject to the limitations of a function: no two distances for the same spot, so you can't have overlapping virtual geometry.
These limitations come together to make this technique most useful for simulating small disturbances in the surface geometry, aka a bump. Hence the name.
NORMAL MAPPING:
The reason Normal Mapping is generally associated with bump mapping is that they both use textures to simulate detail that isn't there, and that bump maps tend to look like crap without normal mapping.
In normal mapping you store an RGB texture to go along with the model that contains XYZ vector data for different points on the model. (FYI, a vector here is a group of 3 numbers that describes a direction, in this case the direction a piece of the geometry faces, or a piece of virtual geometry if we are bump mapping.) Now, you'll notice that the lighting on an object's surface changes depending on its orientation (if you start rotating it, the lighting changes); the direction a surface faces is called its NORMAL.
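The reason an RGB texture can hold vectors at all is a simple remapping: a unit vector's XYZ components sit in [-1, 1], and those get squished into color channels in [0, 255]. A quick sketch of that common encoding (the function names are mine):

```python
# A unit vector's components in [-1, 1] are remapped to byte colors in
# [0, 255] so the vector fits in an ordinary RGB texture.

def encode_normal(n):
    """(x, y, z) in [-1, 1] -> (r, g, b) in [0, 255]."""
    return tuple(round((c * 0.5 + 0.5) * 255) for c in n)

def decode_normal(rgb):
    """(r, g, b) in [0, 255] -> (x, y, z) in [-1, 1]."""
    return tuple(c / 255 * 2 - 1 for c in rgb)

print(encode_normal((0.0, 0.0, 1.0)))  # → (128, 128, 255)
```

That (128, 128, 255) is why flat areas of a normal map have that familiar pale-blue color: they're full of vectors pointing straight out of the surface.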
Normally the only place you store the Normal for a model is at its vertices, because in older rendering those were the only points on the model's geometry you knew about. In Normal Mapping you calculate what the normal would be between the vertices while you're making the model, and store that information in a texture (the Normal Map). So now when you do lighting calculations for your model, instead of looking up the Normal at each vertex and interpolating between them to guess at what the color should be, you do a texture lookup to get an approximate normal for that exact point on the model and use it for lighting. This method of lighting is called Per Pixel Lighting.
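The per-pixel lighting step itself is tiny: fetch the normal from the map, dot it with the light direction. Here's a hedged sketch using basic Lambert (diffuse) shading; `normal_map` stores the decoded unit vectors directly to keep it short (a real map would store them as RGB):

```python
import math

# Decoded normal map: one unit XYZ vector per texel.
normal_map = [
    [(0.0, 0.0, 1.0), (0.3, 0.0, 0.95)],
    [(0.0, 0.3, 0.95), (0.0, 0.0, 1.0)],
]

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(x, y, light_dir):
    """Per-pixel diffuse brightness: max(0, N . L)."""
    n = normalize(normal_map[y][x])  # texture lookup replaces vertex interpolation
    l = normalize(light_dir)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

print(lambert(0, 0, (0.0, 0.0, 1.0)))  # → 1.0: this texel faces the light head-on
```

Every pixel gets its own normal, so a completely flat triangle can shade as if it were covered in grooves and bumps, which is exactly the "detail that isn't there" the post describes.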