
Help me GAF, I don't speak graphics

mr jones said:
OK, but what would be the visual difference between an orange model using Normal mapping and an orange model using Parallax mapping?

Hell, even I don't know anything about that stuff. I always thought parallax had to do with multiple two-dimensional backgrounds moving in the same direction at different speeds to simulate depth...

That's parallax scrolling.
 

Fatghost

Gas Guzzler
Supersampling AA vs. Multisampling AA: which makes the nicer effect, and which has the bigger hit on performance?
 

thetrin

Hail, peons, for I have come as ambassador from the great and bountiful Blueberry Butt Explosion
EphemeralDream said:
2.5D – The usage of 3D models in the foreground with sprites used for background. Examples: NSMB, Maverick Hunter X, MMPU, etc. Correct? How was Donkey Kong Country done?

This isn't entirely true. 2.5D refers to using polygonal graphics for both the foreground and the background, combined with a lack of "depth" in the movement. A good example would be Valkyrie Profile: Silmeria: while the game uses polygons for the entire world, the character can only move on a two-dimensional plane, except when entering doors.
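(A tiny made-up Python sketch of the idea, since it's easier to see in code: the character carries a full 3D position for rendering, but normal movement never touches the depth axis.)

Code:
class Character:
    def __init__(self):
        self.x, self.y, self.z = 0.0, 0.0, 0.0  # full 3D position, used for rendering

    def move(self, dx, dy):
        # Player input only ever changes x and y, so gameplay stays on a
        # two-dimensional plane even though everything is drawn with polygons.
        self.x += dx
        self.y += dy

    def enter_door(self, new_z):
        # Depth only changes at scripted points, like entering a door.
        self.z = new_z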
 

Vaporak

Member
I used to be a Computer Science major but I've pretty much decided that it's not for me, even though I'm still pretty into computers. Now time for some more Russian Calculus :lol

Not gonna do AA/AF because Schafer's got some nice pictures up there.

Bump vs Normal Mapping:
Once again, they aren't the same thing, and they aren't trying to do the same thing.

Bump Mapping uses a grayscale texture that goes along with some geometry. Each pixel in the texture (called a Bump Map :p) stores a distance the surface is supposed to sit from where the geometry actually is. So when lighting calculations are done, instead of using the position of the geometry, you use a new position based on where the light hit the geometry plus the outward distance stored in the bump map pixel associated with that part of the geometry.

The end result is the illusion of more geometry than there actually is. However, the illusion breaks down the further out the virtual geometry is supposed to be, making the technique most useful for simulating detail close to the actual surface. Also, since you only have one height value to work with, it's subject to the limitations of a function: no multiple distances for one spot, so you can't have overlapping virtual geometry.

These limitations come together to make this technique most useful for simulating small disturbances in the surface geometry, aka a bump. Hence the name.
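Here's a rough made-up Python sketch of one common way the bump map feeds into lighting: rather than literally moving the shading position, the height differences between neighboring texels tilt the surface normal that the light calculation sees (CPU-side and simplified just to show the math, not real engine code):

Code:
def normalize(v):
    length = (v[0] ** 2 + v[1] ** 2 + v[2] ** 2) ** 0.5
    return (v[0] / length, v[1] / length, v[2] / length)

def bump_lit_intensity(height_map, x, y, light_dir, bump_scale=1.0):
    # Height differences between neighboring texels give the slope of the
    # virtual surface...
    dhdx = height_map[y][x + 1] - height_map[y][x]
    dhdy = height_map[y + 1][x] - height_map[y][x]
    # ...which tilts the flat tangent-space normal (0, 0, 1).
    n = normalize((-bump_scale * dhdx, -bump_scale * dhdy, 1.0))
    l = normalize(light_dir)
    # Simple diffuse (Lambert) lighting using the perturbed normal.
    return max(0.0, n[0] * l[0] + n[1] * l[1] + n[2] * l[2])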

NORMAL MAPPING:
The reason Normal Mapping is generally associated with bump mapping is that they both use textures to simulate detail that isn't there, and that bump maps tend to look like crap without normal mapping.

In normal mapping you store an RGB texture alongside the model that contains XYZ vector data for different points on the model. (FYI, a vector here is just a group of three numbers describing a direction, in this case the direction a piece of geometry is facing, or a piece of virtual geometry if we're bump mapping.) Now, you'll notice that the lighting on an object's surface changes depending on its orientation (if you start rotating it, the lighting changes); that orientation is called a NORMAL.

Normally the only place you store the normal for an object is at its vertices, because in older rendering those were the only points on a model's geometry you knew about. In normal mapping you calculate what the normal would be between vertices while you're making the model, and store that information in a texture (the normal map). So now when you do lighting calculations for your model, instead of looking up the normal at every vertex and algorithmically guessing what the color should be between them, you do a texture lookup to approximate what the normal should be for that exact point on the model and use that for lighting. This method of lighting is called per-pixel lighting.
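A matching made-up Python sketch of that per-pixel lookup (again, not real shader code, just the idea): each pixel reads its own normal out of the RGB texture and uses it for a simple diffuse term.

Code:
def unpack_normal(rgb):
    # Normal maps store the XYZ direction remapped into 0..255 RGB values.
    r, g, b = rgb
    return (r / 127.5 - 1.0, g / 127.5 - 1.0, b / 127.5 - 1.0)

def per_pixel_diffuse(normal_map, u, v, light_dir):
    nx, ny, nz = unpack_normal(normal_map[v][u])  # the texture lookup
    lx, ly, lz = light_dir                        # assumed already normalized
    # Diffuse lighting from the looked-up normal instead of a vertex normal.
    return max(0.0, nx * lx + ny * ly + nz * lz)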
 

thetrin

Hail, peons, for I have come as ambassador from the great and bountiful Blueberry Butt Explosion
The Friendly Monster said:
That isn't a word. :(.

I almost stopped listening to EGM Live this week without finishing it, because Intahar kept saying "unaccessible." That's not a word, guys! It's inaccessible.
 

Pimpbaa

Member
Fatghost said:
Supersampling AA vs. Multisampling AA: which makes the nicer effect, and which has the bigger hit on performance?

Supersampling AA renders the screen at a higher resolution internally and then scales it down before displaying it, which anti-aliases everything on the screen, even jaggies on alpha textures (you can simulate this in Photoshop or something by drawing a diagonal line and then scaling the picture down to a lower resolution). This is a much bigger hit on performance than multisampling. I'm not entirely sure how multisampling works, but I know it only affects the edges of polygons and doesn't touch anything else (unless you use NVIDIA's alpha-texture anti-aliasing along with it, or transparency anti-aliasing as they call it).
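To make the supersampling half concrete, here's a toy Python sketch of the scale-down step (grayscale values and a simple box filter; real hardware can use fancier filters and sample patterns):

Code:
def downsample_2x(high_res):
    # high_res: a grid of grayscale values rendered at twice the target size.
    out = []
    for y in range(0, len(high_res), 2):
        row = []
        for x in range(0, len(high_res[0]), 2):
            block = (high_res[y][x] + high_res[y][x + 1] +
                     high_res[y + 1][x] + high_res[y + 1][x + 1])
            row.append(block / 4.0)  # the averaging is what softens the jaggies
        out.append(row)
    return out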
 

Dahbomb

Member
For that guy asking about 2D/3D motion blur: someone a while back brought it up for next-gen gameplay. Apparently a lot of games in previous generations used 2D blur, where they basically blur the screen and not the model itself. 3D blur is where the whole model is blurred, so if you were to freeze-frame the game and rotate around the model, the blur could be seen from any angle. The same cannot be said for 2D motion blur.

That is what I THINK it is; I have no way to prove this. Someone stated that games like Lost Planet used 3D motion blur and that it is a true next-gen graphical upgrade, and then I brought up that games like DMC1 had motion blur (when you jumped up and shot with E&I, there was a slight motion blur). He countered by saying that DMC1 has 2D motion blurring on certain attacks. I didn't know how to respond to it back then, hence why I am asking now. I don't even know if what he said was true... =/

Anyway thanks to the other guys who answered my questions. I will be reading it more thoroughly the second time around. :)
 

Vaporak

Member
Dahbomb said:
For that guy asking about 2D/3D motion blur: someone a while back brought it up for next-gen gameplay. Apparently a lot of games in previous generations used 2D blur, where they basically blur the screen and not the model itself. 3D blur is where the whole model is blurred, so if you were to freeze-frame the game and rotate around the model, the blur could be seen from any angle. The same cannot be said for 2D motion blur.

That is what I THINK it is; I have no way to prove this. Someone stated that games like Lost Planet used 3D motion blur and that it is a true next-gen graphical upgrade, and then I brought up that games like DMC1 had motion blur (when you jumped up and shot with E&I, there was a slight motion blur). He countered by saying that DMC1 has 2D motion blurring on certain attacks. I didn't know how to respond to it back then, hence why I am asking now. I don't even know if what he said was true... =/

Anyway thanks to the other guys who answered my questions. I will be reading it more thoroughly the second time around. :)

No problem. As far as 2D vs. 3D motion blur is concerned, I have no clue. As far as I knew, all blur was 2D-based, since it's a post-processing effect done on the frame buffer.
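For what it's worth, here's a toy Python sketch of what a purely 2D, frame-buffer blur could look like (made up and simplified, just blending in the previous frame; actual games vary): the smear happens on the finished image, so it knows nothing about the 3D model underneath.

Code:
def blur_frame(current, previous, blend=0.35):
    # Blend a fraction of the previous frame into the current one, pixel by pixel.
    out = []
    for cur_row, prev_row in zip(current, previous):
        out.append([(1.0 - blend) * c + blend * p
                    for c, p in zip(cur_row, prev_row)])
    return out  # this result also becomes "previous" for the next frame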
 

Bojangles

Member
mr jones said:
OK, but what would be the visual difference between an orange model using Normal mapping and an orange model using Parallax mapping?

Hell, even I don't know anything about that stuff. I always thought parallax had to do with multiple two-dimensional backgrounds moving in the same direction at different speeds to simulate depth...

Imagine something with a little more depth, like a crater (from a rocket explosion perhaps).

Instead of having to deform the geometry of the wall you hit, you can put a square on the wall with the explosion texture.

Ok, so now we're standing in front of a wall, looking at a decal of a crater. If the explosion decal was "bump mapped", then a light on our right would make one side of the crater brighter (the side on our left, because the normal map says it's directly facing the light), and one side dimmer (the side on our right, because it's deeper, and the normal map tells the renderer that the light wouldn't hit it).

If we strafe left and right, the light hasn't moved, and the crater hasn't moved, so it would mostly look the same, but the normal map would allow the lighting to update realistically.

In reality, if the crater had depth and we strafed to the left, we should see less of the left side of the crater and more of the right side. This is what parallax mapping does. It uses a depth map and some fancy math to "warp" the texture so that, when painted on the flat surface, it shows the parts of the texture we would have seen if there were actual depth there.
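A toy Python sketch of that "warp" (made up, and only the simplest offset-mapping form; real implementations are fancier): the texture lookup gets pushed along the view direction by the stored depth.

Code:
def parallax_uv(height_map, u, v, view_dir, scale=4.0):
    # view_dir: eye direction in the surface's tangent space (x, y, z),
    # with z pointing out of the surface; scale here is in texels.
    depth = height_map[v][u]   # depth stored for this texel
    vx, vy, vz = view_dir
    # Shift the lookup further the deeper the texel and the more oblique the
    # view; the color texture is then sampled at the shifted coordinate.
    return (u + scale * depth * vx / vz,
            v + scale * depth * vy / vz)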

http://en.wikipedia.org/wiki/Parallax_mapping has some good pictures. Take a look at the different views of the crater, and remember that it's actually modelled as a flat surface.
 