
Soul Calibur Legends [Wii] new screens

I try to reserve judgement when I see screenshots of Wii games as they tend to never look good, even if they are doctored to some degree. The true litmus test for me is seeing the game in motion, and hopefully having an understanding of how the controls will work. Metroid Prime 3 looked kind of average going by the screenshots, but when it's in motion...holy shit. I found the same to be true of Dragon Quest Swords.

So, the screenshots for Soulcalibur Legends look like they're from a PS2 game, fine, let's see it in motion...

Scrubking said:

...aaannnddd it looks terrible.

Sounds like they're going to have a versus mode, so maybe not all is lost.
 

Threi

notag
ok i might as well give my comments on these bullshots.

they look alright, but a tad low-poly. that wouldn't be much of a problem if there were lots of enemies on screen, but there might not be =\

the real problem though is the bland art style. That is worth more criticism than the "ZOMG IVY'S ASS IS TEH JAGGY"
 

nightside

Member
iirc it's confirmed to be 60fps and that's good.

some shots look good, some simply don't. but i guess the dev team wanted to focus on controls and framerate more than graphics. so there could be room for improvement..better textures, a bit of bump mapping, good lighting, and this game will look great.

and..how can we talk about "limitations" when Prime 3 shows that much, much more could be done?
 

Pimpbaa

Member
The lighting is really killing the look of this game (I don't mind the lower polycount from the main series, that is to be expected).
 

LCGeek

formerly sane
Popstar said:
I don't understand why devs are still using per-vertex lighting (gouraud shading) on the Wii when several titles show that it's perfectly capable of doing per-pixel lighting.

It's one of the big differentiators between the titles that make the Wii look a step up in power and those that don't. It's not that hard.

That would require effort, just like using the TEV, but hey, it's Namco and they have a history of this.
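(For anyone wondering what that per-vertex vs. per-pixel distinction actually looks like: the Wii's TEV isn't driven by a shading language at all, so take this as a purely illustrative desktop-GLSL sketch, held in C++ string constants, with made-up names like uLightDir. Gouraud evaluates the lighting once per vertex and interpolates the resulting colour; per-pixel lighting interpolates the normal instead and evaluates the lighting at every fragment, which is what keeps shading from smearing across big low-poly surfaces.)

[CODE]
// Illustrative only: old-style GLSL sources held as C++ string constants.

// Per-vertex (Gouraud): lighting evaluated in the vertex shader, colour interpolated.
static const char* kGouraudVS = R"(
    varying vec3 vColor;
    uniform vec3 uLightDir;          // assumed normalized, in eye space
    void main() {
        vec3  n = normalize(gl_NormalMatrix * gl_Normal);
        float d = max(dot(n, uLightDir), 0.0);
        vColor  = gl_Color.rgb * d;  // lighting baked into a per-vertex colour
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }
)";
static const char* kGouraudFS = R"(
    varying vec3 vColor;
    void main() { gl_FragColor = vec4(vColor, 1.0); }  // just the interpolated colour
)";

// Per-pixel: the normal is interpolated and the lighting is evaluated per fragment.
static const char* kPerPixelVS = R"(
    varying vec3 vNormal;
    void main() {
        vNormal     = gl_NormalMatrix * gl_Normal;
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    }
)";
static const char* kPerPixelFS = R"(
    varying vec3 vNormal;
    uniform vec3 uLightDir;
    void main() {
        float d = max(dot(normalize(vNormal), uLightDir), 0.0);
        gl_FragColor = vec4(vec3(d), 1.0);
    }
)";
[/CODE]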
 

fernoca

Member
LCGeek said:
That would require effort, just like using the TEV, but hey, it's Namco and they have a history of this.
Exactly. A shader is essentially a computer program executed in a special environment.
Although shaders were introduced for graphics-related tasks, which still account for a major part of their applications, they can also be used for more generic computation, just as generic programs can be used to compute arbitrary data. As the computational power of GPUs continues to rise faster than that of conventional CPUs, shader programming attracts more and more attention. This actually requires rethinking algorithms or problems to fit the stream-processing paradigm.
The goal of this article is to provide a look at the most important concepts concerning shaders in the most important APIs, such as OpenGL and Direct3D. The reader is assumed to be proficient with 3D graphics, a graphics API, and fourth-generation shading pipelines.
Shaders alone control a large part of the workings of a programmable graphics pipeline and thus the final appearance of an object; however, they are not the only entities involved in defining its behaviour. The actual resources being used, as well as the settings of other pipeline stages, still have a great influence on the final result.

Generic shader
A generic shader consumes multiple inputs to produce multiple outputs; inputs can be constant between invocations or varying. A generic shader replaces a specific stage of the shading pipeline with a user-defined program to be executed on demand, hereafter called the kernel. Shaders generally run in parallel with limited inter-communication between different executions (hereafter, instances), usually limited to simplified first-derivative computation and cache optimizations. Being basically a sequence of operations, kernels are defined using special programming languages tailored to the needs of explicit parallelization and efficiency. Various shading languages have been designed for this purpose.

Depending on the stage being replaced, a shader fetches specific data, while the output it produces is fed to successive stages. Input data is typically read-only and can be categorized in two main types:
Uniform input holds constant values between different kernel instances of the same draw call. The application can easily set the value of each uniform between draw calls, but there is no way to change a uniform value on a per-instance basis. Uniform values are loaded by calling specific API functions.
Samplers are special uniforms meant to be used to access textures. Typically, the sampler identifier itself specifies a texture sampling unit in the pipeline (to be used for texture-lookup operations), which is then bound to a texture. Samplers are usually employed by kernels similarly to objects; the intended usage model presents some differences depending on the shading language being used.
Varying input is typically the result of a previous computational stage, sometimes bound to special, context-dependent semantics. For example, vertex positions are typical varying inputs for vertex shaders (called attributes in this context), and pixel texture coordinates are typical varying inputs to pixel shaders.
The output is conceptually always varying (although two instances may actually output the same value). Fourth-generation shading pipelines allow control over how output interpolation is performed when primitives are rasterized and the pixel shader's varying input is generated.
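As a concrete sketch of how an application feeds those inputs from the host side (classic OpenGL 2.x style, with made-up names like uTint and uDiffuse): uniforms are set between draw calls through API entry points, a sampler uniform simply receives the index of the texture unit it should read from, and the varying data comes from the vertex arrays bound for the draw call.

[CODE]
// Sketch only: feeding uniform, sampler and varying inputs (OpenGL 2.x,
// extension/loader setup omitted).
#include <GL/gl.h>

void drawWithInputs(GLuint program, GLuint diffuseTex, GLsizei vertexCount)
{
    glUseProgram(program);

    // Uniform: constant for every kernel instance of this draw call.
    GLint tintLoc = glGetUniformLocation(program, "uTint");
    glUniform3f(tintLoc, 1.0f, 0.9f, 0.8f);

    // Sampler: a uniform whose value is just a texture-unit index...
    GLint samplerLoc = glGetUniformLocation(program, "uDiffuse");
    glUniform1i(samplerLoc, 0);
    // ...which is then bound to an actual texture.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, diffuseTex);

    // Varying inputs (positions, normals, UVs) come from the vertex arrays
    // set up elsewhere; they cannot be changed per instance from here.
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
}
[/CODE]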


Vertex shader
A vertex shader consumes a dataset (called a "vertex" for convenience), applies the specified operations, and produces a single resulting dataset. A vertex shader replaces part of the geometry stage of a graphics pipeline. Vertex shaders consume the vertices filled in by the Input Assembly stage, applying the specified kernel "for each vertex". The result, which usually includes an affine transform, is then fed to the next stage, the Primitive Assembly stage. A vertex shader always produces a single transformed "vertex" and runs on a vertex processor.

Producing vertex positions for further rasterization is the typical task of the vertex shader.
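A minimal vertex kernel doing exactly that, i.e. nothing but the transform (GLSL 1.20-era built-ins, held as a C++ string purely for illustration):

[CODE]
// Minimal vertex shader: one vertex in, one transformed vertex out.
static const char* kMinimalVS = R"(
    void main() {
        // The usual transform: object space -> clip space.
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        // Pass a varying (here the vertex colour) on to later stages.
        gl_FrontColor = gl_Color;
    }
)";
[/CODE]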

Note that the current meaning of "vertex" may or may not match the intuitive idea of a vertex. In general, it is better to think of a "vertex" as the basic input data set. This is especially important for generic processing, in which a vertex may hold attributes that do not map to any "geometrical" meaning.

Although vertex shaders were the first hardware-accelerated shader type with a high degree of flexibility (see GeForce3, Radeon R200), their feature set was considerably different from that of other stages for a long time. Even if the exposed instruction set can be considered unified, the performance characteristics of vertex processing units can be considerably different from those of other execution units. Historically, branching has been considerably more efficient and flexible on vertex processors. Similarly, dynamic array indexing was possible only on vertex processors up to fourth-generation pipelines.

Geometry shader
A geometry shader consumes whole primitives (assemblies of individual vertices) and outputs a primitive stream. Geometry shaders replace a part of the geometry stage subsequent to the Primitive Assembly stage and prior to Rasterization. Unlike other shader types, which replaced well-known tasks, geometry shaders have only recently been introduced to realtime systems, so they don't map to anything that was possible before. Additionally, the problem being solved is conceptually very different, so a generic geometry shader will be considerably different from a typical vertex or fragment shader.
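For a feel of what "whole primitives in, primitive stream out" means, here is a trivial pass-through geometry kernel (GLSL 1.50 syntax, again just a C++ string for illustration); it receives one assembled triangle and emits it unchanged, where a real one could emit more, fewer, or different primitives.

[CODE]
// Pass-through geometry shader: one triangle in, the same triangle out.
static const char* kPassThroughGS = R"(
    #version 150
    layout(triangles) in;                          // input primitive type
    layout(triangle_strip, max_vertices = 3) out;  // output primitive stream
    void main() {
        for (int i = 0; i < 3; ++i) {
            gl_Position = gl_in[i].gl_Position;    // copy each assembled vertex
            EmitVertex();
        }
        EndPrimitive();                            // close the output primitive
    }
)";
[/CODE]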

That's just what I think.
 

lordmrw

Member
JDSN said:
They need to tighten up the lighting on level three.


That's basically what's hurting the look of this game. Everything has a really flat look to it because there doesn't seem to be a lighting engine in place, if they even bothered to add one.
 
The other thing hurting this game is that it just plain looks bad. In some shots you can clearly see paper-thin modeling on the arms and legs of some enemies.
 

tanasten

glad to hear people aren't stupid anymore
The trailer looks more than fine. If they come up with a great gameplay method, with sword slashing via the Wiimote, this is going to be a must.
 

DreD

Member
I think it looks ok, but those screens show very bland environments and the color palette seems limited. Putting some more vibrant colors in there would definitely improve the look of the game.
 

Threi

notag
fernoca said:
Exactly. A shader is essentially a computer program executed in a special environment. ...

That's just what I think.

4y032q8.jpg
 

Azelover

Titanic was called the Ship of Dreams, and it was. It really was.
Looks awful. I'll be fine with it as long as it has good gameplay and they fix those weird-looking shadows. Make them darker circles or something, just don't leave them like they are, please.
 

DreD

Member
C- Warrior said:
If the game runs at 60 fps then I could excuse some of the graphical shortcomings, but if not, then it doesn't impress.

From IGN
We played the multiplayer cooperative mode and noticed that the framerate took a significant dip to accommodate the split-screen. Where the single-player adventure runs at a smooth-as-silk 60 frames per second, the two-player split-screen mode has been lowered to 30. It's still acceptable and the action flows well at this lowered framerate, but it was definitely a noticeable sacrifice.
 

castle007

Banned
fernoca said:
Exactly. A shader is essentially a computer program executed in a special environment. ...

That's just what I think.

please don't tell me that you wrote all that :O
 

rakka

Member
IIRC they're aiming for 60fps.

Not even applying AA to the shots saves them from looking terrible.

Where's mah lighting?
 

GDGF

Soothsayer
a Master Ninja said:
So is this true or not?

It's true.

Besides the lighting, the game needs more vibrant coloring all around. The colors are way too muted. If they fixed those two problems, everything would be groovy.
 

fernoca

Member
rakka said:
IIRC they're aiming for 60fps.

Not even applying AA to the shots saves them from looking terrible.

Where's mah lighting?
I don't know what's the big deal about lighting..I mean..global illumination algorithms (as I like to call them), used in 3D computer graphics, are commonly used to add realistic lighting to 3D scenes. Such algorithms take into account not only the light which comes directly from a light source (direct illumination), but also subsequent cases in which light rays from the same source are reflected by other surfaces in the scene (indirect illumination).

Images rendered using global illumination algorithms often appear more photorealistic than images rendered using only direct illumination algorithms. However, they are also much slower to generate and more computationally expensive. A common approach is to compute the global illumination of a scene and store that information with the geometry, i.e., radiosity. That stored data can then be used to generate images from different viewpoints for generating walkthroughs of a scene without having to go through expensive lighting calculations repeatedly.

Radiosity, ray tracing, beam tracing, cone tracing, path tracing, metropolis light transport, ambient occlusion, and photon mapping are examples of algorithms used in global illumination, some of which may be used together to yield results that are fast, but accurate.

These algorithms model diffuse inter-reflection which is a very important part of global illumination; however most of these (excluding radiosity) also model specular reflection, which makes them more accurate algorithms to solve the lighting equation and provide a more realistically illuminated scene.

The algorithms used to calculate the distribution of light energy between surfaces of a scene are closely related to heat transfer simulations performed using finite-element methods in engineering design.

In real-time 3D graphics, the diffuse inter-reflection component of global illumination is sometimes approximated by an "ambient" term in the lighting equation, which is also called "ambient lighting" or "ambient color" in 3D software packages. Though this method of approximation (also known as a "cheat" because it's not really a global illumination method) is easy to perform computationally, when used alone it does not provide an adequately realistic effect. Ambient lighting is known to "flatten" shadows in 3D scenes, making the overall visual effect more bland. However, used properly, ambient lighting can be an efficient way to make up for a lack of processing power.
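In code, that "ambient term" cheat is literally one added constant in the per-light evaluation. A toy C++ version (made-up names, nothing Wii-specific; normal and light direction assumed normalized):

[CODE]
#include <algorithm>  // std::max

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Direct illumination plus a flat "ambient" constant standing in for all the
// indirect light. Cheap, but this is exactly what flattens shadows when overused.
Vec3 shade(const Vec3& albedo, const Vec3& normal, const Vec3& lightDir,
           const Vec3& lightColor, const Vec3& ambient)
{
    float ndotl = std::max(dot(normal, lightDir), 0.0f);  // Lambertian diffuse
    return { albedo.x * (ambient.x + lightColor.x * ndotl),
             albedo.y * (ambient.y + lightColor.y * ndotl),
             albedo.z * (ambient.z + lightColor.z * ndotl) };
}
[/CODE]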

As simple as that.
 

Threi

notag
fernoca said:
I don't know what's the big deal about lighting..I mean..global illumination algorithms (as I like to call them), used in 3D computer graphics, are commonly used to add realistic lighting to 3D scenes. ...

As simple as that.
4y032q8.jpg


STOP SAYING WORDS.
 

fernoca

Member
Mr. Pachunga Chung said:
STOP SAYING WORDS.
Oh, come on Pachunga! :lol
What do you mean by "words"??
To me, a word is a unit of language that carries meaning and consists of one or more morphemes which are linked more or less tightly together, and has a phonetical value. Typically a word will consist of a root or stem and zero or more affixes. Words can be combined to create phrases, clauses, and sentences. A word consisting of two or more stems joined together is called a compound.

Depending on the language, words can sometimes be difficult to identify or delimit. While word separators, most often spaces, are commonplace in the written corpus of several languages, some languages such as Chinese and Japanese do not use these. Words may contain spaces, however, if they are compounds or proper nouns such as ice cream and the United States of America. Furthermore, synthetic languages often combine many different pieces of lexical data into single words, making it difficult to boil them down to the traditional sense of words found more easily in analytic languages; this is especially problematic for polysynthetic languages such as Inuktitut and Ubykh where entire sentences may consist of single such words. Especially confusing are languages such as Vietnamese, where spaces do not necessarily indicate breaks in words and boundaries must be determined by the context of the piece.

However, of all situations, the most confusing are those for oral languages, which potentially only offer phonolexical clues as to where word boundaries lie. Sign languages pose a similar problem as well, as does body language.

Official words, however, would be documented in a dictionary of whichever language you are categorizing it under.

In synthetic languages, a single word stem (for example, love) may have a number of different forms (for example, loves, loving, and loved). However, these are not usually considered to be different words, but different forms of the same word. In these languages, words may be considered to be constructed from a number of morphemes (such as love and -s).

In spoken language, the distinction of individual words is even more complex: short words are often run together, and long words are often broken up. Spoken French has some of the features of a polysynthetic language: il y est allé ("He went there") is pronounced /i.ljɛ.ta.le/. As the majority of the world's languages are not written, the scientific determination of word boundaries becomes important.

After all, there are five ways to determine where the word boundaries of spoken language should be placed:

Potential pause
A speaker is told to repeat a given sentence slowly, allowing for pauses. The speaker will tend to insert pauses at the word boundaries. However, this method is not foolproof: the speaker could easily break up polysyllabic words.
Indivisibility
A speaker is told to say a sentence out loud, and then is told to say the sentence again with extra words added to it. Thus, I have lived in this village for ten years might become I and my family have lived in this little village for about ten or so years. These extra words will tend to be added in the word boundaries of the original sentence. However, some languages have infixes, which are put inside a word. Similarly, some have separable affixes; in the German sentence "Ich komme gut zu Hause an," the verb ankommen is separated.
Minimal free forms
This concept was proposed by Leonard Bloomfield. Words are thought of as the smallest meaningful unit of speech that can stand by themselves. This correlates phonemes (units of sound) to lexemes (units of meaning). However, some written words are not minimal free forms, as they make no sense by themselves (for example, the and of).
Phonetic boundaries
Some languages have particular rules of pronunciation that make it easy to spot where a word boundary should be. For example, in a language that regularly stresses the last syllable of a word, a word boundary is likely to fall after each stressed syllable. Another example can be seen in a language that has vowel harmony (like Turkish): the vowels within a given word share the same quality, so a word boundary is likely to occur whenever the vowel quality changes. However, not all languages have such convenient phonetic rules, and even those that do present the occasional exceptions.
Semantic units
Much like the abovementioned minimal free forms, this method breaks down a sentence into its smallest semantic units. However, language often contains words that have little semantic value (and often play a more grammatical role), or semantic units that are compound words.
In practice, linguists apply a mixture of all these methods to determine the word boundaries of any given sentence. Even with the careful application of these methods, the exact definition of a word is often still very elusive.


So I don't know why you're making such a big deal out of this..I mean, you want it even simpler? :D


Okay, I'll stop..but geeze..so many people focusing on the technical aspects of those lame pictures, when everyone knows that Wii games usually look bad in pictures, as ALL the previews of Metroid Prime 3 have been saying.
 
Hey, Mr. Pachunga Chung, come post that picture again. Man, it gets me every time!

And yes, I understand that we're talking about the Wii here, and the Wii's horsepower cannot launch rocket ships, but as has been (extremely wordily) stated by fernoca, it could stand to do a little better. I also understand that we're nitpicking graphics when the Wii is about gameplay and not graphics, blah de blah, but I think it's alright to do that in a thread about screenshots. When we can play the game outside of an airplane hangar in L.A., then we'll talk about the gameplay.
 

rakka

Member
fernoca said:
I don't know what's the big deal about lighting..I mean..global illumination algorithms (as I like to call them), used in 3D computer graphics, are commonly used to add realistic lighting to 3D scenes. ...

As simple as that.

You should mention your copypasta source :p

Global illumination - Wikipedia, the free encyclopedia
http://en.wikipedia.org/wiki/Global_illumination
 

sonic4ever

Member
the game looks nice to me. I don't know why people are putting this game down? ...I know why.....it's because it's on the Wii and not on the PS3 or 360 and this is GAF.:D
 

rakka

Member
fernoca said:
http://i17.tinypic.com/4ouddao.jpg

Who's he?

sonic4ever said:
the game looks nice to me. I don't know why people are putting this game down? ...I know why.....it's because it's on the Wii and not on the PS3 or 360 and this is GAF.

I'm pleased that it's on the Wii.

It's just that the game itself doesn't look good.
 

jts

...hate me...
sonic4ever said:
the game looks nice to me. I don't know why people are putting this game down? ...I know why.....it's because it's on the Wii and not on the PS3 or 360 and this is GAF.:D
Hey, are you two people on the same computer?
 

jts

...hate me...
sonic4ever said:
???? what are you talking about???
The guy next to you who answered your question directly in your post while you weren't looking!

OMG HE HAS A KNIFE
 

jts

...hate me...
You're right, I'm sorry. It's 5 AM here where I live and I better get some sleep :(

cya
 
Because you asked a question and they answered that question on that same line. :D

I was making a joke....maybe not a good one. The game looks good, but GAF is always putting down graphics in games. They are never happy with anything. I am playing Tenchu Z right now and having fun with it, but if I went by GAF it would be the worst game ever.
 