
WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis


z0m3le

Banned
Okay, I'll spell this out a bit differently.

Tessellation in the modern graphics sense (whether it's carried out on a tessellator or in software or by hand or whatever) refers to taking a polygon and breaking it up into more polygons.

The Legend of Zelda: The Wind Waker does not use tessellation in this sense of the word.

The language that led people to believe that Wind Waker had tessellation was a reference to a "tessellated water plane." However, this was misinterpreted. The reference to a "tessellated water plane" uses a mathematical definition of "tessellation" which basically refers to any representation of something with a mesh of polygons. The "tessellated water plane" is just a surface made up of a large number of polygons. There is no "tessellation" operation happening on the geometry. The water plane was not created by having a processor on the Gamecube take a plane and break it up in real-time. "Tessellated water plane" just refers to the fact that the water plane happens to be made up of a bunch of polygons.
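To make the distinction concrete, here is a minimal sketch of the operation described above: taking one triangle and breaking it into more triangles by midpoint subdivision (an illustration of the concept only, not how any particular tessellator works):

```python
def midpoint(p, q):
    # Component-wise average of two 2D points.
    return tuple((a + b) / 2 for a, b in zip(p, q))

def subdivide(tri):
    """Split one triangle into four by connecting its edge midpoints."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(len(subdivide(tri)))  # 4 triangles where there was 1
```

Wind Waker's water never goes through a step like this at runtime; its mesh is simply authored as many polygons.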

Sorry to interrupt, but isn't it more apt to just mention that GPU7 has a tessellator, will almost certainly be using it for tessellation, and is likely based on at least R700's Gen 2 AMD tessellator, if not a Gen 3 (enabling DX11 tessellation hardware compatibility)?

I mean, it is great to learn about this stuff, although I believe coldblooder mentioned this pages back with pictures from the game (Wind Waker) showing it in action, so it is sort of just treading on old topics and eating up space in the discussion. Of course, I've brought up the power efficiency stuff in the past, but it seems we are back to 8 GFLOPs per watt, which I feel is because that sort of thing is being overlooked and needs to be pointed out again.

fourth storm: I read your theory again. It does answer why the shader blocks would be so big, and if the ALUs are individually larger, the entire chip might be sucking down more wattage. In that case you might be right; obviously my post above was directed at the idea of a 160-ALU GPU sucking down too much power, but this could make sense. We really wouldn't know. A question I have, though: having 160 ALUs still limits the polygon count, and that was before taking into account the TEV modifications. Are you suggesting that it somehow gets close to the 500 million polygons that the 360's Xenos can set up? Or maybe you think that whatever lets GCN produce twice the polygons per clock is being done here? I find that a pretty big bottleneck, because while porting 360 games might not require the full 500 million polygon count, the less you have available from that number, the harder porting is going to be, thanks to all the other work the chip has to perform.
 

Just something to clarify:

The tri-setup rate for the Xbox 360 is 500 million polys/sec. That number is not dependent on the ALUs, just on the architecture and the GPU clock speed (it sets up 1 poly/cycle @ 500MHz). The PS3's GPU is an older architecture and apparently doesn't set up polygons as well (1 poly/2 cycles).

Blu had a nice post about that trisetup number:

Just to clarify: that's the triangle setup rate (aka trisetup) - the rate at which the rasterizer can process individual triangles and send them down to the interpolators. The significance of this rate is that no matter what the vertex/geometry/tessellation shaders do, they cannot produce more triangles than what the trisetup can handle. BUT that does not mean that those shading units always produce vertices and/or topology at this rate! IOW, the trisetup rate is merely a cap of the pipeline in its ability to handle triangles, not the rate in every given case - a particular case can be much lower than that.
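A quick sanity check on those rates (a minimal sketch; the per-clock figures and the 500 MHz clocks are the ones stated in the posts above, and the min() expresses blu's point that trisetup is a cap rather than a guaranteed rate):

```python
# Peak setup rate = triangles per clock * clock speed.
xenos_peak = 1.0 * 500e6   # Xbox 360: 1 poly/cycle @ 500 MHz -> 500M/s
rsx_peak   = 0.5 * 500e6   # PS3: 1 poly per 2 cycles @ ~500 MHz -> 250M/s

def triangles_through_pipeline(shader_output_rate, trisetup_peak):
    # The rasterizer never receives more than the trisetup cap,
    # but the shaders may well produce fewer than it.
    return min(shader_output_rate, trisetup_peak)

print(triangles_through_pipeline(300e6, xenos_peak) / 1e6)  # 300.0
```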

As for the performance, people who are arguing that the Wii U only has 160 ALUs are also assuming that the ALUs are more efficient than the ones in current-gen systems.
 

z0m3le

Banned

The Radeon series from at least R700 to R900 were limited in a similar manner, meaning Wii U's ALUs could only produce a maximum of 225 million polygons if it shares this limitation with the R700 family. My point is that while Xenos might not "always" hit 500 million polygons, it is less limited; the Wii U cannot produce as many polygons if this limitation is true, which would also mean there are fewer resources left to perform comparably with the 360, making porting much harder.
 
I've seen the Wind Waker example thrown around whenever talk about "tessellation" surfaces in Wii U related threads. I would like to ask what the point of that example is, because it is not an example of the "adaptive tessellation" typically discussed in relation to DX11 or any modern equivalent API feature set.

So why is this example constantly referenced? If going purely by the term "tessellation", haven't we seen multiple examples of tessellated surfaces even in very ancient APIs and games? Even the way polygonal rendering is handled fits the term tessellation. I might be wrong, but it would be nice if someone well informed could shed some light on the matter. Thanks.
 
The Radeon series from at least R700 to R900 were limited in a similar manner, meaning Wii U's ALUs could only produce a maximum of 225 million polygons if it shares this limitation with the R700 family. [...]
What limitation are you referring to? Where did that "225 million" number come from?
 

Rolf NB

Member
The setup rate is irrelevant. 1080p60 is 120M pixels per second. Why would you even want to render two polygons for every pixel you display, on average?

(Or make it eight polygons per pixel for 720p30.)
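For reference, the arithmetic behind those ratios (a quick sketch; the 225M polys/sec figure is the disputed Wii U number from earlier in the thread):

```python
# Pixels per second at the two output modes mentioned.
px_1080p60 = 1920 * 1080 * 60   # ~124.4M pixels/sec ("120M" rounded)
px_720p30  = 1280 * 720 * 30    # ~27.6M pixels/sec

polys = 225e6                    # the disputed Wii U setup figure
print(round(polys / px_1080p60, 1))  # ~1.8 polygons per displayed pixel
print(round(polys / px_720p30, 1))   # ~8.1 polygons per pixel at 720p30
```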
 

krizzx

Junior Member
So when you asked if tessellation would be possible on a fixed function unit like the TEV, you weren't asking about the modern implementation of tessellation made available in DX11 that gamers and the industry are so excited about? What were you asking about?

You need to realise the TEV isn't anything special nowadays. Programmable shaders are the future and they aren't lacking any 'magic' or some such.

Fixed function shaders will always have one advantage over programmable shaders: "efficiency".

They can do more than a modern shader using the same amount of horsepower. The range of effects may be limited and the ease with which they are achieved may be lower, but that is irrelevant to what I'm looking at. What I'm looking at is getting the most performance for your watts. Kind of like how a 3dfx Voodoo card could achieve performance that a card with slightly higher specs but more standard/modern (for the time) tech could not.

It's like the difference between programming in assembly and programming in Java. Java is easier to use and produces results faster, but it wastes tons of resources. Doing things in assembly is time consuming but allows you to achieve performance that simply isn't possible in Java. Another aspect is that there are so many different types of cards in modern PCs that assembly (which is tied to an individual chip) would not allow code portability. That is not an issue with consoles, since they all have the exact same tech.

I want a setup that can squeeze out every hertz. Going this route, you could achieve an energy efficient, low cost console that could perform on the level of consoles that boast much bigger numbers, or better. This seems to be Nintendo's goal, from what I can tell.

As for tessellation, I'm only interested in the visual result. How it is achieved doesn't make much difference to me. If I'm not mistaken, tessellation as a visual style existed long before DX11 or computer implementations in general. I'm sure there are more ways those results can be achieved than by DX11 standards. That is why I am curious about whether or not you can achieve them on fixed function units.

This reminds me of how people said the Gamecube couldn't do normal mapping, bump mapping, or bloom when it was first released because it didn't use modern shaders, yet Rogue Leader did all of those things at launch.
 

Popstar

Member
Fixed function shaders will always have one advantage over programmable shaders: "efficiency".
This isn't true. In fact, the opposite is the case.

You can trace this in the evolution of GPU hardware. Programmable vertex shaders were available before pixel shaders because, as the possible combinations of lighting, texture coordinate generation and such increased, having custom hardware for each case taking up die space and power while idle was a poor idea. Later, fixed-function APIs were already being run on programmable vertex hardware because it was more efficient, so why not just expose it directly?

Looking at hardware for pixel shading, you went from large hard-coded equations like the one in the TEV, which were just a waste of space when doing a simple replace, to programmable hardware which broke things down into smaller operations.

You then went from separate vertex and fragment hardware where one could sit idle depending on the workload, to unified shaders capable of dynamically allocating the load.

The change from the older VLIW5 architecture to the current one has also in part been driven by a desire to better allocate resources and leave less hardware idle.
 

HTupolev

Member
I'm sure there are more way's those results can be achieved than by DX11 standards.
Of course. CPUs can carry out pretty much anything, and that was true even around their inception. You could go back to the 1950s and write a program that tessellates polygons that could run on computers back then.

A 1950's software tessellator would run like crap, of course.

That is why I am curious about whether or not you can achieve them on fixed function units.
Actually, from what I understand, modern GPU tessellation does tend to involve fixed-function tessellators. That's been true at least since Microsoft and AMD decided to duct-tape a tessellator to the beginning of a DX9-esque pipeline for the Xbox 360.

//==================

This reminds me of how people said the Gamecube couldn't do normal mapping, bump mapping, or bloom when it was first released because it didn't use modern shaders, yet Rogue Leader did all of those things at launch.
People who thought the Gamecube wasn't going to be able to do bump mapping were aiming weirdly low; simple forms of diffuse bump mapping very much predate programmable shaders. I don't know when they were first supported in hardware, but I do know that the Voodoo2 had bump map functionality. AFAIK, doing basic diffuse bump mapping wasn't even all that exotic on Gamecube; it had texture operations specifically designed to support it.

Seems like where the Gamecube started running into walls was where people wanted to start mixing techniques to produce more sophisticated lighting models; AFAIK there aren't exactly a whole lot of Gamecube games that have tons of surfaces simultaneously applying bump maps to diffuse lighting, specular lighting, and reflection maps.
 

A More Normal Bird

Unconfirmed Member
As for tessellation, I'm only interested in the visual result. How it is achieved doesn't make much difference to me. If I'm not mistaken, tessellation as a visual style existed long before DX11 or computer implementations in general. I'm sure there are more ways those results can be achieved than by DX11 standards. That is why I am curious about whether or not you can achieve them on fixed function units.

What do you mean by the bolded? Tessellation just means an arrangement of geometric shapes that fit together perfectly (no gaps or overlapping) and thus can be repeated to infinity. As Refreshment.01 pointed out, when people are talking about tessellation in computer graphics, they're talking about adaptive tessellation, which increases polygon density in real time. This requires a tessellator on the GPU. HTupolev corrected you in that TWW doesn't have tessellation in the way that you seemed to believe it did, but you said you weren't talking about that kind of tessellation.

The Wii U has a tessellator; the TEV did not.
 

krizzx

Junior Member

No, I'm talking about producing the same results through means other than a tessellator. I'm aware of the difference between artistic tessellation and polygon tessellation, but given that they are both based on the same principle, could you not augment shaders to visually produce the same result? What I have in mind would be similar to steep parallax mapping, only with a more tangible implementation. I'm guessing it's more farfetched than I imagined, going by these responses.

This isn't true. In fact, the opposite is the case. [...]
What you are describing is capability. I'm aware that modern shaders are more capable. What I'm looking at is how much power would be needed to achieve the same effect on a fixed function shader as opposed to a modern one.

Say you used a GPU with the same specs as Hollywood, only with Shader Model 1.1 instead of the TEV: would shading on the level of Mario Galaxy 2 be achievable with the same performance/power draw/hardware cost? http://guide2games.org/wp-content/uploads/2008/02/super-mario-galaxy2.jpg http://images4.wikia.nocookie.net/_...s/3/3a/Super_Mario_Galaxy_2_Screenshot_89.jpg http://i.imgur.com/27O3Q.jpg The shading in that game was near the level of major PS3/360 titles, yet the GPU only had a fraction of the power. That screams better efficiency to me. I look at the shading in that game and wonder what could be achieved by a TEV with 4 times as many stages and a GPU with even more horsepower behind it.

If I recall correctly, the GC's TEV pretty much allowed it to produce texture effects with hardly any impact on graphical performance, which is what led to Rebel Strike being able to run 20 million polygons at 60 FPS with all the shading it had. To me, fixed function shaders seem like the naturally better choice for gaming hardware.
 

joesiv

Member
Another aspect is that there are so many different types of cards in modern PCs that assembly (which is tied to an individual chip) would not allow code portability. That is not an issue with consoles, since they all have the exact same tech.
Unless you want multi-platform games. Nintendo needs to make developers' lives easier rather than harder... What if Apple made all iOS developers code in assembly (your analogy)? Would they get much support in this day and age? The apps would be super efficient... but you'd have none of them...
 

A More Normal Bird

Unconfirmed Member
No, I'm talking about producing the same results through means other than a tessellator. [...]

Right, so you meant something with similar visual results to tessellation that doesn't actually transform the mesh? I'm going to say it's farfetched as well. Some of the big advantages of tessellation are its potential benefits to dynamic LOD, the way it can reduce artist workload if the model is designed right, and the fact that it doesn't fall apart when viewed at certain angles or cause texture stretching. I can't see any 2D texture mapping procedure matching those benefits, though if some of the more knowledgeable posters in this thread know of any I'd like to hear it.

What you are describing is capability. I'm aware that modern shaders are more capable. What I'm looking at is how much power would be needed to achieve the same effect on a fixed function shader as opposed to a modern one. [...]

Popstar wasn't talking just about capability. You mentioned efficiency in terms of on-screen results, but Popstar meant efficiency in terms of die space, processor utilisation, etc.
 

HTupolev

Member
I'm aware of the difference between artistic tessellation and polygon tessellation
The fact that you're using the descriptions "artistic" and "polygon" to separate the concepts suggests to me that you actually aren't aware of the difference I'm talking about. The "mathematical" sort of tessellation usually involves polygons.

The difference is whether we're talking about a real-time process in computer graphics which takes a surface and represents it as a mesh of polygons, or the status of being represented as a mesh of polygons. i.e. "this hardware is going to tessellate this surface" versus "this set of polygons form a tessellation of this surface/this is a tessellated surface."

could you not augment shaders to visually produce the same effect? What I have in mind would be similar to steep parallax mapping only with more physical properties.
If all you want to do is apply offsets to the vertices of an already-tessellated flat surface to create a brick wall (in most cases I'm not sure why you'd want to do this as opposed to just having a brick wall model, but whatever), that's easy to do in shaders; applying position offsets is one of the main applications of vertex shaders.

If you want to dynamically LOD the geometry on said brick wall so that it's very geometrically simple at a distance but has enough polygons to convincingly render bricks up close, you're going to have to do enough tessellation that you'd want a fixed-function tessellator unit; trying to dynamically generate that many polygons on the CPU (or, heavens forbid, on some unfortunate non-tessellation shader in the GPU) would be a clunky mess.
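To give a feel for the polygon counts involved, a toy sketch of distance-based LOD (all values hypothetical; each level of uniform midpoint subdivision quadruples the triangle count):

```python
import math

def lod_level(distance, near=2.0, max_level=5):
    # Drop one subdivision level for each doubling of distance.
    if distance <= near:
        return max_level
    return max(0, max_level - int(math.log2(distance / near)))

def triangle_count(base_tris, level):
    # Midpoint subdivision turns each triangle into four, per level.
    return base_tris * 4 ** level

for d in (1, 4, 16, 64):
    print(d, triangle_count(2, lod_level(d)))  # a 2-triangle wall quad
```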

//===================

would shading on the level of Mario Galaxy 2 be achievable with the same performance/power draw/hardware cost? The shading in that game was near the level of major PS3/360
What? SMG2 looks great, but its lighting model isn't very sophisticated at all. There are a few neat things, like the highlights at oblique angles... but even that's more artistically clever than technically mind-blowing.

You could argue that a lot of original Xbox games have more complex lighting models, let alone PS360 titles.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
The Radeon series from at least R700 to R900 were limited in a similar manner, meaning Wii U's ALUs could only produce a maximum of 225 million polygons if it shares this limitation with the R700 family. [...]
AFAIK, every ATI desktop architecture since Xenos up until GCN has been 1 tri/clock (the mobile Yamato is 0.5 tri/clock). GCN is higher - IIRC 2 tri/clock, but don't quote me on that.
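If that 1 tri/clock figure applies to Latte as well, the implied peaks are straightforward (a sketch; the ~550 MHz Latte clock is the commonly reported figure and an assumption here):

```python
def peak_setup(tris_per_clock, clock_hz):
    # Peak triangle setup rate implied by architecture and clock.
    return tris_per_clock * clock_hz

print(peak_setup(1.0, 500e6) / 1e6)  # Xenos: 500.0 M tris/sec
print(peak_setup(1.0, 550e6) / 1e6)  # Latte at 1 tri/clock: 550.0
print(peak_setup(0.5, 550e6) / 1e6)  # even at a Yamato-like 0.5: 275.0
```

Note that neither assumption reproduces the 225M figure quoted earlier.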

The setup rate is irrelevant. 1080p60 is 120M pixels per second. Why would you even want to render two polygons for every pixel you display, on average?

(Or make it eight polygons per pixel for 720p30.)
It's not about averages - it's about peak rates. When those are hit, the trisetup part of the pipeline becomes a bottleneck.

This isn't true. In fact, the opposite is the case. [...]
Generally, fixed function does have better power efficiency for a given task (assuming idle parts can be power-gated), but the problem with that is silicon growth getting out of hand - it's not viable to have fixed silicon for every possible (sub)task from a given domain, even if the domain is narrowed to certain graphics algorithms.
 

z0m3le

Banned

That was what I thought blu, thanks.

As for the last part, it would be interesting to develop hardware that used fixed function for the most demanding tasks and a light set of programmable shaders for everything else... silicon-wise it would look unbalanced, of course, but from a performance perspective it would be quite interesting to see something like that.

krizzx: I think what you are talking about would be interesting, but it would still need modern programmable shaders. Depending on your performance targets you wouldn't need many, but the real problem here is that you hard-limit developers a bit. Still, you'd see very high performance as long as bandwidth could keep up. Of course, chip size would be a nightmare.
 

Earendil

Member

Eventually your chip would be the size of a waffle.
 

HTupolev

Member
Depends on what fixed functions you need. Of course, the new AMD GPU is over 700mm², so size might not be the biggest factor stopping something like this.
It is if you want a nice single-die solution. The pure cost of printing a given amount of silicon isn't the only issue; as die sizes increase, so does the proportion of defective units. Hence why the "700mm^2" GPUs out there are clunky dual-GPU solutions; nobody wants to print 700mm^2 dies.

There are some ways to improve yields at the cost of die space utilization, but that is itself a somewhat messy and non-ideal situation to deal with.
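The yield point can be made concrete with the classic Poisson defect model (a sketch; the defect density is a made-up but plausible value, and real foundry models are more involved):

```python
import math

def die_yield(area_mm2, defects_per_mm2=0.002):
    # Poisson model: probability a die of this area has zero defects.
    return math.exp(-defects_per_mm2 * area_mm2)

for area in (150, 350, 700):
    print(area, round(die_yield(area), 2))  # 150: 0.74, 350: 0.5, 700: 0.25
```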
 

krizzx

Junior Member
krizzx: I think what you are talking about would be interesting, but it would still need modern programmable shaders. [...]

I was looking at it from the angle of the uniqueness that game consoles can possess.

Embedding the components directly into the motherboard, or some other customization, could be implemented. It wouldn't necessarily have to be on the GPU or in a cluster to work, would it?

Though, if I'm not mistaken, wasn't the Wii's TEV stronger than the one in the Gamecube at a smaller size? I remember reading that it had the number of pipelines doubled from 8 to 16, and maybe a few other things. The design itself could simply be improved to the point where it can do more at a smaller size, just as Espresso has 3 more powerful cores yet is even smaller than the previous chip in the series.

How much difference would there be between a chip having 160 shaders of the type being theorized now and 160 fixed function shaders?
 

HTupolev

Member
I was looking at it from the angle of the uniqueness that game consoles can possess.

Embedding the components directly into the motherboard, or some other customization, could be implemented. It wouldn't necessarily have to be on the GPU or in a cluster to work, would it?
You might save on die cost by splitting off the functions, but I'm totally lost as to how you plan to run your traces on the motherboard, with over 100 shader processors that all need to be able to access and pass data to the extra functions...

Elegance is nice.

Though, if I'm not mistaken, wasn't the Wii's TEV stronger than the one in the Gamecube at a smaller size? I remember reading that it had the number of pipelines doubled from 8 to 16, and maybe a few other things. The design itself could simply be improved to the point where it can do more at a smaller size, just as Espresso has 3 more powerful cores yet is even smaller than the previous chip in the series.
I think you're mistaking architectural improvements in the GPU for advancements in process node. The Wii's GPU isn't physically smaller than the Gamecube's because it has a more streamlined architecture; it's physically smaller because the individual transistors are much smaller and thus take up much less die space.

According to the Console Die Sizes thread on beyond3d, the original Xbox (whose components are built on a 180nm process) has 60M GPU transistors on a 128mm^2 GPU die. The Xbox 360 (whose components were originally built on a 90nm process) has 232M GPU (not counting the daughter eDRAM) transistors on a 182mm^2 die.

That's almost a 3X increase in transistor density over that generational leap.

The change in process node from the Flipper in the Gamecube to the Hollywood in the Wii AFAIK is similar. This would seem to imply that Wii's Hollywood is actually a substantially larger GPU architecture than Gamecube's Flipper.
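The density arithmetic behind that "almost 3X" (a quick check using the Beyond3D figures just quoted):

```python
# Transistors per mm^2 from the die figures cited above.
xbox_density = 60e6 / 128    # original Xbox GPU, 180 nm process
x360_density = 232e6 / 182   # Xbox 360 GPU (sans eDRAM daughter die), 90 nm

print(round(x360_density / xbox_density, 2))  # 2.72, i.e. "almost 3X"
```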

How much difference would there be between a chip having 160 modern shaders and 160 fixed function shaders?
What are these fixed function shaders, and what resources do they have access to? What architecture are the modern shaders using? What else is on the GPU? What is the system being used for?

You are seeking a straightforward answer that doesn't exist.
 

z0m3le

Banned
The Wii U and most modern GPUs still use fixed function hardware; the most obvious example is the tessellator. Expanding on this with other fixed function units could make sense, especially for things like lighting, which is where modern programmable shaders spend a lot of time. Something like a ray tracing unit would be interesting too, but I don't know enough about the technique to say if that is possible. Still, any modern GPU would require programmable shaders to do the wide range of techniques that are standard across the industry.

Also, about the 700+mm² die: I'm afraid I was working from a bad rumor/intel of Malta (AMD's 7990) being a single GPU. However Titan, while not being 700mm²+, is 550mm²+.
 

goomba

Banned
Sounds like a typical Nintendo Chip. Tons of custom metal, lots of WTF and in the end surprisingly good looking games.

Indeed, the GameCube was ridiculed as underpowered at first, as its specs listed "6-12 million polygons a second" whereas the PS2 and Xbox listed 5-10 times that amount.

The difference was that the GameCube numbers were from real world games (and were even exceeded), whereas the PS2/Xbox figures were theoretical and games never came close.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Though, if I'm not mistaken, wasn't the Wii's TEV stronger than the one in the Gamecube at a smaller size? [...]
It is stronger, but no pipelines were doubled - the fillrate increase was all from the clock bump (4 pipes @ 162 MHz = 648 MPix/s, 4 pipes @ 162 * 1.5 = 972 MPix/s). The TEV got some enhancements, though, but those affected its flexibility (i.e. an extra argument to the shading equation) and added new capabilities (e.g. depth textures, working aniso); no units were plainly doubled.
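The fillrate math in blu's post, spelled out (a sketch; the 162 MHz Flipper clock and the 1.5x Wii bump are as stated above):

```python
pipes = 4
flipper_clock = 162e6            # GameCube GPU clock
hollywood_clock = 162e6 * 1.5    # Wii: same pipes, 1.5x the clock

print(pipes * flipper_clock / 1e6)    # 648.0 MPix/s
print(pipes * hollywood_clock / 1e6)  # 972.0 MPix/s
```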
 

I didn't know that. Thanks for that info.
 

Hermii

Member
I'm probably Mr. Obvious, but I thought of something you guys probably immediately realized.

Latte = coffee thinned out with milk = low caffeine concentration.
Espresso = extra strong coffee.

In other words, the names of the chips indicate it's a GPU-centric console.
 

Well, the names aren't official. And it's actually the other way round: Espresso is the CPU.
 

krizzx

Junior Member
Now we have some new fodder for the lighting.

http://www.ign.com/videos/2013/05/03/shadow-of-the-eternals-teaser-trailer

That lava scene really had some good lighting. That definitely looks next gen to me, for those who keep trying to denounce the next gen capabilities of the GPU. The detail on the characters' clothes and faces also speaks for a much higher polygon throughput.

[six screenshots from the trailer]

Now to wait for people to come up with every angle they can to dismiss it.
 

krizzx

Junior Member
assuming this isn't just PC footage? :p

Generally, when a game is exclusive to one console and the PC, the console is the lead development platform. Cursed Mountain is a good example. There is also a lot of aliasing in the hair that likely wouldn't be there if this were a PC build.
 

jeffers

Member
Fair enough, I wasn't following the thread on this, but another consideration: if this was the demo Silicon Knights was siphoning money to make, it absolutely wouldn't be on the Wii U, time-wise? I dunno if this argument got disproved/approved or anything, though.
 

Xun

Member
Now we have some new fodder for the lighting. That lava scene really had some good lighting. [...]
If that's from the Wii U version of the game, it's impressive.

The lava looks great, and certainly gives off a next-gen vibe.
 
I'm skeptical, to say nothing of Wii U's capabilities. Does Dyack even have dev kits? A concept this early could very well be started on PC.
 

krizzx

Junior Member

I really couldn't say. Concrete details have just started to surface, but it has been announced that the game is Wii U and PC exclusive. It would be odd for them to announce such a thing without even having so much as a dev kit.

http://nintendoeverything.com/120807/first-shadows-of-the-eternals-details-confirmed-for-wii-u/
http://nintendoeverything.com/12083...ow-of-the-eternals-details-footage-next-week/

Those are all the details so far.
 

MDX

Member
Factor5
The TEV pipeline is completely under programmer control, so the more time you spend on writing elaborate shaders for it, the more effects you can achieve.


Question: If the TEV pipeline was under the complete control of the programmers, how can it then be called fixed function? It seems to be a hybrid of sorts.
 
Now we have some new fodder for the lighting. [...]

While I see nothing in these shots the Wii U couldn't handle, I wouldn't get my hopes up too high that this is Wii U footage.


This looks like a middle ground between HD twins and Next Gen. It actually looks very good for being developed on a small budget.

Basically where the Wii U is. But as I said above... wait and see.
 

Meelow

Banned
Now we have some new fodder for the lighting. [...]

Very awesome if that's Wii U footage; it makes me excited about what heavy budget games on the Wii U could look like.
 
Now we have some new fodder for the lighting. [...]
Everyone likes gifs!
[gif of the lava scene]
 

Popstar

Member
Question: If the TEV pipeline was under the complete control of the programmers, how can it then be called fixed function? It seems to be a hybrid of sorts.
Hardware like the TEV in the GameCube/Wii or the register combiners in the original Xbox/GeForce is perhaps best described as "configurable" rather than fixed-function.

Instead of having separate, absolutely fixed hardware for every texture env mode – replace, decal, modulate, blend, etc. – you figure out a "super-equation" that can emulate all the others with the proper inputs and build hardware for that instead.

So let's say you need two different texture env functions, add and multiply. Instead of having two pieces of hardware, one for add (A + B) and one for multiply (A * B), you make a single piece of hardware that does (A * B + C). Now if you want to add you set A to 1 and pass B and C the numbers you want to add. If you want to multiply you pass A and B the numbers to multiply and set C to 0.

Of course once you have hardware like this there's no point in limiting yourself to just add and multiply when the hardware is capable of more. So you expose this directly to the programmer and end up with stuff like the TEV and register combiners.

(This is a simplified example, of course; if you want to see all the different base texturing modes that started this consolidation, take a look at old OpenGL 1.1 documentation.)
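Popstar's (A * B + C) example is easy to demonstrate in code (a toy sketch of the configurable-combiner idea only; the real TEV stage also has clamping, scaling, and more inputs):

```python
def combiner(a, b, c):
    # One "super-equation" stage: a * b + c.
    return a * b + c

# Emulate ADD: set a = 1, so the stage computes b + c.
print(combiner(1.0, 0.25, 0.5))   # 0.75

# Emulate MULTIPLY: set c = 0, so the stage computes a * b.
print(combiner(0.25, 0.5, 0.0))   # 0.125
```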
 

gemoran4

Member
Actually looks fairly good. The lava shot was particularly impressive, and in general it looks really nice, especially since they probably haven't had a big budget (seeing as they don't have a publisher and are looking for $1.5 million in crowd-funding, which isn't a huge amount for console game development).

Though I do have to wonder if this is PC footage or actual Wii U footage.
 
Don't go nuts with this, but those particles vaguely remind me of the UE4 Elemental demo. Is this game running on the Unity 4 Pro engine?
Of course. I know the thread is for discussion; I just wanted to point that scene out. There's been no word on what engine it's using, though; we'll know more on Monday.
 

Oblivion

Fetishing muscular manly men in skintight hosery
Really, you guys think the lava looks impressive? It only shows for a second; I can't see what makes it noteworthy.
 