WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis


krizzx

Junior Member
Ah, I found an interesting photo. It's not a die shot, but I'm sure it can be of some use.

[Image: Radeon HD 5550 block diagram]
[Image: DirectX feature comparison chart]


Fab process: 40 nm
Core speed: 550 MHz
Processing power (single precision): 352 GigaFLOPS
Polygon throughput: 550M polygons/sec
Unified shaders: 320 (64×5)
Memory bandwidth: DDR3: 24.5 – 28.8 GB/s (isn't the Wii U's RAM clocked at exactly half that? That would make it 28.8 with both used simultaneously. Just a thought)
Texture mapping units: 16
Render output units: 8
Z/Stencil ROP units: 32
Color ROP units: 8
ATI Eyefinity multi-display technology - three independent display controllers
Maximum board power: 39 watts
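(As a quick sanity check, the 352 GigaFLOPS figure above is just the usual "stream processors × 2 ops × clock" rule of thumb for AMD's VLIW5 parts; a minimal sketch using only the numbers from the list:)

```python
# Rough check of the 352 GFLOPS figure above, using the usual
# "stream processors x 2 FLOPs (one multiply-add) x clock" rule of thumb
# for AMD's VLIW5 parts. Numbers are the HD 5550 specs listed above.
stream_processors = 320       # 64 x 5
ops_per_clock = 2             # one multiply-add counts as two FLOPs
core_clock_hz = 550e6         # 550 MHz

gflops = stream_processors * ops_per_clock * core_clock_hz / 1e9
print(f"{gflops:.0f} GFLOPS")   # -> 352 GFLOPS, matching the listed figure
```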

I also recall people suggesting it could be an e6760 base a while back. It's also 40 nm. What's the likelihood of that, I wonder?

Also, maybe that thing the Toki Tori devs found that saved them 100 MB of memory was HDR texture compression.

Can someone tell me what "Order-independent transparency" means for a game?
 

Chronos24

Member
That pic shows it as DX11 capable? I remember an interview some time back with a developer saying that the GPU was capable of DX11 "equivalent" features. So, with what we've seen here with the die photo, and the GPU estimated to be in the 5000/6000 series feature set, DX11 features are theoretically possible. That being said, we should eventually see some very technically and visually impressive games.
 

prag16

Banned
Ah, I found an interesting photo. It's not a die shot, but I'm sure it can be of some use.


I also recall people suggesting it could be an embedded base a while back. What's the likelihood of that, I wonder?

Nice find. Seems to match up relatively well...
 
Memory bandwidth: DDR3: 24.5 – 28.8 GB/s (isn't the Wii U's RAM clocked at exactly half that? That would make it 28.8 with both used simultaneously. Just a thought)
Actually, double the supposed bandwidth would amount to 25.6 GB/s, going by the memory chip specifications.

Anywho, I've thought about it (there are four chips, so they could have a dual-channel arrangement in there, doubling the 12.8 GB/s throughput), but I've strayed from suggesting it because bigger tech heads haven't, twice now (when it was first brought up and now with the chip die shots); I'm guessing probably for a reason.

I don't really imagine this thing being clocked at a perfect 800 MHz data clock, Nintendo being as anal as it is with clock balancing; no way the GPU is sitting at 550 MHz with 800 MHz DDR3. Having dual channel could allow them to go lower and still keep a higher bandwidth. The worst case scenario of it being clocked at 550 MHz would translate to 8.8 GB/s ×2, so 17.6 GB/s. In single channel, of course, 12.8 GB/s is already low as it is, so going lower doesn't really feel like an option.
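For reference, here's the bandwidth arithmetic behind those figures. It's just a sketch; the 64-bit combined bus width (four 16-bit chips) is an assumption inferred from the 12.8 GB/s figure being discussed, not a confirmed spec.

```python
# DDR3 peak bandwidth = data clock x 2 transfers per clock x bus width in bytes.
def ddr3_bandwidth_gbs(data_clock_mhz, bus_width_bits=64):
    transfers_per_sec = data_clock_mhz * 1e6 * 2   # DDR: data moves on both clock edges
    return transfers_per_sec * (bus_width_bits / 8) / 1e9

for clock in (800, 550):
    print(f"{clock} MHz data clock -> {ddr3_bandwidth_gbs(clock):.1f} GB/s")
# 800 MHz -> 12.8 GB/s (the commonly cited figure)
# 550 MHz -> 8.8 GB/s (the "worst case" above; x2 for the dual-channel idea gives 17.6)
```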
Also, maybe that thing the Toki Tori devs found that saved them 100 MB of memory was HDR texture compression.
I don't know about that.
Can someone tell me what "Order-independent transparency" means for a game?
It's something most games do not take advantage of.

Order-independent transparency is tied to deferred shading, and it's simply a DirectX 11 implementation to avoid the multi-pass rendering method (depth peeling) used to render transparencies; this spares a lot of memory and bandwidth otherwise wasted on multiple passes, buffer writes, and sorting and blending the resulting layers.

The advantage of deferred shading lies in the fact that representing multiple light sources in a game is supposed to be cheaper that way; on current gen it has serious bandwidth and framebuffer needs, so it's usually avoided and deferred lighting is used instead (you can read about that on the wiki link I provided above).

This is (very) handy of course, but developers have been working around its absence for years, so lacking it wouldn't be a dealbreaker.

Trine 2 reportedly uses it; if the Wii U supports these shortcuts, that would have helped them get quite an edge over current gen systems, I guess. That would be a fun thing to confirm with Frozenbyte/Trine 2's developers though, as that is specifically a DirectX 11 feature; R7xx didn't have it.
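To illustrate the question above about what order-independent transparency means in practice: with ordinary alpha blending the result depends on the order the transparent surfaces are drawn in, whereas an OIT scheme gives the same answer regardless of order. A toy sketch (the weighted average below is just one simple OIT-style approximation that ignores the background, not how any particular game does it):

```python
# Ordinary "over" blending: the result depends on draw order.
def blend_over(dst, src_color, src_alpha):
    return src_color * src_alpha + dst * (1 - src_alpha)

background = 0.0
red_glass  = (1.0, 0.5)   # (color, alpha) of one transparent surface
blue_glass = (0.2, 0.5)   # another transparent surface

a = blend_over(blend_over(background, *red_glass), *blue_glass)
b = blend_over(blend_over(background, *blue_glass), *red_glass)
print(a, b)   # 0.35 vs 0.55 -> order matters, so games normally have to sort transparencies

# A weighted-sum approximation: each fragment contributes independently,
# so any draw order gives the same result.
def weighted_oit(fragments):
    total_alpha = sum(alpha for _, alpha in fragments)
    return sum(c * a for c, a in fragments) / total_alpha if total_alpha else 0.0

print(weighted_oit([red_glass, blue_glass]), weighted_oit([blue_glass, red_glass]))  # identical
```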
 

deviljho

Member
Is Matt still around (the guy with the Darkwing Duck avatar)? Doesn't he have a devkit? Also, I haven't seen Fourth Storm around either. I know thraktor said he's been busy.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Order-independent transparency is tied to deferred shading, and it's simply a DirectX 11 implementation to avoid the multi-pass rendering method (depth peeling) used to render transparencies; this spares a lot of memory and bandwidth otherwise wasted on multiple passes, buffer writes, and sorting and blending the resulting layers.
Order-independent transparency shares some similarities with deferred shading (namely that it takes a resolve-style after-pass), but is in no way tied to the latter. What it is tied to is a GPU feature (a sampling op) which allows the shader to access the individual sub-samples in multi-sample textures. Said GPU op was first introduced in the R700 circa dx10.1, IIRC.
 
Order-independent transparency shares some similarities with deferred shading (namely that it takes a resolve-style after-pass), but is in no way tied to the latter. What it is tied to is a GPU feature (a sampling op) which allows the shader to access the individual sub-samples in multi-sample textures. Said GPU op was first introduced in the R700 circa dx10.1, IIRC.
I stand corrected.
 
Just a thought: when it comes to chip speed labels, haven't there been occasions in the past where some manufacturers simply sold the same chip labeled with different speeds just so they could charge different prices, and thus lower-clock-labelled chips worked fine at faster speeds? Could it be possible this has happened with the memory chips in the Wii U?
 
Just a thought: when it comes to chip speed labels, haven't there been occasions in the past where some manufacturers simply sold the same chip labeled with different speeds just so they could charge different prices, and thus lower-clock-labelled chips worked fine at faster speeds? Could it be possible this has happened with the memory chips in the Wii U?
Labeling memory chips down happens, but deliberately using labeled-down chips at higher clocks makes no sense from a corporate point of view.

Nintendo couldn't possibly blame providers for malfunctioning chips, basically, and said chips wouldn't be guaranteed to achieve said frequency. It's a no-no situation with not enough gain to outweigh the risks.

See, stuff that doesn't pass as 1066/1333 MHz RAM gets labeled down as well, provided it's usable, just like what happens with CPUs. If yields are good, a part not being able to go higher is a rare occurrence, but it's still not guaranteed to do so.

Plus, they're buying 1066 MHz chips too, and DDR3 technology/stock has no shortages that would make them go lower for availability reasons.
 
Labeling memory chips down happens, but deliberately using labeled-down chips at higher clocks makes no sense from a corporate point of view.

Nintendo couldn't possibly blame providers for malfunctioning chips, basically, and said chips wouldn't be guaranteed to achieve said frequency. It's a no-no situation with not enough gain to outweigh the risks.

See, stuff that doesn't pass as 1066/1333 MHz RAM gets labeled down as well, provided it's usable, just like what happens with CPUs. If yields are good, a part not being able to go higher is a rare occurrence, but it's still not guaranteed to do so.

Plus, they're buying 1066 MHz chips too, and DDR3 technology/stock has no shortages that would make them go lower for availability reasons.

Unless perhaps Nintendo has been supplied with lower-labelled chips on the cheap but been given guarantees they work just as well at faster speeds? Not saying it has happened, just a bit of far-fetched thinking on my part.
 
Unless perhaps Nintendo has been supplied with lower-labelled chips on the cheap but been given guarantees they work just as well at faster speeds? Not saying it has happened, just a bit of far-fetched thinking on my part.
That's the only way I can see it happening: Nintendo requests 1066 MHz chips and gets supplied these.

But like I said, it really makes little sense. I could see it if they were using some top-of-the-line part that gets few orders (and hence most production can run like that but gets labeled down because most buyers aren't asking for it; Nintendo asking for it could be surprising, so the supplier might as well re-test existing chips and provide them as-is). But the 1066 MHz process is very mature now and should represent most of the sales/production, so that holds little ground; to make things worse, that slower Hynix RAM part is not even low-power and has a higher-speed variant (ending in AFR instead of MFR).
 

krizzx

Junior Member
Hmmm, I wonder. If they distributed the OS amongst all four RAM chips and then allowed game data to be loaded to all four simultaneously, how would that enhance performance?

Though I suppose that is all off topic. I'm fairly certain that the HD 5550 is the basis for the Wii U's GPU. Nothing else makes sense. I wonder why so many people are against it, though?
 

Earendil

Member
Hmmm, I wonder. If they distributed the OS amongst all four RAM chips and then allowed game data to be loaded to all four simultaneously, how would that enhance performance?

Though I suppose that is all off topic. I'm fairly certain that the HD 5550 is the basis for the Wii U's GPU. Nothing else makes sense. I wonder why so many people are against it, though?

I'm fairly certain that would cause more problems than it would solve.
 
Just a thought: when it comes to chip speed labels, haven't there been occasions in the past where some manufacturers simply sold the same chip labeled with different speeds just so they could charge different prices, and thus lower-clock-labelled chips worked fine at faster speeds? Could it be possible this has happened with the memory chips in the Wii U?
Still on this subject, this happened:

iFixit said:
After some dainty desoldering, the module's EMI shield is removed to surprisingly reveal a slightly different Texas Instruments CC2560A (than advertised), that appears to not include BLE support.

Word on the street was that Pebble had BLE functionality just waiting to be activated with a firmware update, but we can't find evidence of the hardware to back up this hidden potential.
Source: http://www.ifixit.com/Teardown/Pebble+Teardown/13319/2

Pebble Engineers said:
The Bluetooth chips TI sent to Panasonic were labeled CC2560 but have been flashed with the firmware (and BT LE support) of a CC2564. That's why the module was labeled PAN1316.

Many chip vendors make silicon consistent between product lines but simply flash different firmware to enable features. Our chips were labeled CC2560 because TI asked us if we wouldn't mind using them with CC2564 firmware to speed up our order. Pebble most definitely has Bluetooth LE support, though it has not yet been enabled in our operating system.
Source: http://www.reddit.com/r/pebble/comments/1a7yu1/pebble_most_definitely_has_bluetooth_le_support/

Nothing that wasn't said before, but speeding up deliveries is the only scenario where the label not corresponding to the actual part could make sense.
 

Oblivion

Fetishing muscular manly men in skintight hosery
Is Latte confirmed to be capable of DX11 graphics? I've seen a lot of reports essentially confirming at least DX 10.1, but not DX11.
 

chaosblade

Unconfirmed Member
If what I've gathered from all the tech discussion is accurate, it seems like the GPU is loosely based on a DX10.1 part, but it should still support everything important in the DX11 feature set, though some things might be handled differently (like tessellation).
 

Schnozberry

Member
With the API differences and the customizations to the hardware, it's impossible to really get a feature to feature comparison with DirectX.
 

Oblivion

Fetishing muscular manly men in skintight hosery
[Image: DirectX feature comparison chart]


Okay, that's interesting. Shin'en said that the Wii U is capable of tessellation, which according to that chart DX10.1 isn't capable of.
 

chaosblade

Unconfirmed Member
DX feature sets aren't the end-all of what a GPU can do. GPUs can support features that aren't part of the DX feature set yet, and tessellation is an example of that. It's been in AMD GPUs for a while now (to varying degrees of usefulness), even going as far back as the 360's GPU.

Plus what Schnozberry said.
 
DX feature sets aren't the end-all of what a GPU can do. GPUs can support features that aren't part of the DX feature set yet, and tessellation is an example of that. It's been in AMD GPUs for a while now (to varying degrees of usefulness), even going as far back as the 360's GPU.

Plus what Schnozberry said.
Actually, it goes further back.

The water in Wind Waker uses a form of tessellation. As far as I remember, at least.
 

Schnozberry

Member
Actually, it goes further back.

The water in Wind Waker uses a form of tessellation. As far as I remember, at least.

It was a hardware function on ATI's R200, in the GameCube. It was called TruForm on PC. It was never accepted into DirectX or OpenGL as a feature. Some PC games used it, but it was generally something that was patched in after the fact.

Kind of primitive today, but it was god damn magic in 2001.
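For anyone curious what TruForm-style tessellation actually did: it subdivided triangles and bent the new vertices outward using the per-vertex normals, so flat, low-poly silhouettes looked rounder. Here's a toy version of that idea (a simplified, Phong-tessellation-style projection, not ATI's actual N-patch math):

```python
import numpy as np

def project_to_tangent_plane(point, vertex, normal):
    # Drop 'point' onto the plane through 'vertex' perpendicular to 'normal'.
    return point - np.dot(point - vertex, normal) * normal

def curved_midpoint(p0, n0, p1, n1):
    flat_mid = (p0 + p1) / 2
    # Average the projections onto both endpoints' tangent planes;
    # this bends the inserted vertex toward the smooth surface the normals imply.
    return (project_to_tangent_plane(flat_mid, p0, n0) +
            project_to_tangent_plane(flat_mid, p1, n1)) / 2

# Two vertices on a coarse "sphere" silhouette, with outward unit normals.
p0, n0 = np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
p1, n1 = np.array([0.0, 1.0, 0.0]), np.array([0.0, 1.0, 0.0])

print(curved_midpoint(p0, n0, p1, n1))   # ~[0.75, 0.75, 0]: bulges out past the flat midpoint [0.5, 0.5, 0]
```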
 

Nachtmaer

Member
Okay, that's interesting. Shin'en said that the Wii U is capable of tessellation, which according to that chart DX10.1 isn't capable of.

AMD had a tessellation unit in their GPUs long before it became a standard in DX11. Even the 360's GPU had a tessellation unit, but it never got used.
 
I believe it was off the shelf like that; the multiplier is locked, so it would mean messing with the FSB otherwise. Notice that kit has the same FSB number for both the 1 and 1.1 GHz configurations.

I don't know about that, but it certainly is.

It's a Nintendo trend: they prefer to go lower to keep the FSB/CPU clock differential/multiplier at bay and under 3x (this reduces bottlenecking), and they'll avoid running stuff too hot at all costs.

This architecture also kinda loses its power effectiveness if clocked too high, as illustrated by this PPC750CL table:

[Image: official IBM PPC750CL clock/power specification table]
Nintendo probably went for a balance of performance versus consumption for every part of the system. Also notice the Wii's Broadway 729 MHz rating stands in the middle of that official sheet.
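As a quick check on that clock-balancing claim, here are the CPU-to-GPU/bus clock ratios using commonly reported figures (the ~1.24 GHz Wii U CPU clock is the widely quoted hacker measurement, not something established in this thread):

```python
# CPU clock / GPU (bus) clock for Nintendo's recent systems, commonly reported values.
clocks_mhz = {
    "GameCube (Gekko / Flipper)": (486.0, 162.0),
    "Wii (Broadway / Hollywood)": (729.0, 243.0),
    "Wii U (Espresso / Latte)":   (1243.125, 550.0),
}
for system, (cpu, gpu) in clocks_mhz.items():
    print(f"{system}: {cpu / gpu:.2f}x")
# GameCube and Wii sit at exactly 3x; the Wii U comes out around 2.26x.
```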

Interesting.

They'll be the odd ones out again in terms of architecture; the other two are going x86. Would porting be an issue?
 
That's not quite as impressive if you see this:

http://i.imgur.com/vVobx.jpg

Wind Waker in general did some jaw dropping tech stuff:

http://www.neogaf.com/forum/showthread.php?t=488290

And now we have a "Gamecube Next" with Wii U

Can't wait for Nintendo to push Wii U.
That was most likely not tessellation; it could be something like continuous LOD.

It's also possible to do tessellation on the CPU; that's how TruForm did it, hence people later on just forking the source code and messing with it.
Oh, I see. I thought most devs never used it because it soaked up too much power. Then again, my point mostly was that the Wii U could have a tessellator without it necessarily being a DX11 GPU.
The Wii U has a tessellator.

That doesn't necessarily make it a DX11 part.
Interesting.

They'll be the odd ones out again in terms of architecture; the other two are going x86. Would porting be an issue?
Probably not as big of an issue as it is porting from current gen consoles to it.

Architecture doesn't matter as much as the fact that these CPUs are more similar to each other in terms of predictable performance. Most compiling software can handle that change just fine, virtually transparently. But of course, being the same architecture/part would hold benefits.
 

tipoo

Banned
Anywho, I've thought about it (there are four chips, so they could have a dual-channel arrangement in there, doubling the 12.8 GB/s throughput), but I've strayed from suggesting it because bigger tech heads haven't, twice now (when it was first brought up and now with the chip die shots); I'm guessing probably for a reason.


The 12.8 GB/s comes from the combined throughput of the RAM modules. Dual channel would not double that, as that total is already the number being discussed. Dual channel works differently on PCs: you have sticks with multiple banks of memory on them, each stick with its own I/O interface, and you gain bandwidth through each set of banks on each stick being addressed on a separate channel. With the Wii U you just have the four banks.

I hope that made sense; that's why dual channel isn't a consideration on the Wii U. The banks of memory can't magically go past the maximum theoretical throughput.

It's like looking at the total theoretical FLOPS of a CPU and then saying, "well, won't it be more because it's multicore?" No, since you're already looking at the total it can do in theory.


I don't really imagine this thing being clocked at a perfect 800 MHz data clock, Nintendo being as anal as it is with clock balancing;

I think the conjecture on that way back when it was being discussed was that Nintendo upped the GPU clock speed fairly last minute, and before that it would have been balanced with the RAM.
 

tipoo

Banned
[Image: DirectX feature comparison chart]


Okay, that's interesting. Shin'en said that the Wii U is capable of tessellation, which according to that chart DX10.1 isn't capable of.

Tessellation goes all the way back to Xenos seven years ago; just because DirectX didn't implement it until later doesn't mean the physical chips couldn't have it. I'm fairly sure the HD 4000 series had tessellation, even without proper DirectX support.
 
Tessellation goes all the way back to Xenos seven years ago; just because DirectX didn't implement it until later doesn't mean the physical chips couldn't have it. I'm fairly sure the HD 4000 series had tessellation, even without proper DirectX support.

Yes, the 4000 series definitely had tessellation.
 
That was most likely not tessellation; it could be something like continuous LOD.

It's also possible to do tessellation on the CPU; that's how TruForm did it, hence people later on just forking the source code and messing with it.

The Wii U has a tessellator.

That doesn't necessarily make it a DX11 part.

Probably not as big of an issue as it is porting from current gen consoles to it.

Architecture doesn't matter as much as the fact that these CPUs are more similar to each other in terms of predictable performance. Most compiling software can handle that change just fine, virtually transparently. But of course, being the same architecture/part would hold benefits.

Just a feeling they'll look for any excuse not to develop for them as much if they can't manage a sizable lead.
 

krizzx

Junior Member
That's not quite as impressive if you see this:

http://i.imgur.com/vVobx.jpg


Wind Waker in general did some jaw dropping tech stuff:

http://www.neogaf.com/forum/showthread.php?t=488290

And now we have a "Gamecube Next" with Wii U

Can't wait for Nintendo to push Wii U.

Ah, so it was true. The GC and Wii GPUs were capable of tessellation.

I heard that all ATI GPUs from the 2000 series onward had tessellation units, but I didn't know if that was true for the ones Nintendo used, because the GC's was done by ArtX and the Wii's was an enhanced version of the GC's.
 
Ah, so it was true. The GC and Wii GPUs were capable of tessellation.
Crash Team Racing on the PSone had a form of tessellation:

Naughty Dog said:
At the heart of our LOD system was our proprietary mesh tessellation/reduction scheme, which we originally developed for Crash Team Racing and radically enhanced for Jak & Daxter.
Source: http://www.gamasutra.com/view/feature/131394/postmortem_naughty_dogs_jak_and_.php?print=1

That doesn't mean the PSone (or PS2) supported tessellation; in fact, they most certainly didn't.


I'll also drop this here: the Radeon 8500 tessellation/TruForm methodology could run anywhere:

Orkin said:
To the folks above with NVidias, you'll note that the author specifically says "TRUFORM enabled" (i.e. this will be ATI only).
Even though it uses ATI's algorithm (based off an example program they put out), it's all done in software so it'll work on anything. Assuming you have a fast enough CPU to handle what was originally intended to be done on a GPU...
Source: http://www.emutalk.net/threads/2711...cement-TruForm?p=270091&viewfull=1#post270091

Everyone could use it, provided you forked and disassembled their implementation. And yes, there was a specific hardware TruForm implementation, but to my knowledge it was integrated straight into the T&L engine, not as a separate unit.

And GC architecture most certainly didn't share anything with it.
I heard that all ATI GPUs from the 2000 series onward had tessellation units, but I didn't know if that was true for the ones Nintendo used, because the GC's was done by ArtX and the Wii's was an enhanced version of the GC's. I wonder if tessellation was used in Rebel Strike when they achieved the record-holding 20 million polygons at 60 fps?
The Radeon 8500 was designed before ArtX was bought; former ArtX staff were later instrumental in the R300 project (Radeon 9500/9700), which dropped TruForm hardware support altogether.

As for anyone wanting to implement TruForm on the GC: they could, in software, but they most likely didn't. A tessellation implementation in Wind Waker shouldn't be taken for granted either, as previous discussions brought up that there are competing methodologies to achieve the same end result.
 

krizzx

Junior Member
Well, then lets talk tessellation.

This should be the "baseline" of what the Wii U GPU is capable of.
http://www.youtube.com/watch?v=EL72EmeZ1Vg

Of course, going by my estimates and developer comments, it should be able to pull off a bit more than that. It makes me question exactly what devs are doing with all of that GPU power. That is definitely beyond what last gen GPUs were capable of, but we are seeing no signs of it.

The closest thing we've seen to that is the X demo from Monolith soft.
http://www.youtube.com/watch?v=ue5DSuDbTgw


I'd say that Monolith's X is only the tip of the iceberg for the Wii U's graphics.

Since there are so few games out that were built from the ground up for the Wii U, I am making Nintendo Land (a game that clearly didn't even attempt to push the Wii U's graphics) the current benchmark for Wii U games. http://media.gamerevolution.com/images/games/wiiu/nintendo-land/nintendo-land_027.jpg
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
So I've been thinking about some things recently. I know there was talk of fixed function because of the unknown logic parts of the GPU. It seems to me (correct me if I'm wrong and if I'm just reaching on this one) that the lighting seems to be VERY good in all of the Wii U games. Playing launch games like ZombiU and recently NFSMWU, there are parts in those games where the lighting is AMAZING and really stands out from the rest of the game. Is it possible lighting could be a fixed function on this crazy customized GPU... is there any way we could tell?
I don't know why some people are so obsessed with fixed-function lighting. Better lighting can be achieved with higher-performance programmable shaders just as well.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
I'm not obsessed; it just seems that if there were fixed functions, lighting could be one of them. It just seems easy to achieve great lighting on the Wii U.
One of the main reasons for the adoption of programmable shaders in GPUs was to achieve great lighting. Better-performing shaders are a much more plausible explanation of any possible occurrences of said great lighting, than 'fixed function lighting', whatever one might think that to signify. Apropos, even 3DS' fixed-function pixel-shading pipeline relies on a fully-programmable vertex shader pipeline.
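For anyone wondering what "lighting in a programmable shader" boils down to: the pixel shader just runs a small program per pixel, something like the Blinn-Phong sketch below. This is written in Python purely for illustration; on real hardware it would be GLSL/HLSL (or the Wii U's own shader format), and the exact model a game uses will differ.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def blinn_phong(normal, to_light, to_eye, albedo, shininess=32.0):
    n, l, v = normalize(normal), normalize(to_light), normalize(to_eye)
    h = normalize(l + v)                          # half vector
    diffuse  = albedo * max(np.dot(n, l), 0.0)    # Lambertian term
    specular = max(np.dot(n, h), 0.0) ** shininess
    return diffuse + specular                     # per-pixel output colour

color = blinn_phong(normal=np.array([0.0, 0.0, 1.0]),
                    to_light=np.array([0.3, 0.3, 1.0]),
                    to_eye=np.array([0.0, 0.0, 1.0]),
                    albedo=np.array([0.8, 0.2, 0.2]))
print(color)
```

More or faster shader units simply mean more pixels get this kind of treatment per frame, or a more expensive version of it, which is the "better-performing shaders, better lighting" point above.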
 
A

A More Normal Bird

Unconfirmed Member
Is this using DirectX 11?

http://im39.gulfup.com/O6kGN.jpg

Because it's showing good detail.
No. That's a moderately detailed texture and a normal map. The Wii U does not use DirectX. The API is not necessarily an indicator of power or visual fidelity: Crysis 3 is DirectX 11-only on PC but runs on consoles with DirectX 9 feature sets, and even if the 360/PS3 had DirectX 11-era chips (with the same power they have currently), the game wouldn't look much different running on them. That said, the Wii U should probably be almost feature-compatible with DirectX 11.
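To expand on the normal-map point: the texture stores a direction per texel, and the shader swaps that in for the smooth surface normal before doing the lighting, so a flat surface picks up bumpy-looking shading without any extra geometry. A minimal sketch (tangent-space details skipped, numbers made up):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def lambert(normal, to_light):
    # Basic diffuse term the shader computes per pixel.
    return max(np.dot(normalize(normal), normalize(to_light)), 0.0)

to_light = np.array([0.5, 0.0, 1.0])
geometric_normal = np.array([0.0, 0.0, 1.0])    # the flat triangle's normal
mapped_normal    = np.array([0.4, 0.0, 0.9])    # direction decoded from a normal-map texel

print(lambert(geometric_normal, to_light))  # ~0.89: flat shading everywhere
print(lambert(mapped_normal, to_light))     # ~1.00: this texel "faces" the light more, so it reads as a bump
```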
 

krizzx

Junior Member
I don't know why some people are so obsessed with fixed-function lighting. Better lighting can be achieved with higher-performance programmable shaders just as well.

"Higher-performance" shaders? Not the last time I checked.. Fixed function are higher performance. That is the only reason to use them.

Using fixed function, you can do the same thing as a modern shader in a fraction of the processing time and at a fraction of the resource cost. That is how the Wii was able to pull off 360/PS3-level shading in some games like Mario Galaxy 2, and how Rebel Strike on the GC was able to pull off so many texture and lighting effects (normal mapping, dynamic shadows, high-level lighting, various particle effects, etc.) while still pushing 20 million polygons at 60 FPS, something no other console that gen came close to achieving in a real game.

Modern shaders are just easier to implement and cost less to use. They're also all that modern developers are trained on.

Is this using DirectX 11?

http://im39.gulfup.com/O6kGN.jpg

Because it's showing good detail.
That looks like either normal or parallax mapping, albeit really high quality. That could be done with DX9 feature sets. The shading quality, and effects like tessellation, would be the real indicators.

Of course, the Wii U doesn't use DirectX anything, as Microsoft owns it. It uses "equivalent" OpenGL graphics. That is why I say DirectX 11 "level" graphics. Watch the video I posted above for a look at the more advanced features the Wii U should be capable of.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
"Higher-performance" shaders? Not the last time I checked.. Fixed function are higher performance. That is the only reason to use them.
You entirely missed my point. I'm not saying shaders are higher performance than fixed-function. I'm saying that better lighting does not need to be implemented via fixed-function. Better lighting (compared to some hypothetical reference lighting) can be achieved via better-performing shaders (compared to the GPU that produced the reference lighting), and that would be a much more plausible explanation for observing better lighting nowadays. The second most-plausible explanation would be better lighting algorithms. Saying that 'GPU X produces better lighting than GPU Y, ergo X must be using fixed-function lighting' is an astounding leap of reason, which sits towards the very bottom of the list of plausible explanations for observing better lighting. I hope I'm clearer now.
 

krizzx

Junior Member
You entirely missed my point. I'm not saying shaders are higher performance than fixed-function. I'm saying that better lighting does not need to be implemented via fixed-function. Better lighting (compared to some hypothetical reference lighting) can be achieved via better-performing shaders (compared to the GPU that produced the reference lighting), and that would be a much more plausible explanation for observing better lighting nowadays. The second most-plausible explanation would be better lighting algorithms. Saying that 'GPU X produces better lighting than GPU Y, ergo X must be using fixed-function lighting' is an astounding leap of reason, which sits towards the very bottom of the list of plausible explanations for observing better lighting. I hope I'm clearer now.

You refer to them as "better-performing shaders". You could have fooled me. Also, they do not perform better once again. "Better lighting" can be done on "better fixed function" units. The limits of programmable shaders are not present in fixed function. You can define how you want the lighting to look and work with a much lower resource cost.

Of course, the reason for better lighting in Wii U games is obviously the result of a higher DirectX-equivalent feature set and an overall stronger GPU.

It will be interesting to see the tessellator at work, given what was demonstrated in the Froblins tech demo.
 
A

A More Normal Bird

Unconfirmed Member
You refer to them as "better-performing shaders". You could have fooled me. Also, they do not perform better once again. "Better lighting" can be done on "better fixed function" units. The limits of programmable shaders are not present in fixed function. You can define how you want the lighting to look and work with a much lower resource cost.

Of course, the reason for better lighting in Wii U games is obviously the result of higher direct X equivalent graphics and an overall stronger GPU.

It will be interesting to see the tesselator at work give what was demonstrated in the Froblins tech demo.

I'm pretty sure that's exactly what blu meant by better shaders: higher shader model, faster units, more of them, etc.
 
You refer to them as "better-performing shaders". You could have fooled me. Also, they do not perform better once again. "Better lighting" can be done on "better fixed function" units. The limits of programmable shaders are not present in fixed function. You can define how you want the lighting to look and work with a much lower resource cost.

Of course, the reason for better lighting in Wii U games is obviously the result of higher direct X equivalent graphics and an overall stronger GPU.

It will be interesting to see the tesselator at work give what was demonstrated in the Froblins tech demo.
...and they would be better performing than those found in PS360 without having to be fixed function... No?
 

HTupolev

Member
...and they would be better performing than those found in PS360 without having to be fixed function... No?
Actually, maybe. Not all shader processors are exactly the same, and neither are the ways in which arrays of shader processors are linked together. GPU design hasn't had the sorts of paradigm shifts that it did between previous generations, but architectures *have* changed.
 

krizzx

Junior Member
...and they would be better performing than those found in PS360 without having to be fixed function... No?

To a degree, yes. Though even if they were the exact same shaders, the larger RAM volume and higher processing speed would still allow for better shading. It would be nice if Nintendo kept the fixed-function units, though. They would be able to push the shading capability to levels that even the PS4 and Durango wouldn't be able to reach, as you would be able to define the effects yourself.
 
To a degree, yes. Though even if they were the exact same shaders, the larger RAM volume and higher processing speed would still allow for better shading. It would be nice if Nintendo kept the fixed-function units, though. They would be able to push the shading capability to levels that even the PS4 and Durango wouldn't be able to reach, as you would be able to define the effects yourself.

what the...

the delusions never really stop, do they.
 