
WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis

You should know that "GFLOP" performance across generations of GPU cores does not scale like that. That's how, at 176 GFLOPs, the Wii U GPU can outperform the X360...
Of course I know, do you need a disclaimer though?

I know that you know that I know that.


As for the Wii U being a 176 GFlop machine, good luck sticking to that as a fact. The truth is we simply don't know, which is why I treated the doubled GFlop performance number as a what-if scenario instead of saying that's the case. You should do the same instead of going around pissing on graves with self-proclaimed "facts"; learn to tip-toe, for god's sake.


And yes, GFlop efficiency varies with the architecture, but that's also because of the feature set, which isn't really being used to the Wii U's advantage in ports. If you're clashing heads doing something the brute-force way, it's going to cost roughly the same regardless of the "accelerations" and "shortcuts" you have in place. That's why shit like a resolution increase doesn't come magically no matter how much you've improved the GPU's "GFlop efficiency", yet it is effectively "GFlop dependent".

That's obvious to me.
 
Your math sucks balls.

PS3 is 180 GFlops
X360 is 240 GFlops

PS4 at 1.84 TFlops is 10 times a PS3, alright, but... the ratio doesn't scale the way you're implying for something that doubles PS3/X360 performance.


If Wii U is/was 360/480 GFlops (double the aforementioned platforms) that means

(2xPS3) 360 GFlops x5 = 1.8 TFlops

(2xX360) 480 GFlops x3.8 = 1.824 TFlops

So, the worst case scenario based on your citation is 5 times (a big difference from 8x), and the best case scenario is 3.8 times, if the Wii U performs roughly double the PS360.


But PS4 is not the lowest denominator of this gen, that'll be XBone (easily), so said difference for the lowest denominator could actually be

(2xPS3) 360 GFlops x3.4 = 1.224 TFlops

(2xX360) 480 GFlops x2.5 = 1.2 TFlops

Best case scenario 2.5 times, worst case scenario 3.4 times.
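Here's the same arithmetic as a quick back-of-the-envelope snippet, using only the GFLOP figures quoted above (this post's numbers, not confirmed specs) and keeping in mind that raw FLOPS ratios are not real-world performance:

```python
# Re-running the ratios above. All figures are the ones quoted in this post,
# not confirmed specs, and raw FLOPS ratios are not real-world performance.
ps3, x360 = 180, 240        # GFLOPS, as quoted above
ps4, xbone = 1840, 1200     # GFLOPS (PS4 1.84 TF, XBone ~1.2 TF as quoted)

for label, wiiu in [("2x PS3", 2 * ps3), ("2x X360", 2 * x360)]:
    print(f"Wii U as {label}: PS4 = {ps4 / wiiu:.1f}x, XBone = {xbone / wiiu:.1f}x")

# Wii U as 2x PS3:  PS4 = 5.1x, XBone = 3.3x
# Wii U as 2x X360: PS4 = 3.8x, XBone = 2.5x
```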

Thanks for the unnecessary insults...
 

Donnie

Member
Apophis2036

He's "wow"ing at your reasoning, more specifically the multiplication which didn't make any sense.

You just said you think PS4 is 10x PS3 and that WiiU is 2x PS3. That means in your own opinion PS4 is 5x WiiU. So you're disagreeing with yourself now.

Also, in what possible way could an 8-core 1.6GHz Jaguar CPU be claimed to be the equivalent of 24 1.2GHz Espresso cores (8x WiiU's CPU)? Seriously, think about it.
 
3 Core CPU @ 1.2GHz vs 8 Core CPU @ 1.6GHz.

2GB of DDR3 RAM @ 17GB/s + 32MB of eDRAM @ 70GB/s vs 8GB of GDDR5 RAM @ 176GB/s.

176 GFLOP DX10.1 GPU vs 1.8TFLOP DX11 GPU.

PS4 being around 8x WiiU in performance sounds about right to me.
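For what it's worth, this is how the per-component gaps look if you just divide the numbers in that list. None of these ratios can be averaged into a single "Nx more powerful" figure, and they ignore IPC, architecture and efficiency entirely:

```python
# Naive per-component ratios from the spec list above. They ignore IPC,
# architecture and efficiency, and cannot be collapsed into one multiplier.
ratios = {
    "CPU (cores x GHz)":  (8 * 1.6) / (3 * 1.2),  # ~3.6x
    "RAM amount (GB)":    8 / 2,                  # 4.0x
    "Main RAM BW (GB/s)": 176 / 17,               # ~10.4x (ignores the 32MB eDRAM)
    "GPU (GFLOPS)":       1840 / 176,             # ~10.5x
}
for component, ratio in ratios.items():
    print(f"{component}: {ratio:.1f}x")
```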

Why even bother trying to move the goalposts, when it's pretty clear what you meant.
 

USC-fan

Banned
As for the Wii U being a 176 GFlop machine, good luck sticking to that as a fact. The truth is we simply don't know, which is why I treated the doubled GFlop performance number as a what-if scenario instead of saying that's the case. You should do the same instead of going around pissing on graves with self-proclaimed "facts"; learn to tip-toe, for god's sake.
Wii U is a 176 GFLOP machine. That's not even a debate at this point. You may hope it's something else, but that goes against every fact we have.
 
3 Core CPU @ 1.2GHz vs 8 Core CPU @ 1.6GHz.

2GB of DDR3 RAM @ 17GB/s + 32MB of eDRAM @ 70GB/s vs 8GB of GDDR5 RAM @ 176GB/s.

176 GFLOP DX10.1 GPU vs 1.8TFLOP DX11 GPU.

PS4 being around 8x WiiU in performance sounds about right to me.

Clockrates, core counts, and FLOPS aren't the end-all of system performance. Just look at the AMD FX series vs the Intel Core series. As for FLOPS, Sony even claimed 1.8 TFLOPS for the RSX at one point.

Edit: before anyone gets any stupid ideas, no, I'm not saying the Wii U is on par with the PS4 and Xbox One
 
Apophis2036

He's "wow"ing at your reasoning, more specifically the multiplication which didn't make any sense.

You just said you think PS4 is 10x PS3 and that WiiU is 2x PS3. That means in your own opinion PS4 is 5x WiiU. So you're disagreeing with yourself now.

Also, in what possible way could an 8-core 1.6GHz Jaguar CPU be claimed to be the equivalent of 24 1.2GHz Espresso cores (8x WiiU's CPU)? Seriously, think about it.

So given the massive gap in CPU, GPU, and RAM amount, as well as RAM bandwidth, how many times more powerful would you say the PS4 is vs the WiiU? Give me an estimate for Wii U vs XBO as well while you're at it.
 

Donnie

Member
Wii U is a 176 GFLOP machine. That's not even a debate at this point. You may hope it's something else, but that goes against every fact we have.

Man, I'm sorry, but when someone tries to claim an opinion or theory as undebatable fact, it's that person who's hoping rather than facing reality. The only fact here is that there is no fact proving what you're claiming. It's a legitimate opinion, but an opinion nonetheless.

It's so you to relentlessly claim something unproven as fact, though :) Sometimes I question whether you're actually serious or if it's all just for fun.
 
While we are short of official confirmation, I firmly believe we should accept 176 GFLOPs unless a developer comes out and says otherwise. Upon an in depth analysis of all the blocks on the die, one realizes that it really can't be any other way. All the evidence is there.
 
So given the massive gap in CPU, GPU, and RAM amount, as well as RAM bandwidth, how many times more powerful would you say the PS4 is vs the WiiU? Give me an estimate for Wii U vs XBO as well while you're at it.

Gotta love bullshit performance multipliers. The fact is they use different architectures, so there is no across-the-board performance multiplier; the gap could be fairly close in some things and shockingly far in others. But unless someone here has the dev kits, it's all just speculation.
 
Man, I'm sorry, but when someone tries to claim an opinion or theory as undebatable fact, it's that person who's hoping rather than facing reality. The only fact here is that there is no fact proving what you're claiming. It's a legitimate opinion, but an opinion nonetheless.

It's so you to relentlessly claim something unproven as fact, though :) Sometimes I question whether you're actually serious or if it's all just for fun.

Physics are on his side, tbh, considering the WiiU is severely limited by its 33W PSU.

176GFLOPs is far more likely than 352. With the way modern GPUs scale, it's probably a tad more powerful than the 360's GPU at 176, with more modern effects (DX10.1 vs DX9 equivalent) as seen in ZombiU and Pikmin 3 (lighting, shadow, water, fire, explosions, depth of field effects). Nintendo's extremely talented animators along with their well-known art style will help shorten the gap as well, of course.

None of that excuses the pitiful hardware inside a $350 console released at the end of 2012 though.

I'm out of the thread until more 'facts' arrive.
 

Donnie

Member
While we are short of official confirmation, I firmly believe we should accept 176 GFLOPs unless a developer comes out and says otherwise. Upon an in depth analysis of all the blocks on the die, one realizes that it really can't be any other way. All the evidence is there.

Well, nothing was said that made me think any differently about the concerns I posted regarding the shader unit size. So while I appreciate your posts, to me it's a range of GFLOPS until confirmed one way or the other (realistically 176 or 281).

To be honest unless I've missed something I don't see how the theories you posted can be considered so solid as to claim it can't possibly be any other way. I mean I accept they're reasonable and very possible, even very probable if you want, but not definite.
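For reference, here's where the candidate figures being thrown around come from. They all assume the ~550MHz clock reported for Latte and 2 FLOPs (one multiply-add) per shader ALU per cycle, as on AMD's VLIW5 parts; only the shader count differs:

```python
# Where 176 / 281 / 352 GFLOPS come from, assuming the ~550 MHz clock reported
# for Latte and 2 FLOPs (one multiply-add) per shader ALU per cycle, as on
# AMD's VLIW5 Radeons. Only the shader count is actually in dispute.
CLOCK_GHZ = 0.550
FLOPS_PER_ALU_PER_CYCLE = 2

for shaders in (160, 256, 320):
    gflops = shaders * FLOPS_PER_ALU_PER_CYCLE * CLOCK_GHZ
    print(f"{shaders} SPs -> {gflops:.1f} GFLOPS")

# 160 SPs -> 176.0, 256 SPs -> 281.6, 320 SPs -> 352.0
```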
 
While we are short of official confirmation, I firmly believe we should accept 176 GFLOPs unless a developer comes out and says otherwise. Upon an in depth analysis of all the blocks on the die, one realizes that it really can't be any other way. All the evidence is there.

If that was true, why even entertain the idea that a dev can confirm otherwise?
 

Donnie

Member
Physics are on his side, tbh, considering the WiiU is severely limited by its 33W PSU.

176GFLOPs is far more likely than 352. With the way modern GPUs scale, it's probably a tad more powerful than the 360's GPU at 176, with more modern effects (DX10.1 vs DX9 equivalent) as seen in ZombiU and Pikmin 3 (lighting, shadow, water, fire, explosions, depth of field effects). Nintendo's extremely talented animators along with their well-known art style will help shorten the gap as well, of course.

None of that excuses the pitiful hardware inside a $350 console released at the end of 2012 though.

I'm out of the thread until more 'facts' arrive.

Power usage only rules out certain high GFLOP numbers; it's pointless to look at when it comes to the range we're talking about (for instance, nobody would argue that 281 GFLOPS is ruled out by that wattage).
 
Thanks for the unnecessary insults...
Hey, if my math sucked my balls I'd be happier now!

Wii U is a 176 GFLOP machine. That's not even a debate at this point. You may hope it's something else, but that goes against every fact we have.
I'm not saying it is or it isn't, I'm saying we don't know (me, you, everyone looking at GPU x-rays and not knowing what to make of it for sure, because it's so proprietary). Hence: not debating it, just not jumping on any bandwagons.
While we are short of official confirmation, I firmly believe we should accept 176 GFLOPs unless a developer comes out and says otherwise. Upon an in depth analysis of all the blocks on the die, one realizes that it really can't be any other way. All the evidence is there.
I'm staying in no man's land on that one. We've toggled so much between theories up until now that I don't think we're all that close to drawing conclusions like that; "everything changed" quite a few times already, so what makes this "it must be" conclusion different from the prior ones? We've been grasping at straws, and we're still grasping at straws.

Not trying to rain on your parade, mind you. I'm utterly bullcrap at looking at dies, and I think you're one of the few that can actually conjure in regards to what's going on in there; quite honestly I just have a bone to pick with USC-fan and the way he always runs with worst case scenarios.

And I still think he misses the point more often than not and seems to simply repeat what other people said (and he liked/fits well into the worst case scenario void).


Maybe it's the way it's flaunted, or maybe that's just prejudice from his contributions and attitude from way back when. But this thread is not about me, him, or people in general, so please carry on.
176GFLOPs is far more likely than 352.
I don't think 352 GFlops is all that likely either.
Well, nothing was said that made me think any differently about the concerns I posted regarding the shader unit size. So while I appreciate your posts, to me it's a range of GFLOPS until confirmed one way or the other (realistically 176 or 281).

To be honest unless I've missed something I don't see how the theories you posted can be considered so solid as to claim it can't possibly be any other way. I mean I accept they're reasonable and very possible, even very probable if you want, but not definite.
You get me, thanks.
 

prag16

Banned
Are you saying it's higher than that... or that it's too high?
Probably means too high, based on the die shot, the power envelope, and the games we've seen.

What's this 281 theory? Where'd that come from? I thought 256 was the other main possibility.
 
But doesn't the fact that it's custom change anything? Like, didn't the guy from Chipworks, who LOOKS AT GPUs FOR A LIVING, say he couldn't match it to any other AMD GPU (correct me if I'm wrong)? It's all speculation and opinion at this point. It comes down to the games and how they look. Like you said, unless a proven dev with credibility making games for it comes out and leaks information, I would take it as unknown; it could range from 176 on the low end of things up to 320 or more on the high side of things.

It doesn't really come down to the games, because I doubt anyone can tell how many shaders a system has just by looking at the games. Maybe we can make a broad statement about a system's power, but the specifics of how that breaks down into GFLOPs, RAM bandwidth, etc. are beyond the abilities of most mortals (maybe all).

Jim made that initial statement due to the lack of AMD branding on the die. What I don't think he realized is that the AMD branding is on the heat spreader.

Well, nothing was said that made me think any differently about the concerns I posted regarding the shader unit size. So while I appreciate your posts, to me it's a range of GFLOPS until confirmed one way or the other (realistically 176 or 281).

To be honest unless I've missed something I don't see how the theories you posted can be considered so solid as to claim it can't possibly be any other way. I mean I accept they're reasonable and very possible, even very probable if you want, but not definite.

The fact that different foundries can achieve vastly different densities is quite enough explanation for the shader unit size. Renesas are just not as specialized as TSMC in this area. The last couple pages of the Beyond3D thread are a pretty good summation, with a good example of how much density can vary even within one foundry when different layout techniques are employed.

If that was true, why even entertain the idea that a dev can confirm otherwise?

Just covering for the extremely remote possibility that Renesas threw everything we know about SIMD engine ratios and register allotments out the window. It's not gonna happen, though.

Hey, if my math sucked my balls I'd be happier now!

I'm not saying it is or it isn't, I'm saying we don't know (me, you, everyone looking at GPU x-rays and not knowing what to make of it for sure, because it's so proprietary). Hence: not debating it, just not jumping on any bandwagons.

I'm staying in no man's land on that one. We've toggled so much between theories up until now that I don't think we're all that close to drawing conclusions like that; "everything changed" quite a few times already, so what makes this "it must be" conclusion different from the prior ones? We've been grasping at straws, and we're still grasping at straws.

Not trying to rain on your parade, mind you. I'm utterly bullcrap at looking at dies, and I think you're one of the few that can actually conjure in regards to what's going on in there; quite honestly I just have a bone to pick with USC-fan and the way he always runs with worst case scenarios.

And I still think he misses the point more often than not and seems to simply repeat what other people said (and he liked/fits well into the worst case scenario void).


Maybe it's the way it's flaunted, or maybe that's just prejudice from his contributions and attitude from way back when. But this thread is not about me, him, or people in general, so please carry on.

I don't think 352 GFlops is all that likely either.

You get me, thanks.

I realize USC-Fan has rubbed many the wrong way in these threads. I know I'll get flak for saying this, but he never bothered me much, and I give him credit for his contributions to the TDP discussion (just as I do you for your framebuffer analysis post way back). USC was always that irritating voice reminding us of how much the TDP threw a monkey wrench into all our theories, but in the end, even his guess was too high!

I realize I do not have the credentials to convince everyone, but I have seen enough evidence to call this one. The registers in the SIMDs and the true location of the TMUs (T1 and T2) betray the die's secrets. Latte is not so proprietary that it completely reinvents the core elements of AMD's Radeon tech. Look, I didn't want it to be the worst case scenario either, but in this case, the worst case is what it is. I'm not so arrogant that I just want my own personal theory to be taken as fact (and it's not even mine. I've learned pretty much everything from the contributions of other, more knowledgeable posters...except the TMU/L1 part). Rather, I'm so sure of it that I just want people to know the truth and lay the topic to rest.
 

EDarkness

Member
I swear, I just want to go and find a top AMD executive, kidnap them, and demand that they release information on the Wii U GPU. I don't understand it from Nintendo's standpoint. People buy games based on how they look and play, not because of specs, so just put the Wii U specs out there. Yes, specs don't make a game, developers do, and some are better at getting more out of the "NUMBERS" than others... but it's so ridiculous. And Iwata has the nerve to come out and say the Wii U being underpowered is misunderstood... YOU GUYS are the reason it's misunderstood!

Honestly, what difference would it really make?
 

japtor

Member
If they released the specs and EXPLAINED them... then it would be on the devs and they could be held accountable. Because no matter how well balanced the Wii U's design is, when we get crappy ports no one holds the devs accountable; they just say the Wii U is underpowered or crappy tech, or on par with PS360.
How does that help when most of the audience is completely oblivious when it comes to technology at that level of detail? Even something basic like how the Wii U has a 3-core 1.4GHz PPC vs the 3-core 3.2GHz PPC in the 360. For most of the world that doesn't know or care about nerdy minutiae about CPUs, all the explanation in the world won't matter. It'll all just be numbers and handwaving to them; it's all meaningless unless they see some pretty games, because that's what everyone* understands.

*except for blind people.
 

Mr_B_Fett

Member
The fact that different foundries can achieve vastly different densities is quite enough explanation for the shader unit size. Renesas are just not as specialized as TSMC in this area. The last couple pages of the Beyond3D thread are a pretty good summation, with a good example of how much density can vary even within one foundry when different layout techniques are employed.

The link on Beyond3D is talking about optimising layout for increased density and lower power consumption at limited clock speeds. This is a practice that has gone on since the 90's and is exactly the scenario we have in the Wii U. Applying the much-loved (and grossly misunderstood) Occam's razor: given the characteristics of the system (low fixed clock, etc.) and the custom layout, increased density is the logical answer. It doesn't have to be the correct answer, but it is the logical one.

I realize I do not have the credentials to convince everyone, but I have seen enough evidence to call this one. The registers in the SIMDs and the true location of the TMUs (T1 and T2) betray the die's secrets. Latte is not so proprietary that it completely reinvents the core elements of AMD's Radeon tech. Look, I didn't want it to be the worse case scenario either, but in this case, the worst case is what it is. I'm not so arrogant that I just want my own personal theory to be taken as fact (and it's not even mine. I've learned pretty much everything from the contributions of other, more knowledgeable posters...except the TMU/L1 part). Rather, I'm so sure of it, that I just want people to know the truth and lay the topic to rest.

Now your discovery that there are only 8 TMUs is pretty convincing on the 160 front!
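For anyone wondering how 8 TMUs gets you to 160: if Latte's two SIMD blocks are full RV770-style engines (16 VLIW5 units, i.e. 80 SPs, paired with 4 texture units per SIMD), the TMU count pins down the shader count. A small sketch under that assumption:

```python
# Why 8 TMUs lines up with 160 SPs, assuming Latte uses full RV770-style SIMD
# engines (80 SPs and 4 texture units per SIMD; smaller R700 parts use 40-SP
# SIMDs instead, which is exactly why this is an assumption).
TMUS_PER_SIMD = 4
SPS_PER_SIMD = 80

tmus = 8
simds = tmus // TMUS_PER_SIMD      # -> 2 SIMD engines
sps = simds * SPS_PER_SIMD         # -> 160 stream processors
gflops = sps * 2 * 0.550           # 2 FLOPs/ALU/cycle at ~550 MHz -> 176
print(f"{simds} SIMDs, {sps} SPs, {gflops:.0f} GFLOPS")
```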
 
I swear, I just want to go and find a top AMD executive, kidnap them, and demand that they release information on the Wii U GPU. I don't understand it from Nintendo's standpoint. People buy games based on how they look and play, not because of specs, so just put the Wii U specs out there. Yes, specs don't make a game, developers do, and some are better at getting more out of the "NUMBERS" than others... but it's so ridiculous. And Iwata has the nerve to come out and say the Wii U being underpowered is misunderstood... YOU GUYS are the reason it's misunderstood!
I think, for now, we should go with the 160sp number. We may never officially know what the number is, and Nintendo may not even bother to state it in the documentation. What will be more interesting to understand is how the system is getting the results we are seeing. One reason I was skeptical about the 160sp number was that I expected lower performance than what the Wii U is actually doing.


While I have had disagreements with USC-fan in the past on some things, I do agree that getting better performance out of a lower sp count is the more interesting question.
 

Reallink

Member
So assuming the 176gflop figure is accurate, in practical unbiased terms, where does it really stack up against PS360? A wash, slight improvements, weaker?
 

Riki

Member
So assuming the 176gflop figure is accurate, in practical unbiased terms, where does it really stack up against PS360? A wash, slight improvements, weaker?

Atari Plug and Play

Different architectures means it's not so easy to say.
 
So assuming the 176gflop figure is accurate, in practical unbiased terms, where does it really stack up against PS360? A wash, slight improvements, weaker?
It's a more modern GPU than the PS360's, and it appears to have some significant modifications compared to its R700 base. For example, it seems to have some parts that look similar to the 2011 Brazos chipset from AMD's Fusion line. Raw power aside, it will likely be more efficient than the current-gen chips, so it can outperform them even if it's set at 176 GFLOPS.

In addition, its feature set is based on OpenGL 4.x (equivalent to DX10.1 with some DX11 features), so we should see some of the "next-gen features" that the Xbox1/PS4 can do, but to a more limited degree.

In summary:
Raw power: At least slightly beyond current-gen due to optimizations and efficiency.
Polygons: a little above current-gen due to a slightly higher clockspeed.
Shaders/special features: probably closer to the other next-gen consoles.
 

JordanN

Banned
Polygons: a little above current-gen due to a slightly higher clockspeed.
I don't think we've seen anything to indicate this.

Some games actually took out geometry (Darksiders/Tekken), others are on par or still below the best of PS3/360, and only one game promises to focus on geometry (Shinen wants to use tessellation in future games, but what it will look like has yet to be seen).

If anything, geometry seems to be the Wii U's weakest specialty, or not a specialty at all, compared to other consoles when they launched.
 
I don't think we've seen anything to indicate this.

Some games actually took out geometry (darksiders/tekken) others are on par or still below the best of PS3/360 and only one game promises to focus on geometry (Shinen wants to use tessellation in future games but the horsepower behind it is n/a).

If anything, geometry seems to be the Wii U's least specialty now compared to other consoles when they launched.
Due to the state of Wii U development kits before launch, I wouldn't put so much weight on using launch titles to judge that. In either case, the tri-setup for Wii U is likely 550m/sec (1 triangle per cycle like all pre-GCN AMD GPUs) vs 500m for the 360 and 250m for the PS3. The PS3 relied on Cell to help keep up with the 360 in polygons. Wii U should be able to render slightly more polygons per frame with the GPU, but we have to consider that development issues with the other parts of the system (like the CPU and RAM optimizations) could also affect the game's overall performance.
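To put those setup-rate figures in per-frame terms (peak ceilings only, using the numbers above; pre-GCN AMD GPUs set up at most one triangle per clock, so peak triangles/sec is simply the core clock):

```python
# Peak triangle setup per second, using the figures in the post above. For
# Latte and Xenos that's just the core clock (1 triangle per clock); the PS3
# number is the one quoted above, with Cell typically helping on geometry.
setup_per_sec = {
    "Wii U (Latte, ~550 MHz)":   550e6,
    "Xbox 360 (Xenos, 500 MHz)": 500e6,
    "PS3 (RSX, as quoted)":      250e6,
}
for name, rate in setup_per_sec.items():
    print(f"{name}: ~{rate / 1e6:.0f}M tris/s peak, "
          f"~{rate / 30 / 1e6:.1f}M per frame at 30fps")
# These are theoretical ceilings, not real-world throughput.
```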

As devs use more advanced shaders and/or use its GPGPU features, the polygon counts may dip lower.

You know, proper usage of tessellation is not simple. Even most PS4/Xbox1 games may not be using it, since you have to design the models with tessellation in mind for the best performance. In the case of Shinen, it was not a matter of horsepower, since they admitted to basically using one CPU core and unoptimized shaders.
 

JordanN

Banned
You know, proper usage of tessellation is not simple. Even most PS4/Xbox1 games may not be using it, since you have to design the models with tessellation in mind for the best performance. In the case of Shinen, it was not a matter of horsepower, since they admitted to basically using one CPU core and unoptimized shaders.
When I said Shin'en was using tessellation, I didn't mean to relate it to Nano Assault. I meant they want to use it later, but we don't know right now if it's going to look better than PS3/360 (because it's off in the future).
 
[image: diminishing returns of increasing polygon counts]


We've all seen this picture. Strict polygon count is getting somewhat meaningless. Shading, texture detail, lighting, and all that jazz are going to have much more of an impact than overall polygon count, which might be a reason why the Wii U isn't a polygon monster.
 

JordanN

Banned
We've all seen this picture. Strict polygon count is getting somewhat meaningless. Shading, texture detail, lighting, and all that jazz are going to have much more of an impact than overall polygon count, which might be a reason why the Wii U isn't a polygon monster.
PS4/XBO can do all that too and still have more polygons so I'm not buying that. It's more likely the console was never meant to be a significant leap over PS3/360.

Also, I don't think there was ever a time graphics were only about polygon count so harping on that is no different than harping on the PS2 to me.

Edit: That picture doesn't represent video games as a whole. Games can have several objects instead of one. No amount of lighting/texture can fix something that's too low poly.
 
[image: diminishing returns of increasing polygon counts]


We've all seen this picture. Strict polygon count is getting somewhat meaningless. Shading, texture detail, lighting, and all that jazz are going to have much more of an impact than overall polygon count, which might be a reason why the Wii U isn't a polygon monster.
In fairness, Jordan was only comparing the polygon counts of a port against the PS360 game. Also, while diminishing returns may have already hit certain things like close-up views of a single character model, you have to consider the polygon usage for the game's world and the number of characters on screen at once too.
 

strata8

Member
3 Core CPU @ 1.2GHz vs 8 Core CPU @ 1.6GHz.

2GB of DDR3 RAM @ 17GB/s + 32MB of eDRAM @ 70GB/s vs 8GB of GDDR5 RAM @ 176GB/s.

176 GFLOP DX10.1 GPU vs 1.8TFLOP DX11 GPU.

PS4 being around 8x WiiU in performance sounds about right to me.

FLOPS can't be reliably compared across different architectures. The GPUs in the PS4 and Xbox One are 3 generations ahead of Latte.

So given the massive gap in CPU, GPU, and RAM amount, as well as RAM bandwidth, how many times more powerful would you say the PS4 is vs the WiiU? Give me an estimate for Wii U vs XBO as well while you're at it.

I made this spec comparison a few days ago:
[image: spec comparison table]


The performance estimates aren't 100% accurate but they should be close enough.
 

wsippel

Banned
FLOPS can't be reliably compared across different architectures. The GPUs in the PS4 and Xbox One are 3 generations ahead of Latte.

I made this spec comparison a few days ago:
[image: spec comparison table]


The performance estimates aren't 100% accurate but they should be close enough.
Espresso is no PowerPC 1.10, it's a more modern superset. Also, the whole point of a benchmark, compared to theoretical FLOPS and MIPS figures, is that you can actually run it on a given platform to see real world results. You're completely missing the point by making them up.

And we don't know what Latte actually is, we only know what it's based on. From looking at the dies, we're also pretty sure that the original design has been modified quite a bit, so that "three generations behind" statement doesn't make much sense, either.
 

strata8

Member
Espresso is no PowerPC 1.10, it's a more modern superset. Also, the whole point of a benchmark, compared to theoretical FLOPS and MIPS figures, is that you can actually run it on a given platform to see real world results. You're completely missing the point by making them up.

And we don't know what Latte actually is, we only know what it's based on. From looking at the dies, we're also pretty sure that the original design has been modified quite a bit, so that "three generations behind" statement doesn't make much sense, either.

Yeah, the estimates were made under the assumption that it's similar in performance to a PowerPC 750 (only with 3 cores) and a Mobility Radeon 4650. I didn't know the modifications made such a large difference.

Where did you get the 4 ROPs from?

Mistake. I read that the TMUs were halved and accidentally halved the ROPs as well. It did seem a bit strange when I put it next to AMD's Zacate APU, which has 4 ROPs for only 80 shader cores.
 

wsippel

Banned
Yeah, the estimates were made under the assumption that it's similar in performance to a PowerPC 750 (only with 3 cores) and a Mobility Radeon 4650. I didn't know the modifications made such a large difference.
I believe all Cinebench results for PowerPC come from old Apple G3s, which never officially supported the extended feature and instruction set of the CXe/CL lines (as those additions and enhancements were designed for Nintendo). Espresso essentially combines the improvements of the CXe/CL, like paired singles, L1d DMA and write gathering, with the large L2 of the GX line, and might or might not introduce some new stuff on top of that.

And I have no idea if the GPU modifications make a huge difference, either. Maybe they make absolutely no difference whatsoever. But that's exactly the problem: We don't know.
 
And I have no idea if the GPU modifications make a huge difference, either. Maybe they make absolutely no difference whatsoever. But that's exactly the problem: We don't know.

I see this said a lot and while it may be true, it comes off like a cop-out to me. There's no secret hidden power in there, it's a terribly underpowered gpu compared to the other two consoles. The fact that we don't know every detail about it is irrelevant.
 

wsippel

Banned
I see this said a lot and while it may be true, it comes off like a cop-out to me. There's no secret hidden power in there, it's a terribly underpowered gpu compared to the other two consoles. The fact that we don't know every detail about it is irrelevant.
Define "terribly underpowered". See the problem? Without knowing the details, you can't even do that. So no, it's not irrelevant at all. You can't even properly troll the thing as it is.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Wii U is a 176 GFLOP machine. That's not even a debate at this point. You may hope it's something else, but that goes against every fact we have.
So we finally have all the facts required to compute the theoretical FLOPS rating?

Hi guys. I see this thread has 'moved on' quite a bit.
 

LordGouda

Member
I see this said a lot and while it may be true, it comes off like a cop-out to me. There's no secret hidden power in there, it's a terribly underpowered gpu compared to the other two consoles. The fact that we don't know every detail about it is irrelevant.

And how do you know? Have you studied it? The fact that we don't know is why people are actually trying to understand what makes it tick. It isn't irrelevant just because you think it's nothing special.
 
Define "terribly underpowered". See the problem? Without knowing the details, you can't even do that. So no, it's not irrelevant at all. You can't even properly troll the thing as it is.

Come on, your head is in the sand. The box has 1GB of usable RAM and a terrible CPU compared to the X1 and PS4. The GPU is suddenly going to buck this trend? It's horribly underpowered (again, compared to PS4/X1; I'm not talking about PC here). It doesn't take 7000 posts to figure that out.
 
A bit more from the Project Cars WiiU changelog. (source - you have to be registered)

Thanks for posting that. I decided to trim the log to the more interesting entries, since I feel a bit uncomfortable copying and pasting everything from something you're supposed to register for.

Build 531 (2/8/13)
WiiU:
* WiiU CommandBuffer(displaylist) support implemented (required for multi-threaded rendering)
* Add GX2Invalidate queueing for command-buffers

Build 527 (29/7/13)
WiiU:
* WIP WiiU Multithreaded shadow rendering. DX11 support for mult-threaded shadow rendering (via -DX11MT). Fix for build bot errors

Build 525 (25/7/13)
WiiU:
* WiiU SRGB handling fixed up. GUI and Renderer Colour levels now match PC
* WiiU F1 Debug Menu Rendering - direct to texture colour value fixes
* WiiU - Int shader parameter support. (fixes problem with glass shader/wipers)

Build 524 (24/7/13)
WiiU:
* WiiU: First pass of keyboard support
* WiiU deferred helper - use 11.11.10 HDR formats for phase3 targets and also place in MEM1
* WiiU CPU profiler support + moved injection to be in the pre post link step on the elf (not the rpx, as was previously the case)
* WiiU - implement shared RenderTarget/Texture support. Also fixed up support for Volume textures

Build 523 (23/7/13)

* Optimisation on the rendertask processing. (big help on WIIU)

Build 522 (23/7/13)

* WiiU various thread assignment fixes (moving to Core 0 or 2 to offload work from the main core). Physics thread/controller - add missing setup and set various PC define paths

Build 520 (19/7/13)
* WiiU Disable specular irradiance on WiiU.
* Misc other missing platform defs for WiiU
* WiiU fixup 11.11.10 HDR format support
* WiiU don't use linear textures for diffuse irradiance map
* WiiU Envmap uses 32bit HDR. Small deferred rendertargets are now in EDRAM
* WiiU FXCompilerWiiu pass '-m' to SLConvertor for Column major support.
* WiiU support for double speed Z only rendering
* WiiU GUI Text RenderTarget now uses EDRAM
* WiiU various minor renderer fixes (spot shadow size, release texture compile fix, rendertarget coherency)
* WiiU - Fix Primitive Culling (wasn't incorpoating translation)


I would ask someone like blu or wsippel to check out some of these and give their analysis. Some things I noticed:

- GX2 is mentioned, which further supports the leaked spec document for the Wii U from some time ago.
- CPU core 1 is definitely defined as the main core. It looks like they are still working on offloading tasks from it.
- The devs are using EDRAM for render targets (see the size sketch below).
- It seems as if the Wii U does support some DX11 features, though maybe not all of them.
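On the "11.11.10 HDR" entries: that's the packed 32-bit-per-pixel float format (11-bit red and green, 10-bit blue, no alpha), and it pairs naturally with the EDRAM entries in the same log, since it halves HDR render target size versus RGBA16F. A rough size sketch (just my arithmetic, not anything stated in the log):

```python
# Rough render target sizes, to show why the "11.11.10 HDR" entries in the
# changelog go hand in hand with "rendertargets now in EDRAM": the packed
# R11G11B10 float format is 4 bytes/pixel vs 8 bytes/pixel for RGBA16F, and
# Latte's eDRAM pool is 32 MB. This is just arithmetic, not from the log.
EDRAM_MB = 32

def target_mb(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / (1024 * 1024)

for w, h in [(1280, 720), (1920, 1080)]:
    packed = target_mb(w, h, 4)   # R11G11B10 float
    fp16 = target_mb(w, h, 8)     # RGBA16F
    print(f"{w}x{h}: 11.11.10 = {packed:.1f} MB, RGBA16F = {fp16:.1f} MB "
          f"(eDRAM budget: {EDRAM_MB} MB)")
# 1280x720: 3.5 MB vs 7.0 MB;  1920x1080: 7.9 MB vs 15.8 MB
```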
I've talked with some people I trust as tech experts. Even though no one can say for sure... those numbers don't add up. We have a long generation ahead of us and more info will trickle down. For me, right now it's an unknown until some concrete info comes out. I'm just going to enjoy the onslaught of games coming my way as a Wii U-only gamer this gen.
Yes, I do believe there is more to this GPU than what we've identified. However, I'm thinking that we should temporarily drop that specific topic about the sp count until we gather more info or leaks.
 

chaosblade

Unconfirmed Member
Come on, your head is in the sand. The box has 1GB of usable RAM and a terrible CPU compared to the X1 and PS4. The GPU is suddenly going to buck this trend? It's horribly underpowered (again, compared to PS4/X1; I'm not talking about PC here). It doesn't take 7000 posts to figure that out.

What do other consoles have to do with this topic anyway? That is what is completely irrelevant. The purpose of this topic isn't and has never been about comparing it to those consoles (as much as some people want it to be), it's to try to see how it works and what makes it tick.
 

guek

Banned
If the 176 GFLOP number is accurate (not saying it is or isn't), it just further reaffirms that GFLOPS are a completely useless metric for comparing GPUs from different series.
 