
WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis


joesiv

Member
Ok, to put things in perspective... if the chip is that expensive, why did Nintendo go that route? Wouldn't it have been cheaper to take a stock 4770 (RV740?) and pair it with a Wii GPU?
It might be in the short term, but they'd still need all the other functionality that makes it a Wii U, and where would you put that? The easiest answer would be separate chips, which is why many consoles of yore had so many chips on them:

PS2:
[Image: PS2 Slim motherboard]


versus the Gamecube:
[Image: GameCube motherboard]


Having more chips might be cheaper in the short term, but it costs more in the long term, since it's cheaper to shrink a single chip than to deal with multiple chips. A single chip is also more efficient at data communication (latency), and makes higher bandwidth possible, because wide memory buses are much easier to implement on silicon than on a mainboard (and the silicon gets cheaper as you shrink the die, while the mainboard wouldn't).
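
Just to put rough numbers on the "wider bus on silicon" point, here's a quick back-of-the-envelope sketch. The 64-bit DDR3-1600 bus and the 1024-bit/550 MHz on-die bus below are purely illustrative assumptions, not confirmed Wii U figures:

Code:
# Rough peak-bandwidth math for a simple DRAM-style bus (illustrative only).
# peak bandwidth (GB/s) = bus width (bits) / 8 * effective data rate (GT/s)

def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gt_s: float) -> float:
    """Peak theoretical bandwidth in GB/s."""
    return bus_width_bits / 8 * data_rate_gt_s

# Narrow off-chip bus: 64-bit DDR3-1600 (1.6 GT/s effective)
print(peak_bandwidth_gb_s(64, 1.6))     # 12.8 GB/s

# Wide on-die bus: a hypothetical 1024-bit eDRAM interface at 550 MHz
print(peak_bandwidth_gb_s(1024, 0.55))  # 70.4 GB/s

Routing a 1024-bit bus across a mainboard would be impractical, which is the point about silicon making wide interfaces (and therefore bandwidth) cheap.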

Nintendo loves to painstakingly create efficient consoles, not because they're the most powerful, but because they fit Nintendo's design philosophy and turn a profit sooner than consoles that are large and inefficient.

When you are developing a product that will be sold in mass quantities over numerous years, you look at the overall cost/profit of producing it, not just on the day of release.


Also, the guy from Chipworks says it's an impressive piece of kit... in what sense?
Probably in terms of technology and customization involved.
 

Oblivion

Fetishing muscular manly men in skintight hosery
Sorry I haven't read through the whole thread since it's like 20+ pages, but going by the first page, Wii-U's GPU is weaker than we originally thought?
 

NotLiquid

Member
Sorry I haven't read through the whole thread since it's like 20+ pages, but going by the first page, Wii-U's GPU is weaker than we originally thought?

Depends on how reasonable your expectations were, but it's certainly more uniquely crafted than expected and seems to allow for some slightly-above-current-gen results that aren't very comparable to other GPUs on paper.
 

artist

Banned
Sorry I haven't read through the whole thread since it's like 20+ pages, but going by the first page, Wii-U's GPU is weaker than we originally thought?
Highly custom design, power wise still closer to this gen than next.

It's weaker if you believed the 1TF or 600GFlops figure.
 

ozfunghi

Member
Can someone clear this up. The 4MB of additional eDRAM... is it likely to have higher bandwidth, or faster response time (latency?), or both or neither?

And the SRAM, what can it be used for?
 
We all know the Nintendo titles will be great. IMO the only thing that's terribly relevant is whether it's going to be hard to port nextgen titles to. If Treyarch/Bioware/Bungie etc. have to make completely different engines, assets and levels to do ports, third-party support is going to suck. Was hoping to not have a repeat of this gen.
 
Being limited by latency doesn't have anything to do with power but with architecture. Back when GPUs were a bunch of fixed functions, latency was not a factor, but now it surely is.

My point is DDR3 has relatively low latency. I don't see RAM being the limiting factor here on a 352 GFLOP GPU.
 

tipoo

Banned
Can someone clear this up. The 4MB of additional eDRAM... is it likely to have higher bandwidth, or faster response time (latency?), or both or neither?

And the SRAM, what can it be used for?

More tightly packed cells = lower latency, from what I gather. I don't think bandwidth is similarly affected; that has to do with the I/O. At the least it would have lower latency; nothing solid on bandwidth yet.

The SRAM is interesting in that it's not towards the DDR3 interface, which makes it a very odd place for another cache, so it's possible it's used for something else, like a scratchpad between the CPU and GPU, since it's physically close to the CPU end of things, but I'm just guessing there. Possibly it's there mostly for Wii emulation, but if devs can access it, it could be used for something else. I think SRAM is usually even lower latency than eDRAM.
 

StevieP

Banned
We all know the Nintendo titles will be great. IMO the only thing that's terribly relevant is whether it's going to be hard to port nextgen titles to. If Treyarch/Bioware/Bungie etc. have to make completely different engines, assets and levels to do ports, third-party support is going to suck. Was hoping to not have a repeat of this gen.

The engines themselves should be able to run. It's the rest of the game that will require some work in order to be downported.
 
With 41.25% of the 360 Slim's power draw, the Wii U performs 46.67% faster (in GFLOPS only).

That IS impressive. Just NOT the kind of impressive GAF likes.

And that's not taking the more advanced architecture into account. That should elevate it even further.
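
For anyone wondering where those percentages come from, here's the arithmetic as a minimal sketch; the ~33 W vs ~80 W load figures and the 352 vs 240 GFLOPS numbers are the rough estimates floating around this thread, not measured or official values:

Code:
# Assumed figures (rough thread estimates, not official):
wiiu_watts, x360s_watts = 33.0, 80.0      # approximate load power draw
wiiu_gflops, x360_gflops = 352.0, 240.0   # approximate shader throughput

print(wiiu_watts / x360s_watts * 100)         # 41.25 -> % of the 360 Slim's power draw
print((wiiu_gflops / x360_gflops - 1) * 100)  # 46.67 -> % more GFLOPS
print((wiiu_gflops / wiiu_watts) / (x360_gflops / x360s_watts))  # ~3.6x GFLOPS per watt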
 

Oblivion

Fetishing muscular manly men in skintight hosery
Depends on how reasonable your expectations were, but it's certainly more uniquely crafted than expected and seems to allow for some slightly-above-current-gen results that aren't very comparable to other GPUs on paper.

Being limited by latency doesn't have anything to do with power but with architecture. Back when GPUs were a bunch of fixed functions, latency was not a factor, but now it surely is.

Highly custom design, power wise still closer to this gen than next.

It's weaker if you believed the 1TF or 600GFlops figure.

Okay, at the very least, it IS better than Xenos like nearly all the prior reports said, right?
 

tipoo

Banned
With 41.25% of the 360 Slim's power draw, the Wii U performs 46.67% faster (in GFLOPS only).

That IS impressive. Just NOT the kind of impressive GAF likes.

And that's not taking the more advanced architecture into account. That should elevate it even further.

"Impressive" is subjective, after 7 years I would just say that's expected. The PS360 had some very big inefficiencies, and their architectures have not been changed, just shrank down.

I mean, an ultrabook of today without the monitor would draw much less power than the PS360 in any incarnation too, while being more powerful (if it had HD4000 integrated graphics, rather than the cut down 2500 at least [I say that because Anand from Anandtech said the 2500 was half as strong as the 360 gpu])
 

Thraktor

Member
My point is DDR3 has relatively low latency. I don't see RAM being the limiting factor here on a 352 GFLOP GPU.

Not really. DDR3 latency is in the region of 50ns (plus whatever's added by the controller and on-die routing). That's over 27 cycles for a 550MHz GPU. That means that if you depend on one main memory call every ~30 cycles you're wasting half of the chip's performance waiting for data to arrive.

(That 27 is a very optimistic number, by the way. On Nvidia's CUDA GPUs, the latency to main [GDDR5] memory from the CUDA cores is 200-400 cycles.)
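
A quick sketch of that calculation, using the same assumed numbers from above (50 ns DDR3 latency, 550 MHz GPU clock, one main-memory fetch per ~30 cycles of work):

Code:
# Assumed figures from the post above, not measurements.
gpu_clock_hz = 550e6       # 550 MHz GPU clock
dram_latency_s = 50e-9     # ~50 ns DDR3 access latency

stall_cycles = dram_latency_s * gpu_clock_hz
print(stall_cycles)        # 27.5 cycles stalled per main-memory access

work_cycles = 30           # one fetch per ~30 cycles of useful work
print(stall_cycles / (stall_cycles + work_cycles))  # ~0.48 -> roughly half the time spent waiting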
 

Madn

Member
I know I'm late, but I still wanted to say that what you guys are doing is amazing.
And obviously, great thanks to Chipworks for their generosity and help!
I would love to help, but I know next to nothing about tech, so I guess I'll just go back to lurking.
 
So basically, we still don't know anything about how powerful this thing is, and apparently, considering that it's mostly a custom design, it's not going to be easy to find out either?

---
and man, for $2,500, Chipworks sure as hell got a good reputation boost and advertisement; doing nice things like they did can easily pay off. Kudos to them.
 

Reallink

Member
It just sounds strange that Nintendo invested a lot of money in customizations in order to get a chip with questionable performance benefits compared to some "stock" chips that might actually cost less money and lead to a better performing and cheaper to produce system.

This is what baffles me personally, but I guess many questions will be answered once games built from the ground up to take advantage of the system are revealed. For now, we can only (mis)judge based on the game footage we have seen.

Well another wrench is Iwata's comments about having to outsource and collaborate with other developers because they don't understand HD development. What are the odds they actually had some brilliant master plan of super efficient secret sauce when they're basically on record stating they don't know what the fuck they're doing?
 

mrklaw

MrArseFace
GPUs are also sensitive to latency now that they've become programmable, not to mention if we enter the realm of GPGPU. If you have to run multiple shaders, then the more the GPU waits for data, the less effective it is.

So they want to get the most use possible out of a weak GPU. I kind of understand that, but part of me thinks they could have had significantly more power without the eDRAM, accepted slightly less efficiency, and still had much better performance for the same price.

Nintendo do like their eDRAM, though. Maybe it's just a philosophy thing.
 

StevieP

Banned
Well another wrench is Iwata's comments about having to outsource and collaborate with other developers because they don't understand HD development. What are the odds they actually had some brilliant master plan of super efficient secret sauce when they're basically on record stating they don't know what the fuck they're doing?

Contrary to popular forum belief, Nintendo has some excellent engineers. They often butt heads with bean counters and the power consumption mafia, but they're still there.

Hell, MS and Sony have excellent engineers as well, and they bump heads with the very same departments to different levels in different generations.
 
It's pointlessly impressive. I'm selfish, and so are a lot of gamers. We want power at a great price, not power efficiency at a high price.
 

Thraktor

Member
More tightly packed cells = lower latency, from what I gather. I don't think bandwidth is similarly affected; that has to do with the I/O. At the least it would have lower latency; nothing solid on bandwidth yet.

The SRAM is interesting in that it's not towards the DDR3 interface, which makes it a very odd place for another cache, so it's possible it's used for something else, like a scratchpad between the CPU and GPU, since it's physically close to the CPU end of things, but I'm just guessing there. Possibly it's there mostly for Wii emulation, but if devs can access it, it could be used for something else. I think SRAM is usually even lower latency than eDRAM.

SRAM certainly has lower latency than eDRAM (SRAM latency is generally a cycle or two, I think).
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
With 41.25% of the 360 Slim's power draw, the Wii U performs 46.67% faster (in GFLOPS only).

That IS impressive. Just NOT the kind of impressive GAF likes.

And that's not taking the more advanced architecture into account. That should elevate it even further.

I wouldn't characterise the Wii U's performance as in any way impressive. In fact, for a new console that launched at the tail end of 2012, it's really unimpressive. They must have worked really hard to produce a console that was that gimped.
 

Hoo-doo

Banned
The low power draw is technically impressive, but to act like gamers should be patting Nintendo on the back for that achievement is just silly. Who's realistically going to care?
 

StevieP

Banned
There are many on this forum who continue to project our (and by that I mean mine too) definitions of what constitutes engineering priorities onto everyone else, despite being constantly reminded that what we consider good as "graphics whores" may not be good by the rest of the world's definitions. It's fine to present an opinion, but to constantly parrot the same opinion over and over is unnecessary. We get it. You want raw power. Get something else. The way this console is engineered is entirely unimpressive from that perspective, but more than impressive from other perspectives.
 
Well another wrench is Iwata's comments about having to outsource and collaborate with other developers because they don't understand HD development. What are the odds they actually had some brilliant master plan of super efficient secret sauce when they're basically on record stating they don't know what the fuck they're doing?
Uh, you can't really use Iwata's comments about SW development to talk about the thought that went into designing the Wii U, as Nintendo has employees who focus only on HW design and engineering.
 

Schnozberry

Member
I wouldn't characterise the Wii U's performance as in any way impressive. In fact, for a new console that launched at the tail end of 2012, it's really unimpressive. They must have worked really hard to produce a console that was that gimped.

Yes, I'm sure Nintendo's engineers were kept awake at night with thoughts of how they could further cripple the hardware.
 

mantidor

Member
"Impressive" is subjective, after 7 years I would just say that's expected. The PS360 had some very big inefficiencies, and their architectures have not been changed, just shrank down.

I mean, an ultrabook of today without the monitor would draw much less power than the PS360 in any incarnation too, while being more powerful (if it had HD4000 integrated graphics, rather than the cut down 2500 at least [I say that because Anand from Anandtech said the 2500 was half as strong as the 360 gpu])

An ultrabook also costs $700 or more (usually much more).
 

wsippel

Banned
So they want to get the most use possible out of a weak GPU. I kind of understand that, but part of me thinks they could have had significantly more power without the eDRAM, accepted slightly less efficiency, and still had much better performance for the same price.

Nintendo do like their eDRAM, though. Maybe it's just a philosophy thing.
I think it makes perfect sense. Why waste money and power on something you can't even fully utilize?
 
I wouldn't characterise the Wii U's performance as in any way impressive. In fact, for a new console that launched at the tail end of 2012, it's really unimpressive. They must have worked really hard to produce a console that was that gimped.

It is in relation to the power draw. Whether you agree with me or not... I really couldn't care less, sorry.
 

deviljho

Member
Well another wrench is Iwata's comments about having to outsource and collaborate with other developers because they don't understand HD development.

No. The issue is that they have many, many developers, not all of whom are on the same footing. The comment about lacking development experience relates to the scale of their development house.
 

Hoo-doo

Banned
Probably about as many as those who fret endlessly about GFLOPS.

FLOPS have a direct impact on the visual spectacle a console will be able to display. A lot of people care.

A low power draw, on the other hand... well, it might save you 70 cents a month! Score!
 

tipoo

Banned
An ultrabook also costs $700 or more (usually much more).

Sure. I'm just saying chip architecture performance per watt has risen a lot since 2005.

Besides, that's with an expensive SSD, battery, monitor, keyboard & trackpad, etc. The SSDs in most ultrabooks alone probably cost over $100.
 

Darryl

Banned
Well another wrench is Iwata's comments about having to outsource and collaborate with other developers because they don't understand HD development. What are the odds they actually had some brilliant master plan of super efficient secret sauce when they're basically on record stating they don't know what the fuck they're doing?

nintendo is a huge fuckin company. that's a lot of people to retrain who have been working on essentially the same hardware for almost an entire decade now. this doesn't mean they can't make great hardware or that they don't have a lot of employees really at the cutting edge of development.
 

Reallink

Member
Uh, you can't really use Iwata's comments about SW development to talk about the thought that went into designing the Wii U, as Nintendo has employees who focus only on HW design and engineering.

My assumption was that the two arms (HW and SW engineers) would be collaborating throughout the whole design process (i.e. "we need this to do this, can you make it happen?"), but perhaps the SW side is so far behind the curve that they weren't able to make any sensible suggestions.
 

CTLance

Member
Wow, one night's sleep really does make a difference. Loads of new info to process, and Chipworks going that extra mile yet again. Awesome, fellas.

Just to be sure, are we still in the dark about the ARM core?
Cortex cores eat up a whole lot of register banks and such, judging by the Tegra 2 (40nm?) and A5X (45nm?) die shots. We are still talking dual-core Cortex-family chips, right?

So... D? G? E? F? Which areas look like they could house a full-blown ARM core?

Also: If I may be so bold, adding a small scale to the core map picture may help a bit with identifying candidates of a known/guessed size.
 