
Rumor: Wii U final specs

Whatever customizations have been made, we're still seeing it described as "DX10.1" level.

I fully grant they customized the part to a certain number of shaders, to work with embedded DRAM, to integrate with the CPU on the same die or module, to integrate with the hardware audio subsystem, to work with the chosen memory configuration, to reduce power utilization during streaming media playback or home screen use.

What I doubt is there is some miraculous, black box "GPGPU" hardware that has been grafted on.
 
Whatever customizations have been made, we're still seeing it described as "DX10.1" level.

I fully grant they customized the part to a certain number of shaders, to work with embedded DRAM, to integrate with the CPU on the same die or module, to integrate with the hardware audio subsystem, to work with the chosen memory configuration, to reduce power utilization during streaming media playback or home screen use.

What I doubt is there is some miraculous, black box "GPGPU" hardware that has been grafted on.


Didn't the guy who leaked the info say it was beyond that?
 
Is the 1GB of RAM shared between system memory and graphics?

No. 1GB for system functions, 1GB for games.

That is, in fact, true.

Seconded.

Whatever customizations have been made, we're still seeing it described as "DX10.1" level.

I fully grant they customized the part to a certain number of shaders, to work with embedded DRAM, to integrate with the CPU on the same die or module, to integrate with the hardware audio subsystem, to work with the chosen memory configuration, to reduce power utilization during streaming media playback or home screen use.

What I doubt is there is some miraculous, black box "GPGPU" hardware that has been grafted on.

I don't think anyone who has an understanding of it believes that though. I do believe those same people believe they've addressed the inefficiency of said architecture in some fashion.
 

Sheroking

Member
That is, in fact, true.

I'll take your word.

Whatever customizations have been made, we're still seeing it described as "DX10.1" level.

I fully grant they customized the part to a certain number of shaders, to work with embedded DRAM, to integrate with the CPU on the same die or module, to integrate with the hardware audio subsystem, to work with the chosen memory configuration, to reduce power utilization during streaming media playback or home screen use.

What I doubt is there is some miraculous, black box "GPGPU" hardware that has been grafted on.

His claim had nothing to do with GPGPU (Bgassassin brought it up because he was notoriously contradictory about that); it was about the Wii U GPU not being capable of 600 GFLOPS because of the limitations of a ~40 W power budget.
 
Didn't the guy who leaked the info say it was beyond that?

Probably in exactly the same way an R700-based card exceeds the DX10.1 specification in some way.

I don't think anyone who has an understanding of it believes that though. I do believe those same people believe they've addressed the inefficiency of said architecture in some fashion.

Wouldn't it be cheaper and easier to just use a later, DX11 design from AMD than to spend the time and money rooting around in the core architecture to put a bandaid on the DX10.1 hardware? A DX11 design where, presumably, those inefficiencies have already been addressed? And if the operation of the part has been so sufficiently altered as to make a significant difference in GPGPU tasks, why do all the leaks still say, "yeah, it's basically like a R700"?
 

Kenka

Member
I have a question regarding the "3 enhanced Broadway cores" CPU theory.

First, Broadway (90 nm; 729 MHz) itself is strongly rumoured to be an overclocked Gekko (180 nm; 485 MHz), a CPU with PowerPC architecture. What "enhancements" exactly are we talking about?
 
Wouldn't it be cheaper and easier to just use a later, DX11 design from AMD than to spend the time and money rooting around in the core architecture to put a bandaid on the DX10.1 hardware? A DX11 design where, presumably, those inefficiencies have already been addressed? And if the operation of the part has been so sufficiently altered as to make a significant difference in GPGPU tasks, why do all the leaks still say, "yeah, it's basically like a R700"?

Which leaks do you refer to?

Considering how long they've been working on the GPU, it may not have been cheaper for what Nintendo was going for. After all, you just mentioned GCN, and it was only with GCN that AMD addressed those inefficiencies. And we both know Nintendo would be highly unlikely to go with something that new. So with that said, the only real option is to modify a VLIW part from AMD. My take is that, looking at when development started, they chose that GPU, focused on improving its compute capability, and "tacked on", at worst, capabilities above the 10.1 level due to dev input.

Does anyone know when GPU7's development was finalised?

Excluding further tweaks, late Dec./early Jan.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
It's still not a Broadway, or three Broadways at that; that's dissing it.

For starters, triple core is not as easy as making three dies and gluing them together with tape; there are shared components if it's done right. It's a big evolution, otherwise we'd be calling the first Core Duos (and the Pentium Ms in between) Pentium III Tualatins.
Bah. I still call them Tualatins ;p

Also, bear in mind PPC 7xx development was phased out in favour of PowerPC e500, which unlike PPC 7xx has multi-core support. This, PowerPC e600 or even PowerPC 476FP seem more likely to be used, seeing as they're not deprecated and are more up to date; producing custom versions of them could be cheaper, seeing as they're already in production. (PPC 7xx is also still in production, obviously, but such a different core with a core shrink would require dedicated investment.)
You're slightly confusing the nomenclatures here. The eXXX series is Freescale's nomenclature (after they spun off from Motorola); IBM never changed theirs - i.e. an IBM G3 is still a ppc7xx, and a G4 a ppc7xxx. Freescale's nomenclature is:
  • G2 -> e300
  • G3 -> e500 (actually, those are well modernized, and basically, not G3 anymore; there are even 64bit versions, but I don't think they ever got a SIMD unit).
  • G4 -> e600 (for a while those had their AltiVec amputated, but lately it's been reinstated)

Please note that those are only CPU architectures, the end products might be in various SoCs and trade names (QUICC, QorIQ or whatever moronic acronym somebody at FSL came up with these days).

Anyway, my point being, since Freescale was never a console part supplier (they did supply to Apple though), we should stick to AIM's/IBM's nomenclature.
 
It is a member of the R700 family, but it has been modified.

Modified how? Do you even know? Cause you didn't seem to even understand the thing about the CPU you were trying to leak. Was it changed in one of the many ways I just listed that don't fundamentally alter the performance per watt or GPGPU efficiency? If it was altered in such a way as to alter the performance characteristics of the R700 design, are we talking about a 5% gain? 10%? 15%? 25%? 50%? Because a lot of people want to take the word "modified" to mean 50% faster, but it could just as easily mean "we changed some I/O stuff to talk to the PPC cores".
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Modified how? Do you even know? Cause you didn't seem to even understand the thing about the CPU you were trying to leak. Was it changed in one of the many ways I just listed that don't fundamentally alter the performance per watt or GPGPU efficiency? If it was altered in such a way as to alter the performance characteristics of the R700 design, are we talking about a 5% gain? 10%? 15%? 25%? 50%? Because a lot of people want to take the word "modified" to mean 50% faster, but it could just as easily mean "we changed some I/O stuff to talk to the PPC cores".
They didn't just 'change some IO stuff to talk to the PPC cores'. Sorry, that's all I can say.
 

Kenka

Member
40 W for the overall package (console + communication to controllers), excluding heat waste, leaves maybe at most 30 W for the GPU. In order to have a 600 GFLOPS output at this wattage, we are looking at 20 GFLOPS/W energy efficiency. Is it possible to tell with much certainty whether this is realistic?

Hell, how far can we go currently? 30 GFLOPS/W? 100 GFLOPS/W?


edit: huh, what is this:

[image: compute-efficiency-1.png]


http://www.realworldtech.com/compute-efficiency-2012/

edit2: it seems to show that the RV770 architecture is really efficient at compute (?) - a GPU is good... for computing?
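
As a rough sanity check on the arithmetic in the question above (a sketch only; the 600 GFLOPS target and the wattage budgets are the rumoured/assumed numbers from this thread, and the reference parts are commonly cited board-level figures):

Code:
# Required efficiency for the rumoured 600 GFLOPS at a few assumed GPU power budgets.
target_gflops = 600.0
for gpu_watts in (20, 25, 30, 40):
    print(f"{gpu_watts:>2} W budget -> needs {target_gflops / gpu_watts:.1f} GFLOPS/W")

# Commonly cited board-level reference points (chip-level efficiency is a bit higher):
#   RV770 / HD 4870:  ~1200 GFLOPS / ~150 W -> ~8 GFLOPS/W    (55 nm VLIW5)
#   E6760 (embedded):   576 GFLOPS /   35 W -> ~16.5 GFLOPS/W (40 nm VLIW5)
#   HD 7750:            819 GFLOPS /   55 W -> ~15 GFLOPS/W   (28 nm GCN)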
 
What's the power draw of 2GBs of RAM + WiFi + Wireless Controller video feed transmission?
Don't know. Would also be keen to know - as it helps fill in the blanks.

The overarching question is: does the 40W figure preclude the 600 GFLOPS number that's been floated?
Unfortunately I'm not familiar enough with optical drive wattage to make a comment. And with the GPU it's tough to say in that regard, because we had that person from Tezzaron talking about TSMC and stacking, so there is a possibility that, if he wasn't speaking hypothetically, the GPU may not reach 30W to begin with.

Does this imply they can or can't have a 600 GFLOP GPU...?
 

2MF

Member
40 W? I thought the power consumption of the console was 75 W, which should make it possible for the GPU to use about 40 W itself.

For reference, the HD 7750 is a 55 W, 819.2 GFLOPS card.
 
40 W? I thought the power consumption of the console was 75 W, which should make it possible for the GPU to use about 40 W itself.

For reference, the HD 7750 is a 55 W, 819.2 GFLOPS card.

75 watts is the max the power supply is rated for. Nintendo says typical usage is around 45 watts in game.

With even 30 full watts just for the GPU, the Wii U's GPU would have to be nearly as efficient as AMD's latest GCN architecture manufactured at 28nm to reach 600 GFLOPS. Given that I think it's unlikely that A) it will be manufactured at 28nm, B) its power usage will be more than about 20 watts, and C) whatever they've done to the R700 is enough to nearly double its performance per watt, I think it's pretty safe to discount 600 GFLOPS as a realistic expectation.
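
To put rough numbers on that (a sketch under the assumptions above; the 8 and 16.5 GFLOPS/W figures come from the desktop RV770-class and embedded E6760 parts discussed in this thread, not from any Wii U data):

Code:
# Implied throughput for assumed power budgets and efficiencies.
# 8 GFLOPS/W ~ desktop RV770-class board, 16.5 GFLOPS/W ~ embedded E6760.
for watts in (15, 20, 25, 30):
    for gflops_per_watt in (8.0, 12.0, 16.5):
        print(f"{watts} W x {gflops_per_watt} GFLOPS/W = {watts * gflops_per_watt:.0f} GFLOPS")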
 

Kenka

Member
75 watts is the max the power supply is rated for. Nintendo says typical usage is around 45 watts in game.

With even 30 full watts just for the GPU, the Wii U's GPU would have to be nearly as efficient as AMD's latest GCN architecture manufactured at 28nm to reach 600 GFLOPS. Given that I think it's unlikely that A) it will be manufactured at 28nm, B) its power usage will be more than about 20 watts, and C) whatever they've done to the R700 is enough to nearly double its performance per watt, I think it's pretty safe to discount 600 GFLOPS as a realistic expectation.
 
75 watts is the max the power supply is rated for. Nintendo says typical usage is around 45 watts in game.

With even 30 full watts just for the GPU, the Wii U's GPU would have to be nearly as efficient as AMD's latest GCN architecture manufactured at 28nm to reach 600 GFLOPS. Given that I think it's unlikely that A) it will be manufactured at 28nm, B) its power usage will be more than about 20 watts, and C) whatever they've done to the R700 is enough to nearly double its performance per watt, I think it's pretty safe to discount 600 GFLOPS as a realistic expectation.

At 20W, at something similar to the E6760's 16.5 GFLOPs/W, that would put "GPU7" at like 330 GFLOPs...

Does anyone have equivalent figures for the RSX and Xenos?
 

Kenka

Member
I may be jumping to conclusions like an idiot who never learns patience, but in my head it's more and more a certainty that the Wii U is a Wii 2. A 6x performance gap compared to what is rumoured for the X360 and PS4 is really big (assuming the FLOPS counts on both sides are comparable). It will never get ports from the other consoles.

edit: I really need to see how the manufacturing process node relates to energy efficiency; is the relation linear, quadratic or exponential?
 
75 watts is the max the power supply is rated for. Nintendo says typical usage is around 45 watts in game.

With even 30 full watts just for the GPU, the Wii U's GPU would have to be nearly as efficient as AMD's latest GCN architecture manufactured at 28nm to reach 600 GFLOPS. Given that I think it's unlikely that A) it will be manufactured at 28nm, B) its power usage will be more than about 20 watts, and C) whatever they've done to the R700 is enough to nearly double its performance per watt, I think it's pretty safe to discount 600 GFLOPS as a realistic expectation.

It's probably about 450-500 GFLOPS like originally rumoured way back when (a few months ago, around E3). Then the second rumour came out that said it was between 500-600 GFLOPS, with it being closer to the 600 GFLOPS number. Then people seemed to just start assuming it was 600 GFLOPS somehow. Nothing was ever really credible and concrete along the way.

At 20W, at something similar to the E6760's 16.5 GFLOPs/W, that would put "GPU7" at like 330 GFLOPs...

Does anyone have equivalent figures for the RSX and Xenos?

They're around 200-250 AFAIK.
 

2MF

Member
75 watts is the max the power supply is rated for. Nintendo says typical usage is around 45 watts in game.

What is the best source for this? I googled for a few minutes and some sources say "45 watts typical power usage, 75 watts max", while others say something like "45 watts idle, 75 watts under load".
 
What is the best source for this? I googled for a few minutes and some sources say "45 watts typical power usage, 75 watts max", while others say something like "45 watts idle, 75 watts under load".
Iwata, via Nintendo Direct apparently. 40W, actually.

According to Iwata http://www.neogaf.com/forum/showthread.php?t=492101&highlight=iwata+direct

The Wii U is rated at 75 watts of electrical consumption.
Please understand that this electrical consumption rating is measured at the maximum utilization of all functionality, not just of the Wii U console itself, but also the power provided to accessories connected via USB ports.
However, during normal gameplay, that electrical consumption rating won't be reached.
Depending on the game being played and the accessories connected, roughly 40 watts of electrical consumption could be considered realistic.
This energy-saving specification has made this compact form-factor possible.
 

Kenka

Member
Now that I think of it, if the CPU requires 5 W of power, what would be its output? Roughly around 10-20 GFLOPS, right?

How does that compare to Cell and Xenon?
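
One rough way to ballpark the first question (everything below is an assumption: the core counts, the clocks, and a Broadway-style paired-singles FPU doing 4 FLOPs per cycle per core):

Code:
# Peak single-precision FLOPS = cores * clock (GHz) * FLOPs per cycle per core.
# Broadway's paired-singles FPU can issue a 2-wide fused multiply-add each cycle,
# i.e. 4 FLOPs/cycle (1 core * 0.729 GHz * 4 ~ 2.9 GFLOPS, which matches the
# Wikipedia figure for the Wii CPU quoted further down the thread).
def peak_gflops(cores, ghz, flops_per_cycle=4):
    return cores * ghz * flops_per_cycle

print(peak_gflops(1, 0.729))  # ~2.9 GFLOPS  (Wii's Broadway)
print(peak_gflops(3, 1.0))    # ~8.7 GFLOPS  (hypothetical 3 cores @ 1.0 GHz)
print(peak_gflops(3, 1.6))    # ~19.2 GFLOPS (hypothetical 3 cores @ 1.6 GHz)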
 
Do we know what type of screen the Wii U GamePad uses, e.g. IPS etc.?

One more thing, do the analog sticks click down, like how you sprint in COD?
 

Kenka

Member
Xenon is about 100GFlops. Cell is around 200GFlops.
WUT.
Wii's processor was 2.9 GFLOPS according to Wikipedia. Surely, Xenon is not 30 times as quick even though there were three cores ?

The world of tech is more and more fascinating by the minute.
 

2MF

Member
WUT.
Wii's processor was 2.9 GFLOPS according to Wikipedia. Surely, Xenon is not 30 times as quick even though there were three cores ?

The world of tech is more and more fascinating by the minute.

Peak GFLOPS is only a good measure of comparison within the same architecture. It's better than clock rate or number of cores, but doesn't always allow comparisons between different architectures.

All about the flops again :D

Just checking in ...

Thanks for your contribution.
 

The_Lump

Banned
Just to add to the mix here, the TDP given for gpus is also the max under load figure, right? We shouldn't really be discounting things based on their max TDP compared to wii u's minimum/basic power draw, right?
 
45 watts is the WiiU under load.

WUT.
Wii's processor was 2.9 GFLOPS according to Wikipedia. Surely, Xenon is not 30 times as quick even though there were three cores ?

Wii's CPU was an overclocked version of the GameCube CPU. It was single core and ran at 729 MHz. Xenon is 3 cores at 3.2 GHz, each with a really big vector unit for floating point calculations. Xenon's transistor and thermal budget was also massively larger than the Wii's.
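
To make that gap concrete, a back-of-envelope sketch (the per-cycle figures are the commonly cited vector-unit widths, not official specs):

Code:
# Same peak-FLOPS formula applied to the two CPUs being compared.
# Per-cycle figures are the commonly cited SIMD widths, not official specs.
wii_broadway = 1 * 0.729 * 4  # 2-wide paired-singles FMA           -> ~2.9 GFLOPS
xenon_vmx = 3 * 3.2 * 8       # 4-wide VMX128 FMA per core, 3 cores -> ~77 GFLOPS
print(xenon_vmx / wii_broadway)  # ~26x on paper; figures that count extra units
                                 # push Xenon to the ~100 GFLOPS quoted above.
# The paper gap is clock (4.4x) * SIMD width (2x) * cores (3x); it says nothing
# about per-clock efficiency, which is where Broadway-style cores do well.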
 

ozfunghi

Member
Just to add to the mix here, the TDP given for gpus is also the max under load figure, right? We shouldn't really be discounting things based on their max TDP compared to wii u's minimum/basic power draw, right?

That is how I see it. And I also want to place a big question mark next to Iwata's 40 W statement. The way I see it, and the way I think it is most sensible for him as head of Nintendo to frame it, is that less = better. He already knows 40 or 50 or even 75 W isn't going to persuade the tech analysts that it is super powerful hardware. What he can do, though, is come across as environmentally friendly, cost efficient, energy saving etc. So IMO he is likely to give a low estimate rather than a high one. Also, if he is speaking of average consumption along the same lines, I would guess he includes the time you spend in the Wii U menu, on Miiverse, in the game menu etc. Maybe for each hour on the Wii U you are only stressing the GPU/CPU for 20 to 30 minutes.

So I think we are comparing low estimates (power consumption of the Wii U) to high estimates (power consumption of the GPU/CPU). This is of course pure speculation on my part, but it could account for 5 to 10 W.
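
As a toy illustration of that duty-cycle point (every number below is an assumption, chosen only to show how an "average" figure can sit 5-10 W below the under-load figure):

Code:
# Weighted-average draw if only part of each hour actually stresses the GPU/CPU.
load_watts, menu_watts = 45.0, 32.0  # assumed under-load and menu/idle draw
for minutes_under_load in (20, 30, 45):
    f = minutes_under_load / 60.0
    avg = f * load_watts + (1 - f) * menu_watts
    print(f"{minutes_under_load} min/hour under load -> average ~{avg:.0f} W")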
 
Just to add to the mix here, the TDP given for gpus is also the max under load figure, right? We shouldn't really be discounting things based on their max TDP compared to wii u's minimum/basic power draw, right?

The 40W figure appears to refer to power draw during gaming, and depends on what peripherals are attached.
 

tkscz

Member
Which leaks do you refer to?

Considering how long they've been working on the GPU, it may not have been cheaper for what Nintendo was going for. After all, you just mentioned GCN, and it was only with GCN that AMD addressed those inefficiencies. And we both know Nintendo would be highly unlikely to go with something that new. So with that said, the only real option is to modify a VLIW part from AMD. My take is that, looking at when development started, they chose that GPU, focused on improving its compute capability, and "tacked on", at worst, capabilities above the 10.1 level due to dev input.



Excluding further tweaks, late Dec./early Jan.

My problem with this is that everyone keeps mentioning DX10.1 rather than saying OpenGL 3.3. Why does this bother me? Well, besides the obvious (DirectX is Microsoft's API and wouldn't be used on a Nintendo console anyway), OpenGL 3.3 backports features from OpenGL 4. While I can't say if DX10.1 does this, I know that OpenGL 3.3 supports backporting, allowing it to "port" OpenGL 4 features down to weaker hardware, or as much as the weaker hardware can handle. If it were just that, then it would allow for a good enough amount of OpenGL 4 (DX11) features, but like you and ideaman have said many times, it has been tweaked and messed with many times, to the point that there is no actual off-the-shelf graphics card to compare it with. So it could support more, or Nintendo might have been banking on that backporting.
 
WUT.
Wii's processor was 2.9 GFLOPS according to Wikipedia. Surely, Xenon is not 30 times as quick even though there were three cores ?

Just to make this clear: no, not even close, at least not in relevant game code. Broadway has OoOE and a short pipeline, overall resulting in significantly higher IPC. Three Broadway cores @ 1 GHz should already be enough to compete with Xenon at quite a few tasks.

Imo: Forget the FLOPS when you are comparing different architectures. It just confuses people and leads to wrong conclusions. You can't put CPU performance into a single number.
 

The_Lump

Banned
The 40W figure appears to refer to power draw during gaming, and depends on what peripherals are attached.

45 watts is the WiiU under load.


I wouldn't say that's 100% accurate...

From Cheesemeister's Japanese translation:

Iwata: "Depending on the game being played and the accessories connected, roughly 40 watts of electrical consumption could be considered realistic."

And remember, Nintendo are trying to market this as energy efficient. They could have taken this figure whilst playing a Virtual Console NES game for all we know!
 

M3d10n

Member
In a console, where the CPU can freely access GPU data without going through a PCI Express bus and buffers can be cast into different types without going through conversion, a DX10.1/OpenGL 3.3/SM4.1 GPU can be very useful for non-graphics tasks. I believe Havok's GPU-accelerated features (which include cloth, hair and fluids) would work fine on it.
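
For illustration, here is a minimal sketch of that kind of non-graphics GPU work, written with pyopencl (an assumption made purely for the example; R700-class desktop parts expose OpenCL 1.0, while a console would use its own compute or shader path). It just integrates particle positions, the sort of data-parallel update that cloth or fluid solvers are built from:

Code:
# Hypothetical example of data-parallel, non-graphics GPU work of the kind
# described above (a simple particle/cloth-node integration step).
# Uses pyopencl purely for illustration; nothing here is Wii U specific.
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

kernel_src = """
__kernel void step(__global float4 *pos, __global float4 *vel,
                   const float dt, const float g)
{
    int i = get_global_id(0);
    vel[i].y += g * dt;       // gravity on the y axis
    pos[i] += vel[i] * dt;    // integrate position
}
"""
prog = cl.Program(ctx, kernel_src).build()

n = 65536
pos = np.zeros((n, 4), dtype=np.float32)
vel = np.random.rand(n, 4).astype(np.float32)

mf = cl.mem_flags
pos_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=pos)
vel_buf = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=vel)

prog.step(queue, (n,), None, pos_buf, vel_buf,
          np.float32(1.0 / 60.0), np.float32(-9.81))
cl.enqueue_copy(queue, pos, pos_buf)  # read the updated positions back

On real console hardware the same kind of update would go through the platform's own compute or shader APIs rather than OpenCL, but the data-parallel shape of the work is the point being made above.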
 
I wouldn't say that's 100% accurate...

From Cheesemeister's Japanese translation:

Iwata: "Depending on the game being played and the accessories connected, roughly 40 watts of electrical consumption could be considered realistic."

And remember, Nintendo are trying to market this as energy efficient. They could have taken this figure whilst playing a Virtual Console NES game for all we know!

Electricity bills have been a big talking point in the UK over the last couple of years. I think it could be a USP with some value here when referred to alongside any potential next MS/Sony machine. I'm no Greenpeace warrior, but I do love to buy energy efficient devices and save wherever I can.
 

Nilaul

Member
I have an idea: let's stop talking about specs and wait till the homebrew scene finds out something more than guesswork.
 

Donnie

Member
WUT.
Wii's processor was 2.9 GFLOPS according to Wikipedia. Surely, Xenon is not 30 times as quick even though there were three cores ?

The world of tech is more and more fascinating by the minute.

FLOPS are a very poor way to compare the performance of different CPUs.

As an example, the i7 920 is theoretically capable of 42.5 GFLOPS while Xenon is theoretically capable of 100 GFLOPS. But in reality the i7 920 is several times better than Xenon.
 

FyreWulff

Member

Kenka

Member
Imo: Forget the FLOPS when you are comparing different architectures. It just confuses people and leads to wrong conclusions. You can't put CPU performance into a single number.

FLOPS are a very poor way to compare the performance of different CPUs.

As an example, the i7 920 is theoretically capable of 42.5 GFLOPS while Xenon is theoretically capable of 100 GFLOPS. But in reality the i7 920 is several times better than Xenon.
Alright, explanations that indicate the situation is more obscure and complex than I imagined. I am out of the thread. Thank you, guys, for helping me understand the topic a bit better.

edit: I saw your edit, Donnie. Thanks for clarifying things a bit more in detail.
 