Most estimates I've heard have given the GPU 25-30W to work with. Besides, we know the GPU is clocked at 550MHz, and we know how big it is. That logic is doing something, and it's doing it at 550MHz. Whether it's being used for computation (GFLOPS) or something else won't have much effect on the energy consumption.
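Back-of-envelope, that reasoning follows the standard dynamic power relation P ≈ αCV²f: at a fixed clock on a die of this size, the switched capacitance dominates, so what the logic is being used for moves the number less than you'd think. A minimal sketch (every capacitance, voltage, and activity value here is an illustrative guess, not a measured Wii U figure):

```python
# Dynamic CMOS power: P = alpha * C * V^2 * f
# alpha = activity factor, C = switched capacitance, V = supply voltage, f = clock.
# All numeric values below are illustrative assumptions, not Wii U specs.
def dynamic_power_watts(c_farads, v_volts, f_hz, alpha=0.1):
    return alpha * c_farads * v_volts**2 * f_hz

# e.g. a block with ~50 nF of switched capacitance at 1.0 V and 550 MHz:
print(dynamic_power_watts(50e-9, 1.0, 550e6))  # ~2.75 W for such a block
```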
Man I thought I had pretty reasonable expectations and even then Nintendo missed all of them.
Maybe in America... in Italy a cappuccino is made with espresso + latte (milk).
Edit: nope, even Wikipedia in English says that espresso + milk = cappuccino.
http://en.wikipedia.org/wiki/Cappuccino
By the way, for the CPU we have to wait another 2-3 days, right?
It's customized by Nintendo. It matches no known architecture, going by the OP.
15W Thraktor. How?
And if it's as you say, where is the desktop equivalent?! I'd love to throw one of these in a media center box.
I'm having a real hard time wrapping my head around that kind of performance at that power draw.
This would make this the most efficient console ever created. I'm not sure I buy it.
It's not. It's the GameCube of this generation.
Going clockwise from bottom left, you're basically looking at the render back ends (they're the last step before outputting to RAM).
The bottom middle I believe is geometry setup/command processor. Bottom right is the UVD/display controllers.
Most of the area to the top and left is ROPs. At the bottom right there's some video decode hardware, etc. There is certainly some miscellaneous logic on there, but it's a lot closer to 10-20% than it is to 70%.
The 33 watts is what the console uses at the wall. We can all measure that, and it's the same for running any Wii U game. It's been tested by many different sites.
Not sure if it's the same for all Wii U games, is it? Or just with no game running and NSMBU?
It is
Thanks for the elaborate response
By what metric?
Thanks. Would there be any reason that separating them out would make sense to you from a design perspective?
There's certainly some such units, but it simply wouldn't make sense to me that they should take up that proportion of the die.
We find that the Wii U is drawing around 32 watts of power during gameplay and despite running our entire library of software, we only ever saw an occasional spike just north of 33w
Maybe they aren't accurately reading the draw?
Dunno :/
Rendering the Wii U menu actually consumes almost as much power as playing Super Mario U. Watching a movie on Netflix consumes a bit less power, my guess is a lot of the 3D blocks are power gated leaving only the CPU cores and video decode hardware active.
Well, they're pretty much isolated/independent functions.
By the supposed theoretical GFLOPS performance and the real-world performance, thanks to fixed functions and efficiency.
The GameCube's GPU was theoretically 8 GFLOPS while the Xbox's was 3 times more, yet they were on par in real-world applications.
It's possible if they have a poorly calibrated wattmeter. Iwata said in a Nintendo Direct that the Wii U would generally consume 45W, but there is overhead there for accessory power via USB and the SD card reader and such.
What kind of performance? We know it currently runs roughly equivalent PS3/360 games and we know the power draw. Unless it's already been maxed out by those early ports (which would also be unprecedented), it still has some theoretical performance gains to be had.
We may all be underwhelmed with its specs as they come to light, but maybe we should also be impressed by its efficiency.
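One way to put a number on that efficiency is GFLOPS per watt at the wall. A rough sketch, assuming the thread's low-end 176 GFLOPS theory for the Wii U and commonly cited launch-era ballparks for the other consoles (not exact measurements):

```python
# (theoretical GFLOPS, approximate wall-power watts) -- all ballpark assumptions
consoles = {
    "Wii U (theorized low end)": (176, 33),
    "Xbox 360 (launch)": (240, 170),
    "PS3 (launch)": (230, 190),
}
for name, (gflops, watts) in consoles.items():
    print(f"{name}: {gflops / watts:.2f} GFLOPS per watt")
```

Even at the low-end theory, the Wii U comes out several times more efficient per watt than the launch-era HD twins.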
No, don't get me wrong. Even at the theorized 176 GFLOPS, I'm impressed with the results so far. The ports being as good as they are, given that, just makes what they built here that much more impressive IMO.
In any event, I've PM'd Thraktor my clarification. Last thing I want is to seem like an unappreciative asshole.
It's not really wise to go down the road of "what ifs." There's no evident reason to doubt the figure.
It's theorized to be 352 GFLOPS now though :/
I don't understand how the ports can perform worse when given more GFLOPS and more RAM... The only explanation is rushed products made by weaker dev units.
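For what it's worth, both the 176 and 352 figures fall out of the same standard formula: ALU count × 2 FLOPs per clock (one multiply-add) × clock speed. The ALU counts below are the thread's die-shot speculation, not confirmed specs:

```python
# Theoretical shader throughput for an AMD VLIW-style GPU:
# each ALU retires one multiply-add (2 FLOPs) per clock.
# ALU counts are speculation from the die-shot debate, not confirmed.
def theoretical_gflops(alus, clock_mhz, flops_per_alu_per_clock=2):
    return alus * flops_per_alu_per_clock * clock_mhz / 1000.0

print(theoretical_gflops(160, 550))  # 176.0 -> the low-end reading
print(theoretical_gflops(320, 550))  # 352.0 -> the doubled-ALU reading
```

So the 176-vs-352 debate is really just a disagreement over how many ALUs are on the die.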
More food for thought: Nintendo is unifying its development structure for consoles and handhelds. I always assumed this referred to the software development tools side of things. The 3DS uses pretty modern fixed-function graphics. So besides keeping their knowledge about TEV useful even on Wii U (which is a given, as the units for Wii BC are also usable in Wii U), it's also in Nintendo's best interest to have all the fixed-function graphics capabilities of the 3DS available on the Wii U as well, now in full HD...
There's more to it than just GFLOPS and RAM amount. Ports could be bottlenecked by the CPU and RAM speed.
Same goes for your "GameCube of this gen" comparison. GFLOPS don't tell the whole story.
I may be crazy, but I think I may have zeroed in on the purpose of the 2 distinct eDRAM pools. The idea must have primarily sprung from the need for Wii BC and very low latencies. Follow me here...
You're trying to emulate the main pool of 24 MB of 1T-SRAM from Flipper/Hollywood. Fine. You've got 32 MB of eDRAM to do that. That takes 3/4 of your allotted eDRAM. Now you've got 8 MB to emulate the additional 2 MB frame buffer and 1 MB texture RAM.
But you can't do it in 8 MB, because that 1 MB of texture cache alone on Flipper was amazingly rigged to a 512-bit bus! This gave it not merely high (for then) bandwidth, but extremely low latency. If the 32 MB eDRAM is hooked up to a 2048-bit bus, then you're only left with a 512-bit bus to share between the texture cache and the frame buffer.
Which leads to the proposed 4x1MB eDRAM modules on top. It's probably simply the smallest amount that Renesas could offer on a 512-bit bus.
I'm probably completely off, but it stands to be refuted.
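The bus-width argument above is easy to sanity-check: peak bandwidth is just bus width × clock. A sketch, assuming (as the thread does) a 550 MHz eDRAM clock and the quoted bus widths; none of these are confirmed numbers:

```python
# Peak bandwidth = (bus width in bytes) * clock. The bus widths and the
# 550 MHz clock are the thread's assumptions, not confirmed specs.
def peak_bandwidth_gbs(bus_width_bits, clock_mhz):
    return bus_width_bits / 8 * clock_mhz * 1e6 / 1e9

print(peak_bandwidth_gbs(2048, 550))  # 140.8 GB/s for the 32 MB pool
print(peak_bandwidth_gbs(512, 550))   #  35.2 GB/s for a 512-bit path
```

Latency, of course, is the one thing this calculation doesn't capture, and it's the whole point of the Flipper comparison.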
If this is true, then wouldn't developers have 32+4MB of eDRAM to play with? Or am I missing something, completely?
Right folks, it's late here and I'm off to bed. I've cleaned up the OP a bit, and I expect lots of newly deciphered info for it by the time I wake up tomorrow.
Well, a new, unknown architecture can be quite the "bottleneck", in particular if you're on a tight budget with limited time. The RAM isn't a bottleneck anymore (it actually never was), so try again (you can only make a case for the CPU being a bottleneck of sorts).
That 352 isn't counting any proposed fixed-function assistance afaik. The number you quoted was a knee-jerk response thrown out before the chip had even begun to be analyzed properly.

If it turns out that the fixed functions are an aid to the programmable shader units in the Wii U...
...is there a documented way of measuring performance of something so customized?
Honest question.
You should realize why Nintendo would have an interest in doing so: perhaps a changing market that most don't realize is about to change a lot of dynamics. For Nintendo, shrinking things, merging both ends, and then making products that tie into that is much better than the status quo. Let them grow the power from there; anything else and we end up with earlier situations and tons of money lost.
Yes, it would basically be like a modern Flipper. Like most other modern chips, bandwidth and size have increased much more than latency has decreased. However, the latency is still much reduced and obviously a focus.
I just realized this. The smaller block of eDRAM is apparently of a greater density and higher bandwidth than the larger block. Why would that 4 MB be higher speed than the larger pool of 32 MB? Especially if it was put in there [primarily] because of the Wii emulation?
Well from the Q&A answer on top of the investor presentation stuff, it definitely seems like it was referring to just the software side to me. I wouldn't take that as having anything necessarily to do with hardware, other than portable hardware becoming more capable these days (i.e. supporting the same shaders or whatever 3D APIs and stuff as the console hardware). I figure they're working on middleware or some standardized higher-level dev setup to work with, to make development faster and more portable down the line.
From the investor stuff it sounds like the finalized dev kits weren't out until around Q3, so I imagine a lot of stuff was rushed, on top of the hardware being completely different. Standard crappy launch stuff, basically, albeit without the usual generational big performance overhead to easily mask issues.
Might be possible for a dev to make their own benchmarks to test this or that, perhaps, or ideally there'd be some documented performance metrics for everything in the documentation so they know what to expect rather than having to test things out themselves.
It's a higher speed for its size, not overall. But it should be extremely low latency even relative to the 32 MB MEM1.