I'm ultra skeptical about the register file using anything but (multiported) SRAM. But I already disclaimed any proficiency in the subject, so I'll shut up : )
Isn't something like a VM pretty much a worst case scenario for something like the SPEs (or PPE for that matter)?

It's from a doctoral thesis, "Data and Type Optimizations in Virtual Machine Interpreters". One section describes optimizing a Java interpreter for the Cell BE. It doesn't seem to be online anymore, but the paper "Optimization Strategies for a Java Virtual Machine Interpreter on the Cell Broadband Engine" is available and seems to form the core of the SPE optimization section.
More relevant to the Wii U discussion is probably the comparison of the PPE to a 2.4GHz Core 2 processor. Despite running at a lower clock speed, the Core 2 shows itself 2x-6x faster. This is a good demonstration of a common task in many game engines (running a scripting language) that should profile considerably differently on the Wii U than on the previous HD consoles.
Yes, and many games have a large amount of code written in scripting languages, going all the way back to QuakeC (and probably further).
If you're putting something on SPU in the first place it's probably well-contained and parallelized. Branch-heavy code like a VM, which is jumping all over the place in both memory and instructions, is a really poor choice.
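To make "branch-heavy" concrete, here's a minimal sketch of the kind of dispatch loop at the heart of any bytecode interpreter (the opcodes are made up, not taken from the thesis). Every iteration ends in a data-dependent indirect branch that an SPE, with no branch prediction hardware to speak of, will miss almost every time, and the in-order PPE doesn't fare much better:

#include <stdint.h>
#include <stdio.h>

enum { OP_PUSH, OP_ADD, OP_JMP, OP_HALT };   /* hypothetical bytecode set */

/* Minimal stack-machine loop: each opcode is decoded with a switch, so
 * every iteration ends in an unpredictable indirect branch, and the
 * operands force loads scattered across the interpreter's data. */
static int run(const uint8_t *code)
{
    int stack[64], sp = 0, pc = 0;
    for (;;) {
        switch (code[pc++]) {
        case OP_PUSH: stack[sp++] = code[pc++];          break;
        case OP_ADD:  sp--; stack[sp - 1] += stack[sp];  break;
        case OP_JMP:  pc = code[pc];                     break;
        case OP_HALT: return stack[sp - 1];
        }
    }
}

int main(void)
{
    const uint8_t prog[] = { OP_PUSH, 2, OP_PUSH, 3, OP_ADD, OP_HALT };
    printf("%d\n", run(prog));   /* prints 5 */
    return 0;
}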
Code reasonably well-suited for SPU smokes Xenon/PPU, but even if it doesn't, your priority is often getting it off the PPU to start with, because you will always have more free SPU time than free PPU time... even code that runs significantly slower on SPU because of awkward DMA or whatever can be worth moving to SPU.
I really only posted that chart because of Argyle's exaggerated claim. In the context of the Wii U the comparison of the Core2 to the PPE is more relevant as I mentioned above. I don't want to derail the thread with too much SPE talk that isn't relevant to Wii U discussion.
Why would you need an extra 1.5MB of cache for command lists? Particularly when you have an entire 32MB of CPU/GPU-addressable eDRAM pool?

I would also guess that the core with the extra cache is intended to generate the command list for the GPU. It's the only work I can think of that you could reasonably expect to have to perform in every game.
...I thought that the 32MB of eDRAM was for GPU-access only.
Digital Foundry did some power draw tests. Not sure if it offers any new insight.
"We find that the Wii U is drawing around 32 watts of power during gameplay and despite running our entire library of software, we only ever saw an occasional spike just north of 33w."
Thanks. I assume that if Wii U was being pushed to the max it'd go higher?
One thing that did stand out from our Wii U power consumption testing - the uniformity of the results. No matter which retail games we tried, we still saw the same 32w result and only some occasional jumps higher to 33w. Those hoping for developers to "unlock" more Wii U processing power resulting in a bump higher are most likely going to be disappointed, as there's only a certain amount of variance in a console's "under load" power consumption.
http://www.neogaf.com/forum/showpost.php?p=44966628&postcount=646

Just to bring up this post from a few pages back, I had a question...
If one Xenon core only outperformed Broadway by 20%, then could this be why the Wii U's CPU cores were originally clocked at 1GHz (a clock speed increase of 37% over Broadway)? Wouldn't that then put the Wii U's three cores at 1.24GHz with as-good-or-better performance clock for clock as Broadway, quite a bit beyond Xenon? (Broadway would match or beat a Xenon core at only 875MHz.) So more like 1.5x (I know we love those multiples), or 50% faster than Xenon, according to the emulator developer from the quoted post above?
Obviously certain tasks that are SIMD-heavy would still likely fall short on the Wii U, but for general game-logic computation and whatnot, wouldn't the Wii U's CPU cores be reasonably faster? Or was the developer missing some other key component of Xenon's power that was unlocked later on?
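Working that out with the numbers in the post (Broadway's 729MHz, the claimed 20% per-core Xenon advantage, and the roughly 1.24GHz clock Marcan reported), and assuming, very naively, that per-core performance scales linearly with clock:

#include <stdio.h>

int main(void)
{
    /* Napkin math only: assumes per-core performance scales linearly with
     * clock, which ignores SMT, VMX, memory behaviour, and everything else. */
    const double broadway_mhz = 729.0;   /* Wii CPU clock */
    const double wiiu_mhz     = 1243.0;  /* clock Marcan reported */
    const double xenon_edge   = 1.20;    /* "one Xenon core ~20% faster than Broadway" */

    /* Clock a Broadway-class core would need to match one Xenon core. */
    double match_mhz = broadway_mhz * xenon_edge;   /* ~875 MHz */

    /* How one Wii U core at ~1.24GHz would then compare to one Xenon core. */
    double per_core = wiiu_mhz / match_mhz;         /* ~1.42x */

    printf("Broadway would need ~%.0f MHz to match a Xenon core\n", match_mhz);
    printf("One Wii U core would then be ~%.2fx a Xenon core\n", per_core);
    return 0;
}

That naive scaling is exactly where the ~875MHz and ~1.4-1.5x figures come from; the catch is that it ignores Xenon's SMT and VMX entirely.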
I wonder how much stuff is in the Gamepad.
It obviously runs the TV remote stuff independently of the Wii U itself, so how much could this firmware be expanded upon to offer more functionality?
I hope the TV remote becomes more customisable in the future to control receivers, or toggle various remote configs.
No real idea, other than maybe it was something to do with the typical memory access pattern for this work. Just trying to think about why they'd bother to even make it asymmetrical. Do you have any ideas on why they would even do this?
Another possibility is that it's the core the OS will preempt, and games don't have access to all of the cache on it (to keep the OS thread from screwing with the cache for the game thread).
The measured power consumption seems weird, looking at the hardware presentation back in September:

"Depending on the game being played and the accessories connected, roughly 40 watts of electrical consumption could be considered realistic."
http://www.neogaf.com/forum/showpost.php?p=42312104&postcount=1

28 - 33W isn't "roughly 40W" - it's roughly 30W. What's with the 30% difference? What if it's related to Arkam's statements that initial devkits were running at 1GHz/400MHz? Maybe games designed for firmware 1.x run at lower clock speeds and no currently released game or application has the chips running at the 1.25GHz/550MHz clocks Marcan discovered? That difference in clock speed could potentially explain the 10W difference.
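A very rough way to sanity-check that idea: assume the measured ~32W really is at the 1GHz/400MHz devkit clocks, guess how it splits between chipset and everything else (the 10W/22W split and the CPU/GPU shares below are pure illustration, not data), and scale the chipset part linearly with clock:

#include <stdio.h>

int main(void)
{
    /* Pure illustration: the split and the shares below are guesses, not data. */
    const double periphery_w = 10.0;   /* drive, RAM, Gamepad radio, fan, ... */
    const double chipset_w   = 22.0;   /* rest of the measured ~32W, if clocks were 1GHz/400MHz */
    const double cpu_share = 0.35, gpu_share = 0.65;

    /* First-order model: dynamic power scales linearly with clock at fixed voltage. */
    double at_full_clocks = chipset_w * (cpu_share * (1243.0 / 1000.0) +
                                         gpu_share * ( 550.0 /  400.0));

    printf("Chipset: ~%.1f W -> ~%.1f W at 1.24GHz/550MHz\n", chipset_w, at_full_clocks);
    printf("Whole box: ~%.1f W -> ~%.1f W\n",
           periphery_w + chipset_w, periphery_w + at_full_clocks);
    return 0;
}

That lands near the quoted 40W, but the guesses are doing most of the work here; real silicon would also change voltage with clock, which moves the numbers further.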
Those 10 watts should make a monstrous difference.
Those 10W are one third of the overall power consumption. That is quite a difference.
Read the article
Still a rather small amount of power. I mean, if you had a 200 watt power supply and were not utilizing 30%, the performance lost is far more than, say, 10 watts, even if the percentage is the same.
No, the power usage lost is more; the performance lost is the same, 30%.
What is 30% of 30, and what is 30% of 200? You are losing two Wii Us' worth of performance in the latter scenario.
Yes, 30% of 200 is more than 30% of 30. No shit, Sherlock. It's still 30%. Same relative difference. In fact, the relative difference in this latter case would be higher, as it's all for the chipset. Periphery stays the same, obviously. If we assume 10W for the periphery, that leaves 20W/190W for the actual chipset (I'm not using your exact example as it's backwards: x - y% + y% != x). One third more power overall means 30W just for the chipset in the former case (+50%), 256W in the latter (+35%).
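The same arithmetic in code form, since it goes by quickly in prose (the 10W periphery figure is the assumption stated above):

#include <stdio.h>

/* Reproduces the arithmetic above: add one third of the current total draw
 * and see what that does to the chipset alone, assuming a fixed 10 W for
 * periphery (drive, ports, Gamepad radio, ...). */
static void scenario(double total_w)
{
    const double periphery_w = 10.0;
    double chipset  = total_w - periphery_w;
    double extra    = total_w / 3.0;          /* "one third more power overall" */
    double new_chip = chipset + extra;

    printf("%5.0f W box: chipset %5.1f W -> %5.1f W (+%.0f%%)\n",
           total_w, chipset, new_chip, 100.0 * extra / chipset);
}

int main(void)
{
    scenario(30.0);    /* Wii U-sized box:  20 W -> 30 W   (+50%) */
    scenario(200.0);   /* 200 W-sized box: 190 W -> ~257 W (+35%) */
    return 0;
}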
I think it's great that they did that. As I said earlier in the thread, I was very unsure of whether the system did more aggressive power gating or not, so it was hard to draw any conclusions from just 3 measurements. Now we see that there does seem to be a pretty hard cap on the power draw.
Their logic here is somewhat flawed. If every game draws 32W, and even the Internet browser draws 32W, then the logical deduction would be that there is very little in the way of power gating, or other mechanisms to reduce power draw when at less than full load. You can't really make any deductions from it as to whether there's more performance to be squeezed from the console.

Yeah, I agree with all of this.
I also would find it incredibly unlikely that current games are running on anything less than the full 1.25GHz/550MHz clocks.
Performance doesn't seem to be choked due to GPU-intensive tasks, does it? Thought I read that slowdowns occur in areas with lots of alpha, or areas with lots of normal maps or high-res textures, in game, which would be a RAM bandwidth issue.

The alpha stuff is the most puzzling remaining aspect of Wii U performance to me at this point. One potential explanation that was put forth is that doing correct alpha blending using traditional methods would require polygon sorting on the CPU, which could be the reason for the slowdown. I guess this could be tested by checking whether the slowdown is dependent on the amount of screen area covered by blended effects or not.
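To make the "polygon sorting on the CPU" point concrete: with traditional blending, transparent geometry has to be submitted back to front, so every frame the CPU ends up doing something like the following before it can build the command list. This is a deliberately naive sketch (the struct is hypothetical, and a real engine sorts per mesh or per particle batch, not per triangle), but it shows why the cost would track the number of blended draws rather than the screen area they cover:

#include <stdlib.h>

typedef struct {
    float view_z;    /* distance from the camera */
    int   mesh_id;   /* whatever the engine submits to the GPU */
} TransparentDraw;

/* Back-to-front ordering: farthest first, so closer surfaces blend over it. */
static int by_depth_desc(const void *a, const void *b)
{
    float za = ((const TransparentDraw *)a)->view_z;
    float zb = ((const TransparentDraw *)b)->view_z;
    return (za < zb) - (za > zb);
}

/* Runs every frame before the command list is built; cost grows with the
 * number of blended draws, not with the pixels they cover. */
static void sort_transparents(TransparentDraw *draws, size_t count)
{
    qsort(draws, count, sizeof draws[0], by_depth_desc);
}

int main(void)
{
    TransparentDraw d[3] = { {2.0f, 1}, {9.5f, 2}, {5.0f, 3} };
    sort_transparents(d, 3);
    /* d is now ordered: mesh 2 (z=9.5), mesh 3 (z=5.0), mesh 1 (z=2.0) */
    return 0;
}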
OK, first of all, you can't measure performance between two different devices in watts. Secondly, no matter what you compare it to, it's always going to be a 30% increase, which is large no matter what the increase in wattage is.
The kits with final speeds (at least the GPU at 550MHz) are 1.5 years old (more or less summer 2011), so it doesn't make sense to use older kits than those with "gimped" performance.

Thanks for the info. That makes the difference even weirder...
Christ dude, calm down.

Do you think I would have bothered writing a lengthy explanation if I were upset?
You guys are silly.
Inefficiencies are large problems on larger scales. An iPad can use up to what, 4 watts at full load? We're talking about like 2 iPads and some change worth of wattage. The readily observable graphics increase that would bring to the Wii U would be far less noticeable than a 30% power bump in a 200 watt system.
That's like saying that if you increase the quality of a character model from 1000 polys to 1300 polys, it'll be a less noticeable difference than if you increase a 6700-poly model to 8700 polys. Not only is that very questionable, but it's also quite a pointless discussion. You're increasing the system's power usage by 30%; all we can say about that is that it should allow for a large increase in performance for that system. That won't change no matter how many iPads you compare the power increase to.
Length of post is the greatest measurement of MAD.

Nah, that doesn't work for me. My posts get shorter the angrier I am. Different architecture: my MPU (MAD processing unit) has a built-in compression stage.
You seem to run at 45 MOPS (MAD operations per second) per line. At 3 lines of anger, you're roughly at a meager 135 MOPS.
Your MOPS-per-line efficiency can probably increase even more, and with the amount of space you have on a page, you can increase the MOPS per white space. You're currently at a rough 32ws (white space); if you can bump that up to 40, that would be a 30% increase.
With a name and avatar like mine are you really going to question my sensibility?
What sensibility?
What is more noticeable, a 30% increase in fuel efficiency in a Prius or a 30% increase in fuel efficiency in a full-size pickup? You will be saving hundreds if not thousands of dollars more per year on the pickup in fuel costs.
Increasing the Wii U's performance by 30% will have a negligible impact on the graphics. Increasing some monster 200-watt next-gen platform's performance by 30% could see dramatic increases in scene complexity which the Wii U could only dream of.