Well, the first post in this thread, for example.
It also says three enhanced Broadway cores, which may not hold up.
But okay: R700. Why couldn't the customization process account for the difference?
Have you forgotten just that quick?
http://www.neogaf.com/forum/showpost.php?p=39805199&postcount=8056
http://www.neogaf.com/forum/showpost.php?p=39806466&postcount=8065
That seems true.
Whatever customizations have been made, we're still seeing it described as "DX10.1" level.
I fully grant they customized the part to a certain number of shaders, to work with embedded DRAM, to integrate with the CPU on the same die or module, to integrate with the hardware audio subsystem, to work with the chosen memory configuration, to reduce power utilization during streaming media playback or home screen use.
What I doubt is that some miraculous, black-box "GPGPU" hardware has been grafted on.
Is the 1GB of RAM shared between system memory and graphics?
That is, in fact, true.
Didn't the guy who leaked the info say it was beyond that?
I don't think anyone who has an understanding of it believes that, though. I do believe those same people think they've addressed the inefficiency of said architecture in some fashion.
Wouldn't it be cheaper and easier to just use a later, DX11 design from AMD than to spend the time and money rooting around in the core architecture to put a band-aid on the DX10.1 hardware? A DX11 design where, presumably, those inefficiencies have already been addressed? And if the operation of the part has been so sufficiently altered as to make a significant difference in GPGPU tasks, why do all the leaks still say, "yeah, it's basically like an R700"?
Does anyone know when GPU7's development was finalised?
It's still not a Broadway, or three Broadways at that; that's dissing it. For starters, triple core is not as simple as making three dies and gluing them together with tape; there are shared components if it's done right. It's a big evolution, otherwise we'd be calling the first Core Duos (and the Pentium Ms in between) Pentium 3 Tualatins.

Bah. I still call them Tualatins ;p
You're slightly confusing the nomenclatures here. The eXXX series is Freescale's nomenclature (after they spun off from Motorola); IBM never changed theirs - i.e. an IBM G3 is still a ppc7xx, and a G4 a ppc7xxx.

Also, bear in mind PPC 7xx development was phased out in favour of PowerPC e500, which unlike PPC 7xx has multi-core support. That, PowerPC e600, or even PowerPC 476FP seem more likely to be used, seeing as they're not deprecated and are more up to date; producing custom versions of them could be cheaper, seeing as they're already in production. (PPC 7xx is also still in production, obviously, but turning it into such a different core, with a die shrink on top, would require dedicated investment.)
It is a member of the R700 family, but it has been modified.
Modified how? Do you even know? Because you didn't seem to even understand the thing about the CPU you were trying to leak. Was it changed in one of the many ways I just listed that don't fundamentally alter the performance per watt or GPGPU efficiency? If it was altered in such a way as to change the performance characteristics of the R700 design, are we talking about a 5% gain? 10%? 15%? 25%? 50%? Because a lot of people want to take the word "modified" to mean 50% faster, but it could just as easily mean "we changed some I/O stuff to talk to the PPC cores".
What's the power draw of 2GB of RAM + WiFi + wireless controller video feed transmission?

Don't know. Would also be keen to know - as it helps fill in the blanks.
Unfortunately I'm not familiar enough with optical drive wattage to make a comment. And with the GPU it's tough to say in that regard, because we had that person from Tezzaron talking about TSMC and stacking, so there is a possibility that, if he wasn't speaking hypothetically, the GPU may not reach 30 W to begin with.
They didn't just 'change some IO stuff to talk to the PPC cores'. Sorry, that's all I can say.
40 W? I thought the power consumption of the console was 75 W, which should make it possible for the GPU to use about 40 W itself.
For reference, the HD 7750 is a 55 W, 819.2 GFLOPS card.
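For what it's worth, that 819.2 figure is just the usual peak maths. A minimal sketch, assuming the commonly listed HD 7750 specs (512 stream processors at 800 MHz, with a multiply-add counted as 2 FLOPs per shader per clock):

```python
# Peak GPU FLOPS back-of-envelope: shaders * clock * FLOPs per shader per cycle.
# These are the commonly listed HD 7750 numbers, nothing Wii U specific.
shaders = 512        # stream processors
clock_ghz = 0.8      # 800 MHz core clock
flops_per_cycle = 2  # one multiply-add per shader per cycle = 2 FLOPs

print(shaders * clock_ghz * flops_per_cycle)  # 819.2 GFLOPS
```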
75 watts is the max the power supply is rated for. Nintendo says typical usage is around 45 watts in game.
Even with a full 30 watts just for the GPU, the Wii U's GPU would have to be nearly as efficient as AMD's latest GCN architecture manufactured at 28nm to reach 600 GFLOPS. Given that I think it's unlikely that A) it will be manufactured at 28nm, B) its power usage will be more than about 20 watts, or C) whatever they've done to the R700 is enough to nearly double its performance per watt, I think it's pretty safe to discount 600 GFLOPS as a realistic expectation.
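To put rough numbers on that argument, here's a quick sketch of the efficiency such a chip would need; the power budgets are just the guesses floating around this thread, not confirmed figures:

```python
# GFLOPS-per-watt the Wii U GPU would need to hit 600 GFLOPS at various
# guessed chip power budgets (none of these wattages are confirmed).
target_gflops = 600.0

for gpu_watts in (30.0, 20.0):
    print(f"{gpu_watts:.0f} W -> needs {target_gflops / gpu_watts:.1f} GFLOPS/W")

# For comparison, a 28nm GCN card like the HD 7750 works out to roughly
# 819.2 GFLOPS / 55 W board power ~= 14.9 GFLOPS/W (board power, so the
# chip alone does somewhat better).
```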
At 20W, at something similar to the E6760's 16.5 GFLOPs/W, that would put "GPU7" at like 330 GFLOPs...
Does anyone have equivalent figures for the RSX and Xenos?
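The arithmetic behind that ~330 GFLOPS guess, sketched out; the 20 W budget is an assumption from this thread, and the efficiency figure comes from the E6760's commonly listed 576 GFLOPS at 35 W:

```python
# Estimated peak throughput = assumed GPU power budget * assumed efficiency.
assumed_watts = 20.0               # guessed Wii U GPU power budget (not confirmed)
e6760_gflops_per_watt = 576 / 35   # E6760: 576 GFLOPS at 35 W TDP, ~16.5 GFLOPS/W

print(round(assumed_watts * e6760_gflops_per_watt))  # ~329 GFLOPS
```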
What is the best source for this? I googled for a few minutes and some sources say "45 watts typical power usage, 75 watts max", while others say something like "45 watts idle, 75 watts under load".

Iwata, via Nintendo Direct apparently. 40W, actually.
According to Iwata: http://www.neogaf.com/forum/showthread.php?t=492101&highlight=iwata+direct
The Wii U is rated at 75 watts of electrical consumption.
Please understand that this electrical consumption rating is measured at the maximum utilization of all functionality, not just of the Wii U console itself, but also the power provided to accessories connected via USB ports.
However, during normal gameplay, that electrical consumption rating won't be reached.
Depending on the game being played and the accessories connected, roughly 40 watts of electrical consumption could be considered realistic.
This energy-saving specification has made this compact form-factor possible.
OK, tx. If that is a full-load figure then 600 GFlops looks a bit unlikely indeed. Unless it's a mobile GPU.
Xenon is about 100 GFLOPS. Cell is around 200 GFLOPS.
WUT.
Wii's processor was 2.9 GFLOPS according to Wikipedia. Surely Xenon is not 30 times as quick, even though it has three cores?
The world of tech is more and more fascinating by the minute.
All about the flops again
Just checking in ...
Just to add to the mix here, the TDP given for GPUs is also the max-under-load figure, right? We shouldn't really be discounting things based on their max TDP compared to the Wii U's minimum/basic power draw, right?
Which leaks do you refer to?
Considering how long they've been working on the GPU, it may not have been cheaper for what Nintendo may have been going for. After all, didn't you just mention GCN? With that, AMD only recently addressed those inefficiencies, and we both know Nintendo would be highly unlikely to go with something that new. So with that said, the only real option is to modify a VLIW part from AMD. My take is that, looking at when development started, they chose that GPU, focused on improving its compute capability, and "tacked on", at worst, capabilities above the 10.1 level due to dev input.
Excluding further tweaks, late Dec./early Jan.
The 40W figure appears to be in reference to during gaming, and dependent on what peripherals are attached.
45 watts is the Wii U under load.
I wouldn't say that's 100% accurate...
From Cheesemeister's Japanese translation:
Iwata: "Depending on the game being played and the accessories connected, roughly 40 watts of electrical consumption could be considered realistic."
And remember, Nintendo are trying to market this as energy efficient. They could have taken this figure whilst playing a Virtual Console NES game for all we know!
Electricity bills have been a big talking point in the UK over the last couple of years. I think it could be a USP with some value here when set alongside any potential next MS/Sony machine. I'm no Greenpeace warrior, but I do love to buy energy-efficient devices and save wherever I can.
Imo: Forget the FLOPS when you are comparing different architectures. It just confuses people and leads to wrong conclusions. You can't put CPU performance into a single number.
Alright, these explanations indicate that the situation is more obscure and complex than I imagined. I am out of the thread. Thank you guys for letting me understand the topic a bit better.

FLOPS are a very poor way to compare the performance of different CPUs.
As an example, the i7 920 is theoretically capable of 42.5 GFLOPS while Xenon is theoretically capable of 100 GFLOPS, but in reality the i7 920 is several times faster than Xenon.
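To show where those theoretical numbers come from (and why they mislead), here's a sketch using the commonly cited clocks and per-cycle throughputs. The FLOPs-per-cycle values are assumptions about the widest SIMD path on each chip, and Xenon's total is often quoted higher (~115 GFLOPS) when the dot-product hardware is counted:

```python
# Theoretical peak = cores * clock (GHz) * FLOPs per core per cycle.
# These per-cycle figures assume the widest SIMD unit with a multiply-add
# counted as 2 FLOPs, which is exactly why the totals say little about
# real, sustained performance.
def peak_gflops(cores, clock_ghz, flops_per_cycle):
    return cores * clock_ghz * flops_per_cycle

print(peak_gflops(1, 0.729, 4))   # Wii "Broadway": paired singles, 2 FMAs/cycle -> ~2.9
print(peak_gflops(3, 3.2, 8))     # Xenon: 4-wide single-precision VMX FMA       -> 76.8
print(peak_gflops(4, 2.66, 4))    # i7 920: double-precision SSE add + mul       -> ~42.6
```

The gap between 76.8 and the oft-quoted ~100 for Xenon comes down to which extra units get counted, which is the point being made above: these are paper numbers, not real-world performance.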