Being sold at a loss was a per unit loss, and that did not include things like R+D
Figured as much. Any idea of the cost of the CPU? Apparently that thing cost the most out of all the onboard stuff...?
You're stopping yourself short from seeing the full picture there.
The impression I'm getting now is that Nintendo made poor choices for the hardware and basically shot themselves in the foot. Why have they persisted with the PPC 750 CPU architecture? It's a 15-year-old architecture, give or take, and frankly, no matter how modified it is, it's got nothing on modern x86 or IBM CPU architectures. Where else is the 750 used? Nowhere outside of Nintendo's products; IBM dumped it over a decade ago with the last of the Apple iBooks.
The only thought I have for why Nintendo persisted with the PPC 750 is backwards compatibility and a reluctance to embrace and learn a new architecture. Having used the PPC 750 since the Gamecube, sticking with it means they can avoid having to reskill and can reuse a lot of assets and tools they've developed over the years. Nintendo do seem to go out of their way to avoid having to learn or embrace new architectures, evident in their continued support of the PPC 750, fixed-function GPUs, and utilising the same base architecture concept from the Gamecube through to the Wii U. Seems to me they've spent more money trying to adapt their existing architectures and beef them up for HD gaming, like dicking around making a multi-core PPC 750, than what that money could have bought had Nintendo invested it in the best architecture AMD and IBM could have provided.
Spend $100 beefing up a 750 CPU. Result = still pathetically bad performance
Spend $100 buying the best CPU IBM/AMD have available. Result = very good performance, but we'd have to learn a new architecture, develop new tools and assets, upskill and retrain staff, and we'd also lose backwards compatibility
Generally, if you have your own CPU design, the "cost" for the chip is very simple: it's the cost of the silicon. You take the price of the full-size original silicon wafer, divide that by how many CPUs you can cut out of it given the size of the CPU, then factor in yield costs (how many CPUs are invalid due to process errors and such).
4. For reference's sake, the Apple A6 is fabricated in a 32 nm CMOS process and is also designed from scratch. Its manufacturing cost, in volumes of 100k or more, is about $26 - $30 a pop, degrading to about $15 each over 16 months.
a. The Wii U only represents like 30M units per annum vs. the iPhone, which is more like 100M units per annum. Put things in perspective.
He didn't link to it, but those numbers are from Chipworks. It's in the GPU die shot OP. Not linking because I'm on my phone.
Essentially, all things being equal (which isn't the case, as some designs can cause worse yields), the bigger the CPU, the fewer you get from a wafer and the more expensive it is. Your data on the cost of the A6 is interesting; do we have any die size comparisons to the Wii U's CPU, is it bigger or smaller?
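To make that arithmetic concrete, here's a minimal Python sketch of the per-die cost calculation described above. The wafer price, die area and yield below are made-up placeholders (not real Espresso or A6 figures), and the dies-per-wafer formula is the usual rough approximation that discounts the partial dies lost at the wafer edge.

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    # Rough standard approximation: wafer area over die area, minus an
    # edge-loss term for the partial dies around the rim.
    radius = wafer_diameter_mm / 2.0
    return int(math.pi * radius * radius / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

def cost_per_good_die(wafer_cost, wafer_diameter_mm, die_area_mm2, yield_rate):
    # Wafer price divided by the number of dies that survive process defects.
    good_dies = dies_per_wafer(wafer_diameter_mm, die_area_mm2) * yield_rate
    return wafer_cost / good_dies

# Placeholder inputs only, NOT real figures for any of the chips discussed here.
print(round(cost_per_good_die(wafer_cost=5000.0,      # hypothetical wafer price
                              wafer_diameter_mm=300.0,
                              die_area_mm2=33.0,       # hypothetical die size
                              yield_rate=0.85), 2))    # hypothetical yield
```

As the exchange above notes, a bigger die means fewer candidates per wafer and usually worse yield, so the per-die cost climbs on both terms at once.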
First we'll go with the PS2. The PS2 was still a MIPS architecture, but rather than holding itself back for backwards compatibility, they kept the PSone MIPS chip and used it for I/O when not running PSone games.
Nintendo could have done this as well, seeing as PPC750 CPUs have been used for embedded systems for years; just die-shrink it and it's efficient enough.
Nintendo could have done the same here: they could have gone with a Power7 part and then fitted an extra PPC750 core in there, no biggie. And seeing how the 3DS apparently carries over neither the high frequencies nor the backwards-compatible physical components (different GPU design/heritage, and 2 ARM11s rather than an ARM7+9 config), doing the same for the Wii U could have been an option as well.
As for why they didn't, that's a simpler thing to explain: the PPC750 was designed from the ground up for low power consumption. Way back when (1997), it already boasted using half the energy of a Pentium II whilst delivering the same performance, and these old designs tend to be energy efficient by today's standards, seeing as Intel is still trying to fit Pentium 1 cores into some of its designs (hint: energy efficient rather than powerful, but that's a different approach).
The successor PPC7400 wasn't nearly as energy efficient while being more scalable (per clock it delivered pretty much the same performance), and the Power5/PPC970 architecture and later simply weren't meant for embedded devices (and were still in the same per-clock performance ballpark, albeit slightly worse). To make things worse, even when IBM had 30W PPC970/G5 parts at hand, their northbridges/controllers weren't energy efficient enough, and with Apple no longer pressuring them to do so they still aren't, haven't been for newer generations and probably never will be. (Fun fact: some PPC970 northbridges had a PPC405 on them; dropping a Broadway onto such designs could have been viable.)
Not so on CPUs, where, enhancement instruction sets aside (AltiVec/VMX, MMX, 3DNow!, SSE), the core functionality stays largely the same.
Pipeline stages, architecture, instruction enhancements and scalability aside, the PPC750 is very similar to processors as recent as the Core 2 Duo.
Because those solutions were on the table for Nintendo. I separated them into two scenarios because I figured they were different enough approaches (one where said CPU gets "banned" from the spec but is used for background tasks, versus one where it's part of the actual spec and embedded on the main CPU nonetheless).
I don't see what relevance that has to the Wii U?
My point was: probably because their only choice for keeping up with the punch per watt of the PPC750 would have been the PPC476FP.
Yes they could have, and I'm wondering why they didn't.
Your point? That they should have invested more money in scaling a higher-spec part down into the same sub-2GHz footprint and gotten the same performance as a PPC750 in return?
Nintendo seem to have spent significant money having IBM engineer a highly customised PPC 750 CPU. Multi-core, 6x the cache, 25-30% higher clock speed, higher bus, more logic; these are significant changes and far beyond 'customisation'. This CPU has what, at least 3-4x the transistor count of any other 750? It's also multi-core, something the architecture was never designed for. Also, given all these changes, it would have required a brand new fab process, again increasing costs significantly.
That's precisely my point. They didn't go with 3 PPC750 cores just to maintain compatibility; they did so because it really was the best solution for the power draw not to go through the wall with the available options.
The new Wii Mini is what, $50 CAD? Seems like if all Nintendo wanted was BC with the Wii, they could have implemented the entire Wii SoC into the Wii U for a fraction of the cost of what they've spent building this multi-core PPC 750.
How so? You have current-generation HD platforms whose CPUs get murdered by this. Jaguar on the PS4/Xbox One is also nothing to write home about.
PPC 750 is not suited for use in a modern HD game console. It's an architecture that simply wasn't designed for the demands of modern-day processing, let alone playing HD video games. Look at its SIMD capabilities, its number crunching; this thing is better suited to a smartphone than it is to an 'HD games console'.
First and foremost, retaining backward compatibility. One also has to take into account that Nintendo is the type of company that becomes a regular customer (the "don't fix it if it's not broken" formula). I'm betting they kept AMD in the loop without going to Nvidia because of their pre-existing relationship and the fact they also knew the GC/Wii inner workings quite intimately.
Which again raises the question of why Nintendo persisted with IBM.
Already touched on that above.
And again, the only logic I can see is that they wanted to avoid at all costs migrating to a new architecture. They have over a decade of experience on the PPC750 and no doubt significant tools, resources, and assets developed for it, so they wanted to avoid changing. Nintendo's primary motivation, from what I can see, is a reluctance to embrace a new architecture.
I know that.
AFAIK the PPC750 has no AltiVec.
That's not saying much when you had Core 2 Duos clocked at 3.5 GHz.
A Core 2 Duo would mop the floor with this chip.
And yet it's powering a modern game console, doing things that this passing gen were considered the domain of fp/simd 'monsters'.
I don't agree with this.
PPC 750 is not suited for use in a modern HD game console. It's an architecture that simply wasn't designed for the demands of modern-day processing, let alone playing HD video games. Look at its SIMD capabilities, its number crunching; this thing is better suited to a smartphone than it is to an 'HD games console'.
And yet it's powering a modern game console, doing things that this passing gen were considered the domain of fp/simd 'monsters'.
If anything, WiiU should have hosted more of the 'smart-phone-class' ppc750cl.
I'm referring to the fact the WiiU runs the same software as the other consoles on the market, despite the fact the others are considered to have particularly potent fp/simd CPUs. Off the top of my head:
Can you provide an example of what you're referring to?
Are we ever going to find out any more about the GPU, or is it now just a case of look at the games to see how powerful it is?
I'm referring to the fact the WiiU runs the same software as the other consoles on the market, despite the fact the others are considered to have particularly potent fp/simd CPUs. Off the top of my head:
That's not impressive at all....
Telling me that the Wii U can run games just as good as consoles 7 years older than it. That's like the lowest expectation one could have.
Better actually. Need For Speed does look better on WiiU and so does Trine 2.
I'm hoping that there will be some software at E3 that'll prove once and for all that the WiiU is more capable than current consoles.
I'm still waiting for something on par with the Zelda demo.
On paper, those 7-year-old CPUs are quite comparable fp/simd-wise to the CPUs in the upcoming consoles. Whether that impresses you or not is beyond the scope of this conversation.
That's not impressive at all....
Telling me that the Wii U can run games just as good as consoles 7 years older than it. That's like the lowest expectation one could have.
From what we know about the dev kits, they have only increased in power since they were first released. It's more likely to be how you hope: that the retail Wii U is more powerful than the one running the Zelda and bird demos.
I have a feeling that the Zelda Tech Demo ran on a more powerful devkit than the final product. I hope it's the opposite, though. They could just release the Tech Demo on the eShop for people to get excited. I mean, they already made a Zelda Community on Miiverse.
Keeping the old architecture doesn't seem to have paid off, since Nintendo is still struggling to get the games out on time.
They have to drop these old chips at some point; delaying this more and more only makes it harder. Imagine when they have to make the successor of the Wii U; are they going to make an 8-core PPC750?
I'm no pro in technical stuff like this, but since the Wii U uses a GPGPU (which, if I understand correctly, is a GPU and CPU on one chip or something), wouldn't it be easier for Nintendo to just put that GPGPU into their new console for full backwards compatibility?
EDIT: What I'm trying to say is, they could use the GPGPU for BC and a completely new CPU/GPU/GPGPU for a new console, like Sony did with the 60GB PS3s back then, which had a PS2 CPU built in, and so on.
You lost me there. How could they use GPGPU for BC?
Exhibit 2: Trine2. The physics sim runs at least as good on the WiiU as it does on the rest of the consoles.
Better in fact. It was confirmed by the devs that the Wii U version actually runs the more advanced PC physics model rather than the nerfed PS360 one.
Source?
If this is true, then the Wii U CPU might be better suited for certain things like Physics than the Xbox 360 CPU. Nintendo's Hardware Engineer said that it was a "memory intensive" architecture. I wonder what he meant by that. I mean, people have been saying that the RAM isn't as fast as the 360/PS3 RAM, but I think there are more things we don't know yet.
But in the end, it doesn't really matter for me anymore. I saw the Zelda and Bird Tech Demos. I was pleased with what the system can do. If the new games at E3 2013 will look as good as those Demos, then I don't have anything to worry about. Technically I'm a graphics whore, but visuals really lost their importance to me. It's still nice to have cutting edge tech, but that's why I have a PC.
http://www.eurogamer.net/articles/digitalfoundry-trine-2-face-off
"On top of that the PC game also adopts PhysX enhancements, which mildly improve the quality and scope of destructible objects and surfaces - something that we see on Wii U too. The Wii U version also deserves credit, of course. The game not only features many of the graphical upgrades found on the PC, but does so while delivering better image quality than the 360 and PS3 without compromising on the solid frame-rate"
I'm referring to the fact the WiiU runs the same software as the other consoles on the market, despite the fact the others are considered to have particularly potent fp/simd CPUs. Off the top of my head:
Exhibit 1: NFS: Most Wanted. A contemporary sandbox racing game. The sandbox simulation runs at least as good on the WiiU as it does on ps360.
Exhibit 2: Trine2. The physics sim runs at least as good on the WiiU as it does on the rest of the consoles.
Exhibit 3: Zen Pinball 2. Ditto as with Trine2.
I can't think of anything else at this time running on all three platforms that would be physics-intensive. But if you can think of a counterexample where the WiiU struggles with a simulation that runs fine on the ps360, I'd be interested to see that.
I was getting pretty skeptical after playing Batman Arkham City. The framerate was a disaster. And it didn't even look THAT good.
Pretty sure there's been Wii U footage with visible screen tearing. Can't remember a specific game at this time, however.
The thing is that Nintendo seems to require developers to enable V-sync (most likely to avoid screen tearing at the cost of higher framerates). Anybody think that if Nintendo didn't require V-sync to be enabled, games like BO2 would almost never drop below 50FPS, or is that still more CPU-dependent?
I thought it was only Darksiders(?)
Darksiders and others.
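On the V-sync point above, here's a rough, purely illustrative Python sketch of why forced V-sync with double buffering costs framerate: a frame that takes even slightly longer than one 60 Hz refresh interval has to wait for the next one, so it drops to 30 fps instead of the mid-50s it might manage with tearing allowed. The render times below are invented examples, not measurements from any game.

```python
REFRESH_HZ = 60.0
VSYNC_INTERVAL_MS = 1000.0 / REFRESH_HZ   # ~16.7 ms per refresh

def effective_fps_with_vsync(render_ms):
    # With double buffering, the frame is shown on the first refresh
    # boundary after rendering finishes, so it waits out whole intervals.
    intervals_waited = -(-render_ms // VSYNC_INTERVAL_MS)   # ceiling division
    return 1000.0 / (intervals_waited * VSYNC_INTERVAL_MS)

for render_ms in (15.0, 17.5, 20.0):       # hypothetical frame render times
    print(render_ms, "ms ->",
          round(1000.0 / render_ms), "fps unsynced,",
          round(effective_fps_with_vsync(render_ms)), "fps with v-sync")
```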
It's certainly unsuited to run an emulator dynarec/JIT core, but it can be used for other things to make up for missing overhead.
A GPU is extremely unsuited to running an emulator.
Emulators need extremely fast single-threaded performance, something a GPU does not have.
The per-thread performance of a GPU is very low; the Wii CPU would be faster!
You need a ton more performance than the CPU you are emulating due to the need to translate the code.
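To make that concrete, here's a toy Python sketch of the serial fetch/decode/execute loop every interpreter or dynarec ultimately rests on; the two-instruction guest ISA is entirely made up. Each guest instruction expands into several host-side steps, and the next one can't be handled until the current one has updated guest state, which is why raw per-thread speed matters far more than the wide parallelism a GPU offers.

```python
# Hypothetical 2-instruction guest ISA, just to show the shape of the problem.
GUEST_PROGRAM = [("ADDI", 0, 5), ("ADDI", 0, 7), ("HALT",)]

def run(program):
    regs = [0] * 32            # guest register file
    pc = 0                     # guest program counter
    while True:
        op = program[pc]       # fetch
        name = op[0]           # decode
        if name == "HALT":
            return regs
        if name == "ADDI":     # execute: one guest op becomes several host ops
            _, rd, imm = op
            regs[rd] = (regs[rd] + imm) & 0xFFFFFFFF
        pc += 1                # only now can the next guest op be touched

print(run(GUEST_PROGRAM)[0])   # -> 12
```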
I was getting pretty skeptical after playing Batman Arkham City. The framerate was a disaster. And it didn't even look THAT good.
I thought it was clear that the devs had to work with older dev kits, and still not much is known about the console.
Yeah, but that doesn't really make any sense to me. You see, even if they had weaker devkits, shouldn't the game run better on the final hardware nonetheless? I don't want to imagine how laggy that game was without the final hardware. The extra bump in processing power should at least have improved the performance without any tweaking from the developers' side. It's not like they intentionally lowered the framerate below 20 in some of the areas.
The final hardware wasn't so much more powerful, but the software tools were just not ready.
A GPGPU can make up for throughput. Emulation has quite a few latency icebergs, though.
It's certainly unsuited to run an emulator dynarec/JIT core, but it can be used for other things to make up for missing overhead.
How do you suggest such an emulation met the latency requirements of, e.g., VU0 macro mode?
For instance, with clever coding, PS2 emulation could do VU0/VU1 emulation on a modern GPU's stream processors, as you could for the Cell's SPEs; provided there's a deficit in floating-point performance on the CPU, any FPU portion could be emulated on/transferred to a GPU part.
If you actually suggest that the translation pass was done by GPGPU - that's a bit far-fetched. It would be spectacularly inefficient, as the translation task is inherently serial, and the best serial processor in the system is still the CPU. You'd be using a cappuccino machine to drive nails.
Not just that: if the GC/Wii were being emulated in software here, it would probably benefit from doing the TEV pipeline manipulation translation "as is" on the GPU rather than translating and converting it on the CPU into something the GPU could understand. The fact that GPUs can now interpret and run some "real" code is a big help, just like when you have lots of DSPs: certainly not suitable to be the brains, but suitable to help.
I'll take your word for it. I'm not a programmer, so I was just hypothesizing; seems like I overstepped my bounds.
A GPGPU can make up for throughput. Emulation has quite a few latency icebergs.
How do you suggest such an emulation met the latency requirements, e.g. VU0 macro mode?
If you actually suggest that the translation pass was done by GPGPU - that's a bit far-fetched. It would be spectacularly inefficient as the translation task is inherently serial, and the best serial processor in the system is still the CPU. You'd be using a cappuccino machine to drive nails.
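As a back-of-envelope illustration of that throughput-versus-latency distinction, here's a small Python sketch with invented numbers for dispatch latency and per-element cost: one big VU1-style batch amortises the fixed round-trip cost, while the tiny synchronous requests implied by something like VU0 macro mode are dominated by it.

```python
# Invented numbers: assume each GPU dispatch costs a fixed 20 microseconds of
# launch/readback latency, then processes elements very quickly.
DISPATCH_LATENCY_US = 20.0      # hypothetical fixed cost per GPU round trip
PER_ELEMENT_US = 0.001          # hypothetical per-element compute cost on the GPU

def offload_time_us(total_elements, batch_size):
    batches = -(-total_elements // batch_size)   # ceiling division
    return batches * DISPATCH_LATENCY_US + total_elements * PER_ELEMENT_US

# VU1-style batched work: one big dispatch, latency amortised away.
print(offload_time_us(1_000_000, batch_size=1_000_000))   # ~1,020 us
# VU0-macro-style work interleaved with CPU code: tiny synchronous requests,
# so the fixed round-trip latency dominates completely.
print(offload_time_us(1_000_000, batch_size=4))            # ~5,001,000 us
```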
Looks like this Chipworks stuff got very muddy and little clear info was gleaned.
What I take most out of it is that the Wii U has either 160 or 320 shaders, almost surely. If I start seeing games a cut above PS360, I'll believe 320 (but by then nobody will care, as those games won't show till next gen ships). If it keeps this pattern of slightly inferior PS360 ports, I'll believe 160. Probably the latter.
No you won't. You'll keep crying about how underpowered it is. I mean, NFS is out, videos and pics PROVE the Wii U is a cut above the 360, but you just refuse to give in. You'll just claim it's not 'enough'.
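For reference, here's the back-of-envelope throughput arithmetic behind those 160-vs-320 shader guesses, as a small Python sketch. The 550 MHz GPU clock and the 2 FLOPs per ALU per clock (one multiply-add) are the commonly assumed figures rather than anything confirmed in this thread, so treat the results as rough theoretical peaks only.

```python
# Theoretical peak throughput for the two shader-count guesses discussed above.
GPU_CLOCK_HZ = 550e6          # commonly assumed Wii U GPU clock; an assumption here
FLOPS_PER_ALU_PER_CLOCK = 2   # one multiply-add per ALU per cycle

for shader_alus in (160, 320):
    gflops = shader_alus * FLOPS_PER_ALU_PER_CLOCK * GPU_CLOCK_HZ / 1e9
    print(f"{shader_alus} ALUs -> ~{gflops:.0f} GFLOPS theoretical peak")
```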