
WiiU technical discussion (serious discussions welcome)

StevieP

Banned
Figured as much. Any idea of the cost of the CPU? Apparently that thing cost the most out of all the onboard stuff...?

Not a clue. Chipworks probably has a more accurate estimate than most, seeing as it's their business. Between the MCM and the GamePad, I think the MCM still probably costs more. The costs will fall more quickly for it over time/production than for the GamePad, though, if I were to guess.
 

stanley1993

Neo Member
Figured as much. Any idea of the cost of the CPU? Apparently that thing cost the most out of all the onboard stuff...?

"4. For reference sake, the Apple A6 is fabricated in a 32 nm CMOS process and is also designed from scratch. It’s manufacturing costs, in volumes of 100k or more, about $26 - $30 a pop. Over 16 months degrade to about $15 each
a. Wii U only represents like 30M units per annum vs iPhone which is more like 100M units per annum. Put things in perspective.
5. This Wii U GPU costs more than that by about $20-$40 bucks each making it a very expensive piece of kit. Combine that with the IBM CPU and the Flash chip all on the same package and this whole thing is closer to $100 a piece when you add it all up"

Doesn't this point to the GPU being the most expensive part?
 
The impression I'm getting now is that Nintendo made poor hardware choices and basically shot themselves in the foot. Why have they persisted with the PPC 750 CPU architecture? It's a 15-year-old architecture, give or take, and frankly, no matter how modified it is, it's got nothing on modern x86 or IBM CPU architectures. Where else is the 750 used? Nowhere outside of Nintendo's products; IBM dumped it over a decade ago with the last of the Apple iBooks.

The only reason I can think of for why Nintendo persisted with the PPC 750 is backwards compatibility and a reluctance to embrace and learn a new architecture. Having used the PPC 750 since the Gamecube, sticking with it means they can avoid having to reskill and can reuse a lot of the assets and tools they've developed over the years. Nintendo do seem to go out of their way to avoid learning or embracing new architectures, as evidenced by their continued use of the PPC 750, fixed-function GPUs, and the same base architecture concept from the Gamecube through to the Wii U. It seems to me they've spent more money adapting their existing architectures and beefing them up for HD gaming, like dicking around making a multi-core PPC 750, than what that money could have bought had they invested it in the best architecture AMD and IBM could have provided.

Spend $100 beefing up a 750 CPU. Result: still pathetically bad performance.
Spend $100 buying the best CPU IBM/AMD have available. Result: very good performance, but we'd have to learn a new architecture, develop new tools and assets, upskill and retrain staff, and we'd also lose backwards compatibility.
You're stopping short of seeing the full picture there.

I'll give you two cases in point: the PS2 and the DS.

First, the PS2. The PS2 was still a MIPS architecture, but rather than holding the design back for backwards compatibility, Sony kept the PSone MIPS chip and used it for I/O when not running PSone games. Nintendo could have done this as well, seeing as PPC750 CPUs have been used in embedded systems for years; just die-shrink it and it's efficient enough.

Second, the DS. The DS had a dual-processor configuration (on a single package) with an ARM7 and an ARM9, the ARM7 being there because of the GBA. My point? Nintendo could have done the same here: go with a Power7 part and fit an extra PPC750 core in there, no biggie. And seeing how the 3DS apparently lacks both high frequencies and backwards-compatible physical components (a different GPU design/heritage, and two ARM11s rather than an ARM7+ARM9 config), doing the same for the Wii U could have been an option as well.


As for why they didn't, that's simpler to explain: the PPC750 was designed from the ground up for low power consumption. Way back when (1997) it already boasted that it used half the energy of a Pentium II while delivering the same performance, and these old designs tend to be energy efficient by today's standards; see how Intel is still trying to fit Pentium 1 cores into some of its designs (hint: energy efficient rather than powerful, but that's a different approach).

The successor PPC7400 wasn't nearly as energy efficient while being more scalable (per clock it delivered pretty much the same performance), and the Power5/PPC970 architectures and later simply weren't meant for embedded devices (and were still in the same per-clock performance ballpark, if slightly worse). To make things worse, even when IBM had 30 W PPC970/G5 parts at hand, their northbridges/controllers weren't energy efficient enough, and with Apple no longer pressuring them they still aren't, haven't been for newer generations, and probably never will be. (Fun fact: some PPC970 northbridges had a PPC405 on them; dropping a Broadway into such a design could be viable.)

That leaves a company like Nintendo, building embedded boxes, with only a few options: go mad and strap some Power7 cores in there (high power consumption in the final product); change architecture to low-power ARM or x86 designs/configurations; or go for the best bang for the buck in IBM's embedded line-up.

And what they figured out is that, at least for them and for their brief, IBM's best bang for the buck is still the thing they cherry-picked 14 years ago. The PPC476FP lacks paired-singles 2x32-bit floating point, so it would actually fare worse (and break compatibility) in that regard; PPC7xx development was discontinued in favour of the PPC e500 (by Freescale), which falls outside the IBM/Nintendo agreement and, despite being an evolution of sorts, breaks compatibility (more pipeline stages, I believe). The e600 is a regular PPC7400 when it comes to stages and the like, but it suffers from the same problems (Freescale is the one that continued development), and a PPC7400 is not a PPC750 either; and the PPC A2 is a freaking in-order design. That covers all the off-the-shelf embedded solutions available, leaving the Power 5/6/7 lines.

My point being: going higher up the IBM processor family would easily double the box's energy requirements. Never forget this is Nintendo; backward compatibility is a factor in not leaving the PPC750 behind, but they didn't decide to go with three of them for that reason alone.



Also, never judge an architecture by its age. Age means feature parity can lag behind, which is often a bummer on GPUs and other processors where APIs are very specialized and change every year; not so on CPUs, where, enhancement instruction sets aside (AltiVec/VMX, MMX, 3DNow!, SSE), the core functionality stays largely the same (the purpose being to run code efficiently). Pipeline stages, architecture, instruction enhancements and scalability aside, the PPC750 is very similar to processors as recent as the Core 2 Duo (which is a semi-direct Pentium III evolution, which in turn was essentially a rebranded Pentium II... a CPU/architecture from the same era as the original PPC750). Of course I wish they had gone further on the FPU capabilities, but if they had, we'd have a very recent processor design on our hands, because floating-point performance is the thing that has really increased on CPUs lately and the thing that makes this part feel its age, with 256-bit AVX implementations often being used to pump out 8 32-bit floating point operations at once.
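Just to picture the gap being talked about: a 256-bit AVX unit retires eight single-precision operations per instruction, where a paired-singles unit handles two. A minimal C sketch of the AVX side, purely as an illustration (build with -mavx; none of this is Wii U or Espresso code):

```c
#include <immintrin.h>  /* AVX intrinsics */
#include <stdio.h>

int main(void) {
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {10, 20, 30, 40, 50, 60, 70, 80};
    float c[8];

    __m256 va = _mm256_loadu_ps(a);    /* load 8 single-precision floats */
    __m256 vb = _mm256_loadu_ps(b);
    __m256 vc = _mm256_add_ps(va, vb); /* 8 adds in a single instruction;
                                          a 2-wide paired-single unit would
                                          need 4 instructions for the same
                                          amount of work */
    _mm256_storeu_ps(c, vc);

    for (int i = 0; i < 8; i++)
        printf("%g ", c[i]);
    printf("\n");
    return 0;
}
```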
 

joesiv

Member
"4. For reference sake, the Apple A6 is fabricated in a 32 nm CMOS process and is also designed from scratch. It’s manufacturing costs, in volumes of 100k or more, about $26 - $30 a pop. Over 16 months degrade to about $15 each
a. Wii U only represents like 30M units per annum vs iPhone which is more like 100M units per annum. Put things in perspective.
Generally, if you have your own CPU design, the "cost" of the chip is very simple: it's the cost of the silicon. So you take the price of the full-size original silicon wafer, divide that by how many CPUs you can cut out of it given the size of the CPU, then factor in yield (how many CPUs are invalid due to process errors and such).

Essentially, all things being equal (which isn't the case, as some designs can have worse yields), the bigger the CPU, the fewer you get from a wafer and the more expensive it is. Your data on the cost of the A6 is interesting; do we have any die size comparisons to the Wii U's CPU? Is it bigger or smaller?
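If you want to play with that arithmetic yourself, here's a rough back-of-the-envelope sketch in C; every number in it (wafer price, die area, yield) is a placeholder rather than a real Wii U or A6 figure:

```c
#include <math.h>
#include <stdio.h>

/* Back-of-the-envelope cost per good die. Every figure below is a
   made-up placeholder, not an actual Wii U, Espresso or A6 number. */
int main(void) {
    const double PI          = 3.141592653589793;
    double wafer_cost_usd    = 5000.0; /* assumed price of a processed wafer */
    double wafer_diameter_mm = 300.0;  /* standard 300 mm wafer              */
    double die_area_mm2      = 100.0;  /* hypothetical die size              */
    double yield             = 0.80;   /* fraction of dies that work         */

    double r = wafer_diameter_mm / 2.0;

    /* Common dies-per-wafer approximation: wafer area over die area,
       minus an edge-loss term proportional to the circumference. */
    double dies_per_wafer = (PI * r * r) / die_area_mm2
                          - (PI * wafer_diameter_mm) / sqrt(2.0 * die_area_mm2);
    double good_dies    = dies_per_wafer * yield;
    double cost_per_die = wafer_cost_usd / good_dies;

    printf("dies/wafer: %.0f, good dies: %.0f, cost per good die: $%.2f\n",
           dies_per_wafer, good_dies, cost_per_die);
    return 0;
}
```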
 
Generally, if you have your own CPU design, the "cost" of the chip is very simple: it's the cost of the silicon. So you take the price of the full-size original silicon wafer, divide that by how many CPUs you can cut out of it given the size of the CPU, then factor in yield (how many CPUs are invalid due to process errors and such).

Essentially, all things being equal (which isn't the case, as some designs can have worse yields), the bigger the CPU, the fewer you get from a wafer and the more expensive it is. Your data on the cost of the A6 is interesting; do we have any die size comparisons to the Wii U's CPU? Is it bigger or smaller?
He didn't link to it, but those numbers are from chipworks. It's in the GPU die shot OP. Not linking because I'm on my phone.
 

ikioi

Banned
First, the PS2. The PS2 was still a MIPS architecture, but rather than holding the design back for backwards compatibility, Sony kept the PSone MIPS chip and used it for I/O when not running PSone games.

I don't see what relevance that has to the Wii U.

Nintendo could have done this as well, seeing as PPC750 CPUs have been used in embedded systems for years; just die-shrink it and it's efficient enough.

Yes, they could have, and I'm wondering why they didn't.

Nintendo seem to have spent significant money having IBM engineer a highly customised PPC 750 CPU. Multi-core, 6x the cache, a 25-30% higher clock speed, a faster bus, more logic: these are significant changes and far beyond 'customisation'. This CPU has what, at least 3-4x the transistor count of any other 750? It's also multi-core, something the architecture was never designed for. Given all these changes, it would also have required a new fab process, again increasing costs significantly.

Nintendo could have done the same here: go with a Power7 part and fit an extra PPC750 core in there, no biggie. And seeing how the 3DS apparently lacks both high frequencies and backwards-compatible physical components (a different GPU design/heritage, and two ARM11s rather than an ARM7+ARM9 config), doing the same for the Wii U could have been an option as well.

Exactly.

The new Wii Mini is what, $50 CAD? It seems like, if all Nintendo wanted was BC with the Wii, they could have implemented the entire Wii SoC into the Wii U for a fraction of what they've spent building this multi-core PPC 750.

As for why they didn't, that's simpler to explain: the PPC750 was designed from the ground up for low power consumption. Way back when (1997) it already boasted that it used half the energy of a Pentium II while delivering the same performance, and these old designs tend to be energy efficient by today's standards; see how Intel is still trying to fit Pentium 1 cores into some of its designs (hint: energy efficient rather than powerful, but that's a different approach).

I don't agree with this.

The PPC 750 is not suited for use in a modern HD game console. It's an architecture that simply wasn't designed for the demands of modern processing, let alone HD video games. Look at its SIMD capabilities, its number crunching: this thing is better suited to a smartphone than to an 'HD games console'.

Frankly, I don't think its performance per watt is good at all. While it may use bugger-all power, it also provides bugger-all performance.


The successor PPC7400 wasn't nearly as energy efficient while being more scalable (per clock it delivered pretty much the same performance), and the Power5/PPC970 architectures and later simply weren't meant for embedded devices (and were still in the same per-clock performance ballpark, if slightly worse). To make things worse, even when IBM had 30 W PPC970/G5 parts at hand, their northbridges/controllers weren't energy efficient enough, and with Apple no longer pressuring them they still aren't, haven't been for newer generations, and probably never will be. (Fun fact: some PPC970 northbridges had a PPC405 on them; dropping a Broadway into such a design could be viable.)

Which again raises the question of why Nintendo persisted with IBM.

And again, the only logic I can see is that they wanted to avoid, at all costs, migrating to a new architecture. They have over a decade of experience with the PPC750, and no doubt significant tools, resources and assets developed for it, so they wanted to avoid changing. Nintendo's primary motivation, from what I can see, is a reluctance to embrace a new architecture.

I can see no logical reason whatsoever for Nintendo to persist with the PPC750 architecture for an HD game console, let alone invest a lot of money building this multi-core chip.

not so on CPUs, where, enhancement instruction sets aside (AltiVec/VMX, MMX, 3DNow!, SSE), the core functionality stays largely the same

AFAIK the PPC750 has no Altivec.

Pipeline stages, architecture, instruction enhancements and scalability aside, the PPC750 is very similar to processors as recent as the Core 2 Duo

A Core 2 Duo would mop the floor with this chip.
 

stanley1993

Neo Member
Generally, if you have your own CPU design, the "cost" of the chip is very simple: it's the cost of the silicon. So you take the price of the full-size original silicon wafer, divide that by how many CPUs you can cut out of it given the size of the CPU, then factor in yield (how many CPUs are invalid due to process errors and such).

Essentially, all things being equal (which isn't the case, as some designs can have worse yields), the bigger the CPU, the fewer you get from a wafer and the more expensive it is. Your data on the cost of the A6 is interesting; do we have any die size comparisons to the Wii U's CPU? Is it bigger or smaller?

http://www.neogaf.com/forum/showthread.php?t=511628
I don't know about comparisons, but you could do it yourself.
http://appleinsider.com/articles/12..._a6_processor_finds_1gb_ram_2_cpu_3_gpu_cores
 
I don't see what relevance that has to the Wii U.
Because those solutions were on the table for Nintendo. I separated them into two scenarios because I figured they were different enough approaches (one where said CPU gets "banned" from the spec but is used for background tasks, versus one where it's part of the actual spec and embedded on the main CPU nonetheless).

Nintendo did the first scenario themselves, in fact: they left the Z80 in the GBA for backward compatibility (and as fallback overhead for sound processing, since it was there).
Yes, they could have, and I'm wondering why they didn't.
My point was: probably because their only option that kept up with the punch-per-watt of the PPC750 would have been the PPC476FP.

The G5/970 would actually take a slight beating if clocked at the same speed as a PPC750, and I doubt subsequent architectures changed that. And the fact that Power7/Watson is strictly a server CPU means the power draw isn't going to go under 49-52 watts for an appropriately clocked triple-core solution (yes, I did the math). It probably doesn't scale down well either.

It's not a CPU for a console, and it does get hot.

IBM's solutions for embedded devices come down to the PPC4xx, PPC7xx and PPC74xx architectures. And there's nothing wrong with them.
Nintendo seem to have spent significant money having IBM engineer a highly customised PPC 750 CPU. Multi-core, 6x the cache, a 25-30% higher clock speed, a faster bus, more logic: these are significant changes and far beyond 'customisation'. This CPU has what, at least 3-4x the transistor count of any other 750? It's also multi-core, something the architecture was never designed for. Given all these changes, it would also have required a new fab process, again increasing costs significantly.
Your point? That they should have invested more money scaling a higher-spec part down into the same sub-2 GHz footprint and got the same performance as a PPC750 in return?

The problem is that the PPC750 is as barebones as it gets, meaning that, for the money, nothing else gives you the same performance in the same silicon area or at the same power consumption.

It scales up like crap, hence why it was left behind years ago, but short-pipeline designs have that as a flipside.
The new Wii Mini is what, $50 CAD? It seems like, if all Nintendo wanted was BC with the Wii, they could have implemented the entire Wii SoC into the Wii U for a fraction of what they've spent building this multi-core PPC 750.
That's precisely my point. They didn't go with three PPC750 cores just to maintain compatibility. They did so because it really was the best solution to keep the power draw from going through the wall, given the available options.
The PPC 750 is not suited for use in a modern HD game console. It's an architecture that simply wasn't designed for the demands of modern processing, let alone HD video games. Look at its SIMD capabilities, its number crunching: this thing is better suited to a smartphone than to an 'HD games console'.
How so? You have current-generation HD platforms whose CPUs get murdered by this. Jaguar in the PS4 and the next Xbox is also nothing to write home about.

I'd enjoy a reworked SIMD unit as much as you would, but the truth is it's not the be-all and end-all for a CPU; general-purpose performance is, and the PPC750 always was respectable per clock at that, and still is. Also, the current generation of consoles suffered from having those lousy architectures at hand but made ends meet nonetheless, because there was a need to; this can surely make do.

With integrated CPU+GPU designs seemingly being signalled as the future (making dedicated CPU FPU units obsolete in both performance per clock and energy draw), one has to wonder what the fuss is about increasing floating-point performance tenfold via regular means at this point (especially if you have a good GPU in there). That performance rarely gets used when you have to account for surviving architectures that lack it, and on the other hand you can probably lean on it more with a GPGPU-capable part in the box.

On top of it all, floating-point performance is costly on the power-draw side of things, hence why ARM designs are very conservative about wide implementations of it (it instead gets brushed aside into often-optional VFP and/or NEON units). It's also why Nintendo's alternative in the same power-draw ballpark, the 476FP, doesn't even do 64-bit floating point.
Which again raises the question of why Nintendo persisted with IBM.
First and foremost, retaining backward compatibility. One also has to take into account that Nintendo is the type of company that becomes a regular customer (the 'don't fix it if it's not broken' formula). I'm betting they kept AMD in the loop without going to Nvidia because of their pre-existing relationship and the fact that AMD also knew the GC/Wii inner workings quite intimately.

The AMD/ATI end result, though, is nothing like Flipper/Hollywood, so the same kind of proposition could have happened on the CPU side of things if IBM had had something for them. They didn't.

But that doesn't make the PPC750 garbage.


The other side of the coin is that Nintendo likes to take small steps. The GPU this time around amounts to a few steps, so they wanted to keep the CPU pipeline somewhat compatible, and that means out-of-order and PowerPC-based, for sure.

That means that, say, for the Wind Waker engine to run on the Wii U, they only have to change the GPU overhead; and that's exactly why it's happening in the first place. It also means they can keep that engine for a Zelda after SS without fully rewriting it.

Then again, they have always rewritten huge portions of it since WW (TP and SS), but that's different from having to rewrite it all before they can release a game. This time around they have to make the GPU take over skinning and rigging (and that alone means more overhead freed up for the CPU, even if it stayed at the same speed) and make their engine SMP-compatible; the rest of the work comes down to assets, their own needs, use of that extra overhead, and learning how to use the GPU.
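For what it's worth, "SMP-compatible" just means the per-frame work has to be split into jobs that can run across all three Espresso cores instead of on one. A generic sketch of the idea in plain C and pthreads (assumed stand-ins; nothing here is Nintendo SDK code):

```c
#include <pthread.h>
#include <stdio.h>

/* Plain-pthreads stand-in for "making the engine SMP-compatible":
   per-object update work is split into independent jobs, one per core
   (three here, matching Espresso's core count). */
#define NUM_CORES   3
#define NUM_OBJECTS 300

typedef struct { int first, last; } JobRange;

static float positions[NUM_OBJECTS];

static void *update_job(void *arg) {
    JobRange *r = (JobRange *)arg;
    for (int i = r->first; i < r->last; i++)
        positions[i] += 1.0f;  /* stand-in for per-object game logic */
    return NULL;
}

int main(void) {
    pthread_t threads[NUM_CORES];
    JobRange  ranges[NUM_CORES];
    int chunk = NUM_OBJECTS / NUM_CORES;

    for (int c = 0; c < NUM_CORES; c++) {
        ranges[c].first = c * chunk;
        ranges[c].last  = (c == NUM_CORES - 1) ? NUM_OBJECTS : (c + 1) * chunk;
        pthread_create(&threads[c], NULL, update_job, &ranges[c]);
    }
    for (int c = 0; c < NUM_CORES; c++)
        pthread_join(threads[c], NULL);

    printf("updated %d objects across %d threads\n", NUM_OBJECTS, NUM_CORES);
    return 0;
}
```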

That's it. If they had gone with another PPC part (provided it's out-of-order), the result and the hurdles would be the same, so we're left where we were: no other available design gives you more performance per clock at the same power drain while retaining code compatibility (code compatibility, which is different from backwards compatibility; a PPC970 or a Power7 would be largely code-compatible).

That makes PPC750 still the design to go for this gen.
And again, the only logic I can see is that they wanted to avoid, at all costs, migrating to a new architecture. They have over a decade of experience with the PPC750, and no doubt significant tools, resources and assets developed for it, so they wanted to avoid changing. Nintendo's primary motivation, from what I can see, is a reluctance to embrace a new architecture.
Already touched on that above.

They didn't want to change from PPC to something else, but they also wanted a PPC with the same features the PPC750 has, namely out-of-order execution (which makes the Cell PPE/Xenon, Power6 and PPC A2 ineligible). So it's down to the PPC4xx (lacking in FP), PPC7xx, PPC74xx, and the bigger, not-meant-for-embedded Power4/5 and 7 processors.

The fact that the PPC7xx takes less silicon, meaning more CPUs per wafer, is a silver lining.
AFAIK the PPC750 has no Altivec.
I know that.

Just to be anal, though: the 50 SIMD instructions added on Gekko (and we don't know whether more were added for Broadway/PPC750CL or for Espresso) may be related to something lifted and retrofitted from the AltiVec implementation, as the regular PPC750 lacked SIMD. AltiVec is a CPU instruction-set extension just like MMX or SSE, except that in the Nintendo PPC750's case the CPU falls short of implementing the 128-bit floating-point pipeline that was standard on everything with a full AltiVec implementation.
A Core 2 Duo would mop the floor with this chip.
That's not saying much when you had Core 2 Duos clocked at 3.5 GHz.

Yes, the Core 2 Duo was a better, more modern and balanced implementation, and there's a reason why short pipelines were left behind (a longer pipeline means higher frequencies can be achieved, and the R&D that went into it over the years is nothing to sniff at); but the bottom line is that the PPC750 is still a respectable CPU, or basis for a CPU, with very predictable performance and little bottlenecking. It's really strong for what it is; not something to laugh at.

It also used more energy: the smallest TDP it got to was 10 watts, in a dual-core configuration on a 45 nm process (with its single-core equivalent rated at 5.5 watts TDP). This thing is certainly pulling less at comparable clock rates and performance.


I think you're underestimating it. Everyone would like to see a larger SIMD pipeline, more MHz and more cores, but that doesn't make the basis for it a wreck; it really isn't.


And a top-range Core 2 could mop the floor with an octo-core AMD Jaguar in general-purpose work, floating-point performance aside.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
I don't agree with this.

The PPC 750 is not suited for use in a modern HD game console. It's an architecture that simply wasn't designed for the demands of modern processing, let alone HD video games. Look at its SIMD capabilities, its number crunching: this thing is better suited to a smartphone than to an 'HD games console'.
And yet it's powering a modern game console, doing things that this passing gen were considered the domain of fp/simd 'monsters'.

If anything, WiiU should have hosted more of the 'smart-phone-class' ppc750cl.
 

ikioi

Banned
And yet it's powering a modern game console, doing things that this passing gen were considered the domain of fp/simd 'monsters'.

If anything, WiiU should have hosted more of the 'smart-phone-class' ppc750cl.

Can you provide an example of what you're referring to?
 

SmokeMaxX

Member
Does anyone think that Nintendo's problems with account management and unified accounts are at least partially due to the supposed EA/Origin deal that fell through? If EA devs were working on Nintendo's online and Origin was supposed to power the Wii U's online service, maybe accounts were working until EA and Nintendo's relationship took a nosedive and EA withdrew their work on the account system.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Can you provide an example of what you're referring to?
I'm referring to the fact that the WiiU runs the same software as the other consoles on the market, despite the others being considered to have particularly potent fp/simd CPUs. Off the top of my head:

Exhibit 1: NFS: Most Wanted. A contemporary sandbox racing game. The sandbox simulation runs at least as well on the WiiU as it does on ps360.

Exhibit 2: Trine 2. The physics sim runs at least as well on the WiiU as it does on the rest of the consoles.

Exhibit 3: Zen Pinball 2. Ditto, as with Trine 2.

I can't think of anything else at this time running on all three platforms that would be physics-intensive. But if you can think of a counterexample where the WiiU struggles with a simulation that runs fine on ps360, I'd be interested to see it.
 
Are we ever going to find out any more about the GPU, or is it now just a case of looking at the games to see how powerful it is?

Probably the latter.

It looks like this Chipworks stuff got very muddy and little clear info was gleaned.

What I take most out of it is that the Wii U almost surely has either 160 or 320 shaders. If I start seeing games a cut above PS360, I'll believe 320 (but by then nobody will care, as those games won't show until the next gen ships). If it keeps up this pattern of slightly inferior PS360 ports, I'll believe 160.
 

ikioi

Banned
I'm referring to the fact that the WiiU runs the same software as the other consoles on the market, despite the others being considered to have particularly potent fp/simd CPUs. Off the top of my head:

That's not impressive at all....

Telling me that the Wii U can run games just as well as consoles seven years older than it? That's about the lowest expectation one could have.
 

TedNindo

Member
That's not impressive at all....

Telling me that the Wii U can run games just as well as consoles seven years older than it? That's about the lowest expectation one could have.

Better, actually. Need for Speed does look better on the Wii U, and so does Trine 2.

I'm hoping there will be some software at E3 that'll prove once and for all that the Wii U is more capable than the current consoles.

I'm still waiting for something on par with the Zelda demo.
 
Better, actually. Need for Speed does look better on the Wii U, and so does Trine 2.

I'm hoping there will be some software at E3 that'll prove once and for all that the Wii U is more capable than the current consoles.

I'm still waiting for something on par with the Zelda demo.

I have a feeling that the Zelda Tech Demo ran on a more powerful devkit than the final product. I hope it's the opposite though. They could just release the Tech Demo on the Eshop for people to get excited. I mean, they already made a Zelda Community on Miiverse.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
That's not impressive at all....

Telling me that the Wii U can run games just as well as consoles seven years older than it? That's about the lowest expectation one could have.
On paper, those 7-year old CPUs are quite comparable fp/simd-wise to the CPUs in the upcoming consoles. Whether that impresses you or not is beyond the scope of this conversation.
 

Madao

Member
Keeping the old architecture doesn't seem to have paid off, since Nintendo is still struggling to get games out on time.

They have to drop these old chips at some point, and delaying it more and more only makes it harder. Imagine when they have to make the successor to the Wii U: are they going to make an 8-core PPC750?
 
Nintendo seem to have spent significant money having IBM engineer a highly customised PPC 750 CPU. Multi-core, 6x the cache, a 25-30% higher clock speed, a faster bus, more logic: these are significant changes and far beyond 'customisation'. This CPU has what, at least 3-4x the transistor count of any other 750? It's also multi-core, something the architecture was never designed for. Given all these changes, it would also have required a new fab process, again increasing costs significantly.

Just wanted to point out that you simultaneously contradicted your original point and answered your own original argument (that being that Nintendo didn't actually spend enough money for the system to be selling at a loss). Instead of accepting this and acknowledging it, you moved the goalposts to "I don't like Nintendo's choice of chips." But carry on, enjoy your crusade.
 
I have a feeling that the Zelda Tech Demo ran on a more powerful devkit than the final product. I hope it's the opposite though. They could just release the Tech Demo on the Eshop for people to get excited. I mean, they already made a Zelda Community on Miiverse.
From what we know about the dev kits they have only increased in power since they were first released. It's more likely to be how you hope, that the retail Wii U is more powerful than the one running the Zelda and bird demos.
 
Keeping the old architecture doesn't seem to have paid off, since Nintendo is still struggling to get games out on time.

They have to drop these old chips at some point, and delaying it more and more only makes it harder. Imagine when they have to make the successor to the Wii U: are they going to make an 8-core PPC750?

I'm no pro at technical stuff like this, but since the Wii U uses a GPGPU (which, if I understand correctly, is a GPU and CPU on one chip or something), wouldn't it be easier for Nintendo to just put that GPGPU into their new console for full backwards compatibility?

EDIT: What I'm trying to say is, they could use the GPGPU for BC and a completely new CPU/GPU/GPGPU for the new console, like Sony did with the 60 GB PS3s back then, which had a PS2 CPU built in, and so on.
 
I'm no pro at technical stuff like this, but since the Wii U uses a GPGPU (which, if I understand correctly, is a GPU and CPU on one chip or something), wouldn't it be easier for Nintendo to just put that GPGPU into their new console for full backwards compatibility?

GPGPU just means that you can compute general purpose stuff (which was traditionally computed on the CPU) on the GPU (whether that's efficient or not entirely depends on the workload). That's been standard for GPUs for many years now.
You still need a proper CPU, and the Wii U has one. So, for full backwards compatibility, the CPU has to be included.
 

Madao

Member
I'm no pro at technical stuff like this, but since the Wii U uses a GPGPU (which, if I understand correctly, is a GPU and CPU on one chip or something), wouldn't it be easier for Nintendo to just put that GPGPU into their new console for full backwards compatibility?

EDIT: What I'm trying to say is, they could use the GPGPU for BC and a completely new CPU/GPU/GPGPU for the new console, like Sony did with the 60 GB PS3s back then, which had a PS2 CPU built in, and so on.

The very same thing was asked already regarding Wii BC on the Wii U a few pages back, and it looked like the better alternative compared to what we got.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
I'm no pro at technical stuff like this, but since the Wii U uses a GPGPU (which, if I understand correctly, is a GPU and CPU on one chip or something), wouldn't it be easier for Nintendo to just put that GPGPU into their new console for full backwards compatibility?

EDIT: What I'm trying to say is, they could use the GPGPU for BC and a completely new CPU/GPU/GPGPU for the new console, like Sony did with the 60 GB PS3s back then, which had a PS2 CPU built in, and so on.
You lost me there. How could they use GPGPU for BC?
 

TheD

The Detective
A GPU is extremely unsuited to running an emulator.

Emulators need extremely fast single-threaded performance, something a GPU does not have.
The per-thread performance of a GPU is very low; the Wii CPU would be faster!
You need a ton more performance than the CPU you are emulating because of the need to translate the code.
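To make the translation overhead concrete, here's a toy interpreter loop in C for a made-up two-instruction guest ISA; it's only an illustration (no real emulator is this simple), but it shows how each guest instruction turns into several host operations before any real work happens:

```c
#include <stdint.h>
#include <stdio.h>

/* Toy interpreter for a made-up two-instruction guest ISA. Purely
   illustrative: each guest instruction costs the host a fetch, a decode
   branch, the actual work and the surrounding bookkeeping, which is why
   an emulator needs far more host performance than the guest CPU had. */
enum { OP_ADD = 0, OP_HALT = 1 };

typedef struct { uint8_t op, dst, src; } GuestInsn;

int main(void) {
    GuestInsn program[] = {
        { OP_ADD,  0, 1 },   /* r0 += r1 */
        { OP_ADD,  0, 1 },
        { OP_HALT, 0, 0 },
    };
    int32_t regs[4] = { 1, 2, 0, 0 };
    size_t  pc = 0;

    for (;;) {
        GuestInsn insn = program[pc++];        /* fetch   */
        switch (insn.op) {                     /* decode  */
        case OP_ADD:
            regs[insn.dst] += regs[insn.src];  /* execute */
            break;
        case OP_HALT:
            printf("r0 = %d\n", regs[0]);      /* 1 + 2 + 2 = 5 */
            return 0;
        }
    }
}
```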
 
Exhibit 2: Trine 2. The physics sim runs at least as well on the WiiU as it does on the rest of the consoles.
Better, in fact. The devs confirmed that the Wii U version actually runs the more advanced PC physics model rather than the nerfed PS360 one.
 

TheD

The Detective
Better, in fact. The devs confirmed that the Wii U version actually runs the more advanced PC physics model rather than the nerfed PS360 one.



Last I checked, what he said was that it was linked against a slightly newer version of PhysX, not that it was doing anything more.
 
Better, in fact. The devs confirmed that the Wii U version actually runs the more advanced PC physics model rather than the nerfed PS360 one.

Source?

If this is true, then the Wii U CPU might be better suited for certain things like Physics than the Xbox 360 CPU. Nintendo's Hardware Engineer said that it was a "memory intensive" architecture. I wonder what he meant by that. I mean, people have been saying that the RAM isn't as fast as the 360/PS3 RAM, but I think there are more things we don't know yet.

But in the end, it doesn't really matter for me anymore. I saw the Zelda and Bird Tech Demos. I was pleased with what the system can do. If the new games at E3 2013 will look as good as those Demos, then I don't have anything to worry about. Technically I'm a graphics whore, but visuals really lost their importance to me. It's still nice to have cutting edge tech, but that's why I have a PC.
 

gamingeek

Member
Source?

If this is true, then the Wii U CPU might be better suited for certain things like Physics than the Xbox 360 CPU. Nintendo's Hardware Engineer said that it was a "memory intensive" architecture. I wonder what he meant by that. I mean, people have been saying that the RAM isn't as fast as the 360/PS3 RAM, but I think there are more things we don't know yet.

But in the end, it doesn't really matter for me anymore. I saw the Zelda and Bird Tech Demos. I was pleased with what the system can do. If the new games at E3 2013 will look as good as those Demos, then I don't have anything to worry about. Technically I'm a graphics whore, but visuals really lost their importance to me. It's still nice to have cutting edge tech, but that's why I have a PC.

http://www.eurogamer.net/articles/digitalfoundry-trine-2-face-off

"On top of that the PC game also adopts PhysX enhancements, which mildly improve the quality and scope of destructible objects and surfaces - something that we see on Wii U too. The Wii U version also deserves credit, of course. The game not only features many of the graphical upgrades found on the PC, but does so while delivering better image quality than the 360 and PS3 without compromising on the solid frame-rate"
 
http://www.eurogamer.net/articles/digitalfoundry-trine-2-face-off

"On top of that the PC game also adopts PhysX enhancements, which mildly improve the quality and scope of destructible objects and surfaces - something that we see on Wii U too. The Wii U version also deserves credit, of course. The game not only features many of the graphical upgrades found on the PC, but does so while delivering better image quality than the 360 and PS3 without compromising on the solid frame-rate"

I was getting pretty skeptical after playing Batman Arkham City. The framerate was a disaster. And it didn't even look THAT good.
 

Chronos24

Member
I'm referring to the fact that the WiiU runs the same software as the other consoles on the market, despite the others being considered to have particularly potent fp/simd CPUs. Off the top of my head:

Exhibit 1: NFS: Most Wanted. A contemporary sandbox racing game. The sandbox simulation runs at least as well on the WiiU as it does on ps360.

Exhibit 2: Trine 2. The physics sim runs at least as well on the WiiU as it does on the rest of the consoles.

Exhibit 3: Zen Pinball 2. Ditto, as with Trine 2.

I can't think of anything else at this time running on all three platforms that would be physics-intensive. But if you can think of a counterexample where the WiiU struggles with a simulation that runs fine on ps360, I'd be interested to see it.

Exhibit 4: Puddle, maybe?
 

gamingeek

Member
I was getting pretty skeptical after playing Batman Arkham City. The framerate was a disaster. And it didn't even look THAT good.

Criterion also addressed the CPU here.

Reading carefully, it's clear that he is only talking about the CPU comparison here.

"I think a lot of people have been premature about it in a lot of ways because while it is a lower clock-speed, it punches above its weight in a lot of other areas," he explains.

"So I think you've got one group of people who walked away, you've got some other people who just dived in and tried and thought, 'Ah... it's not kind of there,' but not many people have done what we've done, which is to sit down and look at where it's weaker and why, but also see where it's stronger and leverage that. It's a different kind of chip and it's not fair to look at its clock-speed and other consoles' clock-speed and compare them as numbers that are relevant. It's not a relevant comparison to make when you have processors that are so divergent. It's apples and oranges."


When Nintendo were asked at an investor conference about the Wii U's weak CPU, Takeda (Nintendo's head tech guy) said that he doesn't think the Wii U CPU is weak. What he said was that in modern CPUs the logic portion is quite small compared to the memory portion, and that this is a memory-optimised design with a large SRAM cache.
 
I was getting pretty skeptical after playing Batman Arkham City. The framerate was a disaster. And it didn't even look THAT good.

The thing is that Nintendo seems to require developers to enable V-sync (most likely to avoid screen tearing, at the cost of higher framerates). Does anybody think that, if Nintendo didn't require V-sync, games like BO2 would almost never drop below 50 fps, or is that still more CPU-dependent?
 

Easy_D

never left the stone age
The thing is that Nintendo seems to require developers to enable V-sync (most likely to avoid screen tearing, at the cost of higher framerates). Does anybody think that, if Nintendo didn't require V-sync, games like BO2 would almost never drop below 50 fps, or is that still more CPU-dependent?
Pretty sure there's been Wii U footage with visible screen tearing. Can't remember a specific game at this time, however.
 
A GPU is extremely unsuited to running an emulator.

Emulators need extremely fast single-threaded performance, something a GPU does not have.
The per-thread performance of a GPU is very low; the Wii CPU would be faster!
You need a ton more performance than the CPU you are emulating because of the need to translate the code.
It's certainly unsuited to running an emulator's dynarec/JIT core, but it can be used for other things to make up for missing overhead.

For instance, with clever coding, PS2 emulation could do VU0/VU1 emulation on a modern GPU's stream processors, as you could for the Cell's SPEs; provided there's a deficit in floating-point performance on the CPU, any FPU portion could be emulated on, or transferred to, the GPU.

Not just that: if the GC/Wii were being emulated in software here, it would probably benefit from doing the TEV pipeline manipulation translation "as is" on the GPU rather than translating and converting it on the CPU into something the GPU could understand. The fact that GPUs can now interpret and run some "real" code is a big help; just like having lots of DSPs, they're certainly not suitable to be the brains, but they're suitable to help.
 
I thought it was clear that devs had to work with older dev kits and that still not much is known about the console.

Yeah, but that doesn't really make any sense to me. You see, even if they had weaker devkits, shouldn't the game run better on the final hardware nonetheless? I don't want to imagine how laggy that game was without the final hardware. The extra bump in processing power should at least have improved the performance without any tweaking on the developers' side. It's not like they intentionally lowered the framerate below 20 in some areas.
 

ozfunghi

Member
Yeah, but that doesn't really make any sense to me. You see, even if they had weaker devkits, shouldn't the game run better on the final hardware nonetheless? I don't want to imagine how laggy that game was without the final hardware. The extra bump in processing power should at least have improved the performance without any tweaking on the developers' side. It's not like they intentionally lowered the framerate below 20 in some areas.

The final hardware wasn't so much more powerful, but the software tools were just not ready.
 

gamingeek

Member
Yeah, but that doesn't really make any sense to me. You see, even if they had weaker devkits, shouldn't the game run better on the final hardware nonetheless? I don't want to imagine how laggy that game was without the final hardware. The extra bump in processing power should at least have improved the performance without any tweaking on the developers' side. It's not like they intentionally lowered the framerate below 20 in some areas.

Read that Criterion article I linked you.

"The starting point is always, let's just get some running software and see what it's like - get something that's running and playable. When you start you're at some sort of frame-rate or other... you take out absolutely everything you can that's optional, get something playable, tune what you've got and get that up to an acceptable frame-rate, and then put more and more back in," he reveals.

"The difference with Wii U was that when we first started out, getting the graphics and GPU to run at an acceptable frame-rate was a real struggle. The hardware was always there, it was always capable. Nintendo gave us a lot of support - support which helps people who are doing cross-platform development actually get the GPU running to the kind of rate we've got it at now. We benefited by not quite being there for launch - we got a lot of that support that wasn't there at day one... the tools, everything."

"Tools and software were the biggest challenges by a long way... the fallout of that has always been the biggest challenge here," Idries reaffirms. "[Wii U] is a good piece of hardware, it punches above its weight. For the power consumption it delivers in terms of raw wattage it's pretty incredible. Getting to that though, actually being able to use the tools from Nintendo to leverage that, was easily the hardest part."
 
It's very interesting to me that they didn't tack on the Broadway and shift to an ARM Architecture for the main CPU.

It seems like they could shift towards a unified handheld/main console platform, with the handheld just lagging behind by a few years.

Of course this might not be in the cards for them.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
It's certainly unsuited to running an emulator's dynarec/JIT core, but it can be used for other things to make up for missing overhead.
A GPGPU can make up for throughput. Emulation has quite a few latency icebergs, though.

For instance, with clever coding, PS2 emulation could do VU0/VU1 emulation on a modern GPU's stream processors, as you could for the Cell's SPEs; provided there's a deficit in floating-point performance on the CPU, any FPU portion could be emulated on, or transferred to, the GPU.
How do you suggest such an emulation would meet the latency requirements of, e.g., VU0 macro mode?

Not just that: if the GC/Wii were being emulated in software here, it would probably benefit from doing the TEV pipeline manipulation translation "as is" on the GPU rather than translating and converting it on the CPU into something the GPU could understand. The fact that GPUs can now interpret and run some "real" code is a big help; just like having lots of DSPs, they're certainly not suitable to be the brains, but they're suitable to help.
If you're actually suggesting that the translation pass be done via GPGPU, that's a bit far-fetched. It would be spectacularly inefficient, as the translation task is inherently serial, and the best serial processor in the system is still the CPU. You'd be using a cappuccino machine to drive nails.
 
A GPGPU can make up for throughput. Emulation has quite a few latency icebergs, though.

How do you suggest such an emulation would meet the latency requirements of, e.g., VU0 macro mode?

If you're actually suggesting that the translation pass be done via GPGPU, that's a bit far-fetched. It would be spectacularly inefficient, as the translation task is inherently serial, and the best serial processor in the system is still the CPU. You'd be using a cappuccino machine to drive nails.
I'll take your word for it. I'm not a programmer, so I'm just hypothesizing; it seems I overstepped my bounds.

I'll still try to answer though: regarding the VU0 macro mode, I'm sure that's not as much of an issue in MCM/APU configurations.

Regarding the translation pass, I recall hearing that the late Wii SDK had a TEV pipeline emulator of sorts running on shader model, no? But perhaps it just translated it in real time in order to preview the effect without having to tape it out.


But the bottom line, I think, is that nobody really knows where this is heading. We have GPUs gaining general-purpose features, we have CPUs evolving into APUs, and we have Intel building GPUs out of tons of Pentium 1 cores (Larrabee/MIC); everything should be somewhat usable, eventually. That doesn't mean it will be, though.
 

AzaK

Member
Probably the latter.

It looks like this Chipworks stuff got very muddy and little clear info was gleaned.

What I take most out of it is that the Wii U almost surely has either 160 or 320 shaders. If I start seeing games a cut above PS360, I'll believe 320 (but by then nobody will care, as those games won't show until the next gen ships). If it keeps up this pattern of slightly inferior PS360 ports, I'll believe 160.
No you won't. You'll keep crying about how underpowered it is. I mean, NFS is out, and videos and pics PROVE the Wii U is a cut above the 360, but you just refuse to give in. You'll just claim it's not 'enough'.
 

ikioi

Banned
No you won't. You'll keep crying about how underpowered it is. I mean, NFS is out, and videos and pics PROVE the Wii U is a cut above the 360, but you just refuse to give in. You'll just claim it's not 'enough'.

I agree the Wii U is a cut above the PS3 and Xbox 360, but it's not significant.

Perhaps improvements to dev kits and developer experience will lead to even greater gains than those we've already seen, but I can't see the Wii U offering any significant upgrades over the PS3 and Xbox 360. Most improvements will, IMHO, be largely token, like those we've already seen: slightly better textures here and there, maybe better physics, but the same frame rate, resolution, polygon count, etc. I can't see the Wii U taking a 720p 30 fps Xbox 360 game and turning it into 1080p 30 fps, or even 720p 60 fps.
 