
Wii U CPU |Espresso| Die Photo - Courtesy of Chipworks

160 shader units seems a little more reasonable since we now know it's on a larger process than the 40nm TSMC parts we were using to compare. I'm sure some can still argue otherwise, but I've personally seen enough to convince me that those numbers are accurate.
 
160 shader units seems a little more reasonable since we now know it's on a larger process than the 40nm TSMC parts we were using to compare. I'm sure some can still argue otherwise, but I've personally seen enough to convince me that those numbers are accurate.

And how many GFLOPS would that make it, 176?
 
Was just looking around the internet out of boredom to see if eDRAM would be feasible in Nintendo's next-gen, and I came across this article. It appears to confirm that the Wii U GPU is made on a 45nm process node. That thread is locked, so I figured this was the best place. It also has a nice profile shot of Renesas' eDRAM and the TSMC eDRAM in the Xbox 360.

http://chipworksrealchips.blogspot.com/

Thanks for the info. That is probably the case after all.

Right now, though, I'm very curious about what Nintendo could do going forward. Do you believe that Nintendo will keep working with PowerPC/Radeon for their possible console and future portable? I can see Nintendo playing with the idea, as I recall Nintendo was also rumored to be checking out the possibility of making a portable with a shrunk Gekko before the NDS information was released. Iwata himself implied that Nintendo is trying to absorb the Wii U's architecture to some extent. I just don't see it as a feasible solution for a portable compared to an ARM-based SoC solution.
 
I think almost everyone has come to that conclusion. I have accepted that, and I am satisfied with what the Wii U outputs. Still eager to see Zelda U, a new 3D Mario, and a 3D Metroid.
 

AzaK

Member
160 shader units seems a little more reasonable since we now know it's on a larger process than the 40nm TSMC parts we were using to compare. I'm sure some can still argue otherwise, but I've personally seen enough to convince me that those numbers are accurate.

We've seen the developer docs and it's 160.
 

efyu_lemonardo

May I have a cookie?
160 shader units seems a little more reasonable since we now know it's on a larger process than the 40nm TSMC parts we were using to compare. I'm sure some can still argue otherwise, but I've personally seen enough to convince me that those numbers are accurate.
What does that mean for shrinking and reducing the power and thermal envelopes in the future?
Currently the entire box is around 33 Watts if I remember correctly?
How much of that is actual computing power and not USB ports, optical drive, radio communication with the controller etc?

What I basically want to know is: if they tried getting the console into a controller-sized package, removing everything that's implied by that (the stuff I mentioned above), and shrinking and optimising parts with technology available in 2020, what's the best and worst they could achieve in terms of power requirements, in your opinion?
 

Hermii

Member
It's pretty amazing what 176 GFLOPS can do when you look at Mario Kart 8. Iwata is not completely wrong when he talks about diminishing returns.

Of course we have yet to see what 1840 GFLOPS can do when developers push it.
 

Qassim

Member
It's pretty amazing what 176 GFLOPS can do when you look at Mario Kart 8. Iwata is not completely wrong when he talks about diminishing returns.

Of course we have yet to see what 1840 GFLOPS can do when developers push it.

Mario Kart 8 is a very simplistic game; it looks great because of the art style, the level design, and the detail they've put into the few effects the player sees constantly (e.g. drifting sparks, flames, hang glider flapping).

I can really tell that it's a game running on old hardware. As I said, I still do think it looks great - I think they're using the limited power they have to great effect, but I don't think this would be effective ammunition for the "diminishing returns" argument.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Mario Kart 8 is a very simplistic game; it looks great because of the art style, the level design, and the detail they've put into the few effects the player sees constantly (e.g. drifting sparks, flames, hang glider flapping).

I can really tell that it's a game running on old hardware. As I said, I still do think it looks great - I think they're using the limited power they have to great effect, but I don't think this would be effective ammunition for the "diminishing returns" argument.
We need a proper multiplat to see the effect of diminishing returns. Something along the lines of Project CARS.
 

Panajev2001a

GAF's Pleasant Genius
It's pretty amazing what 176 GFLOPS can do when you look at Mario Kart 8. Iwata is not completely wrong when he talks about diminishing returns.

Of course we have yet to see what 1840 GFLOPS can do when developers push it.

We are still barely scratching the surface of physically based interactive elements/systems in games, let alone advances in lighting, shadowing, and particle effects, all while pushing for higher and higher image quality. There is still so much to do.

A good example last gen of a non-AAA game that nevertheless had quite a next-generation feeling, in my mind, was Red Faction: Guerrilla with its physics-based destructible environments. Dealing the appropriate damage to structures and seeing mechanical stress in action - glass shattering, the weight of the building no longer properly supported by its load-bearing parts, and the eventual collapse - was quite an "ok, this was NOT possible on PS2" kind of moment... The graphics in and of themselves were not bad either :).
 

Rolf NB

Member
What does that mean for shrinking and reducing the power and thermal envelopes in the future?
Currently the entire box is around 33 Watts if I remember correctly?
How much of that is actual computing power and not USB ports, optical drive, radio communication with the controller etc?
You could measure the power draw of the system while running a game installed to SD card (or a disc game at a moment when the disc is not spinning, if such an opportunity presents itself). That'll get you very close to the power used by just the major chips and the radios, plus maybe 10% extra for PSU loss.

USB ports do not automatically consume power if nothing is connected that draws any. Likewise, drive motors do not consume power if they don't spin. The controller powers itself from its battery, so that doesn't need to be considered.

Radios are hard to eliminate, but let me just say that the default transmit power for WLAN g is 75mW, and class 2 Bluetooth is 10mW. You can be very generous with the radio losses and budget a couple of watts for both of these functions combined, just to not undershoot anything.
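To put numbers on that method, here's a minimal sketch in C. The 33 W wall figure, the ~10% PSU loss, and the two-watt radio budget are the rough assumptions from above, not measurements:

Code:
#include <stdio.h>

/* Back-of-the-envelope estimate of what's left for the major chips,
 * using the rough assumptions discussed above (not measured values). */
int main(void)
{
    double wall_draw_w  = 33.0;   /* reported whole-system draw */
    double psu_loss     = 0.10;   /* assume ~10% lost in the PSU */
    double radio_budget =  2.0;   /* generous WLAN + Bluetooth allowance */

    double at_the_board = wall_draw_w * (1.0 - psu_loss);
    double major_chips  = at_the_board - radio_budget;

    printf("Power reaching the board: ~%.1f W\n", at_the_board);
    printf("Left for CPU/GPU/RAM:     ~%.1f W\n", major_chips);
    return 0;
}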
 

AzaK

Member
We need a proper multiplat to see the effect of diminishing returns. Something along Project CARS.

It seems like the developers are attempting to push every platform as best they can, so comparisons will be very interesting indeed.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
It seems like the developers are attempting to push every platform as best they can, so comparisons will be very interesting indeed.
Indeed. Also, it's worth noting they've actually proven themselves as competent.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Apropos: Google is going POWER8 for their next-gen server farms, in case somebody missed it.
 

krizzx

Junior Member
Apropos: Google is going POWER8 for their next-gen server farms, in case somebody missed it.

So the POWER series is still quite viable. This is good. I have no problem with Nintendo continuing to use the PowerPC architecture. Anything to break the Intel/AMD focus is good.

Now we just need a third competitor in the GPU business to really push things.
 
So the POWER series is still quite viable. This is good. I have no problem with Nintendo continuing to use the PowerPC architecture. Anything to break the Intel/AMD focus is good.

Now we just need a third competitor in the GPU business to really push things.
LOL, does Qualcomm count?
 
Not for this kind of performance, no.

And they only do SoCs with the Adrenos anyway.
I was joking.

That said, the Wii U uses an SoC, and its GPU's performance is probably lower than what an Adreno chip would have at a similar TDP.

The Adreno won't compete against consumer PC graphics cards, but Latte isn't a powerhouse at all compared to mobile chips.
 
So the POWER series is still quite viable. This is good. I have no problem with Nintendo continuing to use the PowerPC architecture. Anything to break the Intel/AMD focus is good.

Now we just need a third competitor in the GPU business to really push things.
The problem with PowerPC and Radeon isn't a possible next-gen console itself, but the next-gen portable, which Nintendo aims to have share a unified architecture with the console.

I was joking.

That said, the Wii U uses an SoC, and its GPU's performance is probably lower than what an Adreno chip would have at a similar TDP.

The Adreno won't compete against consumer PC graphics cards, but Latte isn't a powerhouse at all compared to mobile chips.
In several years, Nintendo should be able to make a system with Adreno chips that are considerably beyond the Wii U. The tricky part is how much of the Wii U's architecture they are also going to attempt to absorb.
 

krizzx

Junior Member
The problem with PowerPC and Radeon isn't a possible next-gen console itself, but the next-gen portable, which Nintendo aims to have share a unified architecture with the console.


In several years, Nintendo should be able to make a system with Adreno chips that are considerably beyond the Wii U. The tricky part is how much of the Wii U's architecture they are also going to attempt to absorb.

I'd imagine that Espresso on 28nm would be suitable for any portable device.
 

TunaLover

Member
Probably answered to death, but what is the problem with frame rate in Wii U games? I've noticed that even in first-party games like Pikmin 3 there is some stuttering (of course it's almost nonexistent compared to some third-party games). Is it because of the development kits, early first-generation development, or is the CPU just not up to the task?
 
I'd imagine that Espresso on 28nm would be suitable for any portable device.

That would be small, but I'm unsure if it would be portably small. It may need to be on an even smaller process. Do we know what power range Espresso is in?
Probably answered to death, but what is the problem with frame rate in Wii U games? I've noticed that even in first-party games like Pikmin 3 there is some stuttering (of course it's almost nonexistent compared to some third-party games). Is it because of the development kits, early first-generation development, or is the CPU just not up to the task?
Don't think 3D World has framerate issues. IIRC, these framerate stutters are often random rather than taking place during a busy scene. It's likely just that developers are still not super familiar with the hardware yet.
 

Mr Swine

Banned
That would be small, but I'm unsure if it would be portably small. It may need to be on an even smaller process. Do we know what power range Espresso is in?

Don't think 3D World has framerate issues. IIRC, these framerate stutters are often random rather than taking place during a busy scene. It's likely just that developers are still not super familiar with the hardware yet.

I think that Espresso even on a 22nm node is simply too power hungry to be used in a handheld. It might work if Nintendo could get it down to 12-14nm, but then again, it would be a lot better to use an ARM CPU/GPU.
 

jnWake

Member
Pikmin 3 only stutters in the landing scene at the beginning of each day, not during gameplay. I haven't played the game in a while, but I heard that got fixed in a patch... Not sure if it's true though.
 
I think that Espresso even on a 22nm node is simply too power hungry to be used in a handheld. It might work if Nintendo could get it down to 12-14nm, but then again, it would be a lot better to use an ARM CPU/GPU.

Yeah, it's hard to tell without getting a good figure on how much power Espresso is currently sucking. I've heard guesses as low as 6 W. Espresso also might have been made even smaller, but I think they needed it a certain size for the I/O pins. And remember they can clock it however they want to cut draw. I've been going back and forth on interpreting Iwata's comments regarding the unified architecture and how much will be based on the Wii U.

Nintendo would realistically be looking at a fairly affordable SoC. It's still gonna be too early in 2016 for 3D stacking and whatnot I'm thinking, although a "2.5D" package like what's in Vita could happen.

I read an article saying that IBM is now licensing their PowerPC architecture. So Nintendo could theoretically license the cores, tinker with them, and stick them on an SoC manufactured by TSMC. eDRAM will be a problem, but at a smaller node, switching at least the L2 to SRAM shouldn't be an issue. I suppose there is the crazy possibility that Nintendo gets IBM to manufacture the whole SoC. They're basically the only major player (besides Intel) offering eDRAM on modern process nodes.

Interestingly enough, AMD says they're not licensing their GCN designs. They could very well make exceptions for console manufacturers. But they are focusing on their semi-custom line, and that's what Sony and MS got, so who knows if Nintendo could even get something more custom out of them. I would not rule out VLIW5 in their next console if that's the only way they can extend their current Radeon license.
 

Oblivion

Fetishing muscular manly men in skintight hosery
I've got a dumb question that I've been wanting to ask for a long time.

Is the shrinking process (going from 45 nm to 20 nm or whatever) generally a good thing?
 

Rolf NB

Member
I've got a dumb question that I've been wanting to ask for a long time.

Is the shrinking process (going from 45 nm to 20 nm or whatever) generally a good thing?
It's always good for lower per-unit manufacturing costs and lower power draw.

It takes some up-front money to "port" a chip design to a new process node though, especially if it just became available. There's an argument to be made to stick with mature silicon processes and not jump on the new hotness immediately.
 
I've got a dumb question that I've been wanting to ask for a long time.

Is the shrinking process (going from 45 nm to 20 nm or whatever) generally a good thing?

+ Big drop in power consumption (or more ops per watt)
+ Potential to build more chips per wafer
+ Potential to build much "bigger" chips at the same die size
- performance/lifetime issues in NAND flash (SSDs)
- yields suck when you move to a new process node (see the toy yield model below)
- cost: it gets almost exponentially more expensive with each major shrink
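On the yield point, here's a toy sketch in C of the classic Poisson yield model (yield = e^(-area × defect density)). The die area and defect densities are made-up placeholders, not foundry data; the point is just that the same die gets hit much harder early on a new node:

Code:
#include <math.h>
#include <stdio.h>

/* Toy Poisson yield model: yield = exp(-die_area * defect_density).
 * All numbers are made-up placeholders, not foundry data. */
int main(void)
{
    double die_area_cm2 = 1.5;  /* hypothetical die */
    double mature_d0    = 0.1;  /* defects/cm^2 on a mature node */
    double new_node_d0  = 0.5;  /* defects/cm^2 early on a new node */

    printf("Mature node yield: %.0f%%\n", 100.0 * exp(-die_area_cm2 * mature_d0));
    printf("New node yield:    %.0f%%\n", 100.0 * exp(-die_area_cm2 * new_node_d0));
    return 0;
}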
 

pulsemyne

Member
One way to save on power draw would be to knock a few hundred megahertz off the clock speed. Drop it to around 900 and the power draw should be more reasonable. They could also go as far as chopping out a core. A reduction in the die size would make a big difference though; that is where the real power savings come in.
What could prove to be very interesting is if a revision of the Wii U is announced at some point. That would most likely be a revision that reduces power draw further, and it could give a good idea of just how low the chip can go. Personally I think that they would look for about 2kw of power draw for a mobile CPU. If the chip can get to that level then it would be viable.
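As a rough rule of thumb for what that 900 MHz downclock might buy: dynamic power scales roughly with frequency × voltage². A sketch in C, where the ~1.24 GHz stock clock is the commonly reported figure and both voltages are purely illustrative assumptions:

Code:
#include <stdio.h>

/* Rough sketch: dynamic power ~ frequency * voltage^2.
 * Voltages are illustrative assumptions, not measured values. */
int main(void)
{
    double f0 = 1243.0, v0 = 1.00;  /* commonly reported stock clock (MHz), assumed voltage */
    double f1 =  900.0, v1 = 0.90;  /* proposed downclock, assumed slight undervolt */

    double relative = (f1 / f0) * (v1 * v1) / (v0 * v0);
    printf("Dynamic power at 900 MHz: ~%.0f%% of stock\n", 100.0 * relative);
    return 0;
}

Lower clocks usually permit lower voltage, which is where most of the savings come from; with these assumed numbers, you'd land around 59% of stock dynamic power.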
 

HTupolev

Member
Personally I think that they would look for about 2kw of power draw for a mobile CPU.
Either a glorious typo, or Nintendo is moving into the integrated "space heater plus gaming device" industry.

They get more expensive when you shrink?
They can initially. Obviously there's a (huge) cost associated with setting up production for a new version of a die on a new node. But even ignoring that? Per-die production costs on a new node can absolutely be higher, early on in the lifetime of a process node.

Actually, right now is a very scary time, because it seems to be taking new processes longer to surpass their predecessors in metrics like cost-per-transistor than it used to. You might notice that there's been concern recently about people not jumping to 20nm in a very expedient manner.
 
Xbox One's GPU has 1.31 TFLOPS. I knew that was a big difference, but not this huge.

Do you know the GFLOPS for the 360 and PS3 GPUs?

Both are around 350 GFLOPS CPU+GPU, but the PS3 was a bit more (~50 GFLOPS more, I think). Don't think it means much; Cell has 230 GFLOPS and the PS4 CPU is 100 GFLOPS, but it isn't worse.
 

DonMigs85

Member
Wii U GPU has only 176 GFLOPS?

If it really only has 160 stream processors, then yes. My cheap laptop has an 80-core Radeon 7340 GPU, and it's rated at around 90 GFLOPS at 523 MHz.
Xbox 360's Xenos tops out at around 240 GFLOPS, I believe.
Plus, Wii U's GPU only has 1 texture unit per pixel pipe, whereas Xenos and RSX have 2 each.
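For anyone checking these numbers: the usual back-of-the-envelope peak figure is shader count × 2 FLOPs (one multiply-add per ALU per cycle) × clock. A minimal sketch in C, using the clock speeds commonly cited around this thread:

Code:
#include <stdio.h>

/* Peak-FLOPS arithmetic: shaders * 2 FLOPs (multiply-add) * clock.
 * Clock figures are the commonly cited ones, not official specs. */
static double peak_gflops(int shaders, double clock_mhz)
{
    return shaders * 2.0 * clock_mhz / 1000.0;
}

int main(void)
{
    printf("Latte (160 @ 550 MHz): %.0f GFLOPS\n", peak_gflops(160, 550.0));
    printf("7340  ( 80 @ 523 MHz): %.0f GFLOPS\n", peak_gflops(80, 523.0));
    printf("Xenos (240 @ 500 MHz): %.0f GFLOPS\n", peak_gflops(240, 500.0));
    return 0;
}

That reproduces the 176 figure quoted in this thread for Latte and the ~240 for Xenos, and lands in the ballpark of the 7340's nominal ~90 rating.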
 
Xbox One's GPU has 1.31 TFLOPS. I knew that was a big difference, but not this huge.

Do you know the GFLOPS for the 360 and PS3 GPUs?
See above.

The Wii U should still outperform the PS3 and 360 because FLOPS are not created equal. They aren't a good measure unless the systems being compared are from the same generation and on the same process.

That said, yes, the XBone and PS4 are much more powerful... But even then, the Wii U can still produce good-looking games. Just don't expect it to do so at the same resolution, with the same draw distances, with as many polygons rendered, or pushing all of the post-processing pretties that the more powerful systems can.
 
I'd imagine that this is what Shin'en meant by this statement:

'Especially the workings of the CPU caches are very important to master. Otherwise you can lose a magnitude of power for cache relevant parts of your code. In the end the Wii U specs fit perfectly together and make a very efficient console when used right.'

I'd also imagine that this is how devs like Slightly Mad Studios are reportedly getting by so far using only one CPU core for their Wii U build of Project CARS, in spite of the common belief amongst gamers that the Wii U CPU is inferior.

Has there been any statement or proof that Slightly Mad Studios is only using one core on the Wii U version? I'm curious because, if that is the case, can't the developers notice when there are just two cores sitting idle? Are they using the DSP for sound and one core for logic, and that's it?

That all sounds "Slightly Mad" if you ask me.....
 
Has there been any statement or proof that Slightly Mad Studios is only using one core on the Wii U version? I'm curious because, if that is the case, can't the developers notice when there are just two cores sitting idle? Are they using the DSP for sound and one core for logic, and that's it?

That all sounds "Slightly Mad" if you ask me.....

It's just temporary if true. It's like what Shin'en said about managing the embedded memory on the GPU and CPU to get the performance they'll need.
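As a toy illustration of the kind of cache behaviour Shin'en is alluding to (plain C, not actual Wii U code): C arrays are laid out row by row, so the traversal order decides whether each fetched cache line gets fully used or mostly wasted.

Code:
#include <stdio.h>

#define N 1024

static float grid[N][N];

/* Cache-friendly: walks memory in layout order, so each fetched
 * cache line is fully consumed before moving on. */
static float sum_row_major(void)
{
    float s = 0.0f;
    for (int y = 0; y < N; y++)
        for (int x = 0; x < N; x++)
            s += grid[y][x];
    return s;
}

/* Cache-hostile: jumps N floats between accesses, touching a new
 * cache line almost every time and thrashing the cache. */
static float sum_column_major(void)
{
    float s = 0.0f;
    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++)
            s += grid[y][x];
    return s;
}

int main(void)
{
    printf("%f %f\n", sum_row_major(), sum_column_major());
    return 0;
}

Same result either way, but very different memory behaviour - the kind of thing that can cost "a magnitude of power" in cache-relevant code, as the Shin'en quote puts it.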
 
Why is this thread back? I thought everyone came to an agreement and was over this?
There are several reasons, but here are the ones I quickly thought of:

1) The GPU thread for the Wii U was closed a long time ago probably due to derailment and repeated arguments, so there are some discussions of it here.

2) "Expresso" in some form may come back in the future, and that is enforced with Iwata stating that they will try to absorb Wii U's architecture for future systems.
 

LordOfChaos

Member
- yields suck when you move to a new process node
- cost: it gets almost exponentially more expensive with each major shrink

With new ones, yeah, but at 45nm the Wii U could still shrink by moving to a mature process. It's a few generations behind, i.e. they would not have to move to 22/28nm; they could go to 32nm first, or any of the several half-nodes.

Do you know the GFLOPS for the 360 and PS3 GPUs?

Around 200 each. The 360 had a more efficient unified shader architecture though, allowing more power to be put where it was needed.

Both are around 350 GFLOPS CPU+GPU, but the PS3 was a bit more (~50 GFLOPS more, I think). Don't think it means much; Cell has 230 GFLOPS and the PS4 CPU is 100 GFLOPS, but it isn't worse.

Adding the CPU and GPU GFLOPS together is pointless.
 

clav

Member
There are several reasons, but here are the ones I quickly thought of:

1) The GPU thread for the Wii U was closed a long time ago probably due to derailment and repeated arguments, so there are some discussions of it here.

2) "Expresso" in some form may come back in the future, and that is enforced with Iwata stating that they will try to absorb Wii U's architecture for future systems.

How though?

Choosing an IBM processor over x86 is going to hurt them long-term unless Nintendo plans to move over everything to IBM.
 
One way to save on power draw would be to knock a few hundred megahertz off the clock speed. Drop it to around 900 and the power draw should be more reasonable. They could also go as far as chopping out a core. A reduction in the die size would make a big difference though; that is where the real power savings come in.
What could prove to be very interesting is if a revision of the Wii U is announced at some point. That would most likely be a revision that reduces power draw further, and it could give a good idea of just how low the chip can go. Personally I think that they would look for about 2kw of power draw for a mobile CPU. If the chip can get to that level then it would be viable.

What if they continue with the way of thinking they had for Wii backwards compatibility? Both consoles in the Wii U work off the same chip, and when the Wii U is in Wii Mode, the power draw is lower. So if we're speaking of a console/handheld hybrid, do you all think it's possible that they could take this idea further with scaling? When plugged in, the console is fully active, draws more power, and streams to your TV (the reverse of what the GamePad does now); in handheld mode, it runs off the battery, uses less power, and plays a scaled-down version of the same games by using less of the hardware.

Also, I wouldn't think that Nintendo would completely forsake discs again, but we are at the point where large SD cards are getting pretty cheap. Imagine a semi-proprietary one, such as the 3DS game cartridge, only in 2016 for this new console. No disc drive needed, and even less power drawn.
 

krizzx

Junior Member
How though?

Choosing an IBM processor over x86 is going to hurt them long-term unless Nintendo plans to move over everything to IBM.

x86 processors are archaic and need to go away, in my opinion. The efficiency of IBM CPUs is so much greater, watt for watt, nm for nm.

The only benefit of x86 is that it makes it easier for people who were taught on the x86 architecture to move their code around to different systems.

My ideal gaming consoles are systems that offer me something unique compared to my PC, or something my PC can't do better.

I still stand behind my belief that we do not need three of the same console. Variety is a good thing, and I don't like the idea of having all systems get hardware from the same company either. At that point, the only things that would really make a difference in gaming are raw horsepower and the name molded/taped to the side of the casing. There would be no real reason to buy three different consoles, as the one with the highest specs would be the only one that mattered. That is an ill scenario.

The POWER8 or a POWER8/Espresso hybrid would be a nice choice for Nintendo moving forward.
 