
Shin'en Interview on WiiU Hardware: GPU "several generations ahead of Current Gen"

JordanN

Banned
In terms of features it surely is, but in terms of power, it is not.
Has this been confirmed? We don't even know if Wii U has hull and domain shaders.

Just a tessellator hinted to be in the HD 2000/3000/4000 range. That would put it in the middle (although 360 was close to the HD 2000).
 

gngf123

Member
Aren't you the one who used your experience in Unity to discuss the potential of Frostbite 3 coming to the Wii U?

Nice ad hominem.

There were people who said (in this thread) that the Wii U is closer to the Xbox One than 360. And that's either nonsense or fanboy talk.

In some ways it is, in some ways it isn't. Power output isn't one of those things. I'll need to see the posts in question to know what they were talking about. There's a lot of stuff we still don't know.
 

StevieP

Banned
There were people who said (in this thread) that the Wii U is closer to the Xbox One than 360. And that's either nonsense or fanboy talk.

Architecture wise it certainly is. Raw power wise it is closer to the previous gen consoles. If you don't understand that distinction there are plenty of places to read up on it, even here.

Has this been confirmed? We don't even know if Wii U has hull and domain shaders.

It's OpenGL 4.x compliant.
 
Nobody in their right mind has said such a thing, and I don't remember it being mentioned at all. But carry on.

Some people think the gap between the 360 and the Wii U is greater than the gap between the Wii U and the Xbone.

So yes, people have held onto that thought.

But I suppose they wouldn't be in "their right mind" anyway.

Architecture wise it certainly is. Raw power wise it is closer to the previous gen consoles. If you don't understand that distinction there are plenty of places to read up on it, even here.

Isn't power what matters most? Sure, it has new features and whatnot, but it's still only a small leap from current-gen consoles in power.

Also, its extremely low TDP won't all of a sudden put those new architecture features to good use and bring it anywhere near the next-gen consoles.
 
Has this been confirmed? We don't even know if Wii U has hull and domain shaders.

Just a tessellator hinted to be in the HD 2000/3000/4000 range. That would put it in the middle.

True. Info is still pretty scarce. I guess I argue on the side that it's not crazy (or fanboyish) to believe the feature set isn't far off from the One.
 

Haunted

Member
This is not the same at all.

Unlike the companies you posted, Shin'en don't work for Nintendo. They just publish the majority of their games on Nintendo hardware. They are completely third party, with no contractual ties.

Shin'en has seen success on Nintendo hardware since the GBA regardless of how every other third-party dev was doing. They have nothing to lose or gain by doing this. They are just speaking from their own experience and viewpoint.

What is with all of these people trying to write off their statement as just PR or marketing? I could see this hurting their rep more than helping it.
Even if they're not officially part of the company, I see their impartiality as roughly in the same place as those that are. They've clearly aligned themselves with Nintendo at this point and I treat their statements accordingly.
 

daxy

Member
There were people who said (in this thread) that the Wii U is closer to the Xbox One than 360. And that's either nonsense or fanboy talk.

See, if you say the gap between Wii U and XBO is smaller than the gap between 360 and XBO, meaning that Wii U is more capable than 360, that is correct. That person is not necessarily saying Wii U is more comparable to XBO than it is to 360 in terms of performance.

Reading comprehension.
 

fallingdove

Member
Yepp, I agree. It happens particularly often in threads about gaming hardware, and even more often in threads about Nintendo gaming hardware. I really hate all this "ha, my console is more powerful than yours" discussion. It's childish and annoying.

Especially when people are too fucking narrow minded to even read the OP properly. Instead, they just take any opportunity to derail any thread with their stupid fanboy agendas.

Shin'en said the Wii U has newer tech and that programmers need time to utilize it properly. That's how it has been for every console ever made. There is no need to take that statement out of context and start PS4/Wii U hardware wars.

There is no fucking war. Both machines will be able to produce nice-looking graphics. And if one of them isn't your cup of tea and doesn't meet your expectations, then fucking deal with it, but please stop derailing threads with your embarrassing, hidden fanboyism.

Let's not pretend that Nintendo fans are the innocent party. I see comments all the time in threads like these where the first comments are - "See, we told you that the Wii U was on the same level as other next gen consoles." These comments are just as guilty of derailing.
 
I really thought the notion that the Wii U would be comparable to the PS4/Uno had been squashed. Do some people truly believe that?

No. They just refuse to put it on the same level as ps360. Because that's insulting or something. It has to be like 2x current gen.

Oh, I forgot about the "architecture" argument. As if it matters. But hey, anything to avoid speaking of it in the same sentence as current gen I suppose.
 
Architecture wise it certainly is. Raw power wise it is closer to the previous gen consoles. If you don't understand that distinction there are plenty of places to read up on it, even here.

That's not that important, though. The PowerVR SGX-based chip in the Vita is more modern than the PS3 GPU, but that doesn't change the fact that putting the Wii U closer to the Xbox One is absurd.
 
M°°nblade;60180481 said:
Yes, of course, everybody knows that. But the date of finalisation of a chip says little about architecture choices.
2008 GPU tech customised in 2011 is still 2008 tech at its core, because it's based on 2008 GPU architecture with a DX10 feature set, which is different from actual 2011 GPU architecture with a DX11 feature set. I doubt anyone currently discussing this has enough technical expertise to form a valid opinion on whether the difference between a DX9 GPU and a DX10 GPU is bigger or smaller than the difference between a DX10 and a DX11 GPU. The same goes for x86 and PowerPC architectures.
This claim doesn't make any sense. The HD5000 series started development before the HD4000 series was finalized, and the HD4000 series was finalized well before it went on sale.

I don't understand this need to downplay whatever facts could benefit Nintendo. EVERY GPU ON THE MARKET STARTS ITS DEVELOPMENT FROM SOME BASE MODEL, WHICH OF COURSE IS AN OLDER MODEL, AND THEN EVOLVES FROM IT. AND EVERY GPU HAS A DEVELOPMENT CYCLE OF AT LEAST 2 YEARS (in the case of Nintendo, it may have been a longer development, since they go for ultra-customized designs that diverge from standard designs a lot more than is normal even for console vendors).

I can't understand why the fact that Nintendo started to work on its GPU in 2008 makes it impossible for them to go higher than any 2008 design, when some of the GPUs you claim to be superior in terms of architectural advances started their development EARLIER and from OLDER DESIGNS.
 
This claim doesn't make any sense. The HD5000 series started development before the HD4000 series was finalized, and the HD4000 series was finalized well before it went on sale.

I don't understand this need to downplay whatever facts could benefit Nintendo. EVERY GPU ON THE MARKET STARTS ITS DEVELOPMENT FROM SOME BASE MODEL, WHICH OF COURSE IS AN OLDER MODEL, AND THEN EVOLVES FROM IT. AND EVERY GPU HAS A DEVELOPMENT CYCLE OF AT LEAST 2 YEARS (in the case of Nintendo, it may have been a longer development, since they go for ultra-customized designs that diverge from standard designs a lot more than is normal even for console vendors).
There's no need to start yelling. If you reread my post, you'll see that I'm not downplaying anything, since I address people on both sides claiming it's either closer to the X360 or closer to the Xbone. In my simple mind, the Wii U GPU is as many years and generations ahead of Xenos as the Xbone GPU seems to be ahead of the Wii U GPU, and it is as many DirectX feature set levels apart from the X360 as it is from the Xbone.
The real problem is that the (usual) people claiming it is closer to one or the other seem to have even less understanding of GPU technology than I do.

I can't understand why the fact that Nintendo started to work on its GPU in 2008 makes it impossible for them to go higher than any 2008 design, when some of the GPUs you claim to be superior in terms of architectural advances started their development EARLIER and from OLDER DESIGNS.
Nobody's saying it's impossible.
However, people do wonder why, if Nintendo wanted to go higher than a 2008 design, they decided to pick up a 2008 GPU and hang some bells and whistles on it in the first place instead of just picking a 2009, 2010 or 2011 GPU, which seems like a far easier way to achieve the same result.

And like other people just mentioned: does architecture even matter?
The console has been out for more than 6 months and we still haven't seen any releases, screenshots or other footage of 2013 games that put it above PS360 level. Not from multiplatform games, not from exclusive games, not even from Nintendo games.

The proof is in the pudding and all the Wii U pudding we can eat this year tastes like PS360 pudding.
 
M°°nblade;60454381 said:
There's no need to start yelling. If you reread my post, you'll see that I'm not downplaying anything, since I address people on both sides claiming it's either closer to the X360 or closer to the Xbone. In my simple mind, the Wii U GPU is as many years and generations ahead of Xenos as the Xbone GPU seems to be ahead of the Wii U GPU, and it is as many DirectX feature set levels apart from the X360 as it is from the Xbone.
The real problem is that the (usual) people claiming it is closer to one or the other seem to have even less understanding of GPU technology than I do.
Of course you're downplaying it. You're drawing a picture here where the WiiU GPU is old because its development started in 2007-2008, as if the Xbox One and PS4 GPUs were made in less than a year. That's a FALLACY.
We know that the WiiU GPU was developed on top of an R700 core for four years. In the WiiU GPU thread, we had information (from bgassassin) that final silicon wasn't available until December 2011 - January 2012. And we also know that the WiiU had a really premature launch, that the devkits and documentation weren't finalized until post-launch, and that the OS still needed a lot of work, which also means that the GPU is in fact really new.

M°°nblade said:
Nobody's saying it's impossible.
However, people do wonder why, if Nintendo wanted to go higher than a 2008 design, they decided to pick up a 2008 GPU and hang some bells and whistles on it in the first place instead of just picking a 2009, 2010 or 2011 GPU, which seems like a far easier way to achieve the same result.
Some "bells and whistles"? The design we've seen differs completely from any R700 design we've seen.
If they started to develop the GPU in 2007-2008, why not grab the last finalized model, which had great performance for its time, and then start working from there?
The R800 (ATI's first DX11 GPU) started its development on top of the R600, or an even older prototype, which was then reworked along with the R700 and so on.

Nintendo has had 4 years of technological advancements, which it has of course used to its advantage. Even the CPU, which has a very familiar architecture, has benefited from this, with low-latency eDRAM as the L2 cache, giving it the largest amount of cache memory per core of this generation of consoles.

And like other people just mentioned: does architecture even matter?
The console has been out for more than 6 months and we still haven't seen any releases, screenshots or other footage of 2013 games that put it above PS360 level. Not from multiplatform games, not from exclusive games, not even from Nintendo games.

The proof is in the pudding and all the Wii U pudding we can eat this year tastes like PS360 pudding.
If the launch was premature, and Nintendo had problems finalizing it as it should have, then it's natural that a system with a much more modern and different architecture than the 360/PS3 would have so many problems showing its strength.

We are two weeks away from seeing what Nintendo has done over this past year in this regard, and from seeing what this console is capable of. That will be the time to judge.
 

69wpm

Member
So, the TDP of the Wii U was mentioned quite a lot in this thread. How come nobody takes a better look at it? Yes, the TDP of the PS360 is a lot higher, which is not a good thing, quite the contrary, but that should also give you something to think about, namely the relationship between TDP and power. The Wii U draws a lot less power while still being more powerful than the PS360. In conclusion: the Wii U is a lot stronger, because it has newer hardware that makes this performance possible. I still can't believe that people argue about the power difference between these consoles. Until developers start to optimise their engines for the Wii U, you won't see the big graphical leaps you want to see.
 

Rolf NB

Member
Cliff notes version:
0)Dearest freezamite. eDRAM should never appear in the same sentence as "low latency". eDRAM is a denser, slower drop-in for SRAM. Repeat after me: eDRAM is smaller and slower than SRAM.

2)"generations ahead" is a feature set metric. As in, "The Geforce GT610 is several generations ahead of the Geforce 8800 (but this implies absolutely nothing about performance)".

3)Optimizing for caches is both barely possible and hardly necessary. Caches automatically optimize latency and bandwidth of repeat access patterns. That's the whole idea, and it hasn't changed in 30 years.

The only angle you even have is A)manual prefetch instructions and B)cache-bypassing stores for data you know won't be valuable as a cache entry. Intel's had the relevant instructions since the Pentium 3, AMD since the original Athlon, and of course Xenon and Cell PPE have them as well. Nothing about this is new.
And we can immediately throw away A: manual prefetch, because any modern CPU has automatic prediction-based prefetching, and all you're going to do with manual prefetch instructions is cause interference.

B is a fringe benefit in total edge cases. You save at most one complete cache refill, if you happen to stream out exactly enough data to overwrite your cache hierarchy once. If you stream less, you don't lose your whole cache, so the penalty diminishes. If you stream more, the penalty stays the same but gets amortized over more work performed.

I've written software "seriously" for 20 years, and I seriously can't tell you how and what to optimize for a CPU cache, much less for a CPU cache of a certain size. My best generic advice is "don't". I'm completely puzzled by that statement.
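
(For anyone wondering what those two angles actually look like in code, here's a minimal sketch, purely illustrative, using x86 SSE intrinsics since those are the ones most people know; the PowerPC chips in these consoles expose the same ideas through hints like dcbt, which aren't shown here. It assumes 16-byte-aligned buffers and a length that's a multiple of 4.)

```c
/* Minimal sketch of (A) manual prefetch and (B) cache-bypassing stores,
 * using x86 SSE intrinsics as a stand-in. Assumes dst/src are 16-byte
 * aligned and n is a multiple of 4. Purely illustrative. */
#include <xmmintrin.h>  /* _mm_prefetch, _mm_load_ps, _mm_stream_ps, _mm_sfence */
#include <stddef.h>

void stream_copy(float *dst, const float *src, size_t n)
{
    for (size_t i = 0; i < n; i += 4) {
        /* (A) ask for a line we'll want a few iterations from now;
         *     the hardware prefetcher usually does this on its own anyway. */
        _mm_prefetch((const char *)(src + i + 64), _MM_HINT_T0);

        __m128 v = _mm_load_ps(src + i);

        /* (B) non-temporal store: write around the cache so the streamed
         *     output doesn't evict data we'd rather keep resident. */
        _mm_stream_ps(dst + i, v);
    }
    _mm_sfence();  /* make the non-temporal stores visible before returning */
}
```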
 

Hermii

Member
There were people who said (in this thread) that the Wii U is closer to the Xbox One than 360. And that's either nonsense or fanboy talk.

In power it's closer to the 360. In architecture it's closer to the Xbone. Is that so hard to grasp?

Next gen architecture, current gen power.
 
Cliff notes version:
0)Dearest freezamite. eDRAM should never appear in the same sentence as "low latency". eDRAM is a denser, slower drop-in for SRAM. Repeat after me: eDRAM is smaller and slower than SRAM.

2)"generations ahead" is a feature set metric. As in, "The Geforce GT610 is several generations ahead of the Geforce 8800 (but this implies absolutely nothing about performance)".

3)Optimizing for caches is both barely possible and hardly necessary. Caches automatically optimize latency and bandwidth of repeat access patterns. That's the whole idea, and it hasn't changed in 30 years.

The only angle you even have is A)manual prefetch instructions and B)cache-bypassing stores for data you know won't be valuable as a cache entry. Intel's had the relevant instructions since the Pentium 3, AMD since the original Athlon, and of course Xenon and Cell PPE have them as well. Nothing about this is new.
And we can immediately throw away A: manual prefetch, because any modern CPU has automatic prediction-based prefetching, and all you're going to do with manual prefetch instructions is cause interference.

B is a fringe benefit in total edge cases. You save at most one complete cache refill, if you happen to stream out exactly enough data to overwrite your cache hierarchy once. If you stream less, you don't lose your whole cache, so the penalty diminishes. If you stream more, the penalty stays the same but gets amortized over more work performed.

I've written software "seriously" for 20 years, and I seriously can't tell you how and what to optimize for a CPU cache, much less for a CPU cache of a certain size. My best generic advice is "don't". I'm completely puzzled by that statement.

When I think about optimizing for caches or specific cache sizes, things like blocking come to mind. For example, in matrix multiplication it can be useful to split the matrices into several smaller blocks, sized so that each block fits into the cache (instead of multiplying the whole matrices at once the way you normally would).
Just saying, though. I have no experience in game development, so I'm not sure how often such things would be useful in that context.
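
Roughly what that blocking looks like, as a sketch in C, with the block size as a tuning knob you'd pick to suit whatever cache you're targeting (the constant and names here are just illustrative):

```c
/* Cache-blocked matrix multiply: C += A * B, all N x N, row-major,
 * with C assumed zero-initialized by the caller. BLOCK is a tuning knob:
 * pick it so a few BLOCK x BLOCK tiles fit in the cache level you care about. */
#include <stddef.h>

#define BLOCK 64

void matmul_blocked(float *C, const float *A, const float *B, size_t N)
{
    for (size_t ii = 0; ii < N; ii += BLOCK)
        for (size_t kk = 0; kk < N; kk += BLOCK)
            for (size_t jj = 0; jj < N; jj += BLOCK)
                /* work one tile at a time so its rows stay cache-resident */
                for (size_t i = ii; i < ii + BLOCK && i < N; ++i)
                    for (size_t k = kk; k < kk + BLOCK && k < N; ++k) {
                        float a = A[i * N + k];
                        for (size_t j = jj; j < jj + BLOCK && j < N; ++j)
                            C[i * N + j] += a * B[k * N + j];
                    }
}
```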
 
In power it's closer to the 360. In architecture it's closer to the Xbone. Is that so hard to grasp?

Next gen architecture, current gen power.

I don't even know why you quoted me if you just want to write your nonsense.
People were talking about the power of the WiiU and Xbox One.

The talk about architecture is just goalpost moving at this point.
 

Hermii

Member
I don't even know why you quoted me if you just want to write your nonsense.
People were talking about the power of the WiiU and Xbox One.

The talk about architecture is just goalpost moving at this point.

Sorry, I thought it was me you were referring to by "people in this thread". And that's all I claimed.
 
Rolf NB said:
0)Dearest freezamite. eDRAM should never appear in the same sentence as "low latency". eDRAM is a denser, slower drop-in for SRAM. Repeat after me: eDRAM is smaller and slower than SRAM.
IBM's eDRAM implementation in the POWER7 series had access latencies of only 5.7 nanoseconds, compared to 7.5 ns for the 6MB of SRAM on "Tukwila" designs.

The fact is that Wii emulation (and the Wii had an SRAM cache) is done on a single core at the same speed and with the same amount of L2 cache (but this time eDRAM instead of SRAM). If latencies were higher, perfect emulation wouldn't be possible, because longer latencies in such a low-level memory in a CPU affect performance a LOT.
Even the Xbox 360 had problems when it went "mini", because reduced latencies affected how code executed, and delays had to be introduced to compensate. And that was to compensate for REDUCED latencies!

The fact that Wii emulation is done at the same clock speed confirms that the latency to the L2 cache is at least as low as it was, in terms of clock cycles.

Rolf NB said:
2)"generations ahead" is a feature set metric. As in, "The Geforce GT610 is several generations ahead of the Geforce 8800 (but this implies absolutely nothing about performance)".
Of course it's a feature set metric, but newer features mean better performance in a console. At least first and second parties should take advantage of them.

Rolf NB said:
3)Optimizing for caches is both barely possible and hardly necessary. Cache's automatically optimize latency and bandwidth of repeat access patterns. That's the whole idea, and it hasn't changed in 30 years.
Optimizing for much bigger caches, or ones with different specifications, is of course possible and very much necessary, as Shin'en has stated.
The Wii had a more flexible cache implementation than the GC, and the Wii U is AT LEAST as capable per cycle and per KB as the Wii, but with 2MB of L2 on its main core and 512KB on the others, compared to the 256KB Broadway had.

Rolf NB said:
The only angle you even have is A)manual prefetch instructions and B)cache-bypassing stores for data you know won't be valuable as a cache entry. Intel's had the relevant instructions since the Pentium 3, AMD since the original Athlon, and of course Xenon and Cell PPE have them as well. Nothing about this is new.
And we can immediately throw away A: manual prefetch, because any modern CPU has automatic prediction-based prefetching, and all you're going to do with manual prefetch instructions is cause interference.

B is a fringe benefit in total edge cases. You gain a complete cache refill if you happen to stream out exactly enough data to overwrite your cache hierarchy once. If you stream less, you don't lose your whole cache, so the penalty diminishes. If you stream more, the penalty stays the same but gets amortized over more work performed.

I've written software "seriously" for 20 years, and I seriously can't tell you how and what to optimize for a CPU cache, much less for a CPU cache of a certain size. My best generic advice is "don't". Completely puzzled by that statement.
That statement had other implications as I read it. While you focus on low-level instruction optimization, I think they were focusing on the fact that you have a huge 2MB+1MB of lockable L2 cache, so you can use it in a more flexible way to reduce accesses to main memory.

It's less about "reprogramming" the L2 fetches to perform better and more about making the code suited to that 2+1MB configuration.

I insist on the latencies: if they were higher, it would be impossible to emulate the Wii the way it is done. We're talking about hardware-level emulation here.
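
For what it's worth, arguments about cache/memory latency are usually settled with a pointer-chasing microbenchmark rather than by inference from emulation behaviour. A rough, assumption-laden sketch in C (the buffer size, iteration count and coarse clock() timer are all arbitrary, and a real measurement needs more care than this):

```c
/* Rough pointer-chasing sketch: estimates average latency per dependent load
 * for a working set of a chosen size (e.g. just under vs. well over the L2).
 * Sizes, iteration count and the coarse clock() timer are illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    const size_t n = (2u * 1024 * 1024) / sizeof(size_t); /* ~2MB working set */
    size_t *next = malloc(n * sizeof *next);
    if (!next) return 1;

    /* Sattolo's shuffle: builds one big random cycle so the chase touches the
     * whole buffer and the hardware prefetcher can't predict the next line. */
    for (size_t i = 0; i < n; ++i) next[i] = i;
    srand(123);
    for (size_t i = n - 1; i > 0; --i) {
        size_t j = (size_t)rand() % i;
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    const size_t iters = 50u * 1000 * 1000;
    size_t idx = 0;
    clock_t start = clock();
    for (size_t i = 0; i < iters; ++i)
        idx = next[idx];  /* each load depends on the previous one */
    double secs = (double)(clock() - start) / CLOCKS_PER_SEC;

    printf("avg %.1f ns per dependent load (ignore: %zu)\n",
           secs * 1e9 / (double)iters, idx);
    free(next);
    return 0;
}
```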
 
These discussions about the Wii U's power are starting to get ridiculous. I think we all have a pretty good idea of what the console is going to be capable of when developed specifically for the Wii U. Take a PS360 game, add high resolution textures due to having 2x the amount of RAM, better draw distances, and some modern lighting/GPU techniques due to having a newer GPU feature-set, and Bam. Done. That's the Wii U.
 

Xun

Member
These discussions about the Wii U's power are starting to get ridiculous. I think we all have a pretty good idea of what the console is going to be capable of when developed specifically for the Wii U. Take a PS360 game, add high resolution textures due to having 2x the amount of RAM, better draw distances, and some modern lighting/GPU techniques due to having a newer GPU feature-set, and Bam. Done. That's the Wii U.
That's pretty much what I'm expecting, and I'm fine with that.

I honestly think the best looking Wii U games will match the first PS4/Xbone games, albeit at 720p.
 

CrunchyB

Member
It would be interesting to know how many of the people in the WiiU GPU thread actually have a degree in computer engineering.

I have a degree in Computer Engineering and am currently studying Electrical Engineering, where I've taken advanced classes on semiconductor production and RISC processor design. I work as a developer on a high-end (non-game) 3D application.

I don't understand most of the discussion in that thread and I doubt many people do. They are just regurgitating stuff they read somewhere without really understanding any of it. Trying to divine the capabilities of a chipset from photographs is a fool's errand.
 

alan666

Banned
Why do people keep comparing the WiiU to the X360 & PS3 ?

Also if the WiiU is so powerful where are the games ?

I know the WiiU has eDRAM & it's supposed to make all the difference, but why did Nintendo bother with eDRAM, as it is so expensive, & not just go for more DDR3 or even GDDR5?

I have had a WiiU since launch & I want to know where the games are!?
 

Fandangox

Member
Why do people keep comparing the WiiU to the X360 & PS3 ?

Because it's competing with them.

Also if the WiiU is so powerful where are the games ?

Not enough of a userbase/console sales for third parties to bother, and Nintendo is struggling with the shift to HD.

I know the WiiU has eDRAM & it's supposed to make all the difference, but why did Nintendo bother with eDRAM, as it is so expensive, & not just go for more DDR3 or even GDDR5?

They always have very custom-made hardware; I don't know. They make baffling decisions in this regard.

I have had a WiiU since launch & I want to know where the games are!?

Store Shelves.
 
That's pretty much what I'm expecting, and I'm fine with that.

I honestly think the best looking Wii U games will match the first PS4/Xbone games, albeit at 720p.

I don't see the Wii U ever getting a game on par with the high-end late-gen PS3 games, let alone the PS4 launch games. Nintendo will never put those kinds of resources into a game.
 
Well really, 'crap on' has to be filtered through the reality of diminishing returns. But the truth is that the Wii U already has games that graphically look as good as or better than the best PS3 games, while also having more modern lighting and draw distance, and they still look great when you get in really close, step out of a corridor-type situation, or use a camera that isn't fixed.
 
Do you honestly believe that? If so, can I buy some pot off you?

If you think Nintendo will ever put enough money into a game to get it to look like The Last of Us or GT6, I think I'll come to you first for crack.

Seriously, nothing ever shown for the Wii U, up to and including tech demos, is anywhere near the level of something like GoW:A. It's like looking at games separated by an entire generation. Go ahead and compare the two.
 