
WiiU technical discussion (serious discussions welcome)

Donnie

Member
A fixed function pipeline should perform the limited things it can do faster than a programmable pipeline (assuming both have a similar theoretical performance).
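A toy sketch of why that holds (purely illustrative, not how real hardware is built): at equal theoretical throughput, the hardwired path has nothing to fetch, decode, or schedule, while the programmable path spends part of its budget interpreting instructions.

```python
# Toy model of the fixed-function vs. programmable trade-off.
# Costs and structure are made up for illustration only.

def fixed_blend(a, b, t):
    # Hardwired linear interpolation: the datapath does exactly this.
    return a * (1.0 - t) + b * t

def programmable_blend(program, regs):
    # Same math, expressed as a generic instruction stream that must be
    # fetched and decoded step by step.
    for op, dst, x, y in program:
        if op == "sub":
            regs[dst] = regs[x] - regs[y]
        elif op == "mul":
            regs[dst] = regs[x] * regs[y]
        elif op == "add":
            regs[dst] = regs[x] + regs[y]
    return regs["out"]

prog = [("sub", "tmp", "b", "a"),    # tmp = b - a
        ("mul", "tmp", "tmp", "t"),  # tmp = (b - a) * t
        ("add", "out", "a", "tmp")]  # out = a + (b - a) * t

print(fixed_blend(0.2, 0.8, 0.5))                                # 0.5
print(programmable_blend(prog, {"a": 0.2, "b": 0.8, "t": 0.5}))  # 0.5
```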
 

JohnB

Member
Couple of queries for the techs:

The ~3GB SSD which is dedicated to the system: how feasible is it that the Wii U can utilise it for more than, say, streaming data from the Blu-Ray drive? That is, could the system be designed in such a way (depending on bandwidth) that that 3GB could be used to boost MEM2?

Also, last year it was suggested that TEV had been enhanced to provide much better lighting/shadowing - would this be 'cost free', per se, as a direct hardware feature?
 

ozfunghi

Member
Couple of queries for the techs:

The ~3GB SSD which is dedicated to the system: how feasible is it that the Wii U can utilise it for more than, say, streaming data from the Blu-Ray drive? That is, could the system be designed in such a way (depending on bandwidth) that that 3GB could be used to boost MEM2?

Also, last year it was suggested that TEV had been enhanced to provide much better lighting/shadowing - would this be 'cost free', per se, as a direct hardware feature?

I think the main issue with the RAM is a question of bandwidth, not of size. I doubt the SSD is going to solve that problem. It would be much more logical to trim the OS so that it doesn't use up 1GB for itself. I think an additional 512MB would be much preferred over an additional 1-3GB on the SSD, even at the supposed low bandwidth it faces.

Nothing is cost free, from what I gather. But if your question is whether using TEV functions means reducing the total number of SPUs available in the GPU or lowering the full amount of GFLOPS at its disposal, I don't think so. See wsippel's answer above.
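For context, a single TEV combiner stage is commonly described (e.g. in Dolphin emulator documentation) as computing roughly (d ± lerp(a, b, c) + bias) × scale per channel. On a shader-based GPU that's just a few ordinary ALU instructions, which is why replicating TEV behaviour consumes regular shader throughput rather than some separate free unit. A minimal sketch under that assumption:

```python
def tev_stage(a, b, c, d, bias=0.0, scale=1.0, subtract=False):
    """One TEV combiner stage, per channel: roughly
    (d +/- lerp(a, b, c) + bias) * scale, clamped to [0, 1].
    Formula as commonly described for GameCube/Wii hardware."""
    lerp = a * (1.0 - c) + b * c
    out = (d - lerp) if subtract else (d + lerp)
    return max(0.0, min(1.0, (out + bias) * scale))

# A plain modulate (texture colour x vertex colour) uses one stage:
# a = 0, b = texture, c = rasterised colour, d = 0.
print(tev_stage(a=0.0, b=0.75, c=0.5, d=0.0))  # 0.375
```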
 
From Nintendo's Q&A (translated by Ruliweb; should be out in English soon). http://www.neogaf.com/forum/showthread.php?p=47206646#post47206646

Q. Isn't the Wii U architecture too focused on the GPU?

Miyamoto: For high-end graphics there is a hurdle, since we have to re-educate our people. The development itself hasn't changed, but we are recruiting specialists who can become core members in each specialized area. External developers are used to shader techniques, and we are collaborating a lot with external companies nowadays, so we have a very good development structure.

Iwata: Every gaming platform has its specialities. There is a period of hit and miss before the functions can be used fully. We were not able to provide development kits that get out all the power of the Wii U until the middle of last year. With other gaming consoles, firms had 6 to 7 years to experiment, but our console has a different balance, so it is easy to see who has adapted and who hasn't. However, this is something time will heal, so we are not too worried.

Takeda: The Wii U is a machine that has a lot of performance relative to its power consumption. The GPU is definitely more pronounced than the CPU. There are people saying that the CPU is weak, but that is not true. The trend with CPUs is that the cache memory is what's getting bigger, not the processing power. I do not think the CPU is underpowered. It's just a design where the memory is more stressed.
 

JohnB

Member
I think the main issue with the RAM is a question of bandwidth, not of size. I doubt the SSD is going to solve that problem. It would be much more logical to trim the OS so that it doesn't use up 1GB for itself. I think an additional 512MB would be much preferred over an additional 1-3GB on the SSD, even at the supposed low bandwidth it faces.

Nothing is cost free, from what I gather. But if your question is whether using TEV functions means reducing the total number of SPUs available in the GPU or lowering the full amount of GFLOPS at its disposal, I don't think so. See wsippel's answer above.

Thanks; wsippel's post wasn't, em, posted when I posted my post.

That's a dreadful number of 'post's and non sequiturs!

I'm just curious whether 'extra' TEV units have been added, hence the rumours that hardware-based, 'free' DX11-equivalent features might have been added by Nintendo.

Again, ta for the answers.
 
That's good stuff. I don't see Iwata as a liar. I always believed this console would outperform what the numbers are saying. Good times are ahead... 20+ years in the game, and people are questioning whether Nintendo can design a console that outperforms 7-year-old tech. Laughable to me.

Well, 7 years late, you'd expect it to outperform. But it hasn't yet.
 

LeleSocho

Banned
Well, 7 years late, you'd expect it to outperform. But it hasn't yet.

 

Blades64

Banned
Been lurking this thread since the start. Just wanted to drop in and say thanks to the tech guys for all the interesting discussion, and I would also like to ask for us to please stay on topic. I fear we might be going off track here...
 
That's good stuff. I don't see Iwata as a liar. I always believed this console would outperform what the numbers are saying. Good times are ahead... 20+ years in the game, and people are questioning whether Nintendo can design a console that outperforms 7-year-old tech. Laughable to me.

It's illegal to lie to your investors. No one's gonna take that risk... so there's no reason not to believe him.
 

Durante

Member
Now (IF the rumors are true) the 720 has "only" 2.5-3x the FLOPS and Wii U has NO architectural disadvantage on the GPU part. (Using 476 GFLOPS for Wii U and 1200 GFLOPS for 720.)
This is untrue. Of course, it's not comparable to a Wii situation, but it's still an older GPU architecture. If you look a few pages back you can see some speculation as to its per-FLOP disadvantage compared to GCN, which is what's likely being used in Orbis/Durango.
 
Have you guys followed the latest tweets from Cheesemeister3k? He translated the GPU/CPU talk as well. Like I said, I never took Nintendo for big PR talkers; they keep it about as honest as it can get. This has gotten me excited about not only the GPU but also the CPU. I can't wait to see games that push this system.

I did.

Yes, this gets you excited. He basically said the console is a memory-intensified design.

Genyo Takeda: It's a memory-intensified design. I can't get into specifics, but I think it's rather powerful.

And the GPU being "fairly mature and going in the same direction as competitors" is also good, but not surprising.

Genyo Takeda: As for the GPU, the tech is fairly mature and going in the same direction as competitors. Makers are used to prog. shaders.

Genyo Takeda: The experience of other companies and Nintendo with shaders means that the difficulty with them early on has decreased.
 
This is untrue. Of course, it's not comparable to a Wii situation, but it's still an older GPU architecture. If you look a few pages back you can see some speculation as to its per-FLOP disadvantage compared to GCN, which is what's likely being used in Orbis/Durango.

So what graphical features does GCN bring that R7XX could not replicate?
 

Spongebob

Banned
I believe so, because Wii U and Durango are more similar than Wii U and Orbis. And with the 720 being closer to Wii U in terms of power (IF the rumors are true), Orbis as the lead platform would be worse for Wii U. Although even the Wii U - Orbis difference is still a lot smaller than Wii - PS360.

No rumor has pointed to Durango being closer to Wii U than Orbis.

So what graphical features does GCN bring that R7XX could not replicate?

He's talking about efficiency.
 
No rumor has pointed to Durango being closer to Wii U than Orbis.



He's talking about efficiency.

I'm talking about features. Also, look at what he quoted...

And yes, the 1200 GFLOPS of the Durango rumor ( http://www.vgleaks.com/world-exclusive-durango-unveiled-2/ )

vs

the 1843 GFLOPS of the Orbis rumor ( http://www.vgleaks.com/world-exclusive-orbis-unveiled/ )

brings the 720 closer to Wii U. And since both are supposed to be GCN-based, the FLOPS numbers of Orbis and Durango are directly comparable.
 

Durante

Member
So what graphical features does GCN bring that R7XX could not replicate?
Talking about "graphical features" detracts from the point. You were comparing GFLOPs and stating that there is no architectural disadvantage for Wii U. Since, however, Wii U GFLOPs are presumably "VLIW5 FLOPS" and the other consoles' numbers are "GCN FLOPS", there is an architectural disadvantage that comes into play for this particular comparison: the higher per-FLOP efficiency of GCN.
 
Talking about "graphical features" detracts from the point. You were comparing GFLOPs and stating that there is no architectural disadvantage for Wii U. Since, however, Wii U GFLOPs are presumably "VLIW5 FLOPS" and the other consoles' numbers are "GCN FLOPS", there is an architectural disadvantage that comes into play for this particular comparison: the higher per-FLOP efficiency of GCN.

Where was I talking about efficiency?

Yeah, that's one downside for Wii U. But the feature set is basically complete and comparable across all three consoles.
 

Durante

Member
You were talking about "architectural disadvantages". I consider lower efficiency an architectural disadvantage, especially in the context of a GFLOP-level comparison.
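A back-of-the-envelope way to see the point, using the peak figures quoted upthread; the utilization numbers below are illustrative assumptions, not measurements (VLIW5 occupancy depends entirely on how well the compiler fills its five slots):

```python
# Effective throughput = peak FLOPS x achieved utilization.
# Peak figures are the ones quoted upthread; utilizations are assumed.

def effective_gflops(peak, utilization):
    return peak * utilization

wiiu_eff = effective_gflops(476.0, 0.70)      # assumed VLIW5 occupancy
durango_eff = effective_gflops(1200.0, 0.90)  # assumed GCN occupancy

print(f"paper ratio:     {1200.0 / 476.0:.2f}x")         # ~2.52x
print(f"effective ratio: {durango_eff / wiiu_eff:.2f}x")  # ~3.24x
```

So even at identical paper GFLOPS, the architectures would not deliver identical frame rates, which is all "architectural disadvantage" means here.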
 

ugoo18

Member
Have you guys followed the latest tweets from Cheesemeister3k? He translated the GPU/CPU talk as well. Like I said, I never took Nintendo for big PR talkers; they keep it about as honest as it can get. This has gotten me excited about not only the GPU but also the CPU. I can't wait to see games that push this system.

Any links?
 

Spongebob

Banned
I'm talking about features. Also, look at what he quoted...

And yes, the 1200 GFLOPS of the Durango rumor ( http://www.vgleaks.com/world-exclusive-durango-unveiled-2/ )

vs

the 1843 GFLOPS of the Orbis rumor ( http://www.vgleaks.com/world-exclusive-orbis-unveiled/ )

brings the 720 closer to Wii U. And since both are supposed to be GCN-based, the FLOPS numbers of Orbis and Durango are directly comparable.

Is that you iamshadowlark?

Why would you just look at the GPU FLOPS??

Durango: 8GB DDR3 (68GB/s) + 32MB ESRAM vs Wii U: 2GB DDR3 (12.8GB/s) + 32MB eDRAM

Durango: 1.23TFLOPS GCN GPU vs Wii U: 0.3-0.6TFLOPS VLIW5 GPU

Durango: 8 Jaguar cores @ 1.6GHz vs Wii U: 3 Espresso cores @ 1.2GHz

Durango is far closer to Orbis than it is to the Wii U.
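As an aside, the rumoured main-memory bandwidth figures in that list fall straight out of bus width × transfer rate, assuming a 64-bit DDR3-1600 bus for Wii U and a 256-bit DDR3-2133 bus for Durango (the configurations the leaks and teardowns pointed to):

```python
# Peak bandwidth (GB/s) = bus width in bytes x transfer rate in GT/s.
# Bus configurations are the rumoured/teardown ones, not official specs.

def bandwidth_gbs(bus_bits, mega_transfers):
    return (bus_bits / 8) * (mega_transfers / 1000)

print(bandwidth_gbs(64, 1600))   # Wii U:   64-bit DDR3-1600 -> 12.8 GB/s
print(bandwidth_gbs(256, 2133))  # Durango: 256-bit DDR3-2133 -> ~68.3 GB/s
```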
 

KingSnake

The Birthday Skeleton
Any links?

Cheesemeister (@Cheesemeister3k) tweeted at 5:42 PM on Sat, Feb 02, 2013:
Genyo Takeda: I don't think that the #WiiU CPU and GPU are imbalanced in favor of the GPU. It depends how you measure. GPU chip is bigger.
(https://twitter.com/Cheesemeister3k/status/297746798385172480)

Cheesemeister (@Cheesemeister3k) tweeted at 5:44 PM on Sat, Feb 02, 2013:
Genyo Takeda: In modern CPUs, the math logic portions are rather small. In new PCs and servers, the CPUs may be big, but the logic is small.
(https://twitter.com/Cheesemeister3k/status/297747369645187073)

Cheesemeister (@Cheesemeister3k) tweeted at 5:48 PM on Sat, Feb 02, 2013:
Genyo Takeda: It's common for the surrounding SRAM cache memory in CPUs to be bigger. From that viewpoint, you wouldn't see them imbalanced.
(https://twitter.com/Cheesemeister3k/status/297748259177365504)
 
Is that you iamshadowlark?

Why would you just look at the GPU FLOPS??

Durango: 8GB DDR3 (68GB/s) + 32MB ESRAM vs Wii U: 2GB DDR3 (12.8GB/s) + 32MB eDRAM

Durango: 1.23TFLOPS GCN GPU vs Wii U: 0.3-0.6TFLOPS VLIW5 GPU

Durango: 8 Jaguar cores @ 1.6GHz vs Wii U: 3 Espresso cores @ 1.2GHz

Durango is far closer to Orbis than it is to the Wii U.
I don't think he is arguing that. He was saying that if Durango < PS4, then it is closer to the Wii U than the PS4 is.

Cheesemeister (@Cheesemeister3k) tweeted at 5:42 PM on Sat, Feb 02, 2013:
Genyo Takeda: I don't think that the #WiiU CPU and GPU are imbalanced in favor of the GPU. It depends how you measure. GPU chip is bigger.
(https://twitter.com/Cheesemeister3k/status/297746798385172480)

Cheesemeister (@Cheesemeister3k) tweeted at 5:44 PM on Sat, Feb 02, 2013:
Genyo Takeda: In modern CPUs, the math logic portions are rather small. In new PCs and servers, the CPUs may be big, but the logic is small.
(https://twitter.com/Cheesemeister3k/status/297747369645187073)

Cheesemeister (@Cheesemeister3k) tweeted at 5:48 PM on Sat, Feb 02, 2013:
Genyo Takeda: It's common for the surrounding SRAM cache memory in CPUs to be bigger. From that viewpoint, you wouldn't see them imbalanced.
(https://twitter.com/Cheesemeister3k/status/297748259177365504)
Thanks for posting that. Very interesting.
 

ugoo18

Member
Cheesemeister (@Cheesemeister3k) tweeted at 5:42 PM on Sat, Feb 02, 2013:
Genyo Takeda: I don't think that the #WiiU CPU and GPU are imbalanced in favor of the GPU. It depends how you measure. GPU chip is bigger.
(https://twitter.com/Cheesemeister3k/status/297746798385172480)

Cheesemeister (@Cheesemeister3k) tweeted at 5:44 PM on Sat, Feb 02, 2013:
Genyo Takeda: In modern CPUs, the math logic portions are rather small. In new PCs and servers, the CPUs may be big, but the logic is small.
(https://twitter.com/Cheesemeister3k/status/297747369645187073)

Cheesemeister (@Cheesemeister3k) tweeted at 5:48 PM on Sat, Feb 02, 2013:
Genyo Takeda: It's common for the surrounding SRAM cache memory in CPUs to be bigger. From that viewpoint, you wouldn't see them imbalanced.
(https://twitter.com/Cheesemeister3k/status/297748259177365504)

Thanks for that
 
I think this is not realistic. They're hiring more of them.
Yes. If you look at Nintendo Land, you can see that Nintendo were already experimenting with different variations of advanced shaders and lighting.

Have there been other systems that would be labeled as having a "memory intensive" architecture?
 

tipoo

Banned
I did.

Yes, this gets you excited. He basically said the console is a memory-intensified design.



And the GPU being "fairly mature and going in the same direction as competitors" is also good, but not surprising.

What does "memory intensified" even mean? I think something is being lost in translation here. In the context he said it in I assume he's saying they found it more important to increase the CPU cache than other aspects of it?

As for the GPU part of course they're going to be using programmable shaders, that's all that's being made outside of mobile now.

edit:
Genyo Takeda: It's common for the surrounding SRAM cache memory in CPUs to be bigger. From that viewpoint, you wouldn't see them imbalanced.

I guess that clears it up a bit: he's saying that since it doesn't use SRAM for L2, it's not as imbalanced as the raw sizes would imply, since eDRAM takes (I think) about a third of the die area per bit.
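Rough area math under that assumption (the ~3x density advantage of eDRAM over 6T SRAM is an approximation and varies by process):

```python
# If an eDRAM cell takes roughly a third of the die area of a 6T SRAM
# cell (approximate; process-dependent), then a 32MB eDRAM pool costs
# about as much silicon as ~10.7MB of SRAM would.

edram_mb = 32
density_advantage = 3.0  # assumed eDRAM-vs-SRAM density factor
print(f"32MB eDRAM ~ area of {edram_mb / density_advantage:.1f}MB SRAM")
```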
 

Oblivion

Fetishing muscular manly men in skintight hosery
The more I hear about the GPU, the better it seems, but then I look at some of Nintendo's own efforts like NSMBU and Nintendo Land, and while they're definitely good looking games, neither of them are complex enough to make 1080p impossible. But both are 720p. True, the NSMB team was never part of Nintendo's A team when it came to graphics, but still.
 
The more I hear about the GPU, the better it seems, but then I look at some of Nintendo's own efforts like NSMBU and Nintendo Land, and while they're definitely good looking games, neither of them are complex enough to make 1080p possible. But both are 720p. True, the NSMB team was never part of Nintendo's A team when it came to graphics, but still.

What do you even mean when you say this? Graphical complexity does not dictate screen resolution.
 

Oblivion

Fetishing muscular manly men in skintight hosery
What do you even mean when you say this? Graphical complexity does not dictate screen resolution.

It doesn't?

I should probably clarify. I know that the Wii U is capable of 1080p (Rayman and Scribblenauts do that); what I'm saying is that you would think something like NSMBU would be able to do everything it's doing right now while also being in 1080p.
 

Meelow

Banned
It doesn't?

I should probably clarify. I know that the Wii U is capable of 1080p (Rayman and Scribblenauts do that); what I'm saying is that you would think something like NSMBU would be able to do everything it's doing right now while also being in 1080p.

I think Nintendo really just wanted to make sure NSMBU was a launch title, so they didn't really try to push the graphics or target a higher resolution. I think NSMBU could have easily been native 1080p on the Wii U.
 

tipoo

Banned
It doesn't?

I should probably clarify. I know that the Wii U is capable of 1080p (Rayman and Scribblenauts do that); what I'm saying is that you would think something like NSMBU would be able to do everything it's doing right now while also being in 1080p.

So wouldn't "neither are complex enough to make 1080p impossible" make more sense? The more graphically complex something is, the more the GPU has to do, and the more likely you have to fall back to a lower resolution to maintain acceptable framerates.
 
It doesn't?

I should probably clarify. I know that the Wii U is capable of 1080p (Rayman and Scribblenauts do that); what I'm saying is that you would think something like NSMBU would be able to do everything it's doing right now while also being in 1080p.

I think - and it may be backed up by an interview, IIRC - that Nintendo targeted 720p for their first round of games for ease of development and to ensure solid performance.
 
Talking about "graphical features" detracts from the point. You were comparing GFLOPs and stating that there is no architectural disadvantage for Wii U. Since, however, Wii U GFLOPs are presumably "VLIW5 FLOPS" and the other consoles' numbers are "GCN FLOPS", there is an architectural disadvantage that comes into play for this particular comparison: the higher per-FLOP efficiency of GCN.
The main reason GCN is better than VLIW5 is that it's a simpler architecture to optimize drivers for on PC, where games are programmed on much more abstract layers than on consoles (and also for GPGPU, I presume, but I don't know how a modified VLIW5 would perform in that regard).

VLIW5 in a console makes a lot of sense, because EVERYTHING will be programmed towards this architecture, which is more efficient per mm^2 than GCN.
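A sketch of the occupancy point (illustrative numbers only): each VLIW5 instruction bundles up to five independent scalar operations, so classic vec4+scalar graphics math fills the bundle, while serial scalar-heavy code wastes slots. On a console, where all code targets this one GPU, shaders can be tuned to pack well:

```python
# Illustrative: a VLIW5 unit issues one 5-slot bundle per clock, so the
# fraction of peak FLOPS achieved equals the fraction of slots filled.

def vliw5_occupancy(independent_ops):
    return min(independent_ops, 5) / 5

print(vliw5_occupancy(5))  # vec4 MAD + scalar op: 1.00 -> full rate
print(vliw5_occupancy(3))  # vec3-only math:       0.60
print(vliw5_occupancy(1))  # serial scalar code:   0.20 -> a fifth of peak
```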
 
So wouldn't "neither are complex enough to make 1080p impossible" make more sense? The more graphically complex something is, the more the GPU has to do, and the more likely you have to fall back to a lower resolution to maintain acceptable framerates.

Pretty much this. And even still, no matter how complex your graphics are, you can render them out at whatever resolution you want. Whether or not it runs well is a different story, however. If EA wanted Battlefield 3 to run at 1080p on PS3 or 360, they could have, but the game would likely run at sub-20fps at best. Killzone 2 at 4K resolution? Entirely doable... if you enjoy watching slideshows as opposed to playing video games...
 

chaosblade

Unconfirmed Member
The main reason GCN is better than VLIW5 is that it's a simpler architecture to optimize drivers for on PC, where games are programmed on much more abstract layers than on consoles (and also for GPGPU, I presume, but I don't know how a modified VLIW5 would perform in that regard).

VLIW5 in a console makes a lot of sense, because EVERYTHING will be programmed towards this architecture, which is more efficient per mm^2 than GCN.

I haven't seen anything that has convinced me you can get more performance out of VLIW5 than GCN, even though its theoretical performance is better. But maybe I'm mistaken; I brought this up before but didn't really get a direct response.
 

ozfunghi

Member
I remember discussions on VLIW5 vs GCN in a console from months ago, in the WUST threads, where the same point was being made: that unlike on PC, the advantage for GCN was nil. But that was back when BG was still around. Don't know how accurate this is, but it has been brought up before.
 
I remember discussions on VLIW5 vs GCN in a console from months ago, in the WUST threads, where the same point was being made: that unlike on PC, the advantage for GCN was nil. But that was back when BG was still around. Don't know how accurate this is, but it has been brought up before.
At the very least, the GCN architecture did boost the FLOPS/sp compared to the VLIW5 architecture. Durango's GPU has at most only twice the sp count of Wii U's GPU, for example, but the difference in FLOPs will be a bit wider.
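For reference, peak FLOPS is normally computed the same way for both architectures: shader processors × clock × 2 (one multiply-add per SP per clock). With the rumoured figures, the FLOPS gap being wider than the sp-count gap is mostly down to clock speed; all numbers below are rumours, not confirmed specs:

```python
# Peak GFLOPS = shader processors x clock (GHz) x 2 (multiply-add).
# SP counts and clocks below are rumoured figures, not confirmed specs.

def peak_gflops(sps, clock_ghz):
    return sps * clock_ghz * 2

wiiu = peak_gflops(320, 0.55)    # 352 GFLOPS   (rumoured 320sp @ 550MHz)
durango = peak_gflops(768, 0.8)  # ~1229 GFLOPS (rumoured 768sp @ 800MHz)
print(f"sp ratio:    {768 / 320:.1f}x")       # 2.4x
print(f"FLOPS ratio: {durango / wiiu:.1f}x")  # ~3.5x
```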
 

ozfunghi

Member
At the very least, the GCN architecture did boost the FLOPS/sp compared to the VLIW5 architecture. Durango's GPU has at most only twice the sp count of Wii U's GPU, for example, but the difference in FLOPs will be a bit wider.

You mean it gained more FPS per actual FLOP (see Durante's remark). But this is in a PC context; is it proven/known that this is the case without the "software layer" known as Windows? Meaning, in a dedicated console? Or does the formula to calculate FLOPS differ for the GCN architecture (which is somewhat what you are saying)?

The difference is much more pronounced than the FLOPS suggest:

http://ht4u.net/reviews/2012/amd_radeon_hd_7700_test/index4.php

HD5770 vs HD7770 clocked the same: the 5770 has 20+% more raw FLOPS but is outperformed in almost every task, by as much as 37%.

Right, but this is all in a Windows environment. The argument was whether the same discrepancy holds up in a console environment.
 
At the very least, the GCN architecture did boost the FLOPS/sp compared to the VLIW5 architecture. Durango's GPU has at most only twice the sp count of Wii U's GPU, for example, but the difference in FLOPs will be a bit wider.

The difference is much more pronounced than the FLOPS suggest:

http://ht4u.net/reviews/2012/amd_radeon_hd_7700_test/index4.php

HD5770 vs HD7770 clocked the same: the 5770 has 20+% more raw FLOPS but is outperformed in almost every task, by as much as 37%.
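Turning those review numbers into a per-FLOP figure (a rough, worst-case reading of the linked benchmarks; and as noted above, this is all measured under Windows drivers):

```python
# Rough worst-case per-FLOP comparison from the linked review:
# at equal clocks the HD5770 (VLIW5, 800sp) has 25% more raw FLOPS than
# the HD7770 (GCN, 640sp), yet trails by up to 37% in the benchmarks.

raw_flops_ratio = 800 / 640  # 1.25x raw FLOPS in favour of VLIW5
perf_ratio = 1.37            # GCN up to 37% faster (review figure)

print(f"GCN per-FLOP advantage: ~{perf_ratio * raw_flops_ratio:.2f}x")
# ~1.71x -- measured on PC, under Windows drivers, which is exactly the
# caveat raised above about whether it carries over to a console.
```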
 