
WiiU technical discussion (serious discussions welcome)

Wasn't that strictly in regards to security? The same setup was used in GameCube and Wii.

And yeah, no offense, but Wsippel's research often turns out extremely optimistic.
Well, there are tasks like background downloads that are likely using the ARM processor, but we don't really have any official info either way.

Well, criticizing "GPGPU magic" has some validity, when it is proposed as a general cure for weak CPU performance. Just like criticizing eDRAM as a general solution for all memory bandwidth issues is valid.

For an example, you need look no further than how harshly people reacted in the current Durango thread when it was proposed that you can simply see it as a system with 8 GB of 170 GB/s memory. Or even the reaction to the "magic move engines". Basically, unrealistic scenarios are pointed out as such regardless of where they come from, or which hardware they pertain to.
I think he was specifically addressing some users that have been critical of the Wii U hardware but defend Microsoft's hardware decisions. Having said that, I have seen people (particularly in the Beyond3D threads) being very critical of a lot of Microsoft's hardware decisions.
 

ozfunghi

Member
If the current rumours are accurate, a factor of 4-6 in GPU performance (and a newer architecture as well), up to 7 in CPU performance and around 6 in external memory bandwidth. (The latter is 15 instead of 6 for Orbis, but its memory setup is not comparable)

You are putting the U-GPU at 300 Gflops? Or what new rumors do you mean? If it's only 300 Gflops, then what is taking up all that space?

And how do you get "up to 7 in CPU performance"? I'm not questioning what you are saying, but what I read is that these were basically CPUs for the next generation of tablets?
 

Durante

Member
You are putting the U-GPU at 300 Gflops? Or what new rumors do you mean? If it's only 300 Gflops, then what is taking up all that space?
No, I'm putting it at 320-500. 1.84/0.32 rounds up to 6, which gives you the upper end of the estimate. Though that doesn't include efficiency gains from the GCN architecture, which are pretty significant in some cases. I guess the lowest end number should be 2.5 if you assume the best case scenario for Wii U, take the weaker (in numbers at least) console of the other two for comparison, and disregard all architectural improvements.
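A quick back-of-the-envelope version of that arithmetic, using the rumoured figures from this thread (1.84 TFLOPS for Orbis is a rumour, the ~1.23 TFLOPS Durango figure is an assumption from the same leaks, and the Wii U range is an estimate; none of it is confirmed):

# Rough GPU FLOPS ratios from the rumoured/estimated numbers above.
orbis_tflops = 1.84                 # rumoured Orbis GPU
durango_tflops = 1.23               # rumoured Durango GPU (assumption)
wiiu_low, wiiu_high = 0.32, 0.50    # Wii U estimate range used in this thread

print(round(orbis_tflops / wiiu_low, 2))     # ~5.75, which "rounds up to 6"
print(round(durango_tflops / wiiu_high, 2))  # ~2.46, the "lowest end" ~2.5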
 

USC-fan

Banned
Kinda amusing that, now that the NextBox is using a similar setup, the eDRAM stopped being "A Nintard wishful-thinking setup" and is now a serious solution to a (relatively) low main memory bandwidth architecture.

The problem has always been the wishful thinking. Looking at the games released for the wuu, the eDRAM BW cannot be that great. Most likely around the same as the Xbox 360's 32 GB/s.

Just having eDRAM doesn't equal massive BW...

eDRAM was added to the wuu for its cost and power savings.
 
What significant architectural gains are we talking about that Nintendo couldn't have customized into their GPU?
Keep the faith my friend, to the bitter end.
Excuse me?
You are excused.

Sporting a blatantly obvious Nintendo alignment. Architectural gains that Nintendo customized into its GPU? It's already been settled at DirectX 10.1 level. All the significant architectural gains of DX11 hardware are not there. Even if Nintendo doesn't use the DirectX API.

The hardware is not what you want it to be, insisting won't change or fix things. It's the same cycle we already ran through last round. But wait there has to be some secret sauce in Hollywood, right? Since it has an increased transistor count. IT HAS TO BE!
 

ozfunghi

Member
You are excused.

I think you got the wrong topic.

Sporting a blatantly obvious Nintendo alignment. Architectural gains that Nintendo customized into its GPU? It's already been settled at DirectX 10.1 level. All the significant architectural gains of DX11 hardware are not there. Even if Nintendo doesn't use that API.

Ah, a semi-authentic edit. So again, my question, what are these significant architectural gains? It was an honest question to begin with, but it seems you are more interested in flames and/or trolling.

PS: you do know there was an interview with a developer that spoke of DX11-level features?

The hardware is not what you want it to be, insisting won't change or fix things. It's the same cycle we already ran through last round. But wait there has to be some secret sauce in Hollywood, right? Since it has an increased transistor count. IT HAS TO BE!

Ah, and the good stuff keeps coming. Please, if you don't have anything of value to say, go bother someone else.
 

Durante

Member
What significant architectural gains are we talking about that Nintendo couldn't have customized into their GPU?
Primarily, the gains involved with changing the fundamental architecture away from VLIW, which AMD did with GCN. It's not something you just "customize into your GPU".

Looks like MS has spent a lot of money on memory performance in the x720.
I wouldn't necessarily agree with that. It looks like they went for capacity, and then tried to find a way to offer enough bandwidth without blowing up the budget. Hence, embedded memory. If 4 GB at 200 GB/s is accurate, then Sony are the ones who really spent on memory.

(Edit: I'm not saying that Sony's architecture is necessarily much better here, we don't know enough yet. But it certainly looks more expensive)
 

tipoo

Banned
Kinda amusing that, now that the NextBox is using a similar setup, the eDRAM stopped being "A Nintard wishful-thinking setup" and is now a serious solution to a (relatively) low main memory bandwidth architecture.

We knew how much the eDRAM helped out the Xbox 360, I don't think anyone said it was useless. We also know it doesn't completely make up for slow main memory though, similar to how a CPU having a large cache helps with slow main memory but it doesn't completely eliminate the problem.

It was widely referred to as "GPGPU/eDRAM magic" :p


Some of the GPGPU magic criticisms were fair though, I think. It usually only came up in the context of Nintendo fans saying GPGPU could fix everything about the allegedly slow CPU, but anyone with a bit of knowledge in the matter knows that's not the case. Few workloads lend themselves well to GPU compute, for one, and even for the ones that do, you would take a substantial hit to 3D performance even with today's most modern, highest-end single-chip graphics cards.
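To make that concrete, here's a tiny sketch of the kind of work that does and doesn't map to GPU compute (illustrative Python/NumPy only, nothing Wii U specific; the particle update is a hypothetical workload):

import numpy as np

# Data-parallel work: every element is independent, so it maps well to GPU compute.
pos = np.zeros((100_000, 3), dtype=np.float32)
vel = np.random.randn(100_000, 3).astype(np.float32)
pos += vel * (1.0 / 60.0)        # one wide, branch-free update over all particles

# Serially dependent work: each step needs the previous result, so thousands of
# GPU threads buy you nothing. A lot of gameplay/AI logic looks like this.
state = 0
for event in (1, 0, 2, 1):
    state = (state * 3 + event) % 7
print(state)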
 

Schnozberry

Member
The problem has always been the wishful thinking. Looking at the games released for the wuu, the eDRAM BW cannot be that great. Most likely around the same as the Xbox 360's 32 GB/s.

Just having eDRAM doesn't equal massive BW...

eDRAM was added to the wuu for its cost and power savings.

Sigh. The 32GB/s number on the Xbox 360 was for the bus that connected the eDRAM daughter die to the GPU. The Wii U has the same amount of on-die eDRAM as the rumored specs for Durango. The speed for both is still unknown, but it is likely much faster than the main memory pool in both systems.

EDRAM is expensive. It only offers cost savings in the sense that it would somewhat help balance the performance of using slower DDR3 for the main memory. To what extent I don't know.
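As a rough illustration of why render targets end up in embedded memory at all, here's a back-of-the-envelope framebuffer traffic estimate. Every factor below is an assumption chosen for illustration, and the ~12.8 GB/s DDR3 figure is the commonly cited rumour for the Wii U's main memory, not a confirmed spec:

# Rough 720p framebuffer traffic: colour + depth read/write per fragment,
# times assumed overdraw, MSAA samples and frame rate. Illustrative only.
pixels = 1280 * 720
bytes_per_fragment = 16   # 4B colour read + 4B colour write + 4B Z read + 4B Z write
overdraw = 4              # assumed average fragments per pixel
msaa = 4                  # assumed 4x multisampling
fps = 60

traffic_gb_s = pixels * bytes_per_fragment * overdraw * msaa * fps / 1e9
print(round(traffic_gb_s, 1))   # ~14.2 GB/s, already above a ~12.8 GB/s DDR3 bus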
 

tipoo

Banned
You are putting the U-GPU at 300 Gflops? Or what new rumors do you mean? If it's only 300 Gflops, then what is taking up all that space?
And how do you get "up to 7 in CPU performance"? I'm not questioning what you are saying, but what I read is that these were basically CPUs for the next generation of tablets?

I don't feel like pulling the numbers, but factoring in 32MB of eDRAM didn't that leave a completely normal ~100mm2 for the GPU components?


And I won't defend his 7x CPU performance remark since we have no idea about the IPC of the U CPU, but just looking at the core counts and clock speeds, the Durango would have 2.6x more processor cores each clocked 1.33x faster than the Wii U. Core counts and clock speeds don't tell you everything, but the difference seems pretty large.
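For what it's worth, the raw core-and-clock part of that comparison is easy to sanity-check. The Durango figures are rumours, the ~1.24 GHz Wii U CPU clock is the widely reported figure, and per-core IPC is left out entirely:

# Raw cycles-per-second comparison only; says nothing about per-core IPC.
wiiu_cores, wiiu_ghz = 3, 1.24        # widely reported Espresso clock
durango_cores, durango_ghz = 8, 1.6   # rumoured

core_ratio = durango_cores / wiiu_cores      # ~2.67x
clock_ratio = durango_ghz / wiiu_ghz         # ~1.29x
print(round(core_ratio * clock_ratio, 2))    # ~3.44x raw throughput before IPC

So an "up to 7x" claim implies roughly a 2x per-clock, per-core advantage on top of the raw numbers, which is exactly the part nobody can verify yet.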
 

Schnozberry

Member
Keep the faith my friend, to the bitter end.

You are excused.

Sporting a blatantly obvious Nintendo alignment. Architectural gains that Nintendo customized into its GPU? It's already been settled at DirectX 10.1 level. All the significant architectural gains of DX11 hardware are not there. Even if Nintendo doesn't use the DirectX API.

The hardware is not what you want it to be, insisting won't change or fix things. It's the same cycle we already ran through last round.

There weren't any significant architectural gains for DX11 hardware. The changes were pretty minor, really.

Nobody but Microsoft uses the DX11 API. They own it. Also, troll harder.
 
What significant architectural gains are we talking about that Nintendo couldn't have customized into their GPU?

As opposed to it being added to Durango, where the main motivation was just for the laughs?

Excuse me?
Durante will probably give you a better answer but from what I'm reading the Wii U GPU is older VLIW5 architecture while Durango and Orbis will use GCN/GCN2.

Saw this comparison posted:
HD 5770: 1360 GFLOP/s, 34.0 GTex/s, 13.6 GPix/s, 76.8 GB/s
HD 7770: 1088 GFLOP/s, 34.0 GTex/s, 13.6 GPix/s, 76.8 GB/s

Despite the latter having lower FLOPS it outperforms the former.

(Someone please correct me if this is completely off base.)
 
There weren't any significant architectural gains for DX11 hardware. The changes were pretty minor, really.

Nobody but Microsoft uses the DX11 API. They own it. Also, troll harder.
Did you read Durante's response? Did you at least try? While it's true I was harsh in my comments, and my apologies if I offended, we are basically both saying the same thing. Of course Durante was more specific, since he's an expert and has far more knowledge than I do. But in principle it is the same answer.


ozfunghi is in wishful thinking mode; he thinks Nintendo would have customised the GPU enough to compensate for the improvements that a new architecture supporting a more cutting-edge API will feature.

Hopefully, you understand now. My harsh reaction was due to this matter being explained ad nauseam, just to see the "devoted" still insisting with the same tune.
 

ozfunghi

Member
Durante will probably give you a better answer but from what I'm reading the Wii U GPU is older VLIW5 architecture while Durango and Orbis will use GCN/GCN2.

Saw this comparison posted:
HD 5770: 1360 GFLOP/s, 34.0 GTex/s, 13.6 GPix/s, 76.8 GB/s
HD 7770: 1088 GFLOP/s, 34.0 GTex/s, 13.6 GPix/s, 76.8 GB/s

Despite the latter having lower FLOPS it outperforms the former.

(Someone please correct me if this is completely off base.)

Thanks. I knew about VLIW5 vs GCN(2), but I don't/didn't know what the actual benefits are. Well, now I see it performs "better". Can anyone point out what the reason is (in plain English)?
 

Schnozberry

Member
Did you read Durante's response? Did you at least try? While it's true I was harsh in my comments, and my apologies if I offended, we are basically both saying the same thing. Of course Durante was more specific, since he's an expert and has far more knowledge than I do. But in principle it is the same answer.


ozfunghi is in wishful thinking mode; he thinks Nintendo would have customised the GPU enough to compensate for the improvements that a new architecture supporting a more cutting-edge API will feature.

Hopefully, you understand now.

We're talking about different things. The switch from VLIW5 to GCN is separate from the difference between DirectX feature compatibility.
 
Keep the faith my friend, to the bitter end.

You are excused.

Sporting a blatantly obvious Nintendo alignment. Architectural gains that Nintendo customized into its GPU? It's already been settled at DirectX 10.1 level. All the significant architectural gains of DX11 hardware are not there. Even if Nintendo doesn't use the DirectX API.

The hardware is not what you want it to be, insisting won't change or fix things. It's the same cycle we already ran through last round.

Yes, you are right that the GPU doesn't do DX11... it doesn't do DX8 or 9 either, since it's using OpenGL.

It's been confirmed many times in 2012 that the Wii U can do DX11 features in OpenGL: http://www.cinemablend.com/games/Wii-U-GPGPU-Squashes-Xbox-360-PS3-Capable-DirectX-11-Equivalent-Graphics-47126.html

The Wii U GPU is also in no way 300 GFLOPS, since that would only make it about 15-20% more powerful than the GPU in the Xbox 360, which was about 240 GFLOPS. Based on the type of architecture Nintendo used, even the early dev kits suggested a GPU around 550-600 GFLOPS, with performance pretty close to an HD4850, which reached over 1 TFLOP.

It's funny how these discussions progress over time; the Wii U gets less and less powerful and more like an Xbox 360, lol...
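For context on where all these GFLOPS figures come from: peak FLOPS is just shader ALU count x 2 ops per cycle (multiply-add) x clock. The Xenos numbers are documented; the Wii U ALU counts and the ~550 MHz clock below are assumptions used purely for illustration, not confirmed specs:

# Peak single-precision GFLOPS = ALUs x 2 (multiply-add) x clock in GHz.
def gflops(alus, clock_ghz):
    return alus * 2 * clock_ghz

print(gflops(240, 0.5))    # Xbox 360 Xenos: 48 units x 5 ALUs at 500 MHz = 240
print(gflops(320, 0.55))   # 320 ALUs at 550 MHz = 352
print(gflops(400, 0.55))   # 400 ALUs at 550 MHz = 440
print(gflops(500, 0.55))   # it would take ~500 ALUs at 550 MHz to reach 550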
 

Durante

Member
Durante will probably give you a better answer but from what I'm reading the Wii U GPU is older VLIW5 architecture while Durango and Orbis will use GCN/GCN2.

Saw this comparison posted:
HD 5770: 1360 GFLOP/s, 34.0 GTex/s, 13.6 GPix/s, 76.8 GB/s
HD 7770: 1088 GFLOP/s, 34.0 GTex/s, 13.6 GPix/s, 76.8 GB/s

Despite the latter having lower FLOPS it outperforms the former.

(Someone please correct me if this is completely off base.)
That's pretty much entirely accurate, and even supported by a nice real-world example!

A somewhat important point to remember with this is that we're looking at "efficiency" as something like (usable FLOPS)/(theoretical FLOPS) here. In such a comparison, GCN practically always wins, and by a significant margin. And that's the only relevant comparison in this case really, since we're looking at FLOPS numbers for the consoles.

However, the complete efficiency picture is more involved, since GCN also needs more die area to achieve the same FLOPS compared to AMD's VLIW GPUs.

Thanks. I knew about VLIW5 vs GCN(2), but i don't/didn't know what the actual benefits are. Well, now i see it performs "better". Can anyone point out what the reason is (in plain English).
In short, since I really need to sleep (almost 3 am here): to effectively use a VLIW architecture, the compiler (or programmer, but practically no one does this by hand these days) needs to arrange multiple instructions in a single "very long instruction word", which has "slots" for multiple instructions of different types. It's not always possible (nevermind feasible) to use all the slots in each instruction word, a fact that obviously reduces efficiency.
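A toy way to picture that packing problem (not real shader compilation, just the counting argument; the instruction mix is made up for illustration):

# Each VLIW5 instruction word has 5 slots, but only independent operations can
# share a word. If the compiler can't find 5 independent ops, slots go empty.
independent_ops_per_issue = [3, 1, 4, 2, 5, 1, 2]   # made-up instruction stream
slots = 5

used = sum(min(n, slots) for n in independent_ops_per_issue)
capacity = len(independent_ops_per_issue) * slots
print(f"slot utilisation: {used / capacity:.0%}")   # ~51% of theoretical peak here
# A non-VLIW design like GCN doesn't leave those slots empty, which is the
# usable-vs-theoretical FLOPS gap described above.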
 
Things like DX11 hardware and the previously quoted performance differences are all things that are missing.

It's been stated in this thread before, but DX10.1 can run everything in DX11 except for tessellation, which AMD's DX10.1 cards actually had, just implemented differently. The other two features that DX11 added can both be done on DX10.1 hardware, though.
 

Schnozberry

Member
Yes, you are right that the GPU doesn't do DX11... it doesn't do DX8 or 9 either, since it's using OpenGL.

It's been confirmed many times in 2012 that the Wii U can do DX11 features in OpenGL: http://www.cinemablend.com/games/Wii-U-GPGPU-Squashes-Xbox-360-PS3-Capable-DirectX-11-Equivalent-Graphics-47126.html

The Wii U GPU is also in no way 300 GFLOPS, since that would only make it about 15-20% more powerful than the GPU in the Xbox 360, which was about 240 GFLOPS. Based on the type of architecture Nintendo used, even the early dev kits suggested a GPU around 550-600 GFLOPS, with performance pretty close to an HD4850, which reached over 1 TFLOP.

It's funny how these discussions progress over time; the Wii U gets less and less powerful and more like an Xbox 360, lol...

The Wii U doesn't use OpenGL either. It's a proprietary API called GX2. That article....wow.
 
We're talking about different things. The switch from VLIW5 to GCN is separate from the difference between DirectX feature compatibility.
And I know that?
Yes, you are right that the GPU doesn't do DX11... it doesn't do DX8 or 9 either, since it's using OpenGL.

It's been confirmed many times in 2012 that the Wii U can do DX11 features in OpenGL: http://www.cinemablend.com/games/Wii-U-GPGPU-Squashes-Xbox-360-PS3-Capable-DirectX-11-Equivalent-Graphics-47126.html
And I just said likewise in my first post, the one that initiated the discussion. But when talking features it's easier to refer to them in terms of DirectX, since it's more established than OpenGL these days.

It's not only about doing the same effects but also about doing them efficiently.
 

NBtoaster

Member
Yes, you are right that the GPU doesn't do DX11... it doesn't do DX8 or 9 either, since it's using OpenGL.

It's been confirmed many times in 2012 that the Wii U can do DX11 features in OpenGL: http://www.cinemablend.com/games/Wii-U-GPGPU-Squashes-Xbox-360-PS3-Capable-DirectX-11-Equivalent-Graphics-47126.html

The Wii U GPU is also in no way 300 GFLOPS, since that would only make it about 15-20% more powerful than the GPU in the Xbox 360, which was about 240 GFLOPS. Based on the type of architecture Nintendo used, even the early dev kits suggested a GPU around 550-600 GFLOPS, with performance pretty close to an HD4850, which reached over 1 TFLOP.

It's funny how these discussions progress over time; the Wii U gets less and less powerful and more like an Xbox 360, lol...

The content of that article doesn't support its title.

It's been stated in this thread before, but DX10.1 can run everything in DX11 except for tessellation, which AMD's DX10.1 cards actually had, just implemented differently. The other two features that DX11 added can both be done on DX10.1 hardware, though.

If DX10 hardware is capable of DX11-like effects, they're bound to be slower, worse and/or more difficult to implement, which is not what third-party titles want to deal with.
 
And I know that?

And I just said likewise in my first post, the one that initiated the discussion. But when talking features it's easier to refer to them in terms of DirectX, since it's more established than OpenGL these days.

It's not only about doing the same effects but also about doing them efficiently.

It's been stated in this thread before, but DX10.1 can run everything in DX11 except for tessellation, which AMD's DX10.1 cards actually had, just implemented differently. The other two features that DX11 added can both be done on DX10.1 hardware, though.

...
 

ozfunghi

Member
Did you read Durante's response? Did you at least try? While it's true I was harsh in my comments, and my apologies if I offended, we are basically both saying the same thing. Of course Durante was more specific, since he's an expert and has far more knowledge than I do. But in principle it is the same answer.


ozfunghi is in wishful thinking mode; he thinks Nintendo would have customised the GPU enough to compensate for the improvements that a new architecture supporting a more cutting-edge API will feature.

Hopefully, you understand now. My harsh reaction was due to this matter being explained ad nauseam, just to see the "devoted" still insisting with the same tune.

Thanks for basically acknowledging what I already knew about you. So don't bother replying to my posts from here on out. You are blabbering in vague terms, and it seems clear you really are not the authority on the subject. Furthermore, I think I know my own motivations better than you do, as it seems you are mistaking me for a teenage fanboy. I have better things to do than continue such conversations.

You are excused.

In short, since I really need to sleep (almost 3 am here): to effectively use a VLIW architecture, the compiler (or programmer, but practically no one does this by hand these days) needs to arrange multiple instructions in a single "very long instruction word", which has "slots" for multiple instructions of different types. It's not always possible (nevermind feasible) to use all the slots in each instruction word, a fact that obviously reduces efficiency.

Thanks man
 
The Wii U doesn't use OpenGL either. It's a proprietary API called GX2. That article....wow.

Yeah... they were interviewing Unity Technologies' CEO David Helgason.

It's true that efficiency may be an issue, but DX11-type effects should be doable on the Wii U. Scaled-down ports would probably be the norm, which was something people were expecting anyway. I'm definitely not disputing that the Xbox 720 will be able to do more and better effects with DX11 features, but the Wii U will be able to do similar effects, "tricked" or "custom", if a developer wants to do that (probably not many other than Retro and Nintendo themselves, sadly).




Edit: The Zelda HD demo already showed a preview of some of the deferred lighting/rendering the Wii U will be capable of, which will be an often-used technique on next-gen systems...
 
Perhaps you should not be so vocal in this particular thread then? Especially considering that you just admitted to being a bit "harsh".
Oh, but did you miss the part where I said it has been explained enough that the "customisation" of the Wii U GPU is not a magic bullet that will fix everything? The poster I was addressing orbits these Wii U technical threads enough to have a clue already. Yet wishful thinking prevails.

But the Wii U technical discussion keeps returning to square one. I respected this thread enough to shut the f*ck up, let the experienced, less partisan people debate, and try to learn something from the bench. Now that Orbis and Durango specs are making the rounds, the thread gets a second wind of concerned Wii U partisans, clinging to the last hope that the machine won't be outclassed hardware-wise.

So is it so unfair on my part to vent a bit sometimes? XD
 

AzaK

Member
Primarily, the gains involved with changing the fundamental architecture away from VLIW, which AMD did with GCN. It's not something you just "customize into your GPU".

I wouldn't necessarily agree with that. It looks like they went for capacity, and then tried to find a way to offer enough bandwidth without blowing up the budget. Hence, embedded memory. If 4 GB at 200 GB/s is accurate, then Sony are the ones who really spent on memory.

(Edit: I'm not saying that Sony's architecture is necessarily much better here, we don't know enough yet. But it certainly looks more expensive)

2x the Wii U's RAM at about 15.5 times the speed is pretty sexy. What are Sony hoping to do with that? I mean, do they expect to be able to use ~3GB per frame @ 60?
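That "~3GB per frame" is just the rumoured bandwidth divided by the frame rate (the 200 GB/s figure is the rumour quoted above, not a confirmed spec):

# Memory traffic available per frame at a given bandwidth and frame rate.
bandwidth_gb_s = 200   # rumoured Orbis figure
fps = 60
print(round(bandwidth_gb_s / fps, 2))   # ~3.33 GB of traffic per 60fps frame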
 

deviljho

Member
But the Wii U technical discussion keeps returning to square one. I respected this thread enough to shut the f*ck up, let the experienced, less partisan people debate, and try to learn something from the bench. Now that Orbis and Durango specs are making the rounds, the thread gets a second wind of concerned Wii U partisans, clinging to the last hope that the machine won't be outclassed hardware-wise.

I'm not sure why you are so invested in posting to defend some kind of murky truth. Not only would I ask "why bother?", but I'd also question whether this is the thread to start giving people reality checks.
 

ikioi

Banned
I have serious doubts about the Wii U's capability to run down-ports from the next-gen Xbox and PlayStation. Heck, I have serious doubts the Wii U is much, if at all, more capable than the Xbox 360.

The Wii U's eDRAM appears to have significantly less bandwidth than the Xbox 360's. This is supported by the fact that no multiplatform game released to date, e.g. Mass Effect 3 and Assassin's Creed 3, features any improvement in AA and AF on the Wii U. AA and AF are piss easy to tack on at the end of production, yet we don't see the Wii U offering any improvement in this area despite the significant increase in eDRAM capacity. Slow bandwidth seems the only plausible explanation.

Given the Wii U's slow MEM2 bandwidth, it's also a fair assumption to say it's unlikely the GPU has more than 8 ROPs. ROPs are heavily dependent on bandwidth; there would be absolutely no point going with any more than 8 ROPs on a MEM2 bus as slow as the Wii U's. The Xbox 360's architecture had the ROPs tied to its eDRAM via a high-speed bus; that also doesn't seem to be the case with the Wii U. The Wii U's eDRAM seems to be implemented differently and not tied into the ROPs, nor is the Wii U's eDRAM capable of offering bandwidth in the same ballpark as that in the Xbox 360. If anything, the Wii U's ROPs may be worse in performance than those in the Xbox 360 due to the shit bandwidth. Either way, 8 ROPs for a modern-day console is terrible.
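(A quick back-of-the-envelope on that ROP point, assuming the commonly reported ~550 MHz GPU clock and plain 4-byte colour writes; purely illustrative:)

# Peak colour write rate of 8 ROPs vs. a slow main-memory bus.
rops = 8
gpu_clock_hz = 550e6    # commonly reported Wii U GPU clock (assumed here)
bytes_per_pixel = 4     # 32-bit colour, no blending or depth

print(rops * gpu_clock_hz * bytes_per_pixel / 1e9)   # 17.6 GB/s for colour alone
# Blending (read + write) and depth traffic multiply this further, which is why
# adding ROPs beyond what the memory system can feed buys you nothing.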

Then there's the CPU, which, simply put, is terrible at more things than it's competent at. Even the things it's competent at are not sufficient for a modern HD console and games. SIMD, MAD: best of luck.

Seems to me Nintendo built this console with Xbox 360/PS3 target specs in mind. Further to that, the console seems only able to exceed Xbox 360 levels of performance if developers are willing to invest significant time optimising their game and engine for the Wii U. To overcome issues with the eDRAM bandwidth, MEM2 bandwidth and the shitbox CPU, they will have to pull out every optimisation and trick possible to clearly exceed Xbox 360-level visuals. Which I think is very unlikely to occur, given developers will again turn to the next-gen Xbox, PlayStation and PC for their multiplatform titles and largely ignore the Wii U. I doubt very much that many 3rd party developers or publishers are going to invest heavily in games and engines for the Wii U.

To finish, FUCK YOU NINTENDO. I can't believe they've yet again delivered us a new console whose performance is 7 years in the past. It would have cost them bugger all to have delivered a console within the same ballpark as the next-gen Xbox and PlayStation, with hardware that made down-porting a very real and easily achievable process. Heck, it would have cost them only a matter of dollars to increase the MEM2 bus to 128-bit or even 256-bit, and dollars more to get a non-munted, pathetic CPU.
 
I'm not sure why you are so invested in posting to defend some kind of murky truth. Not only would I ask "why bother?", but I'd also question whether this is the thread to start giving people reality checks.
The thread has been bastardised already, so I don't feel so bad derailing a bit. What can I tell you... this "status quo" of Wii U debate has been the same since the first rumours more than 2 years ago, so it gets tiresome.

Funny thing is, discussion has become more about "who" posts (and whether I like him or not) than what is posted. There are some guys here that get away with far worse BS than I usually post. Yet, sadly, it pays off more to be diplomatic (not to use the H word) than to be direct, sincere and cut to the chase.
 

Schnozberry

Member
Oh, but did you miss the part where I said it has been explained enough that the "customisation" of the Wii U GPU is not a magic bullet that will fix everything? The poster I was addressing orbits these Wii U technical threads enough to have a clue already. Yet wishful thinking prevails.

But the Wii U technical discussion keeps returning to square one. I respected this thread enough to shut the f*ck up, let the experienced, less partisan people debate, and try to learn something from the bench. Now that Orbis and Durango specs are making the rounds, the thread gets a second wind of concerned Wii U partisans, clinging to the last hope that the machine won't be outclassed hardware-wise.

So is it so unfair on my part to vent a bit sometimes? XD

I don't think it's wishful thinking that the Wii U will keep up with Orbis and Durango; it's more that folks are trying to determine the gap between them and whether ports will be possible with some pruning of detail and resolution. I don't think that's an unreasonable concern.

It's not nearly as irritating as the hype train passengers in the Orbis and Durango threads who are talking about them outclassing SLI gaming PCs due to being the lead platform for AAA studio releases.
 

Donnie

Member
IBM uses it for high-performance server and workstation chips that are optimised for high-throughput use. They bet on larger caches cancelling out the fact that eDRAM is slower than SRAM. It works well for them in high-performance compute; that's one area where they beat even the mighty Intel.

That said, I have absolutely no idea how it would scale down to a tiny chip with 3MB of it at 1.2GHz. It's thrice as dense as SRAM, so the alternative would be 1MB of faster SRAM; I have no idea if having three times the capacity makes up for the lower speed in a gaming context.




Kind of true based on what? 5GB RAM (potentially 7 if it's true they skinnied down the OS) available to games vs 1, 8 cores vs 3, still no idea what the U GPU is like but both have embedded memory to help out and the nextbox may have faster eSRAM.

I don't think it will be the Wii-360 gulf again, but I think 2x is way underestimating it.

You won't have 8 cores for games; how many you will have is arguable (rumours were saying 6), but it certainly won't be 8. Also, the Wii U has 3 cores plus an audio DSP.
 

ozfunghi

Member
I'm not sure why you are so invested in posting to defend some kind of murky truth. Not only would I ask "why bother?", but I'd also question whether this is the thread to start giving people reality checks.

The funny part is that I think I have a rather grounded idea of what to expect from the hardware in the Wii U. The things Durante said just surprised me in a way that makes me eager to find out whether those numbers are what to expect in actual real-life performance, and whether some other people might not need a reality check later on, not for the Wii U, but for Durango.

It's also funny to see the same people downplay the jump in feature set between the 360 and the Wii U, but be the first to start accusing others who honestly question the gains in architecture between the Wii U and Durango. I'm done with this guy.
 

Schnozberry

Member
I have serious doubts about the Wii U's capability to run down-ports from the next-gen Xbox and PlayStation. Heck, I have serious doubts the Wii U is much, if at all, more capable than the Xbox 360.

The Wii U's eDRAM appears to have significantly less bandwidth than the Xbox 360's. This is supported by the fact that no multiplatform game released to date, e.g. Mass Effect 3 and Assassin's Creed 3, features any improvement in AA and AF on the Wii U. AA and AF are piss easy to tack on at the end of production, yet we don't see the Wii U offering any improvement in this area despite the significant increase in eDRAM capacity. Slow bandwidth seems the only plausible explanation.

Given the Wii U's slow MEM2 bandwidth, it's also a fair assumption to say it's unlikely the GPU has more than 8 ROPs. ROPs are heavily dependent on bandwidth; there would be absolutely no point going with any more than 8 ROPs on a MEM2 bus as slow as the Wii U's. The Xbox 360's architecture had the ROPs tied to its eDRAM via a high-speed bus; that also doesn't seem to be the case with the Wii U. The Wii U's eDRAM seems to be implemented differently and not tied into the ROPs, nor is the Wii U's eDRAM capable of offering bandwidth in the same ballpark as that in the Xbox 360. If anything, the Wii U's ROPs may be worse in performance than those in the Xbox 360 due to the shit bandwidth. Either way, 8 ROPs for a modern-day console is terrible.

Then there's the CPU, which, simply put, is terrible at more things than it's competent at. Even the things it's competent at are not sufficient for a modern HD console and games. SIMD, MAD: best of luck.

Seems to me Nintendo built this console with Xbox 360/PS3 target specs in mind. Further to that, the console seems only able to exceed Xbox 360 levels of performance if developers are willing to invest significant time optimising their game and engine for the Wii U. To overcome issues with the eDRAM bandwidth, MEM2 bandwidth and the shitbox CPU, they will have to pull out every optimisation and trick possible to clearly exceed Xbox 360-level visuals. Which I think is very unlikely to occur, given developers will again turn to the next-gen Xbox, PlayStation and PC for their multiplatform titles and largely ignore the Wii U. I doubt very much that many 3rd party developers or publishers are going to invest heavily in games and engines for the Wii U.

To finish, FUCK YOU NINTENDO. I can't believe they've yet again delivered us a new console whose performance is 7 years in the past. It would have cost them bugger all to have delivered a console within the same ballpark as the next-gen Xbox and PlayStation, with hardware that made down-porting a very real and easily achievable process. Heck, it would have cost them only a matter of dollars to increase the MEM2 bus to 128-bit or even 256-bit, and dollars more to get a non-munted, pathetic CPU.

I'm glad you got that off your chest.
 

JordanN

Banned
To overcome issues with the eDRAM bandwidth, MEM2 bandwidth and the shitbox CPU, they will have to pull out every optimisation and trick possible to clearly exceed Xbox 360-level visuals.

Annnnnnnnnnnnnd, on the other side of the spectrum, we have a developer who says none of those are a problem.

Shinen said:
We didn’t have such problems. The CPU and GPU are a good match. As said before, today’s hardware has bottlenecks with memory throughput when you don’t care about your coding style and data layout. This is true for any hardware and can’t be only cured by throwing more megahertz and cores on it. Fortunately Nintendo made very wise choices for cache layout, ram latency and ram size to work against these pitfalls.

Of course, these guys are just game developers, who have many years of programming experience, and made a launch game with next gen graphics...
 
Annnnnnnnnnnnnd, on the other side of the spectrum, we have a developer who says none of those are a problem.



Of course, these guys are just game developers, who have many years of programming experience, and made a launch game with next gen graphics...

But they've only developed for Nintendo consoles! And they've never developed for the PS3/360! And they've never made big games! What could they possibly know?!
 

JordanN

Banned
Bottom line: can the Wii U outperform the current gen machines or not?

If you want a real answer, wait for tomorrow's Nintendo Direct, or whenever the first big-budget game lands on the Wii U.

Because even with more power, games on Wii U right now are barely scratching the surface of what it can really do.
 

ikioi

Banned
Annnnnnnnnnnnnd, on the other side of the spectrum, we have a developer who says none of those are a problem.

On that note, AFAIK the only developer who has come out and said they had no issues with memory bandwidth was the developer for either Trine 2 or Nano Assault. I can't recall which one, but it was certainly one of the two.

Either way, neither developer holds any real weight. They're not big-name developers making complete 3D games and worlds; rather, they make small, basic e-store indie titles. Trine 2 certainly isn't a graphically advanced game, nor is Nano Assault. So the fact they didn't have bandwidth or CPU issues means nothing.

Can you provide a source where a big-name developer has said they've had no issues with the memory bandwidth of the Wii U? By big name I mean a 3rd party developer or publisher that makes complex 3D games such as COD, Mass Effect or GTA, i.e. a Capcom, EA, Ubisoft, etc.

Of course, these guys are just game developers, who have many years of programming experience, and made a launch game with next gen graphics...

As per above.

Also, given how bad the Trine 2 port on PS3 was, having AA on the menu and text, I wouldn't rate them very highly or as especially competent at all. They're cool and made a great game, but they're not technical wizards. It's not surprising to me that the Wii U version of Trine 2 would be the best of the 3 console ports. The Wii U's more modern GPU, combined with its easier architecture, means smaller indie devs have an easier time making titles for it. The PS3's Cell processor and SPEs do require a fair bit of experience and optimisation to achieve good results, and the PS3's memory architecture can also be a massive PITA.

Bottom line: can the Wii U outperform the current gen machines or not?

I would say yes it can, but barely.

Purely my opinion, but my thoughts are:

For the Wii U to clearly outperform the Xbox 360 would require a dedicated studio investing a significant budget, time and manpower. Effort that I believe will be limited to Nintendo and its second parties. I cannot see EA, Activision, or any big-name 3rd party developer or publisher investing much time in this console. The Wii U's architecture and power are so far below the next-gen Xbox and PlayStation that there'd be little reward for the cost of providing Wii U multiplatform down-ports. This console will be just like the Wii: fantastic 1st and 2nd party support, next to no meaningful 3rd party support.
 
Annnnnnnnnnnnnd, on the other side of the spectrum, we have a developer who says none of those are a problem.



Of course, these guys are just game developers, who have many years of programming experience, and made a launch game with next gen graphics...

No offense, but Shin'en aren't known for pushing modern, high-horsepower engines. They haven't run into any problems because they're used to working within a much smaller space.
 
This thread has really turned to shit. Goodbye to the technical discussion.

Thanks anti-Nintendo crowd.
Both sides share the blame.
While ikioi's post had a negative slant, I don't see how it wasn't technical discussion. Unless the thread is only intended for effusive praise of Nintendo's design choices.
The discussion level dropped. This thread was supposed to be respected, a sort of neutral ground. The Wii U rumoured specs thread (remember?) was the one for the console warriors. But since the competition's specs are floating around...
 