GPGPU is great but how applicable is it to a console which needs the GPU power to push pixels?
Well, there are tasks like background downloads that are likely using the ARM processor, but we don't really have any official info either way.
Wasn't that strictly in regards to security? The same setup was used in GameCube and Wii.
And yeah, no offense, but Wsippel's research often turns out extremely optimistic.
I think he was specifically addressing some users who have been critical of the Wii U hardware but defend Microsoft's hardware decisions. Having said that, I have seen people (particularly in the threads on Beyond3D) being very critical of a lot of Microsoft's hardware decisions.
Well, criticizing "GPGPU magic" has some validity, when it is proposed as a general cure for weak CPU performance. Just like criticizing eDRAM as a general solution for all memory bandwidth issues is valid.
For an example, you need not look further than how harshly people reacted in the current Durango thread when it was proposed that you can simply see it as a system with 8 GB of 170 GB/s memory. Or even the reaction to the "magic move engines". Basically, unrealistic scenarios are pointed out as such regardless of where they come from, or which hardware they pertain to.
One funny thing for me personally is that I find myself downplaying the impact of GPGPU recently on GAF, when I spent 3 years or so of my life (~2005-2008) convincing people how great it is.
If the current rumours are accurate, a factor of 4-6 in GPU performance (and a newer architecture as well), up to 7 in CPU performance and around 6 in external memory bandwidth. (The latter is 15 instead of 6 for Orbis, but its memory setup is not comparable)
CPU FLOPS.
And how do you get "up to 7 in CPU performance"? I'm not questioning what you are saying, but what I read is that these were basically CPUs for the next generation of tablets?
No, I'm putting it at 320-500. 1.84/0.32 rounds up to 6, which gives you the upper end of the estimate. Though that doesn't include efficiency gains from the GCN architecture, which are pretty significant in some cases. I guess the lowest end number should be 2.5 if you assume the best case scenario for Wii U, take the weaker (in numbers at least) console of the other two for comparison, and disregard all architectural improvements.
You are putting the U-GPU at 300 Gflops? Or what new rumors do you mean? If it's only 300 Gflops, then what is taking up all that space?
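For anyone wondering where those ratios come from, here is a quick sketch using the rumoured numbers: 1.84 TFLOPS for Orbis as above, a commonly cited ~1.2 TFLOPS Durango rumour (that figure is my assumption, not from this thread), and the speculative 320-500 GFLOPS Wii U range. None of these are confirmed specs.

```c
#include <stdio.h>

/* Rough GPU throughput ratios from the rumored figures in this thread.
 * Nothing here is official: the Orbis/Durango numbers are leaks and the
 * Wii U range is pure speculation.                                        */
int main(void)
{
    const double orbis_gflops   = 1840.0;  /* rumored ~1.84 TFLOPS          */
    const double durango_gflops = 1200.0;  /* rumored ~1.2 TFLOPS           */
    const double wiiu_low       = 320.0;   /* speculative lower bound       */
    const double wiiu_high      = 500.0;   /* speculative upper bound       */

    printf("Orbis   / Wii U low : %.2fx\n", orbis_gflops / wiiu_low);   /* ~5.8, "rounds up to 6"   */
    printf("Orbis   / Wii U high: %.2fx\n", orbis_gflops / wiiu_high);  /* ~3.7                     */
    printf("Durango / Wii U high: %.2fx\n", durango_gflops / wiiu_high);/* ~2.4, the low-end figure */
    return 0;
}
```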
Kinda amusing that, now that the NextBox is using a similar setup, the eDRAM stopped being "A Nintard wishful-thinking setup" and now is a serious solution to a (relatively) low main memory bandwidth architecture.
eDRAM was added to the wuu for its cost and power savings.
Keep the faith my friend, to the bitter end.
Keep the faith my friend, to the bitter end.
What significant architectural gains are we talking about that Nintendo couldn't have customized into their GPU?
You are excused.
Excuse me?
As opposed to being added to Durango where the main motivation was for the laughs?
You are excused.
Sporting a blatantly obvious Nintendo alignment. Architectural gains that Nintendo customized into its GPU? It's already been settled at the DirectX 10.1 level. All the significant architectural gains of DX11 hardware are not there. Even if Nintendo doesn't use that API.
The hardware is not what you want it to be; insisting won't change or fix things. It's the same cycle we already ran through last round. But wait, there has to be some secret sauce in Hollywood, right? Since it has an increased transistor count. IT HAS TO BE!
Primarily, the gains involved with changing the fundamental architecture away from VLIW, which AMD did with GCN. It's not something you just "customize into your GPU".
What significant architectural gains are we talking about that Nintendo couldn't have customized into their GPU?
I wouldn't necessarily agree with that. It looks like they went for capacity, and then tried to find a way to offer enough bandwidth without blowing up the budget. Hence, embedded memory. If 4 GB at 200 GB/s is accurate, then Sony are the ones who really spent on memory.
Looks like MS has spent a lot of money on memory performance in the x720.
Kinda amusing that, now that the NextBox is using a similar setup, the eDRAM stopped being "A Nintard wishful-thinking setup" and now is a serious solution to a (relatively) low main memory bandwidth architecture.
It was widely referred to as "GPGPU/eDRAM magic"
The problem has always been the wishful thinking. Looking at the games released for the wuu, the eDRAM BW cannot be that great. Most likely around the same as the Xbox 360's 32 GB/s.
Just having eDRAM doesn't equal massive BW...
eDRAM was added to the wuu for its cost and power savings.
You are putting the U-GPU at 300 Gflops? Or what new rumors do you mean? If it's only 300 Gflops, then what is taking up all that space?
And how do you get "up to 7 in CPU performance"? I'm not questioning what you are saying, but what I read is that these were basically CPUs for the next generation of tablets?
Keep the faith my friend, to the bitter end.
You are excused.
Sporting a blatantly obvious Nintendo alignment. Architectural gains that Nintendo customized into its GPU? It's already been settled at the DirectX 10.1 level. All the significant architectural gains of DX11 hardware are not there. Even if Nintendo doesn't use the DirectX API.
The hardware is not what you want it to be; insisting won't change or fix things. It's the same cycle we already ran through last round.
Durante will probably give you a better answer but from what I'm reading the Wii U GPU is older VLIW5 architecture while Durango and Orbis will use GCN/GCN2.
What significant architectural gains are we talking about that Nintendo couldn't have customized into their GPU?
As opposed to being added to Durango where the main motivation was for the laughs?
Excuse me?
Did you read Durante's response? Did you at least try? While it's true I was harsh in my comments, and my apologies if I offended, we are basically both saying the same thing. Of course Durante was more specific, since he's an expert and has far more knowledge than I do. But in principle it is the same answer.
There weren't any significant architectural gains for DX11 hardware. The changes were pretty minor, really.
Nobody but Microsoft uses the DX11 API. They own it. Also, troll harder.
Durante will probably give you a better answer but from what I'm reading the Wii U GPU is older VLIW5 architecture while Durango and Orbis will use GCN/GCN2.
Saw this comparison posted:
HD 5770: 1360 GFLOP/s, 34.0 GTex/s, 13.6 GPix/s, 76.8 GB/s
HD 7770: 1088 GFLOP/s, 34.0 GTex/s, 13.6 GPix/s, 76.8 GB/s
Despite the latter having lower FLOPS it outperforms the former.
(Someone please correct me if this is completely off base.)
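For reference, those headline GFLOP/s figures are just stream processors × 2 ops per clock (multiply-add) × clock speed. The sketch below uses the HD 5770's stock 800 shaders at 850 MHz, and assumes the quoted HD 7770 figure corresponds to a 640-shader part at the same 850 MHz (the retail GHz Edition actually runs at 1000 MHz, i.e. about 1280 GFLOP/s).

```c
#include <stdio.h>

/* Peak single-precision GFLOP/s = shaders * 2 ops per clock (MAD/FMA) * GHz.
 * The HD 5770 numbers are the card's stock specs; the HD 7770 line assumes
 * a 640-shader part clocked at 850 MHz to match the figure quoted above.   */
static double peak_gflops(int shaders, double clock_ghz)
{
    return shaders * 2.0 * clock_ghz;
}

int main(void)
{
    printf("HD 5770 (VLIW5): %.0f GFLOP/s\n", peak_gflops(800, 0.85));
    printf("HD 7770 (GCN):   %.0f GFLOP/s\n", peak_gflops(640, 0.85));
    return 0;
}
```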
Did you read Durante's response? Did you at least try? While it's true I was harsh in my comments, and my apologies if I offended, we are basically both saying the same thing. Of course Durante was more specific, since he's an expert and has far more knowledge than I do. But in principle it is the same answer.
ozfunghi is in wishful thinking mode; he thinks Nintendo would have customised the GPU enough to compensate for the improvements that a new architecture supporting a more cutting-edge API will feature.
Hopefully, you understand now.
Keep the faith my friend, to the bitter end.
You are excused.
Sporting a blatantly obvious Nintendo alignment. Architectural gains that Nintendo customized into its GPU? It's already been settled at the DirectX 10.1 level. All the significant architectural gains of DX11 hardware are not there. Even if Nintendo doesn't use the DirectX API.
The hardware is not what you want it to be; insisting won't change or fix things. It's the same cycle we already ran through last round.
That's pretty much entirely accurate, and even supported by a nice real-world example!
Durante will probably give you a better answer but from what I'm reading the Wii U GPU is older VLIW5 architecture while Durango and Orbis will use GCN/GCN2.
Saw this comparison posted:
HD 5770: 1360 GFLOP/s, 34.0 GTex/s, 13.6 GPix/s, 76.8 GB/s
HD 7770: 1088 GFLOP/s, 34.0 GTex/s, 13.6 GPix/s, 76.8 GB/s
Despite the latter having lower FLOPS it outperforms the former.
(Someone please correct me if this is completely off base.)
In short, since I really need to sleep (almost 3 am here): to effectively use a VLIW architecture, the compiler (or programmer, but practically no one does this by hand these days) needs to arrange multiple instructions in a single "very long instruction word", which has "slots" for multiple instructions of different types. It's not always possible (nevermind feasible) to use all the slots in each instruction word, a fact that obviously reduces efficiency.
Thanks. I knew about VLIW5 vs GCN(2), but I don't/didn't know what the actual benefits are. Well, now I see it performs "better". Can anyone point out what the reason is (in plain English)?
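To make the slot-packing point concrete, here is a toy sketch. It is not how any real shader compiler or GPU ISA works; it's just a greedy packer over a made-up instruction stream, showing how dependent instructions leave issue slots empty in a 5-wide word.

```c
#include <stdio.h>

/* Toy illustration of the VLIW packing problem described above. This is NOT
 * a real shader compiler; it only shows why independent work is needed to
 * fill all the slots of a 5-wide instruction word (as in VLIW5).           */
#define SLOTS 5          /* issue slots per very long instruction word      */
#define N     12         /* length of the toy instruction stream            */

int main(void)
{
    /* 1 = this instruction depends on the result of the previous one and
     * therefore cannot share a bundle with it; 0 = independent.            */
    const int depends_on_prev[N] = {0,0,1,1,0,1,0,0,0,1,1,1};

    int bundles = 0, used = 0, i = 0;
    while (i < N) {
        int slot = 0;
        /* Greedily fill slots until we hit a dependent instruction
         * or run out of slots, then start a new bundle.                    */
        while (i < N && slot < SLOTS && !(slot > 0 && depends_on_prev[i])) {
            slot++;
            i++;
        }
        bundles++;
        used += slot;
    }
    printf("%d instructions packed into %d bundles (%d slots)\n",
           N, bundles, bundles * SLOTS);
    printf("slot utilization: %.0f%%\n", 100.0 * used / (bundles * SLOTS));
    return 0;
}
```

With this made-up stream, 12 instructions end up spread over 7 bundles, so roughly two thirds of the issue slots go unused; on GCN the scalar/vector units are scheduled per instruction instead, so that packing problem largely disappears.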
Things like DX11 hardware and the previously quoted performance differences are all things that are missing.
Yes, you are right that the GPU doesn't do DX11... it doesn't do DX8 or 9 either, since it's using OpenGL.
It's been confirmed many times in 2012 that the Wii U can do DX11 features in OpenGL: http://www.cinemablend.com/games/Wii-U-GPGPU-Squashes-Xbox-360-PS3-Capable-DirectX-11-Equivalent-Graphics-47126.html
The Wii U GPU is also in no way 300 GFLOPS, since that would only make it about 15-20% more powerful than the GPU in the Xbox 360, which was about 240 GFLOPS. The type of architecture Nintendo used for even the early dev kits suggested a GPU around 550-600 GFLOPS, with performance that was pretty close to an HD 4850, which reached over 1 TFLOP.
It's funny how these discussions progress over time; the Wii U gets less and less powerful and more like an Xbox 360, lol...
And I know that?
We're talking about different things. The switch from VLIW5 to GCN is separate from the difference in DirectX feature compatibility.
And I just said likewise in my first post, the one that initiated the discussion. But when talking features, it's easier to refer to them in terms of DirectX, since it's more established than OpenGL these days.
Yes, you are right that the GPU doesn't do DX11... it doesn't do DX8 or 9 either, since it's using OpenGL.
It's been confirmed many times in 2012 that the Wii U can do DX11 features in OpenGL: http://www.cinemablend.com/games/Wii-U-GPGPU-Squashes-Xbox-360-PS3-Capable-DirectX-11-Equivalent-Graphics-47126.html
And I know that?
Yes, you are right that the GPU doesn't do DX11... it doesn't do DX8 or 9 either, since it's using OpenGL.
It's been confirmed many times in 2012 that the Wii U can do DX11 features in OpenGL: http://www.cinemablend.com/games/Wii-U-GPGPU-Squashes-Xbox-360-PS3-Capable-DirectX-11-Equivalent-Graphics-47126.html
The Wii U GPU is also in no way 300 GFLOPS, since that would only make it about 15-20% more powerful than the GPU in the Xbox 360, which was about 240 GFLOPS. The type of architecture Nintendo used for even the early dev kits suggested a GPU around 550-600 GFLOPS, with performance that was pretty close to an HD 4850, which reached over 1 TFLOP.
It's funny how these discussions progress over time; the Wii U gets less and less powerful and more like an Xbox 360, lol...
It's been stated in this thread before, but DX10.1 can run everything in DX11 except for tessellation. Which AMD's DX10.1 cards had anyway; they just did it differently. The other two features that DX11 added can both be done on DX10.1 hardware, though.
And I know that?
And I just said likewise in my first post, the one that initiated the discussion. But when talking features, it's easier to refer to them in terms of DirectX, since it's more established than OpenGL these days.
It's not only about doing the same effects but also about doing them efficiently.
It's been stated in this thread before, but DX10.1 can run everything in DX11 except for tessellation. Which AMD's DX10.1 cards had anyway; they just did it differently. The other two features that DX11 added can both be done on DX10.1 hardware, though.
Did you read Durante's response? Did you at least try? While it's true I was harsh in my comments, and my apologies if I offended, we are basically both saying the same thing. Of course Durante was more specific, since he's an expert and has far more knowledge than I do. But in principle it is the same answer.
ozfunghi is in wishful thinking mode; he thinks Nintendo would have customised the GPU enough to compensate for the improvements that a new architecture supporting a more cutting-edge API will feature.
Hopefully, you understand now. My harsh reaction was due to this matter being explained ad nauseam, just to see the "devoted" still insisting on the same tune.
In short, since I really need to sleep (almost 3 am here): to effectively use a VLIW architecture, the compiler (or programmer, but practically no one does this by hand these days) needs to arrange multiple instructions in a single "very long instruction word", which has "slots" for multiple instructions of different types. It's not always possible (nevermind feasible) to use all the slots in each instruction word, a fact that obviously reduces efficiency.
The Wii U doesn't use OpenGL either. It's a proprietary API called GX2. That article....wow.
Oh, but did you miss the part where I said it has been explained enough that the "customisation" of the Wii U GPU is not a magic bullet that will fix everything? The poster I was addressing orbits these Wii U technical threads enough to have a clue already. Yet wishful thinking prevails.
Perhaps you should not be so vocal in this particular thread then? Especially considering that you just admitted to being a bit "harsh".
Primarily, the gains involved with changing the fundamental architecture away from VLIW, which AMD did with GCN. It's not something you just "customize into your GPU".
I wouldn't necessarily agree with that. It looks like they went for capacity, and then tried to find a way to offer enough bandwidth without blowing up the budget. Hence, embedded memory. If 4 GB at 200 GB/s is accurate, then Sony are the ones who really spent on memory.
(Edit: I'm not saying that Sony's architecture is necessarily much better here, we don't know enough yet. But it certainly looks more expensive)
But the Wii U technical discussion keeps returning to square one. I respected this thread enough to shut the f*ck up, let the experienced, less partial people debate, and try to learn something from the bench. Now that Orbis and Durango specs are making the rounds, the thread gets a second wind of concerned Wii U partisans, clinging to the last hope that the machine won't be outclassed hardware-wise.
The thread has been bastardised already, so I don't feel so bad about derailing a bit. What can I tell you... this "status quo" of Wii U debate has been the same since the first rumors more than two years ago, so it gets tiresome.
I'm not sure why you are so vested in posting to defend some kind of murky truth. Not only would I ask "why bother?" but I'd also question if this is the thread to start giving people reality checks.
more like 3-4 wii-u.
Probably more if you consider the newer hardware/software capabilities and modifications.
Oh, but did you miss the part where I said it has been explained enough that the "customisation" of the Wii U GPU is not a magic bullet that will fix everything? The poster I was addressing orbits these Wii U technical threads enough to have a clue already. Yet wishful thinking prevails.
But the Wii U technical discussion keeps returning to square one. I respected this thread enough to shut the f*ck up, let the experienced, less partial people debate, and try to learn something from the bench. Now that Orbis and Durango specs are making the rounds, the thread gets a second wind of concerned Wii U partisans, clinging to the last hope that the machine won't be outclassed hardware-wise.
So is it so unfair on my part to vent a bit sometimes? XD
IBM uses it for high performance server and workstation chips that optimize for high throughput use. They bet on larger caches cancelling out the fact that eDRAM is slower than SRAM. It works well for them in high performance compute; that's one area where they beat even the mighty Intel.
That said, I have absolutely no idea how it would scale to a tiny chip with 3 MB of it at 1.2 GHz. It's thrice as dense as SRAM, so the alternative would be 1 MB of faster SRAM; I have no idea if three times the capacity makes up for the lower speed in a gaming context.
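To put that capacity-vs-latency trade-off in numbers, here is a back-of-the-envelope sketch using the textbook average memory access time formula. Every figure in it (hit latencies, miss rates, miss penalty) is an invented placeholder, not a measured Wii U or Espresso number; the point is only that the answer swings entirely on the miss rates.

```c
#include <stdio.h>

/* Back-of-the-envelope look at the capacity-vs-latency trade-off above,
 * using the textbook average-memory-access-time formula:
 *     AMAT = hit_time + miss_rate * miss_penalty
 * All numbers below are made-up placeholders, not measured figures: we
 * assume the denser eDRAM cache is a few cycles slower to hit but,
 * being 3x larger, misses less often.                                      */
int main(void)
{
    const double miss_penalty = 100.0;  /* cycles to main memory (assumed)  */

    /* Hypothetical 1 MB SRAM cache: faster hit, higher miss rate.          */
    const double sram_hit = 10.0, sram_miss_rate = 0.08;
    /* Hypothetical 3 MB eDRAM cache: slower hit, lower miss rate.          */
    const double edram_hit = 14.0, edram_miss_rate = 0.04;

    printf("1 MB SRAM : AMAT = %.1f cycles\n",
           sram_hit + sram_miss_rate * miss_penalty);
    printf("3 MB eDRAM: AMAT = %.1f cycles\n",
           edram_hit + edram_miss_rate * miss_penalty);
    return 0;
}
```

With these made-up numbers the two options come out roughly even, which is exactly why "it depends on the workload" is the honest answer.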
Kind of true based on what? 5GB RAM (potentially 7 if it's true they skinnied down the OS) available to games vs 1, 8 cores vs 3, still no idea what the U GPU is like but both have embedded memory to help out and the nextbox may have faster eSRAM.
I don't think it will be the Wii-360 gulf again, but I think 2x is way underestimating it.
I'm not sure why you are so vested in posting to defend some kind of murky truth. Not only would I ask "why bother?" but I'd also question if this is the thread to start giving people reality checks.
I have serious doubts about the Wii U's capabilities to run down-ports from the next gen Xbox and PlayStation. Heck, I have serious doubts the Wii U is much, if at all, more capable than the Xbox 360.
The Wii U's eDRAM appears to have significantly less bandwidth than the Xbox 360's. This is supported by the fact that no multiplatform game released to date, e.g. Mass Effect 3 and Assassin's Creed 3, features any improvement in AA and AF on the Wii U. AA and AF are piss easy to tack on at the end of production, yet we don't see the Wii U offering any improvement in this area despite the significant increase in eDRAM capacity. Slow bandwidth seems the only plausible explanation.
Given the Wii U's slow MEM2 bandwidth, it's also a fair assumption to say it's unlikely the GPU has more than 8 ROPs. ROPs are heavily dependent on bandwidth; there would be absolutely no point going with any more than 8 ROPs on a MEM2 bus as slow as the Wii U's. The Xbox 360's architecture had the ROPs tied to its eDRAM via a high speed bus; that also doesn't seem to be the case with the Wii U. The Wii U's eDRAM seems to be implemented differently and not tied into the ROPs, nor is the Wii U's eDRAM capable of offering bandwidth in the same ballpark as that in the Xbox 360. If anything, the Wii U's ROPs may be worse in performance than those in the Xbox 360 due to the shit bandwidth. Either way, 8 ROPs for a modern day console is terrible.
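For what it's worth, the ROP-vs-bandwidth argument can be sanity-checked with napkin math. The clock, ROP count and MEM2 figure below are the commonly rumoured ones, not confirmed specs, and the calculation ignores blending, Z traffic and compression entirely.

```c
#include <stdio.h>

/* Rough sanity check of the ROP/bandwidth argument above. Clock and ROP
 * count are the commonly rumored Wii U figures, not confirmed specs, and
 * the math ignores compression, blending, and Z traffic entirely.          */
int main(void)
{
    const double gpu_clock_mhz   = 550.0;  /* rumored GPU clock             */
    const int    rops            = 8;      /* assumed ROP count             */
    const double bytes_per_pixel = 4.0;    /* 32-bit color writes only      */

    /* Peak color write rate the ROPs could demand, in GB/s. */
    double fill_gbps = rops * gpu_clock_mhz * 1e6 * bytes_per_pixel / 1e9;

    const double mem2_bw = 12.8;           /* rumored 64-bit DDR3-1600 bus  */

    printf("8 ROPs can write up to ~%.1f GB/s of color alone\n", fill_gbps);
    printf("rumored MEM2 bandwidth:  %.1f GB/s\n", mem2_bw);
    printf("-> without fast eDRAM backing them, more ROPs would just starve\n");
    return 0;
}
```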
Then there's the CPU, which simply put is terrible at more things than it's competent at. Even the things it's competent at are not sufficient for a modern HD console and games. SIMD, MAD, best of luck.
Seems to me Nintendo built this console with Xbox 360/PS3 target specs in mind. Further to that, the console seems to only be able to exceed Xbox 360 levels of performance if developers are willing to invest significant time optimising their game and engine for the Wii U. Overcoming issues with the eDRAM bandwidth, MEM2 bandwidth, the shitbox CPU, they will have to pull out every optimisation and trick possible to clearly exceed Xbox 360 level visuals. Which I think is very unlikely to occur, given developers will again turn to the next gen Xbox, PlayStation, and PC for their multiplatform games and largely ignore the Wii U. I doubt very much many 3rd party developers or publishers are going to invest heavily into games and engines for the Wii U.
To finish, FUCK YOU NINTENDO. Can't believe they've yet again delivered us a new console whose performance is 7 years in the past. It would have cost them bugger all to have delivered a console within the same ballpark as the next gen Xbox and PlayStation, with hardware that made down-porting a very real and easily achievable process. Heck, it would have cost them only a matter of dollars to increase the MEM2 bus to 128-bit or even 256-bit, and dollars more to get a CPU that isn't munted and pathetic.
Overcoming issues with the eDRAM bandwidth, MEM2 bandwidth, the shitbox CPU, they will have to pull out every optimisation and trick possible to clearly exceed Xbox 360 level visuals.
Shinen said: We didn't have such problems. The CPU and GPU are a good match. As said before, today's hardware has bottlenecks with memory throughput when you don't care about your coding style and data layout. This is true for any hardware and can't be only cured by throwing more megahertz and cores on it. Fortunately Nintendo made very wise choices for cache layout, ram latency and ram size to work against these pitfalls.
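As a purely hypothetical illustration of the "coding style and data layout" point in that quote (the struct and field names are made up, not anything from an actual Wii U codebase): the same per-frame update touches far less memory when the hot fields are stored contiguously instead of interleaved with cold data, which matters a lot more on a machine with small caches and a narrow memory bus.

```c
#include <stdio.h>

/* Hypothetical sketch of array-of-structures vs structure-of-arrays.       */
#define N 10000

/* Array-of-structures: updating pos_x/vel_x drags whole 48-byte records
 * (and their cold fields) through the cache, line by line.                 */
struct ParticleAoS {
    float pos_x, pos_y, pos_z;
    float vel_x, vel_y, vel_z;
    float color[4];  /* cold data, only read by the renderer                */
    int   flags;
    int   pad;
};

/* Structure-of-arrays: the update streams through two tightly packed
 * float arrays, which is what a small cache and a narrow bus like to see.  */
struct ParticlesSoA {
    float pos_x[N];
    float vel_x[N];
};

static struct ParticleAoS aos[N];
static struct ParticlesSoA soa;

int main(void)
{
    for (int i = 0; i < N; i++) {          /* AoS update                     */
        aos[i].pos_x += aos[i].vel_x;
    }
    for (int i = 0; i < N; i++) {          /* SoA update                     */
        soa.pos_x[i] += soa.vel_x[i];
    }
    printf("bytes per particle record: AoS ~%zu, SoA hot data ~%zu\n",
           sizeof(struct ParticleAoS), 2 * sizeof(float));
    return 0;
}
```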
You won't have 8 cores for games; how many you will have is arguable (rumours were saying 6), but it certainly won't be 8. Also, the Wii U has 3 cores plus an audio DSP.
Annnnnnnnnnnnnd, on the other side of the spectrum, we have a developer who says none of those are a problem.
Of course, these guys are just game developers, who have many years of programming experience, and made a launch game with next gen graphics...
Bottom line: can the Wii U outperform the current gen machines or not?
While ikioi's post had a negative slant, I don't see how it wasn't technical discussion. Unless the thread is only intended for effusive praise of Nintendo's design choices.
This thread has really turned to shit. Goodbye to the technical discussion.
Thanks anti-Nintendo crowd.
Both sides share the blame.
This thread has really turned to shit. Goodbye to the technical discussion.
Thanks anti-Nintendo crowd.
The discussion level dropped. This thread was supposed to be respected, a sort of neutral ground. The Wii U rumored specs thread (remember?) was the one for the console warriors. But since the competition's specs are floating around...
While ikioi's post had a negative slant, I don't see how it wasn't technical discussion. Unless the thread is only intended for effusive praise of Nintendo's design choices.