
WiiU technical discussion (serious discussions welcome)

This is a stupid argument. I can make a game that runs better on the original Wii than on a $1000 PC. What point does that prove?

I'm not trying to prove anything. I just said Trine 2 is not a demanding game; it's a side-scrolling game with some interactive puzzles. The game is great, but some people don't like what I said.

I don't believe anyone was arguing otherwise. You seem to be derailing the thread. Let's drop this and get back to real technical discussions.

All are free to ignore my posts. But you're right, I came here to read some technical discussions.
 
Guys, seriously, Trine 2 really doesn't matter. In the context of what I said, I thought I remembered it being 1080p; it was not, and so I was corrected.

If you want to discuss whether Trine 2 is a demanding game or not, make a thread about it. If you want to compare the Wii U to the 360 or PS3, there are dozens of threads for that; this isn't one of them.
Agreed. I'm sorry I pointed out that the Wii U version, at 720p, was still an upgrade over the other console versions despite not being 1080p. In the future I'll try to refrain from posting facts that make XtremeXpider lose his shit, sorry. :(
 

SYNTAX182

Member
Why? I have both versions, and the Xbox 360 version has some things that look better than the original (non-enhanced) PC version.

Lower resolution, framerate and texture quality, less shadow coverage and draw distance, etc. I'm amazed by and love the port work the devs did here, but they definitely had to make some compromises (some areas, like lighting, are improved) to achieve almost the same look as the original version.

Anyway, I just didn't want anyone to think this is some established fact. Back to the tech discussion.
 

Lizardus

Member
And? What is your point? The PS3 runs some games worse than the Xbox 360 versions, and the Wii U runs some games worse than the PS360 versions.



Why? I have both versions, and the Xbox 360 version has some things that look better than the original (non-enhanced) PC version.

The Enhanced Edition doesn't mean better graphics; yet another example of people thinking graphics = everything.

On January 27, 2012, CD Projekt RED announced an Enhanced Edition of The Witcher 2 via its sister company GOG.com, which was released on April 17, 2012 on the Xbox 360.[33] The Enhanced Edition added over 10GB of new content, including four hours of gameplay, thirty-six minutes of cinematics including a new intro and outro, and a host of fixes to gameplay and the interface. All existing owners of The Witcher 2 received free upgrades to the Enhanced Edition, and could pre-load the update starting April 11, 2012. All new copies on PC and Xbox 360 shipped with the additional content on board.
 

AzaK

Member
So does this mean the Wii U will get essentially the same games as Orbis and Durango, with a few compromises like a lower framerate, a lower resolution and/or some effects missing?
A case similar to PC >> PS360?
No, because developers don't seem to want to support it, which is a different, non-technical reason.
 

OryoN

Member
Meanwhile, our group heading up the "Wii U GPU reveal project" has been suspiciously quiet these last couple of days. :D

I sense a Fourth Storm is brewing...
 
Specs won't ultimately determine Wii U getting support. Madden says 'Hi.'

Konami with MGS and Castlevania.

Take-Two's reluctance.

Ultimately, regardless of the difficulty (or lack thereof) of porting, platform decisions will be business decisions.
 

z0m3le

Banned
Specs won't ultimately determine Wii U getting support. Madden says 'Hi.'

Konami with MGS and Castlevania.

Take-Two's reluctance.

Ultimately, regardless of the difficulty (or lack thereof) of porting, platform decisions will be business decisions.

Exactly this. And while this is the wrong thread to talk about it, I do want to say that Iwata going to developers for their older IPs, doing collaborations, and buying whole IPs off of third parties is about the best thing Nintendo can do right now.

It pushes their console as the one with the most exclusives, and if some of those exclusives are exciting to different gamers, it could put the Wii U in a new perspective: as the companion console to one of the other two, which will largely share a library.
 

ugoo18

Member
I'm reading some "magic" in some posts.

But yes, there are some 1080p games on Wii U, some non-demanding games. It will depend on the resources used by the game, just like the 1080p Xbox 360/PS3 games.



Come on, the Wii U version came a year later than the Xbox 360 and PS3 versions. Even The Witcher 2 on Xbox 360 is better than the original TW2 for PC, and it is not because of the power. And they are really good devs, but sorry, Trine 2 is not a demanding game; it is beautiful because of the art.

-_- Please stop, it's not. After seeing a 243-page argument by a troll saying the same thing on GameSpot, I would rather not see that happen in this topic as well.
 
In my opinion, the Wii U will get all of its competitors' games at 720p resolution, while the others will be standard at 1080p.

Because Durango and the PS4 are between 2.25 and 3 times more powerful than the Wii U, and that is roughly the difference between 720p and 1080p.

Right, if it were limited to pixel fill rate you'd be completely correct, but there are other things to take into consideration. The games should all be possible given how scalable everything is, but it would be more than just a lower resolution; you'd also see lower effects. Think of it as PC settings... You can bump the resolution down and this helps a lot with performance, but often you have to lower other things from high to medium or even low to get a good frame rate.

Developers would have to do this in a more manual way: creating lower-resolution assets (if they don't already exist for lower PC specs), maybe having fewer characters on screen, stuff of this nature.
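
To put a rough number on the resolution part alone, here's a back-of-the-envelope sketch (pixel counts only, ignoring everything else that has to scale):

```python
# Rough pixel-count comparison: how much raster work drops when going
# from 1080p to 720p, all else being equal (a big simplification).
def pixels(width, height):
    return width * height

p1080 = pixels(1920, 1080)   # 2,073,600 pixels
p720 = pixels(1280, 720)     # 921,600 pixels

print(f"1080p/720p pixel ratio: {p1080 / p720:.2f}x")  # 2.25x
```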

Orbis has 3-3.5x the GPU power (since we still don't know how powerful the Wii U GPU is, what's up with that?!), 10x the compute power (500 GFLOPs vs ~50 GFLOPs; wait, is Wii U's CPU even that much?), 13.5x the bandwidth and 3.5x the memory. Let's not go into how much more efficient its GPU and CPU may be (it might be by a significant amount, we don't know; it's at least a bit more efficient).

Resolution is memory and GPU intensive, but decreasing the resolution from 1080p to 720p isn't going to cut your RAM usage by 3.5x, or 2.5 GB (from 3.5 GB to 1 GB). Also, I expect a lot of Durango and Orbis games to be 720p once they really start pushing the hardware and trying to get the most graphical fidelity out of their games; I just don't see Wii U versions being possible there at all. I would think certain games would start doing this about 1.5 to 2 years down the line. Until that starts happening, you're still going to have to change most or maybe all of the assets, and probably the number of unique objects in your environments, as well as reducing the resolution.
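
As a rough sanity check on the memory point (assuming 32-bit buffers; the render-target count here is just an illustration):

```python
# Back-of-the-envelope: how much RAM a resolution drop actually saves.
# Assumes 4 bytes per pixel per buffer; the buffer count is illustrative.
def buffer_mb(width, height, bytes_per_pixel=4):
    return width * height * bytes_per_pixel / (1024 ** 2)

fb_1080 = buffer_mb(1920, 1080)   # ~7.9 MB per buffer
fb_720 = buffer_mb(1280, 720)     # ~3.5 MB per buffer

# Even with, say, 6 render targets plus a depth buffer, the saving is small:
buffers = 7
print(f"Saved: {(fb_1080 - fb_720) * buffers:.0f} MB")   # ~31 MB, nowhere near 2.5 GB
```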

The compute side of things would bring the most problems. If the game featured complex physics calculations, they would have to be severely scaled back. If their physics engine didn't have that level of scalability, or the physics were really crucial to the gameplay, it wouldn't work at all. Enemy counts and the number of players on screen at one time would have to be severely reduced. But it also really depends on how the devs are utilizing the CPU, say if the game is really CPU intensive. If they're doing a lot of post-processing etc. with the CPU, or maybe doing things we haven't even heard of yet, then I dunno...

The other thing is the bandwidth. This would affect the AA, shadows, effects, all that stuff. Once again, it may depend on how the game is designed to take advantage of all that 176 GB/s of bandwidth.

Most importantly, lighting is gonna get fucked by every one of these things.

Basically, all I'm trying to say is that it's not nearly as simple as lowering the resolution. It's gonna come down to the specific game, and it will really depend on what concessions devs are willing to make and how much time and money it will take to make those changes.
 

z0m3le

Banned
Orbis has 3.5x the GPU power, 10x the compute power (500 GFLOPs vs ~50 GFLOPs; wait, is Wii U's CPU even that much?), 13.5x the bandwidth and 3.5x the memory. Let's not go into how much more efficient its GPU and CPU may be.

Resolution is memory and GPU intensive, but decreasing the resolution from 1080p to 720p isn't going to cut your RAM usage by 3.5x, or 2.5 GB (from 3.5 GB to 1 GB). Also, I expect a lot of Durango and Orbis games to be 720p once they really start pushing the hardware and trying to get the most graphical fidelity out of their games; I just don't see Wii U versions being possible there at all. I would think certain games would start doing this about 1.5 to 2 years down the line. Until that starts happening, you're still going to have to change most or maybe all of the assets, and probably the number of unique objects in your environments, as well as reducing the resolution.

The compute side of things would bring the most problems. If the game featured complex physics calculations, they would have to be severely scaled back. If their physics engine didn't have that level of scalability, or the physics were really crucial to the gameplay, it wouldn't work at all. Enemy counts and the number of players on screen at one time would have to be severely reduced. But it also really depends on how the devs are utilizing the CPU, say if the game is really CPU intensive. If they're doing a lot of post-processing etc. with the CPU, or maybe doing things we haven't even heard of yet, then I dunno...

The other thing is the bandwidth. This would affect the AA, shadows, effects, all that stuff.

Most importantly, lighting is gonna get fucked by every one of these things.

Basically, all I'm trying to say is that it's not nearly as simple as lowering the resolution. It's gonna come down to the specific game, and it will really depend on what concessions devs are willing to make and how much time and money it will take to make those changes.

Yeah, that is more or less what I pointed out: it would be more than pixel fill rate, and there would be a reduction in some or all of the things you mentioned. The point you make is that it becomes impossible at some point, and I really doubt there is any truth to that. Physics can only be part of the gameplay so much; at some point it becomes unplannable, and thus impossible to build a game around, or at the very least highly limiting.

The assets they would have to make would largely be found in the PC port, since the low-spec PCs that have to support these games fit with the Wii U's limited hardware rather well. This should remain true for the next couple of years at least.

As for everything you mentioned about lighting and all of those other things, until we see a game that truly can't be scaled, we should assume that multiplatform games will exist on a scale.

The 400+ GFLOPs of the PS4's GPU set aside for GPGPU won't be used in any crucial way, since XB3 doesn't have this, and if the game can't exist on XB3 or Wii U, it isn't multiplatform.

Comparing the Wii U to XB3 makes the most sense when we have quite a few reports stating that XB3 is already the lead platform and developers are already building their games on it and porting to PS4/PC. Saying the PS4 has 500+ GFLOPs of compute power is irrelevant to multiplatform games, because at most it will mean that the PS4 has wind/hair effects while XB3 does not.
 

wsippel

Banned
Cheesemeister (@Cheesemeister3k) tweeted at 5:42 PM on Sat, Feb 02, 2013:
Genyo Takeda: I don't think that the #WiiU CPU and GPU are imbalanced in favor of the GPU. It depends how you measure. GPU chip is bigger.
(https://twitter.com/Cheesemeister3k/status/297746798385172480)

Cheesemeister (@Cheesemeister3k) tweeted at 5:44 PM on Sat, Feb 02, 2013:
Genyo Takeda: In modern CPUs, the math logic portions are rather small. In new PCs and servers, the CPUs may be big, but the logic is small.
(https://twitter.com/Cheesemeister3k/status/297747369645187073)

Cheesemeister (@Cheesemeister3k) tweeted at 5:48 PM on Sat, Feb 02, 2013:
Genyo Takeda: It's common for the surrounding SRAM cache memory in CPUs to be bigger. From that viewpoint, you wouldn't see them imbalanced.
(https://twitter.com/Cheesemeister3k/status/297748259177365504)
This image should help:

[Image: vVPbclw.jpg (45nm Core 2 Duo die shot)]


That's a 45nm Core 2 Duo. Almost half the die area is SRAM (L2). With eDRAM, that uniform area in the lower half would shrink to roughly one third, resulting in a significantly smaller die.
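
Putting rough numbers on that (the ~105 mm² die size and the one-third area ratio are approximations, not measured values):

```python
# Rough sketch of the point above: if ~half of a 45 nm Core 2 die is L2 SRAM
# and eDRAM needs roughly a third of the area per bit, the die shrinks a lot.
# The 105 mm^2 figure and the 1/3 ratio are approximations, not measurements.
die_mm2 = 105.0
sram_mm2 = die_mm2 * 0.5          # "almost half the die area is SRAM"
edram_mm2 = sram_mm2 / 3.0        # same capacity as eDRAM, ~1/3 the area

new_die = (die_mm2 - sram_mm2) + edram_mm2
print(f"{new_die:.0f} mm^2 (~{100 * (1 - new_die / die_mm2):.0f}% smaller)")
# -> ~70 mm^2, roughly a third smaller than the original die
```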
 

Darryl

Banned
Resolution is memory and GPU intensive, but decreasing the resolution from 1080p to 720p isn't going to cut your RAM usage by 3.5x, or 2.5 GB (from 3.5 GB to 1 GB). Also, I expect a lot of Durango and Orbis games to be 720p once they really start pushing the hardware and trying to get the most graphical fidelity out of their games; I just don't see Wii U versions being possible there at all. I would think certain games would start doing this about 1.5 to 2 years down the line. Until that starts happening, you're still going to have to change most or maybe all of the assets, and probably the number of unique objects in your environments, as well as reducing the resolution.

I think you're really overstating how difficult it will be to port these games. Like with changing your assets, you could probably run batch processes to reduce the texture definition or polygon count in your models. It sounds like something these major development studios are easily capable of; if you have a higher-definition model, it's usually not too difficult to scale it down, it's the reverse that is hard.
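
For what it's worth, the texture half of that batch job really can be just a few lines. A minimal sketch (using Pillow; the folder names and the 50% factor are made up):

```python
# Minimal sketch of that kind of batch job: halve every texture in a folder.
# Uses Pillow; the paths and the 50% scale factor are purely illustrative.
from pathlib import Path
from PIL import Image

SRC = Path("textures/high")      # hypothetical source folder
DST = Path("textures/wiiu")      # hypothetical output folder
DST.mkdir(parents=True, exist_ok=True)

for tex in SRC.glob("*.png"):
    img = Image.open(tex)
    half = img.resize((img.width // 2, img.height // 2), Image.LANCZOS)
    half.save(DST / tex.name)    # same filename, half the resolution
```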
 
This image should help:

[Image: vVPbclw.jpg (45nm Core 2 Duo die shot)]


That's a 45nm Core 2 Duo. Almost half the die area is SRAM (L2). With eDRAM, that uniform area in the lower half would shrink to roughly one third, resulting in a significantly smaller die.

SRAM needs 6 transistors for each bit, while DRAM only requires 1 transistor (plus a capacitor). So this definitely plays into the size of the memory when comparing the two.
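
To put that in concrete numbers for a hypothetical 2 MB cache (ignoring tags, sense amps and other peripheral circuitry):

```python
# Quick illustration of the transistor-count gap for a hypothetical 2 MB cache.
# The 2 MB figure is just an example; cell overhead and tag bits are ignored.
cache_bits = 2 * 1024 * 1024 * 8          # 2 MB -> 16,777,216 bits

sram_transistors = cache_bits * 6         # 6T cell  -> ~100 million
edram_transistors = cache_bits * 1        # 1T1C cell -> ~17 million (plus capacitors)

print(f"SRAM : {sram_transistors / 1e6:.0f}M transistors")
print(f"eDRAM: {edram_transistors / 1e6:.0f}M transistors")
```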
 
Yeah, that is more or less what I pointed out: it would be more than pixel fill rate, and there would be a reduction in some or all of the things you mentioned. The point you make is that it becomes impossible at some point, and I really doubt there is any truth to that. Physics can only be part of the gameplay so much; at some point it becomes unplannable, and thus impossible to build a game around, or at the very least highly limiting.

The assets they would have to make would largely be found in the PC port, since the low-spec PCs that have to support these games fit with the Wii U's limited hardware rather well. This should remain true for the next couple of years at least.

As for everything you mentioned about lighting and all of those other things, until we see a game that truly can't be scaled, we should assume that multiplatform games will exist on a scale.

The 400+ GFLOPs of the PS4's GPU set aside for GPGPU won't be used in any crucial way, since XB3 doesn't have this, and if the game can't exist on XB3 or Wii U, it isn't multiplatform.

Comparing the Wii U to XB3 makes the most sense when we have quite a few reports stating that XB3 is already the lead platform and developers are already building their games on it and porting to PS4/PC. Saying the PS4 has 500+ GFLOPs of compute power is irrelevant to multiplatform games, because at most it will mean that the PS4 has wind/hair effects while XB3 does not.

To the first bolded: yeah, there is truth to that once devs start pushing the power envelope so much that they start needing to decrease the resolution on PS4/720 to 720p or less, or just in general when they really start taking advantage of the hardware and pushing it to its limits once the third-generation games start rolling out. That's the point where I think there would be just too many concessions to be made, to the point that you're remaking the game. Then there's no Wii U version of these select titles.

To the second bolded... WTF? I haven't heard any such thing. Even if there was some talk of XB3 being lead, I would take it with a grain of salt. It's almost a year till these consoles come out, way too early to predict something like that, plus things always change. I remember when the 360 always used to be the lead platform; then, about two or so years in, the PS3 was usually the lead platform. Now they're both the lead platform depending on the dev.

Also, remember the 720 is likely gonna have something to aid in compute; not likely 410 GFLOPs of help, but something. It already has 8 cores vs 3 and roughly 3x the processing power. What are devs gonna do with games that fully take advantage of all 8 cores on Durango? At first most games won't, but eventually they will.

I agree with what you guys are saying about some of these things being really easy to downgrade. Lowering the resolution of the assets should be very easy; like you said, they do that in the PC environment all the time. Taking out unique assets and trying to find more duplicates would be a pain, but doable. I wasn't trying to suggest these things would be hard (some will be much harder than others, though), just that it's a lot more than just decreasing the resolution, and once all these things add up, it's possible you're left with a game that's quite a bit different.
 

Durante

Member
The main reason GCN is better than VLIW5 is that it's a simpler architecture to optimize drivers for on PC, where games are programmed on much more abstract layers than on consoles (and also GPGPU, I presume, but I don't know how a modified VLIW5 would perform in that regard).

VLIW5 in a console makes a lot of sense because EVERYTHING will be programmed towards this architecture, which is more efficient per mm^2 than GCN.
So you expect people to write assembly-level shaders for Wii U? Because I don't see that happening. And even then, there are workloads where even perfect coding simply can't fully use a VLIW5 arrangement.

I'd agree that it's probably less of a liability in a console setting, and it may even reach equal performance/mm² on a console, but it will still lag behind GCN in performance/FLOP.
 

z0m3le

Banned
So you expect people to write assembly-level shaders for Wii U? Because I don't see that happening. And even then, there are workloads where even perfect coding simply can't fully use a VLIW5 arrangement.

I'd agree that it's probably less of a liability in a console setting, and it may even reach equal performance/mm² on a console, but it will still lag behind GCN in performance/FLOP.

This explains it a bit better: http://www.neogaf.com/forum/showpost.php?p=47226464&postcount=2735. The main point is that GCN shouldn't have anywhere near the same efficiency gains over VLIW5 in a console as it does on PC, and I wouldn't count the FLOPs differently; there will be added performance per FLOP, but more than likely it won't even be 10%. Certainly much less than the 37% someone mentioned a couple of pages back. This still doesn't matter much, as the Wii U GPU will likely have something like 30-45% of the FLOPs that the PS4/XB3 GPUs will have anyway.

It is just disingenuous to say that GCN in a console will make VLIW5 in another console obsolete, which is the reason I went to such length explaining why AMD even created VLIW4.

Not that I am explaining this to you, but if the Wii U is 440 GFLOPs and based on VLIW5, while XB3 is 1288 GFLOPs and based on GCN, you might say the XB3's GFLOPs would be equal to about 1400 GFLOPs of VLIW5. Again, it is a stretch to say that it would even get 10%, but since the Wii U is already behind by such an amount, it comes off a bit like nitpicking. VLIW5 should in the end be fairly comparable to GCN in a console setting, and that is pretty clear given everything we know about the HD 5000 and HD 7000 series when used outside of DX.
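
Working through those numbers, just to show how little the architectural bonus moves things (the 440 and 1288 GFLOPs are hypothetical figures, and the 10% per-FLOP edge is my own guess):

```python
# Working through the hypothetical figures above. 440 and 1288 GFLOPs are the
# assumed Wii U / XB3 numbers; the 10% per-FLOP edge for GCN is a guess.
wiiu_gflops = 440.0          # hypothetical VLIW5 figure
xb3_gflops = 1288.0          # hypothetical GCN figure
gcn_per_flop_bonus = 0.10    # assumed efficiency edge in a console setting

effective_xb3 = xb3_gflops * (1 + gcn_per_flop_bonus)
print(f"XB3 in 'VLIW5-equivalent' GFLOPs: {effective_xb3:.0f}")   # ~1417
print(f"Raw gap: {xb3_gflops / wiiu_gflops:.1f}x, adjusted: {effective_xb3 / wiiu_gflops:.1f}x")
# -> raw ~2.9x, adjusted ~3.2x; the per-FLOP bonus barely moves the needle
```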
 
Durante said:
So you expect people to write assembly-level shaders for Wii U? Because I don't see that happening. And even then, there are workloads where even perfect coding simply can't fully use a VLIW5 arrangement.

I'd agree that it's probably less of a liability in a console setting, and it may even reach equal performance/mm² on a console, but it will still lag behind GCN in performance/FLOP.
Of course all first-party games and some third-party games will have shaders coded to the metal; it's the way it has to be if you want optimal performance. Even most of the third parties had to do that on the PS3, adapting their engines so their PS3 ports could have the same graphical output as on the 360.

VLIW5 is much more efficient per mm^2 and theoretically a better approach, but on PC it was clear that it couldn't be exploited this way.
Now, I don't think that Orbis and Durango will be GCN architecture per se; their GPUs will also be modified parts with optimizations made.

In the end, we will have to wait to see the Wii U's GPU and to know more about Durango and Orbis to determine how wide the gap between them will be.
 

ozfunghi

Member
Yet none of those guys say it supports tessellation. We just know it supports some features beyond DX10/SM4. It appears there is definitely a chance it does; we still just don't know for sure.

The leaked and confirmed spec list clearly states it has a tessellation unit. We've known this for the better part of a year.
 

ozfunghi

Member
I would expect most Wii U games to stay at 720p. Nintendo is not in an arms race, and the weaker GPU makes it more sensible to stick with good IQ at 720p.

Seems like most are expecting Orbis/Durango (jeez, we need a short abbreviation for that) to stay at 720p in order to push effects. I don't see why we'd need 1080p Nintendo games. In fact, I'd like nothing more than for Or/Dur (sounds like hors d'oeuvres pronounced by an Anglo-Saxon, lol) to be 1080p; it would make 720p Wii U ports a tad more likely.
 
Seems like most are expecting Orbis/Durango (jeez, we need a short abbreviation for that) to stay at 720p in order to push effects. I don't see why we'd need 1080p Nintendo games. In fact, I'd like nothing more than for Or/Dur (sounds like hors d'oeuvres pronounced by an Anglo-Saxon, lol) to be 1080p; it would make 720p Wii U ports a tad more likely.

"Or/Dur" sounds more like "Ordure" to me, which - given the way discussions about system power tend to go - seems entirely appropriate.
 
Seems like most are expecting Orbis/Durango (jeez, we need a short abbreviation for that) to stay at 720p in order to push effects. I don't see why we'd need 1080p Nintendo games. In fact, I'd like nothing more than for Or/Dur (sounds like hors d'oeuvres pronounced by an Anglo-Saxon, lol) to be 1080p; it would make 720p Wii U ports a tad more likely.

I've been trying out 'Durangorbis'.
 

tipoo

Banned
Re: GCN vs VLIW5

ht4u.net made a pretty awesome comparison. They took an HD 5770 and a new HD 7770. Both cards have 40 TMUs, 16 ROPs with a 128-bit SI and GDDR5, and both have 10 shader clusters. The only difference is that while the HD 5770 has 160 5D shaders, the HD 7770 uses 640 1D GCN shaders. Now they clocked the HD 7770 down to the same clocks as the HD 5770 and compared the GPUs.

HD 5770: 1360 GFLOP/s, 34.0 GTex/s, 13.6 GPix/s, 76.8 GB/s
HD 7770: 1088 GFLOP/s, 34.0 GTex/s, 13.6 GPix/s, 76.8 GB/s

You can see that the HD 7770 has about 25% less FLOPs; the rest of the cards are almost identical (and keep in mind that the HD 7770 has the better AF). The results are great: the HD 7770 is up to 37% faster in some games while losing to the HD 5770 in only two games, Dirt 3 and DA2, and there it's at most 1.8% slower. Plus, drivers for the VLIW5 arch are mature while GCN is quite new, so we'll see some improvements here for sure. So, despite having about 25% less raw power, the HD 7770 is faster in most games, sometimes by far. This seems like the step AMD needed for a long time.


http://ht4u.net/reviews/2012/amd_radeon_hd_7700_test/index4.php
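
For what it's worth, those GFLOP figures line up with the shader counts and clocks (assuming the usual 2 FLOPs per ALU per clock and the HD 5770's 850 MHz clock applied to both cards, as in that test):

```python
# Sanity check of the GFLOP numbers in the quote: GFLOPs = ALUs * 2 ops * clock.
# Assumes the usual 2 FLOPs (multiply-add) per ALU per cycle and the HD 5770's
# 850 MHz clock applied to both cards, matching the downclocked ht4u setup.
def gflops(alus, clock_ghz, ops_per_cycle=2):
    return alus * ops_per_cycle * clock_ghz

print(gflops(160 * 5, 0.85))   # HD 5770: 160 VLIW5 units * 5 ALUs -> 1360.0
print(gflops(640, 0.85))       # HD 7770 downclocked: 640 GCN ALUs -> 1088.0
```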
 

tipoo

Banned
This image should help:

[Image: vVPbclw.jpg (45nm Core 2 Duo die shot)]


That's a 45nm Core 2 Duo. Almost half the die area is SRAM (L2). With eDRAM, that uniform area in the lower half would shrink to roughly one third, resulting in a significantly smaller die.



The C2D is over 100 mm² on 45nm; even with half gone to SRAM, the logic for two cores with zero cache is still larger than the three cores plus eDRAM of the Wii U. If you don't completely remove the SRAM from the picture and instead shrink it to a third, it's that much bigger than the entire Wii U CPU. So yes, SRAM vs eDRAM is part of it, but the CPU is still tiny per core.


Didn't we already conclude that the difference is because of the windows driver? So it wouldn't matter as much on a dedicated console?

If it was due to a driver, why did AMD switch architectures? And what about Linux and OSX drivers? If those couldn't be fixed, why could a console driver?
 

z0m3le

Banned

Here, read this: http://www.anandtech.com/show/4061/amds-radeon-hd-6970-radeon-hd-6950/4. It shows why this is, and there are also benchmarks that put GCN and VLIW5 on practically even ground... DX11 is what slows down VLIW5 so much, and it will see a dramatic increase in overall performance per FLOP outside of DX, while GCN's increase is much more in line with moving to something like OpenGL. I talk about it a bit more on page 55; I think I link the post a bit higher on this page.

If it was due to a driver, why did AMD switch architectures? And what about Linux and OSX drivers? If those couldn't be fixed, why could a console driver?

See http://www.neogaf.com/forum/showpost.php?p=47226464&postcount=2735, since I don't really want to keep explaining it.
 

tipoo

Banned
Here, read this: http://www.anandtech.com/show/4061/amds-radeon-hd-6970-radeon-hd-6950/4. It shows why this is, and there are also benchmarks that put GCN and VLIW5 on practically even ground... DX11 is what slows down VLIW5 so much, and it will see a dramatic increase in overall performance per FLOP outside of DX, while GCN's increase is much more in line with moving to something like OpenGL. I talk about it a bit more on page 55; I think I link the post a bit higher on this page.

That's comparing VLIW5 from the HD 4000-5000 series to VLIW4 from the 6000 series; GCN in the 7000 series is neither.

This is a good article:
http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/3

With AMD Graphics Core Next, VLIW is going away in favor of a non-VLIW SIMD design. In principle the two are similar – run lots of things in parallel – but there's a world of difference in execution. Whereas VLIW is all about extracting instruction level parallelism (ILP), a non-VLIW SIMD is primarily about thread level parallelism (TLP).

GCN is very different
[Image: SIMD2.png (GCN SIMD diagram)]


vs
[Image: VLIW4.png (VLIW4 diagram)]


So the comparison you provided doesn't address what I was talking about at all. Sure, VLIW5 may be comparable to VLIW4 if it is properly used, but GCN is NOT VLIW4; it is radically new.
 
Could someone explain shaders to me?

I've heard them talked about for over a year regarding the Wii U; the Wii was often referred to as not having enough of them, and now Miyamoto is mentioning them.

What exactly are they? How are they used? Are they hardware or software based? Could you link a screenshot of a game with and without them?

I'm guessing it's a technique used to make more realistic-looking games, as even without them the Wii had some incredible-looking 'cartoony' games.

I could be way off the mark; I really have no idea, I'm just interested to know what they are.

I can't find much on Google, and I don't want to start a new thread when one of the experts in here could explain them in a single post. :p

Thanks.
 

tipoo

Banned
Could someone explain shaders to me?

I've heard them talked about for over a year regarding the Wii U; the Wii was often referred to as not having enough of them, and now Miyamoto is mentioning them.

What exactly are they? How are they used? Are they hardware or software based? Could you link a screenshot of a game with and without them?

I'm guessing it's a technique used to make more realistic-looking games, as even without them the Wii had some incredible-looking 'cartoony' games.

I could be way off the mark; I really have no idea, I'm just interested to know what they are.

I can't find much on Google, and I don't want to start a new thread when one of the experts in here could explain them in a single post. :p

Thanks.


The tl;dr of it is that they calculate what colour each pixel should be based on lighting effects, textures, etc. If there are no other bottlenecks, generally the more shaders you have, the more you can work on at once, which means more effects or higher resolutions.

http://en.wikipedia.org/wiki/Shader
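
If it helps to see it concretely, here is roughly what a pixel shader boils down to, written as an ordinary function instead of GPU code (a toy diffuse-lighting example; the colours and vectors are made up):

```python
# Toy example of what a pixel shader computes: a colour per pixel from a
# surface colour and a light direction (simple Lambertian diffuse, no textures).
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pixel_shader(surface_color, normal, light_dir):
    # Brightness depends on the angle between the surface and the light.
    diffuse = max(0.0, dot(normalize(normal), normalize(light_dir)))
    return tuple(c * diffuse for c in surface_color)

# A GPU runs something like this for every pixel, every frame, in parallel
# across its shader units; more shader units means more pixels per clock.
print(pixel_shader((1.0, 0.5, 0.2), normal=(0.0, 0.0, 1.0), light_dir=(0.0, 1.0, 1.0)))
```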
 

z0m3le

Banned
tipoo, right, GCN will be advantageous, but not by anything like the link you gave above; it will be closer to 10%.

VLIW4 and VLIW5 in a console would be basically identical, and in DX there is a ~20% difference between them. VLIW is simply very weak for high-level coding because often 20-40% of the SPs go unused; this wouldn't happen in a console. GCN benefits by making all SPs free to grab threads, IIRC, so its advantage comes from newer components and the way data flows through the architecture. Nothing like ~40%; more like 5-10% will be seen compared to VLIW in a console setting.
 

wsippel

Banned
The C2D is over 100 mm² on 45nm; even with half gone to SRAM, the logic for two cores with zero cache is still larger than the three cores plus eDRAM of the Wii U. If you don't completely remove the SRAM from the picture and instead shrink it to a third, it's that much bigger than the entire Wii U CPU. So yes, SRAM vs eDRAM is part of it, but the CPU is still tiny per core.
As is Jaguar, though that wasn't the point to begin with. The point is that Espresso should be roughly twice the size if it used SRAM, so the chip being as tiny as it is can be largely attributed to the fact that it uses eDRAM.

By the way: Espresso is a real RISC processor and 32-bit, two more things to consider when comparing its die size to amd64 cores.
 
The tl;dr of it is that they calculate what colour each pixel should be based on lighting effects, textures, etc. If there are no other bottlenecks, generally the more shaders you have, the more you can work on at once, which means more effects or higher resolutions.

http://en.wikipedia.org/wiki/Shader

Thanks a lot, I don't know how I missed that when I searched. :p

It's going to be really interesting to see some big-budget first-party Nintendo games using shaders then, hopefully at E3!

Well done to the guys who chipped in for these hardware results, looking forward to finally having an idea of the GPU's true power.
 

Roo

Member
Seems like most are expecting Orbis/Durango (jeez, we need a short abbreviation for that) to stay at 720p in order to push effects. I don't see why we'd need 1080p Nintendo games. In fact, I'd like nothing more than for Or/Dur (sounds like hors d'oeuvres pronounced by an Anglo-Saxon, lol) to be 1080p; it would make 720p Wii U ports a tad more likely.

Ordu
Orgo
Bisgo
Durabis

I would expect most Wii U games to stay at 720p. Nintendo is not in an arms race, and the weaker GPU makes it more sensible to stick with good IQ at 720p.

I'm totally fine with that as long as they push the hardware to its limits
 

z0m3le

Banned
GCN is very different
[Image: SIMD2.png (GCN SIMD diagram)]


vs
[Image: VLIW4.png (VLIW4 diagram)]

Good, maybe this will make it easier to explain to you: what you have there are pictures of the SIMDs for both architectures. (As blu pointed out, these are actually shader units.)

Let's compare the HD 5870 to the HD 7950.

The HD 5870 uses 5 (VLIW5) ALUs (or "SPs") per unit, so the HD 5870's 1600 SPs divided by 5 gives you 320 units. (VLIW4 parts like the HD 6970 used 1536 SPs, thus 384 units.) (Again, as blu points out, the HD 5870 has 320 shader units, and these units are grouped in 16s to form a SIMD, so the HD 5870 has 20 SIMDs.)

Now GCN: the HD 7950 uses 1792 ALUs ("SPs"), and each SIMD uses 16, so to get the SIMD count you divide 1792 by 16, giving you 112 SIMDs. So those pictures are really hard to compare without context. Also, IIRC, GCN then takes those SIMD units and combines them in groups of 4, making the CUs (compute units), giving the HD 7950 28 CUs.

Obviously the Wii U isn't going to use an HD 5870, and likewise the PS4 isn't an HD 7950; the PS4 is supposed to use 14+4 CUs, which is 56+16 SIMDs, divided into one pool for graphics and one pool for GPGPU functions.

Maybe I dug a bit too deeply into the PS4, so let me get back on track and tie this all together. VLIW5 splits its ALUs into groups of 5, GCN splits its ALUs into groups of 16; they could have the same number of ALUs (the Wii U won't, I'm just talking about architecture here). The main benefit of the wider grouping is that when a VLIW5 unit is given a task it usually only uses 3 or 4 of its ALUs, but since GCN is much wider it will handle more tasks at one time, allowing it to use 12-15, maybe even all 16, ALUs in a SIMD.

That is how things work in DX, but if you were to code much closer to the metal, you could make sure that all 5 ALUs in the unit are used much more frequently; obviously 100% of the time would be wizard work, but it would be the overwhelming majority of the time. GCN likewise would almost always use all 16 ALUs every pass. But if VLIW5 is only ~60% efficient in DX and GCN is say ~85% efficient, while in a console both will be in the mid to high 90s, then VLIW5 and GCN in a console setting are going to perform much, much closer than they do in DX.
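
To put those percentages next to the ALU counts (the utilisation figures are just my ballpark guesses, not measurements):

```python
# Rough illustration of the utilisation argument above. The ALU counts come
# from the post; the utilisation percentages are ballpark guesses.
def effective_alus(total_alus, utilisation):
    return total_alus * utilisation

# On PC / DirectX (guessed utilisation):
print(effective_alus(1600, 0.60))   # VLIW5 HD 5870 -> ~960 ALUs doing useful work
print(effective_alus(1792, 0.85))   # GCN HD 7950   -> ~1523 ALUs

# In a console with to-the-metal shaders (the "mid to high 90s" guess):
print(effective_alus(1600, 0.95))   # -> ~1520
print(effective_alus(1792, 0.95))   # -> ~1702
# The architectures end up much closer once both are near full utilisation.
```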
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Good, maybe this will make it easier to explain to you: what you have there are pictures of the SIMDs for both architectures.
Actually, those are pictures of the shader units of the two architectures. While in GCN the shader unit actually is a SIMD, in the context of ATI's VLIW architectures a SIMD is a group of shader units (often 16 of them) that at any given moment execute the same op of the same shader clause. I.e., all those cores executing the same code on different data makes the group a SIMD entity, but the individual shader unit is not SIMD per se.
 

OryoN

Member
Orbis has 3-3.5x the GPU power (since we still don't know how powerful the Wii U GPU is, what's up with that?!), 10x the compute power (500 GFLOPs vs ~50 GFLOPs; wait, is Wii U's CPU even that much?).

Where are these figures coming from? In both cases they seem quite high, especially for the Jaguar cores in Orbis, which were never meant to be brute-force, FLOP-heavy designs (like Cell, for instance). According to some reasonable estimates (even mentioned in this very thread), that figure could be closer to a quarter of the 500 GFLOPs number you posted. Depending on where the Wii U's CPU sits (15-20 GFLOPs), that puts it at anywhere from a 5-7x disadvantage in theoretical performance, at most.

How these CPUs actually perform in real-world cases should be interesting. The Wii's CPU was at a great disadvantage in theoretical performance, yet in actual games I'm still asking myself, "what was all the noise about?" I'm not seeing anything particularly impressive, CPU-wise, that would be impossible on the Wii (which, ironically, had a few pretty impressive physics-based games).
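
For reference, the rough peak-FLOP math behind those estimates looks like this (the clocks and per-cycle throughputs are assumptions: a rumoured ~1.6 GHz for the Jaguar cores with 8 single-precision FLOPs per cycle each, and ~1.24 GHz with paired singles for Espresso):

```python
# Rough peak-FLOP estimates behind the figures above. The clock speeds and
# per-cycle throughputs are assumptions, not confirmed specs.
def cpu_gflops(cores, clock_ghz, flops_per_cycle):
    return cores * clock_ghz * flops_per_cycle

jaguar = cpu_gflops(8, 1.6, 8)      # ~102 GFLOPs, nowhere near 500
espresso = cpu_gflops(3, 1.24, 4)   # ~15 GFLOPs

print(f"Jaguar ~{jaguar:.0f} GFLOPs, Espresso ~{espresso:.0f} GFLOPs")
print(f"Ratio: ~{jaguar / espresso:.1f}x")   # roughly the 5-7x range mentioned above
```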
 
I think he's combining the CUs from the GPU side into the CPU figure?

I really don't think a 5-7x disadvantage is anything to scoff at anyway.
 

ahm998

Member
I notice Pikmin 3 is not using tessellation, any idea why?

What about advanced lighting and better textures, are those possible in Pikmin?

Maybe the graphics card doesn't include tessellation.
 
I notice Pikmin 3 is not using tessellation, any idea why?

What about advanced lighting and better textures, are those possible in Pikmin?

Maybe the graphics card doesn't include tessellation.

Is English not your first language? If it is...
Anyway, the card HAS tessellation. Read the last few pages.
 