
WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis

You are new to this hardware apparently. It is an inherent feature from the GC.

The GC CPU had customizations made so it could be used in conjunction with the GPU. It was used to produce texture effects and calculate geometry in a way completely different from the PS2 and Xbox1.

When the GC launched, devs said it was incapable of doing things like bump mapping, normal mapping and bloom, but that wasn't actually the case. The thing is that the architecture was radically different from what they were familiar with. The way you produced those effects on the GC was completely different from the Xbox1, which used modern shaders. It was harder to do, but it used fewer resources than the Xbox1 shaders did. You could pretty much produce texture effects for free on the GC.

These features are still present in the Wii U CPU.

The Wii U GPU is custom made with components that seem to range from the HD4000 to the HD6000 series. It also has its own proprietary API known as GX2.

The 360 has none of this functionality. You can't port code from it and expect it to run on the Wii U perfectly. Things will simply not work out the same way.

I will go more in depth into it later.

I would very much appreciate that. And I don't mean that sarcastically.

Bump/normal mapping happened on the Gamecube's GPU and not the CPU though. Dot3 bump mapping was a DX7 feature and not EMBM which was a DX8 feature. I know the Gamecube was underutilized but I'm pretty sure you're attributing things to the CPU that happened on the GPU.

The API is irrelevant because any console maker exposes as much of the hardware as is possible to the API.
 
My 2 cents: Shin'en said (paraphrasing here) they didn't think the eDRAM was fast enough to do deferred rendering. Turns out it's more than capable. The doubt comes from the amount of bandwidth that lighting and shadows eat up under deferred rendering; you have to sacrifice some post-processing effects. Nintendo devs probably didn't know this either, or even have full knowledge and experience outside of a forward renderer. Now Shin'en has confirmed that all render targets fit in eDRAM. I wonder when Nintendo devs discovered this, and whether any third-party devs knew.

Don't expect the frame buffer to be stored in eDRAM. Post-processing effects will be stored there.
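
Back-of-the-envelope numbers (mine, not Shin'en's) for why a full set of 720p render targets can plausibly sit in the 32 MB:

    #include <cstdio>

    int main() {
        // Hypothetical 720p deferred setup: depth + three G-buffer targets + one
        // lighting target, all at 32 bits per pixel. Numbers are illustrative only.
        const double bytes = 1280.0 * 720.0 * 4.0 * 5.0;
        std::printf("%.1f MB of render targets vs 32 MB of eDRAM\n", bytes / (1024.0 * 1024.0)); // ~17.6 MB
        return 0;
    }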

Bayonetta 2 can't in any way be judged by a single image. You have to see the Gomorrah fight to judge the console technically this early in its life cycle.
 

Argyle

Member
As for how they are different. They are simply different.

Have you ever programmed in assembly before? If you haven't then there is no way you could really understand this in full detail.

Yes, I have.

The instruction sets and whatnot are different for different architectures. The pipelines are different. The balance and distribution of data through memory is different. The APIs are different. You can't put code that was meant for one architecture on another and expect 100% portability, sometimes even amongst hardware in the same family. Heck, you can't even expect it to actually run.

It's like taking a program that was made to use SSSE3 on an Intel CPU and running it on an AMD CPU of the same clock that lacks that extension. It will not run anywhere near as well, and it may even crash.
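
This is the kind of thing PC code has to guard against explicitly - a minimal sketch (GCC/Clang on x86; the work functions are just stand-ins for real SSSE3 and scalar paths):

    #include <cstdio>

    // Stand-in work functions; imagine the first one using SSSE3 internally.
    static void work_ssse3(float* d, int n)  { for (int i = 0; i < n; ++i) d[i] *= 2.0f; }
    static void work_scalar(float* d, int n) { for (int i = 0; i < n; ++i) d[i] *= 2.0f; }

    int main() {
        float data[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        // GCC/Clang builtin: ask the CPU what it supports instead of assuming.
        if (__builtin_cpu_supports("ssse3"))
            work_ssse3(data, 8);
        else
            work_scalar(data, 8);
        std::printf("%f\n", data[0]);
        return 0;
    }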

99-100% of a modern game is written in a high level language, so none of this matters, as far as the instruction set. Just recompile for the target processor.

The api's are different but I'm asking what overarching changes you would have to make. The api falls under implementation details imho.

Take the PS3 CPU vs the 360 CPU. They both have the same clock, but one has 1 core with 8 SPEs, and the other has 3 cores with hyper-threading. The way tasks are delegated and pushed through is completely different. Those are porting problems right at the door.

I'll get back to this later; I need to take care of some business for the moment.

Sure, but we are not talking about the PS3. The 360 has a triple core cpu, as does the Wii U. Performance differences aside, a job based engine should port over. But that is neither here nor there, I was asking about what changes you would need to make to the renderer to go from 360 to Wii U.
 
Indeed. This is why many launch titles on the HD twins ran at 720p or higher, and graphically demanding games late in the gen didn't. Resolution, and sometimes frame rate, can be sacrificed for fidelity or effects. CoD on consoles runs at sub-HD, AFAIK, to keep a locked 60FPS frame rate.

Is CoD considered one of the most technically demanding games of the current gen?
 
It's probably higher on the list than many would think when you consider it runs at a locked 60fps unlike nearly all of its contemporaries.

It's also based on the nearly 14-year-old Quake III Arena engine, which is one of the reasons they used it: to be able to hit 60fps. As the years passed and more effects were added, the resolution dropped to be able to keep the 60fps 'feel' (I have read several times that it doesn't even run at 60fps and uses several tricks to fool the eye into thinking it does).
 

MegalonJJ

Banned
...those expecting 1080p / 60fps WiiU games are dreaming imo...

Ducktales Remastered & Rayman(?) run at 1080p/60fps. So there!

;P

I know what you mean though and remember 3D World has to be able to render 4 different players in a 3D level (how many platformers do this anyway? No mean feat I imagine), whereas Smash is a 2D fighting game, with 4 players at 1080p/60fps.

Of course if we use your MH3U logic, in that the Wii U can't/struggles to run 1080p/60fps titles, then by extension of your logic, 1080p/60fps wasn't possible on superior hardware for a 2 player fighting game, so of course that console can't run 1080p/60fps titles. Oh well. ;)
 
Yes, I have.



99-100% of a modern game is written in a high level language, so none of this matters, as far as the instruction set. Just recompile for the target processor.

The api's are different but I'm asking what overarching changes you would have to make. The api falls under implementation details imho.



Sure, but we are not talking about the PS3. The 360 has a triple core cpu, as does the Wii U. Performance differences aside, a job based engine should port over. But that is neither here nor there, I was asking about what changes you would need to make to the renderer to go from 360 to Wii U.

Well, post-processing effects would have to be coded to be stored in eDRAM. You would want your render targets there as well. The frame buffer should sit in MEM2. Put audio on the DSP. Any CPU-intensive work must be optimized by taking advantage of the caches, and of the eDRAM or the 1MB of high-speed 1T-SRAM if the CPU has access to them. If you take Iwata's and Shin'en's comments as facts, then you have to use the caches available.
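
Purely as an illustration of that placement list - the pool names and the place() helper are hypothetical, not the real GX2 API:

    #include <cstddef>

    // Hypothetical pool names and place() helper - not the real GX2 API.
    enum class Pool { EDRAM_MEM1, DDR3_MEM2 };
    struct Allocation { Pool pool; std::size_t bytes; };
    static Allocation place(Pool pool, std::size_t bytes) { return {pool, bytes}; }

    int main() {
        Allocation renderTarget = place(Pool::EDRAM_MEM1, 1280 * 720 * 4);  // draw into fast eDRAM
        Allocation postFx       = place(Pool::EDRAM_MEM1, 1280 * 720 * 4);  // post-processing scratch
        Allocation frameBuffer  = place(Pool::DDR3_MEM2,  1280 * 720 * 4);  // final buffer sits in MEM2
        (void)renderTarget; (void)postFx; (void)frameBuffer;
        return 0;
    }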
 

StevieP

Banned
It's also based on the nearly 14-year-old Quake III Arena engine, which is one of the reasons they used it: to be able to hit 60fps. As the years passed and more effects were added, the resolution dropped to be able to keep the 60fps 'feel' (I have read several times that it doesn't even run at 60fps and uses several tricks to fool the eye into thinking it does).

The current iteration of the CoD tech has about as much relation to Quake's tech as Source 2 does. It gets modified as they go, much as is the case with most engines. Though some engine makers (cough Epic, cough DICE) would have you believe they're rewriting everything from scratch with every version number.
 
It's probably higher on the list than many would think when you consider it runs at a locked 60fps unlike nearly all of its contemporaries.

What are the notable post-processing effects at play, other than normal mapping, alphas/alpha transparencies, and DoF? It's not a lighting champ.
 

fred

Member
Absolutely. Getting solid 720p 60FPS with what is going on already shows a huge boost over the last-gen Bayonetta, but if it's 1080p, there is going to be mass destruction around here. No one would be able to dismiss the hardware after that.

There's no way that it's 1080p native, as much as I'd like to see it. Just having the demo running at 720p native at 60fps with v-sync enabled is a great feat, particularly when you consider that the game is also displayed on the GamePad at 60fps at the same time.

Just having the demo running on the above settings is impressive enough.

It's going to be interesting to see what changes Sega make to the first game, if any, when the inevitable port of it is done. Even keeping it at the same res with v-sync enabled should give a solid 60fps, I reckon.

I'm surprised Sega haven't announced it yet, tbh; it's a no-brainer - it'll be quick and cheap to do and will ride the hype of the sequel.
 

AzaK

Member
No, not all of them. Some of them require more processing power than your watch has.



So now there's a "proper next gen" and, I guess, a non-proper next gen. Fantastic.

Krizzx - Bayo 2 is 720p/60. It is not 1080p/60 - at least from what we've seen of it. I doubt it will end up at 1080p/60 either. It's got too much going on per frame. And you've gotta stop with stuff like this:
You get my point though right? They are for the most part not particularly graphically intensive.
 

Argyle

Member
Well, post-processing effects would have to be coded to be stored in eDRAM. You would want your render targets there as well. The frame buffer should sit in MEM2. Put audio on the DSP. Any CPU-intensive work must be optimized by taking advantage of the caches, and of the eDRAM or the 1MB of high-speed 1T-SRAM if the CPU has access to them. If you take Iwata's and Shin'en's comments as facts, then you have to use the caches available.

I don't disagree with any of this. Focusing just on the rendering, this doesn't seem like a lot of work - in fact you may not have a choice on where your framebuffers go (can the GPU on the Wii U write directly to main memory? You cannot render directly to main memory on the 360, the framebuffer needs to be copied back to main memory, and IIRC the GameCube/Wii was the same way.)

If the WiiU has this limitation, it's the same limitation as the 360 and thus it's not a situation where you have to rewrite everything, as krizzx implied.

I guess I am also interested in the case MrPresident mentioned - we have a port of COD, and taking the CPU out of the equation (which could bottleneck the framerate), if the GPU were more powerful, shouldn't we have at least seen a resolution bump, even if just to 720p? IIRC it's a traditional forward renderer and so everything should fit nicely into the eDRAM, right? I'm just surprised that it runs at the same sub-HD resolution as the current gen consoles (see http://www.eurogamer.net/articles/digitalfoundry-black-ops-2-wii-u-face-off).
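
For clarity, the flow I mean looks roughly like this - function names are invented for illustration, not the actual 360 or Wii U calls:

    struct Surface { int width, height; };

    // Invented names - just the shape of the "render into embedded RAM, then copy out" flow.
    static void bindEmbeddedTarget(Surface&) {}          // point the GPU at a target in eDRAM
    static void drawScene() {}                           // issue the frame's draw calls
    static void resolveToMainMemory(Surface&, void*) {}  // copy the finished image out of eDRAM
    static void present(void*) {}                        // hand the main-memory copy to the display

    int main() {
        Surface target{1280, 720};
        static unsigned char scanout[1280 * 720 * 4];    // final buffer lives in main memory
        bindEmbeddedTarget(target);
        drawScene();
        resolveToMainMemory(target, scanout);
        present(scanout);
        return 0;
    }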
 

ikioi

Banned
Yes, I was talking about the design when I said make, not manufacture. I can't remember the exact place they said it, but it was right after the XboxOne and PS4 were announced to be using AMD GPU and AMD commented on their work on them.

Nintendo didn't design it either.

They don't have the skill, resources, or expertise to design a chip. I don't think you understand just how complex designing a GPU would be.

Nintendo's design input would have extended as far as testing, feedback, and setting performance goals. The architecture, chip layout, hands-on engineering, technology: ALL AMD. Nintendo DO NOT have the capabilities to do this.

They were pretty much, "go ask Nintendo" when it came to the Wii U GPU. They said they provided the tech but Nintendo made it to their specification.

Proves nothing. AMD can't speak on matters their client hasn't permitted them to, and we all know Nintendo don't like talking hardware and specs.

Nintendo has 30 years of experience with putting hardware together.

No, they have 30 years of partnering with vendors.

IBM for CPUs, Sony for audio chips, AMD, Silicon Graphics, and ArtX for GPUs, Panasonic for optical lenses, etc.

Companies like Foxconn for PCB design and assembly, Renesas and Global for chip fabrication.

You are kidding yourself if you think Nintendo designed any of the major chips in any of their consoles.
 
Please try to remember everything points to this GPU being 176GFLOPs and the soon to be released consoles (one with a close to 2 TFLOP GPU, 4x as much, much faster RAM and a much faster CPU) are struggling to hit 1080p / 30fps games...
That's a matter of complexity.

The same system that can barely run Crysis at 800x600 and 20 FPS can potentially run an old (read: much simpler) game at 1080p with 16xMSAA at a constant 60FPS or more.
 
Nintendo didn't design it either.

They don't have the skill, resources, or expertise to design a chip. I don't think you understand just how complex designing a GPU would be.

Nintendo's design input would have extended as far as testing, feedback, and setting performance goals. The architecture, chip layout, hands-on engineering, technology: ALL AMD. Nintendo DO NOT have the capabilities to do this.



Proves nothing. AMD can't speak on matters their client hasn't permitted them to, and we all know Nintendo don't like talking hardware and specs.



No, they have 30 years of partnering with vendors.

IBM for CPUs, Sony for audio chips, AMD, Silicon Graphics, and ArtX for GPUs, Panasonic for optical lenses, etc.

Companies like Foxconn for PCB design and assembly, Renesas and Global for chip fabrication.

You are kidding yourself if you think Nintendo designed any of the major chips in any of their consoles.

Well, they do employ, and have for a decade or more, Dr. Wei Yen, the founder of ArtX, which was made up of former SGI engineers.
 

wsippel

Banned
The Espresso thread is even more dead than the Wii U itself, so I'll repost this here:

From the latest Bink changelog:

  • Added Wii-U support for Bink 2 - play 30 Hz 1080p or 60 Hz 720p video! We didn't think this would be possible - the little non-SIMD CPU that could!
  • Added Wii support for Bink 2 - play 30 Hz 640x480 video. Another Broadway platform we didn't initially think could handle Bink 2 - great CPU core!
http://www.radgametools.com/bnkhist.htm

It's interesting to see how this really old core still manages to surprise developers even after all those years.
 

krizzx

Junior Member
Yes, I have.



99-100% of a modern game is written in a high level language, so none of this matters, as far as the instruction set. Just recompile for the target processor.

The api's are different but I'm asking what overarching changes you would have to make. The api falls under implementation details imho.



Sure, but we are not talking about the PS3. The 360 has a triple core cpu, as does the Wii U. Performance differences aside, a job based engine should port over. But that is neither here nor there, I was asking about what changes you would need to make to the renderer to go from 360 to Wii U.

You are only seeing the surface.

The 360/PS3 CPUs were made to do more floating point in code; the Wii U CPU, on the other hand, can run circles around them in integer performance. If you port 360 or PS3 code (which leverages floating point) to the Wii U, then it will run horribly. On the other hand, if that code were rewritten with integer performance in mind, then the Wii U CPU would outperform the 360 and more than likely even the Cell. It is also out-of-order where they were in-order, which meant more stalls on misses for them.

This is what is meant by not utilizing the hardware to its strengths. Actually reprogramming the game for the Wii U CPU, though, greatly increases dev times, and subsequently costs.

As for the GPU, it's not using the same shading API, and the actual component balance is a lot different. It also leverages its RAM differently.

Most Wii U games need to make extensive use of the 32 MB of eDRAM. The PS3/360 don't have that, so those games' code was written to their strengths and bottlenecks. When you try to port this to the Wii U, the entire codebase would have to be rebalanced. If most of the texture data was kept in main system memory, then you would have to completely throw all of that code out and rewrite it to leverage the Wii U's eDRAM.
That takes time and likely costs more. You could bring a game up to 1080p this way, but what dev is going to spend that kind of money?

The fact of the matter is, most games would have to be rebuilt almost from scratch to output at 1080p on the Wii U if they were 720p originally. On the other hand, devs can easily cram the code in to get mostly equivalent performance for under $1 million (this was the number given for one of the early Wii U ports).
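
To make the rebalancing concrete: engines that expect to be ported usually isolate this kind of decision behind a per-platform placement policy, so what changes is the policy rather than every allocation site. A rough, purely illustrative sketch (no real engine or API implied):

    #include <cstddef>

    // Purely illustrative: the per-platform decision that has to change in a port.
    enum class MemKind { FastEmbedded, MainRam };

    struct PlacementPolicy {
        virtual ~PlacementPolicy() = default;
        virtual MemKind renderTarget() const = 0;
        virtual MemKind texture(std::size_t bytes) const = 0;
    };

    struct WiiUPolicy : PlacementPolicy {              // 32 MB eDRAM to play with
        MemKind renderTarget() const override { return MemKind::FastEmbedded; }
        MemKind texture(std::size_t bytes) const override {
            return bytes <= 512 * 1024 ? MemKind::FastEmbedded : MemKind::MainRam;
        }
    };

    struct X360Policy : PlacementPolicy {              // 10 MB eDRAM, render targets only
        MemKind renderTarget() const override { return MemKind::FastEmbedded; }
        MemKind texture(std::size_t) const override { return MemKind::MainRam; }
    };

    int main() { WiiUPolicy w; X360Policy x; (void)w; (void)x; return 0; }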

To make effective use of the hardware in a port would be too expensive; see Skyrim on the PS3 for details.
http://kotaku.com/5885358/why-skyrim-didnt-play-nice-with-the-ps3

This game ran great on the 360 because it was the lead platform. It ran absolutely horribly on the PS3 even though they are so similar.

Same with Bayonetta on the PS3 and 360.
http://www.lensoftruth.com/head2head-bayonetta/


This is why I disregard 360/PS3 ports to the Wii U in comparisons. No dev is going to drop the money to reprogram a port to use the hardware's strengths.

Now, I expect games ported from the Xbox One to the Wii U to show much better performance on the Wii U's end than ports from the 360, because the hardware is more similar.

They both have 32 MB of embedded RAM on the GPU. They are both GPGPU-capable. They both use low-clocked, out-of-order CPUs that are made to run efficient code as opposed to brute force.

Ports of last gen games will always look and run mostly the same as they did on the 360 and PS3. That is probably never going to change. Only games that are built with the Wii U as their lead platform will run at 1080p.
 

ikioi

Banned
Well, they do employ, and have for a decade or more, a former Silicon Graphics/ArtX exec who I believe is one of the founders of ArtX.

Well aware. Nintendo need experts to provide guidance, feedback, liaise with vendors, etc.

But that's a far cry from Nintendo having the inhouse resources and capabilities to build and design their own GPUs. Nintendo didn't simply license AMD's technologies and then go off and build the Wii U's GPU.

What Nintendo did with the Wii U is no different from their previous consoles. They approached companies like AMD and Renesas and had them tailor a product to their requirements.

Same way Nintendo didn't build the tri-core PPC CPU either; IBM did.
 

wsippel

Banned
Well aware. Nintendo need experts to provide guidance, feedback, liaise with vendors, etc.

But that's a far cry from Nintendo having the inhouse resources and capabilities to build and design their own GPUs. Nintendo didn't simply license AMD's technologies and then go off and build the Wii U's GPU.

What Nintendo did with the Wii U is no different from their previous consoles. They approached companies like AMD and Renesas and had them tailor a product to their requirement.

Same way Nintendo didn't build the tri-core PPC CPU either; IBM did.
Nintendo actually designed several low level features for their previous systems, including CPU and GPU features. They have in-house engineers or work with contractors who then work with their partners on implementing the stuff they came up with.
 
The Espresso thread is even more dead than the Wii U itself, so I'll repost this here:


Punching above its weight seems to be the appropriate phrase for this console and its memory-focused design.

I wonder what started this push in the GPU market for power over efficiency, where something like hUMA only appears this late in the GPU design roadmap. The Wii U seems to be majorly focused on removing bottlenecks and increasing efficiency overall.
 

wsippel

Banned
Punching above its weight seems to be the appropriate phrase for this console and its memory-focused design.
Quite a bit above its weight, apparently. And this particular case isn't memory focused - the whole idea behind Bink is that it uses as little memory and bandwidth as possible, because Bink videos are typically used to hide load times. If you want quality and don't care about load times, you use h264 or something.

Bink 2, from what I understand, is very much a performance hog - and almost pure SIMD. It also doesn't scale beyond two cores, so the per-core performance is critical (and one Espresso core does nothing while decoding a Bink 2 video). Yes, any modern smartphone can decode Bink 2, but those have "proper" SIMD extensions. Espresso doesn't - and still manages to handle the format even at full HD.
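
To put rough numbers on why per-core speed carries the load here - illustrative only, with a made-up serial fraction:

    #include <cstdio>

    int main() {
        // Made-up numbers: if, say, 10% of the decode can't be split across threads,
        // two usable cores give at most ~1.8x - so single-core speed carries the load.
        const double serial = 0.10;
        const int cores = 2;                       // the part that reportedly won't scale further
        const double speedup = 1.0 / (serial + (1.0 - serial) / cores);
        std::printf("best-case speedup: %.2fx\n", speedup); // ~1.82x
        return 0;
    }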
 
So am I the only one who wouldn't particularly be fazed if Bayo 2 was 1080p? Not saying the game is shit, but just having a high resolution is only one piece of the pie.

Wow, I am sure you have not seen the gamersyde 12 minute video of Bayo 2.

I accept 1080p is not the whole picture; if all devs wanted it, all games would be 1080p, with major sacrifices in other areas. They always look for the best trade-offs.

I do find it very good performance for such "shit hardware" to be hitting 720p60 consistently in the next batch of games. I would prefer Smash to go 720p60 with more bells and whistles.

What I also find more amazing are the drops in expectations for some PS4 and XB1 games.

Killer Instinct has gone from 1080p60 to 720p60
Killzone Shadow Fall from 1080p60 to 1080p30 in the SP
Dead Rising is having trouble reaching 1080p30
BF4 is 720p60, or a resolution in between like 900p60; it is still WIP, but 1080p60 is looking like it will not be possible.

Quite a bit above its weight, apparently. And this particular case isn't memory focused - the whole idea behind Bink is that it uses as little memory and bandwidth as possible, because Bink videos are typically used to hide load times. If you want quality and don't care about load times, you use h264 or something.

Bink 2, from what I understand, is very much a performance hog - and almost pure SIMD. It also doesn't scale beyond two cores, so the per-core performance is critical (and one Espresso core does nothing while decoding a Bink 2 video). Yes, any modern smartphone can decode Bink 2, but those have "proper" SIMD extensions. Espresso doesn't - and still manages to handle the format even at full HD.

Very good news indeed!
 
Quite a bit above its weight, apparently. And this particular case isn't memory focused - the whole idea behind Bink is that it uses as little memory and bandwidth as possible, because Bink videos are typically used to hide load times. If you want quality and don't care about load times, you use h264 or something.

Bink 2, from what I understand, is very much a performance hog - and almost pure SIMD. It also doesn't scale beyond two cores, so the per-core performance is critical (and one Espresso core does nothing while decoding a Bink 2 video). Yes, any modern smartphone can decode Bink 2, but those have "proper" SIMD extensions. Espresso doesn't - and still manages to handle the format even at full HD.

So where does this put Espresso in the scheme of things concerning common game uses: AI, physics, animation? Although some of those functions are slowly but surely being moved onto the GPGPU. What if Espresso has access to all the eDRAM, or will with SDK updates?
 

atbigelow

Member
No contradiction. If you read what Shin'en said, they dropped the resolution because the difference between 1080p and 720p for Nano Assault wasn't discernible. After going with 720p they added extra effects. It doesn't say "hey, we went with 1080p and then tried to add extra effects but performance suffered." Therefore what you are saying is wrong. When I find the link I will post it so you can read for yourself.
I've read what you speak of. I am making an assumption that there's no reason to do 720p if you can do 1080p. You are correct that they didn't state it in the exact manner in which I did.

In a lot of cases, 720p and 1080p are indiscernible (especially beyond a certain number of feet back from the display). But that isn't really what I am arguing here. Here's the quote from Shin'en:

Nano Assault Neo is running in 720p yes. We had the game also running in 1080p but the difference was not distinguishable when playing. Therefore we used 720p and put the free GPU cycles into higher resolution post-Fx. This was much more visible.
Using some pretty straight-forward logic here, you can assume those free GPU cycles were not free when running at 1080p. Since they were available in 720p, they used it for post-processing effects.

Ergo, those effects were not possible in 1080p at that time.
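
For rough context on why those cycles stop being free at 1080p (fill-rate only, ignoring vertex work and bandwidth):

    #include <cstdio>

    int main() {
        // Very rough: pixel-bound cost scales with the number of pixels shaded.
        const double px1080 = 1920.0 * 1080.0;
        const double px720  = 1280.0 * 720.0;
        std::printf("1080p pushes %.2fx the pixels of 720p\n", px1080 / px720); // 2.25x
        return 0;
    }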
 

wsippel

Banned
So where does this put Espresso in the scheme of things concerning common game uses: AI, physics, animation? Although some of those functions are slowly but surely being moved onto the GPGPU. What if Espresso has access to all the eDRAM, or will with SDK updates?
That's a very good question. Sadly, RAD doesn't have benchmarks up for other platforms, so there's nothing to compare it to. We can only take their word for it that the level of performance they managed to squeeze out of the chip is rather surprising.

I have no idea what that means for other applications. Bink 2 is SIMD, and Nintendo CPUs kinda sorta do have SIMD support with paired singles. Technically. But it's completely custom and quite limited. There are only a handful of instructions available in paired single mode.

Either way, I'm pretty sure Espresso has full access to the eDRAM.
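
A plain-C++ analogy of the gap, in case it helps - no actual PowerPC intrinsics, just the amount of work per "instruction":

    #include <cstddef>

    // Paired singles work on two floats per operation; VMX/SSE-class SIMD on four.
    // Tail elements are left out for brevity.
    void scale_paired(float* v, std::size_t n, float s) {
        for (std::size_t i = 0; i + 1 < n; i += 2) {     // conceptually one paired multiply
            v[i] *= s; v[i + 1] *= s;
        }
    }

    void scale_four_wide(float* v, std::size_t n, float s) {
        for (std::size_t i = 0; i + 3 < n; i += 4) {     // conceptually one 4-wide multiply
            v[i] *= s; v[i + 1] *= s; v[i + 2] *= s; v[i + 3] *= s;
        }
    }

    int main() {
        float v[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        scale_paired(v, 8, 2.0f);
        scale_four_wide(v, 8, 0.5f);
        return 0;
    }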
 
I've read what you speak of. I am making an assumption that there's no reason to do 720p if you can do 1080p. You are correct that they didn't state it in the exact manner in which I did.

In a lot of cases, 720p and 1080p are indiscernable (especially after a certain amount of feet back from the display). But that isn't really what I am arguing here. Here's the quote from Shinen:


Using some pretty straight-forward logic here, you can assume those free GPU cycles were not free when running at 1080p. Since they were available in 720p, they used it for post-processing effects.

Ergo, those effects were not possible in 1080p at that time.

Recent comments somewhat imply that optimization and the use of more than one CPU core could very well have made 1080p possible with the accompanying high-resolution post-fx, not discounting the move to a deferred-rendering-based engine.
 
That's a very good question. Sadly, RAD doesn't have benchmarks up for other platforms, so there's nothing to compare it to. We can only take their word for it that the level of performance they managed to squeeze out of the chip is rather surprising.

I have no idea what that means for other applications. Bink 2 is SIMD, and Nintendo CPUs kinda sorta do have SIMD support with paired singles. Technically. But it's completely custom and quite limited. There are only a handful of instructions available in paired single mode.

Either way, I'm pretty sure Espresso has full access to the eDRAM.

Out of curiosity, can a SIMD core be placed next to L1, L2, or L3 cache while on the same die with paired singles?
 

Argyle

Member
You are only seeing the surface.

The 360/PS3 CPUs were made to do more floating point in code; the Wii U CPU, on the other hand, can run circles around them in integer performance. If you port 360 or PS3 code (which leverages floating point) to the Wii U, then it will run horribly. On the other hand, if that code were rewritten with integer performance in mind, then the Wii U CPU would outperform the 360 and more than likely even the Cell. It is also out-of-order where they were in-order, which meant more stalls on misses for them.

This is what is meant by not utilizing the hardware to its strengths. Actually reprogramming the game for the Wii U CPU, though, greatly increases dev times, and subsequently costs.

If the PS3/360 games are doing heavy floating point lifting, then you are correct, the CPU in the WiiU cannot keep up and you're pretty much screwed barring a total rewrite. (IMHO this is why there is no Frostbite/UE4 on Wii U, etc.)

Taking advantage of the other benefits you mentioned on the WiiU's CPU (out of order execution, etc.) requires...recompiling for the target processor.

I'm still not convinced that the WiiU's CPU can outperform a current gen CPU on a typical game workload due to its low clock speed and weak floating point performance negating any architectural advantages...but we are getting off topic again. Please try to focus :)

As for the GPU, it's not using the same shading API, and the actual component balance is a lot different. It also leverages its RAM differently.

Most Wii U games need to make extensive use of the 32 MB of eDRAM. The PS3/360 don't have that, so those games' code was written to their strengths and bottlenecks. When you try to port this to the Wii U, the entire codebase would have to be rebalanced. If most of the texture data was kept in main system memory, then you would have to completely throw all of that code out and rewrite it to leverage the Wii U's eDRAM.
That takes time and likely costs more. You could bring a game up to 1080p this way, but what dev is going to spend that kind of money?

The fact of the matter is, most games would have to be rebuilt almost from scratch to output at 1080p on the Wii U if they were 720p originally. On the other hand, devs can easily cram the code in to get mostly equivalent performance for under $1 million (this was the number given for one of the early Wii U ports).

Are you sure it is not using the same shading API? If that is true, that would be phenomenally boneheaded. People know HLSL/Cg and its variants and asking people to learn something completely new when they probably could have asked AMD for a functional shader compiler would be utter madness.

Do we know if all texture data must live inside the eDRAM for the GPU to see it? IIRC this was true on GameCube/Wii, but it was a cache, so texture management was automatic.

If this is true: http://www.vgleaks.com/wii-u-memory-map/

Then textures obviously are stored in main memory, but it's possible that there is still a GPU limitation where they must get copied to MEM1. Perhaps the reason why the graphics libraries take over all of MEM1 is because they have implemented a transparent cache in the graphics libraries. This would not be optimal in all cases, but the alternative would be some PS2-style manual management for textures and that would probably mean a lot of work for someone to port their engine over...I suspect that Nintendo has done something more automagic if the leak is accurate as it really seems to imply MEM1 is off-limits to developers, but maybe they will let people manage MEM1 themselves if they are working on a Wii U exclusive...

Of course, if something like MEM1 management is automatic, that would mean that the amount of code that needs to be rewritten would be relatively minor. But maybe you are right, you'd still have to tweak things to optimize for the cache, even if the details were hidden from you.
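
Something like this is the kind of "transparent cache" I have in mind - a very rough LRU sketch, entirely hypothetical and nothing like the real Wii U graphics library:

    #include <cstddef>
    #include <cstdint>
    #include <list>
    #include <unordered_map>
    #include <utility>

    // Hypothetical: keep recently used textures resident in fast MEM1, silently
    // evicting the least recently used ones when space runs out.
    class Mem1TextureCache {
        std::size_t capacity_;
        std::size_t used_ = 0;
        std::list<std::uint32_t> lru_;  // most recently used at the front
        std::unordered_map<std::uint32_t,
                           std::pair<std::list<std::uint32_t>::iterator, std::size_t>> entries_;
    public:
        explicit Mem1TextureCache(std::size_t capacityBytes) : capacity_(capacityBytes) {}

        // Called before a draw: make sure this texture is resident.
        void touch(std::uint32_t id, std::size_t bytes) {
            auto it = entries_.find(id);
            if (it != entries_.end()) {                      // already resident: just bump it
                lru_.erase(it->second.first);
                lru_.push_front(id);
                it->second.first = lru_.begin();
                return;
            }
            while (used_ + bytes > capacity_ && !lru_.empty()) {
                std::uint32_t victim = lru_.back();          // evict the coldest texture
                used_ -= entries_[victim].second;
                entries_.erase(victim);
                lru_.pop_back();
            }
            // (a real implementation would DMA the texture from MEM2 into MEM1 here)
            lru_.push_front(id);
            entries_[id] = {lru_.begin(), bytes};
            used_ += bytes;
        }
    };

    int main() {
        Mem1TextureCache cache(8u * 1024 * 1024);
        cache.touch(1, 4u * 1024 * 1024);
        cache.touch(2, 4u * 1024 * 1024);
        cache.touch(3, 4u * 1024 * 1024);  // evicts texture 1, the least recently used
        return 0;
    }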

To make effective use of the hardware in a port would be too expensive see Skyrim PS3 for details.
http://kotaku.com/5885358/why-skyrim-didnt-play-nice-with-the-ps3

This game ran great on the 360 because it was the lead platform. It ran absolutely horribly on the PS3 even though they are so similar.

Same with Bayonetta on the PS3 and 360.
http://www.lensoftruth.com/head2head-bayonetta/


This is why I disregard 360/PS3 ports to the Wii U in comparison. No dev is going to drop the money to reprogram to use the hardware design to its strengths.

Now, I expect games ported from the Xbox One to the Wii U to show much better performance on the Wii U's end than ports from the 360, because the hardware is more similar.

They both have 32 MB of embedded RAM on the GPU. They are both GPGPU-capable. They both use low-clocked, out-of-order CPUs that are made to run efficient code as opposed to brute force.

Ports of last gen games will always look and run mostly the same as they did on the 360 and PS3. That is probably never going to change. Only games that are built with the Wii U as its target lead platform will run at 1080p.

I don't know if I would call the PS3 and 360 "similar." They are similar in some ways and not in others. IMHO the WiiU is closer to the 360 than the PS3 is.

My understanding was that the main difference for Skyrim was that there is somewhat less memory to use on the PS3, making it run out of memory sooner.

I don't really know what happened with Bayonetta; most likely the engine wasn't parallelized in a way that worked well with the SPUs, and they targeted the 360 GPU as a baseline (didn't another team port it over to PS3?)

I am not so optimistic about ports from Xbox One to WiiU. Things on the GPU might not be so ugly as long as the min-spec PC version is low enough. The Xbox One CPU is probably at least 2-3x faster than the CPU in the WiiU all told. Moving to WiiU might bottleneck gameplay code, which doesn't scale nicely (and IMHO there is already evidence of this bottlenecking on current gen ports). We went through this discussion on the CPU thread; I suspect that the WiiU will get either completely different games ("Call of Duty: Gaiden") or nothing at all once the PS3 and 360 stop getting cross-gen games.

You all are gonna hate me for saying this but perversely enough WiiU's best hope might be for the Vita to succeed, together they might represent a large enough target for 3rd parties to aim for. We saw this last gen when quite a few 3rd parties lumped the Wii and PSP together.
 

wsippel

Banned
Out of curiosity, can a SIMD core be placed next to L1, L2, or L3 cache while on the same die with paired singles?
Huh? The physical placement doesn't matter. If your question is "can you have both paired singles and SIMD (VMX) in a single core?", then the answer would be "I guess you could". :)
 

bomblord

Banned
The grass and roof textures are low res and all the buildings look like boxes. The walls are also pretty flat.

Yeah, I really do not see it; the grass textures maybe, but the roof textures are very crisp (and aliased, but that's not the textures' fault). Almost all of those textures look to be pretty standard for HD resolutions.
 
Huh? The physical placement doesn't matter. If your question is "can you have both paired singles and SIMD (VMX) in a single core?", then the answer would be "I guess you could". :)

Yeah, you answered it. Forget about placement, I just wonder if Nintendo had IBM put SIMD on the core with the largest cache.
 
If the PS3/360 games are doing heavy floating point lifting, then you are correct, the CPU in the WiiU cannot keep up and you're pretty much screwed barring a total rewrite. (IMHO this is why there is no Frostbite/UE4 on Wii U, etc.)

Taking advantage of the other benefits you mentioned on the WiiU's CPU (out of order execution, etc.) requires...recompiling for the target processor.

I'm still not convinced that the WiiU's CPU can outperform a current gen CPU on a typical game workload due to its low clock speed and weak floating point performance negating any architectural advantages...but we are getting off topic again. Please try to focus :)



Are you sure it is not using the same shading API? If that is true, that would be phenomenally boneheaded. People know HLSL/Cg and its variants and asking people to learn something completely new when they probably could have asked AMD for a functional shader compiler would be utter madness.

Do we know if all texture data must live inside the eDRAM for the GPU to see it? IIRC this was true on GameCube/Wii, but it was a cache, so texture management was automatic.

If this is true: http://www.vgleaks.com/wii-u-memory-map/

Then textures obviously are stored in main memory, but it's possible that there is still a GPU limitation where they must get copied to MEM1. Perhaps the reason why the graphics libraries take over all of MEM1 is because they have implemented a transparent cache in the graphics libraries. This would not be optimal in all cases, but the alternative would be some PS2-style manual management for textures and that would probably mean a lot of work for someone to port their engine over...I suspect that Nintendo has done something more automagic if the leak is accurate as it really seems to imply MEM1 is off-limits to developers, but maybe they will let people manage MEM1 themselves if they are working on a Wii U exclusive...

Of course, if something like MEM1 management is automatic, that would mean that the amount of code that needs to be rewritten would be relatively minor. But maybe you are right, you'd still have to tweak things to optimize for the cache, even if the details were hidden from you.



I don't know if I would call the PS3 and 360 "similar." They are similar in some ways and not in others. IMHO the WiiU is closer to the 360 than the PS3 is.

My understanding was that the main differences for Skyrim was that that there is somewhat less memory to use on PS3, making it run out of memory sooner.

I don't really know what happened with Bayonetta, most likely the engine wasn't parallelized in a way that worked well with the SPUs, and they targetted the 360 GPU as a baseline (didn't another team port it over to PS3?)

I am not so optimistic about ports from Xbox One to WiiU. Things on the GPU might not be so ugly as long as the min-spec PC version is low enough. The Xbox One CPU is probably at least 2-3x faster than the CPU in the WiiU all told. Moving to WiiU might bottleneck gameplay code, which doesn't scale nicely (and IMHO there is already evidence of this bottlenecking on current gen ports). We went through this discussion on the CPU thread, I suspect that the WiiU will get either completely different games ("Call of Duty: Gaiden") or nothing at all once the PS3 and 360 stops getting cross-gen games.

You all are gonna hate me for saying this but perversely enough WiiU's best hope might be for the Vita to succeed, together they might represent a large enough target for 3rd parties to aim for. We saw this last gen when quite a few 3rd parties lumped the Wii and PSP together.

You must have missed wsippel's post on Bink 2 running 1080p30 and 720p60. There's more to the chip than what we think we know.
 

StevieP

Banned
Nintendo's design input would have extended as far as testing, feedback, and setting performance goals. The architecture, chip layout, hands-on engineering, technology: ALL AMD. Nintendo DO NOT have the capabilities to do this.

You are kidding yourself if you think Nintendo designed any of the major chips in any of their consoles.

I know this is your schtick nowadays, but it's not accurate. They both contracted out and did some stuff internally for this one. What that stuff is we may never know until far in the future, but it wasn't all AMD.
 

Lord Error

Insane For Sony
Anywho, I find this pointless; I agree to disagree. I'm not sure, someone pinged me on messenger last week regarding it; I reckon it's still 1080p, but it was downgraded from a 60 frames per second target to a 30 frames per second one for single player whilst keeping that goal for multiplayer only.

Not a resolution downgrade but a framerate one; still, for a first person shooter it means more latency; not the best tradeoff in my book, I'm assuming they had to do it.
60FPS was not a target for the game originally. First time they showed it, it was 30FPS, and they said as much. They have said later that their target for multiplayer would be 60FPS, so if they deliver that, it would mean they've in fact upgraded it from what they originally planned.
 

Mildudon

Member
Shin'en has added some effects to their Wii U game: occlusion culling and cascade-filtered soft shadows.

 
Yeah, you answered it. Forget about placement, I just wonder if Nintendo had IBM put SIMD on the core with the largest cache.

I would think it's possible, but I also would think that we would have heard something about it if that was the case.

In regards to all the Bayo 2 resolution talk, I don't trust reviewers. You would think these would be the people to have the eyes and knowledge for this sort of thing, but they don't. Especially in this day and age, where the quality of gaming media has dropped to rock bottom for the most part. I'll wait till the game is out to make any kind of claims on its resolution, or any other game's. That said, if it's 720p60fps it's a beautiful game and quite the step up from Bayo 1; if it's 1080p then it's a holy-shit step up.
 
60FPS was not a target for the game originally. First time they showed it, it was 30FPS, and they said as much. They have said later that their target for multiplayer would be 60FPS, so if they deliver that, it would mean they've in fact upgraded it from what they originally planned.
That I don't know, nor what they planned originally. I guess I probably could find out via Google, but truth be told they showed a beta, and that was probably 30 frames per second at that point even if they were still looking at 60 frames per second prospects. I know, though, that it's been touted around lately as a downgrade; and even if it isn't, it's still telling for a launch game if it chooses to go to those lengths.

But fact is, yes, all Killzones have been 30 frames per second and suffered input lag against other first-person shooters for it. If it was a deliberate choice by them on a new platform, I'd say they're bollocks and putting graphical prowess above gameplay at this point.

Megabytecr also said as much so there's that notion at least. I don't really pay much attention to Killzone because I never found it appealing.
 

tipoo

Banned
Nintendo didn't design it either.

They don't have the skill, resources, or expertise to design a chip. I don't think you understand just how complex designing a GPU would be.

Nintendo's design input would have extended as far as testing, feedback, and setting performance goals. The architecture, chip layout, hands-on engineering, technology: ALL AMD. Nintendo DO NOT have the capabilities to do this.



Proves nothing. AMD can't speak on matters their client hasn't permitted them to, and we all know Nintendo don't like talking hardware and specs.



No, they have 30 years of partnering with vendors.

IBM for CPUs, Sony for audio chips, AMD, Silicon Graphics, and ArtX for GPUs, Panasonic for optical lenses, etc.

Companies like Foxconn for PCB design and assembly, Renesas and Global for chip fabrication.

You are kidding yourself if you think Nintendo designed any of the major chips in any of their consoles.

You're wasting your time trying to convince him that the chip design was actually left to the chip designers; I've tried. Nintendo setting goalposts and tweaking functionality != Nintendo designing the bulk of the chip, but some people will never believe that.


You are only seeing the surface.

The 360/PS3 CPUs were made to do more floating point in code; the Wii U CPU, on the other hand, can run circles around them in integer performance. If you port 360 or PS3 code (which leverages floating point) to the Wii U, then it will run horribly. On the other hand, if that code were rewritten with integer performance in mind, then the Wii U CPU would outperform the 360 and more than likely even the Cell. It is also out-of-order where they were in-order, which meant more stalls on misses for them.
.

Yes, and unfortunately the bulk of game code is and has to be floating point heavy, not integer heavy. You can't just "optimize" code that needs FP to use integer.

Nintendo actually designed several low level features for their previous systems, including CPU and GPU features. They have in-house engineers or work with contractors who then work with their partners on implementing the stuff they came up with.

No doubt they add a lot and do a lot of work on it. But the debate in the last few pages was on who does the bulk of the engineering work, which is assuredly AMD. If Nintendo could make better GPUs, they would.
 

krizzx

Junior Member
If the PS3/360 games are doing heavy floating point lifting, then you are correct, the CPU in the WiiU cannot keep up and you're pretty much screwed barring a total rewrite. (IMHO this is why there is no Frostbite/UE4 on Wii U, etc.)

Taking advantage of the other benefits you mentioned on the WiiU's CPU (out of order execution, etc.) requires...recompiling for the target processor.

I'm still not convinced that the WiiU's CPU can outperform a current gen CPU on a typical game workload due to its low clock speed and weak floating point performance negating any architectural advantages...but we are getting off topic again. Please try to focus :)
That is incorrect. EA later came back and stated that it "can" run the Frostbite engine. It was never true that it couldn't, just as with CryEngine 3 and the Dead Island engine, which both had people later come back and say the Wii U could run them. In fact, they had Crysis 3 up and running, but EA themselves blocked its release. That was a matter of issues between the companies, not hardware strength.

Also, don't misunderstand the Wii U CPU not being geared towards floating point as it not being able to handle it. Most ports still run better on the Wii U overall. The CPU is simply much better with integers; its integer performance is better than its floating point, but its floating point is in no way bad.

The Wii U CPU is overall more efficient than the others and especially x86 CPUs. It doesn't have brute force power, but that is because if you program it right, it doesn't need it.
Are you sure it is not using the same shading API? If that is true, that would be phenomenally boneheaded. People know HLSL/Cg and its variants and asking people to learn something completely new when they probably could have asked AMD for a functional shader compiler would be utter madness.

Do we know if all texture data must live inside the eDRAM for the GPU to see it? IIRC this was true on GameCube/Wii, but it was a cache, so texture management was automatic.

If this is true: http://www.vgleaks.com/wii-u-memory-map/

Then textures obviously are stored in main memory, but it's possible that there is still a GPU limitation where they must get copied to MEM1. Perhaps the reason why the graphics libraries take over all of MEM1 is because they have implemented a transparent cache in the graphics libraries. This would not be optimal in all cases, but the alternative would be some PS2-style manual management for textures and that would probably mean a lot of work for someone to port their engine over...I suspect that Nintendo has done something more automagic if the leak is accurate as it really seems to imply MEM1 is off-limits to developers, but maybe they will let people manage MEM1 themselves if they are working on a Wii U exclusive...

Of course, if something like MEM1 management is automatic, that would mean that the amount of code that needs to be rewritten would be relatively minor. But maybe you are right, you'd still have to tweak things to optimize for the cache, even if the details were hidden from you.



I don't know if I would call the PS3 and 360 "similar." They are similar in some ways and not in others. IMHO the WiiU is closer to the 360 than the PS3 is.

My understanding was that the main differences for Skyrim was that that there is somewhat less memory to use on PS3, making it run out of memory sooner.

I don't really know what happened with Bayonetta, most likely the engine wasn't parallelized in a way that worked well with the SPUs, and they targetted the 360 GPU as a baseline (didn't another team port it over to PS3?)

I am not so optimistic about ports from Xbox One to WiiU. Things on the GPU might not be so ugly as long as the min-spec PC version is low enough. The Xbox One CPU is probably at least 2-3x faster than the CPU in the WiiU all told. Moving to WiiU might bottleneck gameplay code, which doesn't scale nicely (and IMHO there is already evidence of this bottlenecking on current gen ports). We went through this discussion on the CPU thread, I suspect that the WiiU will get either completely different games ("Call of Duty: Gaiden") or nothing at all once the PS3 and 360 stops getting cross-gen games.

You all are gonna hate me for saying this but perversely enough WiiU's best hope might be for the Vita to succeed, together they might represent a large enough target for 3rd parties to aim for. We saw this last gen when quite a few 3rd parties lumped the Wii and PSP together.

For the specifics of how the Wii U memory works, you would need to talk to one of the people in this thread who are more familiar with that field, like blu.

I only know what I have learned from their analysis and from comments by devs like Shin'en. One thing is certain: no dev has ever mentioned a memory bottleneck like people were insisting there would be going by the RAM clocks. There has only been praise for the Wii U memory. It's the most praised feature, to be honest.

As for the Wii U's survival needing the Vita or it not getting ports, that is ridiculous.

Ports are what's hurting the Wii U. Everyone is looking at them and misjudging the capabilities. The best games on Nintendo hardware are always its exclusives. As long as it's still getting good exclusive games, it will be in no danger.

The PS3 version of Bayonetta was a shoddy port done by Sega, not Platinum.

That is the point. A port is a port. It will not run better on one console than it did on the lead platform unless the game is built from the ground up for that other console as well, effectively doubling the cost of development.
 

krizzx

Junior Member
any particular reason why?

There isn't one.

This is just what most hardware is geared toward these days. It makes the code the most portable. That doesn't matter for exclusives though, which is why the Wii U has so many 1080p 60fps retail exclusives.
 

tipoo

Banned
any particular reason why?

Just the type of calculation that has to be done. You can't magically change one to the other, games *need* floating point.

There isn't one.

This is just what most hardware is geared toward these days. It makes the code the most portable. That doesn't matter for exclusives though, which is why the Wii U has so many 1080p 60fps retail exclusives.

What would I know, I'm just a programmer. I could have sworn we explained fp vs integer to you a while back and you acquiesced saying you understood now, and now you're back to saying they are magically interchangeable.

1080p 60fps exclusives? Beyond 2.5D platformers, what is that?
 
Just the type of calculation that has to be done. You can't magically change one to the other, games *need* floating point.

I think he meant, how would that specifically impact a particular game scenario.

Saying games "need" floating point makes it sound like the Wii U can only perform operations on integers. It might not be as powerful operating on floating point numbers, but it certainly can do them. So then you get to the realization that, by optimizing between integers and floating point numbers, there is plenty that it is capable of if one made the effort.

That's the real issue though... people think that this effort is not worth the time and/or money, rightfully or wrongfully.
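
For what it's worth, the classic way some of that work gets shifted onto the integer units is fixed point - a small illustration only; as others point out, plenty of game math genuinely needs real floats:

    #include <cstdint>
    #include <cstdio>

    // 16.16 fixed point: integer adds/multiplies stand in for float math.
    using fixed = std::int32_t;
    constexpr fixed to_fixed(double v)  { return static_cast<fixed>(v * 65536.0); }
    constexpr double to_double(fixed v) { return v / 65536.0; }
    inline fixed fx_mul(fixed a, fixed b) { return static_cast<fixed>((static_cast<std::int64_t>(a) * b) >> 16); }

    int main() {
        fixed speed = to_fixed(1.5);
        fixed dt    = to_fixed(0.016);                 // roughly one 60 fps frame, in seconds
        fixed step  = fx_mul(speed, dt);               // integer multiply + shift, no FPU involved
        std::printf("step = %f\n", to_double(step));   // ~0.024
        return 0;
    }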
 

Jrs3000

Member
I've read what you speak of. I am making an assumption that there's no reason to do 720p if you can do 1080p. You are correct that they didn't state it in the exact manner in which I did.

In a lot of cases, 720p and 1080p are indiscernable (especially after a certain amount of feet back from the display). But that isn't really what I am arguing here. Here's the quote from Shinen:


Using some pretty straight-forward logic here, you can assume those free GPU cycles were not free when running at 1080p. Since they were available in 720p, they used it for post-processing effects.

Ergo, those effects were not possible in 1080p at that time.

Your logic is fine; I just had an issue with you stating the info in a different manner than they stated it. Also, consider the things they said in the other interviews they've done, as far as the game not having a lot of optimization, resources being free, and it mainly using a single core. Do you think that if they had worked more on it, 1080p with whatever extras they added could have been reached?
 