
WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis


JordanN

Banned
Agni's Philosophy ran on a high-end Intel CPU and an Nvidia GTX 680. If you think the Xbox One and PS4 will match them, I have a bridge to sell you in Brooklyn.
They said they're going to show another Agni's demo, not running on a PC, at this coming E3.

Will that version match it? I dunno, but the point is clear. What happens when the Wii U is not the one running it?

Squenix also has a history of pushing their games technically, so I'm not sure they'll even consider the lowest-end tech.
 

prag16

Banned
I agree that it is partly a content problem, but that is beyond this thread. Anyway, I think power helps sell devices. That and features, but without power, you can't promote newer, better features.

It's like if Apple tried to sell an iPhone 6 that was only a half step above the iPhone 5 in hardware. Consumers need to see the benefits. And not being able to sell the idea of the GamePad really causes a problem for Nintendo.

Wat... all of Apple's recent refreshes have been "half steps".. lol
 

Schnozberry

Member
They said they're going to show another Agni's demo, not running on a PC, at this coming E3.

Will that version match it? I dunno, but the point is clear. What happens when the Wii U is not the one running it?

I don't know? Smug gamer outrage that it doesn't look as good as the original demo? Luminous Engine is supposedly widely scalable, so it will run in some form on Wii U. Silicon Studio already confirmed they are working on the hardware.
 
Wat... all of Apple's recent refreshes have been "half steps".. lol

It would explain why more people are going Android with recent iterations. Well, that AND the fact that Android in general is a much larger base now than it ever was.

I personally think Apple sells shit for premium cost, but that's for another thread. ;]
 

JordanN

Banned
I don't know? Smug gamer outrage that it doesn't look as good as the original demo? Luminous Engine is supposedly widely scalable, so it will run in some form on Wii U. Silicon Studio already confirmed they are working on the hardware.
Scalability has only been one part of this equation. I already talked about the pains of downporting, so I'm not really sure if clinging onto "scalability" is a good thing.

Either way, my gripe comes from the Wii U continuing to miss out on demos/games despite the claim that it would take something 2 or 3x more powerful than the PS4 for that to happen. So far, scaling doesn't seem to refute this.
 

Schnozberry

Member
Scalability has only been one part of this equation. I already talked about the pains of downporting, so I'm not really sure if clinging onto "scalability" is a good thing.

Either way, my gripe comes from the Wii U continuing to miss out on demos/games despite the claim that it would take something 2 or 3x more powerful than the PS4 for that to happen. So far, scaling doesn't seem to refute this.

Do you believe the only reason Wii U is not getting all multi-platform releases is the relative weakness of its hardware? Or do you not understand that the industry is in the middle of a difficult transition, and that current business models are becoming outmoded and will not likely last another five years? Third party publishers are making low-cost, low-risk bets at the moment, because more expensive failures would mean a gaming dark age. The bad start for Wii U means that it is a high-risk situation until Nintendo proves otherwise. That has more to do with the lack of third party effort than technical limitations.

Edit: Also, take a look at the indie situation. People with smaller budgets and ambitious ideas are being welcomed and supported by Nintendo. The response is great. There are over 50 indie games in development for Wii U, and a lot of the stuff coming out of Ideame and what not has been very positive regarding the hardware and the programming environment. Things have improved significantly on that front since launch, it seems.
 

Absinthe

Member
It would explain why more people are going Android with recent iterations. Well, that AND the fact that Android in general is a much larger base now than it ever was.

I personally think Apple sells shit for premium cost, but that's for another thread. ;]

Saying that a company sells a product for a premium (justified or not) is one thing, but selling "shit" for a premium in regards to Apple? Come on now. Let me know when you get that thread up and running and we can continue this conversation there. Sorry for the slight thread derailment.
 
Saying that a company sells a product for a premium (justified or not) is one thing, but selling "shit" for a premium in regards to Apple? Come on now. Let me know when you get that thread up and running and we can continue this conversation there. Sorry for the slight thread derailment.

youropinionsarewrong.jpeg
 

JordanN

Banned
Do you believe the only reason Wii U is not getting all multi-platform releases is the relative weakness of its hardware? Or do you not understand that the industry is in the middle of a difficult transition, and that current business models are becoming outmoded and will not likely last another five years? Third party publishers are making low-cost, low-risk bets at the moment, because more expensive failures would mean a gaming dark age. The bad start for Wii U means that it is a high-risk situation until Nintendo proves otherwise. That has more to do with the lack of third party effort than technical limitations.
Sheesh, I don't even want to go there.

Hardware is obviously a part of this, since we had demos shown before the Wii U even began selling. Someone claims PS4/XBO cannot bury the Wii U, and now there are claims of "scaling", yet the end result is still no demos/games for Wii U.
 
I would say this is what we know. The ALU count is pretty much 160-320. The TMU count is either 8 or 16 and very likely 8 ROPs. Whether the J blocks are TMUs or Interpolators, there seems to be some kind of modification to them. There also seems to be an extraordinary amount of Constant and Instruction cache in Latte compared to other AMD GPUs. Latte retains the 1MB and 2MB eMemories from Flipper/Hollywood. The ARM chip was located. There seems to be a Southbridge in the chip.
When you say that the amount of Constant and Instruction cache in Latte is extraordinary, do you refer to the big pools of eDRAM or to other parts of the GPU?
Because no matter how I look at it, at first glance I swear that the SRAM blocks of the Latte design are a lot bigger than the ones on Brazos, for example.
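For a rough sense of scale on that 160-320 ALU estimate, here's a quick back-of-envelope, assuming the often-cited ~550 MHz GPU clock (the clock figure is an assumption here, not something established above) and one multiply-add per ALU per clock, VLIW5-style:

Code:
# Rough peak shader throughput from the ALU estimates above.
# Assumes a ~550 MHz clock (often-cited, not confirmed here) and
# 2 FLOPs (one fused multiply-add) per ALU per clock.
CLOCK_HZ = 550e6
FLOPS_PER_ALU_PER_CLOCK = 2

for alus in (160, 320):
    gflops = alus * FLOPS_PER_ALU_PER_CLOCK * CLOCK_HZ / 1e9
    print(f"{alus} ALUs -> {gflops:.0f} GFLOPS peak")

# 160 ALUs -> 176 GFLOPS peak
# 320 ALUs -> 352 GFLOPS peak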
 

Schnozberry

Member
Sheesh, I don't even want to go there.

Hardware is obviously a part of this, since we had demos shown before the Wii U even began selling. Someone claims PS4/XBO cannot bury the Wii U, and now there are claims of "scaling", yet the end result is still no demos/games for Wii U.

Someone claiming something is not a reason to put those words in everyone else's mouth. I said that a particular engine is supposed to be scalable to different hardware. Luminous Engine is scalable down to mobile devices. Agni's Philosophy is not a game yet; it is an engine demo that was shown on what was bleeding-edge PC gaming hardware at the time. When a game is actually made using the engine, it will likely appear on a bunch of platforms and scale across different hardware. Whether Square Enix decides to put that game on Wii U will likely be a business decision weighing all the factors available at the time.
 

Schnozberry

Member
Did we determine just how much more powerful the GPU is than last generation machines? 25%, 50%?

Nope. No one is certain of the core config just yet, so it's hard to tell. It's at least on par, probably slightly better, but it has some newer features that should make certain graphical effects easier to implement with less of a performance penalty.
 

Log4Girlz

Member
Nope. No one is certain of the core config just yet, so it's hard to tell. It's at least on par, probably slightly better, but it has some newer features that should make certain graphical effects easier to implement with less of a performance penalty.

So this thing is geared towards last gen engines at 720p, 30fps with some better lighting, perhaps some tessellation thrown in?
 
So this thing is geared towards last gen engines at 720p, 30fps with some better lighting, perhaps some tessellation thrown in?
No. I would say that it's geared toward modern and future engines with 720p in mind. Modern architecture and modest power (though really high efficiency).
 

StevieP

Banned
So this thing is geared towards last gen engines at 720p, 30fps with some better lighting, perhaps some tessellation thrown in?

After reading the Naughty Dog and CoD threads, the word "engine" makes me angry. lol

It's geared for any renderer that supports the shader model that the Wii U sports (which is greater than that of last gen) but with a power envelope similar to the previous generation of consoles. That can mean whatever you want it to mean. It will run whatever you scale its way, unlike the Wii. The content on display will be dependent on its raw power, sure, if that's what you meant.
 

JordanN

Banned
Someone claiming something is not a reason to put those words in everyone else's mouth. I said that a particular engine is supposed to be scalable to different hardware. Luminous Engine is scalable down to mobile devices. Agni's Philosophy is not a game yet; it is an engine demo that was shown on what was bleeding-edge PC gaming hardware at the time. When a game is actually made using the engine, it will likely appear on a bunch of platforms and scale across different hardware. Whether Square Enix decides to put that game on Wii U will likely be a business decision weighing all the factors available at the time.
It wasn't. But if you support those claims I'll treat you as such.

I already said why scaling doesn't mean much. You still have to put resources in downgrading everything. This wouldn't be the case if Wii U wasn't 8x weaker.

You also miss the point of Agni's. It is showing up on hardware that's not a PC.

Using business as an excuse seems too heavy-handed, since making games for PS4/XBO is also a business.
 

Log4Girlz

Member
No. I would say that it's geared toward modern and future engines with 720p in mind. Modern architecture and modest power (though really high efficiency).

If it can do modern/future engines at 720p, why isn't it running previous engines at 1080p? Seems to me it is not geared towards future engines at all.
 

Schnozberry

Member
So this thing is geared towards last gen engines at 720p, 30fps with some better lighting, perhaps some tessellation thrown in?

I don't know if it was geared towards last gen engines or a particular resolution/frame rate. It seems to have been designed around Nintendo's ideas for the controller, backwards compatibility, and being as power efficient as humanly possible. Outside of that would be more speculation on my part. We'll know in 11 days based on Nintendo's first party output what we can expect going forward. Based on what people have learned and speculated in this thread, I would think it's in between the 25% and 50% numbers you proposed in your previous comment.
 
Here is why I think porting to Wii U will be hard
Warning, stupid multipliers coming up!

If the Wii U is 2x the strength of the 360 (360 = 250 GFLOPS, Wii U = ~350 + unknown hardware, let's say 500, just for a solid number),
then the Xbone is still over twice the Wii U (1.23 / 0.5 = 2.46), plus the efficiencies of HSA/APU.

Let's factor in ram differences, capacity and bandwidth...

the gap between the 360 and Wii U is smaller than the gap between the Wii U and Xbone.

This is why porting will be difficult.

360 x 2 = Wii U
Wii U x 2.5-3 = Xbone

Of course... without solid figures I can only assume that the Wii U is about 2x as powerful as the 360.

/Stupid multipliers.
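The same multipliers as a quick sanity-check script; every input is a rough peak-GFLOPS guess from the post above, not a measurement:

Code:
# "Stupid multipliers" from above, as a quick ratio check.
# All figures are rough peak-GFLOPS guesses, not measurements.
GFLOPS = {
    "360":   250,   # quoted figure for Xenos
    "Wii U": 500,   # pure guess ("just for a solid number")
    "Xbone": 1230,  # 1.23 TFLOPS
}

print(f"Wii U vs 360:   {GFLOPS['Wii U'] / GFLOPS['360']:.2f}x")
print(f"Xbone vs Wii U: {GFLOPS['Xbone'] / GFLOPS['Wii U']:.2f}x")

# Wii U vs 360:   2.00x
# Xbone vs Wii U: 2.46x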
The biggest thing separating the WiiU from the Durango is the RAM. The Durango still has 5GB of faster RAM (though IMO RAM speed isn't quite as important past a certain point at the level we're talking about) versus the Wii U's 1GB. I'm working with stuff now for our workstations, and cramming as much data as we can into usable RAM is single-handedly the biggest task we face.

I think it'd almost be easier to recode CPU tasks from x86 to PPC than to cut the RAM used by games to just 1/5th, since games really do use a lot of RAM for assets nowadays.

The CPU, however, just has to be capable of passing the code at a fast enough rate to keep up with the game. Changing certain code from FP to Int probably kills any performance gain.

Graphically, everything can be scaled down. Using multipliers based on shader ALUs isn't a great basis.
 

Schnozberry

Member
It wasn't. But if you support those claims I'll treat you as such.

I already said why scaling doesn't mean much. You still have to put resources in downgrading everything. This wouldn't be the case if Wii U wasn't 8x weaker.

You also miss the point of Agni's. It is showing up on hardware that's not a PC.

Using business as an excuse seems too heavy-handed, since making games for PS4/XBO is also a business.

I don't even think you're reading what I'm saying. Have a good day.
 
If it can do modern/future engines at 720p, why isn't it running previous engines at 1080p? Seems to me it is not geared towards future engines at all.
Because this is not how it works. Modern engines are for modern architectures, while doing something at 1080p requires power alone.

In other words, modern means WHAT you can do, while powerful translates into HOW MUCH you will be able to do.
 

JordanN

Banned
I don't even think you're reading what I'm saying. Have a good day.
Bailing out, really?

There was no need to bring up scaling. I already said this wasn't about an engine, but the tech. Did you even know what you were refuting? Because I specifically outlined from the beginning that the engine being on Wii U has little to do with this.

All this talk about PCs and business is as far from the point as possible.
 

Log4Girlz

Member
Because this is not how it works. Modern engines are for modern architectures, while doing something at 1080p requires power alone.

In other words, modern means WHAT you can do, while powerful translates into HOW MUCH you will be able to do.

So the Wii U won't do much due to its power level, correct? I hope whatever Retro is working on is a good indicator of what the Wii U can do. I hope it's not a cutesy game.
 
So the Wii U won't do much due to its power level, correct? I hope whatever Retro is working on is a good indicator of what the Wii U can do. I hope it's not a cutesy game.
It's not that simple either. The "how much" I was speaking of was in terms of ops per second, but that doesn't have to translate into a "how much" in terms of visual quality, because that depends entirely on what anyone expects AND whatever new techniques can be done on the WiiU to do "more with less".

First things first, we can't possibly know how much the WiiU will be able to do, since we still don't know WHAT it can do. We should see interesting things this E3, thanks to the fact that the games being developed for the WiiU will surely be much more suited to its architecture than the ones we've seen since launch (at least the ones exclusive to the platform).
 

ozfunghi

Member
If it can do modern/future engines at 720p, why isn't it running previous engines at 1080p? Seems to me it is not geared towards future engines at all.

I believe the issue is that WiiU's memory setup was envisioned with mainly 720p in mind. Even the latest/modern GPU chips won't be doing high resolutions without the needed memory.
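For context, here's a quick sketch of raw framebuffer footprints against that 32MB eDRAM pool, assuming a plain RGBA8 color buffer plus a 32-bit depth/stencil buffer and no MSAA (MSAA or extra render targets multiply these numbers quickly):

Code:
# Rough framebuffer footprint vs. the 32MB eDRAM pool.
# Assumes RGBA8 color + 32-bit depth/stencil, no MSAA, single target.
BYTES_PER_PIXEL = 4

for name, w, h in [("720p", 1280, 720), ("1080p", 1920, 1080)]:
    color = w * h * BYTES_PER_PIXEL
    depth = w * h * BYTES_PER_PIXEL
    print(f"{name}: color+depth = {(color + depth) / 2**20:.1f} MB of 32 MB")

# 720p: color+depth = 7.0 MB of 32 MB
# 1080p: color+depth = 15.8 MB of 32 MB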
 

Log4Girlz

Member
I believe the issue is that WiiU's memory setup was envisioned with mainly 720p in mind. Even the latest/modern GPU chips won't be doing high resolutions without the needed memory.

Yeah, it is in line with Nintendo's philosophy: aim for the lowest performance you can get away with. There is a sound logic to this philosophy, but whether it pays dividends remains to be seen. Currently, it doesn't appear to be doing so, while with the Wii it was a wildly successful strategy, one that had always worked with its handhelds.
 
The problem the WiiU has is that it's not a brute-force console. You can't throw anything current-gen at it without tailoring data to its memory architecture, and that's what most here equate power to.
 

69wpm

Member
Yeah, it is in line with Nintendo's philosophy: aim for the lowest performance you can get away with. There is a sound logic to this philosophy, but whether it pays dividends remains to be seen. Currently, it doesn't appear to be doing so, while with the Wii it was a wildly successful strategy, one that had always worked with its handhelds.

Well, Rayman is running at 1080p and 60fps. How can the Ubisoft wizards do this and other developers not? Does anybody know what kind of engine they use? Something new, maybe?
 

Schnozberry

Member
And it will probably run at 1080p60 on PS360 too. What did Origins run at?

Origins was 1080p60 as far as I know on platforms that support the resolution. Ancel said the Wii U supported more detailed textures compared to the PS3 and 360 due to having more RAM, but that's it as far as I know.
 
Please correct me if I am wrong, but from reading the thread I think Wii U strengths are:

Pros: Modern features like depth of field, lighting, texture compression, tessellation, and more RAM.

Cons: powerwise, more like PS360. This means 720p, 30fps.

Now compared to XB1 and PS4 what could Wii U lack in a hypothetical port.

1080p, 60fps, more particle effects, AI advancements (IMO this would take time), more AA, more enemies on screen. All of these seem like not-so-bad trade-offs compared to the Wii vs. PS360 situation. It will be easier on the eye and a compromise that will seem easier to accept as a consumer.

Chime in, what am I missing?
 

Hermii

Member
It will be interesting to see how Watch Dogs compares across the different platforms. Since it is a game developed for modern architectures, maybe the Wii U version will show notable improvements over PS360. It will be cool to see the differences between PS360, Wii U, Xbone and PS4.
 
Please correct me if I am wrong, but from reading the thread I think Wii U strengths are:

Pros: Modern features like depth of field, lighting, texture compression, tessellation, and more RAM.

Cons: powerwise, more like PS360. This means 720p, 30fps.

Now compared to XB1 and PS4 what could Wii U lack in a hypothetical port.

1080p, 60fps, more particle effects, AI advancements (IMO this would take time), more AA, more enemies on screen. All of these seem like not-so-bad trade-offs compared to the Wii vs. PS360 situation. It will be easier on the eye and a compromise that will seem easier to accept as a consumer.

Chime in, what am I missing?

Most PS4/X1 games are going to be 30 fps.
 

tipoo

Banned
Well, Rayman is running at 1080p and 60fps. How can the Ubisoft wizards do this and other developers not? Does anybody know what kind of engine they use? Something new, maybe?

A 2.5D side-scroller is a bit easier on hardware than a fully three-dimensional game with an engine that at least tries to look realistic? Not knocking the look at all, of course, but this is like asking why Mario would be easier to render than Crysis.

Flat textures, much simpler physics calculations, less complex shading, etc.

My laptop can run an old game like Halo 2 at 1080p 60FPS; that doesn't mean everything can run like that.
 

A More Normal Bird

Unconfirmed Member
Please correct me if I am wrong, but from reading the thread I think Wii U strengths are:

Pros: Modern features like depth of field, lighting, texture compression, tessellation, and more RAM.

Cons: powerwise, more like PS360. This means 720p, 30fps.

Now compared to XB1 and PS4 what could Wii U lack in a hypothetical port.

1080p, 60fps, more particle effects, AI advancements (IMO this would take time), more AA, more enemies on screen. All of these seem like not-so-bad trade-offs compared to the Wii vs. PS360 situation. It will be easier on the eye and a compromise that will seem easier to accept as a consumer.

Chime in, what am I missing?

It doesn't work like this. You can't just say, oh, this game has X effect, therefore the platform can do it. There are Wii-U games with a DoF effect; does this mean ED2 will use Crytek's sprite-based bokeh? It's essentially a DX11-compliant part; you could run pretty much any shader function you wanted on it. The question is whether it would have the grunt to do so.

The Wii-U supports tessellation and GPU-Compute but extensive use of either is notoriously hard on performance for pre-GCN AMD GPUs (though obviously more can be eked from less in a console environment). A game could conceivably require more compute power for physics etc... than Latte possesses as a whole. This is an example where scaling something down has direct gameplay implications.

Listing more RAM as a pro for the Wii-U when the RAM advantage it holds over the 360 is significantly smaller than the one the XB1 has over it seems like a bit of a stretch.
 
It doesn't work like this. You can't just say, oh, this game has X effect, therefore the platform can do it. There are Wii-U games with a DoF effect; does this mean ED2 will use Crytek's sprite-based bokeh? It's essentially a DX11-compliant part; you could run pretty much any shader function you wanted on it. The question is whether it would have the grunt to do so.

The Wii-U supports tessellation and GPU-Compute but extensive use of either is notoriously hard on performance for pre-GCN AMD GPUs (though obviously more can be eked from less in a console environment). A game could conceivably require more compute power for physics etc... than Latte possesses as a whole. This is an example where scaling something down has direct gameplay implications.

Listing more RAM as a pro for the Wii-U when the RAM advantage it holds over the 360 is significantly smaller than the one the XB1 has over it seems like a bit of a stretch.

It may have the grunt, but things like draw distance or polygons/sec take a hit. Wouldn't it just be trade-offs in play?
 

AzaK

Member
The biggest thing separating the WiiU from the Durango is the RAM. The Durango still has 5GB of faster RAM (though IMO RAM speed isn't quite as important past a certain point at the level we're talking about) versus the Wii U's 1GB. I'm working with stuff now for our workstations, and cramming as much data as we can into usable RAM is single-handedly the biggest task we face.

I think it'd almost be easier to recode CPU tasks from x86 to PPC than to cut the RAM used by games to just 1/5th, since games really do use a lot of RAM for assets nowadays.

The CPU, however, just has to be capable of passing the code at a fast enough rate to keep up with the game. Changing certain code from FP to Int probably kills any performance gain.

Graphically, everything can be scaled down. Using multipliers based on shader ALUs isn't a great basis.

If you look at that breakdown presentation Guerrilla Games did of the Killzone PS4 demo, it's quite surprising how little (relatively speaking) was used for actual data structures and engine execution compared to assets. Assets can be downscaled considerably, so I think that if Nintendo could free up another 1/2 GB or a little more, that would really help.
 
It doesn't work like this. You can't just say, oh, this game has X effect, therefore the platform can do it. There are Wii-U games with a DoF effect; does this mean ED2 will use Crytek's sprite-based bokeh? It's essentially a DX11-compliant part; you could run pretty much any shader function you wanted on it. The question is whether it would have the grunt to do so.

The Wii-U supports tessellation and GPU-Compute but extensive use of either is notoriously hard on performance for pre-GCN AMD GPUs (though obviously more can be eked from less in a console environment). A game could conceivably require more compute power for physics etc... than Latte possesses as a whole. This is an example where scaling something down has direct gameplay implications.

Listing more RAM as a pro for the Wii-U when the RAM advantage it holds over the 360 is significantly smaller than the one the XB1 has over it seems like a bit of a stretch.

The Shin'en dev, when asked about it for Wii U, said that tessellation isn't resource-heavy.

‘Tessellation itself is not resource heavy on recent GPUs but it depends on actual usage. Although even previous consoles had these features you saw it only very rarely used. People often think of it as an easy way to get free ‘level of detail’. That doesn’t work. It’s because of certain visual problems associated with adaptive tessellation.
We already tried various tessellation ideas and it is a very handy tool for certain situations.’
 

Ikaruga!

Neo Member
Well, Rayman is running at 1080p and 60fps. How can the Ubisoft wizards do this and other developers not? Does anybody know what kind of engine they use? Something new, maybe?

Hurray!

Well, not really. I will hype the WiiU if it gets a title like Wipeout HD running at 60fps@1080p. 2D doesn't really cut it, you see?
 

tipoo

Banned
Anandtech brings up an interesting point on the 32MB, even if he didn't consider the Wii U in it:

http://www.anandtech.com/show/6993/intel-iris-pro-5200-graphics-review-core-i74950hq-tested

the 32MB number is particularly interesting because it’s the same number Microsoft arrived at for the embedded SRAM on the Xbox One silicon. If you felt that I was hinting heavily at the Xbox One being ok if its eSRAM was indeed a cache, this is why. I’d also like to point out the difference in future proofing between the two designs.
 
That picture actually helped me think that those Q blocks might be UVD related. That diagram doesn't seem to be worried about precise borders, as I doubt that UVD is only half a block. The block on Brazos is labeled clearly enough. But that diagram has one of those small blocks (Llano's H/G) included in the UVD partition. So we've got the big UVD block and two small identical blocks adjacent to it. Coincidence? On Latte, the arrangement of the SRAM on the left of the Q blocks might hint at there being some interaction between those three blocks as well.

I had figured that Brazos' TAVC block was texture-related, but now I'm not so sure. The TD block does seem to have about the same amount of SRAM that the two T blocks in Latte have - minus those 32 banks, which I located in the TC block. Barring a better guess at what TAVC might stand for, something video related seems like it could work (target analytics, vision, video, catalyst, codec, control, compress, *sigh* hahaha). But there appears to be some SRAM on the border of UVD/TAVC that also might indicate them working together in some fashion. Strange that the UVD on Brazos seems to have less SRAM than the block on Llano, but maybe that's just my eyes playing tricks on me. Both blocks appear to hold a good amount of logic.

Q isn't an exact match for H/G on Llano, but it looks pretty close to me. Llano is 32nm SOI from GF, so again, size comparisons are not likely applicable. Actually, if I didn't know better, I'd think that some of the Latte blocks are stretched just to make the puzzle pieces all fit lol. But that's crazy talk. But there is a similar hierarchy of SRAM in those blocks - 4 different sizes.

The strange thing about that picture, which you did mention in part, is that only one of the H/G duplicates seemed to be included in the UVD. It also makes Llano's seem smaller than Brazos' because of the way that picture breaks it up. I still believe the Q blocks match the one in Llano under the block I feel is like W - the block that has its memory to one side. I think from the perspectives of fab size (I understand your other post on this, but I think H/G are way too small; one block is 19% of one Q's area), SRAM block sizes (I had counted five different sizes in Llano and Latte), and total SRAM amount, it makes more sense. It would just be a matter of figuring out what it does in Llano.

It's just got a huge stack of SRAM compared to the other blocks! It ended up working out nicely too when I was attempting to identify the others yesterday. That configuration seems to make a good deal of sense with the memory controller placement.

I gotcha. I think my opinion of no MCs in Latte is why I don't see the same thing.

Warning, stupid multipliers coming up!

No, you're doing it wrong. It's BS multipliers. :p

When you say that the amount of Constant and Instruction cache in Latte is extraordinary, do you refer to the big pools of eDRAM or to other parts of the GPU?
Because no matter how I look at it, at first glance I swear that the SRAM blocks of the Latte design are a lot bigger than the ones on Brazos, for example.

I was putting this together anyway (attempting to reaffirm the discussion between Fourth and myself), so it should help answer your question at the same time. I'm referring to that large pool of SRAM within the block itself. Here is a picture of the Sequencer (UTDP) in Brazos compared to the block I think is the same in Latte. Also here is a picture for the Shader Export. I rotated and flipped both blocks from Brazos to a similar position as the ones in Latte.

UTDP
[image: STV92IT.jpg]

Shader Export
[image: gLfH7aA.jpg]


The Wii-U supports tessellation and GPU-Compute but extensive use of either is notoriously hard on performance for pre-GCN AMD GPUs (though obviously more can be eked from less in a console environment).

As I understand it, it was the tessellator itself in ATi/AMD GPUs that sucked, and it wasn't until the introduction of the 7th generation of tessellators in some of the 6000 series that they were deemed usable.
 

z0m3le

Banned
The strange thing about that picture, which you did mention in part, is that only one of the H/G duplicates seemed to be included in the UVD. It also makes Llano's seem smaller than Brazos' because of the way that picture breaks it up. I still believe the Q blocks match the one in Llano under the block I feel is like W - the block that has its memory to one side. I think from the perspectives of fab size (I understand your other post on this, but I think H/G are way too small; one block is 19% of one Q's area), SRAM block sizes (I had counted five different sizes in Llano and Latte), and total SRAM amount, it makes more sense. It would just be a matter of figuring out what it does in Llano.



I gotcha. I think my opinion of no MCs in Latte is why I don't see the same thing.



No, you're doing it wrong. It's BS multipliers. :p



I was putting this together anyway (attempting to reaffirm the discussion between Fourth and myself), so it should help answer your question at the same time. I'm referring to that large pool of SRAM within the block itself. Here is a picture of the Sequencer (UTDP) in Brazos compared to the block I think is the same in Latte. Also here is a picture for the Shader Export. I rotated and flipped both blocks from Brazos to a similar position as the ones in Latte.

UTDP
[image: STV92IT.jpg]

Shader Export
[image: gLfH7aA.jpg]




As I understand it, it was the tessellator itself in ATi/AMD GPUs that sucked, and it wasn't until the introduction of the 7th generation of tessellators in some of the 6000 series that they were deemed usable.

The HD 5870 could run Unigine Heaven's tessellation benchmark at around 20fps at the highest setting, iirc. http://www.youtube.com/watch?v=S0gRqaHXuJI

The HD 4870 had the Froblins tessellation demo, but the above demo was DX11, and ATI didn't have tessellation engine 3 in the HD 4870, which was one of the main reasons it couldn't be upgraded to DX11. However, the 4800's tessellation was clearly usable for gaming; it's just that games were programmed too generally to use such a unique component.

http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review/26

The real reason tessellation might have improved so drastically, too, was the dual graphics engine; pushing 2 triangles per clock allowed the card to bury the HD 5870 under tessellation, for instance.

http://www.tomshardware.com/reviews/radeon-hd-6970-radeon-hd-6950-cayman,2818-3.html This one explains it better. Basically, no matter how good your tessellation unit is, you can't exceed the theoretical triangles per second. This makes sense, and I've been leaning this way for a couple of weeks but just now read back to make sure I was right on this. Tessellation is, after all, just splitting simple polygons into smaller polygons to create more detail in an object.

I'm not sure if the Wii U has a dual graphics engine. I don't bring it up because I'm looking for a magic bullet, but if geometry throughput powers tessellation, that would explain why they would go this route, and could also explain why it is an absolutely huge GPU compared to its performance, even cutting out the eDRAM. Brazos, for instance, is 75mm^2 with 2 Bobcat cores along for the ride.
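The triangles-per-clock point is easy to sanity-check, since peak setup rate is just triangles per clock times core clock. The clocks below are the stock reference clocks, which is an assumption about which models are meant:

Code:
# Peak triangle setup rate = triangles per clock * core clock.
# tri/clock figures are from the discussion above; clocks are stock
# reference clocks, assumed for the models being compared.
cards = [
    ("HD 5870 (single graphics engine)", 1, 850e6),
    ("HD 6970 (dual graphics engine)",   2, 880e6),
]

for name, tris_per_clock, clock_hz in cards:
    mtris = tris_per_clock * clock_hz / 1e6
    print(f"{name}: {mtris:.0f} Mtris/s peak")

# HD 5870 (single graphics engine): 850 Mtris/s peak
# HD 6970 (dual graphics engine): 1760 Mtris/s peak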
 

A More Normal Bird

Unconfirmed Member
The Shin'en dev, when asked about it for Wii U, said that tessellation isn't resource-heavy.

‘Tessellation itself is not resource heavy on recent GPUs but it depends on actual usage. Although even previous consoles had these features you saw it only very rarely used. People often think of it as an easy way to get free ‘level of detail’. That doesn’t work. It’s because of certain visual problems associated with adaptive tessellation.
We already tried various tessellation ideas and it is a very handy tool for certain situations.’
In a console environment a fixed function unit like a tessellator would ideally have little to no impact on your baseline performance (extra memory use etc... notwithstanding). But my post was about a comparison to the XB1/PS4. If the Wii-U's tessellator is as far removed from their tessellation units as earlier Radeon tessellators were from those in GCN cards, any tessellation implementation on a Wii-U port would have to significantly reduce either tessellation level or frequency - most likely both.


As I understand it, it was the tessellator itself in ATi/AMD GPUs that sucked, and it wasn't until the introduction of the 7th generation of tessellators in some of the 6000 series that they were deemed usable.

Exactly - we have no idea if the tessellator in Latte is a standard R700 or R800 one, if it's based on more recent designs or if it's a more custom unit.


The HD 5870 could run Unigine Heaven's tessellation benchmark at around 20fps at the highest setting, iirc. http://www.youtube.com/watch?v=S0gRqaHXuJI

The HD 4870 had the Froblins tessellation demo, but the above demo was DX11, and ATI didn't have tessellation engine 3 in the HD 4870, which was one of the main reasons it couldn't be upgraded to DX11. However, the 4800's tessellation was clearly usable for gaming; it's just that games were programmed too generally to use such a unique component.

http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review/26

The real reason tessellation might have improved so drastically, too, was the dual graphics engine; pushing 2 triangles per clock allowed the card to bury the HD 5870 under tessellation, for instance.

http://www.tomshardware.com/reviews/radeon-hd-6970-radeon-hd-6950-cayman,2818-3.html This one explains it better. Basically, no matter how good your tessellation unit is, you can't exceed the theoretical triangles per second. This makes sense, and I've been leaning this way for a couple of weeks but just now read back to make sure I was right on this. Tessellation is, after all, just splitting simple polygons into smaller polygons to create more detail in an object.

I'm not sure if the Wii U has a dual graphics engine. I don't bring it up because I'm looking for a magic bullet, but if geometry throughput powers tessellation, that would explain why they would go this route, and could also explain why it is an absolutely huge GPU compared to its performance, even cutting out the eDRAM. Brazos, for instance, is 75mm^2 with 2 Bobcat cores along for the ride.
Good stuff, but it's still only part of the picture. GCN saw a significant improvement in tessellation scaling over HD6xxx series cards with the same poly-pushing capability, and even outperformed the GTX580, which could push 6 more polys/clock.
 
The strange thing about that picture, which you did mention in part, is that only one of the H/G duplicates seemed to be included in the UVD. It also makes Llano's seem smaller than Brazos' because of the way that picture breaks it up. I still believe the Q blocks match the one in Llano under the block I feel is like W - the block that has its memory to one side. I think from the perspectives of fab size (I understand your other post on this, but I think H/G are way too small; one block is 19% of one Q's area), SRAM block sizes (I had counted five different sizes in Llano and Latte), and total SRAM amount, it makes more sense. It would just be a matter of figuring out what it does in Llano.



I gotcha. I think my opinion of no MCs in Latte is why I don't see the same thing.

Alright, I've been rethinking this. I think we really need to reconsider the necessity for discrete DDR3 memory controllers on this thing. I was digging around and came across this diagram of the 360 slim's CGPU:

[image: cgpu360.jpg]


This has two separate memory controllers (one bordering each interface/phy), even with the FSB replacement block. The original Xenos also had two memory controllers. I would reason that even with an NB block on Latte, it would still need a couple of DDR3 controllers. Even Brazos and Llano have separate graphics memory controller blocks apart from the NB.

Assuming this is true, things get a bit reshuffled in my assignments. It would make sense for the memory controllers to border both phys, so they would have to be the W blocks. In the few memory controllers I've seen (Brazos, RV770 if I'm right), they seem to contain a decent amount of SRAM as well. The ROPs could actually be Q then, which might make more sense, placement-wise, being close to the shaders. They don't seem to have much SRAM in them, but if my RV770 labeling is correct, that might be normal. If I'm looking at the Tahiti die correctly, its ROPs appear similarly small and low in SRAM. I actually dug up one statement that has Llano's Z/Stencil and color caches at 4kB and 16kB each. That's the only place that I've ever read mention the capacity of those two caches, but if it is correct, they would fit into Q.

http://www.realworldtech.com/fusion-llano/

The only thing that doesn't make sense is the ROPs on Brazos. That's just an incredible amount of die area and memory for 4 ROPs! I don't quite know how to make sense of it.
 

fred

Member
Do you believe the only reason Wii U is not getting all multi-platform releases is the relative weakness of its hardware? Or do you not understand that the industry is in the middle of a difficult transition, and that current business models are becoming outmoded and will not likely last another five years? Third party publishers are making low-cost, low-risk bets at the moment, because more expensive failures would mean a gaming dark age. The bad start for Wii U means that it is a high-risk situation until Nintendo proves otherwise. That has more to do with the lack of third party effort than technical limitations.

Edit: Also, take a look at the indie situation. People with smaller budgets and ambitious ideas are being welcomed and supported by Nintendo. The response is great. There are over 50 indie games in development for Wii U, and a lot of the stuff coming out of Ideame and what not has been very positive regarding the hardware and the programming environment. Things have improved significantly on that front since launch, it seems.

Yup, some people may be surprised by publishers losing a fair amount of money on their support of the PS4 and One, particularly if they're daft enough to release high budget launch/launch window titles for them. Both the PS4 and One are going to be supply constrained, I can't see either one of them selling over 2m before the end of the year. And then at the start of next year people are going to have bugger all money to spend due to the world and his wife being skint after Christmas.

That's where the Wii U has a huge advantage over the other two consoles. With the likes of NSL U, Pikmin 3, The Wonderful 101, Wii Fit U, Game & Wario, the rumoured 3D Mario, and Mario Kart 8 all releasing before Christmas, the Wii U's sales momentum will pick right up again, giving it a huge marketshare advantage that will probably take Sony and Microsoft years to close. They might not even manage to do that before the Wii 3 is released in 2017/2018.

Any publisher not supporting the Wii U after Christmas this year will have shareholders going apeshit.

Sorry, I've digressed a bit there. One thing I've been meaning to ask: if bg is right about the dual graphics engines, what does that mean for the power draw of the console..? I've seen previous reports that the Wii U has been drawing an average of 30-35 watts; I'm guessing if it does have dual graphics engines then it should be using twice the power..? Or have I got that wrong..? Maybe the launch and launch window games have only used one of them, and using both in the future will increase the power draw..? Haven't they got 75W to play with altogether..?

Or have I just got everything terribly confused lol
 