
WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis


z0m3le

Banned
Yeah, but the polygons, man!

It's all about the polygons.

Polygons, polygons, polygons.

Just pointing out that there can be 16 Marios, Luigis, Peaches and Toads all on the screen at any one time. Even if they have an average polygon count of only 10k, that's 160k for the game characters alone. I don't think Mario 3D World is somewhere we should be counting polygons; besides, isn't the Wii U capable of 550 million polygons a second? PS3's GPU is only capable of 333 million, if I remember right.
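Multiplying the claim out (a quick back-of-the-envelope sketch; the 60 fps figure is just for scale, not taken from the post above):

```python
# Multiplying out the character claim: 16 on-screen characters at ~10k polygons each,
# then scaling to per-second terms at 60 fps to compare against the quoted peaks.
characters, polys_each, fps = 16, 10_000, 60

per_frame = characters * polys_each
print(per_frame)                # 160000 polygons per frame, characters only
print(per_frame * fps / 1e6)    # 9.6 million polygons/s -- far below any quoted peak
```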

360's is 500 million; just noting this so as not to misrepresent an edge for Wii U when it comes to polygon count.
 
Those are theoretical polygon throughputs. Having *that* and actually outputting them are very different things.

120 million polygons per second was the high-end polygon-pushing norm, really.
 

efyu_lemonardo

May I have a cookie?
Those are theoretical polygon throughputs. Having *that* and actually outputting them are very different things.

120 million polygons per second was the high-end polygon-pushing norm, really.

Are there games today that really need to draw more than a million polygons on screen per frame? I mean, it sounds like Xbone and PS4 main characters aren't even 100k total, and a large number of those polygons will always be occluded anyway...
 
Are there games today that really need to draw more than a million polygons on screen per frame? I mean, it sounds like Xbone and PS4 main characters aren't even 100k total, and a large number of those polygons will always be occluded anyway...
But in forward rendering, if you're using multiple passes for each light, you will render each object as many times as there are lights in the scene. In other words, if you have 3 objects and 3 lights, then you will render those 3 objects 3 times each, so it's the same as rendering 9 objects.
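A minimal sketch of why that multiplies out (illustrative Python, not any real engine's API):

```python
# Multi-pass forward lighting: the scene geometry is re-submitted once per light,
# so draw calls scale with objects * lights.
def forward_multipass_draws(num_objects: int, num_lights: int) -> int:
    draws = 0
    for _ in range(num_lights):       # one additive pass per light
        for _ in range(num_objects):  # every object re-drawn in that pass
            draws += 1
    return draws

print(forward_multipass_draws(3, 3))  # 9 -- same cost as rendering 9 objects once
```

A deferred renderer decouples lighting cost from geometry submission, which is why the shift toward deferred comes up a bit further down.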
 

ozfunghi

Member
So... I haven't really visited this thread in a couple of months (and with good reason, it still seems). Has anything noteworthy been brought up since Fourthstorm and BG were discussing the potential number of ALUs/SPUs and Fourthstorm's low expectations? Or has this kindergarten blocked out any form of substantial discussion and/or investigation?

Thanks in advance.
 
Are there games today that really need to draw more than a million polygons on screen per frame? I mean, it sounds like Xbone and PS4 main characters aren't even 100k total, and a large number of those polygons will always be occluded anyway...
I think the Forza dudes will grab all the polygons they can at any given circumstance and put them into cars.

Other than that, no... not really; most devs aren't really focusing on polygon count anymore.

My point, though, was that even if theoretical throughput is the same, efficiency in texturing and the like probably isn't; the less bottlenecked the platform, the closer it can come to its calculated capacity. I don't think Wii U is gonna be anywhere past 250 million polygons per second on a normal basis, but perhaps it can hit 200 no problem. I'm throwing numbers around here, but seeing some of the games out there, I really think it can have a palpable polycount edge; not huge, but certainly noticeable.


In regards to the million-polygon question: some PS3/X360 games draw 3-4 million polygons per frame @ 30 fps. We're easily past the 1 million mark per frame on a normal basis.
 

prag16

Banned
But in forward rendering, if you're using multiple passes for each light, you will render each object as many times as there are lights in the scene. In other words, if you have 3 objects and 3 lights, then you will render those 3 objects 3 times each, so it's the same as rendering 9 objects.

I thought things were starting to move toward deferred rendering. I think there was some discussion on that earlier in the thread; wsippel, I believe, said he thinks Wii U's setup lends itself more to deferred rendering than forward.
 

efyu_lemonardo

May I have a cookie?
I think the Forza dudes will grab all the polygons they can at any given circumstance and put them into cars.

Other than that, no... not really; most devs aren't really focusing on polygon count anymore.

My point, though, was that even if theoretical throughput is the same, efficiency in texturing and the like probably isn't; the less bottlenecked the platform, the closer it can come to its calculated capacity. I don't think Wii U is gonna be anywhere past 250 million polygons per second on a normal basis, but perhaps it can hit 200 no problem. I'm throwing numbers around here, but seeing some of the games out there, I really think it can have a palpable polycount edge; not huge, but certainly noticeable.


In regards to the million-polygon question: some PS3/X360 games draw 3-4 million polygons per frame @ 30 fps. We're easily past the 1 million mark per frame on a normal basis.

Thanks for the detailed reply. Could you give me an example of a scene from a game with over a million polygons in it?
 
I thought things were starting to move toward deferred rendering. I think there was some discussion on that earlier in the thread; wsippel, I believe, said he thinks Wii U's setup lends itself more to deferred rendering than forward.
Yes, yes, that's absolutely true. But in terms of third-party support, I think the forward renderers may still be of great importance, since most games are still developed on past-gen engines.

With deferred rendering, the Wii U's strengths will be much more noticeable than they are now.
 
Dead Rising 1 claimed 4 million polygons peak per frame, and Lost Planet 1 claimed up to 3 million per frame; those being 30-frames-per-second games.

Not many developers use that metric openly, but 4 million @ 30 fps and 2 million @ 60 fps seem to be the achievable limits of the X360; more than that is not really feasible, IMO.

That amounts to 120 million polygons per second on both (30 and 60 fps) accounts.
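The per-second figure is just the per-frame count times the framerate; checking the numbers above:

```python
# Polygons per second = polygons per frame * frames per second.
claims = [
    ("Dead Rising (peak)", 4_000_000, 30),
    ("Lost Planet",        3_000_000, 30),
    ("60 fps ceiling",     2_000_000, 60),
]
for name, per_frame, fps in claims:
    print(f"{name}: {per_frame * fps / 1e6:.0f}M polygons/s")
# Dead Rising (peak): 120M polygons/s
# Lost Planet: 90M polygons/s
# 60 fps ceiling: 120M polygons/s
```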
 

efyu_lemonardo

May I have a cookie?
Dead Rising 1 claimed 4 million polygons peak per frame, and Lost Planet 1 claimed up to 3 million per frame; those being 30-frames-per-second games.

Not many developers use that metric openly, but 4 million @ 30 fps and 2 million @ 60 fps seem to be the achievable limits of the X360; more than that is not really feasible, IMO.

That amounts to 120 million polygons per second on both (30 and 60 fps) accounts.

And these are unique, visible polygons? Not multiple draws of the same ones, or unoptimised occlusion culling?
 
And these are unique, visible polygons? Not multiple draws of the same ones, or unoptimised occlusion culling?
I can't speak for the development team, but I can trace the source for it:

The main character [in Lost Planet] is about 10,000-20,000 polygons. The VS Robot is about 30,000-40,000. The background is about 500,000 polygons. That amounts to about 3 million polygons per scene, every frame, including rendering cost that is invisible to the naked eye, like shadow generation.

(...) On the other hand, Dead Rising is about 4 million polygons [per frame].
Source: http://game.watch.impress.co.jp/docs/20070131/3dlp.htm

Translation cleaned up a little, but it's representative.

Lost Planet is using 2.5D motion blur; hence, the "invisible to the eye" claim is most likely in accordance with that fact. The Wayne model is 12,392 polygons, but the game pads it all the way up to 17,765 polygons in order for the 2.5D motion blur effect to work.

Those polygons are not visible outside of the effect (this gets explained with images in the source; it's pretty interesting, actually). Anywho, said effect is not limited to the main character but applies to other things as well; they're taking that into account in their 3-million-per-frame polygon projections.

They're not all necessarily visible, although the result of them being there is (it's the motion blur). As for being unique... I think so: they aren't rendering a character twice like some games do for things like reflections. What they're doing is essentially 2D vector manipulation on top of the 3D models (hence the term 2.5D motion blur). They're byproducts of the characters, yet they're different (2D silhouettes, basically). I believe that's as far as it goes in regards to "multiple draws", and I'll assume the game is using culling properly, but I really don't know; regardless, their count would still take into account every single polygon drawn, that much is evident.

This said, I remember tinkering with this source several years ago; as of now I've only glanced over it, so I might be stepping on a mine in regards to very specific details in there. The translated part is accurate, though.
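For what it's worth, simple arithmetic on the source's own Wayne counts puts a number on that blur overhead:

```python
# Overhead of the 2.5D motion-blur geometry on the Wayne model, per the translated source.
base, with_blur = 12_392, 17_765
extra = with_blur - base
print(extra, f"({extra / base:.0%} on top of the base mesh)")  # 5373 (43% on top of the base mesh)
```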
 
I think the Forza dudes will grab all the polygons they can at any given circumstance and put them into cars.

Other than that, no... not really; most devs aren't really focusing on polygon count anymore.

My point, though, was that even if theoretical throughput is the same, efficiency in texturing and the like probably isn't; the less bottlenecked the platform, the closer it can come to its calculated capacity. I don't think Wii U is gonna be anywhere past 250 million polygons per second on a normal basis, but perhaps it can hit 200 no problem. I'm throwing numbers around here, but seeing some of the games out there, I really think it can have a palpable polycount edge; not huge, but certainly noticeable.


In regards to the million-polygon question: some PS3/X360 games draw 3-4 million polygons per frame @ 30 fps. We're easily past the 1 million mark per frame on a normal basis.

Dead Rising 1 claimed 4 million polygons peak per frame, and Lost Planet 1 claimed up to 3 million per frame; those being 30-frames-per-second games.

Not many developers use that metric openly, but 4 million @ 30 fps and 2 million @ 60 fps seem to be the achievable limits of the X360; more than that is not really feasible, IMO.

That amounts to 120 million polygons per second on both (30 and 60 fps) accounts.

I can't speak for the development team, but I can trace the source for it:

Source: http://game.watch.impress.co.jp/docs/20070131/3dlp.htm

Translation cleaned up a little, but it's representative.

Lost Planet is using 2.5D motion blur; hence, the "invisible to the eye" claim is most likely in accordance with that fact. The Wayne model is 12,392 polygons, but the game pads it all the way up to 17,765 polygons in order for the 2.5D motion blur effect to work.

Those polygons are not visible outside of the effect (this gets explained with images in the source; it's pretty interesting, actually). Anywho, said effect is not limited to the main character but applies to other things as well; they're taking that into account in their 3-million-per-frame polygon projections.

They're not all necessarily visible, although the result of them being there is (it's the motion blur). As for being unique... I think so: they aren't rendering a character twice like some games do for things like reflections. What they're doing is essentially 2D vector manipulation on top of the 3D models (hence the term 2.5D motion blur). They're byproducts of the characters, yet they're different (2D silhouettes, basically). I believe that's as far as it goes in regards to "multiple draws", and I'll assume the game is using culling properly, but I really don't know; regardless, their count would still take into account every single polygon drawn, that much is evident.

This said, I remember tinkering with this source several years ago; as of now I've only glanced over it, so I might be stepping on a mine in regards to very specific details in there. The translated part is accurate, though.

Thanks for sharing your analysis and logical guesses. I'm very interested in how close the Wii U and the other next-gen systems will come to their rendering limits in practice. The Wii U is a bit more intriguing, due to its rendering limit being close to current-gen on paper. Efficiency will play a big role in whether it performs notably beyond current-gen capabilities.
 

OryoN

Member
Posted that a while back... nothing to get from the site.

GAF -> internet -> GAF?

I don't get it. After months of speculating and debating over what may be in Latte, people are just going to shrug off the details on the site as though we knew it all along? What's the matter, website not credible? Or did GAF confirm this already? (I may have missed that.)

That's some pretty specific 'info' they've got there. Transistor count (Espresso should bring the total to slightly over 1 billion), # of shaders (the subject of countless debates), and even the number of compute units (which some insisted weren't in Latte). Why aren't we discussing this? Did they pull it out of thin air? All the other info seems spot on. Someone please fill me in, 'cause I'm a bit confused.
 

z0m3le

Banned
I think the Forza dudes will grab all the polygons they can at any given circumstance and put them into cars.

Other than that, no... not really; most devs aren't really focusing on polygon count anymore.

My point, though, was that even if theoretical throughput is the same, efficiency in texturing and the like probably isn't; the less bottlenecked the platform, the closer it can come to its calculated capacity. I don't think Wii U is gonna be anywhere past 250 million polygons per second on a normal basis, but perhaps it can hit 200 no problem. I'm throwing numbers around here, but seeing some of the games out there, I really think it can have a palpable polycount edge; not huge, but certainly noticeable.


In regards to the million-polygon question: some PS3/X360 games draw 3-4 million polygons per frame @ 30 fps. We're easily past the 1 million mark per frame on a normal basis.

Thanks to the Wii U's tessellation unit, polygon count will likely exceed anything on the PS3/360; same with PS4 and XB1 and their (expected) even more advanced tessellation units; unless I completely don't understand what a tessellation unit does (creating more polygons from a "map"?). I didn't know 360 and PS3 were only pushing ~120 million polygons per second, but I guess that is why the PS3 never slowed down vs the 360, since it could only output ~2/3rds as many polygons. Honestly, I think the norm will be Wii U running ~2x as many polygons on average (think a 30fps 360/PS3 game vs a 60fps Wii U version). If tessellation is used, this number could jump quite a bit higher, but I wouldn't be surprised if Wii U pushes 4 million polygons a frame @ 60 frames per second.
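Running the same per-frame x fps arithmetic on that last scenario (a rough check; whether tessellation-generated polygons count against the same setup limit is a separate question):

```python
# 4M polygons per frame at 60 fps, against the ~550M/s theoretical peak quoted upthread.
per_frame, fps, quoted_peak = 4_000_000, 60, 550e6
sustained = per_frame * fps
print(sustained / 1e6)                                                  # 240.0 million polygons/s
print(f"{sustained / quoted_peak:.0%} of the quoted theoretical peak")  # 44%
```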
 

MDX

Member
So it's not "GAF > Internet > GAF"

So where have these figures come from, and why are they being dismissed?

I don't know, but it's being tossed around like it's a fact:

[image: compairison-chart1.jpg]
 

MDX

Member
They left out a couple MBs of eDRAM, didn't they? Isn't it 32+2?

Yep

3 separate blocks of memory:
1 block of 32MB of eDRAM,
1 block of 1MB of SRAM - possibly used in "Wii mode",
1 block of 2MB of eDRAM - also likely used in Wii compatibility mode

Though I don't understand the need for the different blocks for "Wii mode".
But then, what benefits could these two other blocks offer?
 

Hermii

Member
Yep



Though I don't understand the need for the different blocks for "Wii mode".
But then, what benefits could these two other blocks offer?

Maybe they are needed in Wii mode, but I'm sure they are being used in Wii U mode as well.
 
I don't know, but it's being tossed around like it's a fact:

[image: compairison-chart1.jpg]

Wtf is a "catch", and why are the cache sizes different for Xbox and PS4? Or is that for the GPU? Because no one knows the cache figures for the PS4 GPU.
If it's for the Jaguar, it should be 64KB per core and 2MB per module (4 cores).

EDIT: Derp, it's the GPU.
 

OryoN

Member
So it's not "GAF > Internet > GAF"

So where have these figures come from, and why are they being dismissed?

That's what I've been trying to figure out myself. If they were pulling these numbers out of thin air, you'd think they'd do the same for the other details they're unsure of, or at least give estimates. I'm guessing they have some sources (devs) who were willing to share some of the 'discoveries' made thus far. That's probably another reason why they didn't blow it up, but flew under the radar with it. I think I'll try to contact them for a response on the matter.

Still, it's not a confirmation, just highly likely info that we should keep an eye on. It's a real shame no one seems interested in investigating this matter after months and months of almost fruitless discussions and derailments. Oh well...
 
That's what I've been trying to figure out myself. If they were pulling these numbers out of thin air, you'd think they'd do the same for the other details they're unsure of, or at least give estimates. I'm guessing they have some sources (devs) who were willing to share some of the 'discoveries' made thus far. That's probably another reason why they didn't blow it up, but flew under the radar with it. I think I'll try to contact them for a response on the matter.

Still, it's not a confirmation, just highly likely info that we should keep an eye on. It's a real shame no one seems interested in investigating this matter after months and months of almost fruitless discussions and derailments. Oh well...

I think the problem is that most people who both cared and were knowledgeable left the thread after so many derailments.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
I think the problem is that most people who both cared and were knowledgeable left the thread after so many derailments.
You wish ;p

The issue with that linked Latte spec that you and OryoN discussed is that it has a serious flaw in its numbers WRT the VLIW5 rumors: by the quoted numbers, Latte cannot be a VLIW5.

Five CUs (compute units, AKA SIMD units) and 320 SPs (shader processors, AKA processing elements) means each CU should host 64 PEs. 64 is not a multiple of 5, and VLIW5 works with quintets of PEs. Not to mention the consensus is that there are 8 CUs on the die shot, and that's one of the few things there's a consensus about ; )
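That divisibility argument can be checked mechanically; a trivial sketch, testing only the 5-wide PE grouping constraint:

```python
# A VLIW5 config needs SPs to split evenly across CUs, in groups of 5 per CU.
def plausible_vliw5(total_sps: int, num_cus: int) -> bool:
    if total_sps % num_cus != 0:
        return False
    return (total_sps // num_cus) % 5 == 0

print(plausible_vliw5(320, 5))  # False: 64 SPs/CU is not a multiple of 5
print(plausible_vliw5(320, 8))  # True:  40 SPs/CU
print(plausible_vliw5(240, 8))  # True:  30 SPs/CU
print(plausible_vliw5(160, 8))  # True:  20 SPs/CU
```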
 
You wish ;p

The issue with that linked Latte spec that you and OryoN discussed is that it has a serious flaw in its numbers WRT the VLIW5 rumors: by the quoted numbers, Latte cannot be a VLIW5.

Five CUs (compute units, AKA SIMD units) and 320 SPs (shader processors, AKA processing elements) means each CU should host 64 PEs. 64 is not a multiple of 5, and VLIW5 works with quintets of PEs. Not to mention the consensus is that there are 8 CUs on the die shot, and that's one of the few things there's a consensus about ; )

Thanks, blu. So the spec list is bullshit, and there's no change in perceived power.
 

wilsoe2

Neo Member
You wish ;p

The issue with that linked Latte spec that you and OryoN discussed is that it has a serious flaw in its numbers WRT the VLIW5 rumors: by the quoted numbers, Latte cannot be a VLIW5.

Five CUs (compute units, AKA SIMD units) and 320 SPs (shader processors, AKA processing elements) means each CU should host 64 PEs. 64 is not a multiple of 5, and VLIW5 works with quintets of PEs. Not to mention the consensus is that there are 8 CUs on the die shot, and that's one of the few things there's a consensus about ; )

Thanks for the clarification, blu. However, I guess the noob question in response is: IF there is consensus that there are 8 CUs, THEN 320 / 8 means each CU would host 40 PEs, which is a multiple of 5?

I know there were a lot of other reasons why 320 is unlikely (or impossible), as Fourth Storm has mentioned many times. I'm too much of a layman to explain them, though. And I also remember it being discussed here whether the Wii U could be 240 ALUs... I guess 240 / 8 CUs = 30 PEs, which might be more standard? And also a multiple of 5? If you can explain any other evidence for or against, I'd be interested to read it. Thanks.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Well, traditionally R700 and R800 (both being 'classic' VLIW5 designs) have used shader blocks of 20 and 40 SPs; in the R700 case a CU could comprise 2x or 4x such blocks, for a total of up to 80 SPs per CU/SIMD engine (e.g. the 4870 had 10x SIMD engines of 4x shader blocks each, amounting to 80 SPs per SIMD engine). Of course, this is all speaking of the 'canonical' R700s and R800s.

We know Latte has 8x shader blocks, and Fourth Storm et al think those blocks are of the 20 SP kind, with the strongest reasoning stemming from the discernible register file layout. I'm of the opinion that that might be the case, but the evidence is non-conclusive, since the regfile layout is too fuzzy a beast.

This is all re the number of 'set-in-silicon' features, of course, whereas the final target of the discussion has been more about the hypothetical top FLOPS rate, which would be SPs * clock * 2 (due to FMADD ops). While we know the Latte 'core clock' (i.e. the rate at which most things, incl ROPs, work), we don't actually know what clock the CUs operate at - there is a chance they operate at a multiple of the core clock. Again, traditionally AMD designs have had CU clocks in line with the core clock, but how 'traditional' an AMD design Latte is remains to be seen.
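Plugging the candidate SP counts into that formula, under the unconfirmed assumption that the shader clock equals the ~550 MHz core clock:

```python
# Hypothetical peak shader throughput: FLOPS = SPs * clock * 2 (one FMADD = 2 ops).
# ASSUMPTION: shader clock == ~550 MHz core clock; as noted above, that's not confirmed.
CORE_CLOCK_HZ = 550e6

for sps in (160, 240, 320):
    print(f"{sps} SPs -> {sps * CORE_CLOCK_HZ * 2 / 1e9:.0f} GFLOPS")
# 160 SPs -> 176 GFLOPS
# 240 SPs -> 264 GFLOPS
# 320 SPs -> 352 GFLOPS
```

The 160 SP case is where the oft-quoted 176 GFLOPS figure comes from.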
 

OryoN

Member
Thanks, blu. Super noob stuff incoming...

Still something I'm confused about: the consensus has been that Latte is based on a VLIW5 design. But, from what I understand, that design doesn't lend itself to "compute" in the traditional sense; hence the GCN architecture. But Latte isn't GCN-based, as far as we understand. So how would those 8 shader blocks represent compute units? If AMD was asked to customize those blocks for compute, wouldn't it just end up being a GCN design?

Originally, I was wondering if the compute features of Latte could be tucked away in a totally separate block or two (preferably one with lots of registers and whatnot), highly specialized for compute tasks only. Can that be ruled out entirely?
[/noob]
 

A More Normal Bird

Unconfirmed Member
Thanks, blu. Super noob stuff incoming...

Still something I'm confused about: the consensus has been that Latte is based on a VLIW5 design. But, from what I understand, that design doesn't lend itself to "compute" in the traditional sense; hence the GCN architecture. But Latte isn't GCN-based, as far as we understand. So how would those 8 shader blocks represent compute units? If AMD was asked to customize those blocks for compute, wouldn't it just end up being a GCN design?

Originally, I was wondering if the compute features of Latte could be tucked away in a totally separate block or two (preferably one with lots of registers and whatnot), highly specialized for compute tasks only. Can that be ruled out entirely?
[/noob]
Compute unit =/= optimised for/explicitly related to "GPU compute" tasks. Or rather, 'compute' doesn't necessarily refer to what's called "GPU compute", "compute shaders", "GPGPU", etc., which is just the use of the GPU for tasks outside of pure graphics. Everything a processor does is some form of computation, whether it be rendering a shader or running a physics simulation.
 

OryoN

Member
Just thought this was interesting with all the recent "lol @176 GFLOPs" chatter.

That is, the fact that the team at Slightly Mad Studios apparently pulled the plug on the current-gen version of Project Cars, but Wii U is still along for the ride. The fact that the small dev team opted out of targeting the massive consumer base on PS360 seems to indicate that those consoles posed technical challenges the team felt weren't worth the compromise in quality... certain challenges that they either never encountered on Wii U, or found an acceptable solution for.

“Project CARS has always led the pack in terms of insane detail. Whether that’s graphically in the craftsmanship of our cars and tracks, technically in the way we’ve approached weather and time of day, or emotionally in how each car feels and responds to your touch. These powerful new platforms allow us therefore to not compromise on the quality of our vision and ultimately that means players are going to experience something truly breathtaking when they get behind the wheel.”
http://www.wmdportal.com/projectnews/project-cars-races-to-next-gen/

Whether people consider Wii U one of those "powerful platforms" or not, the fact is that it at least meets the minimum requirements for the quality and vision behind Project Cars. This is especially interesting since the console is still very young.

One other point that can be made is that having much more modern capabilities (and more RAM, of course) seems to dictate what games are technically possible on Wii U, more so than FLOPs alone, despite all the drama.
 
Just thought this was interesting with all the recent "lol @176 GFLOPs" chatter.

That is, the fact that the team at Slightly Mad Studios apparently pulled the plug on the current-gen version of Project Cars, but Wii U is still along for the ride. The fact that the small dev team opted out of targeting the massive consumer base on PS360 seems to indicate that those consoles posed technical challenges the team felt weren't worth the compromise in quality... certain challenges that they either never encountered on Wii U, or found an acceptable solution for.


http://www.wmdportal.com/projectnews/project-cars-races-to-next-gen/

Whether people consider Wii U one of those "powerful platforms" or not, the fact is that it at least meets the minimum requirements for the quality and vision behind Project Cars. This is especially interesting since the console is still very young.

One other point that can be made is that having much more modern capabilities (and more RAM, of course) seems to dictate what games are technically possible on Wii U, more so than FLOPs alone, despite all the drama.

I wouldn't be surprised if the decision was made due to the RAM size alone. That's the one significant advantage the Wii U definitely has.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
I wouldn't be surprised if the decision was made due to the RAM size alone. That's the one significant advantage that the Wii U definitely has.
Another significant advantage would be that the developer responsible for the outstanding NFSMW U port is doing the CARS U backend.
 

lyrick

Member
You wish ;p

The issue with that linked Latte spec that you and OryoN discussed is that it has a serious flaw in its numbers WRT the VLIW5 rumors: by the quoted numbers, Latte cannot be a VLIW5.

Five CUs (compute units, AKA SIMD units) and 320 SPs (shader processors, AKA processing elements) means each CU should host 64 PEs. 64 is not a multiple of 5, and VLIW5 works with quintets of PEs. Not to mention the consensus is that there are 8 CUs on the die shot, and that's one of the few things there's a consensus about ; )

What if we assumed that one of those CUs/SIMD arrays was disabled, which would give us the 320 total? Essentially this:
[image: Redwood_architecture.png]

Without the last logical CU/SIMD array functioning (or, more than likely, not even there), we would have 4 SIMD arrays, which would look very comparable to Sumo's 5 (the Llano GPU in the OP).

The VLIW5 architecture in general lends itself to 80 ALUs per SIMD engine, except for the 5450, which had only 40.
http://pc.watch.impress.co.jp/video/pcw/docs/606/220/p1.pdf
[image: HD_5450_Architektur.png]

http://www.pcgameshardware.com/aid,...-card/Reviews/&menu=browser&article_id=704313
if the direct link doesn't work...
 

OryoN

Member
Another significant advantage would be that the developer responsible for the outstanding NFSMW U port is doing the CARS U backend.

Really? That's good to hear. How long was that person on board? Now I'm wondering if it's coincidental that those Wii U change logs really seem to be digging into the hardware lately, or if it's mainly due to said developer. Not that SMS isn't talented already, but NFS:MWU would sure look good on a programmer's résumé.
 

fred

Member
So, assuming that Latte is a 160:8:8 GPU, does anyone have any theories on why the ALU space appears to be too large and why the ROPs and TMUs are 1:1..?

Does the ALU space taken point to the possibility of Nintendo evolving the TEV unit, so that the Wii U has a standard rendering pipeline when using these fixed functions..? Is it possible to have fixed functions and a standard rendering pipeline..? I have no idea how the TEV unit in the GameCube and Wii worked, but I know that it caused major problems with ports last gen.

We know from ages ago that Nintendo was working closely with engine creators such as Crytek, Epic and Unity during the time they were putting the Wii U together, so perhaps they've come up with a way to integrate these fixed functions with engines..?

And if they're not using fixed functions, why are the ALUs so big..?

I've thought for a while that they were using some sort of evolved TEV unit in an attempt to mitigate the difference in power between Latte and the other GPUs in the PS4 and One, but I have no idea how these things work. It would give 'free' use of common effects such as HDR, DOF, etc.
 

TunaLover

Member
Kind of off-topic, but I have been doing some texture swaps in Smash Bros. Brawl, and it's impressive what those guys have achieved. Currently they are porting the exact same character models from PS3/360 games (poly count, textures); they are 1:1 models. While it's just a fighting game with few things going on on stage, it's still impressive that the old Wii can manage to render 4 of those characters without much stress. I know it's well known that the Wii CPU is a pretty capable piece of tech, but it gives me confidence that they're using the same architecture for the Wii U.

It's a buffer screenshot; the model looks better in gameplay because you can get close enough to see every detail in the textures.

[image: 1_zpsa869b097.jpg]


[image: wDlu9zn.jpg]
 

The_Lump

Banned
I don't get it. After months of speculating and debating over what may be in Latte, people are just going to shrug off the details on the site as though we knew it all along? What's the matter, website not credible? Or did GAF confirm this already? (I may have missed that.)

That's some pretty specific 'info' they've got there. Transistor count (Espresso should bring the total to slightly over 1 billion), # of shaders (the subject of countless debates), and even the number of compute units (which some insisted weren't in Latte). Why aren't we discussing this? Did they pull it out of thin air? All the other info seems spot on. Someone please fill me in, 'cause I'm a bit confused.

Probably already been addressed, but this has been online for months now. I came across it a few times during my own digging, but just dismissed it offhand as GAF > Internet, as around the time I first saw it, those were roughly the numbers still being discussed.
 

krizzx

Junior Member
Well, there are upcoming third-party games for Wii U; let's see how they perform.

Don't forget Bayonetta 2 and Project C.A.R.S.

Project C.A.R.S. should be an interesting example, now that they are utilizing the next-gen features in the Wii U for it.

Has anyone heard anything about them supposedly only using one CPU core and using the GPGPU features for the rest? I saw that pop up here and there. Their change log should have a plethora of analytical data. The multithreaded shadows thing still has my interest piqued.
 
Wouldn't multithreaded shadows imply using more than one CPU core? It wouldn't make sense to handicap the game like that anyway, given they're targeting platforms that would benefit from maximizing multiple cores.
 

fred

Member
As far as I remember, they were only using a single core early on in development and switched to using all 3 later on. I'd say it's also a safe bet that they've used the GPGPU functionality as much as possible. I'm really looking forward to seeing what they can do with the console.
 

Powerwing

Member
I have a question for you tech guys. I have a Wii U with some games (ZombiU, Nintendo Land, Pikmin 3), and I also played the Bayonetta 2, Mario 3D World and Mario Kart 8 demos when I went to an event (Paris Games Week), and none of these games seems to have proper anti-aliasing. I'm a bit biased because I also play on my PC, always with max AA on. Is there a reason for that? I will buy those games no matter what, but I was wondering about the lack of AA, taking into account the big fat eDRAM (32 MB), the resolution (720p) and the newer GPU architecture.
 