
WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis


USC-fan

Banned
They are not. Embedded parts and consumer parts have entirely different lifespans. How do I know that? I've spent ~6 years of my career working with AMD's (and other vendors') embedded parts.

That's not true at all. Unless there is only one SKU in the line, the parts are binned. Most embedded lines have 2-3 SKUs, and every part is binned. Looking at AMD's product lines, I do not see any that are not binned.

It just makes a lot of business sense to bin parts. If just one block fails, the whole chip would otherwise be worthless.
 
That's not true at all. Unless there is only one SKU in the line, the parts are binned. Most embedded lines have 2-3 SKUs, and every part is binned. Looking at AMD's product lines, I do not see any that are not binned.

It just makes a lot of business sense to bin parts. If just one block fails, the whole chip would otherwise be worthless.

Even if that were the case, it still doesn't justify binning as a logical explanation, as I mentioned in my response to Fourth.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
That's not true at all. Unless there is only one SKU in the line, the parts are binned. Most embedded lines have 2-3 SKUs, and every part is binned. Looking at AMD's product lines, I do not see any that are not binned.
Which are those embedded AMD parts with 2-3 SKUs?

Yeah, this makes sense; thanks for the informed comment. Considering they sell the same embedded parts for 2+ years and ship millions of units, binning never made much sense to me, especially because even when they are based on the same core as a desktop series, they are a different configuration.
Actually, 2 years is considered fairly short-lived in embedded. Most embedded parts, particularly those from large vendors, have a lifespan of 3+ years.
 

krizzx

Junior Member
http://www.lensoftruth.com/head2hea...arison-and-analysis-ps3-vs-xbox-360-vs-wii-u/

Alright, we have the Lens of Truth comparison for Splinter Cell: Blacklist. I prefer these guys to Digital Foundry because they don't have the clear bias and the immature mentality that DF tends to demonstrate in their comparisons and their clickbait headlines. They also seem to provide more credible material to back their claims.

The lighting balance and the hair are the clearest markers on the Wii U version. And the Wii U version is the only one that has no torn frames.

The loading times seem to be the only big problem on the Wii U, which is the first time I've seen such a thing; usually they are faster. Also, the 360 version being listed as supporting higher resolutions than the PS3 version isn't a first, but it's still surprising considering that the PS3 looks like it was the lead development platform for this.

It says the 360 and Wii U versions output up to 1080p and the PS3 up to 720p.

http://www.youtube.com/watch?feature=player_embedded&v=Fg33bG2LgkY


This is the Wii U analysis video (I don't know how to embed videos in posts). What I find the most odd is that the Wii U version is the only one that holds an absolute 30 FPS during gameplay the majority of the time, but has ridiculous frame drops during cutscenes, which are generally supposed to perform better than normal gameplay.


The verdict is that the Wii U is the overall best, but with the PS3 having the most detailed textures? I find this the most odd because the 360 and Wii U have the superior GPUs, and superior textures are usually the one area where the Wii U comes out on top. I still feel that this game didn't really do much to make use of the Wii U's capabilities. Those load times should have been shorter simply due to the fact that its RAM has fewer bottlenecks and you can preload data earlier thanks to it having more RAM.

Even though it was declared the winner, it honestly looks like they just half-assed the Wii U version, especially when I see that flat wall texture on the second page that had a chunk missing on the other two versions. Criterion and Frozenbyte made it clear that it has superior texture capabilities.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
The E6760/E6460 case could indeed be a binned pair, but if they are binned, they are binned by good functions, not performance. Just check their respective briefs - the two parts have different capabilities:

http://www.amd.com/us/Documents/E6460GPU_Product_Brief.pdf
http://www.amd.com/us/Documents/AMD-Radeon-E6760-Discrete-GPU-product-brief.pdf

APUs are an even better source for binning. Look at all the different SKUs, and please note the die size is the same for every SKU.
http://en.wikipedia.org/wiki/List_o...sing_Unit_microprocessors#Embedded_Processors
I don't have experience with AMD's embedded APUs. But AMD's entire E GPU lineup, save for potentially those two (E6760/E6460), is not binned. You can take it or leave it.
 

fred

Member
You know guys, it all doesn't matter anymore! I haven't really talked much for quite some time in this thread or other threads about how capable the Wii U is. For a simple reason...

176 GFLOPS? 352 GFLOPS? It doesn't matter!

Anything being discussed in here doesn't matter to me anymore since E3.

The Wii U could have 1 GFLOP and it wouldn't matter.

Since E3, the Wii U has IMO shown exactly what I (and several others) expected it to do: be a good step up above PS360.

If you look at "X" you have to keep in mind that Nintendo's games get the really good visual bling at the end of the development cycle. Now imagine how this game will look a year from now. Because if it's really coming in 2014, it's probably a holiday title. So comparing games that are out or coming in the next weeks to "X" makes no sense. Remember that Pikmin 3 and The Wonderful 101 both improved a lot in their last year of development! Also, a game like GTA 5 looks incredible because they probably spent over 100 million on it and have a far bigger team on it than any Wii U game will ever see. "X" is probably done with a fraction of GTA 5's budget. A huge budget can make quite a difference that has nothing to do with the technical capabilities of the hardware.

Also, Wind Waker HD looks incredible in the latest footage. Full 1080p plus really awesome lighting effects, higher-res textures and proper widescreen. IMO, all that was needed to bring this game to 2013 was done.

Mario Kart 8 looks absolutely amazing. Great textures, awesome character/kart models and effects. All at 60 frames/sec. Not buying the 1080p everyone is speaking of until it's official, though.

Smash is 1080p60 and looks really amazing. 'Nuff said :p

Bayonetta 2 looks a lot better than Bayonetta 1. And I don't buy the "Bayonetta is a 2010 game so it doesn't count" excuse. Bayonetta 1 came out in 2010 and was developed on a mature Xbox 360 dev kit by a team with good technical abilities. So the Bayo 1/2 comparison is valid. Also, both games are probably utilising a comparable budget, since I can't see Nintendo "wasting" as much cash on a game as other developers - especially not on a "B" franchise like Bayonetta, which only appeals to a niche audience. IMO Bayo 1 vs. Bayo 2 is the most exact comparison we have right now if you also use the budget as a factor (and you should).

3D World looks really great and super clean, and I wouldn't be surprised if it was 1080p60. (Yet I would be OK with 720p60 too and wouldn't whine or laugh about it like some people on here would.)

The people who still try to push the "Wii U = 360" agenda in terms of capabilities are the same people coming to every Wii U thread to trash-talk, so their opinions and statements have absolutely no value to me. I'm not calling those people out; you know who you are.

That's just my 2 cents on the Wii U capabilities talk.

Post. Of. The. Week. Already.

Very well said mate. I'd also like to add (and I've said it before) that Sonic Generations and Sonic Lost World would join Bayonetta 1/2 as a fair comparison. The difference between those two pairs of games is night and day despite the usual suspects here being bizarrely in denial.
 

krizzx

Junior Member
You know guys, it all doesn't matter anymore! I haven't really talked much for quite some time in this thread or other threads about how capable the Wii U is. For a simple reason...

176 GFLOPS? 352 GFLOPS? It doesn't matter!

Anything being discussed in here doesn't matter to me anymore since E3.

The Wii U could have 1 GFLOP and it wouldn't matter.

Since E3, the Wii U has IMO shown exactly what I (and several others) expected it to do: be a good step up above PS360.

If you look at "X" you have to keep in mind that Nintendo's games get the really good visual bling at the end of the development cycle. Now imagine how this game will look a year from now. Because if it's really coming in 2014, it's probably a holiday title. So comparing games that are out or coming in the next weeks to "X" makes no sense. Remember that Pikmin 3 and The Wonderful 101 both improved a lot in their last year of development! Also, a game like GTA 5 looks incredible because they probably spent over 100 million on it and have a far bigger team on it than any Wii U game will ever see. "X" is probably done with a fraction of GTA 5's budget. A huge budget can make quite a difference that has nothing to do with the technical capabilities of the hardware.

Also, Wind Waker HD looks incredible in the latest footage. Full 1080p plus really awesome lighting effects, higher-res textures and proper widescreen. IMO, all that was needed to bring this game to 2013 was done.

Mario Kart 8 looks absolutely amazing. Great textures, awesome character/kart models and effects. All at 60 frames/sec. Not buying the 1080p everyone is speaking of until it's official, though.

Smash is 1080p60 and looks really amazing. 'Nuff said :p

Bayonetta 2 looks a lot better than Bayonetta 1. And I don't buy the "Bayonetta is a 2010 game so it doesn't count" excuse. Bayonetta 1 came out in 2010 and was developed on a mature Xbox 360 dev kit by a team with good technical abilities. So the Bayo 1/2 comparison is valid. Also, both games are probably utilising a comparable budget, since I can't see Nintendo "wasting" as much cash on a game as other developers - especially not on a "B" franchise like Bayonetta, which only appeals to a niche audience. IMO Bayo 1 vs. Bayo 2 is the most exact comparison we have right now if you also use the budget as a factor (and you should).

3D World looks really great and super clean, and I wouldn't be surprised if it was 1080p60. (Yet I would be OK with 720p60 too and wouldn't whine or laugh about it like some people on here would.)

The people who still try to push the "Wii U = 360" agenda in terms of capabilities are the same people coming to every Wii U thread to trash-talk, so their opinions and statements have absolutely no value to me. I'm not calling those people out; you know who you are.

That's just my 2 cents on the Wii U capabilities talk.

Thank you for this post. I'm afraid to say stuff like this now because I get mobbed. I've caught the ire of many a naysayer with that last comparison between Sonic Lost World and Sonic Generations and my Smash Brothers U analysis.

http://www.neogaf.com/forum/showpost.php?p=77793633&postcount=8390
http://www.neogaf.com/forum/showpost.php?p=77525065&postcount=8201

You should check them out.

So, basically, it makes the graphics, right?

It makes the "next-gen" graphics. How many last gen consoles were capable of GPGPU weather?
 

69wpm

Member
Thanks for posting this. The only thing I was worried about was frame drops, since Wii U ports seem to enable v-sync at the expense of framerate. Load times aside, it does seem to be the best-performing version of the game. Time to go buy!

There is one weird thing about that analysis: a fellow Gaffer mentioned a day-one update, but I didn't find anything in the article about that. He also said he didn't experience the frame rate drops to 11 in the same scene as in the video. Maybe the day-one patch fixed it?
 

krizzx

Junior Member
There is one weird thing about that analysis: a fellow Gaffer mentioned a day-one update, but I didn't find anything in the article about that. He also said he didn't experience the frame rate drops to 11 in the same scene as in the video. Maybe the day-one patch fixed it?

Oh yeah, I forgot that was mentioned before.

Though I'm sticking with the PC version. It's great that the Wii U version was declared the best even before the frame rate issues were patched, but none of the console versions have shown enough to make me choose them over the PC.

I wonder if they patched the load times as well. They should not exist.

EDIT: Also, I forgot that we should keep in mind one critical fact about the Wii U version: it is also outputting video to the Wii U GamePad.
 

fred

Member
Weird that there hasn't been a review of the Wii U version yet though. At least, there aren't any listed on Metacritic yet.
 

krizzx

Junior Member
Huh? It says MEM1 is 32MB in both diagrams.

It does? I must have misread. I thought the first diagram http://www.vgleaks.com/wp-content/uploads/2013/05/wii_u_mem.jpg listed the 1GB reserved for the OS as MEM1, right under the 1GB listed for games as MEM2, since they were both portrayed as the same size graphically. Though the first diagram does not list MEM1's size at all.

I was unaware that system files were loaded into the eDRAM on the GPU as well. I thought that was done purely in the reserved 1GB of RAM. That also runs counter to what Shin'en said about having full access to the eDRAM, if it has system-reserved files in it. Now I am confused.

I need to go reread it. I mostly just analyzed the diagrams and read the summary at the end.
We have 2 different memory zones: MEM1 and MEM2. All the application code and data are placed in MEM2. The system reserves all of the memory except for the area allocated for the application. The system reserved area (another Gigabyte) is not available to applications (although this can change in the future and maybe it will allocate 512 MB more for applications).

The bolded is what threw me off. I thought that was referring to MEM1, since the system data is listed in MEM1 and it is stated to be the system reserved area and "another Gigabyte".
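For anyone else trying to keep the zones straight, here's a rough sketch (Python, purely illustrative) of how those diagrams are usually read, using the commonly cited figures (32MB of eDRAM for MEM1, 2GB of DDR3 split evenly between the application and the system reservation) rather than anything official:

# Rough memory map as the leaked diagrams are commonly interpreted.
# Sizes are the widely cited figures, not official documentation.
MEM1_EDRAM_MB  = 32                           # on-GPU eDRAM pool
MEM2_TOTAL_MB  = 2048                         # external DDR3
MEM2_APP_MB    = 1024                         # area allocated to the application
MEM2_SYSTEM_MB = MEM2_TOTAL_MB - MEM2_APP_MB  # "another Gigabyte", system reserved

print(f"MEM1 (eDRAM): {MEM1_EDRAM_MB} MB")
print(f"MEM2 (DDR3): {MEM2_APP_MB} MB application + {MEM2_SYSTEM_MB} MB system reserved")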
 
Aren't the load times down to the fact that developers can't include an installation for the Wii U versions?

I wonder why Nintendo went with flash memory over normal HDDs - cost, I take it? Even a 100GB HDD would have allowed developers to install games, and installed games also run better, do they not?

Wonderful 101 also has some pretty long loading times between missions.
 

QaaQer

Member
Aren't the load times down to the fact that developers can't include an installation for the Wii U versions?

I believe so. And it goes to show how much Nintendo values 3rd party games and system performance.

I wonder why Nintendo went with flash memory over normal HDDs - cost, I take it?

100% cost related.

Even a 100GB HDD would have allowed developers to install games, and installed games also run better, do they not?

Can you even get HDDs that small?

Wonderful 101 also has some pretty long loading times between missions.

:-(

Still buying it tho.
 
Post. Of. The. Week. Already.

Very well said mate. I'd also like to add (and I've said it before) that Sonic Generations and Sonic Lost World would join Bayonetta 1/2 as a fair comparison. The difference between those two pairs of games is night and day despite the usual suspects here being bizarrely in denial.

I don't buy his reasoning on Bayonetta. Again, Bayonetta 2 looks better, but it looks better in the way we generally expect a sequel to look; there are plenty of action games that have come out since 2010 that look much better than the original. Shit, even Metal Gear Rising, as an example.

You can't just dismiss those other games; consumers won't make the distinction you are making, and they aren't going to see the leap.
 
I think the issue for me in this case is that this would normally be a concern for a new line of GPUs on a new process. We know the process for Latte is mature, whatever it may be. And we also know that, other than the eMemory, there's nothing that exotic about the architecture. So I don't see a high number of bad chips coming from those wafers.

For me, I don't understand the TDP impact issue of a console-designed GPU vs a binned GPU of similar specs. IMO it comes off like saying the former would automatically have a higher TDP, when I would think the opposite is true. I also still believe there is merit to the other points I made.

Of course it would be more of an issue at first. I think this has a lot to do with the long lifespan blu is talking about. I would hope that by year 5 they would have perfected the manufacturing and be getting great yields.

We can't assume that Latte's manufacturing is all that easy, however. And certainly not on Day 1. Remember, we are talking Renesas, firstly - not TSMC, who have been doing this constantly for years now. Also, even though it may be based on R700, it is not the same chip. The way the blocks are configured is entirely different, which means the inter-chip communication has likely been reworked. There is probably extra silicon throughout for Wii BC. And the inclusion of the eDRAM, in addition to putting the GPU on an MCM with an IBM CPU, is also likely to be a somewhat tricky process. The Iwata Asks seems to indicate it was. The last thing they would need throughout all this is a shader core blowing out.

I'm not quite sure I follow your second paragraph. A console TDP wouldn't necessarily be higher - not higher than the majority of parts in a graphics card yield, at least. But they would want yields much higher, and thus they would wisely refrain from packing too much on or clocking too high. There might be Lattes out there capable of well over 550 MHz, but we'll never know.
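On the yield point specifically: the usual back-of-envelope way to reason about how die size and process maturity drive yields is a first-order Poisson defect model, yield = exp(-area * defect_density). This is a generic textbook sketch (Python), not anything specific to Renesas or Latte, and the defect densities below are placeholders chosen only to show the shape of the trade-off:

# Generic first-order Poisson yield model: yield = exp(-die_area * D0).
# Area and defect densities are illustrative placeholders, not fab data.
from math import exp

def poisson_yield(die_area_cm2: float, d0_defects_per_cm2: float) -> float:
    return exp(-die_area_cm2 * d0_defects_per_cm2)

die_area_cm2 = 1.45  # roughly the reported ~146 mm^2 Latte die, in cm^2
for d0 in (0.2, 0.5, 1.0):  # lower D0 = more mature process
    print(f"D0 = {d0:.1f}/cm^2 -> yield ~ {poisson_yield(die_area_cm2, d0):.0%}")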


They are not. Embedded parts and consumer parts have entirely different lifespans. How do I know that? I've spent ~6 years of my career working with AMD's (and other vendors') embedded parts.

Hmmm, you sure? Looking at the e4690 that bg referenced, for instance: it is built on the RV730 architecture, which was used in the desktop line and two generations of mobile lines. The Mobility Radeon HD 550v wasn't released until May 2010. So there was time to get yields up before the e4690 released, and also still a little time to use the duds afterwards.

I'm thinking the lifespan could still be a byproduct of being in the top % of a yield plus modest clocking (and of course yields would increase over time, time which Latte has not yet had). I'm also wondering how much space has to do with lifespan. Laptop cards are stuck in confined quarters. Most casino machines and medical equipment I've seen have a good amount of breathing room. I'm just guessing here, but it seems like it might come into play.

I'm a bit confused as to how you can say they wouldn't be binned according to power/performance. Looking at the comparison tables in the links, that seems to be the main difference, besides a couple of display ports and HDCP keys. The e6760 is referred to as the "high end" chip. What am I missing?
 
Aren't the load times down to the fact that developers can't include an installation for the Wii U versions?

I wonder why Nintendo went with flash memory over normal HDDs - cost, I take it? Even a 100GB HDD would have allowed developers to install games, and installed games also run better, do they not?

Wonderful 101 also has some pretty long loading times between missions.

I think it's more due to form factor that they stuck with flash. Where would you stick a HDD in that case? If you read the Iwata Asks on the console, they actually started with the case design and then designed the hardware around what would fit inside it - quite...curious a decision.
 
I was unaware that system files were loaded in the eDRAM on the GPU as well. I thought that was done purely in the reserved 1GB of RAM. That also runs counter to what Shin'en said about having full acces to the eDRAM because it has system reserved files in it. Now I am confused.

If we go by that diagram, it doesn't look like those system files on the bottom are included in MEM1. Might be MEM0. *shrugs*
 
It may be backwards compatibility as well. They found solutions for the hardware structure that allowed the Wii U to actually function as a Wii when necessary. Having flash memory may have been a facilitator in that process.
 

QaaQer

Member
Most casino machines and medical equipment I've seen have a good amount of breathing room.

You must lead an interesting life.

I think it's more due to form factor that they stuck with flash. Where would you stick a HDD in that case? If you read the Iwata Asks on the console, they actually started with the case design and then designed the hardware around what would fit inside it - quite...curious a decision.

We aren't talking MacBook Air levels of squashing here. There is room for a 2.5" drive in there.

http://www.ifixit.com/Teardown/Nintendo+Wii+U+Teardown/11796/1
 

fred

Member
I don't buy his reasoning on Bayonetta. Again, Bayonetta 2 looks better, but it looks better in the way we generally expect a sequel to look; there are plenty of action games that have come out since 2010 that look much better than the original. Shit, even Metal Gear Rising, as an example.

You can't just dismiss those other games; consumers won't make the distinction you are making, and they aren't going to see the leap.

The solid 60fps framerate, the fact that it's also streaming the same images to the GamePad at the same framerate at the same time, and the considerably higher poly count - just look at the Gomorrah boss fight to see this - the difference between the two games speaks volumes.

It's going to be interesting to see the Iwata Asks about Bayonetta 2 closer to launch. That Gomorrah model and animation is particularly impressive; who knows... they might even be using tessellation for it?
 
I don't understand half of the things in this thread.
I can only judge from what I see with my own eyes.
Having played Pikmin 3 and W101 over the weekend, I can say that there is stuff I have never seen on PS360.
Pikmin 3 especially blew me away.
The image quality of W101 is top notch.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Hmmm, you sure? Looking at the e4690 that bg referenced, for instance: it is built on the RV730 architecture, which was used in the desktop line and two generations of mobile lines. The Mobility Radeon HD 550v wasn't released until May 2010. So there was time to get yields up before the e4690 released, and also still a little time to use the duds afterwards.
That's the point - embedded parts are never at the cutting edge of the given architecture lineup (i.e. never the quantitative flagships, clock- or transistor-wise), nor fab node leaders - basically they are released at a stable design point and start their life from there. You could say they are branched off of a consumer part, but that's about it. Embedded parts (particularly AMD's GPUs) are usually "binned" by 'good' / 'no-good' - the yields are good enough that the rest can be covered by the unit pricing. Basically, there is no need for performance-rated bins.

I'm thinking the lifespan could still be a byproduct of being in the top % of a yield plus modest clocking (and of course yields would increase over time, time which Latte has not yet had). I'm also wondering how much space has to do with lifespan. Laptop cards are stuck in confined quarters. Most casino machines and medical equipment I've seen have a good amount of breathing room. I'm just guessing here, but it seems like it might come into play.
No no, by lifespan I'm referring to the product availability lifespan (for the parts we're discussing here it's 5 years). As regards the life expectancy of the end product, it's the other way round from what you suggest above - embedded parts are often subjected to much harsher conditions than most laptop components could ever be. Trust me, some of the equipment those little parts end up in is supposed to live for years under tough conditions most laptops would not survive for even a month.

I'm a bit confused as to how you can say they wouldn't be binned according to power/performance. Looking at the comparison tables in the links, that seems to be the main difference, besides a couple of display ports and HDCP keys. The e6760 is referred to as the "high end" chip. What am I missing?
"Good functionality" as in functioning blocks. The E6460 could be an E6760 with disabled blocks. Both parts are specced at the exact same clocks, though. I.e. each and every proto-E6x60, assuming they start life as one die, has to hit the specced clocks at the specced TDP. Surely disabling blocks could help 'half-good' protos live a 'full life' as E6460s at a relaxed TDP, but that's about it - there are no half-clocked E6760s, and those which don't meet the E6460 criteria (2 functional SIMD units at 800MHz, at least 4 functional Eyefinity pipelines, etc.) are disposed of. Even if binned that way, that's some quite rigid binning - millions upon millions of parts make it through not because of some fine-grained discrimination process (which is what clock grades normally do), but because of the maturity of the production itself.
 

USC-fan

Banned
I really feel the debate is pretty moot at this point, if the only options on the table are 160 vs 320. There is just no way there are 320 ALUs on that die. We have data on what size the blocks have to be at 40/45nm.

Every bit of data points to 160.
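For reference, the 160-vs-320 split is exactly where the 176/352 GFLOPS figures quoted earlier in the thread come from. A quick sketch (Python), assuming the commonly reported ~550 MHz GPU clock and 2 FLOPs per ALU per cycle (one multiply-add):

# Peak-throughput arithmetic behind the 176 vs 352 GFLOPS figures.
# Assumes the commonly reported ~550 MHz clock and 2 FLOPs/ALU/cycle (MADD).
GPU_CLOCK_GHZ = 0.550
FLOPS_PER_ALU_PER_CYCLE = 2

for alu_count in (160, 320):
    gflops = alu_count * FLOPS_PER_ALU_PER_CYCLE * GPU_CLOCK_GHZ
    print(f"{alu_count} ALUs -> {gflops:.0f} GFLOPS peak")
# 160 ALUs -> 176 GFLOPS peak
# 320 ALUs -> 352 GFLOPS peak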
 
I really feel the debate is pretty moot at this point, if the only options on the table are 160 vs 320. There is just no way there are 320 ALUs on that die. We have data on what size the blocks have to be at 40/45nm.

Every bit of data points to 160.

So you are done here?
 
You must lead an interesting life.

And I only drink Dos Equis.

That's the point - embedded parts are never at the cutting edge of the given architecture lineup (i.e. never the quantitative flagships, clock- or transistor-wise), nor fab node leaders - basically they are released at a stable design point and start their life from there. You could say they are branched off of a consumer part, but that's about it. Embedded parts (particularly AMD's GPUs) are usually "binned" by 'good' / 'no-good' - the yields are good enough that the rest can be covered by the unit pricing. Basically, there is no need for performance-rated bins.


No no, by lifespan I'm referring to the product availability lifespan (for the parts we're discussing here it's 5 years). As regards the life expectancy of the end product, it's the other way round from what you suggest above - embedded parts are often subjected to much harsher conditions than most laptop components could ever be. Trust me, some of the equipment those little parts end up in is supposed to live for years under tough conditions most laptops would not survive for even a month.


"Good functionality" as in functioning blocks. The E6460 could be an E6760 with disabled blocks. Both parts are specced at the exact same clocks, though. I.e. each and every proto-E6x60, assuming they start life as one die, has to hit the specced clocks at the specced TDP. Surely disabling blocks could help 'half-good' protos live a 'full life' as E6460s at a relaxed TDP, but that's about it - there are no half-clocked E6760s, and those which don't meet the E6460 criteria (2 functional SIMD units at 800MHz, at least 4 functional Eyefinity pipelines, etc.) are disposed of. Even if binned that way, that's some quite rigid binning - millions upon millions of parts make it through not because of some fine-grained discrimination process (which is what clock grades normally do), but because of the maturity of the production itself.

I get your points, blu. So you don't think that the 450 MHz laptop part I mentioned could possibly be some of the runoff from the embedded line?

Granted, I still don't think they are a good basis for comparison, for the reasons I mentioned to bg in that same post. I don't want to belabor the issue any more. I will accept the possibility of them cramming 320 shaders into Latte, looking at nothing other than the power draw. That does not mean that I find it likely, however.
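Since power draw is the one thing being leaned on here, a back-of-envelope of what the wall figure actually leaves for the GPU might be useful. Every number below is a rough placeholder (a reported in-game wall draw in the low-to-mid 30s of watts, a guessed PSU efficiency, guessed budgets for the other components) - it illustrates the method, not a measurement:

# Back-of-envelope GPU power budget from wall draw (Python).
# All numbers are rough placeholders for illustration, not measurements.
wall_draw_w = 33.0      # reported in-game wall draw, roughly
psu_efficiency = 0.85   # guess
dc_power_w = wall_draw_w * psu_efficiency

other_parts_w = {"cpu": 5.0, "ddr3": 2.0, "disc+flash": 3.0, "wireless+misc": 3.0}  # guesses
gpu_budget_w = dc_power_w - sum(other_parts_w.values())
print(f"~{dc_power_w:.1f} W after the PSU, leaving ~{gpu_budget_w:.1f} W for the GPU/eDRAM")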
 
Of course it would be more of an issue at first. I think this has a lot to do with the long lifespan blu is talking about. I would hope that by year 5 they would have perfected the manufacturing and be getting great yields.

We can't assume that Latte's manufacturing is all that easy, however. And certainly not on Day 1. Remember, we are talking Renesas, firstly - not TSMC, who have been doing this constantly for years now. Also, even though it may be based on R700, it is not the same chip. The way the blocks are configured is entirely different, which means the inter-chip communication has likely been reworked. There is probably extra silicon throughout for Wii BC. And the inclusion of the eDRAM, in addition to putting the GPU on an MCM with an IBM CPU, is also likely to be a somewhat tricky process. The Iwata Asks seems to indicate it was. The last thing they would need throughout all this is a shader core blowing out.

I'm not quite sure I follow your second paragraph. A console TDP wouldn't necessarily be higher - not higher than the majority of parts in a graphics card yield, at least. But they would want yields much higher, and thus they would wisely refrain from packing too much on or clocking too high. There might be Lattes out there capable of well over 550 MHz, but we'll never know.

It still sounds like you're making the yield issue bigger than it should be, especially when looking at the potential capability range of Latte. After all, we aren't talking about a 1000+ ALU part on a 28nm fab. Also, we still don't know for sure how much, if any, extra silicon is dedicated to Wii components, as I took their comment to mean there was none (eMemory excluded). I don't see how the change in the layout is going to alter the interconnects so dramatically that they become something so unique it would lead to problems manufacturing a high number of quality chips. The MCM shouldn't have much impact on how the GPU is made. It seemed to me the bigger issue with the MCM was that two different companies were involved with the CPU and GPU. The eDRAM IMO would be the primary concern for yields, and they aren't doing anything special with that either in regards to the layout.

But I could make a sales joke along the lines of them needing yield issues at this point. :p

And just a reminder, I'm only discussing the validity of using TDP and binning to draw a final conclusion. That's not to say it isn't a 160 part.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
I really feel the debate is pretty moot at this point, if the only options on the table are 160 vs 320. There is just no way there are 320 ALUs on that die. We have data on what size the blocks have to be at 40/45nm.

Every bit of data points to 160.
I thought you had everything figured out many WUSTs ago, from the solidly estimated TDP measured via a reliable Kill-A-Watt and the fact that non-console embedded GPUs (in contrast to console embedded GPUs) are binned to hell and back... Foolish me.
 

USC-fan

Banned
I thought you had everything figured out many WUSTs ago, from the solidly estimated TDP measured via a reliable Kill-A-Watt and the fact that non-console embedded GPUs (in contrast to console embedded GPUs) are binned to hell and back... Foolish me.

Yeah, you were pretty foolish in those WUST threads.
 
It still sounds like you're making the yield issue bigger than it should be, especially when looking at the potential capability range of Latte. After all, we aren't talking about a 1000+ ALU part on a 28nm fab. Also, we still don't know for sure how much, if any, extra silicon is dedicated to Wii components, as I took their comment to mean there was none (eMemory excluded). I don't see how the change in the layout is going to alter the interconnects so dramatically that they become something so unique it would lead to problems manufacturing a high number of quality chips. The MCM shouldn't have much impact on how the GPU is made. It seemed to me the bigger issue with the MCM was that two different companies were involved with the CPU and GPU. The eDRAM IMO would be the primary concern for yields, and they aren't doing anything special with that either in regards to the layout.

But I could make a sales joke along the lines of them needing yield issues at this point. :p

And just a reminder, I'm only discussing the validity of using TDP and binning to draw a final conclusion. That's not to say it isn't a 160 part.

Just to be clear, when talking about the effect of Wii BC, I'm not talking about adding Wii components, but rather about the tinkering done to the Radeon parts (which still likely amounts to adding transistors) to get them to work in Wii mode, as was described in the Iwata Asks.

The truth is we're both just guessing here. But I'd take the conservative and oddly out-of-sync clocks of both the GPU and CPU to indicate that they did not have much breathing room before TDP rose to unacceptable levels. For instance, if they could have bumped the GPU clock just a bit more, it would have been at 1/2 the CPU clock, as in the PS4 design. If we look at the supposed dev kit GPU clocks (400 MHz, 450 MHz) and the rumors of overheating, what we are seeing now seems to be the absolute limit they could hit and still reach good yields.
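Spelling out the clock argument as arithmetic, using the commonly reported figures (the 1243.125 MHz CPU / 549.999 MHz GPU clocks attributed to marcan's findings, and the PS4's widely reported 1.6 GHz CPU / 800 MHz GPU):

# Clock-ratio arithmetic with the commonly reported figures (Python).
wiiu_cpu_mhz, wiiu_gpu_mhz = 1243.125, 549.999
ps4_cpu_mhz, ps4_gpu_mhz = 1600.0, 800.0

print(f"Wii U CPU/GPU ratio: {wiiu_cpu_mhz / wiiu_gpu_mhz:.2f}")       # ~2.26
print(f"PS4 CPU/GPU ratio: {ps4_cpu_mhz / ps4_gpu_mhz:.2f}")           # 2.00
print(f"GPU clock for a clean 1/2 ratio: {wiiu_cpu_mhz / 2:.1f} MHz")  # ~621.6 vs the actual ~550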
 
The solid 60fps framerate, the fact that it's also streaming the same images to the GamePad at the same framerate at the same time, and the considerably higher poly count - just look at the Gomorrah boss fight to see this - the difference between the two games speaks volumes.

It's going to be interesting to see the Iwata Asks about Bayonetta 2 closer to launch. That Gomorrah model and animation is particularly impressive; who knows... they might even be using tessellation for it?

Iwata Asks for Bayonetta 2 can be found HERE ;).

It's rather brilliant. Post Of The Year Stuff... Or even the Post Of All The Internets!!

I agree with what you wrote. The differences are clear, and if anybody still doubts it, they should see this breakdown by 3Dude on another site. If this were being released on PS360 consoles, it would be downported and those versions wouldn't be able to keep up. I totally reject the "2010 doesn't count" moving of the goalposts here, when earlier in this thread people were perfectly happy to throw around Just Cause 2 (a title released in North America in the same year, and two months after Bayonetta) in relation to Monolith Soft's Project X in their desperate reach to play down anything and everything the Wii U had going for it over PS360 consoles. 2010 meant that the X360 had been out for almost 4 1/2 years, and the PS3 for about 3 1/2. Please note those time frames, realise that developers were far more familiar with PS360 consoles back then than they are with the Wii U at this point, and then understand why I think the idea that it somehow "doesn't count" is complete BS.

Bayonetta 2 will be released in 2014 - at that point, up to 1 1/2 years into the Wii U's life, and within a third of the X360's time. We haven't seen the rest of the game, and there is plenty of time to polish it (after all, the finished article of The Wonderful 101 looks better than when it was revealed at E3 2012...). Even with Project X, one can see how it's progressed since January - the E3 trailer shows marked differences.

Another point to add, which many people fail to do, is that one must allow for the fact that this is Nintendo's first step into the HD development era - their first efforts are already showing improvements on the 7th generation at this stage. Certainly, all of the signs are encouraging. If you can't see the improvements now, then let us come back in 2016, 2017 and 2018, and then you will have seen the noticeable steps. Or even 2020 - that way, one can say that the Wii U had eight years, just as the X360 has. But it doesn't need eight years; it's already there.
 
http://www.youtube.com/watch?v=vFp7NlksL3k&feature=c4-overview&list=UU__Oy3QdB3d9_FHO_XG1PZg

This guy on reviewtechusa has some pretty good points.

Not that I'm not excited for next-gen, but isn't it supposed to produce 1080p at 60 FPS right out of the box? If I just think about it from a logical perspective, the PS4 and Xbox One have the same PC x86 architecture. If developers are already struggling, with the mindset that the consoles are supposed to last for another decade, how is it going to be in 2 or 3 years?
It's not like the last generation, where the consoles were already 2-3 years ahead of PC GPUs; it's the opposite! Or am I overlooking something...

This is a little bit off-topic, but I understand why Iwata was not betting on better graphics. It was simply too expensive to get ahead of PC GPUs.
 
Just to be clear, when talking about the effect of Wii BC, I'm not talking about adding Wii components, but rather about the tinkering done to the Radeon parts (which still likely amounts to adding transistors) to get them to work in Wii mode, as was described in the Iwata Asks.

I guess that comment is going to be subjective then, because I don't even get "adding transistors" from "adjusting" in that statement.

The truth is we're both just guessing here. But I'd take the conservative and oddly out-of-sync clocks of both the GPU and CPU to indicate that they did not have much breathing room before TDP rose to unacceptable levels. For instance, if they could have bumped the GPU clock just a bit more, it would have been at 1/2 the CPU clock, as in the PS4 design. If we look at the supposed dev kit GPU clocks (400 MHz, 450 MHz) and the rumors of overheating, what we are seeing now seems to be the absolute limit they could hit and still reach good yields.

Of course we are. That's what I'm getting at. No one can draw a proper conclusion with that info.

The point you make here would fit better with the argument for a 320 part than a 160 part, IMO. And the overheating was also supposedly with a 640-part GPU.
 

Mr_B_Fett

Member
The truth is we're both just guessing here. But I'd take the conservative and oddly out-of-sync clocks of both the GPU and CPU to indicate that they did not have much breathing room before TDP rose to unacceptable levels. For instance, if they could have bumped the GPU clock just a bit more, it would have been at 1/2 the CPU clock, as in the PS4 design. If we look at the supposed dev kit GPU clocks (400 MHz, 450 MHz) and the rumors of overheating, what we are seeing now seems to be the absolute limit they could hit and still reach good yields.

Not sure how you can correlate yield levels with a cooling problem in the dev kits. Not to mention those early dev kits were not using the MCM, but rather had a discrete GPU. As far as I'm aware (and I could very well be wrong), there were no overheating issues with the later kits using the "real" hardware.
 
Just to be clear, when talking about the effect of Wii BC, I'm not talking about adding Wii components, but rather about the tinkering done to the Radeon parts (which still likely amounts to adding transistors) to get them to work in Wii mode, as was described in the Iwata Asks.

The truth is we're both just guessing here. But I'd take the conservative and oddly out-of-sync clocks of both the GPU and CPU to indicate that they did not have much breathing room before TDP rose to unacceptable levels. For instance, if they could have bumped the GPU clock just a bit more, it would have been at 1/2 the CPU clock, as in the PS4 design. If we look at the supposed dev kit GPU clocks (400 MHz, 450 MHz) and the rumors of overheating, what we are seeing now seems to be the absolute limit they could hit and still reach good yields.


Uuh... what? It's pretty unlikely that the very different architectures of the PS4 and Wii U would randomly result in the same "perfect" CPU/GPU clock ratio.

Edit: Tbh, I'd say the passage after the bolded doesn't make much sense either. Dev kits probably didn't use the final Wii U hardware to start with. Also, I'm not sure where that dev kit overheating / yields correlation is coming from.
 