
WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis


z0m3le

Banned
Do numbers go that big?!?! Holy crap.

28 compute units x 64 ALUs in each = 1792 stream processors (or 1792 ALUs)

1792 ALUs x 2 x 1.2GHz = 4.3 TFLOPs, which is what my card at home is running at... Oddly enough it's far quieter than my old HD 5870 at stock clocks while both were being maxed out by bitcoin mining earlier this year (the mining effectively paid for the card)
http://www.newegg.com/Product/Product.aspx?Item=N82E16814125414 that is the card I have. It comes with Crysis 3, Blood Dragon, Tomb Raider and Bioshock Infinite... pretty good deal for $300 IMO (that was the price when I bought it, with free shipping and no tax)
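A quick sanity check of that napkin math, as a minimal Python sketch (the x2 assumes one fused multiply-add, i.e. 2 flops, per ALU per clock, which is how AMD quotes peak throughput):

# 28 GCN compute units at 64 ALUs each, overclocked to 1.2GHz
compute_units = 28
alus_per_cu = 64
clock_ghz = 1.2

alus = compute_units * alus_per_cu       # 1792 stream processors
tflops = alus * 2 * clock_ghz / 1000     # 2 flops per ALU per clock
print(alus, round(tflops, 2))            # 1792, 4.3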
 

A More Normal Bird

Unconfirmed Member
28 compute units x 64 ALUs in each = 1792 stream processors (or 1792 ALUs)

1792 ALUs x 2 x 1.2GHz = 4.3 TFLOPs, which is what my card at home is running at... Oddly enough it's far quieter than my old HD 5870 at stock clocks while both were being maxed out by bitcoin mining earlier this year (the mining effectively paid for the card)
http://www.newegg.com/Product/Product.aspx?Item=N82E16814125414 that is the card I have. It comes with Crysis 3, Blood Dragon, Tomb Raider and Bioshock Infinite... pretty good deal for $300 IMO (that was the price when I bought it, with free shipping and no tax)

But what about your electricity bill? ;)
 

z0m3le

Banned
But what about your electricity bill? ;)

It's actually only drawing ~180 watts. I live in Seattle, and while mining for 2 months I paid less than $10 extra on my 60-day electric bill. My entire PC runs on a 600-watt PSU and only uses about half of that. It's probably because I didn't have to change any voltage settings to reach my overclocks.
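For what it's worth, the bill claim roughly checks out on a napkin. This sketch assumes Seattle's then-cheap residential rate of about $0.05/kWh and that mining adds ~130W over the machine's normal draw; both numbers are my guesses, not from the post:

extra_watts = 130            # assumed extra draw while mining, not from the post
rate_per_kwh = 0.05          # assumed Seattle residential rate, $/kWh
hours = 60 * 24              # a 60-day billing cycle, mining 24/7

extra_kwh = extra_watts / 1000 * hours
print(round(extra_kwh * rate_per_kwh, 2))   # ~9.36 dollars: under $10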
 
28 compute units x 64 ALUs in each = 1792 stream processors (or 1792 ALUs)

1792 ALUs x 2 x 1.2GHz = 4.3 TFLOPs, which is what my card at home is running at... Oddly enough it's far quieter than my old HD 5870 at stock clocks while both were being maxed out by bitcoin mining earlier this year (the mining effectively paid for the card)
http://www.newegg.com/Product/Product.aspx?Item=N82E16814125414 that is the card I have. It comes with Crysis 3, Blood Dragon, Tomb Raider and Bioshock Infinite... pretty good deal for $300 IMO (that was the price when I bought it, with free shipping and no tax)

Just out of curiosity, would a 7950 overclocked that high outperform a 7970 overclocked to 1GHz, or would I be better off just getting the 7970?
 

z0m3le

Banned
Just out of curiosity, would a 7950 overclocked that high outperform a 7970 overclocked to 1GHz, or would I be better off just getting the 7970?

In most situations, sure, but there are likely other things that need to be taken into account. Memory speed could be a factor, for instance, but you could probably just get the 7970 and overclock it to 1.2GHz as well... nearly hitting 5 TFLOPs... I mean, my card doesn't go above 70°C, so I'd just get the same brand and go with that if I were looking for the best possible performance.
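Paper numbers for that comparison, using the stock ALU counts for both cards (1792 for the 7950, 2048 for the 7970). Real results also hinge on memory bandwidth, so treat these as upper bounds rather than a guaranteed ranking:

def tflops(alus, ghz):
    # peak single precision: 2 flops (one FMA) per ALU per clock
    return alus * 2 * ghz / 1000

print(round(tflops(1792, 1.2), 2))   # 7950 @ 1.2GHz -> 4.3
print(round(tflops(2048, 1.0), 2))   # 7970 @ 1.0GHz -> 4.1
print(round(tflops(2048, 1.2), 2))   # 7970 @ 1.2GHz -> 4.92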
 

Donnie

Member
Ok, maybe I misunderstood your conversation with BG. I'm not claiming to be an authority or expert, so please don't take this as being combative. The IGP line that went into Brazos was based on a modified R700 design, from what I've been reading. The changes to the shader design only came with the 69xx Cayman chips. I suppose it's not impossible that Wii U could have some of those characteristics, but I'm not sure that's what we're seeing here. I found an image through GIS that compares the ALUs on Brazos to Latte, and they are similar. I think it might lend some credence to the idea that Wii U is a 160 shader part, if I'm counting correctly.

JQJGf8G.jpg


EDIT: Sorry, meant 160 shader part, not 320.

Quite the opposite, the 40SP Brazos block is less than 10% larger than a Latte block when you measure only the shader parts (rather than memory banks). So if we assume Latte blocks are 20SP (a 160 shader GPU), then we'd have to wonder why it's over 90% as big as a 40SP Brazos block.

Of course there are theories here that could explain that. But there are also theories to explain why a 40SP Latte block could be ~10% smaller than a 40SP Brazos block. Either way, when you compare those block sizes, certainly no credence is lent to 20SPs (actually 32SP would fit much better).
 
Hi all, this thread has its ups and downs. I was liking the 160 ALU and threaded-design hypothesis discussion; I thought it was getting somewhere.

There has been some speculation mentioned here that the Wii U was supposed to be released in 2011 but wasn't, maybe because of the software lineup, problems with the streaming technology as someone said, or production capacity.

I was wondering, if this is the case, and given the contradicting statements saying the GPU is pretty standard vs. a custom design (or, as I understand it, a custom design with a pretty standard core as its base): when Nintendo realized they were going to have to release in 2012 instead of 2011, could they have taken the extra time to further modernize the GPU design? How feasible is that, considering it's based on a standard AMD design? It was extra time, after all, and they ended up releasing pretty close to the other competitors.
 

krizzx

Junior Member
Quite the opposite, the 40SP Brazos block is less than 10% larger than a Latte block when you measure only the shader parts (rather than memory banks). So if we assume Latte blocks are 20SP (a 160 shader GPU), then we'd have to wonder why it's over 90% as big as a 40SP Brazos block.

Of course there are theories here that could explain that. But there are also theories to explain why a 40SP Latte block could be ~10% smaller than a 40SP Brazos block. Either way, when you compare those block sizes, certainly no credence is lent to 20SPs (actually 32SP would fit much better).

Is it just me, or are the thinner extensions larger on Latte than they are on Brazos? Perhaps the proportions are simply being misinterpreted by some and it's actually 40SP. I do agree wholeheartedly that it makes no sense to call it 20SP based on a less-than-10% difference in size when the comparable 20SP chips are 45% smaller. That would be a tremendous waste of silicon, which would completely go against Iwata's statement on the design decisions.

Hi all, this thread has its ups and downs. I was liking the 160 ALU and threaded-design hypothesis discussion; I thought it was getting somewhere.

There has been some speculation mentioned here that the Wii U was supposed to be released in 2011 but wasn't, maybe because of the software lineup, problems with the streaming technology as someone said, or production capacity.

I was wondering, if this is the case, and given the contradicting statements saying the GPU is pretty standard vs. a custom design (or, as I understand it, a custom design with a pretty standard core as its base): when Nintendo realized they were going to have to release in 2012 instead of 2011, could they have taken the extra time to further modernize the GPU design? How feasible is that, considering it's based on a standard AMD design? It was extra time, after all, and they ended up releasing pretty close to the other competitors.

This is unlikely. The GPU was still a stock 4850 at that point in time. The Wii U also had less RAM at that point. The console was nowhere near ready to release in 2011. That is just presumptuous conjecture from whoever stated it. I've never known a console to be announced and on shelves in less than a year. The PS4 and Xbox3 will be the first to do so, so if anything, Sony and Microsoft are rushing their consoles out to compete with Nintendo.

Also, at that point in time, there was absolutely no word on the other 2 next-gen consoles, and Nintendo repeatedly stated that whatever Sony and Microsoft released did not concern them. Why do people keep retreading this ground?
http://beefjack.com/news/nintendos-iwata-we-dont-care-about-more-beef-consoles/
http://www.joystiq.com/2012/07/20/iwata-wii-us-timing-relative-to-competition-isnt-important-b
http://www.mobilenapps.com/articles/3212/20120721/iwata-wii-u-competing-microsoft-sony-apple.htm
The entire line of thought of Nintendo possibly making rushed/bad decisions for fear of something Sony or Microsoft was planning is just gratuitous wishful thinking from fanatics and their "console war" mentality. If anything, Nintendo established that they no longer cared to fight a console war (which they stated with the Wii as well). Cost and accessibility were their main goals this time.

I really don't get this constant attacking of Nintendo's hardware decisions by people and the game media. Nintendo made it clear that they have a different focus now, yet you still have people second-guessing their position. If this upsets them, then they should just buy something else. Every time someone makes any comment about the specs, people jump on Nintendo to bash them like insecure bullies. Rehashing the same moot arguments has started to get tiresome.
 
I really don't get this constant attacking of Nintendo's hardware decisions by people and the game media. Nintendo made it clear that they have a different focus now, yet you still have people second-guessing their position. If this upsets them, then they should just buy something else. Every time someone makes any comment about the specs, people jump on Nintendo to bash them like insecure bullies. Rehashing the same moot arguments has started to get tiresome.

Ok fair enough and valid point, not trying to have "console war mentality" though. I was just curious.

With the Wii, it seemed it was more powerful than the PS2, but almost nobody cared, not even Nintendo themselves.

So it seems this time is the same scenario, although in a better position I believe, because the GPU is somewhat standard to program for and has advanced features. It seems to heavily favor lighting, depth of field and tessellation. The last of those is yet to be seen.

The threaded design is really intriguing if they actually used it.
 

Meelow

Banned
http://nintendoenthusiast.com/17169...ling-our-doubts-about-shadow-of-the-eternals/

NE: Well, a lot of developers have given negative feedback in regards to Wii U being more like a current-gen platform. How has it fared so far with CryEngine 3 as you have been working with it? You seem to be pushing out some pretty looking graphics so far…

Paul: Yeah, CryEngine 3 fully supports Wii U and it’s been a great system to work with so far. And in regards to Shadow of the Eternals, beautiful visuals are just one part of the experience. There are many different pieces that all come together for this experience and the Wii U handles that complete package very well. So, we’ve had no problems so far and we’re not at all concerned about what the Wii U can handle.
 

ozfunghi

Member
Quite the opposite, the 40SP Brazos block is less than 10% larger than a Latte block when you measure only the shader parts (rather than memory banks). So if we assume Latte blocks are 20SP (a 160 shader GPU), then we'd have to wonder why it's over 90% as big as a 40SP Brazos block.

Of course there are theories here that could explain that. But there are also theories to explain why a 40SP Latte block could be ~10% smaller than a 40SP Brazos block. Either way, when you compare those block sizes, certainly no credence is lent to 20SPs (actually 32SP would fit much better).

If my memory serves me, Wsippel said that one of the guys from Chipworks said the GPU was on an "advanced TSMC 40nm process". Wsippel did some digging and made some interesting discoveries.
 

It is very tough to gauge everything from devs.

On one side you have EA and FB ditching the Wii U completely; we don't even know when all the tests were done and whether the tools were the latest ones. We don't know if they had the ONE CORE CPU issue when they tested.

On the other hand you have Shinen, Frozenbyte, Criterion, Precursor, Crytek and Rebellion with positive comments about the platform, and the comment that seems most contradictory (not sure if it's pure PR talk) is that most of them say they got their engine/game running fairly easily.
 

Schnozberry

Member
Quite the opposite, the 40SP Brazos block is less than 10% larger than a Latte block when you measure only the shader parts (rather than memory banks). So if we assume Latte blocks are 20SP (a 160 shader GPU), then we'd have to wonder why it's over 90% as big as a 40SP Brazos block.

Of course there are theories here that could explain that. But there are also theories to explain why a 40SP Latte block could be ~10% smaller than a 40SP Brazos block. Either way, when you compare those block sizes, certainly no credence is lent to 20SPs (actually 32SP would fit much better).

If the SIMD engine in the picture is 40SP for both chips, Brazos has two of those blocks and Latte has four. Brazos is an 80SP budget computing platform, and doubling the number of blocks would make Latte 160SP, if I'm looking at it right. That doesn't discount the earlier theory of a customized front end to make better use of the ALUs that are there. It's always possible that I'm looking at the photo wrong, though, and not counting the blocks correctly.
 

krizzx

Junior Member
It is very tough to gauge everything from devs.

On one side you have EA and FB ditching the Wii U completely; we don't even know when all the tests were done and whether the tools were the latest ones. We don't know if they had the ONE CORE CPU issue when they tested.

On the other hand you have Shinen, Frozenbyte, Criterion, Precursor, Crytek and Rebellion with positive comments about the platform, and the comment that seems most contradictory (not sure if it's pure PR talk) is that most of them say they got their engine/game running fairly easily.

I doubt that was the issue at all, though. Even if they were only using two cores, the GPGPU functionality would have easily made up for it and then some, assuming it was that big of a problem to begin with.

They stated as much themselves if you read closely. They ran it, it didn't work "how they liked", and they didn't bother to do anything more. To put it simply, they had no intention of getting it to work.

If the SIMD engine in the picture is 40SP for both chips, Brazos has two of those blocks and Latte has four. Brazos is an 80SP budget computing platform, and doubling the number of blocks would make Latte 160SP, if I'm looking at it right. That doesn't discount the earlier theory of a customized front end to make better use of the ALUs that are there. It's always possible that I'm looking at the photo wrong, though, and not counting the blocks correctly.

I think Brazos is 160SP, or 80x2.
 

Donnie

Member
If the SIMD engine in the picture is 40SP for both chips, Brazos has two of those blocks and Latte has four. Brazos is an 80SP budget computing platform, and doubling the number of blocks would make Latte 160SP, if I'm looking at it right. That doesn't discount the earlier theory of a customized front end to make better use of the ALUs that are there. It's always possible that I'm looking at the photo wrong, though, and not counting the blocks correctly.

Latte has 8 shader blocks, highlighted in red here:

c10234f5_d813301_13195mk1y.png


In the picture you showed, the top left is a standard 40SP Brazos block, and to its right is a single Latte shader block with its memory banks rearranged into an order similar to that of Brazos. At the bottom of the image is a single 40SP Brazos block split into two 20SP blocks (with a similar memory config per 20SP block as Latte); to its right are two full Latte shader blocks. They probably did that to illustrate that two Latte shader blocks are nearly twice as large as two similar 20SP Brazos blocks would be.

In my comparison (Brazos being less than 10% larger) I was comparing the two top images: a single Brazos 40SP shader block vs a single Latte shader block (of which there are 8 in Latte).
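For anyone keeping score, here's what the 8-block count implies for each of the per-block hypotheses being argued in this thread (the per-block SP counts are the competing guesses, not known values):

blocks = 8
for sp_per_block in (20, 32, 40):
    # total stream processors implied by each hypothesis
    print(sp_per_block, "SP/block ->", blocks * sp_per_block, "SPs total")
# 20 -> 160, 32 -> 256, 40 -> 320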
 

stanley1993

Neo Member
It is very tough to gauge everything from devs.

On one side you have EA and FB ditching the Wii U completely; we don't even know when all the tests were done and whether the tools were the latest ones. We don't know if they had the ONE CORE CPU issue when they tested.

On the other hand you have Shinen, Frozenbyte, Criterion, Precursor, Crytek and Rebellion with positive comments about the platform, and the comment that seems most contradictory (not sure if it's pure PR talk) is that most of them say they got their engine/game running fairly easily.

I thought this interview was the most interesting of the more recent interviews.
From this thread: http://www.neogaf.com/forum/showthread.php?t=559196

Silicon Studio comments on developing for the Wii U. Not sure if you guys have read it yet. They said, "...Wii U has very specific characteristics. Some game designers will like it. Some others will have a hard time to port their game. There are pros and cons. We are very close to Nintendo, so we were working on Wii U for a long time. We almost got the maximum performance with the hardware. Since we are working closely with the Nintendo support team they gave us a lot of useful information."
The answer is more interesting as a whole because they make a small comparison with the PS4.
 

Donnie

Member
Here's a direct comparison between a single Latte shader block vs half of the shader logic from a 40SP Brazos block (obviously we're only comparing shader logic here so ignore the yellow/blue memory banks):

Latte_Brazos.jpg


I've just used the image you posted, Schnozberry, to make the comparison; obviously the orange block in the middle is the 20SP Brazos shader logic (taken from the bottom-left part of that image).
 

Earendil

Member
That's what I was thinking, but the constant "the RAM bandwidth is crap" line from some people, when it's never been shown to be an issue, is so frustrating.

I think the frustrating part is that the people who complain about the RAM bandwidth have no experience with it. And no one who does have experience with it (developers) has said anything.
 

Schnozberry

Member
I think Brazos is 160SP, or 80x2.

Wikipedia lists all Brazos parts with 80:8:4 core configs. Not that it couldn't be wrong, but it doesn't seem like it, since other parts appear to match AMD's listings.

Latte has 8 shader blocks, highlighted in red here:

c10234f5_d813301_13195mk1y.png


In the picture you showed the top left is a standard 40SP Brazos block and to its right is a single Latte shader block with its memory banks rearranged into an order similar to that of Brazos. On the bottom of the image is a single 40SP Brazos block split into two 20SP blocks (with a similar memory config per 20SP block as Latte), to its right is two full Latte shader blocks. They probably did that to illustrate that two Latte shader blocks are nearly twice as large as two similar 20SP Brazos blocks would be.

In my comparison (Brazos being less than 10% larger) I was comparing the two top images, a single Brazos 40SP shader block vs a single Latte shader block (of which there are 8 in Latte).

Ok, I was just counting wrong. The 40SP block on the Brazos die is about 7490px in the image, and a single Wii U SIMD block is 5016px. That means a single Wii U SIMD block is about 33% smaller than a Brazos block, all other things being equal. Both have the same 32 blocks of SRAM per SIMD engine, but it's difficult to ascertain density just from eyeballing these pictures. Maybe somebody else with more expertise can weigh in and tell me where I'm wrong.
 

The_Lump

Banned
It was mainly for performance and capability analysis. Roundness is the most difficult thing to achieve in polygon graphics, and the poster was pointing out the roundness of the characters. The poster also asked if it could be using tessellation.

http://www.neogaf.com/forum/showpost.php?p=58156882&postcount=5313

I would not rule out the possibility of those characters being tessellated.


Oh ok, thanks. I didn't really read the last page - my bad!
 

Donnie

Member
Ok, I was just counting wrong. 40SP on the Brazos die is about 7490px on the image, and a single Wii U SIMD block is 5016px. That means a single Wii U SIMD block is about 33% larger than a Brazos block, all other things being equal. Both have the same 32 blocks of SRAM per SIMD engine, but it's difficult to ascertain density just from eyeballing these pictures. Maybe somebody else with more expertise can weigh in and tell me where I'm wrong.

I know very little about image editing and I'm just using the selection tool in Paint here, so forgive me if I've got something wrong :) But for me a Brazos block measures 124,614 pixels, vs 107,328 pixels for a Latte block:

brazoslatte2.jpg


Measuring just the shader logic in the same way comes out at about 74,586 vs 67,878:

brazoslattelogic.jpg
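Running the ratios on the pixel counts from this exchange (all of these are eyeballed selections from die shots, so give or take a few percent):

brazos_block, latte_block = 124614, 107328   # full blocks, Donnie's counts
brazos_logic, latte_logic = 74586, 67878     # shader logic only

print(round(brazos_block / latte_block, 3))  # ~1.161: Brazos block ~16% larger
print(round(brazos_logic / latte_logic, 3))  # ~1.099: logic only, <10% larger

# Schnozberry's earlier counts point the same direction; the ratio differs
# because his source images were normalized differently:
print(round(5016 / 7490, 2))                 # ~0.67, i.e. Latte ~33% smaller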
 

Schnozberry

Member
I know very little about image editing and I'm just using the selection tool in Paint here, so forgive me if I've got something wrong :) But for me a Brazos block measures 124,614 pixels, vs 107,328 pixels for a Latte block:

brazoslatte2.jpg


Measuring just the shader logic in the same way comes out at about 74,586 vs 67,878:

brazoslattelogic.jpg

Full disclosure, I was using numbers from Beyond3D, which I believe were normalized due to the vastly different sizes of the source images. I don't doubt your counts at all.
 

krizzx

Junior Member
Full disclosure, I was using numbers from Beyond3D, which I believe were normalized due to the vastly different sizes of the source images. I don't doubt your counts at all.

As knowledgeable as they are, when it comes to Nintendo products I wouldn't put much stock in info that comes from B3D. Their bias heavily overpowers their analysis. Half of the initial negative analysis about the Wii U came from them.

They are the ones who concluded that the Wii U had a problem with transparencies/alpha textures (some even saying it can't do them), which was already contradicted by Nintendo Land when they made it. They are the ones who said the system was bandwidth-starved and bottlenecked by its RAM, which was contradicted in a dev interview. I think a few others came from them as well. They pretty much champion most of the negative beliefs about Nintendo hardware dating back to the GC. They are quick to declare the maximum capabilities as low as they can. They are experienced, but they do very close-minded analysis. I honestly do not understand why.

As it stands, there are more clear similarities between Brazos and Latte, in design and performance, than with any other chip I have seen people stick it next to. The HD5550 (Redwood LE) may be an exception, being the only AMD GPU to perfectly match the specs from the initial analysis, but we don't have a die shot of that chip to do any analysis on. This is the best shot I could find; there is nothing you can do with it.
http://tpucdn.com/reviews/HIS/Radeon_HD_5550/images/gpu_small.jpg

I'm leaning towards it having no less than 256 SPs (32x8).

EDIT: Is Redwood the same as Llano? I would certainly feel dumb if it was. That would mean I've been looking at it this whole time.
http://www.brightsideofnews.com/print/2010/9/2/amd-shows-next-gen-chips-bulldozer-and-fusion.aspx
http://www.behardware.com/articles/878-2/amd-a10-5800k-and-a8-5600k-the-second-desktop-apu.html
 

TKM

Member
The Beyond3D forum has been much more even-handed and accurate than WUST.

The site's authors far overshot Latte's performance, though. They felt, reasonably IMO, that Nintendo would target the 4770 as the stretch goal and the 4670 as the floor on performance. If Nintendo had gone with a customized 4670 at 40nm, there would be no talk of the Wii U being just on par with PS3 and Xbox 360.
 

Donnie

Member
The Beyond3D forum has been much more even-handed and accurate than WUST.

The site's authors far overshot Latte's performance, though. They felt, reasonably IMO, that Nintendo would target the 4770 as the stretch goal and the 4670 as the floor on performance. If Nintendo had gone with a customized 4670 at 40nm, there would be no talk of the Wii U being just on par with PS3 and Xbox 360.

Well, Latte does have significantly more transistors than an HD4670. Even after you remove all the extra embedded memory it's still around 100mm² on a 40nm process, which should be at least 600M transistors. That's another reason I find it very hard to believe that it's an 8:8:160 GPU (the HD4670 is 16:32:320 with 514M transistors).

Again, there are plenty of theories on what custom hardware Nintendo may have put into the chip that could be using extra transistors. But they'd have to be pretty significant if we're talking about a chip that size using so few transistors for its texture units, ROPs and shader units.
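A rough check on that 600M figure, scaling the HD4670's known density (514M transistors in 146mm² at 55nm) by the ideal square-law shrink to 40nm; real shrinks fall short of ideal, so treat this as an upper-end estimate:

rv730_density = 514 / 146            # ~3.5M transistors per mm^2 at 55nm
scale = (55 / 40) ** 2               # ideal area scaling, 55nm -> 40nm
density_40nm = rv730_density * scale # ~6.7M per mm^2
print(round(density_40nm * 100))     # ~666M for ~100mm^2 of logic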
 

A More Normal Bird

Unconfirmed Member
I doubt that was the issue at all, though. Even if they were only using two cores, the GPGPU functionality would have easily made up for it and then some, assuming it was that big of a problem to begin with.

Easily? I'm not so sure about that. The GPU would probably be better suited to the code that Espresso struggled with, but even on a 320 ALU part the hit to graphics processing may have been severe. Not to mention that the GPU compute ability of pre-GCN AMD parts was far from perfect.
 

z0m3le

Banned
I know very little about image editing and I'm just using the selection tool in Paint here, so forgive me if I've got something wrong :) But for me a Brazos block measures 124,614 pixels, vs 107,328 pixels for a Latte block:
Measuring just the shader logic in the same way comes out at about 74,586 vs 67,878:

Latte_Brazos.jpg


When you take into account that Brazos is DX11 and SM5 while Latte's ALUs don't have the extra logic required for those things, it makes sense that Latte's ALUs would be quite a bit smaller as well. You COULD fit 320 ALUs in that block, or 256 ALUs plus extra logic for TEV. You also have to take into account that the 40nm process used here is likely denser than Brazos's, especially the advanced one we think they used.

As for Beyond3D, there are obviously people in that thread driving the thought process toward the lowest possible performance, landing at Xenos or slightly above it.

http://www.beyond3d.com/content/articles/118 I think this speaks for itself. Pay a lot of attention to the first paragraph of the conclusion:

"To conclude this article, we would like to emphasize, once again, that as usual with modern day Nintendo, very little is officially known on the technical side of their new console entry. Therefore, the speculation presented in this article is not to be mistaken with other more factual Beyond3D entries."

Speculation is hopefully an educated guess based on known parameters (on Beyond3D it almost always is, or gets called out).

The problem with Wii U is that a lot of the parameters we work from are not known, just the best guesses at the time. Beyond3D's early E3 conclusion was that the Wii U's casing wasn't big enough to offer much graphical performance over the 360's, which is at best very foolish to assume, since the GPU could easily have been a 32nm part offering 640 ALUs or more and still fit under the assumed-at-the-time TDP of 100 watts.

What I mean to say with all of this is that Beyond3D is a tool. The experts there might have a bit more insight than the experts here, although Blu, Wsippel and others from this very thread have all heavily contributed to that thread, more than some of the Beyond3D members who don't post here. I would not be so quick to dismiss claims made in this thread just because they were not posted on Beyond3D first. In fact this thread has continuously been referenced there; Wii U's CPU benchmarks came from blu posting here first, IIRC. I personally think this thread has done more useful speculation about Wii U hardware than Beyond3D's, though this one goes off topic more often.
 
Well, Latte does have significantly more transistors than an HD4670. Even after you remove all the extra embedded memory it's still around 100mm² on a 40nm process, which should be at least 600M transistors. That's another reason I find it very hard to believe that it's an 8:8:160 GPU (the HD4670 is 16:32:320 with 514M transistors).

Again, there are plenty of theories on what custom hardware Nintendo may have put into the chip that could be using extra transistors. But they'd have to be pretty significant if we're talking about a chip that size using so few transistors for its texture units, ROPs and shader units.



I might be confused, but I'm pretty sure HD4670 only has 8 ROPs.



Looking at the Brazos/Latte comparison shots, I still don't see how Latte could possibly have only 160 SPs. I mean, I see where the point is coming from, but I just don't see how it could possibly be true. ~90% as big, but half the shader units? Sounds fishy to me.
There is also RV830 (Redwood, HD5xxx generation), which has a die size of 104mm² (probably roughly Latte's size, considering the eDRAM) and 400 SPs.
I'm sure we've been over this, but nevertheless I just don't see why Nintendo would waste so much die space on a 160SP GPU. It just doesn't make all that much sense imo.
 

krizzx

Junior Member
I might be confused, but I'm pretty sure HD4670 only has 8 ROPs.



Looking at the Brazos/Latte comparison shots, I still don't see how Latte could possibly have only 160 SPs. I mean, I see where the point is coming from, but I just don't see how it could possibly be true. ~90% as big, but half the shader units? Sounds fishy to me.
There is also RV830 (Redwood, HD5xxx generation), which has a die size of 104mm² (probably roughly Latte's size, considering the eDRAM) and 400 SPs.
I'm sure we've been over this, but nevertheless I just don't see why Nintendo would waste so much die space on a 160SP GPU. It just doesn't make all that much sense imo.


You are not alone in this, but there are some people who seem dedicated to establishing the lowest discernible specs as absolute fact. Though Redwood has 400, with 80 disabled, I believe. I've been doing some more research into it. There is little more that can be concluded without a die shot of the HD55(50/70).

Everyone should definitely kill this 160SP theory, though. There have been so many different arguments brought up against it, and its improbability, that I can't see how anyone ever came to that conclusion to begin with.
 
Some people here were arguing about how much of the Wii U's 33 watts goes to the GPU and CPU. Could it be possible that, since they are on the same die, both are running on 33 watts at the same time? They are basically one chip. I'm not good with technical stuff and like to fantasize about stupid stuff like this :(
 

tipoo

Banned
Some people here were arguing about how much of the Wii U's 33 watts goes to the GPU and CPU. Could it be possible that, since they are on the same die, both are running on 33 watts at the same time? They are basically one chip. I'm not good with technical stuff and like to fantasize about stupid stuff like this :(

Err, that doesn't make sense. If both were drawing 33W, the measured power draw at the wall would be in excess of 66 watts. Power can't be re-used once a chip has consumed it; it's converted to heat. If things worked like your theory, you could make everything run off a few "shared" watts, which is just impossible.
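Put another way, component power adds up and PSU losses sit on top; a toy budget for a ~33W wall reading might look like this (the individual numbers are purely illustrative, nothing here is measured):

cpu_w, gpu_w, rest_w = 5, 15, 8      # illustrative split, not measured
psu_efficiency = 0.85                # assumed conversion efficiency

wall_w = (cpu_w + gpu_w + rest_w) / psu_efficiency
print(round(wall_w, 1))              # ~32.9W: one budget, split between parts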
 

krizzx

Junior Member
Some people here were arguing about how much of the Wii U's 33 watts goes to the GPU and CPU. Could it be possible that, since they are on the same die, both are running on 33 watts at the same time? They are basically one chip. I'm not good with technical stuff and like to fantasize about stupid stuff like this :(

What??? That makes no sense at all. That is not even possible. Where did you get that from?
 

StevieP

Banned
Question, from what we know of Xbox One's specs, how much more powerful is it compared to Wii U do you guys think?

Still quite a bit more. The leaked specs are spot on. It's just not the same type of gulf that the Wii had with the PS360. But we knew that already.
 
Question, from what we know of Xbox One's specs, how much more powerful is it compared to Wii U do you guys think?

What's interesting is how much RAM will go to the OS and other services. I know 3GB has been speculated.

One thing is clear, though: with the XB1 being the lower of the other two on the technical level, Wii U will get downports from XB1 versions, and it shares a similar memory layout, since the Wii U has eDRAM and the XB1 has eSRAM, but their function is much the same, I guess.
 
Sure it's more powerful, but how much on-screen, perceptual difference does that power make to the eyes of people who don't count pixels, blow up screenshots, and dissect them for flaws?

I don't think the difference will be astronomical in any way, since the key features such as advanced lighting, great draw distances, tessellation, etc. are available.

It will likely be far simpler to develop for the Xbox One than the Wii U for most devs, though.
 

Schnozberry

Member
Everyone should definitely kill this 160SP theory, though. There have been so many different arguments brought up against it, and its improbability, that I can't see how anyone ever came to that conclusion to begin with.

I don't think we have enough information to dismiss any theory. Fourth Storm put a lot of hard work into his analysis and no one has refuted it yet. We have alternate theories, but I don't think we'll know whose theory pans out until we see what Nintendo has to offer at E3. That should tell us immediately whether we can expect something beyond what the PS3 and 360 have already shown us. More importantly, the gameplay needs to make the GamePad look unique and novel. Not every game necessarily, but at least a couple need to be appealing enough that they make you wonder why no one thought of that before.
 

Meelow

Banned
Absolutely, even as shit as Ghosts looks, coming from someone who's switched to playing CoD games on PC (for reference, Black Ops 2 looked much clearer than Ghosts did on XB)

Yeah it's going to be interesting

It's impossible to directly compare two different architectures.

Yeah, but it will be interesting to see what Microsoft's first party games and Nintendo's first party games will look like on the Xbox One and Wii U at E3 2013.

What's interesting is how much RAM will go to the OS and other services. I know 3GB has been speculated.

Wouldn't be shocked if it's 3GB, the OS/Kinect stuff was impressively fast.
 
Question, from what we know of Xbox One's specs, how much more powerful is it compared to Wii U do you guys think?

Probably around 8-10x in real terms (not paper numbers).

Microsoft themselves quantified the overall Durango as around 8x as powerful as the 360 in that Wired article. Since the Wii U is roughly equal to a 360 so far, imo...
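On paper only (the posts above argue the real-world gap is bigger or smaller), here's the ratio using the leaked Durango GPU config of 12 CUs x 64 ALUs at 800MHz and the reported 550MHz Wii U GPU clock with the two disputed SP counts; everything but the Wii U clock is speculative:

def gflops(alus, mhz):
    return alus * 2 * mhz / 1000     # 2 flops per ALU per clock

durango = gflops(12 * 64, 800)       # ~1229 GFLOPs
for wiiu_sps in (160, 320):
    ratio = durango / gflops(wiiu_sps, 550)
    print(wiiu_sps, "SPs ->", round(ratio, 1), "x")
# 160 SPs -> ~7.0x on paper, 320 SPs -> ~3.5x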
 