
WiiU "Latte" GPU Die Photo - GPU Feature Set And Power Analysis


fuzzy.dunlop

Banned
Mar 22, 2013
606
0
0
Any publisher not supporting the Wii U after Christmas this year will have shareholders going apeshit.

Looking at the financials for the major publishers this generation, only a small number have posted an actual profit over the entire 7-or-so-year period. I don't even understand why EA and Take-Two shareholders aren't more vocal; both have lost a ton of money even with FIFA and GTA.
 

bgassassin

Member
Sep 8, 2006
8,188
0
0
The HD 5870 could run Unigine Heaven's tessellation benchmark at around 20fps at the highest setting, iirc. http://www.youtube.com/watch?v=S0gRqaHXuJI

The HD 4870 had the Froblins tessellation demo, but the demo above was DX11, and ATI didn't have tessellation engine 3 in the HD 4870, which was one of the main reasons it couldn't be upgraded to DX11. Still, the 4800 series' tessellation was clearly usable for gaming; it just had to be programmed too generally to make good use of such a unique component.

http://www.anandtech.com/show/5261/amd-radeon-hd-7970-review/26

The real reason tessellation might have improved so drastically, too, was the dual graphics engine: pushing 2 triangles per clock allowed the card to bury the HD 5870 under tessellation, for instance.

http://www.tomshardware.com/reviews/radeon-hd-6970-radeon-hd-6950-cayman,2818-3.html This one explains it better. Basically, no matter how good your tessellation unit is, you can't exceed the card's theoretical triangles per second. That makes sense, and I'd been leaning this way for a couple of weeks but only now read back to make sure I was right. Tessellation is, after all, just splitting simple polygons into smaller polygons to create more detail in an object.
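To make that ceiling concrete, here's a toy sketch. The numbers are mine, purely for illustration (they are not measured figures for Latte or any of these cards): uniform tessellation multiplies triangle counts geometrically, while the setup rate caps how fast those triangles can be consumed.

```python
# Toy illustration of the triangle-throughput ceiling discussed above.
# All numbers are made up for illustration, not measured figures.

def tessellated_triangles(base_triangles: int, levels: int) -> int:
    """Uniform subdivision: each level splits every triangle into 4."""
    return base_triangles * 4 ** levels

def fps_ceiling(triangles_per_frame: int, setup_rate: float) -> float:
    """Frame-rate ceiling imposed purely by triangle setup throughput."""
    return setup_rate / triangles_per_frame

setup_rate = 550e6  # e.g. one triangle per clock at 550 MHz
for level in range(4):
    tris = tessellated_triangles(500_000, level)
    print(f"level {level}: {tris:>12,} tris/frame -> ceiling {fps_ceiling(tris, setup_rate):7.1f} fps")
```

Doubling the setup rate (two triangles per clock, as with a dual engine) simply doubles every ceiling, which is why geometry throughput matters so much once tessellation is turned up.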

I'm not sure if Wii U has a dual graphics engine. I don't bring it up because I'm looking for a magic bullet, but if geometry throughput powers tessellation, that would explain why they would go this route. It could also explain why the GPU is absolutely huge compared to its performance, even after cutting out the eDRAM. Brazos, for instance, is 75mm^2 with 2 Bobcat cores along for the ride.

Nice info. Your point about the dual engines is something I would agree with.

Exactly - we have no idea if the tessellator in Latte is a standard R700 or R800 one, if it's based on more recent designs or if it's a more custom unit.

Oh I was just pointing out the why and that a proper improvement in AMD GPUs occurred before GCN. I hope common sense prevailed and Latte uses a more recent gen tess unit(s).

Alright, I've been rethinking this. I think we really need to reconsider the necessity for discrete DDR3 memory controllers on this thing. I was digging around and came across this diagram of the 360 slim's CGPU:



This has two separate memory controllers (one bordering each interface/phy), even with the FSB replacement block. The original Xenos also had two memory controllers. I would reason that even with an NB block on Latte, it would still need a couple of DDR3 controllers. Even Brazos and Llano have separate graphics memory controller blocks apart from the NB.

Assuming this is true, things get a bit reshuffled in my assignments. It would make sense for the memory controllers to border both phys, so they would have to be the W blocks. In the few memory controllers I've seen (Brazos, RV770 if I'm right), they seem to contain a decent amount of SRAM as well. The ROPs could actually be Q then, which might make more sense, placement-wise, being close to the shaders. They don't seem to have much SRAM in them, but if my RV770 labeling is correct, that might be normal. If I'm looking at the Tahiti die correctly, its ROPs appear similarly small and low in SRAM. I actually dug up one statement that has Llano's Z/Stencil and color caches at 4kB and 16kB each. That's the only place that I've ever read mention the capacity of those two caches, but if it is correct, they would fit into Q.

http://www.realworldtech.com/fusion-llano/

The only thing that doesn't make sense is the ROPs on Brazos. That's just an incredible amount of die area and memory for 4 ROPs! I don't quite know how to make sense of it.

I was focusing too much on the GPU in Llano. I'm sure you remember this picture.



However, your post made me remember something. Wii U has a 64-bit bus while 360 has a 128-bit bus. MCs are usually 64-bit, so Wii U should only have one MC like Brazos, not two like 360. And after looking into it further, Llano apparently has one 128-bit controller, and Redwood, in the above picture, has a 128-bit bus with two 64-bit MCs. RV770 has a 256-bit bus and in turn four MCs. Based on that I would eliminate all of the duplicates as MCs. All that said, the Ws wouldn't have enough SRAM for me to label them as such.

But yeah Brazos' ROPs are big.

Sorry, I've digressed a bit there. One thing I've been meaning to ask: if bg is right about the dual graphics engines, what does that mean for the power draw of the console? I've seen previous reports that the Wii U has been running at an average of 30-35 watts. I'm guessing if it does have dual graphics engines then it should be using twice the power? Or have I got that wrong? Maybe the launch and launch-window games have only used one of them, and using both in the future will increase the power draw? Haven't they got 75W to play with altogether?

Or have I just got everything terribly confused lol

The graphics engine components are fixed function, so there shouldn't be much of an impact on heat like adding more ALUs would have. I would assume both engines are automatically enabled and used. We've seen some early indications that Wii U has improved geometry performance compared to the other consoles, but that could be due to it just being more modern in design and efficiency. I think we'll need more time to truly know.
 

A More Normal Bird

Unconfirmed Member
Nice info. Your point about the dual engines is something I would agree with.

Oh I was just pointing out the why and that a proper improvement in AMD GPUs occurred before GCN. I hope common sense prevailed and Latte uses a more recent gen tess unit(s).

Right, but as z0m3le pointed out, much of the improvement in tessellation performance in the 6 series was due to the overall improvement in geometry processing; even then, the improvement wasn't drastic and the cards fell short of similar nVidia parts in tessellation performance. With GCN on the other hand, tessellation efficiency improved independently of poly throughput.
 

bgassassin

Member
Sep 8, 2006
8,188
0
0
Right, but as z0m3le pointed out, much of the improvement in tessellation performance in the 6 series was due to the overall improvement in geometry processing; even then, the improvement wasn't drastic and the cards fell short of similar nVidia parts in tessellation performance. With GCN on the other hand, tessellation efficiency improved independently of poly throughput.

Well of course those cards fell short of comparable nVidia cards.[/nVidiafan] :p

But bringing up nVidia in tess performance comparisons is rather pointless for this discussion. Tessellation in GCN was improved because they overhauled the graphics engines, so it should have had increased performance, but that also doesn't mean much in comparison to Latte, given the design direction this GPU seems to have gone. What we can look at is that pre-GCN GPUs did have noticeably improved tess performance, even better with dual engines, and the latter is something I suggested based on the die shot. Plus most, if not all, tess benchmarks push a level that won't be in Wii U games, in order to check the performance capabilities of high-end cards. For example, one of those z0m3le linked to was testing tess at highest settings, 1920x1200, and 16x AF.
 

Fourth Storm

Member
Feb 17, 2006
6,713
1
1,040
I was focusing too much on the GPU in Llano. I'm sure you remember this picture.

However your post made me remember something. Wii U has a 64-bit bus while 360 has a 128-bit bus. MCs are usually 64-bit. Wii U should only have one MC like Brazos, not two like 360. And after looking into it further Llano apparently has one 128-bit controller and Redwood, in the above picture, has a 128-bit bus with two 64-bit MCs. RV770 has a 256-bit bus and in turn four MC's. Based on that I would eliminate all of the duplicates as MCs. All that said the Ws wouldn't have enough SRAM for me to label them as such.

True enough, but Llano's memory subsystem seems quite a bit more complicated than just the north bridge: what with that, the two buses (Onion and Garlic), and the GMC. I don't know how things would operate if you pulled one of those players. Some documentation I read also mentioned that the 2 64-bit channels on Llano are independently operated, so there are basically two memory controllers in there somewhere, even if they are integrated (I don't see where they would fit within that small NB block, though; maybe ZB is a better place to look).

The reason I'm thinking two MCs are necessary is that, even though Wii U only has a 64-bit total memory bus to DDR3, that bus appears to be split into 4 distinct 16-bit channels. The typical memory controller supports dual-channel memory configs. In reading up on Llano's config, it was mentioned that it works well because CPU accesses are best kept to one channel, while GPU accesses benefit from being spread out across all channels. With Wii U's limited bandwidth, I'd imagine a similar setup would be employed. So yeah, my best guess would be 2 custom 32-bit dual-channel MCs.
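For reference, the raw numbers behind that bus, as a sketch: DDR3-1600 is the commonly cited speed for Wii U's MEM2 and matches the 12.8GB/s figure used throughout this thread, while the 16-bit channel split is the hypothesis above.

```python
def ddr3_bandwidth_gbs(bus_width_bits: int, transfer_rate_mts: int) -> float:
    """Peak bandwidth in GB/s: bytes per transfer times transfers per second."""
    return bus_width_bits / 8 * transfer_rate_mts * 1e6 / 1e9

total = ddr3_bandwidth_gbs(64, 1600)        # 12.8 GB/s across the whole bus
per_channel = ddr3_bandwidth_gbs(16, 1600)  # 3.2 GB/s per 16-bit channel
print(total, per_channel)
```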
 

z0m3le

Banned
Jun 16, 2011
3,883
1
0
37
Seattle, WA
www.notenoughshaders.com
True enough, but Llano's memory subsystem seems quite a bit more complicated than just the north bridge: what with that, the two buses (Onion and Garlic), and the GMC. I don't know how things would operate if you pulled one of those players. Some documentation I read also mentioned that the 2 64-bit channels on Llano are independently operated, so there are basically two memory controllers in there somewhere, even if they are integrated (I don't see where they would fit within that small NB block, though; maybe ZB is a better place to look).

The reason I'm thinking two MCs are necessary is that, even though Wii U only has a 64-bit total memory bus to DDR3, that bus appears to be split into 4 distinct 16-bit channels. The typical memory controller supports dual-channel memory configs. In reading up on Llano's config, it was mentioned that it works well because CPU accesses are best kept to one channel, while GPU accesses benefit from being spread out across all channels. With Wii U's limited bandwidth, I'd imagine a similar setup would be employed. So yeah, my best guess would be 2 custom 32-bit dual-channel MCs.

AMD and Intel both have quad-channel DDR3 memory controllers; AMD's have been in products since early 2010, used in their low-end servers on, I believe, the C32 platform. So I'm not sure why they would do this, and they wouldn't really have to in the first place. Also, if all you are looking for is a single channel for the CPU, triple-channel DDR3 is more efficient than dual-channel thanks to interleaving, so that could also be a possibility, unless it has to come from AMD. The blocks are quite big, though, so I wouldn't rule out a quad-channel memory controller.
 

bgassassin

Member
Sep 8, 2006
8,188
0
0
^ I would rule out a quad-channel memory controller. My response to Fourth would explain why.

True enough, but Llano's memory subsystem seems quite a bit more complicated than just the north bridge: what with that, the two buses (Onion and Garlic), and the GMC. I don't know how things would operate if you pulled one of those players. Some documentation I read also mentioned that the 2 64-bit channels on Llano are independently operated, so there are basically two memory controllers in there somewhere, even if they are integrated (I don't see where they would fit within that small NB block, though; maybe ZB is a better place to look).

The reason I'm thinking two MCs are necessary is that, even though Wii U only has a 64-bit total memory bus to DDR3, that bus appears to be split into 4 distinct 16-bit channels. The typical memory controller supports dual-channel memory configs. In reading up on Llano's config, it was mentioned that it works well because CPU accesses are best kept to one channel, while GPU accesses benefit from being spread out across all channels. With Wii U's limited bandwidth, I'd imagine a similar setup would be employed. So yeah, my best guess would be 2 custom 32-bit dual-channel MCs.

Onion and Garlic are "different beasts" in that they are the names for the actual buses (PS4 documentation is where I first learned about them). There aren't two MCs in Llano from what I read; it's one dual-channel 64-bit controller. What you saw may have said the two channels can act independently, but everything I read said it's one controller.

The memory chips are on distinct channels in all consoles; they're separate chips soldered to the MB. Going back to what you posted, we know that 360 used 8, then I believe 4, GDDR3 chips. Those have 32-bit wide interfaces, and in turn the 360 used two 64-bit memory controllers. What matters is the bus width that needs to be addressed in the system. In PCs the chips are on a DIMM going through a single slot, and it depends on how many channels the system (normally the CPU) can address, which is usually two.

Just being honest, but proposing 32-bit MCs goes against what we've seen for probably almost a decade in PCs and consoles (ignoring Wii). I think you might be missing something when you say:

The typical memory controller supports dual-channel memory configs.

It's been a while since I've really paid attention to AMD CPUs, but looking further, they have one 128-bit wide controller. And since a channel is normally considered 64 bits, I would guess that MC probably has two 64-bit channels, but it's still one MC. And in ATi/AMD GPUs we see separate 64-bit MCs being used to support multiple channels. And I don't think you realize it, but proposing two dual-channel 32-bit MCs (making each 64-bit) is saying Wii U has a 128-bit wide bus. So either your suggestion is not correct, or we've been wrong the whole time about Wii U's main memory BW. I would say what you're proposing here is less likely than my dual engine idea. One "standard" 64-bit MC in Wii U would be a more logical and cost-effective design than creating two dual-channel 32-bit controllers. And to semi-correct myself from earlier, I would suspect that it's in the NB, since Latte doesn't have the same requirements as Fusion GPUs.
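That bus-width arithmetic is easy to check: the total external bus is just controllers times channels times channel width, so two dual-channel 32-bit MCs would indeed imply 128 bits. A trivial sketch:

```python
def total_bus_bits(controllers: int, channels_each: int, channel_bits: int) -> int:
    """Total external bus width implied by a memory-controller layout."""
    return controllers * channels_each * channel_bits

# Two dual-channel 32-bit MCs would imply a 128-bit bus...
print(total_bus_bits(2, 2, 32))  # 128
# ...while one 64-bit MC over four 16-bit channels matches Wii U's known 64-bit bus.
print(total_bus_bits(1, 4, 16))  # 64
```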
 

bgassassin

Member
Sep 8, 2006
8,188
0
0
^ That all makes sense. I was mostly trying to point out that AMD has quad channel DDR3 MCs.

I gotcha. I'm sure quite a few people wouldn't mind there being more BW, but I think we can eliminate the idea of two (or more) memory controllers.
 

z0m3le

Banned
Jun 16, 2011
3,883
1
0
37
Seattle, WA
www.notenoughshaders.com
I gotcha. I'm sure quite a few people wouldn't mind there being more BW, but I think we can eliminate the idea of two (or more) memory controllers.

I agree. If they were after MEM2 (DDR3) bandwidth, there were easier ways to go about it, such as a 128-bit memory controller with 4 32-bit memory chips or 8 16-bit ones.

I'm still surprised to see Intel put 128MB of L4 cache in their new midrange APUs. It's only around 50GB/s each way for reads and writes, but the reduction in latency should be huge (staying under 60 cycles).
 

Barack Lesnar

Banned
Jun 7, 2010
19,179
0
0
That's where the Wii U has a huge advantage over the other two consoles. With the likes of NSL U, Pikmin 3, The Wonderful 101, Wii Fit U, Game & Wario, the rumoured 3D Mario, and Mario Kart 8 all releasing before Christmas, the Wii U's sales momentum will pick right up again, giving it a huge marketshare advantage that will probably take Sony and Microsoft years to close. They might not even manage to do that before the Wii 3 is released in 2017/2018.

Any publisher not supporting the Wii U after Christmas this year will have shareholders going apeshit.

is this real life
 

z0m3le

Banned
Jun 16, 2011
3,883
1
0
37
Seattle, WA
www.notenoughshaders.com
^ It's best to just ignore sales predictions in a tech thread so we can keep this moving, especially since people mostly ignored it anyway.

We just don't know how next gen will settle. Because of Wii U's lower tech specs, we can assume that third-party support with multiplatform titles will be judged on RoI, since it will cost more to scale a third-party game down to Wii U (especially once 360 and PS3 have left the market) than it would have if Wii U had reached a level of parity with XB1.
 

fred

Member
Mar 27, 2013
2,292
8
490
is this real life

Yes, this is real life. If you don't think the titles I've listed above will increase hardware sales then you're a few sandwiches short of a picnic. Even if Nintendo don't sell a single console between now and the launch date of the PS4 and One, Nintendo are still going to have over a 3m head start, with both of those consoles being supply constrained and more expensive.

We've already seen EA change their stance on support for the console, going from no support at all to 'less than' the PS4 and One within the space of a few days. I guarantee you that is down to investor/shareholder pressure.

Anyway, back on topic...

I'm still convinced that there's some sort of fixed function stuff going on somewhere, the depth of field effects in particular stand out for me as something that's possibly being done by fixed functions, and it wouldn't surprise me if they've got something going on with dynamic lighting and shadows too.
 

bgassassin

Member
Sep 8, 2006
8,188
0
0
I agree. If they were after MEM2 (DDR3) bandwidth, there were easier ways to go about it, such as a 128-bit memory controller with 4 32-bit memory chips or 8 16-bit ones.

I'm still surprised to see Intel put 128MB of L4 cache in their new midrange APUs. It's only around 50GB/s each way for reads and writes, but the reduction in latency should be huge (staying under 60 cycles).

I think sometime around or before Nintendo's "teardown" of Wii U, Fourth, Thraktor, and I were looking at DDR3 chips, and we began to notice that (at the time) if Nintendo were going with DDR3, there didn't seem to be any market-ready DDR3 chips with 32-bit interfaces. The latter idea could have worked, but knowing Nintendo, and given what we're seeing in Wii U's design, the best we probably could have hoped for is 4GB on the same 64-bit bus, with the eight chips operating in clamshell mode.
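A sketch of the clamshell arithmetic (Wii U's retail config is widely reported as four 4Gbit x16 DDR3 chips; the eight-chip variant is the hypothetical above): in clamshell mode two chips share a channel, each driving half its I/O pins, so capacity doubles while the bus stays 64-bit.

```python
def ddr3_config(chips: int, chip_gbit: int, io_bits: int, clamshell: bool = False):
    """Capacity and total bus width for a set of identical DDR3 chips.
    In clamshell mode each chip drives half its I/O pins, so two chips
    share one channel: double the capacity, same total bus width."""
    effective_io = io_bits // 2 if clamshell else io_bits
    return {"capacity_GB": chips * chip_gbit // 8, "bus_bits": chips * effective_io}

shipped = ddr3_config(4, 4, 16)                   # 2GB on a 64-bit bus
upgraded = ddr3_config(8, 4, 16, clamshell=True)  # 4GB, still a 64-bit bus
print(shipped, upgraded)
```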
 

z0m3le

Banned
Jun 16, 2011
3,883
1
0
37
Seattle, WA
www.notenoughshaders.com
I think sometime around or before Nintendo's "teardown" of Wii U, Fourth, Thraktor, and I were looking at DDR3 chips, and we began to notice that (at the time) if Nintendo were going with DDR3, there didn't seem to be any market-ready DDR3 chips with 32-bit interfaces. The latter idea could have worked, but knowing Nintendo, and given what we're seeing in Wii U's design, the best we probably could have hoped for is 4GB on the same 64-bit bus, with the eight chips operating in clamshell mode.

Oh wow, I didn't realize 32-bit chips were that new. I had to look it up myself.

I'm wondering if DDR4 will have that width at a lower density. I'm also sort of wondering if the Steambox is going to use an APU; with DDR4 it might make sense, though AMD has been slower on the uptake than Intel when it comes to adopting new system RAM platforms. Eight 1GB 32-bit DDR4 chips could probably deliver fairly good performance for those parts, as they are being drastically held back by DDR3.
 

MDX

Member
Aug 11, 2010
2,330
0
660


Dat eDRAM.
It's taking up more than a third of the chip.

Anyway...
The ATI-designed GPU is the main memory arbiter for the multicore Xbox 360 CPU. It is connected to the three-core CPU by a 22GB/sec bus, and to the SiS southbridge and I/O controller via a 2-lane PCI Express link. Besides the 10MB of Embedded DRAM (EDRAM), it has a 256-bit bus to 512MB of GDDR3 running at 700MHz, for a total bandwidth of 25.6 GB/sec. Due to the use of extremely fast “smart” EDRAM, ATI claimed “we have bandwidth to spare.”

"Bandwidth to spare"
What does that mean?


Why does the Xbox 360 have such an extreme amount of bandwidth? Even the simplest calculations show that a large amount of bandwidth is consumed by the frame buffer. For example, with simple color rendering and Z testing at 550 MHz the frame buffer alone requires 52.8 GB/s at 8 pixels per clock. The PS3′s memory bandwidth is insufficient to maintain its GPU’s peak rendering speed, even without texture and vertex fetches.

The PS3 uses Z and color compression to try to compensate for the lack of memory bandwidth. The problem with Z and color compression is that the compression breaks down quickly when rendering complex next-generation 3D scenes.
HDR, alpha-blending, and anti-aliasing require even more memory bandwidth. This is why Xbox 360 has 256 GB/s bandwidth reserved just for the frame buffer. This allows the Xbox 360 GPU to do Z testing, HDR, and alpha blended color rendering with 4X MSAA at full rate and still have the entire main bus bandwidth of 22.4 GB/s left over for textures and vertices.
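The 52.8 GB/s figure in the quote works out if you assume 32-bit color plus 32-bit Z, with Z both read and written per pixel. That decomposition is my own; it's simply the one that fits the quoted numbers.

```python
clock_hz = 550e6       # Xenos core clock
pixels_per_clock = 8   # pixels output per clock
bytes_per_pixel = 4 + 4 + 4  # color write + Z read + Z write, 32 bits each (assumed)

fb_bandwidth_gbs = clock_hz * pixels_per_clock * bytes_per_pixel / 1e9
print(fb_bandwidth_gbs)  # 52.8, matching the quoted frame-buffer figure
```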

With the XB360

the EDRAM resides on the same package,



and has a wide bus running at 2GHz to deliver 256GB/sec of bandwidth. That’s a true 256GB/sec, not one of those fuzzy counting methods where the 256GB is “effective” bandwidth that accounts for all kinds of compression.

But on the Wii U it's sitting in the GPU itself! Not on a separate die like with the XB360.
Now why would Nintendo bother to put the eDRAM on the GPU?
We have people speculating that the Wii U's eDRAM does not achieve anything more than 130GB/s:

The Wii U's eDRAM is either 70 or 130GB/s, and the DDR3 is 12.8GB/s at maximum. The XBOne's DDR3 is 68GB/s and its eSRAM 100GB/s; PS4's main RAM is 172GB/s.

But as we can see, Nintendo didn't need to put that eDRAM in the GPU to get those numbers. So again, why would Nintendo go through the trouble of putting so much eDRAM in a GPU when they could have had it just on the package and at least matched the XBOne, or even the XB360? By putting it on the GPU, wouldn't it have to offer more gains than even what the 360 could get?

PS3 = 22.4 GB/s … 25.6 GB/s
XBOne = 68.3 GB/s … 100 GB/s
PS4 = 172 GB/s
360 = 25.8 GB/s … 256 GB/s
WiiU = 12.8 GB/s … eDRAM pool 1 ? … eDRAM pool2 ? …
 

A More Normal Bird

Unconfirmed Member
Well of course those cards fell short of comparable nVidia cards.[/nVidiafan] :p

But bringing up nVidia in tess performance comparisons is rather pointless for this discussion. Tessellation in GCN was improved because they overhauled the graphics engines, so it should have had increased performance, but that also doesn't mean much in comparison to Latte, given the design direction this GPU seems to have gone. What we can look at is that pre-GCN GPUs did have noticeably improved tess performance, even better with dual engines, and the latter is something I suggested based on the die shot. Plus most, if not all, tess benchmarks push a level that won't be in Wii U games, in order to check the performance capabilities of high-end cards. For example, one of those z0m3le linked to was testing tess at highest settings, 1920x1200, and 16x AF.

That's generally why it's better to look at tessellation scaling (performance as a % of performance at the same settings with tess disabled) rather than overall performance. The only reason I brought up nVidia was to highlight that even with the dual graphics engines, the 6-series cards fell short of them in tess performance, whereas GCN was able to match or eclipse them, despite a poly/clock disadvantage.

It's difficult to ascertain how much of the improvement in tess performance for the 6-series was due to increased geometry processing: in some benchmarks, the Barts cards (1poly/clock) scale better than Cayman (2polys/clock), in others, the Barts cards barely scale better than Evergreen whereas Cayman handily outdoes both. So I was a bit quick in my earlier post to say that the increased geometry processing power was responsible for the bulk of the tessellation improvements, but I think part of the problem was the testing regime - as you pointed out, high-res and IQ settings are generally used, which may distort the baseline for the weaker Barts cards. In any case, whether the dual graphics engines are the main performance booster for pre-GCN tessellation or not we end up with the same conclusion: the improvements in tessellation efficiency for GCN far outdo those made between the 5xxx and 6xxx series.

Here's how I see the situation: if Latte doesn't have a dual graphics engine, then even if the tessellator was taken from a Southern Islands design, its tessellation capability is going to be pretty limited. Even more so if it's from Evergreen, and more so again for R700. If your theory is correct then the situation gets better (with scaling improving anywhere between 10 and 50%, bearing in mind the caveats mentioned above), but Latte would still be lacking compared to the GCN architectures of XB1/PS4, even on a normalised basis. Obviously, I'm not ruling out the possibility of GCN-derived elements in the chip; I don't think that's a particularly radical thing.
 

z0m3le

Banned
Jun 16, 2011
3,883
1
0
37
Seattle, WA
www.notenoughshaders.com
That's generally why it's better to look at tessellation scaling (performance as a % of performance at the same settings with tess disabled) rather than overall performance. The only reason I brought up nVidia was to highlight that even with the dual graphics engines, the 6-series cards fell short of them in tess performance, whereas GCN was able to match or eclipse them, despite a poly/clock disadvantage.

It's difficult to ascertain how much of the improvement in tess performance for the 6-series was due to increased geometry processing: in some benchmarks, the Barts cards (1poly/clock) scale better than Cayman (2polys/clock), in others, the Barts cards barely scale better than Evergreen whereas Cayman handily outdoes both. So I was a bit quick in my earlier post to say that the increased geometry processing power was responsible for the bulk of the tessellation improvements, but I think part of the problem was the testing regime - as you pointed out, high-res and IQ settings are generally used, which may distort the baseline for the weaker Barts cards. In any case, whether the dual graphics engines are the main performance booster for pre-GCN tessellation or not we end up with the same conclusion: the improvements in tessellation efficiency for GCN far outdo those made between the 5xxx and 6xxx series.

Here's how I see the situation: if Latte doesn't have a dual graphics engine, then even if the tessellator was taken from a Southern Islands design, its tessellation capability is going to be pretty limited. Even more so if it's from Evergreen, and more so again for R700. If your theory is correct then the situation gets better (with scaling improving anywhere between 10 and 50%, bearing in mind the caveats mentioned above), but Latte would still be lacking compared to the GCN architectures of XB1/PS4, even on a normalised basis. Obviously, I'm not ruling out the possibility of GCN-derived elements in the chip; I don't think that's a particularly radical thing.

2 things I'd like to point out, more to the thread than yourself.

1. Tessellation is simply drawing more polygons by splitting what is there. I am not sure what last-gen games pushed in polygons per second, but a limit of 550 million polygons could make tessellation unusable if the game is already pushing those numbers with simpler models (around 9 million polygons per frame if the game is 60FPS). Of course that is probably a lot of detail anyway for a modern game, but tessellation certainly adds a ton more polygons on top of it, so a dual graphics engine might be needed.

2. Even though dual graphics engines came about in the 6000 series, the chance of Wii U using one would still point to Wii U having custom parts. The GCN tessellator is something that might have been easy to add to Wii U's GPU7, though I think the real problem there is that GCN's tessellators are designed for 28nm, and obviously that might pose a problem for GPU7. My point, though, is that if something is obviously customized, it is likely to pull together the best parts for a goal, so we shouldn't assume that any of GCN's components, designed ~4 years ago, did or didn't make it into Wii U's GPU (one way or the other).

Basically, if games were pushing ~300 million polygons per second last gen, then Wii U's tessellator would be limited in its use, since it couldn't even double the polygon count. And though adaptive tessellation is being used on Wii U, there would obviously be a benefit from moving to a dual graphics engine no matter which generation of tessellator it is using.
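That headroom argument in numbers (the 300M/s base load is the hypothetical from the post, not a measurement):

```python
setup_rate = 550e6  # triangles/sec at one triangle per clock, 550 MHz
base_load = 300e6   # hypothetical polygons/sec already drawn by the game

single_engine_factor = setup_rate / base_load     # ~1.83x: can't even double the count
dual_engine_factor = 2 * setup_rate / base_load   # ~3.67x with two triangles per clock
print(single_engine_factor, dual_engine_factor)
```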
 

Fourth Storm

Member
Feb 17, 2006
6,713
1
1,040
Onion and Garlic are "different beasts" in that they are the names for the actual buses (PS4 documentation is where I first learned about them). There aren't two MCs in Llano from what I read; it's one dual-channel 64-bit controller. What you saw may have said the two channels can act independently, but everything I read said it's one controller.

Yes, it's one dual-channel memory controller. I was merely looking for something on the die that would seem to be designed for the dual-channel memory accesses. The GMC in Trinity is in the same position as "ZB" on Llano, so I figured that would explain those twin groups of SRAM in that block. The block labeled NB on Llano seems somewhat skimpy if we are to believe there's a dual-channel DDR3 memory controller in there. That's mostly why I bring it up.


The memory chips are on distinct channels in all consoles; they're separate chips soldered to the MB. Going back to what you posted, we know that 360 used 8, then I believe 4, GDDR3 chips. Those have 32-bit wide interfaces, and in turn the 360 used two 64-bit memory controllers. What matters is the bus width that needs to be addressed in the system. In PCs the chips are on a DIMM going through a single slot, and it depends on how many channels the system (normally the CPU) can address, which is usually two.

I'm well aware of this. Even though Wii U's bus width is half of the 360's, though, it would make sense for them to want to keep four separate channels. I suppose the original 360 used clamshell mode, but in that case every chip would not be an independently controlled channel, as each MC probably only allowed for 2 32-bit channels (if not just a single 64-bit one; I'd have to research it). Z0m brought up an example of a single memory controller that handles four channels, but they don't seem all that common. The benefit of keeping them separate would be reduced turnaround time for read/write switches. That we're dealing with a much slower data throughput than 2-4 64-bit channels would make this arbitration even more important, I would imagine. The bandwidth is precious and should not be left idle.
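A toy sketch of why independent narrow channels would help with that arbitration (the chunk size and channel count here are illustrative, not anything documented for Latte): interleaving addresses across channels spreads a streaming GPU workload over all of them, while a small CPU working set can stay on one and avoid read/write turnaround on the others.

```python
def channel_for(addr: int, num_channels: int = 4, chunk_bytes: int = 64) -> int:
    """Map a physical address to a channel by interleaving on fixed-size chunks."""
    return (addr // chunk_bytes) % num_channels

# A linear GPU stream touches every channel in turn...
gpu_channels = {channel_for(a) for a in range(0, 1024, 64)}
# ...while a CPU working set smaller than one chunk stays on a single channel.
cpu_channels = {channel_for(a) for a in range(0, 64, 8)}
print(gpu_channels, cpu_channels)
```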

It's been a while since I've really paid attention to AMD CPUs, but looking further, they have one 128-bit wide controller. And since a channel is normally considered 64 bits, I would guess that MC probably has two 64-bit channels, but it's still one MC. And in ATi/AMD GPUs we see separate 64-bit MCs being used to support multiple channels. And I don't think you realize it, but proposing two dual-channel 32-bit MCs (making each 64-bit) is saying Wii U has a 128-bit wide bus. So either your suggestion is not correct, or we've been wrong the whole time about Wii U's main memory BW. I would say what you're proposing here is less likely than my dual engine idea. One "standard" 64-bit MC in Wii U would be a more logical and cost-effective design than creating two dual-channel 32-bit controllers. And to semi-correct myself from earlier, I would suspect that it's in the NB, since Latte doesn't have the same requirements as Fusion GPUs.

I obviously meant that a controller would be capable of handling two independant 16-bit channels, but I get what you're saying in that I probably misworded that.

Interesting fact, in the "Enhanced Memory Controller" patent for Gamecube, they actually describe a memory controller with 4 "memory access control" blocks, each presiding over one channel. Reading over the patent, it seems clear that they were originally planning for a 128-bit bus from main memory to GPU. As we all well know, that was scrapped and here we are over 10 years later and a 128-bit bus still alludes Nintendo! haha.

I'm not adamant about there being two memory controllers. I don't think it should be ruled out so quickly, though. Whatever Nintendo/Renesas have come up with is a custom solution, because they are obviously dealing with channels much narrower than the 64-bit DIMM standard. It's not like they just took some left over Radeon blocks and did a cut and paste.

There is also DMA, which was brought up by Nightbringer. This might be something worth looking into as I believe that Radeons have had dual DMA engines since the R600 series. It should help with GPU compute, freeing up CPU cycles, and moving data from MEM2 to MEM1. According to wikipedia, a 32-bit address bus is standard for this type of access.

It just seems extremely bizarre for there not to be a memory controller or memory controllers adjacent to the DDR3 phy. Llano, Trinity, Brazos, RV770, and Xenos, to name a few all have their NB/MCs in this position. I suppose it could be hidden away in block "X" somewhere, but there is no solid indicator of it. That block is undoubtedly where the AHB bus is as well, but afaik that's strictly for on-chip communication.

In any case, we are somewhat splitting hairs at this point. Same with the question of where the UVD and SX blocks are located. We know they are both there. The only reason this somewhat bothers me is the odd position of the Q blocks and that, if we are to assume that the NB is off somewhere else, there is one pair of blocks that I can't assign a function to. I've tried placing the Qs as LDS, video heads, etc, but none of those identities has been truly satisfying. There's no reason to throw out the baby with the bathwater, though, and say that it must be a dual setup engine. I believe the TMU/cache similarities between Brazos and Latte to be quite conclusive. Even the SRAM in Brazos' "TD" block matches up quite nicely with what we see in Latte's T blocks, minus those 32 banks. With that in mind, it's then easy to say we also have a pair of ROP blocks and L2 blocks. So I dunno, I've pretty much found the answers I was looking for (mainly the core configuration). Not sure if it's worth digging much deeper. Ah, it sure would be nice if Nintendo documented their hardware like the old days. If they admit to not being in the graphics race, why keep it a secret? It's like they are inviting their hardcore defenders to turn delusional. :/
 

Fourth Storm

Member
Feb 17, 2006
6,713
1
1,040
Onion and Garlic are "different beasts" in that they are the names for the actual bus (PS4 documentation is where I first learned about them). There aren't two MCs in Llano from what I read. It's one dual channel 64-bit controller. What you saw may have said the two channels can act independently, but everything I read said it's one controller.

Yes, it's one dual channel memory controller. I was merely looking for something on the die that would seem to be designed for the dual channel memory accesses. The GMC in Trinity is in the same position as "ZB" on Llano so I figured that would explain those twin groups of SRAM in that block. The block labeled NB on LLano seems somewhat skimpy if we are to believe there's a dual channel DDR3 memory controller in there. That's why I bring it up mostly.


The memory chips are on distinct channels in all consoles. They're separate chips soldered to the MB. Going back to what you posted we know that 360 used 8, then I believe 4 GDDR3 chips. Those have 32-bit wide interfaces and in turn they used two 64-bit memory controllers. What matters is the bus width that needs to be addressed in the system. In PCs the chips are on a DIMM going through a singular slot and depends on how many channels the system, normally the CPU, can address which is usually two.

I'm well aware of this. Even though Wii U's bus width is half of the 360's, it would make sense for them to want to keep four separate channels. I suppose the original 360 used clamshell mode, but in that case, every chip would not be an independently controlled channel, as each MC probably only allowed for two 32-bit channels (if not just a single 64-bit one - I'd have to research it). Z0m brought up an example of a single memory controller that handles four channels, but they don't seem all that common. I would guess this would greatly help mitigate turnaround time for read/write switches. That we're dealing with a much slower data throughput than 2-4 64-bit channels would make this arbitration even more important, I would imagine. The bandwidth is precious and should not be left idle.

It's been a while since I've really paid attention to AMD CPUs, but looking further, they have one 128-bit wide controller. And since a channel is normally considered 64 bits, I would guess that MC probably has two 64-bit channels. But it's still one MC. And in ATi/AMD GPUs we see separate 64-bit MCs being used to support multiple channels. And I don't think you realize it, but proposing two dual-channel 32-bit MCs (making each 64-bit) is saying Wii U has a 128-bit wide bus. So either your suggestion is not correct or we've been wrong the whole time about Wii U's main memory BW. I would say what you're proposing here is less likely than my dual engine idea. One "standard" 64-bit MC in Wii U would be a more logical and cost-effective design than creating two dual-channel 32-bit controllers. And to semi-correct myself from earlier, I would suspect that it's in the NB since Latte doesn't have the same requirements as Fusion GPUs.

I obviously meant that a controller would be capable of handling two independent 16-bit channels, but I get what you're saying in that I probably misworded that. I was thinking of how the 64-bit memory controllers on Radeon cards allow independent access to two 32-bit memory channels.

Interesting fact: in the "Enhanced Memory Controller" patent for Gamecube, they actually describe a memory controller with four "memory access control" blocks, each presiding over one channel. Reading over the patent, it seems clear that they were originally planning for a 128-bit bus from main memory to GPU. As we all well know, that was scrapped, and here we are over 10 years later and a 128-bit bus still eludes Nintendo! haha.

I'm not adamant about there being two memory controllers. I don't think it should be ruled out so quickly, though. Whatever Nintendo/Renesas have come up with is a custom solution, because they are obviously dealing with channels much narrower than the 64-bit DIMM standard. It's not like they just took some left over Radeon blocks and did a cut and paste.

There is also DMA, which was brought up by Nightbringer. This might be something worth looking into, as I believe that Radeons have had dual DMA engines since the R600 series. It should help with GPU compute, freeing up CPU cycles, and moving data from MEM2 to MEM1. According to Wikipedia, a 32-bit address bus is standard for this type of access.

It just seems extremely bizarre for there not to be a memory controller or memory controllers adjacent to the DDR3 PHY. Llano, Trinity, Brazos, RV770, and Xenos, to name a few, all have their NB/MCs in this position. I suppose it could be hidden away in block "X" somewhere, but there is no solid indicator of it. That block is undoubtedly where the AHB bus is as well, but afaik that's strictly for on-chip communication.

In any case, we are somewhat splitting hairs at this point. Same with the question of where the UVD and SX blocks are located. We know they are both there. The only reason this somewhat bothers me is the odd position of the Q blocks and that, if we are to assume that the NB is off somewhere else, there is one pair of blocks that I can't assign a function to. I've tried placing the Qs as LDS, video heads, etc, but none of those identities has been truly satisfying. There's no reason to throw out the baby with the bathwater, though, and say that it must be a dual setup engine. I believe the TMU/cache similarities between Brazos and Latte to be quite conclusive. Even the SRAM in Brazos' "TD" block matches up quite nicely with what we see in Latte's T blocks, minus those 32 banks. With that in mind, it's then easy to say we also have a pair of ROP blocks and L2 blocks. So I dunno, I've pretty much found the answers I was looking for (mainly the core configuration). Not sure if it's worth digging much deeper. Ah, it sure would be nice if Nintendo documented their hardware like the old days. If they admit to not being in the graphics race, why keep it a secret? It's like they are inviting their hardcore defenders to turn delusional. :/
 

soulx

Banned
Apr 1, 2013
189
0
0
I've been lurking since the WUSTs and the change in opinion from all these "experts" on the Wii U's potential has been amusing to say the least.

It went from the Wii U being a 1TFLOP beast based off the 4850 that's totally next-gen, to a 768 GFLOPS GPU that's good enough for next-gen, to somewhere in between 500-700 and then with the release of the Die Photo, a 352 GFLOPS GPU maybe almost twice as powerful as the 360 to uh, 360 with better effects?
 

Log4Girlz

Member
May 23, 2006
40,774
0
0
I've been lurking since the WUSTs and the change in opinion from all these "experts" on the Wii U's potential has been amusing to say the least.

It went from the Wii U being a 1TFLOP beast based off the 4850 that's totally next-gen, to a 768 GFLOPS GPU that's good enough for next-gen, to somewhere in between 500-700 and then with the release of the Die Photo, a 352 GFLOPS GPU maybe almost twice as powerful as the 360 to uh, 360 with better effects?

You're hittin' me right in the feels.
 

Vermillion

Banned
Mar 13, 2011
21,186
0
0
I've been lurking since the WUSTs and the change in opinion from all these "experts" on the Wii U's potential has been amusing to say the least.

It went from the Wii U being a 1TFLOP beast based off the 4850 that's totally next-gen, to a 768 GFLOPS GPU that's good enough for next-gen, to somewhere in between 500-700 and then with the release of the Die Photo, a 352 GFLOPS GPU maybe almost twice as powerful as the 360 to uh, 360 with better effects?

Cold
 

Ryoku

Member
Jun 10, 2011
2,785
0
0
Washington, D.C.
I've been lurking since the WUSTs and the change in opinion from all these "experts" on the Wii U's potential has been amusing to say the least.

It went from the Wii U being a 1TFLOP beast based off the 4850 that's totally next-gen, to a 768 GFLOPS GPU that's good enough for next-gen, to somewhere in between 500-700 and then with the release of the Die Photo, a 352 GFLOPS GPU maybe almost twice as powerful as the 360 to uh, 360 with better effects?

Interesting how opinions change based on getting more and more information.
 

Kimawolf

Member
Sep 15, 2012
6,164
7
765
Missouri
I've been lurking since the WUSTs and the change in opinion from all these "experts" on the Wii U's potential has been amusing to say the least.

It went from the Wii U being a 1TFLOP beast based off the 4850 that's totally next-gen, to a 768 GFLOPS GPU that's good enough for next-gen, to somewhere in between 500-700 and then with the release of the Die Photo, a 352 GFLOPS GPU maybe almost twice as powerful as the 360 to uh, 360 with better effects?

oh shit son, you hit the nail on the head!!!

Or... oh, more information available, people's opinions changed. WOW funny how that works!
 

bgassassin

Member
Sep 8, 2006
8,188
0
0
I'm well aware of this. Even though Wii U's bus width is half of the 360's, it would make sense for them to want to keep four separate channels. I suppose the original 360 used clamshell mode, but in that case, every chip would not be an independently controlled channel, as each MC probably only allowed for two 32-bit channels (if not just a single 64-bit one - I'd have to research it). Z0m brought up an example of a single memory controller that handles four channels, but they don't seem all that common. I would guess this would greatly help mitigate turnaround time for read/write switches. That we're dealing with a much slower data throughput than 2-4 64-bit channels would make this arbitration even more important, I would imagine. The bandwidth is precious and should not be left idle.

I don't see the logic of "needing" to keep separate channels when one 64-bit MC would be just as effective at keeping the 16-bit channels busy.

I obviously meant that a controller would be capable of handling two independent 16-bit channels, but I get what you're saying in that I probably misworded that. I was thinking of how the 64-bit memory controllers on Radeon cards allow independent access to two 32-bit memory channels.

I gotcha. I think your last sentence will show you why I pointed that out. Whether a dual-channel 32-bit or single channel 64-bit, the MC for both is going to be considered 64-bit.
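The bus-width arithmetic in this exchange is easy to sanity-check with a few lines. A minimal sketch, assuming the commonly cited (not officially confirmed) configuration of four 16-bit DDR3-1600 chips for Wii U's main memory; `peak_bandwidth_gbs` is just an illustrative helper, not anything from actual documentation:

```python
# Peak theoretical bandwidth of a multi-channel DDR interface:
# (total bus width in bits / 8) bytes per transfer * transfers per second.

def peak_bandwidth_gbs(chips, bits_per_chip, mts):
    """Peak bandwidth in GB/s for `chips` memory chips, each
    `bits_per_chip` wide, at `mts` mega-transfers per second."""
    bus_bits = chips * bits_per_chip
    return bus_bits / 8 * mts / 1000  # MB/s -> GB/s

# Commonly cited Wii U setup: 4 x 16-bit DDR3-1600 = 64-bit bus
print(peak_bandwidth_gbs(4, 16, 1600))   # 12.8 GB/s

# Two dual-channel 32-bit MCs would imply a 128-bit bus, i.e. double:
print(peak_bandwidth_gbs(4, 32, 1600))   # 25.6 GB/s
```

Whether the 64 bits are arranged as one channel, two, or four, the peak number only depends on total width and transfer rate; the channel split matters for turnaround and arbitration, not for the headline figure.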

Interesting fact: in the "Enhanced Memory Controller" patent for Gamecube, they actually describe a memory controller with four "memory access control" blocks, each presiding over one channel. Reading over the patent, it seems clear that they were originally planning for a 128-bit bus from main memory to GPU. As we all well know, that was scrapped, and here we are over 10 years later and a 128-bit bus still eludes Nintendo! haha.

Or did it? >_> :p


I'm not adamant about there being two memory controllers. I don't think it should be ruled out so quickly, though. Whatever Nintendo/Renesas have come up with is a custom solution, because they are obviously dealing with channels much narrower than the 64-bit DIMM standard. It's not like they just took some left over Radeon blocks and did a cut and paste.

Some would disagree with your last sentence, haha. But I still don't see that necessarily meaning anything, as you can argue the same thing with a DIMM since it has multiple chips, each obviously still having its own channel for access, being connected to the system through one large channel and/or memory controller.

There is also DMA, which was brought up by Nightbringer. This might be something worth looking into, as I believe that Radeons have had dual DMA engines since the R600 series. It should help with GPU compute, freeing up CPU cycles, and moving data from MEM2 to MEM1. According to Wikipedia, a 32-bit address bus is standard for this type of access.

I only did a quick search, but they seem to be within the MC.

It just seems extremely bizarre for there not to be a memory controller or memory controllers adjacent to the DDR3 PHY. Llano, Trinity, Brazos, RV770, and Xenos, to name a few, all have their NB/MCs in this position. I suppose it could be hidden away in block "X" somewhere, but there is no solid indicator of it. That block is undoubtedly where the AHB bus is as well, but afaik that's strictly for on-chip communication.

I had mentioned, then deleted a comment on X containing the MC in my other post because of that large amount of SRAM near the DDR3 I/O. For me the only blocks that might work as MCs in a "traditional GPU"-sense would be the U blocks.


In any case, we are somewhat splitting hairs at this point. Same with the question of where the UVD and SX blocks are located. We know they are both there. The only reason this somewhat bothers me is the odd position of the Q blocks and that, if we are to assume that the NB is off somewhere else, there is one pair of blocks that I can't assign a function to. I've tried placing the Qs as LDS, video heads, etc, but none of those identities has been truly satisfying. There's no reason to throw out the baby with the bathwater, though, and say that it must be a dual setup engine. I believe the TMU/cache similarities between Brazos and Latte to be quite conclusive. Even the SRAM in Brazos' "TD" block matches up quite nicely with what we see in Latte's T blocks, minus those 32 banks. With that in mind, it's then easy to say we also have a pair of ROP blocks and L2 blocks. So I dunno, I've pretty much found the answers I was looking for (mainly the core configuration). Not sure if it's worth digging much deeper. Ah, it sure would be nice if Nintendo documented their hardware like the old days. If they admit to not being in the graphics race, why keep it a secret? It's like they are inviting their hardcore defenders to turn delusional. :/

I still think your original placement of P as the SX is correct.[/splittingonemorehair] :p

I don't think the lack of two MCs automatically means it defaults to a dual engine. That's more of a convenient coincidence. (Some of) Those duplicates could relate to something we haven't considered.

But yeah I think we are about as detailed as we can get on our own and agree about whether or not it's necessary to pursue things any further. And I'm with you on keeping things secret. Whether they are in the race or not, I don't see why the need to keep it secret. The people that want to talk about the hardware are going to do so regardless of the info being available. And the games are still going to be the same regardless. But I think I've been able to scratch my itch with this die enough to satisfy me and get properly back to what I need to.
 

Donnie

Member
Mar 24, 2005
2,862
18
1,415
I've been lurking since the WUSTs and the change in opinion from all these "experts" on the Wii U's potential has been amusing to say the least.

It went from the Wii U being a 1TFLOP beast based off the 4850 that's totally next-gen, to a 768 GFLOPS GPU that's good enough for next-gen, to somewhere in between 500-700 and then with the release of the Die Photo, a 352 GFLOPS GPU maybe almost twice as powerful as the 360 to uh, 360 with better effects?

It's quite ironic that you say "1TFLOP beast", because this is actually a prime example of where your argument falls down. It's based on the idea that opinions shouldn't change as time goes by and new information comes to light. Yet the thought of a 1TFLOP GPU in Wii U was not considered a beast in the slightest back when we first heard about the dev kits. In fact, plenty of people ridiculed it, saying 1TFLOP would be nowhere near powerful enough, because the assumption from many was that PS4 and Xbox3 would have at least 4TFLOP GPUs. But because the expectations and final reality of the PS4 and Xbox One GPUs have dropped so substantially over time (just like Wii U's), now 1TFLOP can be considered a beast.

As others have said, information changes and opinion changes with it; that's normal. If opinion doesn't change with new information, then there's something wrong.

Also, just FYI, plenty of people believe 352 GFLOPS is still the most likely scenario for Latte, at least more likely than 176 GFLOPS, but what we're doing here is discussing all possibilities as long as they have a reasonable technical basis behind them.
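For reference, all the GFLOPS figures traded in this thread fall out of the same formula: ALU count × 2 ops per cycle (a multiply-add) × clock. A minimal sketch, assuming the commonly cited 550 MHz Latte clock and the 160 vs 320 stream processor scenarios under discussion (assumptions from the thread, not confirmed specs):

```python
# Theoretical peak single-precision GFLOPS for a VLIW-era Radeon:
# each ALU can retire one multiply-add (2 flops) per cycle.

def peak_gflops(alus, clock_mhz):
    return alus * 2 * clock_mhz / 1000

# The two Latte scenarios discussed here (550 MHz assumed):
print(peak_gflops(160, 550))  # 176.0
print(peak_gflops(320, 550))  # 352.0

# Xenos (360) for comparison: 48 shaders x 5 ALUs at 500 MHz
print(peak_gflops(240, 500))  # 240.0
```

Paper FLOPS are only part of the story, of course; architectural efficiency is exactly why these numbers keep getting reinterpreted.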
 

ArchangelWest

Member
Feb 5, 2013
447
0
0
I would add that, in general, this thread has helped many people to move away from the outdated idea that GFLOPS/TFLOPS are the be-all, end-all.
 

Smurfman256

Member
May 24, 2012
963
0
395
I just bought Trine 2 and I noticed a clumpy, rainbowing effect on the objects while viewing the game on the TV. While viewing it on the gamepad, however, the effect was gone. Could someone care to explain?
 

USC-fan

Banned
Oct 9, 2005
7,115
1
1,235
Interesting how opinions change based on getting more and more information.

Funny I called out this bs 8 months ago....

The data was right in front of them, but like always they spin it to levels that are just silly. The same exact thing still happens today with the same "experts." It's one crazy theory after the next... lol
 

wsippel

Banned
May 25, 2006
14,529
0
0
Erfurt, Germany
Funny I called out this bs 8 months ago....

The data was right in front of them, but like always they spin it to levels that are just silly. The same exact thing still happens today with the same "experts." It's one crazy theory after the next... lol
Not much different from the Xbox One or PS4 situation then, yet I don't see you lay down your superior knowledge in those threads.
 

Donnie

Member
Mar 24, 2005
2,862
18
1,415
Funny I called out this bs 8 months ago....

The data was right in front of them, but like always they spin it to levels that are just silly. The same exact thing still happens today with the same "experts." It's one crazy theory after the next... lol

I don't think it's a great idea to start with the "I told you so" considering some of the gems you came out with.
 

Hermii

Member
Sep 17, 2012
6,976
0
0
I just bought Trine 2 and I noticed a clumpy, rainbowing effect on the objects while viewing the game on the TV. While viewing it on the gamepad, however, the effect was gone. Could someone care to explain?

Latte hasn't got enough shaders to display this effect on the gamepad. This clearly shows that Fourth Storm is right and there is only 176 GFLOPS. They forgot to put in dual rainbow shaders.

It's a software issue, not a hardware issue.
 

soulx

Banned
Apr 1, 2013
189
0
0
It's quite ironic that you say "1TFLOP beast", because this is actually a prime example of where your argument falls down. It's based on the idea that opinions shouldn't change as time goes by and new information comes to light. Yet the thought of a 1TFLOP GPU in Wii U was not considered a beast in the slightest back when we first heard about the dev kits. In fact, plenty of people ridiculed it, saying 1TFLOP would be nowhere near powerful enough, because the assumption from many was that PS4 and Xbox3 would have at least 4TFLOP GPUs. But because the expectations and final reality of the PS4 and Xbox One GPUs have dropped so substantially over time (just like Wii U's), now 1TFLOP can be considered a beast.

As others have said, information changes and opinion changes with it; that's normal. If opinion doesn't change with new information, then there's something wrong.

Also, just FYI, plenty of people believe 352 GFLOPS is still the most likely scenario for Latte, at least more likely than 176 GFLOPS, but what we're doing here is discussing all possibilities as long as they have a reasonable technical basis behind them.
1TFLOP was regarded as a point of pride for many of the posters in that thread. When the info from Epic came out saying that Unreal Engine 4 required at least 1TFLOP to shine, everyone was cool with that, as the general sentiment at the time was that the Wii U's GPU was about that powerful (4850).

Except to the more level-headed individuals, it was already pretty clear even back then that the Wii U wasn't going to be as powerful as so many thought. There were numerous verified insiders who came and said as much. Arkam, lherre and Chopper all said that the system wasn't nearly as powerful as you guys thought.

But how did most of the folks in WUST react to the news (not going to link to specific posts so as not to ostracize anyone, but this is all in the 2nd WUST):
Even if his credentials check out, it doesn't mean what he says it's true. We have multiple rumors suggesting it will be more powerful than current gen consoles, while we have one rumor that says it's weaker. What to believe what to believe...
Ok I don't get it. ONE guys makes ONE post and all of GAF believes him? You guys must be joking. I mean, how do we believe him? He could be just some troll who has been watching the thread until he got accepted to GAF. Then made this post in order to see how many would believe him. I mean, he said it had modern hardware in it that was weaker than what was in the 360. Any GPU from AMD 4000 series and up, can not be weaker than an early 2000 series. And even if it's tri-core, out-of-order pretty much puts it above the 360 right there. It makes little sense to me how (s)he's more believable than IGN.
What does this say? Not much because we don't know what THESE developers was expecting/hoping for in terms of "next-gen" hardware.
All of these kinds of information/comments are from industry. Why just one person with old version of dev kit can cause such big reactions in this thread? Actually I think ppl in this thread are ppl with better knowledge of current status of wiiU.
Arkam is bullshitting us all. Ignore his posts if you like, because they're totally inaccurate and illogical.

There's no need for concern.
and so on to the point that posters were downright flaming the guy...

Hell Donnie, you yourself were pretty aggressive towards Arkam.

You can go ahead and claim that everyone was being all level-headed and that the gradual release of information about the system's power (through the performance of launch ports, the die photo, etc.) is what allowed you to come to your current conclusion, but that clearly isn't the case. It's the bias towards Nintendo in those threads that clouded these so-called experts' judgment and prevented them from realizing that the Wii U isn't nearly as powerful as they thought. It was clear from the very beginning.
 

lwilliams3

Member
Apr 10, 2007
2,528
0
1,135
Indiana
I just bought Trine 2 and I noticed a clumpy, rainbowing effect on the objects while viewing the game on the TV. While viewing it on the gamepad, however, the effect was gone. Could someone care to explain?

You may get an answer to that on Frozenbyte's website.

1TFLOP was regarded as a point of pride for many of the posters in that thread. When the info from Epic came out saying that Unreal Engine 4 required at least 1TFLOP to shine, everyone was cool with that as the general sentiment at that time was the Wii U's GPU was about that powerful (4850).

Except to the more level-headed individuals, it was already pretty clear even back then that the Wii U wasn't going to be as powerful as so many thought. There were numerous verified insiders who came and said as much. arkam, lherre and Chopper all said that the system wasn't nearly as powerful as you guys thought.

But how did most of the folks in WUST react to the news:





and so on to the point that posters were downright flaming the guy...

Hell Donnie, you yourself were pretty aggressive towards Arkam.

You can go ahead and claim that everyone was being all level-headed and that the gradual release of information about the system's power (through the performance of launch ports, the die photo, etc.) is what allowed you to come to your current conclusion, but that clearly isn't the case. It's the bias towards Nintendo in those threads that clouded these so-called experts' judgment and prevented them from realizing that the Wii U isn't nearly as powerful as they thought. It was clear from the very beginning.

In the case of Arkam, it was the way he initially presented the info that caused most of the negativity towards his character and sources. He did mellow out, though, and we eventually had enough pieces of the puzzle to understand what was going on.

In any case, changing opinions of unreleased (or released) hardware is common. You can just look at the recent rise and fall of opinions on the Xbox One to see that.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
May 4, 2007
13,696
272
1,285
1TFLOP was regarded as a point of pride for many of the posters in that thread. When the info from Epic came out saying that Unreal Engine 4 required at least 1TFLOP to shine, everyone was cool with that as the general sentiment at that time was the Wii U's GPU was about that powerful (4850).

Except to the more level-headed individuals, it was already pretty clear even back then that the Wii U wasn't going to be as powerful as so many thought. There were numerous verified insiders who came and said as much. arkam, lherre and Chopper all said that the system wasn't nearly as powerful as you guys thought.

But how did most of the folks in WUST react to the news (not going to link to specific posts so as not to ostracize anyone, but this is all in the 2nd WUST):





and so on to the point that posters were downright flaming the guy...

Hell Donnie, you yourself were pretty aggressive towards Arkam.

You can go ahead and claim that everyone was being all level-headed and that the gradual release of information about the system's power (through the performance of launch ports, the die photo, etc.) is what allowed you to come to your current conclusion, but that clearly isn't the case. It's the bias towards Nintendo in those threads that clouded these so-called experts' judgment and prevented them from realizing that the Wii U isn't nearly as powerful as they thought. It was clear from the very beginning.
That's some piss-poor revisionism there.

'1TF beast' (A beast? Really? It's funny what recent XB1 events have done to people), 'Arkam was attacked by fanboys!' (the way he presented his information apparently had nothing to do with it), 'Everybody expected 4850 levels' (even though it was always assumed to be downclocked, and most discussions early on were about the possibility of 640 SPs in the UGPU).

D- for the quoting effort, though.
 

bgassassin

Member
Sep 8, 2006
8,188
0
0
1TFLOP was regarded as a point of pride for many of the posters in that thread. When the info from Epic came out saying that Unreal Engine 4 required at least 1TFLOP to shine, everyone was cool with that as the general sentiment at that time was the Wii U's GPU was about that powerful (4850).

Except to the more level-headed individuals, it was already pretty clear even back then that the Wii U wasn't going to be as powerful as so many thought. There were numerous verified insiders who came and said as much. arkam, lherre and Chopper all said that the system wasn't nearly as powerful as you guys thought.

But how did most of the folks in WUST react to the news (not going to link to specific posts so as not to ostracize anyone, but this is all in the 2nd WUST):





and so on to the point that posters were downright flaming the guy...

Hell Donnie, you yourself were pretty aggressive towards Arkam.

You can go ahead and claim that everyone was being all level-headed and that the gradual release of information about the system's power (through the performance of launch ports, the die photo, etc.) is what allowed you to come to your current conclusion, but that clearly isn't the case. It's the bias towards Nintendo in those threads that clouded these so-called experts' judgment and prevented them from realizing that the Wii U isn't nearly as powerful as they thought. It was clear from the very beginning.

I saw this last night, but blu beat me to mentioning how much revisionist history is going on in your posts here, and in some of the other posts in this thread as well, though I was trying not to address it. But this has got to stop.

First, the only one throwing around this "expert" comment is you. What does that mean, and what is it even based on? Is it based on who talked about the hardware the most? If that's the case, then I'm a "PS4 expert" as well.

http://www.neogaf.com/forum/showthread.php?t=477540
http://www.neogaf.com/forum/misc.php?do=whoposted&t=477540

While we're at it I'm also apparently an Xbox fanboy.

http://forum.beyond3d.com/showpost.php?p=1708152&postcount=2382

But I have tried to talk about the Xbox in the past. In hindsight I probably should have left my opinion out of the OP.

http://www.neogaf.com/forum/showthread.php?t=460678

These links are also examples as to why I don't take console discussion as seriously as others might. I just want to talk about the hardware, but some want to take it way further than it should go.

The rest of what you are saying is just as absurd. The whole idea of Wii U having a 1TFLOP GPU started well before the WUSTs even came into existence.

First was the initial talks of the GPU being an R700. Then we had IGN saying this:

http://www.ign.com/articles/2011/05/14/we-built-nintendos-project-cafe

IGN's sources originally informed us that Nintendo would be calling upon a custom triple core CPU, similar to what is being used in the Xbox 360, for their next system, as well as a graphics processor built upon AMD's R700 architecture. We called upon our trusted sources to help us determine the retail products with the closest possible clock speed and power, as well as fill in some of the blanks for other components, like finding a suitable motherboard and appropriate amount of RAM.


Our system comprises of the following:

CPU: 3.2GHz Triple Core AMD Athlon II X3 450
GPU: XFX Radeon HD 4850 GPU with 1GBs of VRAM
Motherboard: BIOSTAR A780L3L Micro ATX
RAM: 2GBs of Kingston DDR3
Power Supply: Rosewill RV350 ATX 1.3
Hard Drive: 80GB WD Caviar Blue 7200RPM

And then we had the badly Google-translated article, which took about seven or eight months to get a proper translation of.

http://translate.google.com/transla...s=org.mozilla:en-US:official&biw=1920&bih=872

By the way, say RADEON HD4000 system, a translation is to RADEON HD4890 to high-end RADEON HD4350 low-end, but because there was also information from a different system from the AMD processing performance that exceeds the 1TFLOPS, the middle is the spec basis rather than the RADEON HD4000 system of range class, it might be close to the RADEON HD4800 series of high-end system Possibly.

And from there was an article about the dev kits being underclocked.

http://www.gamesindustry.biz/articles/2011-06-16-wii-u-dev-kits-underclocked

And then lherre telling us about the dev kits overheating and how the GPU would stall because of that.

So, similar to what blu says, once the WUST started some of us speculated on the idea of a GPU with 640 ALUs (me, 640-800) with a lower clock, and eventually also a possibly smaller process.


By the time that UE4 info came out, that talk had long been gone. The only talk was that Wii U should be capable of running UE4 in a reduced capacity. That UE4 info came almost a year after the WUSTs started. As someone heavily involved in the Samaritan and UE4 talk, I know what you're saying is wrong.

The whole "some people kept lowering their view on specs" claim isn't correct either. Here is the first speculated guess I posted.

http://www.neogaf.com/forum/showpost.php?p=30236063&postcount=5354

The only real change I made over the following year and a half was the clock speed, which in turn would lower the FLOPs metric. Other than that, it was seeing the actual die shot. Making it sound like there were multiple downward revisions isn't correct either.
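As a rough illustration of how that one change moves the number, here's a sketch (not official figures: 2 FLOPs per ALU per clock is the usual fused-multiply-add convention, 550 MHz is the commonly reported final clock, and the ALU counts are the speculated and die-shot-era readings, not confirmed Latte specs):

```python
# Rough sketch: peak theoretical GPU throughput from ALU count and clock.
# Assumes the usual convention of 2 FLOPs per ALU per clock (one fused
# multiply-add). None of these are confirmed Latte figures.
def gpu_gflops(alus: int, clock_ghz: float, flops_per_alu_per_clock: int = 2) -> float:
    """Peak theoretical throughput in GFLOPS."""
    return alus * flops_per_alu_per_clock * clock_ghz

# The early 640-ALU speculation at a hypothetical 600 MHz dev-kit clock:
print(round(gpu_gflops(640, 0.6), 1))   # 768.0 GFLOPS
# The same ALU count at the commonly reported final 550 MHz clock:
print(round(gpu_gflops(640, 0.55), 1))  # 704.0 GFLOPS
# The 160-ALU reading many took from the die shot:
print(round(gpu_gflops(160, 0.55), 1))  # 176.0 GFLOPS
```

So a single clock revision shifts the headline figure without any change to the ALU count itself, which is the point being made above.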


Also, as blu already mentioned, it wasn't that people reacted to the news like that. They reacted to the way Arkam presented his info. I bet all of those quotes were in response to Arkam. Lherre had already been sharing things for months by that point, and he never once got that kind of reaction. And if we are going to talk about Arkam, then let's make sure to get the full picture of what he said.

http://www.neogaf.com/forum/showpost.php?p=42010101&postcount=1420

Some yes. But the same problems are still present if you are porting a game over from the Xbox 360. So if companies don't want to invest in retooling games, you will get crappy (downgraded) ports.


That said, I am nothing but excited for the console and can't wait to see the first round of games made from the ground up on the WiiU. That is when we will see what this little beast can do!

The bias thing is funny to me because, at least for me, it wouldn't have anything to do with Nintendo, but with me being optimistic and always looking for the positives. One poster even pointed out my optimism.

Here I am being optimistic with PS4.

http://www.neogaf.com/forum/showpost.php?p=38658228&postcount=139
bgassassin said:
I agree. I think Sony is on track for a powerful console that won't break the bank this coming gen.

Here I am being optimistic about Xbone despite the info I had just passed along.

http://www.neogaf.com/forum/showpost.php?p=47835397&postcount=1522

Depressing would have been one of the last words I would have used. I think it's pretty powerful considering retail cost limits.

http://www.neogaf.com/forum/showpost.php?p=47838939&postcount=1559

Only if the focus is on a couple of points. For me personally those "negative" points are a non-issue so far.

So if anything I'm biased for all the consoles.


I don't know what it is you're even trying to achieve with these posts. They're (old) opinions. Heck, here was another opinion of mine.

http://www.neogaf.com/forum/showpost.php?p=47540962&postcount=2039

IMO, and based on a discussion I prompted on B3D at one time dealing with PS4 and Xbox 3, Nintendo seems to be going through having to put together new documentation and dev tools for their HD transition. And this is the biggest issue right now. Like one of the posters said at that time, you're only as good as your tools. It seems PS3 had the same trouble early on. We've seen quite a few cases of devs mentioning they had to learn things on their own with Wii U. I don't see it as intentional by Nintendo, but I do see it as frustrating, because Nintendo already has enough going against them, whether deserved or not, and it can be considered reason to impact the possibility of future games, again whether deserved or not.

http://www.neogaf.com/forum/showpost.php?p=47702193&postcount=537

Yeah, like I said in that post, I believe in my evaluation of Wii U's potential, but it's on Nintendo to show what Wii U is capable of. I won't go too technical, but to me it seems like Nintendo is having to get newer documentation and dev tools together because of their transition to HD and modern hardware development. So they're going through growing pains right now, and hopefully they'll be able to provide devs with a better environment where they aren't having to figure out so much with Wii U on their own.

And what did we learn not even a day after that second post?

http://www.neogaf.com/forum/showpost.php?p=47712851&postcount=648

"We benefited by not quite being there for launch - we got a lot of that support that wasn't there at day one... the tools, everything."

"Tools and software were the biggest challenges by a long way... the fallout of that has always been the biggest challenge here," Idries reaffirms. "[Wii U] is a good piece of hardware, it punches above its weight. For the power consumption it delivers in terms of raw wattage it's pretty incredible. Getting to that though, actually being able to use the tools from Nintendo to leverage that, was easily the hardest part."

I'm not going around saying "I told you so" because it's a freaking opinion. I stopped doing "I told you so" on opinions because to me it's like boasting about a coin toss landing the way you called it. And likewise for trying to put someone down when the toss doesn't go as they called it.




Anyway, to get back on topic, there was one thing I kept forgetting to mention that I wanted to pass along. Other than the duplicates in the die shot, the other half of the idea I had about a dual graphics engine (or maybe something like Cypress) was that Flipper/Hollywood had three rasterizers.




http://www.segatech.com/gamecube/overview/

PLL: Phase Lock Loop
eFB: Embedded Frame Buffer
eTM: Embedded Texture Memory
TF: Texture Filter
TC: Texture Coordinate Generator
TEV: Texture Environment
RASx: Rasterizer
C/Z: Color/Z Calculator
PEC: Pixel Copy Engine
SU: Triangle Setup
CP: Command Processor
DSP: Audio DSP
XF: Triangle Transform Engine
NB: Northbridge - all system logic
including CPU interface, Video
Interface, Memory Controller, I/O
Interface


I probably need to reiterate that my view is just an opinion, so that if/when we learn that Wii U does not have a dual engine, some of you won't try to come back later and say that I claimed it would have one.
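For what the dual-engine idea would mean in raw numbers, here's a back-of-the-envelope sketch (illustrative only: one triangle per engine per clock and the 550 MHz clock are assumptions for the example, not confirmed Latte figures):

```python
# Back-of-the-envelope: peak triangle throughput is bounded by the number
# of setup/raster engines x triangles per engine per clock x clock speed,
# no matter how capable the tessellator feeding them is.
def peak_mtriangles_per_sec(engines: int, clock_mhz: float,
                            tris_per_engine_per_clock: int = 1) -> float:
    """Theoretical upper bound on triangle throughput, in millions per second."""
    return engines * tris_per_engine_per_clock * clock_mhz

# Single engine vs a hypothetical dual engine at the same 550 MHz clock:
print(peak_mtriangles_per_sec(1, 550.0))  # 550.0 Mtris/s
print(peak_mtriangles_per_sec(2, 550.0))  # 1100.0 Mtris/s
```

That doubling of the ceiling is why a second engine matters so much under heavy tessellation, regardless of how good the tessellation unit itself is.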
 

Fourth Storm

Member
Feb 17, 2006
6,713
1
1,040
So it seems conversation has died down/been diverted in here over the past few days. I suppose I may as well let loose a little tidbit that has been passed on to me.

I have heard that Latte is manufactured on a 45nm (not 40nm) process at Renesas. I have not been able to get 100% confirmation, but I do trust the source. I have my own thoughts but will save them for later. Feel free to make of this info as you will.
 

bgassassin

Member
Sep 8, 2006
8,188
0
0
So it seems conversation has died down/been diverted in here over the past few days. I suppose I may as well let loose a little tidbit that has been passed on to me.

I have heard that Latte is manufactured on a 45nm (not 40nm) process at Renesas. I have not been able to get 100% confirmation, but I do trust the source. I have my own thoughts but will save them for later. Feel free to make of this info as you will.

A quick search shows it's not farfetched.

http://www.edn.com/electronics-news/4320767/Matsushita-Renesas-Testing-45nm-SoC-Technology
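For a rough sense of what the process node would mean for block sizes (ideal scaling only; in practice SRAM, analog, and I/O shrink worse than logic):

```python
# Ideal area scaling between process nodes: area goes roughly with the
# square of the feature size. Real layouts deviate from this, so treat
# the result as a ballpark only.
def area_scale(node_from_nm: float, node_to_nm: float) -> float:
    """Ideal area multiplier when the same block is laid out on another node."""
    return (node_to_nm / node_from_nm) ** 2

# The same block at 45nm vs 40nm would come out roughly 27% larger:
print(round(area_scale(40, 45), 3))  # 1.266
```

So if Latte really were 45nm rather than 40nm, identical logic would occupy noticeably more die area, which could affect how block sizes are read off the photo.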
 

Raist

Banned
Jan 27, 2007
24,541
1
0
It's quite ironic that you say "1Tflop beast" because this is actually a prime example of where your argument falls down. It's based on the idea that opinions shouldn't change as time goes by and new information comes to light. Yet the thought of a 1Tflop GPU in the WiiU was not considered a beast in the slightest back when we first heard about the dev kits. In fact, plenty of people ridiculed it, saying 1Tflop would be nowhere near powerful enough, because the assumption from many was that PS4 and Xbox3 would have at least 4Tflop GPUs. But because the expectations and final reality of the PS4 and Xbox One GPUs have dropped so substantially over time (just like WiiU), now 1Tflop can be considered a beast.

The fuck?
 

Cosmonaut X

Member
Jun 5, 2006
9,604
0
0
Scotland
So it seems conversation has died down/been diverted in here over the past few days. I suppose I may as well let loose a little tidbit that has been passed on to me.

I have heard that Latte is manufactured on a 45nm (not 40nm) process at Renesas. I have not been able to get 100% confirmation, but I do trust the source. I have my own thoughts but will save them for later. Feel free to make of this info as you will.

Hm. Could that explain the odd sizes of certain components?
 

Xun

Member
Feb 10, 2006
14,157
1
0
31
United Kingdom
It's all speculation, but what are people now saying about the GPU? I've been out of the loop.

I know people all have conflicting theories but I'm curious.
 

ozfunghi

Member
Jun 19, 2010
5,469
0
580
So it seems conversation has died down/been diverted in here over the past few days. I suppose I may as well let loose a little tidbit that has been passed on to me.

I have heard that Latte is manufactured on a 45nm (not 40nm) process at Renesas. I have not been able to get 100% confirmation, but I do trust the source. I have my own thoughts but will save them for later. Feel free to make of this info as you will.

Heard where? The guy from Chipworks said an advanced 40nm process.
 

AzaK

Member
Jun 11, 2011
8,363
1
0
So it seems conversation has died down/been diverted in here over the past few days. I suppose I may as well let loose a little tidbit that has been passed on to me.

I have heard that Latte is manufactured on a 45nm (not 40nm) process at Renesas. I have not been able to get 100% confirmation, but I do trust the source. I have my own thoughts but will save them for later. Feel free to make of this info as you will.
They're not confused about the CPU? :)

Weird. Although, as we know, the Wii U isn't an SoC, so would Nintendo go to the expense of customising it? And of course the question has to be asked: why use 45nm over other processes?!
 

Phazon

Member
Mar 3, 2012
1,907
0
0
Belgium
www.4gamers.be
While I'm not a very techie guy like most of you here, I really enjoy this discussion. Keep up the good work guys and hopefully you'll be able to comment a bit more on all this after we've seen what Wii U games at E3 are going to look like. :)
 