
WiiU technical discussion (serious discussions welcome)

QaaQer

Member
Are contributors getting a copy? I'd like to see it.

You'd have to be pretty sure of who you were sending the photo to; who wants to be sued? And I'd bet that they are all using a security watermark. I think Chipworks has some sample photos of other chips, though, if you just want to see what a chip in general looks like.
 

ahm998

Member
Fix:
Sorry guys, I opened two threads and posted in the wrong one.

Please accept my apology.

Why are they taking money for scanning the GPU, and what kind of information will they give?
 

Clefargle

Member
What does this irrelevant shit have to do with the actual tech specs of the system? Get your vitriol out of this thread, learn grammar, and read the OP.

What's wrong with Nintendo, guys?

All these games not announced for Wii U:

Metal Gear Rising
BioShock 3
Crysis 3
Dead Space 3
Metal Gear: Phantom Pain
Tomb Raider
Dishonored
GTA V
Dark Souls 2
DmC
Resident Evil 6
Dead or Alive 5
Naruto Ultimate Ninja Storm 3
Far Cry 3
FFXIII: Lightning Returns

And there are so many more.

Nintendo has lost its magic for attracting 3rd parties again and again. I don't know why; even Mr. Reggie promised us we'd get many games, and still nothing has shown up.

That means that from now on the Wii U will only have 1st-party support for the next generation, unless Nintendo works its magic very soon.

Nintendo brought every control type and a good mid-range GPU, and we still get nothing.

I think in the end this is Nintendo's mistake, and the 3rd parties are doing their job very well. I remember when the PS3 came out; even with difficult development it got plenty of support.

I hope my thinking turns out to be wrong and Nintendo solves this very soon.
 

Daedardus

Member
Can someone explain why these pictures are so expensive?

I guess they have to get their Wii U money back, but couldn't someone just buy a broken Wii U from somewhere and scan them in themselves? Or is it hard to get access to that kind of equipment, even if you work in the industry?
 

LeleSocho

Banned
Can someone explain why these pictures are so expensive?

I guess they have to get their Wii U money back, but couldn't someone just buy a broken Wii U from somewhere and scan them in themselves? Or is it hard to get access to that kind of equipment, even if you work in the industry?

The process of stripping down a chip costs a lot of money. Here's an example of how it works and what machines are used:
http://www.ifixit.com/Teardown/Apple+A6+Teardown/10528/1
 

tipoo

Banned
Can somebody help a man lost and tell me what on earth is going on here? Pictures of the GPU? I don't understand this. Why does it cost $200 for pictures and who charges that? Do teardowns like I often see for devices not work for this? Couldn't somebody just open their Wii U and look at the information? Or are these some kind of special images showing more than the human eye can see? Why can't you post the pictures?

I've never been so lost before.

It's not just a picture of the chip; they dissolve the top layers and take a high-res picture of the actual architecture of the chip. It comes out something like the die shots Intel and AMD release, and you can tell things like shader count from it.

It will look something like this: you can see how many compute clusters there are, and from that plus the architecture type you can tell how many shaders. And what's more, from the shader count plus the architecture type plus the clock speed (which we know is 550MHz thanks to marcan), we can tell the GFLOPS.

https://twitter.com/marcan42

gk104-die-shot.jpg
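To make that arithmetic concrete, here's a minimal sketch of the shader-count-to-GFLOPS estimate (Python; the shader counts below are hypothetical placeholders, since we won't know the real figure until the die shot is analysed):

Code:
# Rough theoretical peak for a unified-shader GPU: each shader ALU can do
# one multiply-add per clock, i.e. 2 floating-point ops per cycle.
OPS_PER_SHADER_PER_CLOCK = 2
CLOCK_GHZ = 0.55  # 550MHz, per marcan's clock findings

def peak_gflops(shader_count, clock_ghz=CLOCK_GHZ):
    """Theoretical peak GFLOPS = shaders * ops per clock * clock in GHz."""
    return shader_count * OPS_PER_SHADER_PER_CLOCK * clock_ghz

# Hypothetical shader counts, purely to show how the estimate scales:
for shaders in (160, 320, 480):
    print(f"{shaders} shaders -> {peak_gflops(shaders):.1f} GFLOPS peak")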



So Fourth Storm, will the details go here or in a separate thread, or be shared through PMs? Also, god, we got to 200 dollars fast. I love you geeky guys :p


(Also, I just skimmed through marcan's tweets for the first time in a while... Nintendo didn't strip their binaries on the Wii U?! The thing seems like a hacker's wet dream.)
 

OryoN

Member
Nice work guys! That was faster than I thought. Much thanks to those who contributed more than expected. Sorry for those who wanted to contribute but didn't make it in time... don't worry, remain on standby for the CPU project. ;)

Cool bananas. And remember, post fake diagrams and info and wait a few days. Let all the leeching game sites that should have done this themselves suck on some wrong info for a while.

This is another project that I would gladly fund, haha.
 
It's not just a picture of the chip; they dissolve the top layers and take a high-res picture of the actual architecture of the chip. It comes out something like the die shots Intel and AMD release, and you can tell things like shader count from it.

It will look something like this: you can see how many compute clusters there are, and from that plus the architecture type you can tell how many shaders. And what's more, from the shader count plus the architecture type plus the clock speed (which we know is 550MHz thanks to marcan), we can tell the GFLOPS.

https://twitter.com/marcan42

gk104-die-shot.jpg



So Fourth Storm, will the details go here or in a separate thread, or be shared through PMs? Also, god, we got to 200 dollars fast. I love you geeky guys :p


(Also, I just skimmed through marcan's tweets for the first time in a while... Nintendo didn't strip their binaries on the Wii U?! The thing seems like a hacker's wet dream.)

Is that the actual Latte chip or is it just a photo that you pulled up for reference?
 

tipoo

Banned
Is that the actual Latte chip or is it just a photo that you pulled up for reference?

That's Nvidia's GK104 Kepler architecture, specifically the GTX 680 graphics card.

This is GK110, which no consumer card currently has, just for fun. You can see the added compute units.

NVIDIA-GeForce-Titan-Is-the-Name-of-NVIDIA-s-GK110-Graphics-Card.jpg
 

Donnie

Member
We all know it doesn't come anywhere near the next-gen machines. It may have a GPU that is similar in that it's much more modern, but it's held back in every other regard.

In the Trine 2 thread, a developer of the game basically confirms the Wii U is more powerful than the HD twins, but it doesn't have the power to run the game at 60fps at 720p. A mid-range PC could easily do that, probably even at 1080p.

The GPU specs are pretty well laid out. Its GPU is based off an AMD card from 2010. Its CPU is heavily based off the old IBM PowerPC chips.

Knowing vaguely what PC GPU it's based on isn't a specification. The GPU specs are very far from well laid out; we actually know almost nothing about them. Hopefully with this Chipworks scan we'll soon know most of the specs.

In terms of the CPU, lots of newly released chips are based on older architectures to varying degrees (most of them, really), including the Xbox 3's and PS4's new CPUs. What would be interesting to know is just how heavily Espresso is based on Broadway cores.
 

Donnie

Member
I went to bed last night as this whole thing was being arranged, and by the time I got back on the PC it had already been paid for... Oh well, if we end up going for the CPU shot there's still $20 here for that.
 

FLAguy954

Junior Member
I went to bed last night as this whole thing was being arranged, and by the time I got back on the PC it had already been paid for... Oh well, if we end up going for the CPU shot there's still $20 here for that.

Same here, I'm in $15 for the CPU if you Gaffers don't beat me to the punch.
 

pestul

Member
Got my $10 in last night. If this results in something really good, I think I will contribute for the cpu as well.. just for the heck of it lol.

And Fourth Storm was never heard from again..
 

Thraktor

Member
We could do the CPU after, but there's not as much unknown there (it's three 750-based cores with 3MB of eDRAM L2 cache). If we wanted to do it right, we should buy both the Espresso photo and the Broadway photo, so that we could determine if there have been significant changes to the cores (although there's no guarantee that we could tell what the changes are, just how big the changes are).

And Fourth Storm was never heard from again..

He's good for it (he better be, he's got my money, too ;) ), but I think he has to wait a couple of days for funds to clear, so it may be a bit longer than expected.
 

pestul

Member
Yeah, since there's really no rush on anything, we should probably take our time and keep the amount limited to like $5 each for the cpu. I don't really care if the GPU turns out to be a disappointment, but it will be nice to finally know a few more specifics about it.
 

tipoo

Banned
Spitball here, what could we tell from a CPU die picture?

Show me a GPU die and tell me what architecture it is and I can probably tell how many shader units are in there, as I mentioned above, and from that how many GFLOPS since we know the clock speed. But apart from the core count I'm not sure what we could tell from a CPU. I'd still love to see it, I'm just not sure what it will reveal.

I guess the memory interface could be interesting. I wonder if the CPU has full-speed access to the 32MB of eDRAM the GPU supposedly has. That would be interesting for passing GPGPU data back and forth.

Conversely, I wonder if the GPU has full-speed access to the CPU's alleged 3MB.

Heck, neither number was confirmed; maybe we can tell how much eDRAM there is from the dies? We know the measurements of the dies, so with the percentage of each die the eDRAM takes up we would know how many mm² it occupies.
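As a rough sketch of that area arithmetic (every number below is a made-up placeholder; the real die area, eDRAM fraction, and cell density have to come from the Chipworks shots and the identified process):

Code:
# All values are hypothetical placeholders, for illustration only.
DIE_AREA_MM2 = 150.0        # placeholder total die area
EDRAM_FRACTION = 0.30       # placeholder: say the eDRAM blocks cover ~30% of the die
DENSITY_MBIT_PER_MM2 = 6.0  # placeholder macro density for eDRAM on a ~40nm process

edram_area_mm2 = DIE_AREA_MM2 * EDRAM_FRACTION      # mm^2 occupied by eDRAM
edram_mbit = edram_area_mm2 * DENSITY_MBIT_PER_MM2  # capacity in megabits
print(f"~{edram_area_mm2:.0f} mm^2 of eDRAM -> roughly {edram_mbit / 8:.0f} MB")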

Chipworks has also traditionally stated the fabrication process of the chips they photograph, so I expect that detail, although I doubt there will be any surprise there since we know the CPU is being made at IBM's old 45nm plant, and the GPU is probably on TSMC's 40nm.
 
I don't really care if the GPU turns out to be a disappointment, but it will be nice to finally know a few more specifics about it.
Me too, at least we'll know what's there. With the CPU we kinda know (and anyone expecting otherwise is only setting himself up for disappointment). I'd still like to see it done, but it really has to come with that disclaimer.


And I'm sorry that I can't chip in (pun intended), but I'm really low on money; I haven't had the money to buy any new games this year either (good thing I have a huge-ass backlog though). So yeah.



As for what the biggest surprise would be for me, that would have to be a layout done by hand, like with Apple's A6:



Generally, logic blocks are automagically laid out with the use of advanced computer software. However, it looks like the ARM core blocks (of the A6) were laid out manually—as in, by hand.

A manual layout will usually result in faster processing speeds, but it is much more expensive and time consuming.

The manual layout of the ARM processors lends much credence to the rumor that Apple designed a custom processor of the same caliber as the all-new Cortex-A15, and it just might be the only manual layout in a chip to hit the market in several years.
Source: http://www.ifixit.com/Teardown/Apple+A6+Teardown/10528/2

If the GPU was hand-tweaked that way (not counting on it, but it would be impressive; I believe such a method is also supposed to help reduce power consumption a tad, since it's about making the design more efficient).
 

Donnie

Member
I reckon that's gonna be disappointing:

It's a core-shrunk PPC 750 with new SMP interfacing (although I'd love to be wrong).

Not exactly; after all, we know it has bigger/different caches, and even Broadway isn't just a PPC 750.

I doubt it'll give us anything nearly as interesting as the GPU shot, of course, but as long as people don't expect to see a totally different CPU core from Broadway then they shouldn't be disappointed.

At the same time, just because the way the CPU reacts to software seems exactly like Broadway doesn't mean there aren't any changes beyond the ones we know of. It might be difficult to work those changes out from a die shot though...

Anyway, I've got no problem sticking in $20 for it, considering I was going to do that just for the GPU and ended up not needing to.
 

Pociask

Member
Here's a dumb, but serious question:

A lot of cost and expense is going into looking at what the hardware looks like. With Wii Us in the wild, though, is there no way to run benchmarking software on the hardware itself? If there's not now, will there be in the future when the ability to run homebrew becomes available?

My frame of reference is that when a new GPU is released, there are usually bar charts produced by magazines, which basically amount to "Yep, this new one is this much better than this old one." Did we ever get something like that for last-gen consoles?
 

tipoo

Banned
Here's a dumb, but serious question:

A lot of cost and expense is going into looking at what the hardware looks like. With Wii Us in the wild, though, is there no way to run benchmarking software on the hardware itself? If there's not now, will there be in the future when the ability to run homebrew becomes available?

My frame of reference is that when a new GPU is released, there are usually bar charts produced by magazines, which basically amount to "Yep, this new one is this much better than this old one." Did we ever get something like that for last-gen consoles?


No, unfortunately. Any benchmarking software would have to be compiled specifically for the Wii U, as existing benchmarks are written for x86 or ARM architectures. And even then the tests would be incomparable since they're written differently, plus the cost that goes into creating benchmarks is very high.


So unfortunately I think browser benchmarks are the best we will get for the CPU, and with the GPU we will be able to figure out GFLOPS from the die (shaders + architecture + clock speed). And from THAT, we could point to a card it is similar to on PC and see where it sits on the charts, although that's still not directly comparable since consoles make better use of the same hardware.
 

Thraktor

Member
I've heard that mentioned often, but how concrete is that?

Wsippel posted a die shot of Wii's EEPROM a page or two back. You'll notice that an almost identical code is etched in an almost identical manner onto one of the Chipworks shots (it's in one of the small preview images). Considering it was long expected to be EEPROM, that's basically as good a confirmation as you're going to get.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
No, unfortunately. Any benchmarking software would have to be compiled specifically for the Wii U, as existing benchmarks are written for x86 or ARM architectures. And even then the tests would be incomparable since they're written differently, plus the cost that goes into creating benchmarks is very high.


So unfortunately I think browser benchmarks are the best we will get for the CPU, and with the GPU we will be able to figure out GFLOPS from the die (shaders + architecture + clock speed). And from THAT, we could point to a card it is similar to on PC and see where it sits on the charts, although that's still not directly comparable since consoles make better use of the same hardware.
There are quadrillions of open-source GPU benchmarks out there. Getting the GPU to do stuff in the context of a hacked console is the actual challenge. Normally, reverse-engineered platform-holder APIs come into play then.
 
A lot of cost and expense is going into looking at what the hardware looks like. With Wii Us in the wild, though, is there no way to run benchmarking software on the hardware itself? If there's not now, will there be in the future when the ability to run homebrew becomes available?
It's more feasible to benchmark then, yes.
My frame of reference is that when a new GPU is released, there are usually bar charts produced by magazines, which basically amount to "Yep, this new one is this much better than this old one." Did we ever get something like that for last-gen consoles?
Like that, no. Those are benchmarks with the same benchmark application running across all hardware variations, and that simply can't be done here. You can never compare directly, but you can get a better grasp of what the capabilities are, I guess.

What you can benchmark is stuff like how many floating-point operations it can handle, abstract stuff, whereas most magazine benchmarks boil down to "it runs X game at Y frames per second in high detail mode", which in the past has come down to things like optimized drivers. This tends to be more honest, but also not as visual; it's just putting numbers out there.
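As a toy example of that kind of abstract benchmark (just a generic matrix-multiply GFLOPS estimate you could run on any machine with NumPy, nothing Wii U specific):

Code:
import time
import numpy as np

# Naive sustained-GFLOPS estimate: time one large matrix multiply.
# Multiplying two n x n matrices costs roughly 2 * n^3 floating-point ops.
n = 2048
a = np.random.rand(n, n).astype(np.float32)
b = np.random.rand(n, n).astype(np.float32)

start = time.perf_counter()
c = a @ b
elapsed = time.perf_counter() - start

gflops = (2 * n ** 3) / elapsed / 1e9
print(f"~{gflops:.1f} GFLOPS sustained on this machine")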

As for your question, for the GPUs never, but for Cell we've got some:

・Dhrystone v2.1
PS3 Cell 3.2GHz: 1879.630
PowerPC G4 1.25GHz: 2202.600
PentiumIII 866MHz: 1124.311
Pentium4 2.0AGHz: 1694.717
Pentium4 3.2GHz: 3258.068

・Linpack 100x100 Benchmark In C/C++ (Rolled Double Precision)
PS3 Cell 3.2GHz: 315.71
PentiumIII 866MHz: 313.05
Pentium4 2.0AGHz: 683.91
Pentium4 3.2GHz: 770.66
Athlon64 X2 4400+ (2.2GHz): 781.58

・Linpack 100x100 Benchmark In C/C++ (Rolled Single Precision)
PS3 Cell 3.2GHz: 312.64
PentiumIII 866MHz: 198.7
Pentium4 2.0AGHz: 82.57
Pentium4 3.2GHz: 276.14
Athlon64 X2 4400+ (2.2GHz): 538.05
Source: http://forum.beyond3d.com/showthread.php?t=36058

But that's about it (and it's CPU/PPE only)
 

tipoo

Banned
There are quadrillions of open-source GPU benchmarks out there. Gettiing the GPU to do stuff in the context of a hacked console is the actual challenge. Normally reverse-engineered platform-holder APIs come into play then.

That's essentially what I said. All the benchmarks are written specifically for x86 or ARM systems; getting them running on the Wii U CPU would require a major rewrite, which would be a gargantuan task. They got some CPU-only benchmarks on the PS3's Cell, but that was only with Linux running on it, which isn't supported here.
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
That's essentially what I said. All the benchmarks are written specifically for x86 or ARM systems; getting them running on the Wii U CPU would require a major rewrite, which would be a gargantuan task.
Perhaps I was not clear enough. Let me rephrase myself.

What architecture-specific benchmarks are out there is of no importance - there are plenty of open-source benchmarks written for well-known public APIs (OGL, D3D). The actual challenge in the context of a hacked console is getting the GPU to obey. At all. GPUs are largely more complicated beasts than CPUs when it comes to programming. That's why you need to at least initially leverage any existing code that can send commands to the GPU. Such code, lo and behold, is the platform holder's own GPU stack, AKA drivers, APIs, etc.
 

tipoo

Banned
Perhaps I was not clear enough. Let me rephrase myself.

What architecture-specific benchmarks are out there is of no importance - there are plenty of open-source benchmarks written for well-known public APIs (OGL, D3D). The actual challenge in the context of a hacked console is getting the GPU to obey. At all. GPUs are largely more complicated beasts than CPUs when it comes to programming. That's why you need to at least initially leverage any existing code that can send commands to the GPU. Such code, lo and behold, is the platform holder's own GPU stack, AKA drivers, APIs, etc.

Ah ok, understood.
That's also why the PS3 had a few CPU benchmarks done on it back when it could run Linux, but no GPU benchmarks.

But even with that the benchmarks didn't use the CPU the way developers are supposed to, so those were also of limited worth.

So any ETA on when the die shots will arrive?
 

blu

Wants the largest console games publisher to avoid Nintendo's platforms.
Ah ok, understood.
That's also why the PS3 had a few CPU benchmarks done on it back when it could run Linux, but no GPU benchmarks.
Yes.

But even with that the benchmarks didn't use the CPU the way developers are supposed to, so those were also of limited worth.
That is normally a matter of how much the people involved know what they're doing. 'If it builds - ship it!' is a particularly wrong mentality when it comes to hw-performance-assessing code.
 

Pociask

Member
Thanks for the replies everybody - I'm really interested in this stuff, but don't have the foggiest what you're saying most of the time :)
 
So, are they going to compare the photo of the die to known Radeon architectures and see which one is the most similar, then count the Shader cores, TEVs, ROPS and then, by using the formula for that specific architecture, find out what the theoretical max GFLOPS of the GPU will be?
 

z0m3le

Banned
So, are they going to compare the photo of the die to known Radeon architectures and see which one is the most similar, then count the Shader cores, TEVs, ROPS and then, by using the formula for that specific architecture, find out what the theoretical max GFLOPS of the GPU will be?

If they can count the shaders, they can just multiply that number by 2 (ops per shader per clock) and then by 0.55 (the clock in GHz), which gives us the GFLOPS number. Of course, counting ROPs, texture units and other stuff will be fun too, but that will give us a performance estimate. If it is 40nm as assumed, that gives the Wii U over twice the transistor count for just the GPU space compared to Xenos, so it could be a pretty nice count.

With what someone in the know said about the shader count being odd, I half expect GCN's 64-shader CUs, with something unique like the basic 384 shader count plus another CU or two, giving us 492 GFLOPS or 563 GFLOPS for the latter. Of course, 440 or 520 shaders would be a fair comparison to the R700 architecture, giving similar performance of 484 GFLOPS or 572 GFLOPS for the latter.

Of course, I could be very optimistic with these numbers and it might be much lower, but that die size points to a higher shader count, though that depends on the architecture. Also, I'm crazy to even guess GCN since it's likely a 40nm part and it would be the first GCN part at that process node. We will know soon whether even that educated guess is wrong, though.
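For reference, here's the same shaders x 2 x 0.55 arithmetic applied to the candidate counts above (a sketch only; these shader counts are guesses, not confirmed specs):

Code:
# Candidate shader counts floated above (none confirmed), at the known 550MHz clock.
CLOCK_GHZ = 0.55

for shaders in (448, 512, 440, 520):
    gflops = shaders * 2 * CLOCK_GHZ  # 2 FLOPs (one multiply-add) per shader per clock
    print(f"{shaders} shaders -> {gflops:.1f} GFLOPS")
# Prints 492.8, 563.2, 484.0 and 572.0, matching the ~492/563 and 484/572 figures above.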
 

Durante

Member
As for your question, for the GPU's never, but for CELL we've got some:

But that's about it (and it's CPU/PPE only)
There's very little mystery to Cell performance (including the PPE and the SPEs); there are dozens of scientific papers from half a decade ago dedicated to the topic.
 

tipoo

Banned
So, are they going to compare the photo of the die to known Radeon architectures and see which one is the most similar, then count the Shader cores, TEVs, ROPS and then, by using the formula for that specific architecture, find out what the theoretical max GFLOPS of the GPU will be?

We can, yes. If we know the architecture family, the number of shader cores, and the clock speed, we can get pretty close to the theoretical GFLOPS count based on other chips in that architecture. That's what I've been saying.
 