(02-19-2012, 08:39 PM)

Originally Posted by z0m3le

I wouldn't call it on a whim; businesses like theirs have to make investments, and having a base to start from is important. MS did originally use nearly off-the-shelf parts for the Xbox, and IBM knew MS would be coming back to them soon for a new CPU based on newer technologies, but without the unnecessary components a computer CPU would need. It could have happened either way, but I doubt Nintendo would customize the CPU much at all.

Yeah, I know that it's a 1 GHz GPU at stock; lowering it by 10% to 900 MHz would give you a lower TDP (which is all that really matters when we talk about console limits).

You could in fact remove RAM power from the GPU's power draw, because you'd count the RAM separately from the GPU, as it will likely share a bus with the CPU. The point you are trying to make about AMD making a GPU from scratch doesn't make sense; that wasn't how the 360's GPU was created, and I doubt it will be how Wii U's GPU will be created. Your order is sort of tall for the current Nintendo. Sure, they will customize the chip, get certain features they want, and cut out others; in the end it might not look like an HD 7700 series, or even an HD 6700 series, but it will likely start from one of those two.

40nm or 28nm makes the most sense for a GPU. Over time 28nm would of course be cheaper, and TSMC would have the capacity to fill far more orders as the year progresses; Wii U would be big business for them, and should be something they'd be very interested in in the long run. Also, even if you do get better yields at 40nm, you get more dies per wafer at 28nm, so it's really hard to say which is better IMO, especially in six months. For Nintendo, given the wattage, performance, and space requirements of their architecture, I think 28nm is a no-brainer, but I am not a TSMC customer, so maybe it looks grimmer than I've laid out.

The main problem I see with a 32nm chip is that you'd be designing a brand-new chip on a process AMD has never designed a GPU for, at least nothing like this. It would make much more sense to go with an existing design, one that would have existed in some form even two years ago, when Nintendo would have been figuring out what they wanted.

In the end, this really doesn't matter: it will be powerful, it will fit in the box without catching fire, and I trust Nintendo won't have any RROD issues either. Thanks for arguing your points; I see where you are coming from, but I'm just not sure I buy it. Feel free to reply; I'll read it tonight when I get to work and reply when I can.

There's a difference between having a base where the CPU is based on an existing line of their processors, which is what normally happens, and a base where they had already started making something for consoles before getting any input on what the console makers might want. The latter doesn't happen, nor does it need to, since the console CPU is going to be based on existing architecture. It's an unnecessary cost to start something generic and then modify it when you can just modify or build on what you already have. And I couldn't disagree more about Nintendo not doing much customization to the CPU. All signs point to the CPU being based on POWER7, and that chip itself is too big and hot for a console. They are either going to heavily modify it to work in that case, or, what is most likely, build something from the ground up based on POWER7, since the POWER7 CPU is only made with eight cores.

It's not just about lowering clocks; I expect Nintendo to be picky about clock speeds, to the point where I see them sticking with multiples like with GC/Wii. And considering the heat the GPU puts out, I'd see them focusing on much lower clocks.
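(For reference on why clock and voltage cuts matter for TDP: CMOS dynamic power scales roughly as C·V²·f, so a 10% clock drop alone saves about 10% of dynamic power, and more if the voltage can come down with it. A toy sketch; the wattages and ratios below are invented for illustration, not actual Wii U numbers:)

```python
# Rough CMOS dynamic power scaling: P ~ C * V^2 * f.
# All figures here are made up purely for illustration.

def dynamic_power(base_power_w, f_ratio, v_ratio=1.0):
    """Estimate dynamic power after scaling clock and voltage.

    base_power_w: baseline dynamic power in watts
    f_ratio: new_frequency / old_frequency
    v_ratio: new_voltage / old_voltage
    """
    return base_power_w * f_ratio * (v_ratio ** 2)

# 10% clock drop (1 GHz -> 900 MHz), voltage unchanged:
print(dynamic_power(40.0, 0.9))                   # 36.0 W
# Same clock drop plus a 5% voltage reduction:
print(round(dynamic_power(40.0, 0.9, 0.95), 2))   # 32.49 W
```

Note this only covers dynamic power; static leakage doesn't scale with clock, which is part of why downclocking alone has limits.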

And the reason I said you can't ignore the RAM's power draw is that you have to look at the console's total TDP, so it still has to be factored in.
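(The point being that the box's cooling is sized for the sum of everything inside it, not the GPU alone. A trivial sketch with invented wattages:)

```python
# Toy console power budget: the enclosure's cooling has to handle the
# TOTAL draw, so RAM counts against the budget even if it's billed
# separately from the GPU. All wattages are invented for illustration.
budget_w = 75.0
draws_w = {"cpu": 20.0, "gpu": 35.0, "ram": 8.0, "io_misc": 7.0}

total = sum(draws_w.values())
print(total, total <= budget_w)   # 70.0 True
```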

Actually, the 360 wasn't quite like that. It used unified shaders, but apparently was not VLIW-based like later ATi GPUs, so from what I recall it wasn't based on anything in existence. With Wii U's GPU, I doubt it starts from those two, unless by 6700 you mean Barts instead of Juniper, since they could have just put a Juniper in the dev kit if that were the case. But referring back to Xenos, it's not guaranteed that Wii U's GPU will be VLIW-based either, essentially eliminating all Radeon lines from consideration. AMD themselves said that VLIW5 saw poorer utilization once DX10 came into existence. We could very well see something non-VLIW in the GPU to overcome that issue.

IMO 40nm or 28nm makes the most sense only if you look at it from a PC perspective. This isn't something with interchangeable components, so it doesn't have to use those "standard" processes. At the same time, using TSMC doesn't sound like a Nintendo move, for the reason I mentioned before; to me that would have played into why they used NEC the last two gens to make their GPUs. As for your argument against 32nm, I think it really underestimates AMD's capabilities. If they haven't done it before and therefore couldn't do it now, I would question their ability as a processor maker. Also, while looking up how old the 32nm process is, I saw that it's considered part of the "mainline" of process shrinks while 28nm isn't. 32nm is a more mature process than 28nm, and it still gets them a much cooler GPU compared to the 55nm GPU in the dev kit.
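(On the yield-versus-density tradeoff raised above: a smaller node fits more die candidates per wafer, but a less mature process typically yields a lower fraction of good dies, so good dies per wafer can go either way. A back-of-envelope sketch using the standard dies-per-wafer approximation; the die areas and yield fractions are invented, not real figures for any of these chips:)

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Standard approximation for gross die candidates on a round wafer:
    wafer area / die area, minus an edge-loss correction term."""
    d, s = wafer_diameter_mm, die_area_mm2
    return int(math.pi * (d / 2) ** 2 / s - math.pi * d / math.sqrt(2 * s))

def good_dies(wafer_diameter_mm, die_area_mm2, yield_fraction):
    """Gross candidates scaled by the fraction that actually work."""
    return int(dies_per_wafer(wafer_diameter_mm, die_area_mm2) * yield_fraction)

# Hypothetical: the same design at 150 mm^2 on a mature node yielding 85%,
# versus shrunk to ~80 mm^2 on a newer node yielding only 60%,
# both on a 300 mm wafer.
print(good_dies(300, 150, 0.85))
print(good_dies(300, 80, 0.60))
```

With these particular made-up numbers the shrink still wins, but it's easy to pick plausible yields where the mature node comes out ahead, which is exactly why "hard to say which is better" is a fair conclusion.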

But you are right about one thing. In the end it will be powerful, and it won't blow up your house.
Last edited by bgassassin; 02-19-2012 at 08:44 PM.