Originally Posted by bgassassin
There's a difference between a base where the CPU is built on an existing line of processors, which is what normally happens, and a base where they had already started making something for consoles before getting any input from the console makers. The latter doesn't happen, nor does it need to, since a console CPU is going to be based on existing architecture. It's an unnecessary cost to start something generic and then modify it when you can just modify or build on what you already have. And I couldn't disagree more with the idea that Nintendo isn't doing much customization to the CPU. All signs point to the CPU being based on POWER7, and that chip as-is is too big and too hot for a console. They are either going to modify it heavily to make it work, or, more likely, build something from the ground up based on POWER7, since POWER7 is only manufactured as an eight-core die.
It's not just about lowering clocks either; I expect Nintendo to be picky about clock speeds, to the point where I see them sticking with clean multiples as they did with GC/Wii. And considering the heat the GPU puts out, I'd expect them to aim for much lower clocks.
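On the multiples point, the Wii is the concrete precedent: its shipped clocks were exact 1.5x multiples of the GameCube's. A quick sketch with the commonly cited figures (no Wii U numbers here, since those would be pure speculation):

```python
# Commonly cited shipped clocks: GameCube (Gekko CPU / Flipper GPU)
# vs. Wii (Broadway CPU / Hollywood GPU). The Wii ran exactly 1.5x
# the GameCube across the board, the kind of clean multiple meant here.
gc_cpu_mhz, gc_gpu_mhz = 486, 162
wii_cpu_mhz, wii_gpu_mhz = 729, 243

print(wii_cpu_mhz / gc_cpu_mhz)   # 1.5
print(wii_gpu_mhz / gc_gpu_mhz)   # 1.5
```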
And the reason I said you can't ignore the RAM's power draw is that you have to look at the console's total TDP; everything still has to be factored into that one budget.
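To make that budgeting point concrete, here's a minimal sketch with purely hypothetical numbers (nothing below is a leaked spec). The takeaway is just that RAM and everything else draw from the same envelope the GPU would otherwise get:

```python
# Hypothetical power budget for a small console enclosure.
# All numbers are illustrative assumptions, not real Wii U figures.
budget_w = 45.0                          # assumed total system draw target
cpu_w, ram_w, misc_w = 10.0, 3.0, 7.0    # assumed CPU, RAM, drive/IO draw

# Whatever the RAM and the rest consume is subtracted from what the
# GPU can be allowed to dissipate.
gpu_headroom_w = budget_w - (cpu_w + ram_w + misc_w)
print(f"GPU headroom: {gpu_headroom_w} W")  # 25.0 W
```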
Actually, the 360 wasn't quite like that. Xenos used unified shaders but apparently was not VLIW-based like later ATi GPUs, so from what I recall it wasn't based on anything that existed at the time. As for Wii U's GPU, I doubt it starts from those two parts, unless by 6700 you mean Barts rather than Juniper, since they could have just put a Juniper in the dev kit if that were the case. But referring back to Xenos, it's not guaranteed that Wii U's GPU will be VLIW-based either, which would essentially eliminate every Radeon line from consideration. AMD themselves said that VLIW5 saw poorer utilization once DX10 came into existence. We could very well see something non-VLIW in the GPU to overcome that issue.
IMO, 40nm or 28nm makes the most sense if you're looking at it from a PC perspective. But a console isn't built from interchangeable components, so it doesn't have to use those "standard" processes. At the same time, using TSMC doesn't sound like a Nintendo move for the reason I mentioned before; to me, that would have played into why they used NEC to make their GPUs the last two gens. As for your argument against 32nm, I think it really underestimates AMD's capabilities. If not having done it before meant they couldn't do it now, I would question their ability as a processor maker. Also, while looking up how old the 32nm process is, I noticed it's a full node in the mainline of process shrinks, while 28nm is a half-node. 32nm is a more mature process than 28nm and still gets them a much cooler GPU compared to the 55nm part in the dev kit.
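On the shrink benefit, the usual first-order rule of thumb is that die area scales with the square of the feature-size ratio. Real shrinks never hit the ideal, so treat these as rough upper bounds, but it shows why any of these nodes is a big step down from the 55nm part in the kit:

```python
# First-order estimate: ideal die-area scaling goes with the square
# of the feature-size ratio. Actual results fall short of this.
def ideal_area_scale(old_nm: float, new_nm: float) -> float:
    return (new_nm / old_nm) ** 2

for new in (40, 32, 28):
    print(f"55nm -> {new}nm: ~{ideal_area_scale(55, new):.0%} of original area")
# 55nm -> 40nm: ~53% of original area
# 55nm -> 32nm: ~34% of original area
# 55nm -> 28nm: ~26% of original area
```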
But you are right, though. In the end it will be powerful and won't blow up your house.
I find it funny that you and I are arguing the CPU and GPU sides in completely opposite directions: you think the CPU will be based on POWER7, but that the GPU won't be based on anything existing. You're also suggesting they would put a brand-new GPU architecture onto a new process, a tick and a tock in one cycle. For AMD GPUs at least, they usually wouldn't touch a new process without first extending an existing architecture onto it; GPUs are very complicated, after all, and can have major problems if they take a chance like that.
I think you could be right about everything, but I think it's more likely they'll use 40nm or 28nm. If they used 32nm, it would be HD4000-HD6000 based and shrunk to 32nm, but that would be more expensive than using HD7000 at 28nm. So either it will be a new architecture at 40nm, or something more along the lines of what I'm saying. And the 360's Xenos is R500; it's part of ATi's GPU lineage, fitting between the X1800 (R520) and the HD 2900 (R600), it's highly customized, and the later series was heavily based on the advances we saw in it. Also, VLIW5 wasn't used as effectively under DX10 because shader code rarely filled all five slots anymore, so roughly 20% of the GPU's ALUs (the fifth slot in particular) sat idle most of the time.
There is an excellent review that talks about why they went VLIW4.
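To illustrate the utilization problem VLIW4 was addressing: the trace below is made up, not real compiler output, but AMD's VLIW4 material cited average occupancy of around 3.4 of 5 slots, which a hypothetical sequence like this reproduces:

```python
# Toy illustration of VLIW5 slot packing (not a real shader compiler).
# Each instruction bundle has 5 ALU slots; DX10-era shader math (lots
# of scalar and vec3 ops) rarely fills them all.
bundles = [4, 3, 5, 3, 2, 4, 3, 4, 3, 3]  # hypothetical filled slots per bundle

used = sum(bundles)              # 34 slots actually doing work
available = 5 * len(bundles)     # 50 slots issued
print(f"ALU utilization: {used}/{available} = {used/available:.0%}")  # 68%
```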
I think there is little chance AMD and Nintendo will come up with a brand-new architecture; about the only reason they would is if former ArtX employees have some sort of design ready. I think their time would be better spent customizing an HD6000 or HD7000 chip. Now, you could be right about AMD shrinking an HD6000 design down to 32nm, but it would be better for them to shrink it to 28nm. If we're talking about six months from now, I doubt TSMC's problems will still be a big deal.