It's not relevant because you're never gonna see a console MB that is inherently comparable to a PC MB. The APU design wouldn't be exotic because the PC has already had a similar design, the video memory is still in-house on the card and there are still DDR slots. Again, consoles don't use cards and memory is soldered onto a board.
Just because the form factor is different for some of these components doesn't mean the high-level architecture inherently changes. Look at laptops. There are motherboard designs that are more akin to a console than a PC - memory soldered on, discrete GPUs that aren't using a conventional PCIe slot, etc. Laptop designs are quite different from desktop PCs in terms of physical interfaces, yet they don't require some exotic memory architecture in order to communicate.
An APU isn't some mythical beast. From a high level it's pretty similar to Intel's integrated graphics - it's an SoC that includes both a CPU and a GPU. In both cases, the GPU uses main memory. Neither has anything that would automatically require a complex new memory architecture. What mostly differentiates ATi's design is processing power and the fact that it supports the current shader model.
And yeah, consoles don't use drivers.
That's not strictly the case anymore, and if anything I'd expect that to increase.
I also don't know which part of their Xfire design would transfer into a console. A multi-GPU design doesn't make sense in a modern console.
Whether it makes sense or not depends on what the manufacturer is trying to do. As armchair architects, our first inclination is to always assume that a single, more powerful GPU makes the most sense ... but we have no idea what their end goal is or what sort of issues they're seeing.
It's possible that in terms of heat/reliability, they can do better with a split design than a single GPU, even if that means they aren't quite as efficient. I suspect the bigger issue is one of usage intent. They may envision a situation where a low-power mode will be quite useful. Basically the console would use only the APU for certain situations like media, specific arcade games, social stuff, etc. I'm sure there are plenty of other reasons to do something like this that we haven't even thought of.
No no no no no. NO. The Xbox, PS2, GCN, and 360 all had what is essentially unified memory.
Ugh, you're right. For some reason I remembered Xbox having dedicated VRAM, but it looks like it was 64MB unified.
For PS2 and GCN though, Panajev2001a and Fafalada should be able to answer exactly how those architectures worked. From what I recall of some of their posts, I'm not sure you can really call them unified in the same way that the 360 was. The eDRAM on Xenos was for a very specific usage and performance goal; they could have gone without it and rendering/game design wouldn't have changed all that much. For PS2 and GCN, the rendering architecture was quite different from a traditional PC or unified design, IIRC.
Also PC games are designed with DirectX and a Windows kernel. It's a very poor choice to compare that with console development.
What does that have to do with anything? DirectX is an API for graphics and sound, and Windows is an OS. Any modern PC or console has its own functional equivalents. While console manufacturers often allow some level of 'coding to the metal', that doesn't mean a higher-level API doesn't exist.
I'm not sure what you're trying to say here. DirectX isn't tied to split memory or anything, and you can use PCs without a dedicated GPU. Moreover, the Xbox was called Xbox as shorthand for 'DirectX Box'. MS's goal was to make a console with APIs and dev environments that were closer to PC development than traditional console development. The Xbox and 360 APIs are essentially a modified subset of DirectX.
No. The PS3 has essentially the same amount as the 360. The problem with the PS3's memory is that it used two different types, with different read speeds. The GPU had access to the XDR, but that adds extreme latency, making it impractical.
I didn't say it was literally less, I said effectively less. The big issue with PS3 development is that many games coming from PC, or being ported from the 360 (the 360 being the lead console), are more GPU-memory bound than CPU bound. Since the PS3 has only 256MB of VRAM, that presents a problem for games that, on the 360, were dedicating more of the unified memory to the GPU than the CPU.
Also, the issue was made worse by the fact that the OS footprint is bigger on the PS3 than on the 360, so even more of the memory gets eaten up, particularly on the GPU side.
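To make the 'effectively less' point concrete, here's a rough back-of-the-envelope sketch in Python. The OS reservation figures are assumptions for illustration only (the real numbers varied by firmware revision), but the shape of the problem is the same: the 360 can slide its GPU/CPU split anywhere within one 512MB pool, while the PS3's GPU is effectively capped at its 256MB GDDR3 pool unless it pays the latency cost of spilling into XDR.

```python
# Rough, illustrative memory-budget comparison: 360 unified pool vs PS3 split pools.
# OS reservation numbers below are assumptions for illustration only;
# actual reservations changed across firmware versions.

MB = 1

# Xbox 360: one 512MB unified pool, OS reservation assumed here at 32MB.
x360_pool = 512 * MB
x360_os   = 32 * MB
x360_avail = x360_pool - x360_os           # remainder can be split any way

# PS3: 256MB XDR (CPU side) + 256MB GDDR3 (GPU side), OS assumed to live mostly in XDR.
ps3_xdr   = 256 * MB
ps3_gddr3 = 256 * MB
ps3_os    = 64 * MB                        # assumed OS footprint

def fits_on_360(gpu_need, cpu_need):
    # Any GPU/CPU split works as long as the total fits in the unified pool.
    return gpu_need + cpu_need <= x360_avail

def fits_on_ps3(gpu_need, cpu_need):
    # GPU assets must fit in GDDR3 (ignoring the slow XDR spill-over),
    # CPU data must fit in whatever XDR the OS leaves behind.
    return gpu_need <= ps3_gddr3 and cpu_need <= ps3_xdr - ps3_os

# A 360-lead title that leans on the GPU: 300MB of GPU assets, 150MB of CPU data.
gpu_need, cpu_need = 300 * MB, 150 * MB
print("Fits on 360:", fits_on_360(gpu_need, cpu_need))   # True  (450 <= 480)
print("Fits on PS3:", fits_on_ps3(gpu_need, cpu_need))   # False (300 > 256 GDDR3)
```

Run it with a 360-lead budget like the one above and the unified pool fits it comfortably while the split pools don't, which is exactly the porting headache being described.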
These are all issues that could pop up with a two-GPU design. I can't imagine the bandwidth issues of a console that would have three separate processing units.
What issues ... I haven't seen any that you've brought up.