The differences in memory bandwidth and ROPs also line up closely with the PS4/Xbone bandwidth differences. So yeah, while not perfect, it's the nearest thing anyone has to a real-world comparison.
7850 specs (PS4 in brackets)
1.78 TFLOPS (1.84 TFLOPS)
153.6 GB/s memory bandwidth (176 GB/s)
16 CUs (18 CUs)
32 ROPs (32 ROPs)
7770 specs (Xbone in brackets)
1.28 TFLOPS (1.2 TFLOPS)
72 GB/s memory bandwidth (68 GB/s)
10 CUs (12 CUs)
16 ROPs (16 ROPs)
The point is, when these two AMD cards are put in PCs with otherwise identical specs, the 7850 gives twice the performance of the 7770, even though it only has 1.6 times the CUs.
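For what it's worth, the TFLOPS figures above fall straight out of CU count times clock speed, which is why the FLOPS gap is smaller than the CU gap suggests. A quick sanity check (the clocks are the public reference clocks for these two cards, 860 MHz for the 7850 and 1000 MHz for the 7770; GCN has 64 shaders per CU at 2 FLOPs per clock):

```python
# Rough sanity check of the TFLOPS figures quoted above.
# GCN: 64 shaders per CU, 2 FLOPs per shader per clock.
def gcn_tflops(cus, clock_mhz, shaders_per_cu=64, flops_per_clock=2):
    return cus * shaders_per_cu * flops_per_clock * clock_mhz / 1e6

hd7850 = gcn_tflops(16, 860)   # ~1.76 TFLOPS
hd7770 = gcn_tflops(10, 1000)  # 1.28 TFLOPS

print(f"7850: {hd7850:.2f} TFLOPS, 7770: {hd7770:.2f} TFLOPS")
print(f"CU ratio: {16/10:.2f}x, FLOPS ratio: {hd7850/hd7770:.2f}x")
```

Note the FLOPS ratio (~1.38x) is well under the observed 2x performance gap, which is the whole point: raw FLOPS alone don't predict real-world performance.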
If MS downgrade the Xbone further, the performance difference between it and the PS4 will be more than double.
That is not how this works.
1. A narrower bus leads to higher overhead, so you get less usable bandwidth, increased latencies, and more time spent moving data around. GPUs suffer a lot with narrow buses.
2. A unified memory pool makes better use of bandwidth. You don't have a PCIe link sitting between GPU/VRAM and CPU/main RAM; PC GPUs have to brute-force a lot of things to work around that interface.
3. You are deliberately leaving out the eSRAM and the move engines.
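Point 1 is easy to check against the numbers already quoted in this thread: peak memory bandwidth is just bus width times the effective per-pin data rate. A small sketch (the per-pin rates here are the GDDR5 speeds implied by the quoted bandwidths, i.e. 4.8 GT/s for the 7850 and 4.5 GT/s for the 7770):

```python
# Peak bandwidth = (bus width in bytes) x (effective data rate per pin, GT/s).
# Per-pin rates are inferred from the bandwidth figures quoted in this thread.
def bandwidth_gbs(bus_bits, gtps):
    return bus_bits / 8 * gtps

print(bandwidth_gbs(256, 4.8))  # 7850: 256-bit GDDR5 -> 153.6 GB/s
print(bandwidth_gbs(128, 4.5))  # 7770: 128-bit GDDR5 -> 72.0 GB/s
```

The formula only gives the theoretical peak; the narrower bus loses a bigger share of it to overhead in practice, which is the poster's point.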
That doesn't negate any of his post. You can overclock the hell out of the 7770's memory, but its performance won't go up, because of other limits such as pixel/texture fill rate, raw shader power, etc. That comparison (7770 vs 7850) maps reasonably well onto Xbone vs PS4, and there could be instances where the performance difference between a ~1-1.2 TFLOP and a 1.8 TFLOP GPU is close to double.
You can also drop the double-VRAM crutch: just look up the benchmarks of a 1GB 7850 and you'll see it more or less performs like a 2GB 7850, except in a few corner cases.
Oh, so you get it but were simply dismissing it because of ..
You can't overclock a 128-bit bus. This is just like the nonsense of comparing GPUs on raw GFLOPS alone. It reminds me of the days when people compared CPUs using MIPS. For any given total bandwidth, a wider bus with slower RAM is better than the other way around. This was already an old debate back in the GTX 280 (GDDR3, 512-bit) vs HD 4890 (GDDR5, 256-bit) era:
http://www.hwcompare.com/1473/geforce-gtx-280-vs-radeon-hd-4890-1gb/
The industry moved to GDDR5 because it provides cheaper bandwidth, not because of magic. Wide buses are expensive both on the chip and on the PCB. Even so, Nvidia delayed the transition as long as possible because of the performance losses. But let's not derail: my point is that for a GPU it's better to get 60 GB/s from DDR3 on a 256-bit bus than from GDDR5 on a 128-bit bus. Those 60 GB/s aren't the same, and you should know that. Do you, or are you just pretending?
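To make that 60 GB/s point concrete: hitting the same headline bandwidth on a bus half as wide requires double the per-pin data rate, with the burstier, higher-latency access pattern that implies. A hypothetical comparison (the 60 GB/s target is the figure from the post above, not a real part):

```python
# Per-pin data rate required to hit a given total bandwidth on each bus width.
# Same headline GB/s, very different memory behaviour underneath.
def required_gtps(target_gbs, bus_bits):
    return target_gbs / (bus_bits / 8)

print(required_gtps(60, 256))  # 256-bit bus: 1.875 GT/s per pin (DDR3-class)
print(required_gtps(60, 128))  # 128-bit bus: 3.75 GT/s per pin (GDDR5-class)
```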
We know, for sure, that the PS4 has more bandwidth than it needs for such a mid-tier GPU. The question here is whether the One's GPU will be bandwidth-starved or not with 68.3 GB/s and 102 GB/s (not 68+102=170, as I keep reading here). You DON'T need 176 GB/s to max out such a GPU, much less a GPU that is already slower than the PS4's.
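The 68.3 GB/s figure is consistent with the rumoured 256-bit DDR3-2133 main-memory interface; the 102 GB/s belongs to the separate 32 MB eSRAM pool, which is why the two peaks can't simply be summed for general workloads. A quick check:

```python
# Main-RAM peak = bus bytes x per-pin rate; DDR3-2133 = 2.133 GT/s per pin.
# The eSRAM is a separate small pool, so its peak doesn't add to this figure
# for data that has to live in main RAM anyway.
ddr3_gbs = 256 / 8 * 2.133
print(f"{ddr3_gbs:.1f} GB/s")  # ~68.3 GB/s
```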
If you ask me, I'm more worried about the eSRAM not being big enough at only 32 MB than about bandwidth.
Jesus, it's the same shit from Wii U threads all over again.
Easy, easy, I'm an unarmed citizen. Don't shoot me!
The AMD cards I referenced both had GDDR5, even the Xbone equivalent.
Lol, and what if they did? Your One 'equivalent' still has half the bus width and no extra bandwidth. Is GDDR5 the magic sauce now?
Yes, we all know the PS4 solution is better and smarter than MS's design. I'd go further: it will also be cheaper to manufacture in the future, with memory shrinking in both size and price, whereas MS will have to keep using expensive legacy DDR3 and a ton of chips. Overengineered consoles always fail one way or another. Trying to solve problems that far ahead of time is a waste of resources, because other, specialized companies will do it better. It happened to the Saturn, trying to overcome the weakness of Japanese CPUs. It happened to the PS2 and PS3 with their lacking GPU capabilities, and now it's happening to the One with its memory size problem. All of them problems that had been solved by other companies by release time.
But you are trying to sell a PS2-to-Xbox-sized gap, and I'm not buying it.
My guesses:
Scenario 1: the One's GPU has higher clocks than the PS4's (as it should). Minor differences, close to 360 vs PS3.
Scenario 2: same clocks. Mild difference; Digital Foundry goes out of business a few months after the consoles launch.
Scenario 3: the One has lower clocks. LOL festival, with slowdowns even during sports broadcasts.