
EDGE: "Power struggle: the real differences between PS4 and Xbox One performance"

twobear

sputum-flecked apoplexy
But... if in my wildest fantasies Cerny decides to walk on stage, the lights shut off, and the projector shines an image of 900 MHz, then consider me excited.
My favourite part of the console warz is always reading these quasi-sexual fantasies about fave console manufacturers unveiling 'teh megtonz'.

'And then Kaz Hirai is on stage and then gabe Newell is like 'hi did you miss me' and the lights go off and then the words HALF LIFE 3 EXCLUSIVE PLAYSTATION 4 XBONE SUX' appears on the screen and like everyone starts crying and bill gates shoots himself! Wow!'
 

ekim

Member
[image]
 

ekim

Member
A bit out of the loop. What is this in relation to?

Albert told us that they are working on a tech deep dive about the Xbox One, specifically in comparison to the PS4, i.e. why the extra HW in the X1 makes up for the raw power difference.

So it's an R9-series-based chip then.

That's what I'm guessing. I wonder how much difference this would make in practical power and what chip the PS4 GPU is based on.
 

ekim

Member
All this and how they made the eSRAM bandwidth 204 GB/s.
Armchair opinion:
I guess it was already discussed here - but I think the parallel read and write might come from the fact that they implemented 4 blocks of 8MB eSRAM each. If they all have their own bus, you could do a read and write in parallel.
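For what it's worth, here's the back-of-the-envelope arithmetic behind the 204 GB/s figure, assuming the publicly reported numbers (853 MHz eSRAM clock, 128 bytes per cycle per direction, and the claim that a combined read+write fits on roughly 7 of every 8 cycles). A minimal sketch, not anything official:

```python
# Back-of-the-envelope eSRAM bandwidth from the publicly reported figures.
# Assumptions (not official): 853 MHz clock, 128 bytes per cycle per
# direction, and a combined read+write possible on ~7 of every 8 cycles.

CLOCK_HZ = 853e6        # eSRAM clock after the upclock from 800 MHz
BYTES_PER_CYCLE = 128   # bytes moved per cycle in one direction

one_way = CLOCK_HZ * BYTES_PER_CYCLE / 1e9   # GB/s, read OR write
peak = one_way * (1 + 7 / 8)                 # add a write on 7 of 8 cycles

print(f"one-way: {one_way:.0f} GB/s")   # ~109 GB/s
print(f"peak:    {peak:.0f} GB/s")      # ~205 GB/s, quoted as 204
```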
 
Armchair opinion:
I guess it was already discussed here - but I think the parallel read and write might come from the fact that they implemented 4 blocks of 8MB eSRAM each. If they all have their own bus, you could do a read and write in parallel.
I didn't see that discussion, but wouldn't 4 separate blocks of RAM, each with its own bus, be rather inefficient if a developer wanted to use the whole 32MB for the same thing?
 

stryke

Member
Armchair opinion:
I guess it was already discussed here - but I think the parallel read and write might come from the fact that they implemented 4 blocks of 8MB eSRAM each. If they all have their own bus, you could do a read and write in parallel.

Armchair opposing opinion:

If that's the case, isn't this something you should have known about at the hardware level from the beginning, instead of "discovering" the extra bandwidth as suggested by that Digital Foundry article some time ago?
 

bobbytkc

ADD New Gen Gamer
Armchair opinion:
I guess it was already discussed here - but I think the parallel read and write might come from the fact that they implemented 4 blocks of 8MB eSRAM each. If they all have their own bus, you could do a read and write in parallel.

That is impossible. That would mean that the read-only (or write-only) bandwidth is also 204 GB/s.
 

Durante

Member
I didn't see that discussion, but wouldn't 4 separate blocks of RAM, each with its own bus, be rather inefficient if a developer wanted to use the whole 32MB for the same thing?
No, that's more or less how any wide bus works already. You don't notice that from the software perspective.

However, the original explanation doesn't make sense IMHO, since even if you have separate buses they are each still designed as either uni- or bidirectional. And you should know that from the start ;)
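To make that concrete with a toy model (the bank count and stripe size here are purely illustrative, not actual Xbox One parameters): if addresses are interleaved across banks, a single contiguous buffer automatically spreads over all of them, so software never notices the split.

```python
from collections import Counter

# Toy model of address interleaving across memory banks.
# NUM_BANKS and STRIDE are illustrative, not actual Xbox One parameters.
NUM_BANKS = 4    # e.g. four 8MB eSRAM blocks
STRIDE = 256     # bytes mapped to one bank before moving to the next

def bank_of(addr: int) -> int:
    """Bank that a byte address lands in under simple interleaving."""
    return (addr // STRIDE) % NUM_BANKS

# A contiguous 32MB buffer spreads evenly across all four banks, so using
# "the whole 32MB for the same thing" wastes nothing.
stripes = Counter(bank_of(a) for a in range(0, 32 * 2**20, STRIDE))
print(stripes)   # every bank gets exactly the same number of stripes
```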
 

Durante

Member
Their spin - their interpretation that minimises disadvantages and maximises advantages. I wonder if we'll get any graphs this time around. A nice eSRAM latency graph perhaps? Some added-up bandwidths?
Considering how silly MS has been about specs lately, I wouldn't even be surprised by an actual bar-chart comparison of die area.

Maybe with the die area of 3 Azure servers added into the mix. Can't forget those 300000 cloud servers.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
Maybe with the die area of 3 Azure servers added into the mix. Can't forget those 300000 cloud servers.

They should add the entire bandwidth of the internet as well. That's what's connecting those servers to the boxes, so it is totally legit.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
What if, on the 25th of September, it's not Microsoft's NDAs that expire...

conspiracykeanu.jpg


...but Sony's?

/jk
 
Developers have already come out and said the PS4 is much more powerful than the Xbox One, so I am not sure what MS is so excited to tell us. Are we to believe that developers do not know the full specs of the Xbox One?
 

ekim

Member
Developers have already come out and said the PS4 is much more powerful than the Xbox One, so I am not sure what MS is so excited to tell us. Are we to believe that developers do not know the full specs of the Xbox One?

Before optimization. And MS wants us to know that there are more possibilities to optimize on X1. At least, that's how I understand it.
 

big_z

Member
Developers have already come out and said the PS4 is much more powerful than the Xbox One, so I am not sure what MS is so excited to tell us. Are we to believe that developers do not know the full specs of the Xbox One?

Well, if you were to believe the crazy dGPU vomit, then developers are locked out from accessing it until after the NDA expires.
 
That's what I'm guessing. I wonder how much difference this would make in practical power and what chip the PS4 GPU is based on.

Prepare to be disappointed. Everything we know about the Xbox One suggests it contains no post-GCN 1.0 enhancements, let alone Volcanic Islands technology. It's no more likely than the 12GB of RAM upgrade or dual APUs or secret "dGPUs" or any other absurd thing fanboys have been hop-scotching through this year.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
And MS wants us to know that there are more possibilities to optimize on X1.

I still can't square this statement with Microsoft's other statement that they have no knowledge of the details of the PS4's architecture.
 

ekim

Member
Prepare to be disappointed. Everything we know about the Xbox One suggests it contains no post-GCN 1.0 enhancements, let alone Volcanic Islands technology. It's no more likely than the 12GB of RAM upgrade or dual APUs or secret "dGPUs" or any other absurd thing fanboys have been hop-scotching through this year.

Is HSA related to GCN, or a part of its feature set? If so, we will definitely see post-GCN 1.0 enhancements.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
By the way, I still don't really understand why Microsoft went with eSRAM and not eDRAM. Apparently, eSRAM gives them more options for manufacturing. But is that worth giving up eDRAM's obvious benefit of having way more capacity? Couldn't they have gone for a discrete on-package eDRAM die manufactured on a different process, like Intel did with Crystalwell, which provides the CPU/GPU with 128MB of L4 cache?
 

tzare

Member
Before optimization. And MS wants us to know that there are more possibilities to optimize on X1. At least, that's how I understand it.

Since the XB1 seems a bit more 'complicated' than the PS4, I guess optimization may favour XB1 games slightly more than PS4 ones. PS4 will still have the edge, for sure, but maybe that '50%' difference some claim now may end up being 40% or something similar when games ship.
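For reference, the '50%' figure drops straight out of the CU counts; a quick sketch using the publicly known specs (18 vs 12 GCN CUs, 800 vs 853 MHz):

```python
# Where the "50%" claim comes from: raw shader throughput computed from
# the publicly known CU counts and clocks of both consoles.

def tflops(cus: int, clock_mhz: float) -> float:
    # 64 lanes per GCN CU, 2 FLOPs per lane per cycle (multiply-add)
    return cus * 64 * 2 * clock_mhz * 1e6 / 1e12

ps4 = tflops(18, 800)   # ~1.84 TFLOPS
xb1 = tflops(12, 853)   # ~1.31 TFLOPS

print(f"PS4 {ps4:.2f} vs XB1 {xb1:.2f} TFLOPS")
print(f"per-clock CU advantage: {18 / 12 - 1:.0%}")    # the "50%"
print(f"gap after XB1's upclock: {ps4 / xb1 - 1:.0%}") # ~41%
```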
 

Durante

Member
By the way, I still don't really understand why Microsoft went with eSRAM and not eDRAM. Apparently, eSRAM gives them more options for manufacturing. But is that worth giving up eDRAM's obvious benefit of having way more capacity? Couldn't they have gone for a discrete on-package eDRAM die manufactured on a different process, like Intel did with Crystalwell, which provides the CPU/GPU with 128MB of L4 cache?
I had the same question a few months ago, and the only answer I got that made sense was the manufacturing agnosticism you mentioned.

They must believe that to be extremely worthwhile in the long run.
 
I had the same question a few months ago, and the only answer I got that made sense was the manufacturing agnosticism you mentioned.

They must believe that to be extremely worthwhile in the long run.

Yeah, it's still crazy, though. If they spent those transistors on eDRAM they'd have at least 128MB, and as much as 256MB, of high-bandwidth embedded memory. But I don't know what manufacturing options there are for eDRAM at 28nm.
 

ElTorro

I wanted to dominate the living room. Then I took an ESRAM in the knee.
Yeah, it's still crazy, though. If they spent those transistors on eDRAM they'd have at least 128MB, and as much as 256MB, of high-bandwidth embedded memory. But I don't know what manufacturing options there are for eDRAM at 28nm.

Does it have to be on the same process as the APU? They could have gone for a discrete eDRAM package on a different process, much the same way that Intel does it. TSMC, for instance, can manufacture eDRAM at 40nm. An APU at 28nm combined with an eDRAM package at 40nm, connected via a 128-bit-wide bus, sounds possible to me as a layman.
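As a sanity check on that layman's guess: bus bandwidth is just width times transfer rate. The 1.6e9 transfers/s below is made up purely for illustration (Intel quotes Crystalwell at ~50 GB/s in each direction, for comparison):

```python
# Generic bus bandwidth: (width in bits / 8) * transfer rate.
# The 1.6e9 transfers/s is a made-up example, not a real part;
# for comparison, Intel quotes Crystalwell at ~50 GB/s per direction.

def bandwidth_gbs(width_bits: int, transfers_per_sec: float) -> float:
    return width_bits / 8 * transfers_per_sec / 1e9

print(bandwidth_gbs(128, 1.6e9))  # 25.6 GB/s for a hypothetical 128-bit link
```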
 

Guymelef

Member
Considering how silly MS has been about specs lately, I wouldn't even be surprised by an actual bar-chart comparison of die area.

Maybe with the die area of 3 Azure servers added into the mix. Can't forget those 300000 cloud servers.

Something like that.
usMkJkz.png
 

KidBeta

Junior Member
Albert told us that they are working on a tech deep dive about the Xbox One, specifically in comparison to the PS4, i.e. why the extra HW in the X1 makes up for the raw power difference.



That's what I'm guessing. I wonder how much difference this would make in practical power and what chip the PS4 GPU is based on.

It looks pretty close to an HD 7790 to me, as in practically identical: if it had 2 CUs turned off for yields it would be identical (sans memory controllers, eSRAM). Everything we have seen so far has shown us that it is GCN 1.0, and we know VI will be far beyond that, so anyone hoping for some massive 'but its teh supper efficient VI so therefore FLOPs aren't comparable' kind of moment is going to be sorely disappointed.
 
By the way, I still don't really understand why Microsoft went with eSRAM and not eDRAM. Apparently, eSRAM gives them more options for manufacturing. But is that worth giving up eDRAM's obvious benefit of having way more capacity? Couldn't they have gone for a discrete on-package eDRAM die manufactured on a different process, like Intel did with Crystalwell, which provides the CPU/GPU with 128MB of L4 cache?

From what I can gather, eDRAM is embedded DRAM: 1 transistor per bit instead of SRAM's six, so the same transistor budget that buys 32MB of eSRAM could buy roughly 192MB of eDRAM.

DRAM has some disadvantages: each read clears the row (which then has to be rewritten), and refresh cycles lead to higher latency.
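The transistor arithmetic behind that capacity trade-off, as a rough sketch that ignores the DRAM capacitor and all the peripheral logic:

```python
# Rough cell-count comparison: 6T SRAM vs 1T eDRAM. Ignores the DRAM
# capacitor, sense amplifiers, and all peripheral logic.

ESRAM_MB = 32
bits = ESRAM_MB * 2**20 * 8

sram_transistors = bits * 6   # 6 transistors per SRAM cell
print(f"eSRAM cells: {sram_transistors / 1e9:.1f}B transistors")  # ~1.6B

# The same transistor budget spent on 1-transistor eDRAM cells:
edram_mb = sram_transistors / 8 / 2**20
print(f"equivalent eDRAM: {edram_mb:.0f} MB")  # ~192 MB
```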
 

Durante

Member
It looks pretty close to an HD 7790 to me, as in practically identical: if it had 2 CUs turned off for yields it would be identical (sans memory controllers, eSRAM). Everything we have seen so far has shown us that it is GCN 1.0, and we know VI will be far beyond that, so anyone hoping for some massive 'but its teh supper efficient VI so therefore FLOPs aren't comparable' kind of moment is going to be sorely disappointed.
7790 seems really close, since it also has 2 primitives/clock geometry throughput.
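(For what it's worth, 2 primitives per clock at the reported 853 MHz would peak at about 1.7 billion primitives per second.)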
 
7790 seems really close, since it also has 2 primitives/clock geometry throughput.

7790 is GCN 1.1 and has enhancements like 4 ACEs and (presumably) Tier 2 PRT support. Its resemblance to the Xbox One GPU is purely coincidental. Xbox One's hardware appears to be based on Pitcairn-level tech with fewer CUs and ROPs.
 

Osiris

I permanently banned my 6 year old daughter from using the PS4 for mistakenly sending grief reports as it's too hard to watch or talk to her
7790 is GCN 1.1 and has enhancements like 4 ACEs and (presumably) Tier 2 PRT support. Its resemblance to the Xbox One GPU is purely coincidental. Xbox One's hardware appears to be based on Pitcairn-level tech with fewer CUs and ROPs.

Yeah, the Xbox One's 2 ACEs rule out a direct comparison to the 7790.

As for it being VI / R9.... yeah, no.
 

Bundy

Banned
Albert told us that they are working on a tech deep dive about the Xbox One, specifically in comparison to the PS4, i.e. why the extra HW in the X1 makes up for the raw power difference.
Jesus Christ....

Before optimization. And MS wants us to know that there are more possibilities to optimize on X1. At least, that's how I understand it.
I hope you don't believe that magically optimized super-drivers, move engines, the audio block, "power of the cloud" and "special sauce" will make up for the missing RAM, missing CUs, missing ROPs, etc. etc.
 

Durante

Member
7790 is GCN 1.1 and has enhancements like 4 ACEs and (presumably) Tier 2 PRT support. Its resemblance to the Xbox One GPU is purely coincidental. Xbox One's hardware appears to be based on Pitcairn-level tech with fewer CUs and ROPs.
Ah, I wasn't aware of the discrepancy in the number of ACEs.
 