
DigitalFoundry: X1 memory performance improved for production console (ESRAM 192 GB/s)


benny_a

extra source of jiggaflops
Our insiders, pretty much spot on about everything, have told us 2 cores for the Xbone OS and half a core for the PS4, so for games you would have 7.5 cores on PS4 and 6 cores on Xbone.
I wasn't aware that there were rumors about the PS4 reserved stuff.

All I thought we had was the Killzone: Shadow Fall technical analysis, which revealed they are currently using 6 cores for the game.

Didn't an "insider" say that they were having major yield issues with the ESRAM? Doesn't this go completely against that?
Several insiders said that. And no, this doesn't go against that at all. See the previous two pages for how people arrive at the downclock based on the revealed numbers.
 

Phawx

Member
I'm not sure how this is supposed to be good news, especially with the amount of PR involved in the article. :D Take a 7770, overclock its memory to hell and back, look at the performance scaling and then be excited. What is worse is that the number of ROPs is also too low to take advantage of this so-called "newfound" bandwidth.

Ideally the best news would have been that the GPU clock had gone up.

Cache is always a band-aid for different problems. This ESRAM is considerably more flexible than the eDRAM found in the 360.

Will this cache require finesse to get the most out of? Yes. Will it be as hard to develop for as the SPEs in Cell? No. Storing/transferring data is far different from wringing performance out of an obtuse function.
 

Pug

Member
I find it fascinating that Richard Leadbetter's "well-placed development sources" unveiled these esoteric and theoretical numbers to him, but we still can't get the actual fucking clock speed of the Xbone GPU. Smells funny, Dickie.

While none of our sources are privy to any production woes Microsoft may or may not be experiencing with its processor, they are making actual Xbox One titles and have not been informed of any hit to performance brought on by production challenges. To the best of their knowledge, 800MHz remains the clock speed of the graphics component of the processor, and the main CPU is operating at the target 1.6GHz. In both respects, this represents parity with the PlayStation 4.

From the article.

If there was a downclock, MS would have no choice but to tell developers, and seeing as the rumour has been doing the rounds for a month or two, devs would have been the first to find out.
 

Jagerbizzle

Neo Member
So massive that Guerrilla Games are already making use of ~5GB with a launch title, as does The Witness.

This is a pointless discussion to get into without seeing a breakdown of their memory usage. I can go allocate a 6 gig buffer right now and break an XB1 but not a PS4. That doesn't prove much of anything.
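To be fair, the point is easy to demonstrate in the abstract: an allocation fails the moment it exceeds whatever the OS leaves visible to the game, regardless of how a real game's working set breaks down. A toy sketch (Python purely for illustration; the reserve sizes are the rumored figures floating around this thread, not confirmed specs):

```python
# Toy model of the "6 gig buffer" point. The OS reserves are rumored
# figures (about 3 GB on XB1, a smaller reserve on PS4) -- NOT confirmed.
def try_alloc(total_gib: float, os_reserve_gib: float, request_gib: float) -> str:
    """Succeed only if the request fits in game-visible memory."""
    visible = total_gib - os_reserve_gib
    verdict = "fails" if request_gib > visible else "succeeds"
    return f"{request_gib} GiB alloc {verdict} ({visible:g} GiB visible to games)"

print(try_alloc(8, 3, 6))  # rumored XB1 reserve -> fails
print(try_alloc(8, 1, 6))  # rumored PS4 reserve -> succeeds
```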
 
Major Nelson is the source...

And he said this about the Xbox 360.

The Xbox 360 GPU has more processing power than the PS3's. In addition, its innovative features contribute to overall rendering performance.

Xbox 360 has 278.4 GB/s of memory system bandwidth. The PS3 has less than one-fifth of Xbox 360's total memory system bandwidth (48 GB/s).

CONCLUSION
When you break down the numbers, Xbox 360 has provably more performance than PS3. Keep in mind that Sony has a track record of over promising and under delivering on technical performance. The truth is that both systems pack a lot of power for high definition games and entertainment.

-Archive By Larry Hryb, Xbox LIVE's Major Nelson
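For context on why that archive reads the way it does: the 278.4 GB/s figure was produced by adding the eDRAM's internal bandwidth to main memory bandwidth, two separate buses that can't simply be summed for real workloads. The arithmetic, using the well-known 360 specs:

```python
# How the Xbox 360's "278.4 GB/s" PR figure was constructed: summing the
# eDRAM's internal (logic-to-DRAM) bandwidth with main memory bandwidth.
edram_internal_gbs = 256.0  # inside the eDRAM daughter die
gddr3_main_gbs = 22.4       # 128-bit GDDR3 @ 1400 MHz effective
print(edram_internal_gbs + gddr3_main_gbs)  # 278.4 -- two buses added together
```

Worth keeping in mind whenever vendor bandwidth figures are being added together, the 192 GB/s included.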
 

dr_rus

Member
Either way, this is a pretty big deal and great news for MS if the difference between multi-plat titles is at worst negligible, and probably not evident till later on (~Fall 2014?). I got the feeling there would be a big difference right off the bat, but I guess not. An X1 purchase is getting more and more enticing for me (I think the exclusives announced are better for X1 so far, but the DRM and power difference scared me away).
Upping the effective ESRAM bandwidth will help with stuff like alpha blending, antialiasing and other bandwidth-heavy effects, but we're still talking about 32 MB used for tiled back-buffer operations; the rest of the memory is MUCH slower than on PS4. Also the GPU itself is 33% slower. The difference will be there and it will be quite noticeable. Increasing the ESRAM bandwidth won't change this.
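To put the 32 MB in perspective, here's the back-of-the-envelope math on 1080p render target sizes (a rough sketch; real G-buffer layouts vary by engine and format):

```python
# Approximate sizes of common 1080p render targets vs. the 32 MB of ESRAM.
W, H = 1920, 1080
MB = 1024 ** 2

rgba8_mb = W * H * 4 / MB  # 32-bit colour target: ~7.9 MB
fp16_mb  = W * H * 8 / MB  # 64-bit HDR target:   ~15.8 MB
depth_mb = W * H * 4 / MB  # 32-bit depth/stencil: ~7.9 MB

print(f"RGBA8 {rgba8_mb:.1f} MB, FP16 {fp16_mb:.1f} MB, depth {depth_mb:.1f} MB")
# A deferred G-buffer of three or four such targets plus depth already
# brushes up against 32 MB, hence tiling or spilling some targets to DDR3.
```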
 
While none of our sources are privy to any production woes Microsoft may or may not be experiencing with its processor, they are making actual Xbox One titles and have not been informed of any hit to performance brought on by production challenges. To the best of their knowledge, 800MHz remains the clock speed of the graphics component of the processor, and the main CPU is operating at the target 1.6GHz. In both respects, this represents parity with the PlayStation 4.

From the article.

If there was a downclock, MS would have no choice but to tell developers, and seeing as the rumour has been doing the rounds for a month or two, devs would have been the first to find out.

'to the best of their knowledge' is not a firm confirmation, however you try and frame it.
 
not as a cache for particular operations.

one of the things it can serve as is a frame buffer.

look at it as a faster small truck. You put the most time sensitive stuff there for delivery while your larger trucks take a bit longer moving the not so important stuff.

What? I didn't realize the graphical assets for rendering the scene were "not so important stuff".
 

artist

Banned
Cache is always a band-aid for different problems. This ESRAM is considerably more flexible than the eDRAM found in the 360.

Will this cache require finesse to get the most out of? Yes. Will it be as hard to develop for as the SPEs in Cell? No. Storing/transferring data is far different from wringing performance out of an obtuse function.
That bears no relation to what I was saying; it's more like a tangent.
 

Pug

Member
'to the best of their knowledge' is not a firm confirmation, however you try and frame it.

They are saying they haven't been told of any changes from 800MHz and 1.6GHz.

If changes had been made months back, devs would have been told long ago.
 

benny_a

extra source of jiggaflops
This is a pointless discussion to get into without seeing a breakdown of their memory usage. I can go allocate a 6 gig buffer right now and break an XB1 but not a PS4. That doesn't prove much of anything.
They showed their breakdown of the memory usage in the February 20th post-mortem. I linked that earlier in the thread.
 
While none of our sources are privy to any production woes Microsoft may or may not be experiencing with its processor, they are making actual Xbox One titles and have not been informed of any hit to performance brought on by production challenges. To the best of their knowledge, 800MHz remains the clock speed of the graphics component of the processor, and the main CPU is operating at the target 1.6GHz. In both respects, this represents parity with the PlayStation 4.

From the article.

If there was a downclock, MS would have no choice but to tell developers, and seeing as the rumour has been doing the rounds for a month or two, devs would have been the first to find out.

You don't see a major disconnect between these very specific "theoretical" bandwidth numbers and the rather glaring line "to the best of their knowledge?"

LOL.

Keep the faith, brother.
 

saladine1

Junior Member
If esram is upped by 22% and edram is going to flop by a terabyte, then it's safe to assume that yields are exactly the same as the hard drive and floppy drive, except that performance is downgraded to accommodate GPU clock speed plus DDr5567 RAM cycles and the power of the cloud wins..
 

Elios83

Member
If anything, this news suggests that their performance took a minor hit, to 1.15 TFLOPS.

It seems a bit surreal.

So Microsoft thought they could only do one operation per clock with their ESRAM.
Clocked at 800MHz with a single 128-byte transfer per cycle, that gives 102.4GB/s.
If we go with the idea that the ESRAM is actually capable of two transfers per cycle (which it's really hard to believe Microsoft didn't know about), then you'd be right and the clock would have been slightly downgraded to 750MHz.
BUT it seems to me that they're just talking about optimizing the timing of the operations so that it's possible to sneak in more transfer operations than they initially thought, which would mean a higher theoretical ESRAM bandwidth. It's not an actual upgrade, though, and the effective bandwidth will depend on the application.
Basically: there's no downclock, but this is just a PR thing, with a PR like bandwidth figure.
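The two readings differ only in where the factor of two comes from, and the arithmetic is easy to check (assuming the widely reported 128-byte interface; none of these figures are confirmed by MS):

```python
# Three ways to read the ESRAM numbers, assuming a 128-byte-wide interface.
bus_bytes = 128  # rumored transfer width per direction

print(800e6 * bus_bytes / 1e9)      # 102.4 GB/s: one transfer/cycle at 800 MHz
print(750e6 * bus_bytes * 2 / 1e9)  # 192.0 GB/s: read+write/cycle at 750 MHz
print(800e6 * bus_bytes * 2 / 1e9)  # 204.8 GB/s: read+write/cycle at 800 MHz
```

Note that the downclock theory lands on 192 exactly, which is part of why it keeps coming up.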
 
If the report from the article is right that they can process read and write cycles simultaneously, it sounds like this actually would mean precisely that, more or less.

The article states that the max for read-only or write-only is still 102. At 100% efficiency, read+write would therefore be over 200, yet they are saying 192, which implies a lower efficiency when using dual mode.

Plus there is probably a lot more to that number than just adding those numbers up like everyone is, even if you try to account for efficiency.
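Put numerically, if the clock really is still 800 MHz, then 192 is just the theoretical dual-mode peak with an efficiency factor applied (a sketch of this reasoning, not an official breakdown):

```python
# If there's no downclock, 192 GB/s implies dual-mode transfers land on
# roughly 15 out of every 16 cycles.
peak_dual_gbs = 204.8              # 800 MHz, 128 B read + 128 B write per cycle
quoted_gbs = 192.0
print(quoted_gbs / peak_dual_gbs)  # 0.9375 -> 93.75% of cycles usable
```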
 

ekim

Member
You don't see a major disconnect between these very specific "theoretical" bandwidth numbers and the glaring line "to the best of their knowledge?"

LOL.

Keep the faith, brother.

WTF? These specific numbers were provided by MS to the devs, and when DF asked if there was a downclock, they probably simply said that they only know about the 800MHz clock, which is another specific number provided by MS to the devs.
 
That is very poor form of you, lying there. It's still viewable on Major Nelson's website and on other sites like IGN.com

They are talking about the source of DF's article being Major Nelson. That or the original poster of this statement worded his post very badly.
 
Dat secret sauce?
Wasabi


 

borghe

Loves the Greater Toronto Area
It seems a bit surreal.

So Microsoft thought they could only do one operation per clock with their ESRAM.
Clocked at 800MHz with a single 128-byte transfer per cycle, that gives 102.4GB/s.
If we go with the idea that the ESRAM is actually capable of two transfers per cycle (which it's really hard to believe Microsoft didn't know about), then you'd be right and the clock would have been slightly downgraded to 750MHz.
BUT it seems to me that they're just talking about optimizing the timing of the operations so that it's possible to sneak in more transfer operations than they initially thought, which would mean a higher theoretical ESRAM bandwidth. It's not an actual upgrade, though, and the effective bandwidth will depend on the application.
This is of course possible.. Though the fact that the numbers divide evenly the one way and are essentially rough, fuzzy math the other way seems a bit suspect.

Basically: there's no downclock, but this is just a PR thing, with a PR like bandwidth figure.
really this is no more confirmed or certain than my statement (except by PR).. heh.. just saying..
 

Vestal

Gold Member
What? I didn't realize the graphical assets for rendering the scene were "not so important stuff".

!?!?! Seriously? Everything in a computer system works on a priority scheme; some things take priority over others in how they get pushed through the system.

A frame buffer, for example, takes precedence over a "graphical asset". That graphical asset will be pushed to the GPU for calculation and creation of the actual buffer, then the buffer gets pushed to ESRAM to reside until it's needed for display. This eliminates the bottleneck of having to put the buffer back in the slower DDR3 memory. It obviously does not completely solve the "problem" in comparison to the raw bandwidth advantage the PS4 has, but it does somewhat alleviate it, coupled with lower latency.

The "magic" we will need to find out about is how the ESRAM will be leveraged by MS to make up for the memory bandwidth difference: will MS provide a robust interface with prebaked functions to make life easier for developers, or not?
 
Good news if true.

GPU still sucks.


I remember not too long ago it was rumored that the Xbone would have a 6670; now THAT GPU would've sucked!

Today not only are we getting a true GCN architecture, but the 7790 GPU (closer to Xbone's GPU) is a pretty nice GPU, with good performance even on an open platform handling high-profile PC titles without any specialized optimizations. PS4's GPU is probably closer to AMD's 7850, a better GPU for sure, but people are in for a big surprise if they're expecting a drastic difference in games.

This is probably what we'll see:

http://www.youtube.com/watch?v=qzf7Q2-A7AU
 
I remember not too long ago it was rumored that the Xbone would have a 6670; now THAT GPU would've sucked!

Today not only are we getting a true GCN architecture, but the 7790 GPU (closer to Xbone's GPU) is a pretty nice GPU, with good performance even on an open platform handling high-profile PC titles without any specialized optimizations. PS4's GPU is probably closer to AMD's 7850, a better GPU for sure, but people are in for a big surprise if they're expecting a drastic difference in games.

This is probably what we'll see:

http://www.youtube.com/watch?v=qzf7Q2-A7AU

Too much common sense in this post. Nothing to see here.
 

Jagerbizzle

Neo Member
They showed their breakdown of the memory usage in the February 20th post-mortem. I linked that earlier in the thread.

Thanks, I took a look. They're using close to half of their >4.5GB on render targets and textures, which I guess is a typical breakdown for a 30fps 1080p shooter.

I'm not sure where they'd want more RAM to go in their breakdown, but perhaps I lack imagination at this stage in the cycle.
 

artist

Banned
I remember not too long ago it was rumored that the Xbone would have a 6670; now THAT GPU would've sucked!

Today not only are we getting a true GCN architecture, but the 7790 GPU (closer to Xbone's GPU) is a pretty nice GPU, with good performance even on an open platform handling high-profile PC titles without any specialized optimizations. PS4's GPU is probably closer to AMD's 7850, a better GPU for sure, but people are in for a big surprise if they're expecting a drastic difference in games.

This is probably what we'll see:

http://www.youtube.com/watch?v=qzf7Q2-A7AU
Xbone's GPU is closer to a 7770 than a 7790.

Too much common sense in this post. Nothing to see here.
Common sense as in wishful thinking? Yes.
 

Vestal

Gold Member
I remember not too long ago it was rumored that the Xbone would have a 6670; now THAT GPU would've sucked!

Today not only are we getting a true GCN architecture, but the 7790 GPU (closer to Xbone's GPU) is a pretty nice GPU, with good performance even on an open platform handling high-profile PC titles without any specialized optimizations. PS4's GPU is probably closer to AMD's 7850, a better GPU for sure, but people are in for a big surprise if they're expecting a drastic difference in games.

This is probably what we'll see:

http://www.youtube.com/watch?v=qzf7Q2-A7AU

Get out of here with your common sense.. We shall have NONE OF IT!!!.






NONE!
 
I remember not too long ago it was rumored that the Xbone would have a 6670; now THAT GPU would've sucked!

Today not only are we getting a true GCN architecture, but the 7790 GPU (closer to Xbone's GPU) is a pretty nice GPU, with good performance even on an open platform handling high-profile PC titles without any specialized optimizations. PS4's GPU is probably closer to AMD's 7850, a better GPU for sure, but people are in for a big surprise if they're expecting a drastic difference in games.

This is probably what we'll see:

http://www.youtube.com/watch?v=qzf7Q2-A7AU

It's a shame Microsoft didn't opt for a 7790 :p
A 7790 at 900MHz would have closed the gap better.

The X1 sound block had better be impressive and deliver. It probably won't, though, because if it did, MS would have marketed it.
 

Izick

Member
Can anybody explain what this means in relation to what the "insider(s)" said about the ESRAM? I thought the rumor was that Microsoft was experiencing major issues with yields, and now this article is saying that they're allegedly getting way better results than expected. Obviously this article isn't set in stone either, but if it is true, doesn't that contradict the rumor?

Anyone care to explain?
 
Can anybody explain what this means in relation to what the "insider(s)" said about the ESRAM? I thought the rumor was that Microsoft was experiencing major issues with yields, and now this article is saying that they're allegedly getting way better results than expected. Obviously this article isn't set in stone either, but if it is true, doesn't that contradict the rumor?

Anyone care to explain?

Either read and write are asynchronous and we get a 750MHz clock, so a downgrade.
Or they found some way to work magic.
 

borghe

Loves the Greater Toronto Area
I remember not too long ago it was rumored that the Xbone would have a 6670; now THAT GPU would've sucked!

Today not only are we getting a true GCN architecture, but the 7790 GPU (closer to Xbone's GPU) is a pretty nice GPU, with good performance even on an open platform handling high-profile PC titles without any specialized optimizations. PS4's GPU is probably closer to AMD's 7850, a better GPU for sure, but people are in for a big surprise if they're expecting a drastic difference in games.

This is probably what we'll see:

http://www.youtube.com/watch?v=qzf7Q2-A7AU

You do realize the Durango GPU is actually just a bit under the lesser one you linked, and the PS4 GPU is very slightly above the higher one in the video (9% slower clock, 2 more CUs). If anything, the difference will be very slightly greater than what was in the video.
 

8bits

Banned
You do realize the Durango GPU is actually just a bit under the lesser one you linked, and the PS4 GPU is very slightly above the higher one in the video. If anything, the difference will be very slightly greater than what was in the video.

So games will basically look the same and xb1 will run at 30fps while the ps4 will run at 60? I'm ok with that.
 
From what we know though, the 7790 > Xbone GPU. Looky-looky here:

http://www.extremetech.com/gaming/1...d-launches-radeon-7790-meet-the-xbox-720s-gpu

So it isn't really accurate. 1.79 vs 1.23 TFLOPS. It is as close to a 7770, I think, as the PS4 GPU is to the 7850. Definitely not a 7790 so far...
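All of those TFLOPS figures come from the same formula, shader count × 2 ops per clock (fused multiply-add) × clock speed; the console entries are based on the leaked/rumored specs, not confirmed ones:

```python
# Peak single-precision throughput: shaders x 2 (FMA) x clock.
def tflops(shaders: int, mhz: int) -> float:
    return shaders * 2 * mhz * 1e6 / 1e12

print(tflops(768, 800))    # ~1.23 -- rumored Xbox One GPU (12 CUs)
print(tflops(640, 1000))   # ~1.28 -- Radeon 7770 GHz Edition
print(tflops(896, 1000))   # ~1.79 -- Radeon 7790
print(tflops(1024, 860))   # ~1.76 -- Radeon 7850
print(tflops(1152, 800))   # ~1.84 -- rumored PS4 GPU (18 CUs)
```

On those numbers the Xbone sits next to the 7770 and the PS4 next to the 7850, which is the comparison being made above.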


TFLOP performance on Xbox One is not confirmed, and just like the ESRAM figure it may not have been final at the time of the leak, so this analysis is flawed.

There were several articles that went live claiming the 7790 was the PC version of X1's GPU.

http://www.extremetech.com/gaming/151367-amd-launches-radeon-7790-meet-the-xbox-720s-gpu
 