
DigitalFoundry: X1 memory performance improved for production console (ESRAM 192 GB/s)

Status
Not open for further replies.

Vestal

Gold Member
This is the only thing I don't understand in the article. They say this suggests the downclock rumors are false, but by yours and everyone else's math, taking MS' 192GB/s number and their explanation that you can read/write in the same clock cycle, that works out to 96GB/s of actual bandwidth, meaning a 750MHz clock speed, aka a 50MHz downclock. Who knows though....


They already pegged the Killzone presentation demo at roughly 5GB. Also remember that PLENTY of PC games are already coming in around this number.. games with recommended requirements of 4GB+ system RAM and 2GB+ VRAM.

Yes, they recommend that amount because of the overhead from the OS and running apps. Run most of these games with everything at FULL and they still won't come close to the recommended specs.
 
So, if the numbers are accurate, does this confirm the ESRAM downgrade (102GB/s -> 96GB/s) and Thuway's info?

"128 bytes per block multiplied by the GPU speed of 800MHz offers up the previous max throughput of 102.4GB/s. It's believed that this calculation remains true for separate read/write operations from and to the ESRAM. "

It says that 102.4 is still the ceiling for read or write operations, so no, no downgrade.
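For what it's worth, the quoted arithmetic checks out. A quick sketch (the 128-byte block width and 800MHz clock are the figures from the DF article):

```python
# Sanity check of the quoted ceiling: 128 bytes per cycle at the GPU clock.
BYTES_PER_CYCLE = 128      # eSRAM block width quoted in the article
GPU_CLOCK_HZ = 800e6       # 800MHz

bandwidth_gb_s = BYTES_PER_CYCLE * GPU_CLOCK_HZ / 1e9
print(bandwidth_gb_s)      # 102.4
```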
 
Right, and people from both sides went crazy for it. Your post was fine until "lol, any news is good news, right haha". The delusional people have been corrected numerous times. Whatever, the overly hostile tone in these next Gen threads is disappointing sometimes and my coffee hasn't set in yet.

Sorry. I just don't like unnecessary excitement and misdirection. If the bandwidth improvement helps with multiplatform games that's great, but i'm not expecting miracles because of it.
 

borghe

Loves the Greater Toronto Area
Would love to see in what it was wasted... Games that look better on PC use less than 2GB.

first, I would like to see your definition (and screenshots) of "games that look better".

Second, remember that PC has two pools of RAM. 2GB on the video card, but games are still accessing system RAM, which these days is easily 8-16GB or more. Even if the game is a 32-bit x86 binary, Win7 will still give that WoW64 process up to 4GB of address space if it's available.

"128 bytes per block multiplied by the GPU speed of 800MHz offers up the previous max throughput of 102.4GB/s. It's believed that this calculation remains true for separate read/write operations from and to the ESRAM. "

It says that 102.4 is still the ceiling for read or write operations, so no, no downgrade.

This is what DF is saying. But MS' 192GB/s number doesn't make sense then. The entire basis behind their number is simultaneous reads and writes, which at 102GB/s would be a theoretical 204GB/s.. but that's not what MS said. MS said 192GB/s. So cut that effective number in half and the real number is 96GB/s. Divide that by 128 bytes and you get 750MHz.
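The reverse calculation being made here can be sketched in a few lines (the halving assumes MS's 192GB/s figure counts one read plus one write per cycle, as the post argues):

```python
# Back out the implied clock from MS's claimed figure: halve it to get
# one-direction bandwidth, then divide by the 128-byte block width.
claimed_gb_s = 192
one_way_gb_s = claimed_gb_s / 2              # 96
implied_clock_mhz = one_way_gb_s * 1e9 / 128 / 1e6
print(implied_clock_mhz)                     # 750.0
```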
 
Lol @ people thinking bandwidth will close the gap in power. Let's forget the 5GB vs 6/7GB RAM and a 50% more powerful GPU, but nah, that bandwidth...

Totally agree. When we are a few years into this generation, it's the extra available RAM and faster GPU that will give the PS4 an advantage in games.

The Xbox is confirmed to use 5GB for games, while the PS4 is unconfirmed; it's possible Sony could offer 7.5GB for games, as there was talk that 0.5GB of RAM would be required for the OS. Although that was when the PS4 had 4GB, so we will have to wait to find out.
 

Jagerbizzle

Neo Member
Isn't that a big benefit in itself?

It may not be big for you but it can be for other smaller devs.

Yes, it will free up time for work on other things. More RAM is definitely a good thing. It would be nice to have unlimited ram and never have to worry about anything =)

That being said, this is still years away from manifesting unless you're being truly careless with your usage of memory. For all intents and purposes, the transition from current gen to next gen is giving us "unlimited" RAM to play with. In a few years we'll have systems that eat all of this new RAM up, and you'll start seeing some differences at that point.
 
Q: So are the CPU and GPU at parity with the PS4? They're the same clock speeds but I'm guessing PS4's GPU has more power?

Either way, this is a pretty big deal and great news for MS if the difference between multi-plat titles ends up negligible, and probably not evident till later on (~Fall 2014?). I had the feeling there would be a big difference right off the bat, but I guess not. An X1 purchase is getting more and more enticing for me (I think the exclusives announced are better for X1 so far, but the DRM and power difference scared me away).
 
...maybe a 50MHz drop in GPU clock rate allows them to up the eSRAM clock and fit within their yields? o_O

It's a far cry... but... that's the only thing I can think of.
 

Drek

Member
Thanks for the FYI. I'll be sure to let all of my colleagues know that we're idiots.

I can find quite a few ways to waste that much ram right now, but realistically it's not going to be used for anything useful anytime soon. The biggest benefit I see from this is that it will allow you to be lazier and spend less time optimizing for memory, allowing more focus on features.

That will still be years down the road.

What is the most expensive part of developing a game? Time. The same as any large project. Especially those involving a sizable staff where any one segment can create a bottleneck for everyone else.

You say that you understand the technical side but do you know the first thing about management? Spending less time optimizing isn't being lazy, it's being economical with what actually matters. Man hours.

It's a pretty different situation. MS stated publicly that the bandwidth of the 360's EDRAM was 256GB/s at first, but that was proven to be incorrect. This time around we've got an 'official' 102GB/s (that can be confirmed by looking at the specs), but that's been revised up to 192GB/s.

Remember that according to the article this is coming from developer sources, and I don't think it would do MS any good to misrepresent the eSRAM bandwidth in that situation.
256GB/s was correct, for one specific part of the hardware. That wasn't the pipe from the EDRAM to the GPU, but that didn't stop MS from acting like it was pre-release.

Now the 192GB/s rumor is coming from unnamed sources via the gaming press with even less clarification of what exact part of the data pipeline it's referring to. It is in every way less meaningful than the 256GB/s claim from last generation.

But the reason they got there was dumb fucking luck. Microsoft thought they were going to have a memory advantage from day one and got unlucky.
Why do you assume it was dumb luck? You think that Sony's lead hardware designer, working in Japan, didn't have frequent discussion with key people from Samsung and Hynix (both South Korean companies, so literally just a few hours away) about their long term road map for GDDR5? If both told him that they expected 4-Gbit chips to be ready for full production this spring why wouldn't he design the system around that?

I can't imagine Cerny and co. hadn't been painted this very road map a year or more ago by both manufacturers of GDDR5 RAM. Initially playing it safe with the 4GB allotment was sensible at the time as the ramp up was something that could happen late in the console's life without significant negative impacts on software development or hardware production. In short, Sony had an ace in the hole and were just waiting to play it when they knew it would deliver.
 
Exactly. All the PS4 power on earth isn't gonna matter for multiplat titles if devs are developing for the lowest common denominator and xbone is way behind

this isn't true at all. unless we're talking about an Xbone game that never drops framerate and already runs at 1080p. Then, sure, the laziest port in the world will have parity on the PS4, but outside of that, unless you're specifically preventing your game from accessing certain parts of the PS4 hardware, you'll see plenty of games that run better on PS4 in the scenario you outline.

If all is equal in terms of resolution, effects, etc., any game that even occasionally drops frames on Xbone is going to do so less on the PS4, if at all.

Parity is the worst case scenario. Still, all that extra power can't be locked away, so parity is going to be incredibly rare, even if the extra power is only used to add some AA, up the native resolution, or anything else that would be simple to implement.
 

Pie Lord

Member
Truth be told I'm less interested in the bandwidth of the Xbox One's memory, than I am in the OS footprint compared to that of the PS4.
 

borghe

Loves the Greater Toronto Area
...maybe a 50mhz drop in GPU clock rate allows them to up the eSRAM clock and fit within their yields? o_O

It's a far cry... but... that's the only thing I can think of.

eSRAM doesn't have a clock rate of its own, just an amount of data it can push out in one clock cycle, which is 128 bytes. There's nothing they can do with that except adjust the GPU clock speed. Even the simultaneous reads and writes, while an effective boost, still aren't truly a doubling, because you should still hit contention when reading and writing to the same address.
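The "not truly a doubling" point can be illustrated with a toy model. To be clear, the conflict rate below is made up purely for illustration; only the 102.4GB/s one-way ceiling comes from the article:

```python
# Toy model: on conflict-free cycles both a read and a write complete (2x
# the one-way rate); on cycles that contend for the same address, only
# one operation completes (1x). The conflict rate is hypothetical.
ONE_WAY_GB_S = 102.4

def effective_bandwidth(conflict_rate):
    return ONE_WAY_GB_S * (2 * (1 - conflict_rate) + 1 * conflict_rate)

print(effective_bandwidth(0.0))   # 204.8 (ideal doubling)
print(effective_bandwidth(0.5))   # ~153.6 (half the cycles conflict)
```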
 

benny_a

extra source of jiggaflops
Q: So are the CPU and GPU at parity with the PS4? They're the same clock speeds but I'm guessing PS4's GPU has more power?
CPU yes, GPU no. GPU is quite a bit better (the 50% figure you hear is good enough even though one could argue for a bigger difference because of ROPs and such.)

That's with the assumption the interpretation that this is a 50MHz downclock isn't true.
 

8bits

Banned
Truth be told I'm less interested in the bandwidth of the Xbox One's memory, than I am in the OS footprint compared to that of the PS4.

Truth be told I'm less interested in the bandwidth of the Xbox One's memory, than I am in the quality of games. Of course, games are secondary to tech specs.
 
Yes, it will free up time for work on other things. More RAM is definitely a good thing. It would be nice to have unlimited ram and never have to worry about anything =)

That being said, this is still years away from manifesting unless you're being truly careless with your usage of memory. For all intents and purposes, the transition from current gen to next gen is giving us "unlimited" RAM to play with. In a few years we'll have systems that eat all of this new RAM up, and you'll start seeing some differences at that point.

At which time we'll all be playing multi-platforms on the superior PCs anyway. First few years is all that matters from a tech standpoint I think.
 
This post from the xboxone subreddit might interest you and many others....

So, now that both are the same, DRM wise... then... he should have no preference, right? Lol.

eSRAM doesn't have a clock rate of its own, just an amount of data it can push out in one clock cycle, which is 128 bytes. There's nothing they can do with that except adjust the GPU clock speed. Even the simultaneous reads and writes, while an effective boost, still aren't truly a doubling, because you should still hit contention when reading and writing to the same address.

I don't understand how they reach that figure then. =S

Makes no sense. Maybe that's why the "realistic" throughput is only 133GB/s...

The PS4's ram would have a much higher "realistic" throughput, correct?
 
eSRAM doesn't have a clock rate of its own, just an amount of data it can push out in one clock cycle, which is 128 bytes. There's nothing they can do with that except adjust the GPU clock speed. Even the simultaneous reads and writes, while an effective boost, still aren't truly a doubling, because you should still hit contention when reading and writing to the same address.

and of course, the only way you reach that theoretical maximum is if you are always doing equal numbers of reads and writes, which you won't be. hence 133 GB/sec being given as an example of real world results.
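One way to read the 133GB/s "real world" figure: back-solve for what fraction of traffic manages to pair a read with a write. The pairing fraction below is derived for illustration, not a published number:

```python
# If effective bandwidth = one-way ceiling * (1 + pairing_fraction),
# the reported 133GB/s implies only ~30% of cycles pair a read with a
# write. This is a back-solved illustration, not an official figure.
one_way = 102.4      # GB/s, from the article
real_world = 133.0   # GB/s, the "real world" example given

pairing_fraction = real_world / one_way - 1
print(round(pairing_fraction, 2))   # 0.3
```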
 

borghe

Loves the Greater Toronto Area
Why do you assume it was dumb luck? You think that Sony's lead hardware designer, working in Japan, didn't have frequent discussion with key people from Samsung and Hynix (both South Korean companies, so literally just a few hours away) about their long term road map for GDDR5? If both told him that they expected 4-Gbit chips to be ready for full production this spring why wouldn't he design the system around that?

I can't imagine Cerny and co. hadn't been painted this very road map a year or more ago by both manufacturers of GDDR5 RAM. Initially playing it safe with the 4GB allotment was sensible at the time as the ramp up was something that could happen late in the console's life without significant negative impacts on software development or hardware production. In short, Sony had an ace in the hole and were just waiting to play it when they knew it would deliver.

Because 512MB chips aren't available to ANYONE right now. I mean they need to have their architecture in place at least 12-18 months pre-launch, and there was no way, even with constant talks with Samsung or whomever, that they could have counted on those chips being available prior to production. If I had to guess, they got the go-ahead from Samsung (or whomever) less than 30 days before their presser. Had that go/no-go slipped by even 2-4 weeks, Sony would almost definitely be shipping with 4GB of GDDR5.

and of course, the only way you reach that theoretical maximum is if you are always doing equal numbers of reads and writes, which you won't be. hence 133 GB/sec being given as an example of real world results.
Absolutely. Everything in that article, contrary to what they (DF) actually stated, suggests that MS is clocking the XBONE GPU at 750MHz.
 

EagleEyes

Member
Excellent news if true. Some people on here have a hard time these days dealing with positive Xbox news. Why is that? When DF does a very positive PS4 article everyone cheers, but then we get a positive X1 article and a lot of posters are bringing out the tinfoil hats and trying to bring up the downclock rumor. This is actual information revealed to DF and people keep trying to bring up unconfirmed rumors. Crazy days I tell ya.
 

3rdman

Member
I'll take this news with a grain of salt for now... this is a classic example of misinformation that puts them on a generally even platform with their main rival. It costs MS nothing if this is false, but in the meantime, they'll happily accept that others believe it.

If this is true, MS should be screaming about it from the rafters....Remember the "tech-war" they had against the PS3? They were very aware of the differences of the machines and were more than happy to exploit the differences to anyone who would listen...I haven't gone through this whole thread...did MS confirm this?
 

Jagerbizzle

Neo Member
What is the most expensive part of developing a game? Time. The same as any large project. Especially those involving a sizable staff where any one segment can create a bottleneck for everyone else.

You say that you understand the technical side but do you know the first thing about management? Spending less time optimizing isn't being lazy, it's being economical with what actually matters. Man hours.

Again, the differences in this generational jump are so massive (512MB to 5+GB) that it will take a long time for studios to start effectively using that additional RAM, assuming you're starting with current-gen tech and building it up for next gen (which is what most studios are doing). No studio is going to need to optimize RAM usage anytime soon.
 
Truth be told I'm less interested in the bandwidth of the Xbox One's memory, than I am in the quality of games. Of course, games are secondary to tech specs.

At the end of the day that is all that counts, and it's why I'm getting an X1 over a PS4 in year 1.
Next year I'm going to upgrade my PC with that Intel 8-core, a DDR4 mobo, and a new AMD card, so for graphics I will go there. And by then, hopefully I'll have a paid internship going and Sony will have released some new interesting games for the PS4, so I'll get one, or if they announce an XNA sort of thing at Gamescom/GDC Europe I may be there day one.

They're going to be talking "games" and "experiences" now? Their first party versus Sony's? That's a losing battle on their part. Going after Sony there is like them coming out and breaking down their hardware to compare, except the margin is significantly wider.

Sony has killed quite a few studios this gen; Microsoft has been pumping out studios from the ground up left and right the last 2 years.
Still not sure if Microsoft has their Naughty Dog. I think 343 could be; they seem to make games with the same shitty gameplay (imo, cinematics in games, meh) but fancy graphics.
 

8bits

Banned
You think the PS2 hardware helped make those games better than the equivalents on Xbox and GameCube?

It really didn't.

No, but it didn't make the PS2 version any less fun. There were also a lot more great games on the PS2 than there were on the GameCube.
 

Jagerbizzle

Neo Member
At which time we'll all be playing multi-platforms on the superior PCs anyway. First few years is all that matters from a tech standpoint I think.

Yes, the next-gen consoles are a major win for PC gamers, because it means we (as devs) now have the green light to develop for 64-bit architectures. It'll be nice to actually use more than 2GB of the 16GB I have sitting in my PC at home for games.
 

borghe

Loves the Greater Toronto Area
Excellent news if true. Some people on here have a hard time these days dealing with positive Xbox news. Why is that? When DF does a very positive PS4 article everyone cheers, but then we get a positive X1 article and a lot of posters are bringing out the tinfoil hats and trying to bring up the downclock rumor. This is actual information revealed to DF and people keep trying to bring up unconfirmed rumors. Crazy days I tell ya.

Why are you taking offense..? The only thing being talked about in here is math.

Per MS' own numbers and explanation, it looks like they are downclocking the GPU by 6.25%. No one is saying (that I see) "lol xbone fail!!!". On the contrary, we are just trying to figure out what this means exactly.

Good news that the eSRAM can support reads and writes on the same clock cycle? Absolutely! (though it doesn't necessarily mean 192GB/s.. aka contention) But does this news basically confirm that MS is downclocking the GPU? Yeah, basically.
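The 6.25% figure is just the 50MHz drop expressed as a fraction of the 800MHz clock, i.e.:

```python
# 6.25% = a 50MHz drop from the 800MHz clock in the official specs
# down to the 750MHz implied by the 192GB/s figure.
original_mhz = 800
implied_mhz = 750
downclock_pct = (original_mhz - implied_mhz) / original_mhz * 100
print(downclock_pct)   # 6.25
```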
 

watership

Member
In the end, does it even matter? The X1 still only has 5 GB out of the 8 that it can use for games. I can't see any real advantage this will give them when push comes to shove.

Pretty sure that available memory number will grow, given all the optimization MS did on the 360.

Think about the blades functionality compared to the current dash, then consider that the OS footprint is actually smaller on the 360 than it was at launch.
 

Orayn

Member
Q: So are the CPU and GPU at parity with the PS4? They're the same clock speeds but I'm guessing PS4's GPU has more power?

Either way, this is a pretty big deal and great news for MS if the difference between multi-plat titles will be negligible at best, and probably not evident till later on (~Fall 2014?). I got the feeling there would be a big difference right off the bat but I guess not. An X1 purchase is getting more and more enticing for me (I think the exclusives announced are better for X1 so far, but the DRM and power difference scared me away).

There's an as-yet-unconfirmed rumor that MS is using a lower clock speed for the CPU.

GPUs are thought to have the same clock speed, but the PS4's has more compute units. (Comparable to cores.)

This isn't huge news in the bigger scheme of things and does little to close the "power" gap.
 

Frodo

Member
I've been around long enough to not trust these numbers no matter which company is providing them.

Anyway, this looks like good news for Microsoft. At least, if this information is right, we know there is no downclocking going on, which is good enough news by itself, I guess.
 

coldfoot

Banned
Simplest way to use that extra 2GB of memory? Keep more game data in memory, resulting in shorter loading times, since less will have to be loaded in.
Boom, instant, tangible advantage for gamers.
 

borghe

Loves the Greater Toronto Area
Pretty sure that available memory number will grow, given all the optimization MS did on the 360.

Think about the blades functionality compared to the current dash, then consider that the OS footprint is actually smaller on the 360 than it was at launch.

I actually HIGHLY doubt this. They did a great job streamlining the 360 OS, but it was only one OS. Here they can streamline the hypervisor and the XBONE OS, but there is only so much they can do for the WinRT VM, especially after announcing yesterday that Windows Store apps would be usable on it. Whatever memory they make available to WinRT on day one likely has to still be available on day one thousand and one.

I highly doubt you will see optimizations similar to what was done for the 360..
 

longdi

Banned
It just seems like MS is revising how the figures are derived from what's already there in the hardware. It is not like adding more processing units or increasing the clock speed. Not too sure how much one can trust the current Xbox team (I don't). This does seem quite "convenient", but the proof will be in the pudding in the end.
 

coldfoot

Banned
Pretty sure that available memory number will grow, given all the optimization MS did on the 360.

Think about the blades functionality compared to the current dash, then consider that the OS footprint is actually smaller on the 360 than it was at launch.

Nope.
Current dash with full functionality isn't resident in memory while playing a game on the 360.
 
Why are you taking offense..? The only thing being talked about in here is math.

Per MS' own numbers and explanation, it looks like they are downclocking the GPU by 6.25%. No one is saying (that I see) "lol xbone fail!!!". On the contrary, we are just trying to figure out what this means exactly.

Good news that the eSRAM can support reads and writes on the same clock cycle? Absolutely! (though it doesn't necessarily mean 192GB/s.. aka contention) But does this news basically confirm that MS is downclocking the GPU? Yeah, basically.

Didn't the developer also say the clock was still 800MHz?

I actually HIGHLY doubt this. They did a great job streamlining the 360 OS, but it was only one OS. Here they can streamline the hypervisor and the XBONE OS, but there is only so much they can do for the WinRT VM, especially after announcing yesterday that Windows Store apps would be usable on it. Whatever memory they make available to WinRT on day one likely has to still be available on day one thousand and one.

I highly doubt you will see optimizations similar to what was done for the 360..

Hopefully this does allow faster hardware iteration with better forward and backward compatibility. Devs will probably hate it :(
 