
DigitalFoundry: X1 memory performance improved for production console (ESRAM 192 GB/s)

Status
Not open for further replies.
No, but it didn't make the PS2 version any less fun. There were also a lot more great games on the PS2 than there were on the GameCube.

And those games would have been better if the PS2 had the hardware architecture of either the GameCube or the Xbox.
 

borghe

Loves the Greater Toronto Area
So MS did some downclocking and is not telling any devs? Hopefully dev perf targets were expecting to come in low.

the down clocking rumor has been around since their presser. I'm guessing the "rumor" was founded from MS doing exactly that.. communicating the news to high priority devs.

also considering there were no third party dev kits at E3, I'm guessing that devs either don't have final SoC dev kits yet, or are just now starting to get them.
 
Totally agree, when we are a few years into this generation its the extra available RAM and faster GPU that will give the PS4 an advantage in games.

The Xbox One is confirmed to use 5GB for games, while the PS4's split is unconfirmed; it's possible Sony could offer 7.5GB for games, as there was talk that 0.5GB of RAM would be reserved for the OS. Although that was when the PS4 had 4GB, so we will have to wait to find out.

A few years is a while away imo. That's a nice time to enjoy what both consoles have to offer before moving to the PS4 strictly for gaming.
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
first, I would like to see your definition (and screenshots) of "games that look better".

Second, remember that PC has two pools of RAM. 2GB on the video card, but the games are still accessing system RAM, which these days is easily 8-16GB or more. Even if the game is a 32-bit x86 binary, Win7 will still give that WoW64 instance up to the full 4GB of address space (if the game is built large-address aware).



This is what DF is saying. But MS' 192GB/s number doesn't make sense then. The entire basis behind their number is simultaneous reads and writes, which at 102GB/s would give a theoretical 204GB/s.. but that's not what MS said. MS said 192GB/s. So cut that effective number in half and the real number is 96GB/s. Divide that by 128 bytes per cycle and you get 750MHz.

with data throughput levels up to 88 per cent higher in the final hardware.

1.88 * 102 = 192. I don't think the 192 is the combined system bandwidth, just the new PR number for the eSRAM.
 

Pistolero

Member
So the engineers who made the design somehow stumbled upon this new discovery (that isn't explained in the slightest)... just like that. How magical!!!
I smell bullshit and specs PR FUD.
 

Elios83

Member
Honestly, thinking that the people who designed the system didn't know this is not the most credible thing in the world.
Also, as people said, wouldn't it mean that the frequency was not 800MHz from the beginning?
I'd be cautious on this; DF might have been fooled or simply fed the wrong news.
 
Anyone else find it amusing that it seems hardware engineers have created this super fast, super optimized 32MB capable of amazing bandwidth.

And their operating system guys end up using 3GB of memory, which is, what, 100 times the size!
 

borghe

Loves the Greater Toronto Area
1.88 * 102 = 192. I don't think the 192 is the combined system bandwidth, just the new PR number for the eSRAM.

that's make-believe math though.. honestly, DF's assertion that the down clock is false makes zero sense.

You can access 128 bytes per clock cycle. Fixed.
MS is now saying that in that clock cycle you can both read from and write to eSRAM.
They are saying this gives an effective 192GB/s.
Presumably (I really don't think this is a stretch) this means you can read at 96GB/s and write at 96GB/s.
96GB/s divided by 128 bytes per cycle = 750MHz, on the nose.

Previously MS was saying 102GB/s with one access per clock.
102GB/s divided by 128 bytes per cycle = 800MHz, on the nose.

so what to take away from this? The EFFECTIVE REAL WORLD throughput on the eSRAM looks to be around 30% higher than expected. (that 192GB/s number will NEVER happen) At the same time, the GPU looks to have been down clocked by 6%.
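The arithmetic in the post above is easy to check in a couple of lines (a sketch: the 128-bytes-per-cycle bus width and the 102/192GB/s figures are the ones quoted in this thread, not official specs; the quoted 102GB/s is the rounded form of 102.4GB/s):

```python
# Bus width quoted in this thread: 128 bytes per clock cycle.
BUS_BYTES_PER_CYCLE = 128

def implied_clock_mhz(bandwidth_gb_s: float) -> float:
    """Clock (MHz) implied by a one-direction bandwidth figure (GB = 10^9 bytes)."""
    return bandwidth_gb_s * 1e9 / BUS_BYTES_PER_CYCLE / 1e6

# Old figure: 102.4GB/s with one access per cycle.
print(implied_clock_mhz(102.4))    # ~800 MHz

# New figure: 192GB/s split evenly between reads and writes -> 96GB/s each way.
print(implied_clock_mhz(192 / 2))  # ~750 MHz
```

If the read and write halves really are 96GB/s each, the implied clock is 750MHz rather than 800MHz, which is exactly the down clock being argued over in this thread.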

Would this make the bandwidth speed same as PS4?

No. Comparing this news to anything having to do with the PS4 is pointless. Overall it's probably a win for MS.. Although with a 6% down clock it means there may eventually have to be concessions made (frame rate, res, etc).
 
So from what we can gather the PS4 has:

1 CPU core more available to games
2GB of RAM more available to games
50% more powerful GPU

And that's before even considering that the PS4 has a unified pool of fast RAM, whereas the Xbox One has a split pool of comparatively slower DDR3 and 32MB of eSRAM. This should clearly show people that the PS4 is not only more powerful, but coupled with the fact that it's apparently easy to develop for, it should perform better with both 1st party and 3rd party games.

Personally I don't think the gap will be too significant, but there will be a gap; that much seems evident. I don't see why people would dispute that.
 

iamvin22

Industry Verified
...maybe a 50mhz drop in GPU clock rate allows them to up the eSRAM clock and fit within their yields? o_O

It's a far cry... but... that's the only thing I can think of.


agreed. this article is all over the place. the IGN xbox board is going crazy over this haha claiming xbone is doing 200+GB of BW.
 

Pie Lord

Member
Truth be told I'm less interested in the bandwidth of the Xbox One's memory, than I am in the quality of games. Of course, games are secondary to tech specs.
Oh yes, how dare we discuss the tech specs of the upcoming consoles? Obviously there could not be any interesting information or discussion to be had on such a topic. Truly we have lost our way.
 

Zyae

Member
Not really. The effective bandwidth appears to be around 133GB/s, and the PS4's memory bandwidth is still significantly higher, plus the PS4 GPU has more CUs.

The PS4's effective bandwidth isn't 174GB/s, just like the Xbox's isn't 192GB/s.
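The gap between a quoted peak and what this post calls "effective" bandwidth is just a utilization factor. A toy illustration (the utilization number here is an illustrative assumption, not a measured value):

```python
def effective_bandwidth_gb_s(peak_gb_s: float, utilization: float) -> float:
    """Sustained bandwidth as a fraction of the theoretical peak."""
    return peak_gb_s * utilization

# Illustrative only: ~70% utilization of a 192GB/s peak lands near the
# ~133GB/s figure mentioned above; no real workload sustains 100%.
print(effective_bandwidth_gb_s(192, 0.7))  # 134.4
```

The same discount applies to the PS4's peak, which is the point being made: neither console runs at its headline number.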
 

Drek

Member
At which time we'll all be playing multi-platforms on the superior PCs anyway. First few years is all that matters from a tech standpoint I think.

And let's say the extra memory, no need to juggle eSRAM allocation, etc. end up chopping a 2.5 year title down to a 2 year title on PS4. That will make a difference, no?

Cerny claims that the PS4 saved SCEJ a full year of development time over what the title would have taken on the PS3. If that is even remotely accurate consider how much that kind of time savings will benefit Sony's strong and diverse first parties. Polyphony released Gran Turismo 1-3 on a two year cycle, GT4 was only about half a year behind. Suddenly with PS3 it took them 6 years to make GT5 and will have taken about 3 to make GT6.

If moving to the PS4 lets them get back on a 2-3 year cycle we could see GT7 in late 2014 and GT8 in late 2016. That would have some big market ramifications.

Same goes for Naughty Dog. Imagine their two teams working on a faster production cycle without hurting quality.

Time is the one true enemy for all production-based businesses. Man-hours are your greatest source of cost, and missing deadlines is your greatest threat to revenue. Anything that reduces that is a boon to development.
 

Vestal

Gold Member
But it's still such a small pool of RAM...

not as a cache for particular operations.

one of the things it can serve as is a frame buffer.

look at it as a faster small truck. You put the most time sensitive stuff there for delivery while your larger trucks take a bit longer moving the not so important stuff.
 

benny_a

extra source of jiggaflops
So from what we can gather the PS4 has:

1 CPU core more available to games
2GB of RAM more available to games
50% more powerful GPU
1 CPU core more available and 2GB more RAM is just conjecture and hasn't been confirmed anywhere.
 

JaggedSac

Member
the down clocking rumor has been around since their presser. I'm guessing the "rumor" was founded from MS doing exactly that.. communicating the news to high priority devs.

also considering there were no third party dev kits at E3, I'm guessing that devs either don't have final SoC dev kits yet, or are just now starting to get them.

That means that DF sources are not high priority devs.
 

Phawx

Member
Why are you taking offense..? The only thing being talked about in here is math.

Per MS' own numbers and explanation, it looks like they are down clocking the GPU by 6.25%. No one is saying (that I see) "lol xbone fail!!!". On the contrary we are just trying to figure out what this means exactly.

Good news that the eSRAM can support reads and writes on the same clock cycle? Absolutely! (though it doesn't necessarily mean 192GB/s.. aka deadlocks) But does this news basically confirm that MS is down clocking the GPU? Yeah, basically.

They implicitly state in the article that duplex operations can only happen in certain conditions. I think you might be assuming a bit too much that the simplex operation = 96GB/s. The article never mentions that the WHOLE of the eSRAM can do duplex.
 

Portugeezer

Gold Member
if anything, this news suggests that their performance took a minor hit to 1.15Tflops.

seriously-not-funny-o.gif
 

Pug

Member
if anything, this news suggests that their performance took a minor hit to 1.15Tflops.

The developer mentioned in the article states they have been told nothing of a downclock. The source Digital Foundry is using is the same source they got the previous information from. If MS had downclocked the GPU, this developer would have been told straight away.
 
1 CPU core more available and 2GB more RAM is just conjecture and hasn't been confirmed anywhere.
That's why I said from what we can gather, most of the information from earlier this year was pretty accurate though.

Also wasn't there a rumor that it only had access to 90% of the GPU too?
 
1 CPU core more available and 2GB more RAM is just conjecture and hasn't been confirmed anywhere.

Our insiders, pretty much spot-on on everything, have told us 2 cores for the Xbone OS and half a core for the PS4, so for games you would have 7.5 cores on PS4 and 6 cores on Xbone.

I don't know why such a tech-savvy website like Digital Foundry would say that the CPUs are equal, considering we are talking about games.

If things are never confirmed and published in a book, I guess we will go on for years saying "but it's not confirmed," kind of like what happens with Nintendo systems.
 

artist

Banned
I'm not sure how this is supposed to be good news, especially with the amount of PR involved in the article. :D Take a 7770, overclock its memory to hell and back, look at the performance scaling and then be excited. What is worse is that the number of ROPs is also lacking to take advantage of this so-called "new found" bandwidth.

Ideally the best news would have been that the GPU clock had gone up.
 
I find it fascinating that Richard Leadbetter's "well-placed development sources" unveiled these esoteric and theoretical numbers to him, but we still can't get the actual fucking clock speed of the Xbone GPU. Smells funny, Dickie.

This, combined with Xbox's Albert Penello downplaying specs today, smacks of a carefully-timed PR blitz. Not entirely surprised to see Leadbetter as one of Microsoft's preferred message boys.
 

borghe

Loves the Greater Toronto Area

states? no. suggests? see my earlier math. The 192GB/s theoretical max, assuming full duplex operations, would mean 750Mhz GPU clock.

Of course if MS would just come out and tell us the specs no one would have to guess at anything.

Dude, you're reading too far into it. I see where your math is. But you'd also be avoiding a bunch of words from the article.

just the part of "we haven't been told of a down clock". But shit, we haven't been told of a clock speed period. You are right though... All this entire thread is (and the DF article as well) is theorizing.
 
It's a pretty different situation. MS stated publicly that the bandwidth of the 360's eDRAM was 256GB/s at first, but that was proven to be incorrect. This time around we've got an 'official' 102GB/s (that can be confirmed by looking at the specs), but that's been revised up to 192GB/s.

Remember that according to the article this is coming from developer sources, and I don't think it would do MS any good to misrepresent the eSRAM bandwidth in that situation.


This is true, the 360's eDRAM was indeed capable of 256GB/s, but that bandwidth resided inside the daughter die: the logic there operated on data locally, and results were sent back in chunks over a bus limited to 32GB/s. With eSRAM we have a completely different setup, and there will indeed be 192GB/s of theoretical peak bandwidth. Not to mention 10MB wasn't enough for a framebuffer without tiling; now with 32MB there will be plenty for 1080p with 4XAA.
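The "plenty for 1080p 4XAA" claim is worth a quick sanity check (a sketch; it assumes uncompressed 32-bit color and 32-bit depth/stencil targets, whereas real GPUs use color/depth compression and tiling, which is what could make this fit):

```python
def render_target_mib(width: int, height: int,
                      bytes_per_pixel: int, samples: int = 1) -> float:
    """Size in MiB of one uncompressed render target at a given MSAA sample count."""
    return width * height * bytes_per_pixel * samples / (1024 ** 2)

color_1080p    = render_target_mib(1920, 1080, 4)             # ~7.9 MiB
color_1080p_4x = render_target_mib(1920, 1080, 4, samples=4)  # ~31.6 MiB
depth_1080p_4x = render_target_mib(1920, 1080, 4, samples=4)  # ~31.6 MiB

# Uncompressed color + depth at 4x MSAA is ~63 MiB, well over 32MB,
# so compression or keeping only part of the buffers resident would
# have to do the heavy lifting.
print(color_1080p_4x + depth_1080p_4x)
```

A plain 1080p target without MSAA, by contrast, fits comfortably several times over.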
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
states? no. suggests? see my earlier math. The 192GB/s theoretical max, assuming full duplex operations, would mean 750Mhz GPU clock.

Of course if MS would just come out and tell us the specs no one would have to guess at anything.



just the part of "we haven't been told of a down clock". But shit, we haven't been told of a clock speed period. You are right though... All this entire thread is (and the DF article as well) is theorizing.

Did you miss my last post with the math or don't agree with it?
 

Takuya

Banned
Slight improvement, but hindered by the tiny amount of RAM it affects, as opposed to being effective for the entire memory pool.
 

Drek

Member
You are correct. The PS2 was a fluke.

Were PS2 games generally better/more advanced than PS1 titles?

What's your comparison here? Do you think the spurious correlation of a system being the weakest in its generation and it having the best library is somehow intertwined, ignoring that 90% of PS2 games never went multi-plat because the system dominated the sales charts, and the handful that did (PoP trilogy, BG&E, etc.) were all better on the Xbox?

Say that to PC gamers how their games are better than anyone else. Better hardware help game's visual quality.
What?

So Battlefield with 64 players and bigger maps isn't better than Battlefield on consoles? Planetside 2 isn't doing anything interesting with its tech, I guess. Those massive battles could clearly have been done on PS360, right? What way do you prefer playing your online shooters, at 30fps on console or at 60fps on PC? And obviously we have tons of games with highly destructible environments like the original Crysis on the current consoles, right? I mean, Crysis 2 and 3 are on them, so it has to be just like playing the original on PC from an interactivity/destruction standpoint, right?

Also, visual quality can directly improve game play if done correctly, see: every game that relies heavily on cinematic storytelling.

Again, the differences are so massive in this generation jump (512MB to 5+GB) that it will take a long time for studios to start effectively using that additional RAM, assuming you're starting with current gen tech and building it up for next gen (which is what most studios are doing). No studio is going to have a need for optimizing RAM usage anytime soon.
So massive that Guerrilla Games is already making use of ~5GB with a launch title. So is The Witness.
 

Izick

Member
Didn't an "insider" say that they were having major yield issues with the ESRAM? Doesn't this go completely against that?
 
states? no. suggests? see my earlier math. The 192GB/s theoretical max, assuming full duplex operations, would mean 750Mhz GPU clock.

Of course if MS would just come out and tell us the specs no one would have to guess at anything.

You're assuming in your math beyond what the article states.
 