
Can someone explain this eSRAM issue in layman's terms?

CLEEK

Member
So is the xbone closer in performance to the wii u or the PS4?

|---360/PS3 - Current Gen
|
|---WiiU - Current Gen Plus
|
|
|
|---Xbone
|
|---PS4 - Next Gen
|
|---High End PC - Next Gen Plus

Closer to Next Gen than to Current Gen, but it's a shitty Next Gen if that's your only experience of it. And you're more likely to get native 1080p on the WiiU than the Xbone.
 

beril

Member
Not in the slightest. There isn't much to understand about a fast scratchpad memory pool; consoles have been using them since the SNES, I think. The situation is much worse for XBO than it was for PS3, because it's not a question of understanding anything. It's just that XBO's memory architecture is weak compared to PS4's, and that won't go away.

The SNES didn't use a framebuffer, so no. But yes, most 3D consoles have had a similar setup.
 

mocoworm

Member
Why is the ESRAM only 32MB?

Why is it not larger, and would it not be beneficial to have it larger? What was the decision process behind 32?
 

CLEEK

Member
It takes up a lot of space on the chip.

The entire APU is 5 billion transistors. That's the combined count of the CPU + GPU + ESRAM. The ESRAM is made up of 1.6 billion transistors, or 1/3rd of the entire chip.

If MS had used that silicon budget on the GPU, that would have had more rendering power than the PS4.

ESRAM is expensive, big, and doesn't scale. It's not designed to be main memory.
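As a rough sanity check on those transistor numbers: a conventional SRAM cell uses six transistors per bit (the classic "6T" cell), which lands almost exactly on the 1.6 billion figure. This is a back-of-envelope sketch that assumes a plain 6T layout with no redundancy or ECC overhead:

```python
# Sanity check: 32MB of eSRAM at 6 transistors per bit (the standard
# "6T" SRAM cell) lands right on the 1.6 billion transistor figure.
esram_bytes = 32 * 1024 * 1024          # 32MB of eSRAM
transistors_per_bit = 6                 # classic 6T SRAM cell (assumed)
esram_transistors = esram_bytes * 8 * transistors_per_bit

apu_transistors = 5_000_000_000         # ~5 billion for the whole APU

print(f"eSRAM transistors: {esram_transistors / 1e9:.2f} billion")  # ~1.61 billion
print(f"Share of the APU:  {esram_transistors / apu_transistors:.0%}")  # ~32%
```

Real implementations add spare rows/columns and control logic, so the true count can only be higher, but the 6T estimate alone already accounts for roughly a third of the chip.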
 
The problem with eSRAM is that it's just another hurdle for developers to get their heads around and build their engines around. Sony went to 30+ developers, and according to Cerny the vast majority stated that the #1 thing they did not want was "tricks" and split memory. And even when used well, you're getting bandwidth out of the eSRAM that is only slightly higher than the overall bandwidth between the GDDR5 and the GPU in the PS4. Only devs have roughly 250x the RAM to utilize in the PS4 setup lol.

Now, the problem with the X1 is they ended up going with DDR3 on a narrow memory bus (think a highway, but very thin, like 2 lanes), which ended up with 68GB/s of memory bandwidth between the DDR3 and the GPU. VERY small in today's terms. The 360 had 22GB/s of memory bandwidth between its RAM and GPU, so you're getting a pretty substandard 3x memory bandwidth increase from the 360 to the X1. eSRAM's max bandwidth, according to spec sheets, is 200GB/s or so. But the problem is, it is basically a shed's worth of storage compared to the PS4's city block's worth.
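Those headline figures aren't magic; peak bandwidth is just bus width times transfer rate. A quick sketch using the commonly reported bus configurations (these are assumptions based on the announced specs, not measurements):

```python
# Peak memory bandwidth = bus width (bytes) x transfers per second.
# Bus widths and data rates below are the commonly reported specs.
def peak_bandwidth_gbs(bus_width_bits, transfers_per_sec):
    """Peak bytes moved per second, in GB/s (1 GB = 1e9 bytes)."""
    return (bus_width_bits / 8) * transfers_per_sec / 1e9

xb1_ddr3  = peak_bandwidth_gbs(256, 2.133e9)   # 256-bit DDR3-2133 -> ~68 GB/s
ps4_gddr5 = peak_bandwidth_gbs(256, 5.5e9)     # 256-bit GDDR5 @ 5.5GT/s -> ~176 GB/s

print(f"XB1 DDR3:  {xb1_ddr3:.0f} GB/s")   # 68
print(f"PS4 GDDR5: {ps4_gddr5:.0f} GB/s")  # 176
```

Same bus width on both consoles; the entire gap comes from how fast GDDR5 clocks its transfers compared to DDR3.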

Why is it not larger, and would it not be beneficial to have it larger? What was the decision process behind 32?

Ever seen a die?

cpu-hand-17079926.jpg


These things are very small. When you put RAM directly onto that chip, it takes up real estate when there is not much to give; the more RAM you use, the bigger the space it needs. These chips are "cut" out of wafers, which look like this ...

intel-cpu-wafer-d001.jpg


So the bigger your APU, the fewer you will "cut out" from these wafers, making your yields worse than with smaller APUs per wafer.
 

Kerub

Banned
Basically, ESRAM is a sharpening module. It makes the colours and textures "pop" more. This makes the Xbox One image quality better than the PS4.
 

industrian

will gently cradle you as time slowly ticks away.
When a ESRAM and a DDR3 love each other very much, they hold each other in a special way...

But the GDDR5 just jerks one out faster by itself. And doesn't need to buy dinner.
 
It's pretty prophetic that Cerny discussed this exact design problem when he revealed the PS4 specs, and explained why they stayed away from embedded RAM. I wonder if he knew what direction MS was going in at the time.
 

avaya

Member
It's pretty prophetic that Cerny discussed this exact design problem when he revealed the PS4 specs, and explained why they stayed away from embedded RAM. I wonder if he knew what direction MS was going in at the time.

Cerny's alternative was 1TB/s eDRAM though.....that would have been insane.
 
Basically:

[PS4]GDDR5 = 176GB/s
vs
[XB1]DDR3 = 68GB/s

Oh shit too slow, need to add something

[PS4]GDDR5 = 176GB/s
vs
[XB1]eSRAM = 102GB/s

Oh phew, that's better

[PS4] 8GB (~8000MB) GDDR5
vs
[XB1] 32MB eSRAM


Oh shit.

The Xbox stats are all theoretical and probably a huge pain in the ass to achieve, so games are probably way below that at the moment. The API for the new Xbox doesn't have great tools for filling and emptying the eSRAM, so doing it manually means it's not up to max speeds yet.
 

Farslain

Member
How will the PS4 improve in the future? I heard some things about AOCs and hUMA, but how will those help real-world performance? For example, if BF4 was released on PS4 later instead of at launch, could it get to 1080p/60FPS with higher settings?

In the immediate future you'll see developers simply get much better with what they've got; by the next development cycle of games the console will have launched, and most devs' time with the hardware will have doubled.

Moving in to year 1, 2, 3 etc you'll get better/more efficient tools via the PS4 SDK making work a lot easier and more efficient.

The easiest way to see the effect of these two is to compare games like The Last of Us and Halo 4 with the launch titles we saw. As much as this isn't the fairest of comparisons, with these titles not being multiplatform, it is still easy to appreciate the increase from the pre-launch development cycle to 5+ years down the road.
 

Dr_Swales

Member
Talk to me like I'm a total idiot that doesn't know anything about this stuff, because that's what I am. This is something at the heart of lots of discussions and articles, and all of these things are written for people that already understand the basic facts.

Thanks in advance.

I was bored on my dinner break, it's raining so I can't sit outside... so I made this for you!

It's an extremely simplified diagram, but maybe you will get the basic idea of how it works.

I think it's right, feel free to shoot me down if it's BS =)

Edit:-

No

The eSRAM is already RAM, so it does not communicate with the main block of RAM at all. The eSRAM pipe goes to the GPU.

PS4

25N8TNA.png


Xbox One

ctfVPsQ.png
 
Silicon budget is a bit more complex; there are redundancies here and there.
Another problem with chips is that it is difficult to "carry in/out" signals: you're limited by pin count.
Microsoft chose to stick with DDR3. A solution could have been to move to a 512-bit memory bus (doubling bandwidth) and trade the ESRAM for more GPU hardware, but driving all those pins (and lanes) would have been really complex.
 
I was bored on my dinner break, it's raining so I can't sit outside... so I made this for you!

It's an extremely simplified diagram, but maybe you will get the basic idea of how it works.

I think it's right, feel free to shoot me down if it's BS =)

PS4

25N8TNA.png


Xbox One

8qdPxOy.png

No

The eSRAM is already RAM, so it does not communicate with the main block of RAM at all. The eSRAM pipe goes to the GPU.

Which is why the bandwidth is so high: because it is basically a large block of L2 cache directly on the CPU/GPU die, and not somewhere in the distance. Just edit the picture so the eSRAM has that 109GB/s arrow between it and the GPU. To be more accurate, you can break the big 32MB chunk into 4 8MB chunks. It is not one large slab of 32, but 4 smaller slabs of 8MB.
 

Gonff

Banned
It is initially a headache for programmers, but once utilized properly, allows for some pretty cool things.
 

ZiggyRoXx

Banned
eSRAM's max bandwidth, according to spec sheets, is 200GB/s or so of memory bandwidth.

..I think this is where most people get confused over the potential performance of the Xbox. The figure is pretty much correct, but only having 32MB of the stuff means they can't fit a 1080p frame buffer in there with all the extra graphical effects people have come to expect, such as 4xAA, nice shaders, etc.

..if they'd been able to put 128MB of the stuff in the box, that would have put the Xbox on a more than level playing field with the PS4.

..of course the ironic thing is, it would have been cheaper to go with a unified 8GB of GDDR5 like the PS4 than to fit 128MB of eSRAM.

In other words, 32MB is effectively a 70mph speed limiter fitted to a Ferrari..

..duh!!!
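To put some rough numbers on why 32MB is so tight, here's a back-of-envelope calculation of render target sizes at 1080p. The formats are illustrative assumptions (32-bit colour, 32-bit depth); real engines mix formats and don't necessarily keep every target in eSRAM at once:

```python
# Back-of-envelope render target sizes, assuming 4 bytes per pixel
# (e.g. RGBA8 colour, or a 32-bit depth/stencil buffer).
MB = 1024 * 1024

def target_mb(width, height, bytes_per_pixel, samples=1):
    """Size of one render target in MB; `samples` models MSAA storage."""
    return width * height * bytes_per_pixel * samples / MB

colour_1080p = target_mb(1920, 1080, 4)     # ~7.9 MB
depth_1080p  = target_mb(1920, 1080, 4)     # ~7.9 MB
colour_4xaa  = target_mb(1920, 1080, 4, 4)  # ~31.6 MB: nearly fills eSRAM alone
colour_720p  = target_mb(1280, 720, 4)      # ~3.5 MB

print(f"1080p colour:         {colour_1080p:.1f} MB")
print(f"1080p colour + depth: {colour_1080p + depth_1080p:.1f} MB")
print(f"1080p colour, 4xMSAA: {colour_4xaa:.1f} MB of 32 MB")
print(f"720p colour:          {colour_720p:.1f} MB")
```

A single 4xMSAA colour buffer at 1080p almost exhausts the pool before you've allocated depth or any intermediate buffers, while everything fits comfortably at 720p, which is the trade-off the thread keeps circling around.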
 

BWJinxing

Member
In terms of driving on the Speed Limit-less parts of the Autobahn in relation to memory bandwidth:

PS4 = You buy a new turbo car that can top out at 176 MPH (theoretical max bandwidth), consistently getting close to or hitting it. Some occasional turbo lag (memory latency).

XBone = Your econobox tops out at about 68 MPH (DDR3) on a good day. So you decide YOLO and go twin turbo, adding a lot more power. You can go at least 109 MPH faster (ESRAM). You couldn't get bigger turbos because of engine bay constraints (32MB of ESRAM due to die constraints).

Once installed, the setup lacks a proper tune and has other bottlenecks (hard to program for atm, ESRAM data movement not automatic), resulting in fluctuating top speeds. On paper you should be hitting 200+ MPH, but you're going much slower than that. Further tuning is required, but your budget only allows for so much work (platform limitations of the hybrid ESRAM/DDR setup).

This is my analogy of the situation.
 
It's pretty prophetic that Cerny discussed this exact design problem when he revealed the PS4 specs, and explained why they stayed away from embedded RAM. I wonder if he knew what direction MS was going in at the time.

In all fairness, Sony got lucky. Originally they were going to go with 2 GB of GDDR5, then 4 GB, and then got a deal on 8 GB.
 

DBT85

Member
In all fairness, Sony got lucky. Originally they were going to go with 2 GB of GDDR5, then 4 GB, and then got a deal on 8 GB.

By all accounts they were not just sitting around hoping that someone would make the 4Gb GDDR5 chips; they were putting in a lot of work to help it along.
 

Sethos

Banned
It is initially a headache for programmers, but once utilized properly, allows for some pretty cool things.

Sorry for the newbie question but what cool things does it allow? To me it seems like a little compensator for a slower piece of hardware, like a middle-man. What does this thing do that say the PS4 couldn't?
 

Skeff

Member
Sorry for the newbie question but what cool things does it allow? To me it seems like a little compensator for a slower piece of hardware, like a middle-man. What does this thing do that say the PS4 couldn't?

Nothing, your understanding of it is correct.
 

SappYoda

Member
ESRAM is like the hyperbolic time chamber, where you can perform 1 year worth of tasks in just one day. But the fridge is really small, and the door limits the amount of food that goes through it. Gravity is higher than normal, and the air gets denser and temperature fluctuates the deeper you want to use it.
 
Having the developer manually have to manage the data flows through 2 pools of memory, one large and slow and one tiny and fast, is a super duper headache, especially in a dynamic, unpredictable environment like a game.
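As a toy illustration of what that manual management means in practice: with one big pool you just allocate, but with a small fast pool the work has to be tiled so each piece fits, and the data shuttled in and out by hand. This is a made-up sketch of the idea, not anything resembling the actual Xbox One API:

```python
# Toy sketch of the split-pool headache: any working set bigger than the
# fast pool has to be broken into tiles and processed in multiple passes,
# with the developer copying data in and out each time.
FAST_POOL_MB = 32  # size of the small, fast pool (eSRAM-sized)

def plan_tiles(target_mb, fast_pool_mb=FAST_POOL_MB):
    """Smallest number of equal tiles such that each tile fits the fast pool."""
    tiles = 1
    while target_mb / tiles > fast_pool_mb:
        tiles += 1
    return tiles

for job_mb in (8, 32, 48, 96):
    n = plan_tiles(job_mb)
    print(f"{job_mb:3d} MB working set -> {n} pass(es) of {job_mb / n:.1f} MB each")
```

With a single unified pool, every one of those working sets is just one allocation; the extra passes (and the copies between passes) are exactly the bookkeeping the post above calls a super duper headache.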
 

Sinthor

Gold Member
Firstly, it is too small. MS' answer is to use regular RAM when the ESRAM runs out, but the whole point of ESRAM is that it is a fast gateway, which is useless if nothing fits.

Secondly, and perhaps most importantly, it requires additional work to write to, unlike for instance PS4's unified 8GB of very fast RAM.

Overall, on PS4 we have lovely, simple x86 PC-based hardware that people are very familiar with; all they're having to learn is the development software. With the Xbone they're having to go through what they went through with PS3's Cell processor all over again, aka more complexity for low payoff.

On the plus side, things will get better: the Xbone development software and the game developers will mature, and this will close the gap. On the negative side, the same thing will happen on PS4, opening the gap again.

ESRAM was a very bad decision for a console that was forced out the door 6-8 months early with immature software and technically weaker hardware.

But isn't this the same basic thing developers had to do with the 360? It had a small amount of eDRAM, I think, or something similar. So this should be the same process for them... I wouldn't think this would be a big deal, since developers were used to a similar architecture on the 360.
 

Bossofman

Neo Member
I think the ESRAM was there because they knew for SURE they were going with 8GB at a time when doing that with GDDR5 was an iffy proposition. Sony lucked out with production of the GDDR5 chips, getting them out in quantity when they needed them; otherwise we would have had the current XBO RAM vs 4GB of GDDR5, which I think would have favored the XBO, at least in the RAM dept.
 
I hate to do this to XB1 fans but...

ESRAM creates latency in variable-environment games. Period.

It may help push HD textures better than DDR3 alone, but the data has got to be called and retrieved before it gets into the pipeline. And since ESRAM acts as a cache, it gets flushed for new data whenever repeating data stops being called (when new levels/models/art/etc... are streamed or loaded)

That creates latency, which is ultimately a bottleneck.

The only games that will benefit from ESRAM are games with mostly static environments, like sports games or racing games, where all the art for the backgrounds, player models, etc... are loaded one time and don't change during the individual games or matches. Which is why games like NBA2K14 and Forza will be able to handle 1080p/60fps on the XB1. (ie. NBA2K14 games take place in one, enclosed arena at a time, with no draw distance or atmospheric effects to create. No variable explosions, no sudden bursts of light or particle effects. Here, ESRAM works well. Same with Forza, where each race is on a single track, within a capsulated environment where the background is actually a scrolling plane, as opposed to a true 3D landscape. Again, no major particle effects, no variable explosions... all art is largely loaded one time and that's it. It’s stored in the ESRAM bank and stays there until the next race or match or game loads up.)

But in games that have big, open worlds (Skyrim, GTA, etc…), or big MMOs (Everquest, WoW, ESO, etc…), or large-sandbox FPSes like BF or large-map CoD games, which have literally thousands of unique art assets and effects constantly being called, ESRAM will struggle because it's constantly being flushed for new data, rather than just providing the same static data. And since it's only 32MB, it'll be flushed constantly, making it less useful.

That will be the limitation for the XB1 for the entirety of this "Next-Generation". The XB1 will simply work better at 720p upscaled than it will at 1080p native, because the 720p art assets are smaller.
 

DeviantBoi

Member
If there's one thing that we've learned this past generation is that third-party developers are not going to go out of their way to get the most out of hardware that's difficult to use.

That's why most (if not all) of the multiplatform games looked better on the 360, but the PS3 exclusives looked better than them.
 
I hate to do this to XB1 fans but...

ESRAM creates latency in variable-environment games. Period.

It may help push HD textures better than DDR3 alone, but the data has got to be called and retrieved before it gets into the pipeline. And since ESRAM acts as a cache, it gets flushed for new data whenever repeating data stops being called (when new levels/models/art/etc... are streamed or loaded)

That creates latency, which is ultimately a bottleneck.

The only games that will benefit from ESRAM are games with mostly static environments, like sports games or racing games, where all the art for the backgrounds, player models, etc... are loaded one time and don't change during the individual games or matches. Which is why games like NBA2K14 and Forza will be able to handle 1080p/60fps on the XB1. (ie. NBA2K14 games take place in one, enclosed arena at a time, with no draw distance or atmospheric effects to create. No variable explosions, no sudden bursts of light or particle effects. Here, ESRAM works well. Same with Forza, where each race is on a single track, within a capsulated environment where the background is actually a scrolling plane, as opposed to a true 3D landscape. Again, no major particle effects, no variable explosions... all art is largely loaded one time and that's it. It’s stored in the ESRAM bank and stays there until the next race or match or game loads up.)

But in games that have big, open worlds (Skyrim, GTA, etc…), or big MMOs (Everquest, WoW, ESO, etc…), or large-sandbox FPSes like BF or large-map CoD games, which have literally thousands of unique art assets and effects constantly being called, ESRAM will struggle because it's constantly being flushed for new data, rather than just providing the same static data. And since it's only 32MB, it'll be flushed constantly, making it less useful.

That will be the limitation for the XB1 for the entirety of this "Next-Generation". The XB1 will simply work better at 720p upscaled than it will at 1080p native.

Forgive my naivety, but surely there are static assets in most games? Surely this is just a case of clever management by the driver or the developer.
 
Forgive my naivety, but surely there are static assets in most games? Surely this is just a case of clever management by the driver or the developer.

Yes, but in this case, with only 32MB of ESRAM acting as a cache for 8GB of DDR3 RAM, you're asking a very small cache pool to handle pretty big HD assets.

You're also asking the developers to build their games AROUND the 32MB of ESRAM, as opposed to just letting them have a single, consistent, large pool of RAM (like 1 big pool of 8GB GDDR5 RAM *cough PS4 cough*).
 
As for developer issues, I think it's more bandwidth related than eSRAM-size related: being able to manage bandwidth usage, basically knowing where that theoretical ceiling is and how to reach it.
 

charsace

Member
The X1's design shows that MS needs J. Allard badly. MS needed at least 64MB of eSRAM for the setup to be worthwhile, imo. I don't know why they only went with 32.
 

wsippel

Banned
What is the speed of the Wii U 32 MB eDRAM?
Nobody really knows. It doesn't help that the Wii U has more than one embedded memory pool. It has three: two eDRAM (32MB + 2MB) and one eSRAM (1MB), not counting TCM, another small (96kB) pool of eSRAM for the ARM core.
 