
Can someone explain this eSRAM issue in layman's terms?

I think I forgot that Microsoft had a revelation in production that their eSRAM was twice as fast as they designed it to be.

In my mind that wasn't real, it was only a bad dream, and I calculated the bandwidth using the assumption that the eSRAM wasn't double data rate.

The Xbone eSRAM's theoretical max is higher than the PS4's GDDR5's theoretical max.
Why MS's own lab tests are so much lower than Crytek's real world results is still curious. :p
Well, if there's any studio with the technical savvy to truly max out a complicated system via clever, adroit coding and management, it'd be Crytek. But personally, I think "We're using the whole 204, so there!" isn't really a claim about the precise number so much as a snarky way of asserting that over 109 GB/s is a real thing. ("And we're just the awesome guys to do it, too!")
 

hesido

Member
Microsoft went the cheap route for the RAM, because they needed the moneyz for Kinect and the TV guide thing. They also charge $100 more (I imagine it would be enough for the cable TV enter"taint"ment and Kinect) anyway, so it's quite clear not as many of your dollars are going into the box, compared to the PS4.

It wasn't an explanation but I had to let it off my chest!
 

astraycat

Member
Looks OK. But this is once again where the eSRAM comes into play. Latency can cause stalls. In a worst case scenario (cache misses and stuff), assuming a 6-20 cycle latency at the given clock speed, the CPU can stall for roughly 8-24 cycles, the GPU for 3-10 waiting for data. More bandwidth doesn't help here in any way, shape or form. With eSRAM, the stalls are much shorter: about one cycle for the GPU and two for the CPU. Which obviously increases the efficiency and means the system performs closer to its theoretical peak.

AMD GPUs will stall for a hundred cycles or so on a memory access even for an L1 hit. It's not going to be 3-10 cycles for ESRAM; it may not even be that fast for the on-CU LDS memory.

The CPU doesn't have direct access to ESRAM either, so there'll be little benefit there.
 
If it's taking full advantage of the hardware, it'll have to do.

You can't dismiss a possibly good first party game just because the resolution isn't as high as the hardware is capable of. Not fair to you as a gamer.

But you lose so much detail when upscaling to 1080p from 720p that it's probably not even worth going for more advanced shaders.
Just look at BF4 for example: 900p upscaled to 1080p is a lot more pleasing to the eyes, with fewer jaggies and less aliasing in general.
If the game is good I will not skip it, but I will not praise it for its graphics or anything like that.

The problem with that is, even if Halo 5 were 1080p@60, other systems would have to be given up. A not-so-sophisticated physics sim? Quite a number of not-so-stellar surface collisions resulting in clipping?

Those are things which can't be noticed right away (unlike resolution and performance), and worse, it does not have a PS4 version to compare to.

So yes, Halo 5 might be 1080p, and that could be ammunition for the "look, it can also do this" crowd, but more likely there will be things that are less refined.

The physics sim doesn't need to be that advanced, and I can live with clipping; I don't expect next gen to eliminate that. For Halo 5, all they need to do for me is put the Halo 2 or 3 gameplay engine into a next-gen jacket. Hell, if they port Halo 3 to X1 at 1080p@60fps I don't even need a Halo 5. But that is the Halo fanboy speaking :p

Forza is already doing 1080p@60fps, so I think the CPU has more than enough performance to do some basic physics stuff. Not sure how much, or if they even use the GPU for physics.

Halo is/was known for its gameplay, theater, map editor and AI, not for stellar or above-average graphics. Hell, they might use something like Forza 5's approach to improve the AI over time: the more players play a level, the more data 343 has to improve the AI.
 

wsippel

Banned
AMD GPUs will stall for a hundred cycles or so on a memory access even for an L1 hit. It's not going to be 3-10 cycles for ESRAM; it may not even be that fast for the on-CU LDS memory.

The CPU doesn't have direct access to ESRAM either, so there'll be little benefit there.
The stalls are longer overall, of course. Memory latency is just one part of it, which was what I was talking about. Also, I'd be surprised if the CPU couldn't directly access the eSRAM, considering it can on Wii U as far as we know.
 

astraycat

Member
I think I forgot that Microsoft had a revelation in production that their eSRAM was twice as fast as they designed it to be.

In my mind that wasn't real, it was only a bad dream, and I calculated the bandwidth using the assumption that the eSRAM wasn't double data rate.

The Xbone eSRAM's theoretical max is higher than the PS4's GDDR5's theoretical max.
Why MS's own lab tests are so much lower than Crytek's real world results is still curious. :p

I've yet to see a satisfactory explanation for how this revelation is possible. That "you can do read/write at the same time for 7/8 cycles" is just totally lost on me. How do you read and write on the same cycle over the same bus? Do they actually have two sets of buses? Otherwise I can't think of a way to do it.
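For what it's worth, the arithmetic behind the two numbers floating around this thread does check out if you take MS's claim at face value. A quick sketch (assuming the reported 853 MHz eSRAM clock; the "write lands on 7 of every 8 cycles" behavior is Microsoft's claim, not something anyone outside has verified):

```python
# Where the 109 GB/s and 204 GB/s eSRAM figures come from.
# Assumes the reported 853 MHz eSRAM clock and a 1024-bit bus;
# "read every cycle, write on 7 of 8" is Microsoft's claim.
CLOCK_HZ = 853e6
BUS_BYTES = 1024 // 8  # 128 bytes per cycle, one direction

one_way_gbs = CLOCK_HZ * BUS_BYTES / 1e9
peak_gbs = one_way_gbs * (1 + 7 / 8)  # reads plus overlapping writes

print(f"one-way bandwidth: {one_way_gbs:.1f} GB/s")  # ~109.2
print(f"claimed peak:      {peak_gbs:.1f} GB/s")     # ~204.7
```

Which still doesn't answer *how* the read and write paths overlap, of course; it only shows the quoted numbers are internally consistent with each other.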
 
Does the 60 fps factor into this? It seems like XB1 doesn't have a single deferred rendered game at higher than 720p60. Even though graphics-wise some of them aren't really lookers (Killer Instinct, Powerstar Golf)
 

astraycat

Member
The stalls are longer overall, of course. Memory latency is just one part of it, which was what I was talking about. Also, I'd be surprised if the CPU couldn't directly access the eSRAM, considering it can on Wii U as far as we know.

Memory latency on GPUs has a lot more to do with the actual memory architecture of the GPU than any inherent latency of the memory itself. They're built to be high-latency because GPU workloads tend not to be latency sensitive. ESRAM is hardly going to change that.

Where did you see that about the Wii U? Its architecture is completely different from that of the XB1, being a non-APU Evergreen(?) variety. I don't think any conclusions about the XB1/PS4 can be reliably drawn from it. It seems pretty silly for the CPU to be able to touch the ESRAM anyway. I don't really see any use for it on the CPU side.
 

GeoffEff

Neo Member
eSRAM is this generation's Cell processor, in a way. It's something the devs don't fully understand quite yet, making their jobs in part more difficult/involved.
 
The graphics chip on the Xbox One has to draw the picture somewhere. The main memory is slow, so it's like picking up pens and doing the drawing underwater. The ESRAM is like a tiny dry spot out of the pool where the graphics chip can work faster, but the problem is it has to find room for both the pens and do the drawing in that little spot, so it's got helpers constantly swapping in and out pens and papers from the pool. It can only fit a few papers at full resolution, so if it's trying to draw a fancy picture with lots of layers it could run out of room and have to reduce the size (resolution) of the drawing. Keep in mind this is how the 360 worked, but the dry spot was faster and larger in relative terms and the swapping more automatic.

With the PS4, everything's laid out in a giant gymnasium with a waxed floor so everything happens faster than even the dry spot in the Xbox One, and there's no real need to shuffle things around in it. The main drawback is that everybody, including Sony, thought the gym was going to be half the size of the pool the Xbox was using up until the last minute of the design phase. There's also suggestions that the waxed floor might make people slip a little when changing directions (latency) compared to the pool, but the GPU doesn't change directions much, and the CPU walks slower and more carefully anyway (different memory controller).

Hope that makes sense.

That's brilliant! Very creative analogy!
 

Cyborg

Member
The graphics chip on the Xbox One has to draw the picture somewhere. The main memory is slow, so it's like picking up pens and doing the drawing underwater. The ESRAM is like a tiny dry spot out of the pool where the graphics chip can work faster, but the problem is it has to find room for both the pens and do the drawing in that little spot, so it's got helpers constantly swapping in and out pens and papers from the pool. It can only fit a few papers at full resolution, so if it's trying to draw a fancy picture with lots of layers it could run out of room and have to reduce the size (resolution) of the drawing. Keep in mind this is how the 360 worked, but the dry spot was faster and larger in relative terms and the swapping more automatic.

With the PS4, everything's laid out in a giant gymnasium with a waxed floor so everything happens faster than even the dry spot in the Xbox One, and there's no real need to shuffle things around in it. The main drawback is that everybody, including Sony, thought the gym was going to be half the size of the pool the Xbox was using up until the last minute of the design phase. There's also suggestions that the waxed floor might make people slip a little when changing directions (latency) compared to the pool, but the GPU doesn't change directions much, and the CPU walks slower and more carefully anyway (different memory controller).

Hope that makes sense.

A mod should promote you to......Amazing
 
Think of it like two chutes. One chute is Sony's chute; it's really large and circular, which is important because everything you need to put down the chute will fit within that shape. You're free to dump in whatever you want and not really have to worry about it getting clogged up. It'll all go in smoothly.

Microsoft went with a chute that's about the same size, but stuff is much slower to go down because it's rectangular. In order to help the situation they added a second chute, but made it super tiny. You can throw down really small things you need to get rid of right away, but it complicates things, because you have a whole lot of stuff you need to put down the chute and now you have to worry about separating the big stuff from the small stuff.

In the end you conclude that one big chute that everything goes down quickly is the better method. There's really no benefit to having a big slow chute and a small faster one.

This is a good explanation, and a good explanation for the future, because I think the resolution situation is the way it is at the moment because the console is coming out earlier than they expected, and devs are having to do exactly that: manually separate everything. However, in the next few months I imagine the drivers will be updated so that you pour your stuff into one big chute which sorts it into the two chutes for you. It will make it easier for the devs, but only in very specific situations will the two chutes be better than the one big chute.
 

RayMaker

Banned
Is the power gap between the X1 and the PS4 greater than the gap between the 360 and the PS3?

If so, by how much?


Everybody should know by now that the power difference on paper is quite substantial (50-60% more GPU flops, faster RAM, etc.)

But when it comes to 3rd party games, the level of difference won't be greater than the level of difference between PS3 and 360, if BF4 is anything to go by.

3rd parties will maintain all assets, framerate and effects on the X1 by lowering the resolution. When MS have sorted out the dev environment for the X1, I think 3rd parties will be going for 900p on X1 and 1080p on PS4 with similar framerates.

-----

Yeah, just realised I failed at math, my mistake, sorry.
 

benny_a

extra source of jiggaflops
which is a 20% more pixels for PS4, which seems right. a 50% hardware advantage does not equal 50% more pixels, even now BF4 only has 25% more pixels on the PS4
BF4 on Xbone: 720p (921600 px)
BF4 on PS4: 900p (1440000 px)

921600 px * 1.25 = 1152000 px

You are wrong.

And now you edited it.
 

Skeff

Member
Everybody should know by now that the power difference on paper is quite substantial (50-60% more GPU flops, faster RAM, etc.)

But when it comes to 3rd party games, the level of difference won't be greater than the level of difference between PS3 and 360, if BF4 is anything to go by.

3rd parties will maintain all assets, framerate and effects on the X1 by lowering the resolution. When MS have sorted out the dev environment for the X1, I think 3rd parties will be going for 900p on X1 and 1080p on PS4 with similar framerates.

which is a 20% more pixels for PS4, which seems right. a 50% hardware advantage does not equal 50% more pixels, even now BF4 only has 25% more pixels on the PS4

Wrong:

Firstly 50% hardware advantage is a gap of more than 50% pixels if everything else remains the same.

Secondly: BF4 has 56% more pixels on PS4 than XB1, as well as better framerate and more effects.

Thirdly: The gap is much bigger than 360/ps3, Games this gen had differences of 20p for about 9% more pixels on xb360. We are already seeing a game with 125% more pixels in CoD.
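Those percentages can be sanity-checked from the raw pixel counts (assuming "CoD" refers to the reported 1080p-on-PS4 vs 720p-on-XB1 split):

```python
# Pixel counts behind the "56% more" and "125% more" figures.
xb1_720p = 1280 * 720    # 921,600 px (BF4 and CoD on XB1)
ps4_900p = 1600 * 900    # 1,440,000 px (BF4 on PS4)
ps4_1080p = 1920 * 1080  # 2,073,600 px (CoD on PS4)

bf4_gap = ps4_900p / xb1_720p - 1    # 0.5625
cod_gap = ps4_1080p / xb1_720p - 1   # 1.25

print(f"BF4: {bf4_gap:.0%} more pixels on PS4")  # 56% more
print(f"CoD: {cod_gap:.0%} more pixels on PS4")  # 125% more
```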
 
Actually, GDDR5 can carry way more people than the DDR3.

He just forgot to put the number of GBs on it, but his explanation is solid. Here's what I think he should fix:

8GB of DDR3 is slow but can carry a lot of people.
32MB of eSRAM is super fast but can't carry many people.
8GB of GDDR5 is simultaneously fast and can carry many people.

So while you can balance the eSRAM and DDR3 to cooperatively ferry people to your destination, it's more complicated. Whereas the Sony solution (just GDDR5) is easier to deal with and doesn't require a lot of thought. You just load people up and transport them over.

The end result is potentially the same, but the way you went about it is much different between the two. Which is why people are saying eSRAM is a barrier to getting optimal performance on the XB1.
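To put rough numbers on the analogy (a sketch; the DDR3-2133 and 5500 MT/s GDDR5 configurations are the commonly reported specs, not figures from this thread):

```python
# Raw main-memory bandwidth behind the "carrying people" analogy.
# Assumed configs: XB1 with DDR3-2133, PS4 with GDDR5 at 5500 MT/s,
# both on 256-bit buses.
BUS_BYTES = 256 // 8  # 32 bytes per transfer

ddr3_gbs = 2133e6 * BUS_BYTES / 1e9   # XB1 main RAM
gddr5_gbs = 5500e6 * BUS_BYTES / 1e9  # PS4 unified RAM

print(f"DDR3-2133: {ddr3_gbs:.1f} GB/s")   # ~68.3
print(f"GDDR5:     {gddr5_gbs:.1f} GB/s")  # 176.0
```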

eSRAM is this generation's Cell processor, in a way. It's something the devs don't fully understand quite yet, making their jobs in part more difficult/involved.

Except the Cell processor is actually a beast and is far better than the Xbox 360's CPU. The 8GB DDR3 + 32MB eSRAM is still inferior to 8GB of GDDR5, and that's not to mention that the PS4's GPU is about 50% more powerful than the Xbox One's.
 

kitch9

Banned
eSRAM is this generation's Cell processor, in a way. It's something the devs don't fully understand quite yet, making their jobs in part more difficult/involved.

No,

The Cell was actually quick and, used right, afforded some benefits over the norm. The eSRAM is there simply to make something that is poor a little bit less poor.
 

Skeff

Member
So what is this I hear about the eSRAM being divided into 4 8MB modules or something?

There is not a 32MB pool of eSRAM with 109 GB/s bandwidth.

There are 4x 8MB pools, each with ~27.25 GB/s bandwidth.

The 1024-bit bus is actually 4x 256-bit buses.
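If the four-pool claim is right, the per-pool figure falls straight out of the clock and bus width (assuming the reported 853 MHz eSRAM clock; the 4x 8MB split itself is this post's claim, which I can't independently confirm):

```python
# Per-pool bandwidth for a hypothetical 4x 8MB eSRAM layout.
# Assumes the reported 853 MHz eSRAM clock; the four-pool split
# is the claim being illustrated, not a confirmed spec.
CLOCK_HZ = 853e6
POOL_BUS_BYTES = 256 // 8  # each 256-bit slice moves 32 bytes/cycle

per_pool_gbs = CLOCK_HZ * POOL_BUS_BYTES / 1e9  # per 8MB pool
total_gbs = per_pool_gbs * 4                    # aggregate over all pools

print(f"per pool: {per_pool_gbs:.2f} GB/s")  # ~27.30
print(f"total:    {total_gbs:.1f} GB/s")     # ~109.2
```

Which lands close to the ~27.25 GB/s per-pool figure quoted above, and sums back to the familiar 109 GB/s aggregate.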
 
Vegeta wants to fight Goku on even terms, so he's allowed himself to become a Majin for the sake of unleashing his potential. Unfortunately, he doesn't realize that Goku has already reached another tier beyond SSJ2 and is only humoring him; the Majin transformation is a dead-end that will never allow Vegeta to reach power comparable to a SSJ3.

Someone had to do it, right?

This actually makes sense.
 

Fredrik

Member
Esram = Saturns tacked on processor
I guess you mean difficulty in programming. Saturn's "tacked on processor" was pretty great, at certain tasks at least; a buddy of mine had a Saturn and I was always jealous of him as a PS owner whenever a game was optimized properly. ESRAM doesn't seem to have many, if any, benefits judging by this thread.
 

gofreak

GAF's Bob Woodward
This is a good explanation, and a good explanation for the future, because I think the resolution situation is the way it is at the moment because the console is coming out earlier than they expected, and devs are having to do exactly that: manually separate everything. However, in the next few months I imagine the drivers will be updated so that you pour your stuff into one big chute which sorts it into the two chutes for you. It will make it easier for the devs, but only in very specific situations will the two chutes be better than the one big chute.

I'm not sure they can make eSRAM transparent to devs without cutting the legs off of performance - vs manual management of it.

One possibility is a software cache for render input and output (if that's possible in the latter case). But you won't get the same perf out of a setup like that as in the manual case.
 

Fredrik

Member
Does the 60 fps factor into this? It seems like XB1 doesn't have a single deferred rendered game at higher than 720p60. Even though graphics-wise some of them aren't really lookers (Killer Instinct, Powerstar Golf)
I think Forza 5 is 1080p at locked 60fps.
 

RayMaker

Banned
Wrong:

Firstly 50% hardware advantage is a gap of more than 50% pixels if everything else remains the same.

Secondly: BF4 has 56% more pixels on PS4 than XB1, as well as better framerate and more effects.

Thirdly: The gap is much bigger than 360/ps3, Games this gen had differences of 20p for about 9% more pixels on xb360. We are already seeing a game with 125% more pixels in CoD.

Yes, you are correct.

But remembering and looking back at some 360 vs PS3 games, the differences looked more severe.

Some PS3 games had missing assets, much poorer framerates and a lot more aliasing.

Whereas with the X1, the only thing devs need to do is lower the pixel count to maintain parity with everything else.

When the majority of people look at X1 and PS4 games, they're not gonna be like "argh man, the difference is way more than it was between the 360 and PS3".
 

kitch9

Banned
I think Forza 5 is 1080p at locked 60fps.

Forza 5 uses a forward renderer, I believe, to achieve that. It's not very system heavy, but it severely limits the fancy effects you can pull, so heavy use of smoke and mirrors is employed.

It's like we have gone back a decade.

Yes, you are correct.

But remembering and looking back at some 360 vs PS3 games, the differences looked more severe.

Some PS3 games had missing assets, much poorer framerates and a lot more aliasing.

Whereas with the X1, the only thing devs need to do is lower the pixel count to maintain parity with everything else.

When the majority of people look at X1 and PS4 games, they're not gonna be like "argh man, the difference is way more than it was between the 360 and PS3".

The XB1 already has missing effects along with a lower res. Once compute starts getting used along with hUMA, it's going to be ground into the dust when it comes to effects.
 

RayMaker

Banned
Forza 5 uses a forward renderer, I believe, to achieve that. It's not very system heavy, but it severely limits the fancy effects you can pull, so heavy use of smoke and mirrors is employed.

It's like we have gone back a decade.



The XB1 already has missing effects along with a lower res. Once compute starts getting used along with hUMA, it's going to be ground into the dust when it comes to effects.

And that's no bad thing if the game still looks fantastic.
 

Skeff

Member
Yes, you are correct.

But remembering and looking back at some 360 vs PS3 games, the differences looked more severe.

Some PS3 games had missing assets, much poorer framerates and a lot more aliasing.

Whereas with the X1, the only thing devs need to do is lower the pixel count to maintain parity with everything else.

When the majority of people look at X1 and PS4 games, they're not gonna be like "argh man, the difference is way more than it was between the 360 and PS3".

Remember, the XB1 is getting away with cutting the resolution right now, on cross-gen games. Also, BF4 was missing effects and ran at a lower frame rate; you seem to be ignoring that.
 

RayMaker

Banned
Forza 5 uses a forward renderer, I believe, to achieve that. It's not very system heavy, but it severely limits the fancy effects you can pull, so heavy use of smoke and mirrors is employed.

It's like we have gone back a decade.



The XB1 already has missing effects along with a lower res. Once compute starts getting used along with hUMA, it's going to be ground into the dust when it comes to effects.

Yeah, BF4 on X1 is missing ambient occlusion.

But the difference is hardly noticeable; the only thing that is noticeable is more aliasing, mainly in the background. Only hardcore graphics nuts are really going to care, and it's no more severe than the worse-looking multiplats on the PS3.
 
I'm not sure they can make eSRAM transparent to devs without cutting the legs off of performance - vs manual management of it.

One possibility is a software cache for render input and output (if that's possible in the latter case). But you won't get the same perf out of a setup like that as in the manual case.

My thoughts are that they'd make the brunt of the work handled automatically, but they'd allow the developers to fine-tune it where they need to. That way they reduce the micromanagement required, but don't lose the benefit of manually tweaking it to increase the performance of the games. It would still be complicated, but not as complicated.
 

kitch9

Banned
Yeah, BF4 on X1 is missing ambient occlusion.

But the difference is hardly noticeable; the only thing that is noticeable is more aliasing, mainly in the background. Only hardcore graphics nuts are really going to care, and it's no more severe than the worse-looking multiplats on the PS3.

It's missing FXAA and other effects.

Whether you like FXAA or not doesn't really matter; it's computationally expensive and the XB1 doesn't have it, so it's a further indication of what currently can and cannot be done.
 

TUROK

Member
There is not a 32MB pool of eSRAM with 109 GB/s bandwidth.

There are 4x 8MB pools, each with ~27.25 GB/s bandwidth.

The 1024-bit bus is actually 4x 256-bit buses.
Well damn. Does this make tiling practically unavoidable?
 
It's missing FXAA and other effects.

Whether you like FXAA or not doesn't really matter; it's computationally expensive and the XB1 doesn't have it, so it's a further indication of what currently can and cannot be done.

Wasn't it said the AO and AA were both going to be added before launch? (in the case of AA, I fucking hope so)
 

Waaghals

Member
The 360 had a similar thing to eSRAM and it worked well. Why then does it not work well on Xbox One?

It worked well in comparison to the PS3, which was a trainwreck.

Many high-profile and good-looking X360 games had to skimp on resolution in order for the frame buffer to fit in EDRAM.
 

dr_rus

Member
eSRAM is this generation's Cell processor, in a way. It's something the devs don't fully understand quite yet, making their jobs in part more difficult/involved.
Not in the slightest. There isn't much to understand in a fast scratchpad memory pool; consoles have been using this since the SNES, I think. The situation is much worse for the XBO than it was for the PS3, because it's not a question of understanding anything; it's just that the XBO's memory architecture is weak compared to the PS4's. This won't go away.
 

keuja

Member
Vegeta wants to fight Goku on even terms, so he's allowed himself to become a Majin for the sake of unleashing his potential. Unfortunately, he doesn't realize that Goku has already reached another tier beyond SSJ2 and is only humoring him; the Majin transformation is a dead-end that will never allow Vegeta to reach power comparable to a SSJ3.

Someone had to do it, right?

In saint seiya power levels :
Does this mean that the PS4 is like Phoenix Ikki with huge untapped potential, while the Xbone is like the Unicorn bronze saint, who will hit his ceiling very fast no matter how much he trains (improves his dev tools)?

PC is Virgo Shaka, closest to the gods, with unattainable power.
 
eSRAM is this generation's Cell processor, in a way. It's something the devs don't fully understand quite yet, making their jobs in part more difficult/involved.

Not at all. The Cell had power to unlock through programming, which gave it an edge over the 360... the eSRAM is a handicap no matter which way you look at it.
 
It seems pretty silly for the CPU to be able to touch the ESRAM anyway. I don't really see any use for it on the CPU side.
Regarding this, from the DF interview:
Digital Foundry: And you have CPU read access to the ESRAM, right? This wasn't available on Xbox 360 eDRAM.

Nick Baker: We do but it's very slow.
That was the extent of what they said about it, I believe, so it's probably not of much benefit like you say.
 

Cronee

Member
Ok, Xbone's architecture simplified.

Imagine you are at the airport, going on a nice trip, and you're about to go through a security checkpoint. There are eight stations, each having just one line, with three attendants at each line. The people there are being screened at a fairly good rate of speed and there is a small line of people waiting. However, if there were a sudden increase in people needing to be screened, there would be an increased wait time, as the stations are already working as fast as they can. Well, to help with this problem the airport, having foreseen this, installed a ninth station off to the side that has two lanes, each having four attendants, but designated the station as servicing only those with special needs. This station works very quickly and screens people as fast as the larger stations, which does help to alleviate the wait time some. This design causes some confusion and requires extra work to keep things moving smoothly, but it does work well enough.

Next up, PS4's architecture simplified.

Now, you're at a different airport, returning home from your trip. At this airport's security checkpoint there are eight stations as well, but each station has two lines, with each line having four attendants. These stations work considerably faster and allow for double the amount of people to be screened. This means there is little to no waiting and, as a result, eliminates the need for a special needs station. You realize this setup is more efficient, though rather costly since it requires more attendants, but everyone is given the same priority, which allows for very smooth and very high throughput.

I hope that helps!
 