
DF: Xbone Specs/Tech Analysis: GPU 33% less powerful than PS4

Truespeed

Member
Pulling numbers out of my ass, I'd say real-world usage might be seeing bandwidth around the 120GB/s mark, maybe?

It's a big advantage over just DDR3 for sure, almost double. They had to have it; they would be screwed without it.

It just doesn't hold up to 256-bit GDDR5.

Sony just got lucky that they could up the GDDR5 to 8GB. MS couldn't guarantee it would be available and needed the 8GB for their planned OS functions, so they went with their only choice. Though I'm sure they could have gone for a much higher bandwidth interface to the ESRAM, which would have made it much more interesting, like how it was with the PS2. I'm not sure why they didn't?

Luck has jack shit to do with this. This type of planning and procurement takes years. Mark Cerny started working on this in 2007, and to chalk it all up to luck is an insult. Also, what makes you think MS had GDDR5 in mind? Did they say that somewhere, or are you making that up? I'm guessing the latter, because MS was always under the impression the PS4 was only going to have 4GB of GDDR5, and when Sony dropped the bombshell they interestingly had to reschedule their initial Xbone reveal.
 

obonicus

Member
LOL, I have a hard time believing Sony would ever tell another company to gimp the 360 version because their 360-architected game just wasn't performant on the PS3.

Honestly, because this was mired in NDA talk and vagaries I'm not sure where this poster was pinning the blame. I think most of the 'blame' was foisted on the pub trying to avoid bad press.

And besides, sabotaging your crown jewel just to quell comparison arguments would have been negligent. Now, there were clauses stipulated by Sony that if you time delayed the PS3 version then you had to add additional content which seems fair to me.

It's more that, if you're the person buying the inferior version of the game at the same price point, you'd feel ripped off (and with good reason). So it's not sabotage so much as homogenization. If the DF conclusion is '360 has better framerate, PS3 better IQ' rather than '360 version better in all regards' then it's a job well done. That was his argument, anyway; I forget the exact username on B3D, but I'm sure one could search for it over there: it started with 'joker' and had maybe 4 digits after it.

As for your Bayonetta example - that was never supposed to be released for the PS3 initially, and when Sega did make the call, they farmed it out to some unknown Japanese port mill with a broken homepage.

That's not really the point. The point is that when they released a markedly inferior PS3 version there was an enormous shitstorm. A similar thing happened with Red Dead. It started when B3D posters started pixel-counting and got worse when DF started their face-offs. To make matters worse, the shitstorms were a deluge of 'shitty pub/lazy dev', with a constant onslaught of KZ2, UC2 and GoW3 gifs for 'evidence' of said shittiness.

Now, what the other person who responded to me said is true. The reason for the shitstorm on PS3 is that you had very very pretty exclusives on the system (and not that many on 360) and therefore non-tech-savvy users felt cognitive dissonance when presented with inferior PS3 versions.
 

GorillaJu

Member
I think in multiplatform game performance they'll be as identical as the devs can get them. It'll be exclusives where the PS4 does things that the XBO isn't capable of.
 

Truespeed

Member
Honestly, because this was mired in NDA talk and vagaries I'm not sure where this poster was pinning the blame. I think most of the 'blame' was foisted on the pub trying to avoid bad press.

Again, it would have been professionally irresponsible to sabotage your own product. It's one thing to aim for parity during the concurrent development of both platforms, but to intentionally gimp the lead platform is sickening.

It's more that, if you're the person buying the inferior version of the game at the same price point, you'd feel ripped off (and with good reason). So it's not sabotage so much as homogenization. If the DF conclusion is '360 has better framerate, PS3 better IQ' rather than '360 version better in all regards' then it's a job well done. That was his argument, anyway; I forget the exact username on B3D, but I'm sure one could search for it over there: it started with 'joker' and had maybe 4 digits after it.

The problem with this argument is that you're also competing with games from other competitors that are in the same genre. So if you're not going to make your game the best you can make it, given your resources and time, then you're just setting yourself up for failure when that game from this other company is recommended because it does everything else better.

That's not really the point. The point is that when they released a markedly inferior PS3 version there was an enormous shitstorm. A similar thing happened with Red Dead. It started when B3D posters started pixel-counting and got worse when DF started their face-offs. To make matters worse, the shitstorms were a deluge of 'shitty pub/lazy dev', with a constant onslaught of KZ2, UC2 and GoW3 gifs for 'evidence' of said shittiness.

Now, what the other person who responded to me said is true. The reason for the shitstorm on PS3 is that you had very very pretty exclusives on the system (and not that many on 360) and therefore non-tech-savvy users felt cognitive dissonance when presented with inferior PS3 versions.

And rightfully so, especially when they were spoiled by Sony's first-party studios. Why should anyone be content with something inferior when you clearly saw the potential of the system in the right hands? Now, the 'lazy developer' line may have been harsh because it ignores the economics and constraints of game development, but they had a point.
 

Snubbers

Member
I am genuinely intrigued to see what kind of performance difference we end up with in reality.

I also share the view that I expect the PS4 to have an easily noticeable performance advantage, but I'm also of the opinion that the raw number comparisons are just not 1:1 representative of the performance gap.

The ESRAM is looking to be more significant than I originally thought. The simple reasoning is that it takes up transistor space on the APU, that could have been spent on CUs for equivalence, only leaving the difference in memory bandwidth as the (sizeable) differentiator.
But it turns out that ESRAM is a way of effectively creating some extra memory bandwidth, and it also allows for improving CU efficiency and postprocessing effects.

There are also the hardware compression/decompression units to reduce bandwidth, the SHAPE unit to offload audio processing, and things like the DMEs that are obviously there for a reason. That's not to say the PS4 doesn't have its own architectural IP, but it is different, and the way the transistor budget has been spent does not allow as direct a comparison as I first thought.

No magic bullets, and due to the fairly different architectures, it's hard to use desktop graphics card comparisons IMO.

We'll have to wait and see. I was very much disappointed at the specs, but B3D discussions have slightly allayed my fears, enough that I'm going to be open-minded until I see the actual games side by side.
 

obonicus

Member
Again, it would have been professionally irresponsible to sabotage your own product. It's one thing to aim for parity during the concurrent development of both platforms, but to intentionally gimp the lead platform is sickening.

I'm not saying it's a correct practice. I don't even know if it's true; I'm saying that if PS4 games don't take advantage of the large difference in performance, I can guess why. And also that I wouldn't be TERRIBLY surprised if it did turn out to be true.

The problem with this argument is that you're also competing with games from other competitors that are in the same genre. So if you're not going to make your game the best you can make it, given your resources and time, then you're just setting yourself up for failure when that game from this other company is recommended because it does everything else better.

Except almost every game achieved parity, with a slight advantage on 360. Even late-gen, after years of 'lead on PS3' development, games that were better on PS3 were relatively rare. I really think it's a boat no one wanted to upset. In fact, the claim I heard (from the same developer) was that it was preferable to launch a game exclusively on 360 than to launch a 360/PS3 one in which the PS3 version was considerably worse, because of how much bad blood you'd get.

And rightfully so, especially when they were spoiled by Sony's first-party studios. Why should anyone be content with something inferior when you clearly saw the potential of the system in the right hands? Now, the 'lazy developer' line may have been harsh because it ignores the economics and constraints of game development, but they had a point.

The problem is exactly that. 'The potential of the system' as a phrase is what we're looking at here, and it's mostly a meaningless one, because you end up comparing apples to oranges. Or making technical statements without any details to back them up (devs can't even be up-front about the actual difficulties of developing games on current-gen because of NDAs, and we're 8 years in).
 
Something worth noting, though. Back during the bad PS3 port days, a confirmed dev on B3D mentioned that often devs were encouraged to pare down Xbox versions of games so they wouldn't look that bad when compared to PS3 versions. Sometimes by Sony, sometimes by the company itself (think of PR shitstorms that inferior PS3 versions - e.g. Bayonetta - caused).

I don't know how much of this was hearsay, but the user was a developer.

I wouldn't be surprised if most devs didn't want to chance it, straight off the bat, or are waiting for someone to try making an inferior XBO port and see what the repercussions are.

That claim is absolutely laughable. I don't believe Sony or a publisher would ever say "make the 360 version look worse". The developer would laugh in their face. There were numerous games that were just as rough on PS3. For some reason Bayonetta caused a minor shitstorm, but it was not the only shitty PS3 port.

Portal 2 was noticeably better on PS3, and I bet you never heard a shitstorm over that.
 

Melchiah

Member
It's more that, if you're the person buying the inferior version of the game at the same price point, you'd feel ripped off (and with good reason). So it's not sabotage so much as homogenization. If the DF conclusion is '360 has better framerate, PS3 better IQ' rather than '360 version better in all regards' then it's a job well done. That was his argument, anyway; I forget the exact username on B3D, but I'm sure one could search for it over there: it started with 'joker' and had maybe 4 digits after it.

I remember a poster named Joker on B3D being a 360 developer, but also very partial to that platform. And if I recall correctly, he never worked on PS3. Keeping that in mind, it might be his excuse for the 360 versions of multiplatform games not being even better than they were. I have to say, though, that I haven't followed B3D in a few years, so things might have changed.
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
The ESRAM is looking to be more significant than I originally thought. The simple reasoning is that it takes up transistor space on the APU, that could have been spent on CUs for equivalence, only leaving the difference in memory bandwidth as the (sizeable) differentiator.
But it turns out that ESRAM is a way of effectively creating some extra memory bandwidth, and it also allows for improving CU efficiency and postprocessing effects.

This comes up again and again. One cannot simply beef up the GPU and make a faster system. Without the eSRAM you have a 68GB/s system, which leaves only about 40-50GB/s for the GPU. This would hardly be enough to feed 8 CUs, let alone 12. MS used eSRAM for one reason: they wanted 8GB of unified memory, and at the time of the design that meant DDR3. They chose the highest bandwidth available (256-bit DDR3-2133), which was 68GB/s. They knew this was not enough for a decent APU, so they fell back on the same solution the 360 and PS2 used: embedded RAM. At this point they had a decision between off-die eDRAM and on-die eSRAM. They went with the eSRAM, which made their APU very large but allowed it to be fabbed with the same process and shrink at the same rate.

eSRAM is a bandwidth solution, period.
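
For anyone who wants to check the arithmetic behind those figures, here's a rough back-of-envelope sketch (the 2133 MT/s and 5500 MT/s transfer rates are the commonly cited numbers for these machines, not official spec sheets, and the function name is just mine):

```python
# Peak bandwidth of a memory bus: transfer rate (MT/s) * bus width (bytes), in GB/s.
def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits):
    return transfer_rate_mts * (bus_width_bits / 8) / 1000

# Xbox One main memory: DDR3-2133 on a 256-bit bus
print(peak_bandwidth_gbs(2133, 256))   # ~68.3 GB/s

# PS4 main memory: GDDR5 at 5500 MT/s on a 256-bit bus
print(peak_bandwidth_gbs(5500, 256))   # 176.0 GB/s
```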
 

Respawn

Banned
This comes up again and again. One cannot simply beef up the GPU and make a faster system. Without the eSRAM you have a 68GB/s system, which leaves only about 40-50GB/s for the GPU. This would hardly be enough to feed 8 CUs, let alone 12. MS used eSRAM for one reason: they wanted 8GB of unified memory, and at the time of the design that meant DDR3. They chose the highest bandwidth available (256-bit DDR3-2133), which was 68GB/s. They knew this was not enough for a decent APU, so they fell back on the same solution the 360 and PS2 used: embedded RAM. At this point they had a decision between off-die eDRAM and on-die eSRAM. They went with the eSRAM, which made their APU very large but allowed it to be fabbed with the same process and shrink at the same rate.

eSRAM is a bandwidth solution, period.

Good post
 
The problem is that none of these points give any indication of the overall gain across the system from having the low latency ESRAM. Most points are also made about very specific use cases. You are using some of them out of context.

Like with anything you read it is open to your own interpretation of what is being said.

I'm a long-time member of B3D and have read through all those comments, and many counter-comments that are also valid. I certainly haven't come to the same conclusions as you.

How much of an effect do you think the ESRAM will have on XB1's performance, based on the posts you have read? Can you come up with any guesstimates at all? Can you quantify any of what you have read into real-world figures?

It's quite obvious lower latency is better; nobody can argue that. But if you are going to hold it up as something other than a minor advantage, it needs to be quantified.

It's the same with the ACEs: nobody has any idea how much benefit they will bring, and thus there's not much talk about the 8 in the PS4 vs the 2 in the XB1.

I think in the grand scheme of things, based on what has been said, it will provide a very helpful boost to performance for developers, enough to be considered very significant to Xbox One game development. Then when you consider very useful GCN features such as PRT (Partially Resident Textures), where you can load part of a texture into memory as opposed to the entire thing, suddenly that 32MB starts to seem a lot bigger and more useful than some might've originally expected.

http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/5

In the meantime AMD did throw out one graphics tidbit: partially resident textures (PRT). PRTs allow for only part of a texture to actually be loaded in memory, allowing developers to use large textures without taking the performance hit of loading the entire texture into memory if parts of it are going unused. John Carmack already does something very similar in software with his MegaTexture technology, which is used in the id Tech 4 and id Tech 5 engines. This is essentially a hardware implementation of that technology.

I can't possibly quantify how much it will help performance. All I do know for certain is that it absolutely will. Keep in mind when I say this I'm not saying it in a way to compare it to the PS4. The PS4 is simply stronger. I'm just saying that the One will be quite a bit more capable in the overall graphics performance department than it's currently being given credit for. Virtual texturing in game engines in general will make that 32MB seem almost like the Pacific Ocean, which is the primary reason why I think those dismissing it as 'just 32MB' aren't looking at the full picture.

Drawing examples from how significant EDRAM was to 360 game development is important because it should give pause to anyone thinking the ESRAM can't be very vital to overall performance. It's easy to be overshadowed in all the GDDR5, 1.8 teraflops talk, but all Microsoft consoles have been quite capable in the performance department. This time will be no different, because those engineers really know their shit.
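
To put a rough number on that intuition, here's a back-of-envelope sketch (64 KB is the GCN PRT tile size; the 5% residency figure is a made-up illustration, not a number from any game):

```python
# How many 64 KB PRT tiles fit in a 32 MB pool, and how much source texture
# data those resident tiles can stand in for if only a fraction is sampled.
TILE_SIZE = 64 * 1024          # GCN partially-resident-texture tile size
POOL_SIZE = 32 * 1024 * 1024   # Xbox One ESRAM capacity

tiles_in_pool = POOL_SIZE // TILE_SIZE
print(tiles_in_pool)           # 512 resident tiles

residency_fraction = 0.05      # hypothetical: 5% of a virtual texture visible per frame
virtual_mb = POOL_SIZE / residency_fraction / 2**20
print(virtual_mb)              # ~640 MB of virtual texture data backed by 32 MB
```

Whether the resident tile pool would actually live in the ESRAM rather than in main RAM is a separate question, which is where some of the pushback further down the thread comes in.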

This comes up again and again. One cannot simply beef up the GPU and make a faster system. Without the eSRAM you have a 68GB/s system, which leaves only about 40-50GB/s for the GPU. This would hardly be enough to feed 8 CUs, let alone 12. MS used eSRAM for one reason: they wanted 8GB of unified memory, and at the time of the design that meant DDR3. They chose the highest bandwidth available (256-bit DDR3-2133), which was 68GB/s. They knew this was not enough for a decent APU, so they fell back on the same solution the 360 and PS2 used: embedded RAM. At this point they had a decision between off-die eDRAM and on-die eSRAM. They went with the eSRAM, which made their APU very large but allowed it to be fabbed with the same process and shrink at the same rate.

eSRAM is a bandwidth solution, period.

This misses the larger point. Who cares what their primary reason was for putting it in there? The point is that it will have implications beyond simply providing more bandwidth, and experienced programmers have already clearly said as much. Some people would like others to believe that the ESRAM can't be extremely useful both for its bandwidth benefits and for its latency. It isn't one or the other. There are bandwidth benefits, latency benefits, and benefits to existing development techniques. There are performance benefits to the bandwidth, naturally, but there are also performance benefits to the ESRAM being low latency. In situations where there are cache misses and the required data isn't in the L1 or L2 caches for the One's GPU, developers will be happy that a possible trip to VRAM (for data that happens to be in the ESRAM) won't be anywhere near as expensive as it would otherwise have been with GDDR5 or even with the DDR3. Would it have been simpler to have things the way Sony does with the PS4? Absolutely, but the engineers made their decisions, and the Xbox One isn't as powerful as the PS4. Even so, there are some upsides to the architectural design of the Xbox One, something even a Sony developer acknowledges.
 
I am so intrigued to see whether these conjured figures actually have an impact when the consoles release.

As a technical layman, I can't help but be skeptical given the situation with the PS3 last generation.

I feel it'll actually be pretty indistinguishable. But I can't wait to find out.
 

James Sawyer Ford

Gold Member
I think in the grand scheme of things, based on what has been said, it will provide a very helpful boost to performance for developers, enough to be considered very significant to Xbox One game development. Then when you consider very useful GCN features such as PRT (Partially Resident Textures), where you can load part of a texture into memory as opposed to the entire thing, suddenly that 32MB starts to seem a lot bigger and more useful than some might've originally expected.

http://www.anandtech.com/show/4455/amds-graphics-core-next-preview-amd-architects-for-compute/5



I can't possibly quantify how much it will help performance. All I do know for certain is that it absolutely will. Keep in mind when I say this I'm not saying it in a way to compare it to the PS4. The PS4 is simply stronger. I'm just saying that the One will be quite a bit more capable in the overall graphics performance department than it's currently being given credit for. Virtual texturing in game engines in general will make that 32MB seem almost like the Pacific Ocean, which is the primary reason why I think those dismissing it as 'just 32MB' aren't looking at the full picture.

Drawing examples from how significant EDRAM was to 360 game development is important because it should give pause to anyone thinking the ESRAM can't be very vital to overall performance. It's easy to be overshadowed in all the GDDR5, 1.8 teraflops talk, but all Microsoft consoles have been quite capable in the performance department. This time will be no different, because those engineers really know their shit.

Again, this wasn't an engineering decision to increase performance. No high-end GPUs use this design.

It was a design decision to ensure 8GB of ram with sufficient bandwidth provided by the esram on the same chip.

End of story. There's no secret sauce for esram.
 
And honestly I don't see how "Partially Resident Textures" will be something that can put eSRAM on a similar level as GDDR5. It's still only 32MB, and I guess it will be mainly used for the framebuffer etc. Killzone Shadow Fall, for example, has over 1321 MB of textures in RAM, and that's only the non-streaming textures.
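
For a rough sense of how quickly 1080p render targets eat into 32MB, here's a back-of-envelope sketch (the four-target G-buffer layout is a generic deferred-renderer example, not any specific engine's):

```python
# Approximate memory footprint of 1080p render targets vs. a 32 MB pool.
WIDTH, HEIGHT = 1920, 1080

def render_target_mb(bytes_per_pixel):
    return WIDTH * HEIGHT * bytes_per_pixel / 2**20

rgba8 = render_target_mb(4)          # one 32-bit colour target: ~7.9 MB
depth = render_target_mb(4)          # a 32-bit depth/stencil buffer: ~7.9 MB

# A generic deferred G-buffer: four colour targets plus depth
gbuffer_total = 4 * rgba8 + depth
print(round(rgba8, 1), round(gbuffer_total, 1))   # ~7.9 MB and ~39.6 MB (> 32 MB)
```

So a full-fat deferred G-buffer already overshoots the pool on its own, before any textures come into it.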
 
Again, this wasn't an engineering decision to increase performance. No high-end GPUs use this design.

It was a design decision to ensure 8GB of ram with sufficient bandwidth provided by the esram on the same chip.

End of story. There's no secret sauce for esram.

Again, you're missing the point. Nobody is talking about some silly secret sauce. I'm talking about facts supported by actual game developers.

Microsoft's engineers may have had a singular, or more vital purpose for EDRAM in the Xbox 360 also, but it ended up having benefits apparently exceeding just what was mentioned on the surface. It was originally stated to be for free AA, but developers ended up finding much more use for it.

A Sony dev's take from Beyond3D.

No the benefit of the EDRAM in 360 was moving all of the GPU's output bandwidth to a separate pool of memory, with enough bandwidth for it to never be a bottleneck.
The SRAM performs a similar function, and potentially more.
If the pool is actually SRAM as opposed to some EDRAM variant, then it would have very low latency, this would mean that using it as a data source would greatly increase GPU performance in memory limited cases.
If it really is SRAM that memory is a big block on the die, and 64MB was probably not practical.

Look what was possible with Kameo. They don't mention EDRAM, but it obviously played an important role.

http://www.ign.com/articles/2006/03/03/a-rare-epilogue?page=3

We always wanted to have crowd scenes in Kameo and started to do some experiments with the Xbox 1 and figured we could get maybe 100 or so NPCs. Once we moved onto the Xbox 360 we thought, "Let's try something that will slow it down, how about 1,000? Ran fine, no problem whatsoever! How about 3,000? Still fine!" Then we thought we had better try it with something more taxing than a test level so we put them on the Battle Field level which was all parallax and normal mapped, had a huge draw distance and lots of special effects like volumetric smoke; it still ran fine. In the released game we had something like 3000-plus NPCs because more than that was hard to choreograph, but the 360 can do much more. At one point during debug we found that each of the NPCs in one scene were being drawn 4 times by mistake, that's 12,000 being drawn and still no sign of slowdown.

We had a similar story with the GPU particle systems... We had a test running with a 100,000 particles being computed purely on the GPU, no CPU intervention at all. Now the Xbox 1 could do that but on the 360 they all react to the player and hit the floor and are lit, then we tried more, lots more! How about 1 million? We aren't talking test levels or tech demos here. They are actually in the released game, and you can go and count them in the Throne Room. Most levels don't have quite that many though as only about 300,000 are normally in visible range at one time.

They go on to talk about some other stuff. Keep in mind while reading this that it was impossible to texture from EDRAM or resolve into the EDRAM on the 360. These things that could not be done with EDRAM on the 360, can be done with ESRAM on the Xbox One. This isn't secret sauce talk. It's just the facts.

Capcom on what was possible with EDRAM.

They are very proud of the techniques they've been able to employ to get a tremendous amount of good-looking particle effects on screen without causing slowdown. They said that utilizing the Xbox 360 EDRAM for certain screen effects gives them great speed without hurting frame rate. They said that this EDRAM, along with learning to properly use the multithreaded processors, are the two "tricks to making Xbox 360 games run well".

These aren't even the only examples out there, but they are the only ones I was willing to search out. The point is that there were uses for it that well exceeded what people initially thought it might help with. ESRAM is even better and doesn't have the same limitations that were present in EDRAM, as clearly pointed out in Microsoft's own documentation. Just because the primary concern for the inclusion of the ESRAM might have been for bandwidth concerns, doesn't mean that bandwidth is the only way in which it will help.

Just take a good look at how helpful EDRAM was on the 360. Even with the EDRAM having access to more memory bandwidth, ESRAM has a number of advantages over EDRAM. I believe it was said that a cache miss on the Xbox 360 was upwards of 300-500+ cycles?
 

Oblivion

Fetishing muscular manly men in skintight hosery
Crazy how this'll be the first time MS will have a console noticeably weaker than Sony. In terms of power, would this be accurate?

Sony >>>>> Xbone > Wii-U
 

Y2Kev

TLG Fan Caretaker Est. 2009
Crazy how this'll be the first time MS will have a console noticeably weaker than Sony. In terms of power, would this be accurate?

Sony >>>>> Xbone > Wii-U

In terms of flops, the Xbone is, I believe, equidistant from the PS4 and Wii U.

So

Sony >> Xbone >> WiiU

Where I believe each > is 300 gflops.
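
For reference, the usual shader-count arithmetic behind those figures (a sketch assuming 800MHz clocks for both new consoles, which is what was being assumed at the time; the Wii U shader count and clock are community estimates, not confirmed specs):

```python
# Theoretical single-precision GFLOPS = shader ALUs * 2 ops per clock (FMA) * clock (GHz)
def gflops(shader_alus, clock_ghz):
    return shader_alus * 2 * clock_ghz

ps4   = gflops(1152, 0.800)   # 18 CUs * 64 lanes -> 1843.2
xbone = gflops(768,  0.800)   # 12 CUs * 64 lanes -> 1228.8
wii_u = gflops(320,  0.550)   # rumoured 320 ALUs @ 550 MHz -> 352.0

print(ps4, xbone, wii_u)
print(ps4 - xbone, xbone - wii_u)   # gaps of ~614 and ~877 GFLOPS
```

By that arithmetic the two gaps aren't actually equal, which is roughly what the correction a few posts down gets at.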
 

benny_a

extra source of jiggaflops
Crazy how this'll be the first time MS will have a console noticeably weaker than Sony. In terms of power, would this be accurate?

Sony >>>>> Xbone > Wii-U

PS4 and Xbone are going to get the same multiplatform games in 2 years. Wii-U will not be part of that.
 

Brashnir

Member
In terms of flops, the Xbone is, I believe, equidistant from the PS4 and Wii U.

So

Sony >> Xbone >> WiiU

Where I believe each > is 300 gflops.

I'm not sure that's right for Wii U. I thought the Wii U's GPU was around 400-500 GFLOPS, not 900.
 

Matt

Member
Agreed that it was basically forced by their DDR3 choice. I just think it must be giving them more than a "slight" helping hand or they would have left it out and saved a few bucks. But slight is subjective, and it's going to be hard to judge how the differences really play out until we see a few games running on both platforms.

Without the ESRAM, the system basically would not work. It wasn't put in there to help make up for the DDR3, it was put in there to make the choice of DDR3 workable in the first place.
 

Brashnir

Member
Doesn't that work out?

I thought it was

1800 GFLOP

1200 GFLOP

~600 GFLOP?

oh geez, I brain farted.

I guess so, but absolute power isn't as important as relative power. I meant to say 800 instead of 900 in my last post, which would put the relative equation in order.

The Wii U also has some other glaring disadvantages in addition to FLOP count.
 

TheD

The Detective
The ESRAM is no secret sauce; it's a necessity for a system that is inferior to the PS4's solution in every way.

Yep, the only reason that the Xbone has the SRAM is to try and help make up for the very small amount of bandwidth that comes from the main RAM.
But its inclusion made the IC too large to also include as large a GPU without pushing costs beyond what MS wants to pay.

Doesn't that work out?

I thought it was

1800 GFLOP

1200 GFLOP

~600 GFLOP?

WiiU is 300ish at most.
 
~600 GFLOPS is pretty optimistic.
After reading some analysis I think 350-450 GFLOPS is way more realistic.

Well, PS4 and Xbox One should be paired in the same next-gen group while the WiiU is some steps behind them.
 

Y2Kev

TLG Fan Caretaker Est. 2009
oh geez, I brain farted.

I guess so, but absolute power isn't as important as relative power. I meant to say 800 instead of 900 in my last post, which would put the relative equation in order.

The Wii U also has some other glaring disadvantages in addition to FLOP count.

Yeah, I agree. I'm just being lazy. Technically I think the floppage of the PS3 is beyond that of the 360 but nvidia is a bunch of lying tatas so it doesn't mean much.
 

Argyle

Member
Again, it would have been professionally irresponsible to sabotage your own product. It's one thing to aim for parity during the concurrent development of both platforms, but to intentionally gimp the lead platform is sickening.

See that's the thing - I think a lot of developers targeted the PS3 as the lead platform. If your game doesn't use 100% of the SPUs, then it's relatively easy to get your game working well enough on 360 such that you have parity between the two versions.
 

Brashnir

Member
Yeah, I agree. I'm just being lazy. Technically I think the floppage of the PS3 is beyond that of the 360 but nvidia is a bunch of lying tatas so it doesn't mean much.

Even if the Nvidia numbers are correct, the PS3 still has more theoretical FLOP potential than the 360 when you count the Cell. The trouble was that it is very difficult to get those SPUs working efficiently in practical applications. Also, the fact remains that even if every SPU was hitting on every cycle, the relative difference was still small.
 

Deuterium

Member
Again, you're missing the point. Nobody is talking about some silly secret sauce. I'm talking about facts supported by actual game developers.

Microsoft's engineers may have had a singular, or more vital purpose for EDRAM in the Xbox 360 also, but it ended up having benefits apparently exceeding just what was mentioned on the surface. It was originally stated to be for free AA, but developers ended up finding much more use for it.

A Sony dev's take from Beyond3D.



Look what was possible with Kameo. They don't mention EDRAM, but it obviously played an important role.

http://www.ign.com/articles/2006/03/03/a-rare-epilogue?page=3



They go on to talk about some other stuff. Keep in mind while reading this that it was impossible to texture from EDRAM or resolve into the EDRAM on the 360. These things that could not be done with EDRAM on the 360, can be done with ESRAM on the Xbox One. This isn't secret sauce talk. It's just the facts.

Capcom on what was possible with EDRAM.



These aren't even the only examples out there, but they are the only ones I was willing to search out. The point is that there were uses for it that well exceeded what people initially thought it might help with. ESRAM is even better and doesn't have the same limitations that were present in EDRAM, as clearly pointed out in Microsoft's own documentation. Just because the primary concern for the inclusion of the ESRAM might have been for bandwidth concerns, doesn't mean that bandwidth is the only way in which it will help.

Just take a good look at how helpful EDRAM was on the 360. Even with the EDRAM having access to more memory bandwidth, ESRAM has a number of advantages over EDRAM. I believe it was said that a cache miss on the Xbox 360 was upwards of 300-500+ cycles?


Thanks SenjutsuSage,

That was a darn insightful post. I fully expect the actual, real-world performance (graphics/gameplay) of the Xbox ONE will be very similar to the PS4, even though the PS4 is more formidable "on paper". As I plan on owning both systems, I am happy that each system will have its advantages. Some people are so biased against MS right now (due to the potential DRM implications) that they are figuratively putting their fingers in their ears and humming when someone tries to explain legitimate technical features that are present in the Xbox ONE.
 
Gemüsepizza said:
And honestly I don't see how "Partially Resident Textures" will be something that can put eSRAM on a similar level as GDDR5. It's still only 32MB, and I guess it will be mainly used for the framebuffer etc. Killzone Shadow Fall, for example, has over 1321 MB of textures in RAM, and that's only the non-streaming textures.

PRT is essentially virtual texturing technology. With virtual texturing you can fit much more than 32MB of data into 32MB of ESRAM. In fact, a lot more.

That isn't secret sauce or pie in the sky. That's the actual truth of what's possible with virtual texturing. And know which first party dev has showcased extensive experience researching virtual texturing with a working engine already? Lionhead.

And you guys are missing the point, ESRAM isn't about matching the GDDR5. This isn't a PS4 and Xbox One comparison that I'm making. I'm talking strictly about what ESRAM may mean for Xbox One game development. Anybody saying it has no positive benefits other than bandwidth has no idea what they are talking about and developers already back this point up.

And I don't think people even realize what they are saying when they say "match GDDR5." The GDDR5 has more bandwidth, we all know that. However, GDDR5 absolutely cannot match ESRAM in the area of latency. It simply cannot. The latency of ESRAM is orders of magnitude lower than that of GDDR5. Nothing about the PS4 can change this fact. Not a superior GPU, not GDDR5 memory, nothing. But, again, that isn't the point here. The PS4 is the stronger console of the two, easily. However, people seem to also want to believe that every single inner working of the PS4 has everything inside the Xbox One beat, and that isn't entirely accurate.

A cache miss on the 360 and PS3 I believe was in excess of 500+ cycles. A cache miss for the PS4's GPU will also be in the hundreds of cycles, likely 300+ and surely around the same for the Xbox One, when the data is in the DDR3 pool. For a cache miss where the data is in the Xbox One ESRAM pool, however, you could be looking at 10-20. It will make a good deal of difference to the efficiency of the system to have such low latency. Hell, confirmed developers have already pointed out examples in which low latency would benefit the Xbox One, and even cited specific techniques, and yet people still are intent on denying it.

None of it makes it stronger than the PS4, or even dead even with the PS4, and I've repeated time and time again that this isn't what I'm saying. It just has some great benefits for Xbox One development. When people see Xbox One games and are impressed with what they see, and there will certainly be games that impress everyone, at the heart of that will be the benefits of having low latency ESRAM.
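
To put those cycle counts in context, the standard average-memory-access-time arithmetic looks like this (a sketch; the L2 hit latency and miss rate are made-up assumptions, and the 10-20 cycle ESRAM figure is the claim above, not a published number):

```python
# Average memory access time (AMAT) = hit time + miss rate * miss penalty.
def amat(hit_cycles, miss_rate, miss_penalty_cycles):
    return hit_cycles + miss_rate * miss_penalty_cycles

L2_HIT    = 50    # assumed GPU L2 hit latency, in cycles
MISS_RATE = 0.10  # hypothetical 10% of accesses miss the L2

dram_path  = amat(L2_HIT, MISS_RATE, 300)  # miss falls through to DDR3/GDDR5
esram_path = amat(L2_HIT, MISS_RATE, 20)   # miss served from ESRAM (the claimed figure)

print(dram_path, esram_path)   # 80.0 vs 52.0 cycles on average
```

How much of that would show up in real frame times depends on how well the GPU already hides misses with other in-flight wavefronts, which is exactly the pushback that comes later in the thread.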
 

Matt

Member
PRT is essentially virtual texturing technology. With virtual texturing you can fit much more than 32MB of data into 32MB of ESRAM. In fact, a lot more.

That isn't secret sauce or pie in the sky. That's the actual truth of what's possible with virtual texturing. And know which first party dev has showcased extensive experience researching virtual texturing with a working engine already? Lionhead.

And you guys are missing the point, ESRAM isn't about matching the GDDR5. This isn't a PS4 and Xbox One comparison that I'm making. I'm talking strictly about what ESRAM may mean for Xbox One game development. Anybody saying it has no positive benefits other than bandwidth has no idea what they are talking about and developers already back this point up.

And I don't think people even realize what they are saying when they say "match GDDR5." The GDDR5 has more bandwidth, we all know that. However, GDDR5 absolutely cannot match ESRAM in the area of latency. It simply cannot. The latency of ESRAM is orders of magnitude lower than that of GDDR5. Nothing about the PS4 can change this fact. Not a superior GPU, not GDDR5 memory, nothing. But, again, that isn't the point here. The PS4 is the stronger console of the two, easily. However, people seem to also want to believe that every single inner working of the PS4 has everything inside the Xbox One beat, and that isn't entirely accurate.

A cache miss on the 360 and PS3 I believe was in excess of 500+ cycles. A cache miss for the PS4's GPU will also be in the hundreds of cycles, likely 300+ and surely around the same for the Xbox One, when the data is in the DDR3 pool. For a cache miss where the data is in the Xbox One ESRAM pool, however, you could be looking at 10-20. It will make a good deal of difference to the efficiency of the system to have such low latency. Hell, confirmed developers have already pointed out examples in which low latency would benefit the Xbox One, and even cited specific techniques, and yet people still are intent on denying it.

None of it makes it stronger than the PS4, or even dead even with the PS4, and I've repeated time and time again that this isn't what I'm saying. It just has some great benefits for Xbox One development. When people see Xbox One games and are impressed with what they see, and there will certainly be games that impress everyone, at the heart of that will be the benefits of having low latency ESRAM.

What you're saying isn't wrong, but you're way overselling it.
 

Deuterium

Member
What you're saying isn't wrong, but you're way overselling it.

Actually, he sounds completely reasonable and even-keeled...which is a friggin' breath of fresh air, here (IMHO).

I do not understand how the following statement could even possibly be construed as being over the top, or "over-selling" the Xbox One:

None of it makes it stronger than the PS4, or even dead even with the PS4, and I've repeated time and time again that this isn't what I'm saying. It just has some great benefits for Xbox One development. When people see Xbox One games and are impressed with what they see, and there will certainly be games that impress everyone, at the heart of that will be the benefits of having low latency ESRAM.
 
What you're saying isn't wrong exactly, but you're way overselling it.

Which part? The cache miss latency differences between GDDR5 and ESRAM?

The fact that virtual texturing allows for tons of data to be contained in a much smaller memory footprint?

I guess I could see the part about any great looking game on Xbox One having to do with low latency ESRAM, which probably definitely qualifies as overselling. However, there will absolutely be some games where the benefits of ESRAM end up coming into play in a positive way for xbox one development.

If developers texture from the ESRAM, that's a positive. If they resolve into it, that's a positive. If they use techniques that benefit from the low latency through the ESRAM, that's a positive. If it merely just allows them to have much smaller than normal latencies when running certain code, that, too, will be a positive.
 

Matt

Member
Which part? The cache miss latency differences between GDDR5 and ESRAM?

The fact that virtual texturing allows for tons of data to be contained in a much smaller memory footprint?

I guess I could see the part about any great looking game on Xbox One having to do with low latency ESRAM, which probably definitely qualifies as overselling. However, there will absolutely be some games where the benefits of ESRAM end up coming into play in a positive way for xbox one development.


If developers texture from the ESRAM, that's a positive. If they resolve into it, that's a positive. If they use techniques that benefit from the low latency through the ESRAM, that's a positive. If it merely just allows them to have much smaller than normal latencies when running certain code, that, too, will be a positive.

That's basically what I was talking about. Again, you're not wrong, and you have put things in terms much better than most defending the Xbone's architecture.
 
Actually, he sounds completely reasonable and even-keeled...which is a friggin' breath of fresh air, here (IMHO).

I do not understand how the following statement could even possibly be construed as being over the top, or "over-selling" the Xbox One:
I have to agree...his post seemed very reasonable and level-headed...

but we all have to really wait and see what the real-world difference will be like..
 

Rengoku

Member
Which part? The cache miss latency differences between GDDR5 and ESRAM?

The fact that virtual texturing allows for tons of data to be contained in a much smaller memory footprint?

I guess I could see the part about any great looking game on Xbox One having to do with low latency ESRAM, which probably definitely qualifies as overselling. However, there will absolutely be some games where the benefits of ESRAM end up coming into play in a positive way for xbox one development.

If developers texture from the ESRAM, that's a positive. If they resolve into it, that's a positive. If they use techniques that benefit from the low latency through the ESRAM, that's a positive. If it merely just allows them to have much smaller than normal latencies when running certain code, that, too, will be a positive.

But what's to stop the PS4 developers from using this virtual texturing technology? Sure, right now, it will certainly help the Xbox One with its 32MB ESRAM, but it can also benefit the PS4 no? Just means more memory on the PS4 to do other "stuff"? Unless this virtual texturing tech is only available to the Xbox One...
 

TheD

The Detective
PRT is essentially virtual texturing technology. With virtual texturing you can fit much more than 32MB of data into 32MB of ESRAM. In fact, a lot more.

NO YOU CAN NOT!

Anything that goes into it needs to be loaded from the Main RAM, so Main RAM bandwidth is still an issue and if you are filling up SRAM with a small amount of textures then you can not use it for buffers!
 
Thanks SenjutsuSage,

That was a darn insightful post. I fully expect the actual, real-world performance (graphics/gameplay) of the Xbox ONE will be very similar to the PS4, even though the PS4 is more formidable "on paper". As I plan on owning both systems, I am happy that each system will have its advantages. Some people are so biased against MS right now (due to the potential DRM implications) that they are figuratively putting their fingers in their ears and humming when someone tries to explain legitimate technical features that are present in the Xbox ONE.


The problem is that people are acting as if, in real-world performance, the PS4 won't improve. If the Xbone is on the level of the PS4 because of efficiencies and whatnot in real-world performance in a closed box, the PS4 will enjoy the same benefits and still be far beyond the Xbone.
 
But what's to stop the PS4 developers from using this virtual texturing technology? Sure, right now, it will certainly help the Xbox One with its 32MB ESRAM, but it can also benefit the PS4 no? Just means more memory on the PS4 to do other "stuff"? Unless this virtual texturing tech is only available to the Xbox One...

Nothing, because PRT is part of the GCN architecture.
 

Matt

Member
Thanks SenjutsuSage,

That was a darn insightful post. I fully expect the actual, real-world performance (graphics/gameplay) of the Xbox ONE will be very similar to the PS4, even though the PS4 is more formidable "on paper". As I plan on owning both systems, I am happy that each system will have its advantages. Some people are so biased against MS right now (due to the potential DRM implications) that they are figuratively putting their fingers in their ears and humming when someone tries to explain legitimate technical features that are present in the Xbox ONE.

This is what made me call Sage's post an oversell. No, the PS4 is, factually, a more capable system than the One, and an easier system to get more out of on top of that. You will see the difference.
 
But what's to stop the PS4 developers from using this virtual texturing technology? Sure, right now, it will certainly help the Xbox One with its 32MB ESRAM, but it can also benefit the PS4 no? Just means more memory on the PS4 to do other "stuff"? Unless this virtual texturing tech is only available to the Xbox One...

Nothing at all. That's the beauty of game development. I just felt the 32MB of ESRAM was being tossed aside as irrelevant solely because it's 32MB. We have to remember how important EDRAM was for the Xbox 360, and that was just 10MB...

The PS4 will absolutely benefit from virtual texturing also. It was never my intent to compare the two, which is why I was mostly defending what I think will be positives for Xbox One game development, positives that it isn't being given any credit for solely due to the fact that the PS4 has stronger hardware. The Xbox One hardware will be very capable in next generation games.
 

astraycat

Member
PRT is essentially virtual texturing technology. With virtual texturing you can fit much more than 32MB of data into 32MB of ESRAM. In fact, a lot more.

That isn't secret sauce or pie in the sky. That's the actual truth of what's possible with virtual texturing. And know which first party dev has showcased extensive experience researching virtual texturing with a working engine already? Lionhead.

And you guys are missing the point, ESRAM isn't about matching the GDDR5. This isn't a PS4 and Xbox One comparison that I'm making. I'm talking strictly about what ESRAM may mean for Xbox One game development. Anybody saying it has no positive benefits other than bandwidth has no idea what they are talking about and developers already back this point up.

And I don't think people even realize what they are saying when they say "match GDDR5." The GDDR5 has more bandwidth, we all know that. However, GDDR5 absolutely cannot match ESRAM in the area of latency. It simply cannot. The latency of ESRAM is orders of magnitude lower than that of GDDR5. Nothing about the PS4 can change this fact. Not a superior GPU, not GDDR5 memory, nothing. But, again, that isn't the point here. The PS4 is the stronger console of the two, easily. However, people seem to also want to believe that every single inner working of the PS4 has everything inside the Xbox One beat, and that isn't entirely accurate.

A cache miss on the 360 and PS3 I believe was in excess of 500+ cycles. A cache miss for the PS4's GPU will also be in the hundreds of cycles, likely 300+ and surely around the same for the Xbox One, when the data is in the DDR3 pool. For a cache miss where the data is in the Xbox One ESRAM pool, however, you could be looking at 10-20. It will make a good deal of difference to the efficiency of the system to have such low latency. Hell, confirmed developers have already pointed out examples in which low latency would benefit the Xbox One, and even cited specific techniques, and yet people still are intent on denying it.

None of it makes it stronger than the PS4, or even dead even with the PS4, and I've repeated time and time again that this isn't what I'm saying. It just has some great benefits for Xbox One development. When people see Xbox One games and are impressed with what they see, and there will certainly be games that impress everyone, at the heart of that will be the benefits of having low latency ESRAM.

I sincerely doubt that going to ESRAM is 10-20 cycles. Going to L1 may not even be that low-latency on a GPU -- the entire GPU architecture is built around hiding the latency of fetching any data not in registers. There'd be little reason to do that if they had a 32MiB cache with only 10-20 cycles of latency!
 
I sincerely doubt that going to ESRAM is 10-20 cycles. Going to L1 may not even be that low-latency on a GPU -- the entire GPU architecture is built around hiding the latency of fetching any data not in registers. There'd be little reason to do that if they had a 32MiB cache with only 10-20 cycles of latency!

L1 and L2 are both likely lower than that mark. At least L1 certainly has to be. GPUs are designed to deal with latency, but latency is still very much a high priority deeper down in the memory architecture. It's the GDDR5 that's allowed to be higher latency, but the L1 data cache, the L2 cache, the shared L1 between every 4 compute units, they are all very low latency.

In fact, better latency caches are a big part of why Nvidia GPUs with lower raw compute power are able to perform so well next to or even outperform AMD GPUs. It isn't the only reason, but it's one of the documented factors, I believe.

The 32MB of ESRAM on the Xbox One is the same kind of SRAM used for the L1 and L2 caches on GCN GPUs, except it isn't a cache. It could be very low latency. Just because it hasn't been done often doesn't mean there isn't a place for it. Just take a look at Intel's Haswell with the 128MB of EDRAM added. Not many GPUs on the PC side were doing such a thing before the 360 did it, or even when the PS2 did it. Just because it hasn't happened on the PC side yet doesn't mean it wouldn't have value.
 
And you guys are missing the point, ESRAM isn't about matching the GDDR5. This isn't a PS4 and Xbox One comparison that I'm making. I'm talking strictly about what ESRAM may mean for Xbox One game development. Anybody saying it has no positive benefits other than bandwidth has no idea what they are talking about and developers already back this point up.

And I don't think people even realize what they are saying when they say "match GDDR5." The GDDR5 has more bandwidth, we all know that. However, GDDR5 absolutely cannot match ESRAM in the area of latency. It simply cannot. The latency of ESRAM is orders of magnitude lower than that of GDDR5. Nothing about the PS4 can change this fact. Not a superior GPU, not GDDR5 memory, nothing. But, again, that isn't the point here. The PS4 is the stronger console of the two, easily. However, people seem to also want to believe that every single inner working of the PS4 has everything inside the Xbox One beat, and that isn't entirely accurate.

So you don't want to compare, but then you start comparing? Latency of all things?

Yep, can't wait for those low latency games... It's slowly becoming what the GPGPU was for the Wii U a few months back.
 
So you don't want to compare, but then you start comparing? Latency of all things?

Yep, can't wait for those low latency games... It's slowly becoming what the GPGPU was for the Wii U a few months back.

It was to make a point. It isn't my intent. But that had to be pointed out, because people were getting the wrong idea. The Xbox One isn't a Wii-U, for the record. :)
 

TheD

The Detective
L1 and L2 are both likely lower than that mark. At least L1 certainly has to be. GPUs are designed to deal with latency, but latency is still very much a high priority deeper down in the memory architecture. It's the GDDR5 that's allowed to be higher latency, but the L1 data cache, the L2 cache, the shared L1 between every 4 compute units, they are all very low latency.

In fact, better latency caches are a big part of why Nvidia GPUs with lower raw compute power are able to perform so well next to or even outperform AMD GPUs. It isn't the only reason, but it's one of the documented factors, I believe.

The 32MB of ESRAM on the Xbox One is the same kind of SRAM used for the L1 and L2 caches on GCN GPUs, except it isn't a cache. It could be very low latency. Just because it hasn't been done often doesn't mean there isn't a place for it. Just take a look at Intel's Haswell with the 128MB of EDRAM added. Not many GPU on the pc side were doing such a thing before the 360 did it or even when the PS2 did it. Just because it hasn't happened on the pc side yet, doesn't mean it wouldn't have value.

The 128MB acts as a cache for the CPU and GPU!
What do you think it is connected to the GPU for? Because the main DDR3 RAM does not have enough bandwidth for the GPU!

It only has value for Haswell due to 1. it acting as a cache for the CPU and 2. it acting as a cache for the GPU to try and offset the lack of bandwidth.
 

astraycat

Member
L1 and L2 are both likely lower than that mark. At least L1 certainly has to be. GPUs are designed to deal with latency, but latency is still very much a high priority deeper down in the memory architecture. It's the GDDR5 that's allowed to be higher latency, but the L1 data cache, the L2 cache, the shared L1 between every 4 compute units, they are all very low latency.

In fact, better latency caches are a big part of why Nvidia GPUs with lower raw compute power are able to perform so well next to or even outperform AMD GPUs. It isn't the only reason, but it's one of the documented factors, I believe.

The 32MB of ESRAM on the Xbox One is the same kind of SRAM used for the L1 and L2 caches on GCN GPUs, except it isn't a cache. It could be very low latency. Just because it hasn't been done often doesn't mean there isn't a place for it. Just take a look at Intel's Haswell with the 128MB of EDRAM added. Not many GPU on the pc side were doing such a thing before the 360 did it or even when the PS2 did it. Just because it hasn't happened on the pc side yet, doesn't mean it wouldn't have value.

AMD GPUs do not seem to have anywhere near the L1 cache latencies you are expecting: http://www.sisoftware.net/?d=qa&f=gpu_mem_latency

Granted, I suppose this could have gone up an order of magnitude since then, but even Fermi cards seem to have ~20 cycles to L1.
 

Y2Kev

TLG Fan Caretaker Est. 2009
Even if the Nvidia numbers are correct, the PS3 still has more theoretical FLOP potential than the 360 when you count the Cell. The trouble was that it is very difficult to get those SPUs working efficiently in practical applications. Also, the fact remains that even if every SPU was hitting on every cycle, the relative difference was still small.

Oh, PS3 had flop potential, let me tell you! ;)
 