
Can someone explain this eSRAM issue in layman's terms?

wapplew

Member
I find no fault with the design of XONE.

Policies and DRM are the only hurdle, and as of now it's a mental hurdle. At the time of concept they rolled with what they thought was the wise solution.

Snapping and gaming sounds very next gen, to be honest with you. Nintendo has survived with weaker graphics.

If Sony had stuck with 4GB, would that have pulled off 1080p? Every industry really is a game of inches.

Yeah, snap or multiple windows is cool, and I'm sure the PS4 can do it if they shrink the OS footprint enough.
But I'd rather give that extra RAM to game devs, because multitasking works better with a second screen in my opinion.
Nintendo survived with weaker graphics for many reasons, like the best first party, family/child appeal (a market that doesn't value graphics much) and a blue ocean strategy.
 
Ty for the info.

I will now show off with my friends.

With better development tools, will game data end up 1:1 and only resolution remain an issue? Or, due to the small eSRAM, will game data AND resolution always be an issue?

The scale of the game will also factor in, I'm sure.

first off, the esram is not the problem. it's the band-aid solution to the ddr3 problem (and ddr3 is a problem only because of gddr5).

highly doubtful that the esram will make up for the difference. it can, but it needs to constantly read/write data from/to the ddr3 since it can't keep a whole level's worth of textures to itself (unless of course the textures are not varied). very doubtful, though. it needs to save bandwidth, after all. it can't waste it just so the game can hit the same resolution. the game will need to give up other stuff if it wants to hit that resolution.

esram+ddr3 can do 1080p, maybe even high-res textures. but the same version will look and run better on the gddr5, no doubt. it is up to devs. if a game is made with esram+ddr3 in mind, it will then look the same on the gddr5, but that's not using the gddr5 at its best.

that is why bf4 runs at 720p and has no global illumination. if bf4 were to run at 900p, it would have to give up other things.

also, it's not just the esram+ddr3. the gpu is also weaker, so there's that. there is no way around those two, no matter what albert or major says.
 

viveks86

Member
Now I know this isn't a thread about compute, but could someone lay out the advantages of GPU Compute for me?

If it's used, how exactly does that help devs? I know it frees up CPU processes, but what exactly are the kinds of things that can be done with it?

In the beginning the CPU was the brains of the computer. It was versatile and it needed to be because it had to do all the work. Then along the way we decided that we wanted computers to make pretty pictures too. Well that involved some specialized math and being that the CPU was so versatile, it took up those responsibilities too although not very well. Eventually the need for prettier pictures won out and we realized that we could help the CPU out by having a GPU do the work of creating the pretty pictures instead.

So why is the GPU so much better at making pictures? Well, as it turns out, the computing necessary to make pictures is much more limited than what the "I can do anything" CPU does. So designers made chips that only did those few things, but did them very, very well.

It also turns out that making a pretty picture is very different than running arbitrary code. The picture can be broken up into smaller and smaller sections, and each section can be worked on independently. So given that the GPU required limited computing capabilities, the designers could make that part smaller, but then duplicate it many many times. So the GPU has limited computing complexity, but what it can do it can do very quickly. On top of that, it can do multiple versions of those computations at the same time.

Unfortunately, traditional code requires the complex coding found in the CPU, and it is very difficult to break it down into independent parts that can all be run at the same time. Well, that was until GPU Compute came along. Certain problems that the CPU is used to solving can be broken down so that they resemble the same kinds of tasks that the GPU does to create graphics. When you do that, you can feed those tasks to the GPU instead of the CPU, which has two effects:
  • There is one less thing that the CPU has to do so it has time to do other stuff
  • The GPU is really good at what it does. It will chew through the tasks you give it like a beaver through a No.2 pencil.

The three big things I've heard that you can offload to the GPU are...
  • Physics
  • AI, Pathfinding
  • Computing directional sound effects
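
To make the "independent chunks" idea concrete, here's a rough sketch in plain Python (purely illustrative, nothing console-specific; the particle update, chunk count and worker count are all made up). The point is just that when every item can be updated on its own, the work can be split up and done in parallel, which is exactly the shape of problem a GPU eats for breakfast:

```python
# Toy illustration (not console code): a "physics" step where every particle
# can be updated independently, so the work can be split into chunks and run
# in parallel -- the same property that lets a GPU chew through it.
from multiprocessing import Pool

DT = 1.0 / 30.0  # one 30 fps frame

def update_chunk(chunk):
    # Each (position, velocity) pair is independent of every other one.
    return [(pos + vel * DT, vel) for pos, vel in chunk]

def split(particles, n_chunks):
    size = (len(particles) + n_chunks - 1) // n_chunks
    return [particles[i:i + size] for i in range(0, len(particles), size)]

if __name__ == "__main__":
    particles = [(float(i), 1.0) for i in range(100_000)]  # (position, velocity)

    # Serial version: one worker does everything, like leaning on the CPU alone.
    serial = update_chunk(particles)

    # Parallel version: many workers each take an independent slice,
    # loosely analogous to a GPU running many identical jobs at once.
    with Pool(4) as pool:
        parallel = [p for chunk in pool.map(update_chunk, split(particles, 4)) for p in chunk]

    assert serial == parallel  # same result, produced in independent pieces
```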

Nice explanation. Here are some quotes from Cerny to add to this:

"[In] year 1, most of the titles aren't going to use that rich feature set," Cerny told IGN. "They will be great titles, but they will be leveraging very straightforward aspects of the system such as graphics you can get from the very high bandwidth that we have.

"By year 3 or 4, I think we'll see a lot of GPGPU [General-purpose computing on graphics processing units], which is to say that the GPU will be used for a lot of things not directly tied to graphics. So physics, simulation, collision detecting or ray casting for audio or the like."

"Usually when I talk about this people say, 'but wait, won't that make the graphics worse?'" he continues. "Well, if you look at a frame and everything that's being done in that frame, a lot of phases within that frame – it's like 1/30 of a second – some of these phases don't really use all of the various modules within the GPU. Shadow map generation tends not to use ALU [arithmetic logic units] very much, so it's a really optimal time to be doing all of those other tasks.

"So most likely, 3 [or] 4 years in, once you've taken time to study the architecture you can improve the quality of your world’s simulation without decreasing the quality of your graphics."

http://www.videogamer.com/news/cerny_explains_how_ps4s_graphics_will_evolve_over_time.html
 

Arksy

Member
Is the power gap between the X1 and the PS4 greater than the gap between the 360 and the PS3?

If so, by how much?
 

zpiders

Member
Is the power gap between the X1 and the PS4 greater than the gap between the 360 and the PS3?

If so, by how much?

COD Ghosts on the PS4 is pushing 125% more pixels than the X1 version at the same framerate. There is no game on the 360/PS3 that has that big of a gap in resolution between them that I'm aware of.
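
For anyone wanting the arithmetic behind that 125% figure, it's just the pixel counts of 1080p vs 720p (sketch below; nothing more to it than multiplication):

```python
# The arithmetic behind "125% more pixels": 1080p vs 720p at the same framerate.
ps4_pixels = 1920 * 1080   # 2,073,600
xb1_pixels = 1280 * 720    #   921,600

ratio = ps4_pixels / xb1_pixels   # 2.25x the pixels
extra = (ratio - 1) * 100         # 125% more
print(f"{ratio:.2f}x the pixels, i.e. {extra:.0f}% more per frame")
```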
 

hawk2025

Member
Since this is a thread where we shouldn't be ashamed of asking potentially obvious questions --


Don't the memory differences essentially mean that large, open-world games in particular will suffer even more on the X1? Or is that not related?
 

Camp Lo

Banned
Imagine the system RAM as a semi truck.

Large, can carry a lot of stuff but is slow.

Now imagine eSRAM as 2 door sports car. Fast but small.

Now... let's say a normal 1080p screen render involves 2 people and a full trunk.

A more advanced screen render needs 3 or maybe even 4 people and a full trunk.

It simply won't work. You can use the semi truck, but that'll be much slower. You can also move 2 people at a time, but you won't be able to achieve things like 60fps.

Yep, this worked for me. Kudos, sir.
 

Ovek

Member
Ps4 = Imagine a large highway with many lanes of traffic and a speed limit of 250mph.

Xbone = Imagine the same highway with fewer lanes of traffic and a speed limit of 80mph, but to alleviate traffic congestion you have a highway bypass (esram). The bypass has an even faster speed limit than the PS4 highway; the only problem is that only one car at a time can go down it.
 
This is gonna sound real dumb, but would it be possible to do a partial frame buffer in the eSRAM and the rest in DDR3, then bring them together?
Yes, this is pretty much "tiling". There are several architectural things built into the One (and PS4) GPUs to make this easier. It will undoubtedly be used by games, and perhaps some already are. But there's a problem doing it on One, which is the speed differential between eSRAM and DDR3. You can pull from eSRAM at 102 GB/s but from DDR3 at only 68 GB/s. Every time you have to go there, your performance suffers.

This is also one reason the One's eSRAM in some ways is worse than 360's eDRAM. Because on the prior machine the bandwidths were 32 GB/s and ~22 GB/s, so when you had to go out to main memory the bandwidth hit was a lower percentage.
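
If you want a feel for the numbers devs are juggling here, below is a back-of-the-envelope sketch. The render-target formats are hypothetical (every engine's setup differs), but it shows why a full 1080p set of buffers doesn't comfortably fit in 32 MB, so something has to spill to DDR3 or be tiled:

```python
# Back-of-the-envelope sketch (hypothetical render-target formats, not any real
# engine's setup): how much of a 1080p target set fits in 32 MB of eSRAM, and
# what has to spill out to the slower DDR3 or be tiled through in passes.
ESRAM_BYTES = 32 * 1024 * 1024
WIDTH, HEIGHT = 1920, 1080

# (name, bytes per pixel) -- made-up deferred-rendering style targets
targets = [
    ("color (RGBA16F)", 8),
    ("normals",         4),
    ("material",        4),
    ("depth/stencil",   4),
]

used = 0
for name, bpp in targets:
    size = WIDTH * HEIGHT * bpp
    fits = used + size <= ESRAM_BYTES
    if fits:
        used += size
    where = "eSRAM" if fits else "DDR3 (or tile it through eSRAM in passes)"
    print(f"{name:>16}: {size / 2**20:5.1f} MB -> {where}")

print(f"eSRAM used: {used / 2**20:.1f} of {ESRAM_BYTES / 2**20:.0f} MB")
```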

isn't that 176 GB/s already bi-directional? that 102 GB/s isn't, and that's why microsoft quotes 204 GB/s. their mathematical mistake is adding the esram and ddr3 together as 271 GB/s or some other bullcrap.

again, due to overheating concerns, microsoft is apparently targeting 130-140 GB/s on its esram.
As I understand it the PS4's 176 GB/s counts both directions. The One's eSRAM is 109 GB/s counting the same way, by bus width. But SRAM can be dual-ported, meaning mere bus width isn't accurate. Microsoft's claim of 204 GB/s for eSRAM should be true, not a PR lie. However, a SRAM cell is still limited by how transistors inside it are switched (they can't be registering data coming from both sides simultaneously), so compared to the simple bus counts, it will be harder to get anywhere near the 204 GB/s "maximum". Microsoft themselves have reported between 133 and 150 GB/s in the real world. This shouldn't have anything to do with heat, though.
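
For reference, here's where those headline figures come from: bandwidth is basically bus width times transfer rate. The bus widths and clocks below are the commonly reported ones, used purely for illustration; the doubled eSRAM figure is the best case where reads and writes overlap perfectly:

```python
# Where the headline numbers come from: bandwidth ~= bus width x transfer rate.
# Bus widths and clocks below are the commonly reported figures, used here
# purely for illustration.
def bandwidth_gbs(bus_bits, transfers_per_sec):
    return bus_bits / 8 * transfers_per_sec / 1e9

ddr3  = bandwidth_gbs(256,  2.133e9)   # ~68 GB/s main memory on One
gddr5 = bandwidth_gbs(256,  5.5e9)     # ~176 GB/s unified memory on PS4
esram = bandwidth_gbs(1024, 0.853e9)   # ~109 GB/s per direction
esram_peak = esram * 2                 # ~218 GB/s if reads and writes overlap
                                       # perfectly (MS quotes ~204)

print(f"DDR3: {ddr3:.0f}  GDDR5: {gddr5:.0f}  "
      f"eSRAM one-way: {esram:.0f}  eSRAM peak: {esram_peak:.0f} GB/s")
```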
 
In the beginning the CPU was the brains of the computer. It was versatile and it needed to be because it had to do all the work. Then along the way we decided that we wanted computers to make pretty pictures too. Well that involved some specialized math and being that the CPU was so versatile, it took up those responsibilities too although not very well. Eventually the need for prettier pictures won out and we realized that we could help the CPU out by having a GPU do the work of creating the pretty pictures instead.

So why is the GPU so much better at making pictures? Well, as it turns out, the computing necessary to make pictures is much more limited than what the "I can do anything" CPU does. So designers made chips that only did those few things, but did them very, very well.

It also turns out that making a pretty picture is very different than running arbitrary code. The picture can be broken up into smaller and smaller sections, and each section can be worked on independently. So given that the GPU required limited computing capabilities, the designers could make that part smaller, but then duplicate it many many times. So the GPU has limited computing complexity, but what it can do it can do very quickly. On top of that, it can do multiple versions of those computations at the same time.

Unfortunately, traditional code requires the complex coding found in the CPU, and it is very difficult to break it down into independent parts that can all be run at the same time. Well, that was until GPU Compute came along. Certain problems that the CPU is used to solving can be broken down so that they resemble the same kinds of tasks that the GPU does to create graphics. When you do that, you can feed those tasks to the GPU instead of the CPU, which has two effects:
  • There is one less thing that the CPU has to do so it has time to do other stuff
  • The GPU is really good at what it does. It will chew through the tasks you give it like a beaver through a No.2 pencil.

The three big things I've heard that you can offload to the GPU are...
  • Physics
  • AI, Pathfinding
  • Computing directional sound effects

Infuckingcredible... I really cannot wait for devs to start utilising compute. The worlds will become so much richer. I'm gonna be going PS4 this gen, but I sorta worry about the Xbox when that starts taking place :/
 

DBT85

Member
You know, I'm reluctant to think that all of Microsoft's engineers are incompetent. Which begs the question: why go this route? Why only 32MB of eSRAM? Why not 1-2GB? Surely it's possible? What's their rationale? Why would Microsoft go down this path when it's CLEARLY going to cause them issues? (From what people are saying.)

Microsoft's engineers are not incompetent at all. Unfortunately for them, it seems that Cerny was told "Make it under $400, games games games, minimal loss on hardware at launch," while the MS engineers were told "must include new Kinect, must do all these multimedia things, think synergy, togetherness, one big family of Windows, TV, NFL, $500, make a profit. And games. Make it do games."

If someone gives you a short checklist of targets, you have a lot more freedom to explore different avenues. If someone gives you a long list, you are more constricted in how you can think.

Though I think someone in the know at B3D said that the eSRAM was there before the 8GB of DDR3 was. I don't know if that is because they knew they would need DDR3 in general based on costs, or what.
 
In regards to Ryse: that is first party.

No matter the resolution, multiple games looking like that have to get some of you excited.
 

scsa

Member
Would 64MB of eSRAM have fixed that issue, or is that something that doesn't even make sense from an architectural or cost perspective?

After reading these threads, I think the conclusion is that 64MB would require too much die space, which they ran out of (system on a chip and all that). Plus the capacitors, the resistors, etc. make it a more complicated situation than it seems to be.
 

McLovin

Member
Imagine a ps4 is a glass of water, the X1 is a sippy cup about the same size. But it costs you $100 more and someone has to watch you while you drink it.
 
Since this is a thread where we shouldn't be ashamed of asking potentially obvious questions --

Don't the memory differences essentially mean that large, open-world games in particular will suffer even more on the X1? Or is that not related?
There's the same amount of memory on each machine, so the size of worlds shouldn't suffer comparatively. If eSRAM isn't used efficiently, though, the low speed of One's DDR3 might mean that textures load more slowly, or that distant models are lower-poly.

As for number of NPCs, physics, and other world complexity, the One has a slightly faster CPU which might actually give it an advantage over PS4 sometimes. But that DDR3 might still hobble it...and the PS4 has plenty more "spare" GPU that it can use to assist the CPU. Thus if the dev focused, they could probably flip that aspect back to a PS4 advantage again.
 

viveks86

Member
As I understand it the PS4's 176 GB/s counts both directions. The One's eSRAM is 109 GB/s counting the same way, by bus width. But SRAM can be dual-ported, meaning mere bus width isn't accurate. Microsoft's claim of 204 GB/s for eSRAM should be true, not a PR lie. However, a SRAM cell is still limited by how transistors inside it are switched (they can't be registering data coming from both sides simultaneously), so compared to the simple bus counts, it will be harder to get anywhere near the 204 GB/s "maximum". Microsoft themselves have reported between 133 and 150 GB/s in the real world. This shouldn't have anything to do with heat, though.

OK. So I think we have a consensus that it is indeed 204 GB/s on paper (it's not a lie/PR spin) and it's 133 to 150 in reality.

I'm guessing the GDDR5 has much higher efficiency? I read somewhere that almost all of the 176 GB/s can be used in reality (I remember reading 172 effective, but don't quote me on it).
 
OK. So I think we have a consensus that it is indeed 204 GB/s on paper (it's not a lie/PR spin) and it's 133 to 150 in reality.

I'm guessing the GDDR5 has much higher efficiency? I read somewhere that almost all of the 176 GB/s can be used in reality (I remember reading 172 effective, but don't quote me on it).
I'm not a dev, so my "consensus" might not mean much. But yes, from the technical stuff I've read Microsoft is exaggerating but not straight-out lying with the 204 GB/s number. As to truly unified memory being more efficient, we did have one apparent confirmation from the Oddworld devs at the number you recall. But that was just one quote, and Microsoft predicted lower (about 155 GB/s, I think?). In that same statement they also said their own DDR3 wouldn't hit 68 GB/s, so I don't think they were just bashing the competition.
 
Imagine if you had 2 shipping companies. One shipping company has 8 jumbo jets to ship products in. The other has 8 18-wheeler trucks to ship products in and one private plane to quickly move a very small amount of cargo. Which shipping company do you think has better performance? This is what's going on inside the Xbox One. It doesn't matter how powerful the CPU and GPU are if they aren't receiving data fast enough.
 
In regards to Ryse: that is first party.

No matter the resolution, multiple games looking like that have to get some of you excited.

But not at 720p. I can accept it for now, but going 720p in 2014 is just sad and bad if you ask me; I'd rather have the devs downgrade the graphics.

But hey, every optimization devs do for eSRAM size will also greatly benefit the PS4 by lowering bandwidth needs, so the PS4 can shuffle around even more data sets.
I bolded the size because that is kinda important; shuffling data in and out of eSRAM will probably not really net you a performance benefit on PS4, I think.

I hope 343 can do at least 900p for Halo 5, but 1080p@60fps would be awesome for multiplayer.
 
As I understand it the PS4's 176 GB/s counts both directions. The One's eSRAM is 109 GB/s counting the same way, by bus width. But SRAM can be dual-ported, meaning mere bus width isn't accurate. Microsoft's claim of 204 GB/s for eSRAM should be true, not a PR lie. However, a SRAM cell is still limited by how transistors inside it are switched (they can't be registering data coming from both sides simultaneously), so compared to the simple bus counts, it will be harder to get anywhere near the 204 GB/s "maximum". Microsoft themselves have reported between 133 and 150 GB/s in the real world. This shouldn't have anything to do with heat, though.

that is true, but I totally disagree with "nothing to do with heat". esram is made up of transistors and capacitors, meaning the heat ceiling is definitely considered, especially because it's on the same chip as the cpu and gpu. there's a reason why the xbone is that big and has that many vents. maybe that's the ceiling the esram can handle without losing performance or without the system crapping out. 150 GB/s is 75% efficiency at best, which is really low. yes, transistors can be a design problem because of their nature, but then again, 75% efficiency just because of the hardware design?

this may be a conspiracy theory, but with that big a box they can't use the esram to its max, so they just upclocked the gpu and cpu by 7%. that's totally a situation that's not impossible to have.
 
If the XB1 does not have the right tools for GPU compute, will that have an effect on games on PC and PS4? Will this generation be limited because of the weak GPU in the XB1? ...What I'm trying to say is can developers make a game that utilizes GPU compute and be able to make an XB1 version of the game that is not heavily reliant on it? Or will it have to be left out altogether on all platforms?
Functionally, GPU compute is feasible on both machines; neither is incapable or lacking any tools. The differences are twofold: first, the PS4 simply has more GPU power, so can run more compute jobs alongside the same amount of graphics jobs...and in some cases more compute jobs alongside more graphics jobs too.

Second, the PS4 has been designed to be future-proof on GPU compute. It has more hardware to manage the delegation of compute jobs--4 times as much as One, in fact. This is why Cerny has called the PS4's version "fine-grained". As explained earlier in this thread, the idea of GPU compute is to wait for graphics jobs to have downtime on the GPU, then drop little compute jobs into the idle hardware. PS4 can split its compute jobs much more finely, so it's more likely to find enough idle hardware somewhere. Thus it has not just more power than One, but more opportunities to use that power.

Therefore, it's pretty much guaranteed that, if fully utilized, PS4 will do GPU compute much, much better than One. As to how developers will proceed, it's unknown. Will they bother amping up one platform, or just avoid using it on any? There's no sure way to tell, but look at how devs are treating the other power advantages of PS4: use 'em, and leave the One version behind. I bet that'll be the answer for compute too.
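
Here's a deliberately crude toy model (not how a real GPU scheduler works) of why finer-grained jobs matter: given the same idle gaps in a frame, small jobs can be squeezed into more of them than big ones. All the numbers are invented just to show the effect:

```python
# Toy model (not real GPU scheduling): during a frame the GPU has idle gaps
# (e.g. shadow map generation barely touches the ALUs). Compute jobs can only
# be slotted into gaps big enough to hold them, so finer-grained jobs fill
# more of the idle time.
def idle_time_used(gaps_us, job_size_us, total_compute_us):
    used = 0
    remaining = total_compute_us
    for gap in gaps_us:
        fit = min(gap // job_size_us * job_size_us, remaining)
        used += fit
        remaining -= fit
    return used

gaps = [90, 40, 250, 15, 60, 120]   # made-up idle windows within one frame, in microseconds
work = 400                          # microseconds of compute work we'd like to offload

for job in (100, 25, 5):            # coarse vs fine-grained job sizes
    print(f"{job:>3} us jobs -> {idle_time_used(gaps, job, work)} us of compute squeezed in")
```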
 
But not at 720p. I can accept it for now, but going 720p in 2014 is just sad and bad if you ask me; I'd rather have the devs downgrade the graphics.

But hey, every optimization devs do for eSRAM size will also greatly benefit the PS4 by lowering bandwidth needs, so the PS4 can shuffle around even more data sets.
I bolded the size because that is kinda important; shuffling data in and out of eSRAM will probably not really net you a performance benefit on PS4, I think.

I hope 343 can do at least 900p for Halo 5, but 1080p@60fps would be awesome for multiplayer.

If it's taking full advantage of the hardware, it'll have to do.

You can't dismiss a possibly good first-party game just because the resolution isn't as high as it could be. Not fair to you as a gamer.
 
Infuckingcredible... I really cannot wait for devs to start utilising compute. The worlds will become so much richer. I'm gonna be going PS4 this gen, but I sorta worry about the Xbox when that starts taking place :/

I can foresee some problems here. Most of the things that compute does are gameplay-changing. For multiplatform games, that could be too much of a hurdle to overcome. If used fully, it could make the games on consoles that can make greater use of compute totally different than on consoles that can use it less.

Imagine if Borderlands 3 decides to make full use of compute and their AI is much better on the PS4 than the XB1. That would affect everything from level design to the damage that enemies can do. Gearbox would have to balance the game twice instead of once. I'm afraid that while targeting the lowest common denominator doesn't appear to be happening for graphics, it will happen for features that impact gameplay.
 
I hope 343 can do at least 900p for Halo 5, but 1080p@60fps would be awesome for multiplayer.

the problem with that is, even if halo 5 were to be 1080p@60, then other systems would have to be given up. a not-so-sophisticated physics sim? quite a number of not-so-stellar surface collisions resulting in clipping?

those are things which can't be noticed right away (unlike resolution and performance), and worse, there's no ps4 version to compare it to.

so yes, halo 5 might be 1080p, and that could be the ammunition of the "look, it can also do this" crowd, but more likely there will be things that are less refined.
 

Applecot

Member
Since he said layman's:

Imagine you have some people you need to transport somewhere.

DDR3 is slow but can carry a lot of people.
ESRAM is super fast but can't carry many people.
GDDR5 is simultaneously fast and can carry many people.

So while you can balance the ESRAM and DDR3 to cooperatively ferry people to your destination, it's more complicated. Whereas the Sony solution (just GDDR5) is easier to deal with and doesn't require a lot of thought. You just load people up and transport them over.

The end result is potentially the same, but the way you went about it is much different between the two. Which is why people are saying ESRAM is a barrier to getting optimal performance on the XB1.
 
Since he said layman's:

Imagine you have some people you need to transport somewhere.

DDR3 is slow but can carry a lot of people.
ESRAM is super fast but can't carry many people.
GDDR5 is simultaneously fast and can carry many people.

So while you can balance the ESRAM and DDR3 to cooperatively ferry people to your destination, it's more complicated. Whereas the Sony solution (just GDDR5) is easier to deal with and doesn't require a lot of thought. You just load people up and transport them over.

The end result is potentially the same, but the way you went about it is much different between the two. Which is why people are saying ESRAM is a barrier to getting optimal performance on the XB1.

actually, gddr5 can carry way more people than the ddr3.
 

ZeroAlpha

Banned
Thanks to all those that gave serious answers; it puts it all in perspective. I know MS is in the business of selling consoles, but they should have just been upfront in the PR machine about all this. I mean, saying 'it doesn't matter' might be true for some people, but it just comes off silly now.
 

Bsigg12

Member
This thread is a testament to why GAF is fantastic. I think it's fair to say we all understand the Xbox One is a weaker system, but being able to bring the technical reasons why down to terms that any member can understand, without being destroyed in a tech thread, is a great thing. Civil conversations about tech are always awesome.
 

DBT85

Member
I can foresee some problems here. Most of the things that compute does are gameplay-changing. For multiplatform games, that could be too much of a hurdle to overcome. If used fully, it could make the games on consoles that can make greater use of compute totally different than on consoles that can use it less.

Imagine if Borderlands 3 decides to make full use of compute and their AI is much better on the PS4 than the XB1. That would affect everything from level design to the damage that enemies can do. Gearbox would have to balance the game twice instead of once. I'm afraid that while targeting the lowest common denominator doesn't appear to be happening for graphics, it will happen for features that impact gameplay.

This has been a concern of many for months. We just don't know.

The best thing for the advancement of compute in consoles right now is for the PS4 to obliterate the Xbone; then devs won't have to make a choice. But that isn't going to happen.
 
I can foresee some problems here. Most of the things that compute does are gameplay-changing. For multiplatform games, that could be too much of a hurdle to overcome. If used fully, it could make the games on consoles that can make greater use of compute totally different than on consoles that can use it less.

Imagine if Borderlands 3 decides to make full use of compute and their AI is much better on the PS4 than the XB1. That would affect everything from level design to the damage that enemies can do. Gearbox would have to balance the game twice instead of once. I'm afraid that while targeting the lowest common denominator doesn't appear to be happening for graphics, it will happen for features that impact gameplay.

I don't think this at all. When first-party games start to outstrip third party by a long way, third parties will just be damaging their own games, as those games will start to get low scores from game sites. EA and Activision will not allow their brands to become muddied because of Microsoft if the competition has a 4x lead on user base.
 
that is true, but I totally disagree with "nothing to do with heat". esram is made up of transistors and capacitors, meaning the heat ceiling is definitely considered, especially because it's on the same chip as the cpu and gpu. there's a reason why the xbone is that big and has that many vents. maybe that's the ceiling the esram can handle without losing performance or without the system crapping out. 150 GB/s is 75% efficiency at best, which is really low. yes, transistors can be a design problem because of their nature, but then again, 75% efficiency just because of the hardware design?
I don't mean to say that eSRAM won't be hot! It's nearly a third of their entire APU transistor budget, so running it at full capacity all the time is certainly part of the reason for all the space and the fans and vents.

But from my (educated layman's) knowledge, yes indeed they might be getting 75% efficiency just because of the engineering. Normal memory bandwidth numbers represent width of bus (how many transistors you can flip at once) times clock rate (how often you can flip them). Losses of efficiency here are due almost wholly to latency, where transistors simply aren't ready to flip exactly when you ask them to.

With eSRAM, latency is much less of an issue (some variants have zero turnaround time), so efficiency is usually better. Microsoft's version, however, is two-ported, meaning writes can go in one side of each SRAM cell while reads leave the other side. This introduces new timing issues not present in one-ported memory cells, because the entering write might have to wait for the transistors in front of it to change states as part of the outgoing read. (This is why specific types of computation give better results than others, because of their memory access pattern.) Those waits to avoid collisions drag down the efficiency.

Also, please note that it's Microsoft's technical engineers themselves who have put forth numbers showing their eSRAM to be less efficient than their DDR3 (though still much faster).
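
To see how a collision-style penalty lands in the reported range, here's a toy model with the collision fraction completely made up; it only assumes the 204 GB/s peak means a read and a write completing every cycle, and that a collision costs one of them that cycle:

```python
# Toy model of why the 204 GB/s "peak" is hard to sustain: the peak assumes a
# read and a write complete every cycle. If on some fraction of cycles the two
# ports collide (the write must wait for the cells the read is using), only
# one transfer happens that cycle. The collision fraction here is invented
# purely to show the shape of the effect.
PEAK_GBS = 204.0  # both ports busy every cycle

def effective_bandwidth(collision_fraction):
    transfers_per_cycle = 2 * (1 - collision_fraction) + 1 * collision_fraction
    return PEAK_GBS * transfers_per_cycle / 2

for f in (0.0, 0.3, 0.6, 0.9):
    print(f"collisions on {f:.0%} of cycles -> ~{effective_bandwidth(f):.0f} GB/s")
```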
 
The question was why MS couldn't settle for 2GB or 4GB of GDDR5 the way Sony was planning on doing.

I'm pretty sure it's to do with Windows 8 legacy, because wouldn't Windows 8 apps have to be rewritten for the different memory structures (i.e. all PCs use DDR3 system memory and GDDR5 for GPU memory), so apps are written for the specific memory configuration of DDR3?

This has been a concern of many for months. We just don't know.

The best thing for the advancement of compute in consoles right now is for the PS4 to obliterate the Xbone; then devs won't have to make a choice. But that isn't going to happen.

or Borderlands 3 becomes a PS4 exclusive (not going to happen obviously)...
 

John_B

Member
To use an analogy that we can all relate to: imagine having invited 3 female acquaintances home for sexual intercourse. Your bed is of limited size and cannot fit everyone all at one time, so you have to choose which one accompanies you on the bed while the others patiently wait on the side. This situation severely limits the pleasure you should be receiving.
 

joshcryer

it's ok, you're all right now
However they were "lucky" in that 8GB of GDDR5 became an option. (Luck is the wrong word in my opinion, there is no luck in business, only well managed risk.)

The supply chain literally became available when the PS4 was announced. Dev kits were all 4GB. That's taking it damn close to the edge right there, if you ask me. I don't see that as being a logical risk.

"If you go with 4GB of GDDR5, you are done."

I think they did make it happen, and I think if the high-speed GDDR5 supplier hadn't come in on time they would've been screwed. Note: I do think the designers probably worked out how they would expand to 8GB if it became an "option", and GDDR5 is very suitable for it. But it wasn't an option until late in the development cycle of the machine. They might have been able to eat the production of 5k units or something (and been pushed back by 2-3 months), but once it went to full-scale production I really think it would've been set in stone.

I'm pretty sure it's to do with Windows 8 legacy, because wouldn't Windows 8 apps have to be rewritten for the different memory structures (i.e. all PCs use DDR3 system memory and GDDR5 for GPU memory), so apps are written for the specific memory configuration of DDR3?

Doesn't really matter, the OS is a VM anyway, whatever Win 8 apps are compiled for regular PCs would've worked with GDDR5. AMD's hUMA roadmap is that all PCs in the future use GDDR5 and it shouldn't pose a problem for legacy apps.
 

wsippel

Banned
This.

Here is a calculation I found on the internet to debunk this myth. I'm not backing it up since it's not my area of expertise, but it seems to use consistent math. Would like the gurus on this thread to comment on it:

http://systemwars.com/forums/index.php/topic/116470-done-my-research-reports-back-with-verdict-dat-gddr5-ps4/
Looks OK. But this is once again where the eSRAM comes into play. Latency can cause stalls. In a worst case scenario (cache misses and stuff), assuming a 6-20 cycle latency at the given clock speed, the CPU can stall for roughly 8-24 cycles, the GPU for 3-10 waiting for data. More bandwidth doesn't help here in any way, shape or form. With eSRAM, the stalls are much shorter: about one cycle for the GPU and two for the CPU. Which obviously increases the efficiency and means the system performs closer to its theoretical peak.
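
As a purely illustrative sketch of why those stall counts matter, here's the kind of back-of-the-envelope math you can do with them. The miss rate and work-per-access figures are invented; only the stall cycles come from the post above:

```python
# Rough sketch using the stall figures mentioned above: if some fraction of
# memory accesses miss cache and stall the GPU, the average cost per access
# rises and effective throughput drops. The miss rate and work-per-access
# values are invented just to show the shape of the comparison.
def relative_throughput(miss_rate, stall_cycles, work_cycles_per_access=4):
    avg_cycles = work_cycles_per_access + miss_rate * stall_cycles
    return work_cycles_per_access / avg_cycles

MISS_RATE = 0.10  # assumed: 10% of accesses stall
for name, stall in (("GDDR5 (GPU, ~3-10 cycle stall)", 10),
                    ("eSRAM (GPU, ~1 cycle stall)", 1)):
    print(f"{name}: ~{relative_throughput(MISS_RATE, stall):.0%} of peak")
```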
 
I don't think this at all. When first-party games start to outstrip third party by a long way, third parties will just be damaging their own games, as those games will start to get low scores from game sites. EA and Activision will not allow their brands to become muddied because of Microsoft if the competition has a 4x lead on user base.

I think that all depends on how much extra work it takes to fully use compute. I agree that there will be pressure to make the best game possible, which is why I never bought into the theory that graphics would target the lowest common denominator. However, I just don't see how you can create two versions of a game with different enemy AI capabilities without expending a lot of extra effort.

My optimistic side says that the PS4's greater market share will make any extra effort worth the trouble. My realistic side is having trouble believing that.
 
Doesn't really matter, the OS is a VM anyway, whatever Win 8 apps are compiled for regular PCs would've worked with GDDR5. AMD's hUMA roadmap is that all PCs in the future use GDDR5 and it shouldn't pose a problem for legacy apps.

Thanks Josh :)

for a raging fansite full of testosterone and system wars, GAF can be educational too :)
 

benny_a

extra source of jiggaflops
As I understand it the PS4's 176 GB/s counts both directions. The One's eSRAM is 109 GB/s counting the same way, by bus width. But SRAM can be dual-ported, meaning mere bus width isn't accurate. Microsoft's claim of 204 GB/s for eSRAM should be true, not a PR lie. However, a SRAM cell is still limited by how transistors inside it are switched (they can't be registering data coming from both sides simultaneously), so compared to the simple bus counts, it will be harder to get anywhere near the 204 GB/s "maximum". Microsoft themselves have reported between 133 and 150 GB/s in the real world. This shouldn't have anything to do with heat, though.
I think I forgot that Microsoft had a revelation during production that their eSRAM was twice as fast as they had designed it to be.

In my mind that wasn't real, I only had a bad dream, and I calculated the bandwidth under the assumption that the eSRAM wasn't double data rate.

The Xbone eSRAM's theoretical max is higher than the PS4's GDDR5's theoretical max.
Why MS's own lab tests are so much lower than Crytek's real-world results is still curious. :p
 