
NXGamer - The TFlops are a lie

Tangent? Please read my posts.
The title is a lie.
Well, the very basic argument that tflops are tflops... Obviously.

The title is a lie if everyone understands flops in their full context (they don't). So in a way, it's a "lie", but I would not have picked that title. It doesn't convey what's in the video very well.
 

ethomaz

Banned
Well, the very basic argument that tflops are tflops... Obviously.

The title is a lie if everyone understands flops in their full context (they don't). So in a way, it's a "lie", but I would not have picked that title. It doesn't convey what's in the video very well.
There are flops... it is a measurable metric.
It is not a lie in any possible context.

You guys are trying to create a false claim where none exists.

Maybe the meter is a lie because context lol

The only reason I can see for somebody to make a title with a blatant lie is if the owner wants to get attention for whatever he will talk about in the video...
 
Last edited:

01011001

Banned
NXGamer NXGamer man, you dropped a video with a title that's a bit clickbaity during what is already a heated console war with fanboys going crazy on both sides... 🤣🤣

so you should have expected that some would get a bit heated over it.

basically, those who believe the GitHub leaks will accuse you of doing damage control on behalf of Sony,
and those who believe the insiders that say the PS5 is a bit stronger will accuse you of doing damage control for Microsoft.

both of these teams either won't watch your video because they think they know what you are trying to say, or they will watch it and not get what is being said.

good luck with not getting drowned by fanboys lol.


BUT to get back to the topic, I think it is safe to assume that the TFLOP difference will be the most telling this generation, as both companies will try to have no bottlenecks,
so I think we can expect that, if one of these consoles has a 2 or 3 TF advantage, it will pretty much show that 1 to 1 in multiplatform games

even more so this gen, as it seems like they use the same type of RAM, most likely a very similar amount, and they both, again, use the same vendor and architecture.
 
Last edited:

Lister

Banned
You guys are so hung up on the phrase "it's a lie". It's not meant to be taken literally. And it's supposed to fit within certain character constraints.

"TFLOPS is a lie" is probably a better title than "TFLOPS alone do not determine the actual performance of a video game engine on a particular piece of graphics hardware, and the architecture of that hardware can make a huge difference in how much of that theoretical throughput can actually be tapped into."
 
You guys are so hung up on the phrase "it's a lie". It's not meant to be taken literally. And it's supposed to fit within certain character constraints.

"TFLOPS is a lie" is probably a better title than "TFLOPS alone do not determine the actual performance of a video game engine on a particular piece of graphics hardware, and the architecture of that hardware can make a huge difference in how much of that theoretical throughput can actually be tapped into."

NXGamer could have called it "TFLOPS is not the only thing that matters" or "More than just TFLOPS" or something short and sweet along those lines. "TFLOPS is a lie" is kind of clickbaity and misleading, but I appreciate his in-depth analysis.
 

Md Ray

Member
If you want to compare "leaked" (alleged) specs, the difference in bus width on the Series X is potentially a huge difference-maker in its favour. Honestly, based purely on that information it should be way faster than PS5, but then again I've never argued that wasn't the case. My main thought was that a SKU with those specs is a far more costly device to manufacture, which, when passed on to the consumer in terms of RRP, could make it prohibitively expensive and as a result unpopular at retail.

As NX points out in his video, this is not about console-war bollocks. It's about misinformed people taking a single metric and conferring significance upon it, in terms of overall system performance (or lack thereof), that it doesn't warrant in actuality.
Couldn't have said it any better.
 

Kagey K

Banned
You guys are so hung up on the phrase "it's a lie". It's not meant to be taken literally. And it's supposed to fit within certain character constraints.

"TFLOPS is a lie" is probably a better title than "TFLOPS alone do not determine the actual performance of a video game engine on a particular piece of graphics hardware, and the architecture of that hardware can make a huge difference in how much of that theoretical throughput can actually be tapped into."

Except MS focused the XDK in conjunction with the XB1X to try to eliminate all bottlenecks from the engines.

Do you really think they are going to sacrifice all the research now in order to brute force them?

I can’t imagine how high some of you have to be to imagine this shit.
 

sinnergy

Member
Anyone? If Series X has 56 CUs and PS5 has 40 CUs / 64 ROPs per the GitHub leak, is it possible that Series X has more than 64 ROPs, seeing as there is a jump in ROPs at certain CU counts? Let's say 96 ROPs, for example.
 
When they're roughly the same architecture with minor differences? Yes, you really can. It doesn't mean the PS5 is weak if it's really 9.2, but if the Xbox Series X is truly 12 TFLOPS, it's a clearly more powerful gaming console. Power obviously isn't everything, but in this specific case his attempt to reassure people by basically misleading them about what such a gap would mean with relatively similar architectures is not a good look.
 
Except MS focused the XDK in conjunction with the XB1X to try to eliminate all bottlenecks from the engines.

Do you really think they are going to sacrifice all the research now in order to brute force them?

I can’t imagine how high some of you have to be to imagine this shit.

This is what I laugh at the most sometimes. Microsoft did all that clever game analysis to understand how the games and their engines operate, and used it to their advantage in developing Xbox One X, and we are now supposed to believe that Microsoft has decided to just go all sloppy on us and brute force everything with nothing but power. No clever optimizations, features or techniques. Just hulk-smash their way through everything. Nope. Some would like to believe that with the amazing power in the console Microsoft won't also be smart enough to push it as far as humanly possible, but no such luck. Microsoft will ensure this thing sings, and I doubt they are relying purely on raw power.
 

Yoshi

Headmaster of Console Warrior Jugendstrafanstalt
BUT to get back to the topic, I think it is safe to assume that the TFLOP difference will be the most telling this generation, as both companies will try to have no bottlenecks
I am pretty sure that all console vendors have always tried to have no bottlenecks. It is not trivial to ensure, because it is a system to be used for many years, with techniques often not yet popularised at the time of launch, so a system that may appear well-balanced pre-launch may exhibit unexpected bottlenecks as usage evolves.
 

ethomaz

Banned
Except MS focused the XDK in conjunction with the XB1X to try to eliminate all bottlenecks from the engines.

Do you really think they are going to sacrifice all the research now in order to brute force them?

I can’t imagine how high some of you have to be to imagine this shit.
From the creators of DirectX, who only make balanced systems lol
 

ethomaz

Banned
Anyone? If Series X has 56 CUs and PS5 has 40 CUs / 64 ROPs per the GitHub leak, is it possible that Series X has more than 64 ROPs, seeing as there is a jump in ROPs at certain CU counts? Let's say 96 ROPs, for example.
If you have 32 ROPs per Shader Engine in RDNA then 56 CUs doesn't match with 96 ROPs... it is either 54 CUs or 60 CUs for 96 ROPs.

56 CUs can match with 128 ROPs and 4 Shader Engines though.
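If it helps to see the arithmetic, here's a rough Python sketch of that check (purely illustrative, and it assumes the rumored RDNA-style layout: 32 ROPs per Shader Engine with the CUs spread evenly across engines in dual-CU pairs):

```python
def layout_possible(total_cus: int, total_rops: int, rops_per_engine: int = 32) -> bool:
    # A layout "works" if the ROP count implies a whole number of Shader Engines
    # and the CUs divide evenly across them in dual-CU pairs (even count per engine).
    if total_rops % rops_per_engine:
        return False
    engines = total_rops // rops_per_engine
    cus_per_engine, remainder = divmod(total_cus, engines)
    return remainder == 0 and cus_per_engine % 2 == 0

for cus in (54, 56, 60):
    for rops in (64, 96, 128):
        print(f"{cus} CUs / {rops} ROPs possible? {layout_possible(cus, rops)}")
# 56 CUs lines up with 2 engines / 64 ROPs or 4 engines / 128 ROPs,
# but not with 3 engines / 96 ROPs; 54 or 60 CUs would fit 96 ROPs.
```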
 
Last edited:

sinnergy

Member
If you have 32 ROPs per Shader Engine in RDNA then 56 CUs doesn't match with 96 ROPs... it is either 54 CUs or 60 CUs for 96 ROPs.

56 CUs can match with 128 ROPs and 4 Shader Engines.
Thank you 😊 Could mean MS also has pixel fill rate covered, even with a GPU clock lower than 2 GHz. Don't you think? If the PS5 64 ROPs leak is true.
 

ethomaz

Banned
Thank you 😊 Could mean MS also has pixel fill rate covered, even with a GPU clock lower than 2 GHz. Don't you think? If the PS5 64 ROPs leak is true.
Pixel rate will be fine in any case.
AMD increased from 16 ROPs per Shader Engine to 32 ROPs per Shader Engine... if PS5 has 2 Shader Engines then it really doesn’t need more than 64 ROPs.

I say that because I did not see any RX 5700 test showing that pixel fill rate with 64 ROPs across 2 Shader Engines is a problem.
 
Last edited:

sinnergy

Member
Pixel rate will be fine in any case.
AMD increased from 16 ROPs per Shader Engine to 32 ROPs per Shader Engine... if PS5 has 2 Shader Engines then it really doesn’t need more than 64 ROPs.

I say that because I did not see any RX 5700 test showing that pixel fill rate with 64 ROPs across 2 Shader Engines is a problem.
Thanks for your technical insights, learning something new every day!
 

R600

Banned
Both Arden and Oberon from the leaked GitHub data have 64 ROPs.

Pixel fillrate is the result of clock * ROPs.
Texture fillrate is the result of 4 * CUs * clock.

TF don't tell the whole story if we are comparing different architectures. For example, the 7.9 TF 5700 easily outperforms the 12.7 TF Vega 64. But the 7.9 TF 5700 won't outperform the 9.5 TF 5700 XT, as they are the same arch.
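To put rough numbers on those formulas, here's a quick sketch using RX 5700-class figures (treat the clocks as approximate):

```python
# Theoretical fillrates from the formulas above (approximate RX 5700 figures).
def pixel_fillrate(rops: int, clock_ghz: float) -> float:
    return rops * clock_ghz          # GPixel/s

def texture_fillrate(cus: int, clock_ghz: float) -> float:
    return 4 * cus * clock_ghz       # 4 TMUs per CU -> GTexel/s

# RX 5700: 36 CUs, 64 ROPs, ~1.725 GHz boost clock
print(pixel_fillrate(64, 1.725))     # ~110.4 GPixel/s
print(texture_fillrate(36, 1.725))   # ~248.4 GTexel/s (144 TMUs)
```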
 

ethomaz

Banned
Both Arden and Oberon from the leaked GitHub data have 64 ROPs.

Pixel fillrate is the result of clock * ROPs.
Texture fillrate is the result of 4 * CUs * clock.

TF don't tell the whole story if we are comparing different architectures. For example, the 7.9 TF 5700 easily outperforms the 12.7 TF Vega 64. But the 7.9 TF 5700 won't outperform the 9.5 TF 5700 XT, as they are the same arch.
Texture fill rate is based on TMUs x clock.

The RX 5700 has 144 TMUs, which account for up to 248.4 GT/s with a boost clock of 1725 MHz.

Edit - Never mind, it is because there are 4 TMUs per CU... so your math works too.
 
Last edited:

sinnergy

Member
Both Arden and Oberon from the leaked GitHub data have 64 ROPs.

Pixel fillrate is the result of clock * ROPs.
Texture fillrate is the result of 4 * CUs * clock.

TF don't tell the whole story if we are comparing different architectures. For example, the 7.9 TF 5700 easily outperforms the 12.7 TF Vega 64. But the 7.9 TF 5700 won't outperform the 9.5 TF 5700 XT, as they are the same arch.
Ah thanks, didn't spot in the leak that Series X also had 64 ROPs.
 

NXGamer

Member
NXGamer NXGamer man, you dropped a video with a title that's a bit clickbaity during what is already a heated console war with fanboys going crazy on both sides... 🤣🤣

so you should have expected that some would get a bit heated over it.

basically, those who believe the GitHub leaks will accuse you of doing damage control on behalf of Sony,
and those who believe the insiders that say the PS5 is a bit stronger will accuse you of doing damage control for Microsoft.

both of these teams either won't watch your video because they think they know what you are trying to say, or they will watch it and not get what is being said.

good luck with not getting drowned by fanboys lol.


BUT to get back to the topic, I think it is safe to assume that the TFLOP difference will be the most telling this generation, as both companies will try to have no bottlenecks,
so I think we can expect that, if one of these consoles has a 2 or 3 TF advantage, it will pretty much show that 1 to 1 in multiplatform games

even more so this gen, as it seems like they use the same type of RAM, most likely a very similar amount, and they both, again, use the same vendor and architecture.
It is all good, this is not heat at all, it is just conversation, which is great. We need that, we always need that, and I applaud it.

I agree that some will watch with a bias and just ignore it, as we have seen here with Ethomaz; some will just read the title and then go off on a tangent based on assumption. This is symptomatic of the current ecosystem of info and news, I am afraid.

The funny thing is, almost all of the people who disagree with the video then go on to support it and its point with their examples, highlighting that they are not really grasping it at all, as you say.

On your other point here:-

"BUT to get back to the topic, I think it is save to assume that the TFLOP difference will be the most telling this generation as both companies will try to have no bottlenecks
so I think we can expect that, if one of these consoles will have a 2 or 3 TF advantage, it will pretty much 1 to 1 show that in multiplatform games

even more so this gen, as it seems like they use the same type of RAM, most likely a very similar amount, and they both, again, use the same vendor and architecture."

Again, it won't, as the OS allocation, NVMe construction, OS layer, API and SDK will all have an effect. If we take the X1 and PS4 at launch, the gap was MORE than just the Tflops: it was the RAM allocation, the GPU size, the bandwidth AND the OS, driver, allocation etc. etc. All this made the circa 40% deficit at times become 100% or greater; this was reduced as time went on due to changes, as we know.

If the 2 SoCs and architecture are 100% the same (they won't be, as this is not the solution AMD are providing to them, but that's not a discussion for now), hypothetically, the other software factors can and will still have an impact. This is the most interesting part for me, not 1 single metric, and I look forward to seeing what both teams have chosen and delivered and then, more so, what teams will do with it and how they will use it.

Next NXGamer video Title:

E3 was a Lie.

Too late, I called this last year in my E3 2019 video and I got hate for that also. Guess I am an insider now :messenger_sunglasses:
 
Last edited:

Darius87

Member
How is a Tflop a lie? Does 2 x 2 != 4? Or E != mc2? I watched the video and the title contradicts what NXGamer talks about, or he's mixing Tflops with efficiency. Also, a Tflop is pretty easy to understand if you know how to count it:
(CUs * 64 * 2 * clock speed in MHz) / 1,000,000 = Tflops. That's it, you only need to know how many CUs there are and how fast these CUs are running.
Also, I think it's hard for some here to distinguish between Tflops and efficiency, which are totally different things measured by different metrics, which I think is the culprit of these discussions.
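For anyone who wants to plug numbers in themselves, here's that formula as a couple of lines of Python (the CU counts and clocks below are just the leaked/rumored figures people keep quoting in this thread, not confirmed specs):

```python
# TFLOPS = CUs * 64 shaders per CU * 2 ops per clock (FMA) * clock in MHz / 1,000,000
def tflops(cus: int, clock_mhz: float) -> float:
    return cus * 64 * 2 * clock_mhz / 1_000_000

print(tflops(36, 2000))  # ~9.2  (the rumored Oberon figure)
print(tflops(56, 1675))  # ~12.0 (the rumored Arden figure)
```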
 

K.N.W.

Member
GAF, please... Read, watch, think and then post. NXGamer is legitimately presenting his opinion, he does have evidence, and you are just naysaying for no reason at all.
Anyway, there are many scenarios for a TFLOPS/real-performance discrepancy; I might add some possible explanations:
1 VRAM is full, the GPU doesn't have any space to work with, it has to wait, hence STUTTERS.
2 VRAM bandwidth is maxed out, you cannot transfer data to and from memory as fast as you need and the GPU slows down.
3 You have an old-architecture GPU, new games are made to use functions specific to the latest models, and it would use many of those bazillion FLOPS to run in circles and work around the issue.
4 Thermal profile: to reach the theoretical FLOPS count, the GPU needs to be in optimal thermal condition.
Plus any nuisance from the CPU, main RAM and HDD.

Those are some of the reasons which could cripple performance, but I'm still talking about not reaching peak performance on a single GPU.
If we were to compare an Nvidia card to a same-ish AMD card, supposing they have almost the same TFLOPS, performance would vary wildly. For example, the 6.5 TFLOPS Nvidia RTX 2060 takes the 5.9 TFLOPS AMD R9 390X to a courtyard, grabs its neck, breaks its bones and sends them back via Amazon customer service (almost a 50% increase in performance). This applies even with different cards from the same brand, just different generations. Even the 5 TFLOPS GTX 1060 has around 25% fewer TFLOPs than the 2060, but the latter ends up performing 50% better.
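To make the efficiency point concrete, here's a rough perf-per-TFLOP comparison using the ballpark figures above (the relative-performance numbers are this post's approximations normalised to the GTX 1060, not benchmark results, and the 390X is assumed to be roughly 1060-class as implied above):

```python
# Rough perf-per-TFLOP using the approximate figures above (illustrative only).
cards = {
    # name: (theoretical TFLOPS, rough relative performance vs GTX 1060)
    "GTX 1060": (5.0, 1.0),
    "R9 390X":  (5.9, 1.0),   # treated as roughly 1060-class for this example
    "RTX 2060": (6.5, 1.5),
}
for name, (tf, perf) in cards.items():
    print(f"{name}: {tf} TF, perf per TF = {perf / tf:.2f}")
# The 2060 extracts far more real performance per theoretical TFLOP, which is
# why TFLOPS alone don't settle cross-architecture comparisons.
```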

If you are not convinced yet, do like me: get an Information Technology Engineering degree and an Engineering Sciences degree, and then build your own "PURE TFLOPS" GPU. Or, maybe, try discussing with other users and get educated.

What happened with PS4 vs Xbox One? Was the 1.84 vs 1.21 teraflop advantage a lie?
The Xbox One GPU lives next to a really fast 32MB of ESRAM: having a fast memory helps speed up rendering tasks. It already happened with the 360, which used a GPU around the same level as the PS3's, but the faster eDRAM made full-resolution alpha effects possible at a steadier framerate, whereas the PS3 usually had lower-resolution effects and worse dips during those moments. (I hope to god no one ever creates a system like the PS3, even if, given the right attention, it was an extreeeemely powerful system).
But if you are asking yourself why multiplatform games look TOTALLY equal on XB1 and PS4, well... It's another story, I think :)
 
Last edited:
Thank you 😊 Could mean MS also has pixel fill rate covered, even with a GPU clock lower than 2 GHz. Don't you think? If the PS5 64 ROPs leak is true.
I mean, is that really even that important? ROPs are merely the theoretical ceiling for the output; it doesn't mean they're being made use of fully or in an actually beneficial capacity.

The Pro has 64 ROPs and as a result a theoretical 21 GPixel/s advantage over the 32-ROP Xbox One X GPU for pixel fillrate. Is the X in any way struggling against the Pro? Not by a country mile, and its practical throughput is far beyond it.
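For reference, the back-of-the-envelope math behind that (approximate retail clocks):

```python
# Theoretical pixel fillrate = ROPs * clock (GHz), approximate retail clocks.
ps4_pro = 64 * 0.911    # 64 ROPs at ~911 MHz  -> ~58.3 GPixel/s
one_x   = 32 * 1.172    # 32 ROPs at ~1172 MHz -> ~37.5 GPixel/s
print(ps4_pro, one_x, ps4_pro - one_x)   # on-paper gap of roughly 21 GPixel/s
```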
 
Last edited:

Journey

Banned
Sony made sure that their GPU wasn't bound by anything else, so they could take full advantage of the flops they had available to them - the PS4 is a very well balanced system.


I don't disagree, which brings me to my point: the lesser-flop console doesn't necessarily mean it will have the better design or vice versa, so at this point it's pretty silly to ignore the fact that we may have consoles with 12 vs 9. That's the discussion.

Taking Xbox One and PS4 as examples, one could look at 1.21 TF and 1.84 TF and shortly after create a whole video just like NXGamer just did and claim that "The Teraflops are a lie". Sure, it might be nice and educational, but the timing would be odd, coming out right when Durango and Orbis are leaked to say that, like don't be excited that PS4 might be 1.84 TF compared to Xbox One 1.21 TF because who knows what wizardry MS has in store to make up for it, or maybe Sony screwed up and the PS4 is chock-full of bottlenecks. :pie_eyeroll:

The bottom line is, all else equal, the higher teraflops WIN.

What we know about the architecture:

Xbox Series X = RDNA 2.0
PS5 = At least RDNA 1.0 maybe 2.0 (No one is sure)

Xbox Series X = GDDR6
PS5 = GDDR6

What we don't know is how many ROPs either will have, so I'll give you that little uncertainty, but everything else looks equal. Both are shooting for GDDR6, so there won't be a bandwidth issue like there was with Xbox One; that was the other side of my point. If we had some kind of information that pointed to PS5 having more ROPs, or using GDDR6 while XSX was still on GDDR5 or something like that, then it would've made sense to publish this video, otherwise it's defensive nonsense.
 

ethomaz

Banned
I don't disagree, which brings me to my point: the lesser-flop console doesn't necessarily mean it will have the better design or vice versa, so at this point it's pretty silly to ignore the fact that we may have consoles with 12 vs 9. That's the discussion.

Taking Xbox One and PS4 as examples, one could look at 1.21 TF and 1.84 TF and shortly after create a whole video just like NXGamer just did and claim that "The Teraflops are a lie". Sure, it might be nice and educational, but the timing would be odd, coming out right when Durango and Orbis are leaked to say that, like don't be excited that PS4 might be 1.84 TF compared to Xbox One 1.21 TF because who knows what wizardry MS has in store to make up for it, or maybe Sony screwed up and the PS4 is chock-full of bottlenecks. :pie_eyeroll:

The bottom line is, all else equal, the higher teraflops WIN.

What we know about the architecture:

Xbox Series X = RDNA 2.0
PS5 = At least RDNA 1.0 maybe 2.0 (No one is sure)

Xbox Series X = GDDR6
PS5 = GDDR6

What we don't know is how many ROPs either will have, so I'll give you that little uncertainty, but everything else looks equal. Both are shooting for GDDR6, so there won't be a bandwidth issue like there was with Xbox One; that was the other side of my point. If we had some kind of information that pointed to PS5 having more ROPs, or using GDDR6 while XSX was still on GDDR5 or something like that, then it would've made sense to publish this video, otherwise it's defensive nonsense.
From what data we see (rumors), I don't believe any next-gen console will be RDNA 2.0.
They could of course borrow some features from RDNA 2.0, but they will be RDNA 1.0 at their core.

Just like no mid-gen upgrade was based on Vega... it was all Polaris with some Vega features (in the Pro's case).
 
Last edited:
TF don't tell the whole story if we are comparing different architectures. For example, the 7.9 TF 5700 easily outperforms the 12.7 TF Vega 64. But the 7.9 TF 5700 won't outperform the 9.5 TF 5700 XT, as they are the same arch.
While your argument is correct, I am fairly certain that both consoles will have similar GPU architecture.
Microsoft did all that clever game analysis to understand how the games and their engines operate, and used it to their advantage in developing Xbox One X
Except MS focused the XDK in conjunction with the XB1X to try to eliminate all bottlenecks from the engines.
Well, this is how Microsoft designed the Xbox One... I have no problem with what others call "brute force", so long as the results are there.

But there are other differences around the GPU and memory setup that should give an advantage to the PS5, at least in some situations... And people in the know are telling us that Sony's machine is more powerful.

Examples - from the rumor mill
- Split memory pools on the PS5 (no memory access contention)
- A dedicated ARM CPU for background tasks PS5
- A somehow better raytracing solution on the PS5

Now these are unlikely to make up for a 30% GPU speed difference, but it could end up making one console perform better than the other depending on the scenario, instead of a complete destruction of the console that has the weaker GPU.

Obviously the Series X will have its own bespoke hardware for sound processing (which the PS5 may have too), but we have less hinting at that kind of thing.

Add that the numbers are still disputed so far, however they turned out last time.
The bottom line is, all else equal, the higher teraflops WIN.
Yup, and so far it looks like the Series X will win, but I claim that this is not so clear.

like don't be excited that PS4 might be 1.84 TF compared to Xbox One 1.21 TF because who knows what wizardry MS has in store to make up for it, or maybe Sony screwed up and the PS4 is chock-full of bottlenecks.
Actually, that conversation happened shortly after launch (I believe the arguments came partly from the way DF treated the interview with MS's engineer back in the day)... and MS insisted on their secret sauce.
 

Journey

Banned
From what data we see (rumors), I don't believe any next-gen console will be RDNA 2.0.
They could of course borrow some features from RDNA 2.0, but they will be RDNA 1.0 at their core.

Just like no mid-gen upgrade was based on Vega... it was all Polaris with some Vega features (in the Pro's case).


Maybe, you might be right.

What I am pretty confident about is that both will be using around the same generation of AMD solution, and from the looks of it, from the announcement that the Xbox Series X will be going with GDDR6, they're continuing the Xbox One X route and going all in, not using an embedded-RAM cost-saving solution to cut corners, so there's little room for there to be some hidden bottleneck similar to what we saw with Xbox One. It may just be as straightforward as "This one is more powerful because it has more CUs and a higher-clocked GPU", plain and simple to calculate and see the results in games.
 
Last edited:
This is what I laugh at the most sometimes. Microsoft did all that clever game analysis to understand how the games and their engines operate, and used it to their advantage in developing Xbox One X, and we are now supposed to believe that Microsoft has decided to just go all sloppy on us and brute force everything with nothing but power. No clever optimizations, features or techniques. Just hulk-smash their way through everything. Nope. Some would like to believe that with the amazing power in the console Microsoft won't also be smart enough to push it as far as humanly possible, but no such luck. Microsoft will ensure this thing sings, and I doubt they are relying purely on raw power.

Exactly, they have their own engineers, talents, insights, etc, which leads me to this:


I still feel Xbox will have more horsepower overall but Sony has a faster SSD and a better RT solution where Xbox will try to brute force everything.

Am I supposed to believe him here?
 

sinnergy

Member
I mean, is that really even that important? ROPs are merely the theoretical ceiling for the output; it doesn't mean they're being made use of fully or in an actually beneficial capacity.

The Pro has 64 ROPs and as a result a theoretical 21 GPixel/s advantage over the 32-ROP Xbox One X GPU for pixel fillrate. Is the X in any way struggling against the Pro? Not by a country mile, and its practical throughput is far beyond it.
I was asking because I was curious how it all worked, thanks.
 
 

K.N.W.

Member
not using an embedded-RAM cost-saving solution to cut corners, so there's little room for there to be some hidden bottleneck similar to what we saw with Xbox One.
Actually, having a faster embedded RAM pool doesn't mean your system is bottlenecked. And not having it doesn't mean your system is balanced. Going by the PS2, GameCube and Xbox 360, having a strong console and adding embedded RAM is just going to make it fly. On the other side, the Xbox One really needed that RAM pool just to breathe.

On the performance side, having a faster RAM pool for rendering is always a good idea; the problem is that developers would have to spend additional time to use it properly. So I think they might be excluding it on Series X just for ease of development, not for performance reasons.


.......... Deep, deep inside, I was wishing for new consoles to have something like 1 or 2 additional GB of some alien ultra fast RAM, just for rendering tasks. But I keep forgetting that consoles have a price xD
 
Last edited:

Journey

Banned
Actually, having a faster embedded RAM pool doesn't mean your system is bottlenecked. And not having it doesn't mean your system is balanced. Going by the PS2, GameCube and Xbox 360, having a strong console and adding embedded RAM is just going to make it fly. On the other side, the Xbox One really needed that RAM pool just to breathe.

On the performance side, having a faster RAM pool for rendering is always a good idea; the problem is that developers would have to spend additional time to use it properly. So I think they might be excluding it on Series X just for ease of development, not for performance reasons.


.......... Deep, deep inside, I was wishing for new consoles to have something like 1 or 2 additional GB of some alien ultra fast RAM, just for rendering tasks. But I keep forgetting that consoles have a price xD


I understand all that, and maybe my explanation was just oversimplified. I specifically mentioned Xbox One's case because they got there for reasons I'll explain below, but I agree: for GameCube, PS2, etc., embedded RAM made sense at the time. It was more of a necessity back then, simply because the kind of bandwidth this cache memory provided was unheard of compared to the bandwidth and memory allocation that was possible during that time.

Xbox One using ESRAM made sense at the time because MS engineers planned for 8GB of RAM from Day 1; Sony was content with using 4GB of RAM, so GDDR5 density was just going to make it. With the ambitious features included in the Xbox One's OS, including snapping IE during gameplay, screen overlay, broadcast, and Kinect features and voice commands all active during gameplay, less than 8GB was not an option, so DDR3 was the only route to get there (GDDR5 density wasn't nearly close to fitting 8GB at the time of planning, probably around 2010), so in order to make up for the 68GB/s max theoretical for DDR3, they added ESRAM. So it wasn't necessarily a move to save money; ESRAM actually made the chip more expensive, and what I like is that it drove Sony to make the PS4 8GB and add the OS functions we got as a result. They did NOT do this from the goodness of their little hearts.
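For context on that 68GB/s figure, the rough bandwidth math (peak theoretical numbers from the published memory setups):

```python
# Peak theoretical bandwidth = effective transfer rate * bus width in bytes.
xbox_one_ddr3 = 2133e6 * (256 / 8) / 1e9   # 2133 MT/s DDR3, 256-bit bus -> ~68.3 GB/s
ps4_gddr5     = 5500e6 * (256 / 8) / 1e9   # 5500 MT/s GDDR5, 256-bit bus -> ~176 GB/s
print(round(xbox_one_ddr3, 1), round(ps4_gddr5, 1))
# The 32MB ESRAM existed to paper over that gap for the render targets that needed it.
```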
 