
NX Gamer PS5 full spec analysis - a new generation is born

Cobenzl

Member
There's a lot of confusion on why SSD is so important for next-gen and how it will change things.
Here I will try to explain the main concepts.
TL;DR fast SSD is a game changing feature, this generation will be fun to watch!

It was working fine before, why do we even need that?
No, it wasn't fine, it was a giant PITA for anything other than small multiplayer maps or fighting games.
Let's talk some numbers. Unfortunately not many games have ever published their RAM pools and asset pools to the public, but some did.
Enter Killzone: Shadowfall Demo presentation.
We have roughly the following:

Type                 | Approx. size, % | Approx. size, MB
Textures             | 30%             | 1400
CPU working set      | 15%             | 700
GPU working set      | 25%             | 1200
Streaming pool       | 10%             | 500
Sounds               | 10%             | 450
Meshes               | 10%             | 450
Animations/Particles | 1%              | 45

*These numbers are rounded sums of various much more detailed numbers presented in the article above.

We are interested in the "streaming pool" number here (but we will talk about the others too).
We have ~500MB of data that is loaded on the fly as the demo progresses.
The whole chunk of data that the demo samples from (for that streaming process) is 1600MB.
The effective load speed of the PS4 drive is under 50MB/sec for compressed data (under 20MB/sec uncompressed), i.e. it will take at least ~30 seconds to load all of it.
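That arithmetic can be sanity-checked with a tiny sketch (the 1600MB pool size and ~50MB/sec effective speed are the post's rough figures, not measured values):

```python
# Time to stream the Killzone: Shadow Fall asset pool from the PS4 HDD.
# Figures are the rough numbers quoted above, not measurements.
STREAM_SOURCE_MB = 1600        # on-disk data the demo streams from
EFFECTIVE_SPEED_MBPS = 50      # effective read speed with compression

def load_time_sec(size_mb, speed_mbps):
    """Seconds to read size_mb sequentially at speed_mbps (no seek cost)."""
    return size_mb / speed_mbps

print(load_time_sec(STREAM_SOURCE_MB, EFFECTIVE_SPEED_MBPS))  # 32.0
```

At the uncompressed ~20MB/sec it would be 80 seconds, which is where the "at least ~30 sec" best case comes from.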

It seems like it's not that big of a problem, and for the demo it isn't. But what about the full game?
The game is ~40GB and you have 6.5GB of usable RAM; you could not load the whole game even if you tried.
So what's left? We can either stream things in, or do a loading screen between each new section.
Let's try the easier approach first: a loading screen.
We have 6.5GB of RAM, and the resident set is ~2GB from the table above (GPU + CPU working sets), so we need to load up to 4.5GB each time. That's 90 seconds, which is pretty annoying, and it's the best case: any time you load things non-sequentially, the drive has to seek and the time goes up.
You can't go back, because that re-loads things, which means another loading screen.
You can't use more than 4.5GB of assets in a whole game section, or you need another loading screen.
It gets even more ridiculous if your levels are dynamic: left an item in a previous zone? Load time goes up (the item is not baked into the game world, so we load the world, then seek for each item or item group on disk).
Remember Skyrim, with a load for every house you entered? That's what happens.
So loading screens are easy, but if your game is not a linear, static, theme-park-style attraction, it gets ridiculous pretty fast.
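The loading-screen math above, spelled out (RAM and speed figures are the post's rough assumptions; a real HDD load also pays seek time on top of this):

```python
# Best-case loading-screen length: refill everything except the
# resident CPU+GPU working sets at HDD speed, purely sequentially.
USABLE_RAM_MB = 6500     # PS4 RAM available to the game
RESIDENT_MB = 2000       # CPU + GPU working sets survive the transition
HDD_SPEED_MBPS = 50      # effective read speed with compression

reload_mb = USABLE_RAM_MB - RESIDENT_MB
loading_screen_sec = reload_mb / HDD_SPEED_MBPS
print(loading_screen_sec)  # 90.0 seconds, and only if nothing seeks
```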

How do we stream then?
We have a chunk of memory (remember, ~500MB) reserved for streaming things in from disk.
At our 50MB/sec we fill it up every 10 seconds.
So every 10 seconds we can have completely new data in RAM.
Let's define a metric: how much new shit can we show the player in 1 minute? Easy: 6 * 500MB = 3GB.
How much old shit does the player see each minute? Easy again: 1400 + 450 + 450 + 45 = ~2.5GB.
So we have roughly 50/50 old-to-new shit on screen.
Reused monsters? Assets? Textures? NPCs? You name it: that 50/50 is why.

But the PS4 has 6.5GB of RAM and we've only used ~4.5GB so far. What about the other 2GB?
Excellent question!
The answer: it goes to the old shit. Even if we grow the streaming buffer to 1.5GB, that does nothing to the 50MB/sec drive speed.
With the full 6.5GB in use we get 6GB old vs 3GB new per minute, i.e. 2:1 in favor of old shit.

But what about 10 minutes?
Good, good. Here we go!
In 10 min we can get to 30GB new shit vs 6GB old.
And that's, my friends, how the games worked last gen.
You, as a player, were introduced to new gaming moments very gradually.
Or the developers used tricks, like door-opening animations.
Remember Uncharted, with all the "let's push this heavy door open for 15 seconds" moments? That's because new shit needs to load: the player has to reach the new location, but we cannot load it fast enough.
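The gradual-introduction effect falls out of simple linear growth: streamed-in ("new") data scales with play time while the resident ("old") set stays fixed. A toy model using the post's numbers:

```python
# New vs old data after `minutes` of play: the drive streams at a fixed
# rate, while the resident set (textures, sounds, meshes...) is constant.
# All figures are the post's rough estimates.
def new_vs_old_mb(minutes, drive_mbps, resident_mb):
    """Return (streamed_in_mb, resident_mb) after `minutes` of play."""
    return drive_mbps * 60 * minutes, resident_mb

# PS4-era: 50MB/sec drive, ~6GB resident once the full RAM is in use.
print(new_vs_old_mb(1, 50, 6000))   # (3000, 6000) -> old wins 2:1
print(new_vs_old_mb(10, 50, 6000))  # (30000, 6000) -> new wins 5:1
```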

So, what about SSDs then?
We will answer that later.
Let's ask something else.

What about 4K?
With 4K "GPU working set" will grow 4x, at least.
We are looking at 1200*4 = 4.8GB of GPU data.
The CPU working set will also grow (everybody wants those better scripts and physics, I presume?), but probably only 2x, to 700*2 = 1400MB, call it ~1.5GB.
So overall the persistent memory will be well over 6GB, let's say 6.5GB.
That leaves us with ~5GB of free RAM in XSeX and ~8GB for PS5.

Stop, stop! Why PS5 has more RAM suddenly?
That's simple.
XSeX RAM is divided into two pools (logically, physically it's the same RAM): 10GB and 3.5GB.
GPU working set must use the 10GB pool (it's the memory set that absolutely needs the fast bandwidth).
So 10 - 4.8 = 5.2 which is ~5GB
CPU working set will use 3.5GB pool and we will have a spare 2GB there for other things.
We may load some low-frequency data there, like streaming meshes and such, but it will be hard to use it every frame: accessing that data too frequently drops the whole system bandwidth to 336GB/sec.
That's why MSFT calls the 10GB pool "GPU optimal".
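The pool arithmetic above, spelled out (the working-set sizes are the post's estimates, not published figures):

```python
# XSeX logical memory split: one physical 16GB pool, of which 10GB runs
# at full bandwidth ("GPU optimal") and 3.5GB at standard speed; the
# remaining 2.5GB is reserved by the OS. Working sets are estimates.
FAST_POOL_GB = 10.0
SLOW_POOL_GB = 3.5
GPU_WORKING_SET_GB = 4.8   # 1.2GB last gen, x4 for 4K
CPU_WORKING_SET_GB = 1.5   # 0.7GB last gen, x~2

free_fast = FAST_POOL_GB - GPU_WORKING_SET_GB   # left for assets/streaming
free_slow = SLOW_POOL_GB - CPU_WORKING_SET_GB   # left for low-freq data
print(round(free_fast, 1), round(free_slow, 1))  # 5.2 2.0
```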

But what about PS5? It also has some RAM reserved for the system? It should be ~14GB usable!
Nope, sorry.
The PS5 has a 5.5GB/sec flash drive, which typically loads 2GB in ~0.27 sec. Its write speed is lower, but raw reads are no less than 5.5GB/sec.
What the PS5 can do (and I would be pretty surprised if Sony doesn't do it) is save the system image to the drive while the game is playing,
and thus give almost the full 16GB of RAM to the game.
A 2GB system image swaps back into RAM in under 1 sec (save 2GB of game data to disk in ~0.6 sec + load the system from disk in ~0.3 sec), so why keep it resident?
But I'll stay on the safe side here and call it ~14.5GB usable for the PS5.
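The swap timing, for the record (the 0.6s save and 0.3s load are the post's own estimates for this hypothetical trick, not confirmed PS5 behavior):

```python
# Hypothetical "page the OS image out to SSD" trick described above.
# Both step timings are the post's rough estimates.
SAVE_GAME_DATA_SEC = 0.6   # write 2GB of game data out to make room
LOAD_SYSTEM_SEC = 0.3      # then read the 2GB system image back in

swap_sec = SAVE_GAME_DATA_SEC + LOAD_SYSTEM_SEC
print(swap_sec < 1.0)  # True: the OS could page back in under a second
```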

Hmm, essentially MSFT can do that too?
Yep, they can. The speeds will be less sexy, but no more than ~3 sec, I think.
Why don't they do it? Probably they rely on the OS constantly running in the background for all the services it provides.
That's why I gave Sony 14.5GB.
But I have a hard time understanding why 2.5GB is needed; all the background services could run in a much smaller RAM footprint just fine, and the UI stuff can load on demand.

Can we talk about SSD for games now?
Yup.
So, let's get to the numbers again.
Take the XSeX's ~5GB of "free" RAM and divide it into two parts: resident and streaming.
Why two? Because typically you cannot load shit into a frame while that frame is rendering.
The GPU is so fast that merely asking it "what exact memory location are you reading right now?" slows it down just to give you the answer.

But can you load things into other part while the first one is rendering?
Absolutely. You can swap the "resident" and "streaming" parts as often as you like, if the drive is fast enough.
Anyway, we now get to 50/50 new-shit-to-old-shit within one second!
2.5GB resident + 2.5GB streaming pool, and it takes the XSeX just ~1 sec to completely reload the streaming half.
Over 1 minute that's a 60:1 new/old ratio!
Nice!
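A minimal sketch of that resident/streaming double-buffer idea (the function names and the ~2.5GB/sec effective SSD rate are illustrative assumptions, not XSeX specs):

```python
# Render from the "resident" half while the "streaming" half refills,
# then swap the two once the refill completes. Illustrative numbers only.
def seconds_per_swap(half_gb, ssd_gbps):
    """How long one half takes to refill before the halves can swap."""
    return half_gb / ssd_gbps

def new_to_old_ratio(minutes, half_gb, ssd_gbps):
    """Streamed-in GB vs the (constant) resident half over `minutes`."""
    streamed_gb = ssd_gbps * 60 * minutes
    return streamed_gb / half_gb

# XSeX-like: 2.5GB halves at ~2.5GB/sec -> a full swap every second,
# so over one minute new data outweighs the resident half 60:1.
print(seconds_per_swap(2.5, 2.5))     # 1.0
print(new_to_old_ratio(1, 2.5, 2.5))  # 60.0
```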

What about PS5 then? Is it just 2x faster and that's it?
Not really.
The whole 8GB of the RAM we have "free" can be a "streaming pool" on PS5.

But you said "we cannot load while frame is rendering"?
In XSeX, yes.
But the PS5 has GPU cache scrubbers.
This is a piece of silicon inside the GPU that reloads our assets on the fly while the GPU is rendering the frame.
It has full visibility into where and what the GPU is reading right now (it's all in the GPU cache, hence "cache scrubber").
And it never invalidates the whole cache (which could still stall the GPU); it evicts exactly the data that changed (I hope you listened to that part of Cerny's talk very closely).

But its free RAM size doesn't really matter; we still have 2:1 old/new in one frame, because the SSD is only 2x faster?
Yes, and no.
We do have only 2x faster rates (although the max rates are much higher for PS5: 22GB/sec vs 6GB/sec)
But the thing is, the PS5's GPU can render from the full 8GB of game data, while the XSeX renders from only 2.5GB (remember, we cannot render from the "streaming" half while it loads).
So in any given scene, potentially, the PS5 can have 2x to 3x more details/textures/assets than the XSeX.
Yes, the XSeX will render it faster: higher FPS or a higher framebuffer resolution (not both; the perf difference is too small for that).
But the scene itself will be less detailed, have less artwork.

OMG, can MSFT do something about it?
Of course they will, and they do!
What are the XSeX advantages? More ALU power (FLOPS), more RT power, more CPU power.
So MSFT will rely heavily on that power advantage instead of artwork: more procedural content, more ALU spent on physics simulation (remember, RT and lighting are physics simulations too, after all).
More compute and more complex shaders.

So what will be the end result?
It's pretty simple.
PS5: relies on more artwork and on pushing more data through the system. Potentially a 2x advantage there.
XSeX: relies more on in-frame calculation and procedural content. Potentially a 30% advantage there.
Who will win: dunno. There are pros and cons for each.
It will be a fun generation indeed. Much more fun than the previous one, for sure.

You should head over to this thread; lots of interesting analysis.
 

Three

Member
SSD>TF - Mark Cerny 😁🤣

He's right though, for gameplay. MS's strategy is to make the same games for Xbox One, Lockhart and XSX and increase resolution and fps; that's what the GPU TFs get you. That may win them DF comparisons on 3rd-party games and positive praise from some people. MS's strategy is to appeal to developers with the Xbox One and Lockhart install base, increase some graphics sliders between the versions, and get people paying the subscription no matter where.

The PS5 SSD speed and no lowest common denominator on the other hand can affect gameplay. Character movement will be less limited, i. e. they can move faster, they can go into every building in open world games in 0.8 milliseconds.

Sony don't care about a strategy to sell subs with little effort. That's not what their PS4 or PS5 strategy has been. MS care more about perception and selling subs/peripherals than the games.

I have no doubt however that MS will try to sell you fast drives as a high price add on though in the future. Who knows, maybe games that actually require faster SSD speeds will require it like an N64 ram pak.
 
Funny thing is that the third party SSDs that will be able to be used in the ps5 seem like they will be more expensive than MS proprietary solution. (and it is unknown what you will be able to use on the ps5, for now you are limited to the......875 GB that the console provides)
So MS will win the DF comparisons, that automatically means that 3rd party games, you know the majority of games, will be better on the XSX.

«Increase some sliders».....looooool. I guess pc gamers should stop buying newer graphics cards, since all they do is increase some sliders. Why even bother, when you can play the same game at low res, with most effects deactivated, at very low frame rates? These are all just metrics on a slider. Never mind that you are assuming MS will continue making games for the Xbox one forever.
 

Three

Member
I'm talking about markup. If the PS5 drives are twice as fast but there is competition because it's an open standard it will be at market value.

The MS proprietary drive is by a third party (likely Samsung) but they will charge more for that lower speed than a standardised low speed NVMe in their plastic caddy.

Yes? They may win DF comparisons because of resolution or fps advantage but how does that go against the fact that a fast SSD can be game changing and resolution sliders being notched slightly higher from a 1.8tflop difference are not?
 
It just means that the majority of games released will be better on the XSX; that's what it means, plain and simple. I would also wait for even one of these game-changing ps5 games to appear first.
 

Three

Member
That's just a poor argument. What's the point of discussing anything, including DF results, if your only retort is that we need to wait before discussing anything.

The fact is that an SSD is the big game changer this gen, not TF.
So "SSD>TF" is not some comical comment he made.
 
You wrote that the XSX will win the DF comparisons, doesn’t that mean that most games, the third party ones, will be better on the XSX ?
Yes, SSD>TF is an extremely comical comment until we see this magical SSD in action doing all those wonderful things that you have in your head. Completely impartial tech sites like Tom's Hardware, which have no stakes in console wars, are painting an entirely different picture, essentially mocking these outrageous claims we have been seeing about the importance of a fast SSD.
 

Three

Member
Yes I mentioned it but because I can discuss things without getting emotional and contradictory.

"The magical 1.72TFs difference is comical until we can see it in action doing the wonderful things you have in your head"

Maybe that will give you a taste of how ridiculous what you're saying sounds. There is nothing magical about it and we can discuss it.
 
The absolute smallest difference between the two GPUs is 1.877 TF, not 1.72 (the XSX GPU is actually 12.155 TF), when the ps5 GPU is running at full speed, and I consider GPU power something more solid than SSD speed, esp. when combined with a better CPU and much faster RAM with a faster bus as well.

As you wrote the XSX will be winning the DF comparisons so the majority of games will be better on it.
 

martino

Member
i will only speak for myself, but i will buy an hdmi 2.1 tv... so variable framerate (above 30, though) will not be a problem in my case...
i'm also aware that people with hdmi 2.1 tvs will be a niche for most of this gen, so it can matter
 

SirTerry-T

Member
He's right though, for gameplay. MS strategy is make the same games for Xbox one, lockhart and XSX and increase resolution and fps. That’s what the GPU TFs get you.That may win them DF comparisons on 3rd party games and the positive praise from some people. MS strategy is to appeal to developers with the xbox one, lockhart install base and increase some graphics sliders between the games and get people paying the subscription no matter where.

The PS5 SSD speed and no lowest common denominator on the other hand can affect gameplay. Character movement will be less limited, i. e. they can move faster, they can go into every building in open world games in 0.8 milliseconds.

Sony don't care about a strategy to sell subs with little effort. That's not what their PS4 or PS5 strategy has been. MS care more about perception and selling subs/peripherals than the games.

I have no doubt however that MS will try to sell you fast drives as a high price add on though in the future. Who knows, maybe games that actually require faster SSD speeds will require it like an N64 ram pak.

Newsflash.
Both these businesses aren't in it for the charity.
 

Three

Member
You don't say, but there is a difference in how they sell things. You can make some stupidly good technical things that are financially questionable (PSVR is a good example); then there is the lower-risk mass market paired with the 'premium' strategy that someone else usually adopts.
The absolutely smallest difference between the two GPUs is 1.877 TF not 1.72 (the XSX gpu is actually 12.155 TF) when the ps5 gpu is running at full speed and I consider the GPU power something more solid than SSD speed, esp. when combined with a better CPU and much faster Ram with faster bus speed as well.

As you wrote the XSX will be winning the DF comparisons so the majority of games will be better on it.

Cool, if you say so but you missed the point in favour of console wars.
 

rsouzadk

Member
K, watched it. Your typical youtuber: reads some specs on reddit, is suddenly an expert, makes a video and falls flat on his face.

Conclusion.

The guy has no clue what he's talking about. He just rambles information off a sheet and makes comparisons that are laughably bad. I could make a list, but why bother. I must say it's not as bad as his "tflops are a lie" video, as he completely missed the ball there. Actually it's even worse, because he used the same reasoning in this video STILL.

Yikes the video.




Again Terrible video.

Testing a nvidia gpu versus a amd gpu is kinda pointless. Anybody knows this, Nvidia tflops are different from AMD as they use different architectures this is well known. Tflops from nvidia are only interesting towards nvidia products.

This makes the video kinda rough to watch entirely because the point he tries to make is useless.

Tflops are used to see how a card performs against the one before it, and that's why nvidia and amd push the number.

Tflops are not a lie; they are an indication of performance, especially when you talk about two platforms that use the exact same architecture. Which, funny enough, these consoles do, rofl.

Also comparing pc games vs console games even while u don't know the settings of said console is also laughable at best to draw conclusions out of it.

Want to compare different architectures or platforms? compare them with same source of data and see what they do, still performance will fluctuate for different games which is why you make a list of non biased games with settings that are realistic ( no hairworks etc ) and compare it to see what the tflops actually mean as result.

All this flew right over his head.

There is no such thing as "nvidia teraflops are different from amd teraflops".

A teraflop is a teraflop. It is a measure of compute performance.

Now, if you're talking about different archs and performance/efficiency, I can concur on that.
 

ruvikx

Banned
He's right though, for gameplay. MS strategy is make the same games for Xbox one, lockhart and XSX and increase resolution and fps. That’s what the GPU TFs get you.That may win them DF comparisons on 3rd party games and the positive praise from some people. MS strategy is to appeal to developers with the xbox one, lockhart install base and increase some graphics sliders between the games and get people paying the subscription no matter where.

The PS5 SSD speed and no lowest common denominator on the other hand can affect gameplay. Character movement will be less limited, i. e. they can move faster, they can go into every building in open world games in 0.8 milliseconds.

You almost had me until the 0.8 milliseconds part. I've been reading so much SSD > GPU/CPU stuff in recent days that even a genuine wind-up can appear "real", because the standard has already been set.

Unless you're not joking? In which case, wowzers. GPU, CPU & RAM are what determine the meat & bones of a game; i.e. a faster SSD isn't going to provide in-game magic which cannot be done on the Series X. I get that there's a shit load of propaganda flying around, but seriously, no.
 

Kenpachii

Member
Can we have 1 at least and then let's discuss?

And for your second part here:-

"Anybody knows this, Nvidia tflops are different from AMD as they use different architectures this is well known. Tflops from nvidia are only interesting towards nvidia products. "

This just proves my point AGAIN: if Nvidia Tflops are not the same as AMD Tflops, then Tflops are not a 100% accurate reference, are they?

Also, I thought this above logic (not mine by the way) states that Nvidia gets more from less Tflops? Which in the tests I did (which are 100% accurate and valid by the way) shows that AMD, in this instance, is getting more from them, further emphasising the Tflop focus issue around it.

Your thoughts please?

I explained all the tflop department in the reaction you quoted. Aint repeating myself sorry.

About your new video.

Fair to ask for some points to discuss, it was late so i couldn't bother at that point.

1) U compare fixed vs non-fixed hardware. That's simply misleading, because that's not the reality of things. In the absolute best case scenario sony will hit 10,2tflops, but once the cpu starts to ramp up in open world games, clock speeds will matter A LOT because of how multi-core games are programmed, and the GPU will take a hit as a result. It's fine for specific games like resident evil 3, where the CPU isn't very important and the GPU will be tortured to death, but if you play a game like AC that CPU will be pegged at 100% on its cores and therefore the GPU will take a hit. With xbox that's not the case, which makes the real performance edge towards xbox far bigger than the 10.3tflop number showcases. It's simple damage control, especially when they don't tell you the real performance when everything is used.

So you provide information in your video that is correct, but what you tell people yourself is incorrect as result.

2) The 5500xt showcase of a fluctuating GPU solution is pointless and misleading when that 9-10 tflop gpu will be pegged at 100% all day long, as 4k is just that demanding, especially with next-gen complexity going forward. Even in today's games that GPU will not hold 60 fps at 4k without lowering settings in pretty much any title.

So the whole metric of nice balance and different clocks is nothing but sony PR spin to validate their higher temporary overclock on the GPU so it doesn't look bad. U fall straight for it, making comparisons that are simply invalid to explain stuff that simply isn't relevant. My next point will explain why.

3) Locking a game to 60 fps with resources left over, where this technique would somewhat work (as you do in your test), doesn't reflect reality for these consoles. The PS4 pro was a 4k box, the xbox one was a 1080p box, and what happened? Did they ever have the gpu locked at 60 or 30 fps at 1080p/4k in any demanding game? Nope, it's always a clusterfuck of different settings, dynamic resolutions and frame drops, because both pieces of hardware are pegged to death. That's exactly why the whole talk from cerny was so bad and just felt like utter damage control. U need hardware for the demanding spots, and that's where the PS5 does not deliver its full performance, which makes the tflop number misleading and unusable as a comparison.

U go along with it, showcase comparisons that make no sense in the real world for those boxes and come to conclusions that are simple not relevant. Like testing a card out that has severe overhead in a game or is crippled by CPU completely yet keep the 10,3tflop number alive while a PS5 pro in that situation would down clock the GPU hard or peg that gpu to 100% and clock the cpu down.

4) Both boxes are heavily GPU limited, yet you feel the need to argue that the tflop GPU measurement is a lie and unimportant, in a generation that is crippled by GPU performance. like wut? GPU performance is the most important factor when it comes to 4k gaming. It's the biggest bottleneck in these boxes for the next generation, yet because u believe tflops are a lie, gpu performance isn't interesting anymore? oke dude.

5) From what i remember, in your video you claim it's only going to be a 50mhz difference in GPU clock at the end of the day.
- Why would sony, with a GPU that sits above 2,2ghz, report variable clocks if the difference is only 50mhz? Why not just report the real clock at that point, since it barely makes any difference?
- If they cared so much about energy consumption, why even push a 2,2ghz clock to start with? Why not just straight up clock it down to a fixed 2ghz? Exactly: they don't.
- A 10% or 15% power consumption increase (whatever u mentioned in your video) for 50mhz on a 2,2ghz clock?

The mhz and power consumption figures are most likely tales from your arse. And if you actually look at it, it doesn't make a whole lot of sense, does it?

6) Your GPU demonstration made no sense at all: xbox vs a 5500xt, which uses a different architecture and different supporting hardware that extremely favors the 5500xt on multiple levels, which I already explained a bit earlier, etc etc.

Conclusion

That's just 10 minutes into your video; i could go on and on through the entire thing, with your memory claims, your OS claim that makes no sense, etc etc etc. I watched a good chunk of it, as i am always interested in technology videos (i couldn't care much about the consoles in general, but the technology is interesting to me), but frankly it's unwatchable.
 

splattered

Member
Though his comparison may be flawed, I think there are some valid points in there, that the general consumer should not to swayed by the allure of a simple TFlop figure, and that bottlenecks can render increased TFlops redundant. Those are just two positive points I can recall that he made.

The general consumer should not be swayed by the allure of power on paper? As in, you don't want anyone buying an xbox because you prefer playstation?

If gamers were lucky at all this generation console ownership would be more evenly split pushing both sides to be as innovative possible.

What I find funny in all of these threads is die hard playstation fans legitimately seem worried that more people might buy an xbox now because the power difference (even if slight) seems appealing.

Sony isn't going to reward you with more and better games just because they have big sales numbers. Both sides need to be hungry and pushed to do better for everyone to benefit, ps fans included.
 
36 RDNA2 CUs = 58 PS4 CUs
"Please ignore TFs and CU counts!"

That's not interesting to industry professionals. Sounded an awful lot like damage control if you ask me.
He never said to ignore them; he said TFs are not the most reliable metric of performance and that CUs are a lot more efficient than they used to be. Which DF said as well.

Unless they're damage controlling too, of course.
 

psorcerer

Banned
In the absolute best case scenario sony will hit 10,2tflops but once cpu starts to ramp up in open world games clock speeds will matter A LOT because of how multi core games are programmed, the GPU will get a hit as result.

The CPU is not going to eat much power if you're not doing AVX256 stuff on it 24/7.
Nobody in their right mind will do AVX on the CPU when the GPU is available for compute.
Cerny understands that; you don't.

GPU performance is the most important factor when it comes to 4k gaming. Its the biggest bottleneck in those boxes for next generation yet because u believe tflops are a lie gpu performance isn't much interesting anymore?

So what? You can only upgrade the GPU so much within the power and cost constraints of a console. Both companies made a lot of compromises within these constraints.
Want an unlimited GPU - PC is your friend.

5) From what i remember in your video you claim its only going to be a 50mhz difference at the end of the day the GPU clock.
- Why would sony clock a GPU that sits above 2,2ghz report variable clocks with only 50mhz difference, why not just report the real clock at that point as it barely makes any difference.
- if they cared so much about energy consumption why even push a 2,2ghz clock speed to start with why not just straight up clock it down to a fixed 2ghz? yes they don't.
- 10% power consumption increase or 15% whatever u mentioned in your video for 50mhz on a 2,2ghz clock

Because Cerny thinks the CPU will be under-utilized, and will for the majority of the frame be running some game code. That was the case this gen, and it will probably be the case next gen.
Jaguar was not bad at all. But if you run your game logic in a single-threaded lua interpreter, even if it's 100% of one core it won't budge the power profile (that's why the whiny shits whined about Jaguar the whole fucking generation).

6) Your GPU demonstration made no sense at all: Xbox vs a 5500 XT, which uses a different architecture and different supporting hardware that extremely favors the 5500 XT on multiple levels, which I explained a bit earlier, etc. etc.

You can compare different architectures, in fact I just did.
 

Three

Member
You almost had me until the 0.8 milliseconds part. I've been reading so much stuff about SSD > GPU/CPU in recent days that even a genuine wind up can appear "real" because the standard has already been set.

Unless you're not joking? At which point wowzers. GPU, CPU & RAM is what determines the meat & bones of a game; i.e. a faster SSD isn't going to provide in-game magic which cannot be done on the Series X. I get there's a shit load of propaganda flying around, but seriously, no.
I was being serious, but the 0.8 milliseconds should read 0.8 seconds, or 800 milliseconds.
 
Last edited:

Bernkastel

Ask me about my fanboy energy!
[attached images]
 
Last edited:

NXGamer

Member
I explained the whole TFLOPS department in the reaction you quoted. Ain't repeating myself, sorry.

About your new video.

Fair to ask for some points to discuss; it was late, so I couldn't be bothered at that point.

1) You compare fixed vs non-fixed hardware. That's simply misleading, because that's not the reality of things. In the absolute best case scenario Sony will hit 10.2 TFLOPS, but once the CPU starts to ramp up in open-world games, clock speeds will matter A LOT because of how multi-core games are programmed; the GPU will take a hit as a result. It's nice for specific games like Resident Evil 3, where the CPU isn't that important and the GPU will be tortured to death, but if you play a game like AC, that CPU will be pegged at 100% on its cores and the GPU will therefore take a hit. With Xbox that's not the case, which pushes the real-world performance edge towards Xbox far more than that 10.3 TFLOPS showcases. It's simply damage control, especially when they don't tell you the real performance when everything is used.

So the information you provide in your video is correct, but what you tell people yourself is incorrect as a result.

How much will the GPU be hit in the PS5? What level of CPU load is the maximum before a reduction in frequency? What workloads, what duration?
AC games tend to push the GPU very, very hard, not just the CPU. Recall that 60Hz will be the core target, and a 3.2-3.5GHz CPU will likely suffice for the required workloads; this is still nigh on 4x the throughput of the previous Jaguar cores, and they managed 30.

The reason they do not "tell you the real performance" is that they do not know; it all depends on the team, the game's aims and the code. This is why it is variable and will ALWAYS be variable. Some seem to be thinking of the GPU and CPU as being in a race at all times; it is not, and never is, like that, which I both explained and demonstrated in my video, which is correct and matches what I said.

2) The 5500 XT showcase of a fluctuating GPU solution is pointless and misleading when that 9-10 TFLOPS GPU will be pegged at 100% all day long, as 4K is just that demanding, especially with next-gen complexity going forward. Even in today's games that GPU will not hold up at 4K 60fps without lowering settings in pretty much any title.

So the whole metric of nice balance and different clocks is nothing but Sony PR spin to validate their temporary higher overclock on the GPU so it doesn't look bad. You fall straight for it, and make comparisons that are simply invalid to explain things that simply aren't relevant. My next point will explain why.
I already said that the XSX will be around 15% higher res with all areas being equal, but a ~15% reduction in resolution at 4K (2160p versus 3552x1998) OR FPS (60 versus 51) will be very minor; if that is your focus, then you can be happy with it.
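That ~15% figure can be sanity-checked with a quick pixel-count calculation (a rough sketch using the resolutions quoted above; 3840x2160 is native 4K):

```python
# Rough pixel-count check of the ~15% gap quoted above (3552x1998 is the
# example scaled resolution from the post; 3840x2160 is native 4K).
native_4k = 3840 * 2160   # 8,294,400 pixels
scaled    = 3552 * 1998   # 7,096,896 pixels

deficit = 1 - scaled / native_4k
print(f"Pixel deficit: {deficit:.1%}")   # 14.4%, i.e. roughly the quoted ~15%
```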

3) Locking a game to 60fps with resources left over, where this technique would somewhat work as it does in your test, doesn't reflect reality for these consoles. The PS4 Pro was a 4K box, the Xbox One was a 1080p box, and what happened? Did they ever have the GPU locked at 60fps or 30fps at 1080p/4K in any demanding game? Nope, it's always a clusterfuck of different settings, dynamic resolutions and frame drops, because both pieces of hardware are pegged to death. That's exactly why the whole talk from Cerny was so bad and just felt like utter damage control. You need hardware headroom for demanding spots, and that's where the PS5 does not deliver its full performance, which makes the TFLOPS number misleading and unusable as a comparison.

You go along with it, showcase comparisons that make no sense in the real world for these boxes, and come to conclusions that are simply not relevant. Like testing a card that has severe overhead in a game, or is completely crippled by the CPU, yet keeping the 10.3 TFLOPS number alive, while a PS5 in that situation would downclock the GPU hard, or peg the GPU at 100% and clock the CPU down.
Settings are lowered to get the best results from the hardware; settings on PC go up, and the more extreme they are, the lower the return in quality/result over performance. This was and always will be the case, and it is the same on PC: diminishing returns are a real thing, and on consoles the devs will target the best balance. E.g. running a voxel fog volume at 1/4 resolution will look slightly lower quality but could return 4x the performance on the same hardware elsewhere. Remember that the pixel counts everyone gets hung up on are only for the depth or geometry; ALL games have MRTs and buffers running at a variety of resolutions and precisions. This is not new, nor indeed anything specific to this or the next gen.

4) Both boxes are heavily GPU-limited, yet you feel the need to argue that the TFLOPS measurement is a lie and not important, in a generation that is crippled by GPU performance, and on top of that only use it for some resolution talk. Like, what? GPU performance is the most important factor when it comes to 4K gaming. It's the biggest bottleneck in these boxes for the next generation, yet because you believe TFLOPS are a lie, GPU performance isn't interesting anymore? OK dude.
If you think that 4K defines next gen, then you may already have noticed that we have that this gen. GPUs will always be a limit; even the 2080 Ti is limiting teams now, and they make sacrifices on it to get what they want. Next gen is about working smarter, not harder: VRS, Mesh Shaders, Ray Tracing, 3D Audio and that centralised SSD-focused I/O (along with other areas I am going to cover soon).

5) From what I remember, in your video you claim the GPU clock is only going to be a 50MHz difference at the end of the day.
- Why would Sony clock a GPU above 2.2GHz and report variable clocks with only a 50MHz difference? Why not just report the real clock at that point, as it barely makes any difference?
- If they cared so much about energy consumption, why even push a 2.2GHz clock to start with? Why not just clock it down to a fixed 2GHz? Yet they don't.
- A 10% or 15% power consumption increase (whatever you mentioned in your video) for 50MHz on a 2.2GHz clock.

The MHz and power consumption story, which is what it most likely is, are tales from your arse. And if you actually look at it, it doesn't make a whole lot of sense, does it?
It really does, as I explain and so does Mark Cerny: they have set a fixed power limit to reach a maximum of 3.5 + 2.2 across the APU. But recall that a system is far more than just these two components, AND Cerny and his team of engineers cannot know what everyone will do from game to game. So IF a game starts crunching large data sets, or thrashing the GPU with compute alongside pixel work while streaming data from the SSD constantly, etc., then the fixed power limit can no longer go higher, and thus the frequencies have to drop, maybe 2-5%. But as power and frequency are not linear, more so in a faster but narrower GPU, a small dip can restore the power. The 50MHz was an example; let's say it is 100MHz in a worst case. Is this from the CPU or the GPU? The GPU will consume more for that 100MHz as it carries more load, so it makes the most sense there.

Mark Cerny is not a salesman; he is an engineer, an architect, a game developer. So he was being truthful and telling the teams that this is an edge case, but it will be a predictable and managed edge case, which has been present in ALL hardware since forever. This is why new consoles come with developer guides, example libs, data results and much more, so teams don't have to find this out on the hardware themselves. The XSX will also have edge cases in places; where they are, and what impact they may have, will already be known by the teams. As I mention in my video, the obvious one is the split pools and contention on the same bus: a minor risk, but just like the PS5's frequency throttling, a real one.
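The "power and frequency are not linear" point can be illustrated with a toy model. Dynamic power scales roughly with frequency times voltage squared, and voltage rises with frequency, so power grows roughly cubically with clock. The numbers below are assumptions for illustration, not PS5 measurements:

```python
# Illustrative sketch only (assumed cubic scaling, not PS5 measurements):
# dynamic power ~ f * V^2, and V rises with f, so power grows ~f^3 under DVFS.
def relative_power(clock_mhz: float, base_mhz: float = 2230.0) -> float:
    """Power draw relative to the base clock, assuming power scales as f^3."""
    return (clock_mhz / base_mhz) ** 3

for clock_mhz in (2230, 2180, 2130):
    clock_drop = 1 - clock_mhz / 2230
    power_saved = 1 - relative_power(clock_mhz)
    print(f"{clock_mhz} MHz: clock -{clock_drop:.1%}, power -{power_saved:.1%}")
```

Under this assumption, the 50MHz (~2%) drop mentioned above buys back roughly 6-7% of power, which is the sense in which "a couple of percent of frequency" can absorb a worst-case power spike.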

6) Your GPU demonstration made no sense at all: Xbox vs a 5500 XT, which uses a different architecture and different supporting hardware that extremely favors the 5500 XT on multiple levels, which I explained a bit earlier, etc. etc.

Conclusion

That's just 10 minutes into your video; I could go on and on through the entire video, with your memory claims, your OS claim that makes no sense, etc. etc. That's why, after watching a good chunk of it (I am always interested in technology videos; I couldn't care much about the consoles in general, but the technology is interesting to me), frankly, it's unwatchable.
I do not understand your comment here. The 5500 XT was performing worse than the X; the example was to show that a 1 TFLOPS gap can be not only removed but exceeded by simply turning on dynamic scaling. This is ONE option teams will have for the 15% TFLOPS gap, when everything else (in the GPU render path only) is equal. There are many more options than just resolution, and I have been trying to demonstrate that over the past x years.

It is early days, and I hope we can all look forward to what both these machines will allow the dev community to create and amaze with.
 
You understand that the slow and fast pools are on the same bus and you can only access one at a time, right? And if you access the slow pool, you lock the entire bus.
That's just not true. The OS will be using 2.5GB of slow RAM, and by your logic, since the OS is running all the time, the bus would be locked to the slower speed at all times, and MS would be lying about having a fast pool of RAM.
 

-kb-

Member
That's just not true. The OS will be using 2.5gb of slow RAM and using your logic as the OS is running all the time the bus will be locked slower at all times, and MS lying about having a fast pool of RAM.

It is true; it's just that Microsoft has been terrible at explaining this, and nearly everyone on this board is misinformed and misunderstands how memory controllers work.

When a memory consumer makes a request to the slow pool, the bus runs at the slower rate until that transaction has ended, and while the transaction is happening no other consumer can request memory from the GDDR6. If a consumer requests the fast pool, the same thing happens, except at the faster rate; but you still cannot access both the fast and slow pool at once.

Also, the OS may be running all the time, but it's probably not accessing memory all the time; otherwise that bus is going to be slow as shit.

The concurrent accesses are essentially popped into a queue and then sorted, with CPU requests prioritized to reduce latency.
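The time-sliced bus behaviour described above can be sketched as a weighted average. This is a simplification using the publicly quoted XSX figures (560GB/s against the 10GB fast pool, 336GB/s against the 6GB slow pool), not a model of the actual memory controller:

```python
# Simplified model of a shared GDDR6 bus time-sliced between two pools
# (quoted XSX figures: 560 GB/s fast pool, 336 GB/s slow pool).
def effective_bandwidth(slow_fraction: float,
                        fast_gbps: float = 560.0,
                        slow_gbps: float = 336.0) -> float:
    """Average bandwidth over a frame when `slow_fraction` of bus time
    is spent servicing slow-pool transactions."""
    return slow_fraction * slow_gbps + (1.0 - slow_fraction) * fast_gbps

print(effective_bandwidth(0.0))  # 560.0 (all traffic hits the fast pool)
print(effective_bandwidth(0.1))  # 537.6 (10% of bus time on the slow pool)
print(effective_bandwidth(0.5))  # 448.0 (50/50 split, same as PS5's flat rate)
```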
 
Last edited:
It is true; it's just that Microsoft has been terrible at explaining this, and nearly everyone on this board is misinformed and misunderstands how memory controllers work.

When a memory consumer makes a request to the slow pool, the bus runs at the slower rate until that transaction has ended, and while the transaction is happening no other consumer can request memory from the GDDR6. If a consumer requests the fast pool, the same thing happens, except at the faster rate; but you still cannot access both the fast and slow pool at once.

Also, the OS may be running all the time, but it's probably not accessing memory all the time; otherwise that bus is going to be slow as shit.

The concurrent accesses are essentially popped into a queue and then sorted, with CPU requests prioritized to reduce latency.
Again, that's off base. The OS is always going to be making calls in the background, and not always at predetermined times. You expect MS to put out a system that has game assets streaming through the RAM at 560GB/s, only to have the OS use RAM and suddenly smash the video RAM down to the slow speed?
If you watch the DF video you will see they say MS has engineered a solution where you can access the RAM at its rated speed.
 

-kb-

Member
Again, that's off base. The OS is always going to be making calls in the background, and not always at predetermined times. You expect MS to put out a system that has game assets streaming through the RAM at 560GB/s, only to have the OS use RAM and suddenly smash the video RAM down to the slow speed?
If you watch the DF video you will see they say MS has engineered a solution where you can access the RAM at its rated speed.

I didn't say it drags the video RAM down to the slow speed.
I said both cannot access the memory at the same time.
There's no getting around the fact that only the GPU or the CPU can access the memory at any one moment, and when the slow pool is being accessed, the fast pool cannot be.
 
Last edited:

Journey

Banned
I like how NXGamer stresses the point over and over how important bandwidth is for performance and talks about how that was the achilles heel for the PS4 Pro vs X1X.

Now we have

XSX: 560GB/s bandwidth for the 10GB of memory that will be used for games, the rest is for system and sound, means they don't have to touch that section.
PS5: 448GB/s bandwidth unified.

Why then is NXGamer comparing an AVERAGE bandwidth? The true difference is 448GB/s vs 560GB/s. One could argue that future games might exceed the need of 10GB for the game alone, but it just goes to show how far a biased person can change the narrative to their choosing.

 

-kb-

Member
I like how NXGamer stresses the point over and over how important bandwidth is for performance and talks about how that was the achilles heel for the PS4 Pro vs X1X.

Now we have

XSX: 560GB/s bandwidth for the 10GB of memory that will be used for games, the rest is for system and sound, means they don't have to touch that section.
PS5: 448GB/s bandwidth unified.

Why then is NXGamer comparing an AVERAGE bandwidth? The true difference is 448GB/s vs 560GB/s. One could argue that future games might exceed the need of 10GB for the game alone, but it just goes to show how far a biased person can change the narrative to their choosing.



Because within a given time period, if a device accesses the slow pool, it reduces the average bandwidth over that period.

For example (a scenario that wouldn't happen, just to illustrate the point): if the CPU accessed the slow pool for 99% of the frame time, the average bandwidth wouldn't be anywhere near 560GB/s.
 

NXGamer

Member
I like how NXGamer stresses the point over and over how important bandwidth is for performance and talks about how that was the achilles heel for the PS4 Pro vs X1X.

Now we have

XSX: 560GB/s bandwidth for the 10GB of memory that will be used for games, the rest is for system and sound, means they don't have to touch that section.
PS5: 448GB/s bandwidth unified.

Why then is NXGamer comparing an AVERAGE bandwidth? The true difference is 448GB/s vs 560GB/s. One could argue that future games might exceed the need of 10GB for the game alone, but it just goes to show how far a biased person can change the narrative to their choosing.



I have not compared "average" at all. What I stated is that compromises are made on both sides, which is normal, and that having a split pool of RAM on the same bus means you CAN have contention issues, along with possible space issues (i.e. you fall out of the 10GB into the slower 6GB). KB gives a good but extreme example below to prove the point. As I ALSO say in the video, the slow pool is still very fast, but higher demand means higher requirements for both consoles.

Just as I also state that the XSX will always push more pixels and RT by the relative % over the PS5; this is just fact and is all part of the piece. Once you stop reading things from a single POV, you can see I am simply talking about potential here. If we are looking to clear up the specs, the bandwidth figures of both systems are best-case scenarios; they will almost never be hit (aside from loading a level at start) on EITHER system, just like every spec ever provided in the history of spec provisions.

Because within a given time period, if a device accesses the slow pool, it reduces the average bandwidth over that period.

For example (a scenario that wouldn't happen, just to illustrate the point): if the CPU accessed the slow pool for 99% of the frame time, the average bandwidth wouldn't be anywhere near 560GB/s.
 
Last edited:

Journey

Banned
I have not compared "average" at all. What I stated is that compromises are made on both sides, which is normal, and that having a split pool of RAM on the same bus means you CAN have contention issues, along with possible space issues (i.e. you fall out of the 10GB into the slower 6GB). KB gives a good but extreme example below to prove the point. As I ALSO say in the video, the slow pool is still very fast, but higher demand means higher requirements for both consoles.

Just as I also state that the XSX will always push more pixels and RT by the relative % over the PS5; this is just fact and is all part of the piece. Once you stop reading things from a single POV, you can see I am simply talking about potential here. If we are looking to clear up the specs, the bandwidth figures of both systems are best-case scenarios; they will almost never be hit (aside from loading a level at start) on EITHER system, just like every spec ever provided in the history of spec provisions.


Game creators have full control over the amount of memory their games consume. Do you see any games approaching 10GB of VRAM usage on PC today, even with all the crazy ultra settings? The Witcher 3 running at 4K with all the bells and whistles takes up about 3.2GB of VRAM. What makes you think 10GB won't be enough for a console, or that there would ever be a scenario where developers would be forced to tap the slower pool? Even if, in 5 years, games tripled the usage of The Witcher 3 at 4K, they would still fall within the 10GB of VRAM of the Xbox Series X.

In your video, you couldn't over-emphasize the importance of memory bandwidth and the impact it has on performance; you even called it the PS4 Pro's Achilles' heel compared to the X1X. But when it comes to PS5 vs XSX, you normalize it by saying the XSX has its bottlenecks, when in reality it may never have to access memory outside of the 10GB for games. So for anything under 10GB of VRAM usage, we're looking at 560GB/s vs 448GB/s, and that's significant, aside from the TF number, which was your point to begin with.
 
Last edited:

NXGamer

Member
Game creators have full control over the amount of memory their games consume. Do you see any games approaching 10GB of VRAM usage on PC today, even with all the crazy ultra settings? The Witcher 3 running at 4K with all the bells and whistles takes up about 3.2GB of VRAM. What makes you think 10GB won't be enough for a console, or that there would ever be a scenario where developers would be forced to tap the slower pool? Even if, in 5 years, games tripled the usage of The Witcher 3 at 4K, they would still fall within the 10GB of VRAM of the Xbox Series X.

In your video, you couldn't over-emphasize the importance of memory bandwidth and the impact it has on performance; you even called it the PS4 Pro's Achilles' heel compared to the X1X. But when it comes to PS5 vs XSX, you normalize it by saying the XSX has its bottlenecks, when in reality it may never have to access memory outside of the 10GB for games. So for anything under 10GB of VRAM usage, we're looking at 560GB/s vs 448GB/s, and that's significant, aside from the TF number, which was your point to begin with.
The Witcher 3 is a bad example, as it is a very old last-gen game that is much lower in demands than even games of the past 12-24 months. And you are again missing the next stage of a gen shift: RT alone will add many, many MBs of data that now need to reside within RAM or be close at hand, on top of pushing 4K resolutions, higher quality textures, better materials, more frames of animation, sound effects and real-time acoustics, etc.

Take a game like Doom Eternal now on the X1X: even with 9GB of RAM, it runs lower textures and effects than an 8GB PC GPU. As I state in the video, out of the gate I expect the XSX to be better in some areas over the PS5 (resolution/bandwidth scenarios, for example), but the SSD and compression techniques that BOTH companies have invested big efforts in tell you they know this will become an issue. Only time will tell WHEN the RAM/GPU/SSD areas become an issue for both consoles; some will impact the PS5 first and some will impact the XSX worse. This is why I am looking forward to my comparisons this gen, as it has the potential for much bigger changes and gaps in areas that we did not see in the current gen, despite its much bigger gap in specs.
 
Last edited:

Journey

Banned
The Witcher 3 is a bad example, as it is a very old last-gen game that is much lower in demands than even games of the past 12-24 months. And you are again missing the next stage of a gen shift: RT alone will add many, many MBs of data that now need to reside within RAM or be close at hand, on top of pushing 4K resolutions, higher quality textures, better materials, more frames of animation, sound effects and real-time acoustics, etc.

Take a game like Doom Eternal now on the X1X: even with 9GB of RAM, it runs lower textures and effects than an 8GB PC GPU. As I state in the video, out of the gate I expect the XSX to be better in some areas over the PS5 (resolution/bandwidth scenarios, for example), but the SSD and compression techniques that BOTH companies have invested big efforts in tell you they know this will become an issue. Only time will tell WHEN the RAM/GPU/SSD areas become an issue for both consoles; some will impact the PS5 first and some will impact the XSX worse. This is why I am looking forward to my comparisons this gen, as it has the potential for much bigger changes and gaps in areas that we did not see in the current gen, despite its much bigger gap in specs.

The problem with your argument is that the PS5 also needs some of that memory for other functions, just as the XSX does: the OS alone can take up 2.5GB of RAM, and then we're talking about using ray-traced audio next gen, which will also chew up memory. Even if Sony manages to be frugal with these system functions, I can't see them not using close to 6GB; and if by some sort of magic they only use up 5GB, that leaves them with a 1GB advantage? So let's not be disingenuous here: the PS5 will not have 6GB of extra VRAM if the XSX sticks to using just the 10GB of its fast 560GB/s allocation.

The bottom line is, by the time we get to the point where we're exceeding 10GB of VRAM, this whole squabble between PS5 and XSX will be irrelevant. At launch, consoles are the best value, which is why I buy a cutting-edge console at launch; then, two years down the road, I upgrade my PC and get the PS5 Pro (and I'm sure there will be one), at the perfect time, when Sony's exclusives are coming into stride.

Since I'm interested in performance per dollar, getting a PS5 at launch only to look forward to the point where games might exceed 10GB of VRAM is dumb; by then I'll be rocking the PS5 Pro and playing the new Uncharted and GoW in their full glory.
 
Funny thing is that the third-party SSDs that will be usable in the PS5 seem like they will be more expensive than MS's proprietary solution. (And it is unknown what you will be able to use on the PS5; for now you are limited to the......875GB that the console provides.)
We don't know if they will be more or less expensive, though at first I suspect SSDs that transfer over 7GB/s aren't going to be on the cheap side... However, there is a reason why they are expected to be expensive, unlike MS's solution.

I see so much bad analysis around (not just Sony or MS fans)...
 

Journey

Banned
Question: When these faster PC SSDs arrive, the 7GB/s for example, can MS/Seagate choose those to be used as their external solution or is the port limited?
 
Question: When these faster PC SSDs arrive, the 7GB/s for example, can MS/Seagate choose those to be used as their external solution or is the port limited?

Speed is limited by the port. It won't go faster than 2.4GB/s.

Or trying to use them via USB 3.2?

I think you could only use them for storage or play older games ( obviously ) :

"The console will still support external USB 3.2 hard drives, and you can store Xbox Series X games on them, but you won’t be able to run them. They have to be run from either the internal hard drive or the custom units."
 
Last edited:

NXGamer

Member
The problem with your argument is that the PS5 also needs some of that memory for other functions, just as the XSX does: the OS alone can take up 2.5GB of RAM, and then we're talking about using ray-traced audio next gen, which will also chew up memory. Even if Sony manages to be frugal with these system functions, I can't see them not using close to 6GB; and if by some sort of magic they only use up 5GB, that leaves them with a 1GB advantage? So let's not be disingenuous here: the PS5 will not have 6GB of extra VRAM if the XSX sticks to using just the 10GB of its fast 560GB/s allocation.

The bottom line is, by the time we get to the point where we're exceeding 10GB of VRAM, this whole squabble between PS5 and XSX will be irrelevant. At launch, consoles are the best value, which is why I buy a cutting-edge console at launch; then, two years down the road, I upgrade my PC and get the PS5 Pro (and I'm sure there will be one), at the perfect time, when Sony's exclusives are coming into stride.

Since I'm interested in performance per dollar, getting a PS5 at launch only to look forward to the point where games might exceed 10GB of VRAM is dumb; by then I'll be rocking the PS5 Pro and playing the new Uncharted and GoW in their full glory.
As I state in the video, out of the gate I expect the XSX to be better in some areas over the PS5 (resolution/bandwidth scenarios, for example), but the SSD and compression techniques that BOTH companies have invested big efforts in tell you they know this will become an issue. Only time will tell WHEN the RAM/GPU/SSD areas become an issue for both consoles; some will impact the PS5 first and some will impact the XSX worse. This is why I am looking forward to my comparisons this gen, as it has the potential for much bigger changes and gaps in areas that we did not see in the current gen, despite its much bigger gap in specs.
 

Journey

Banned
Speed is limited by the port. It won't go faster than 2.4GB/s.

Or trying to use them via USB 3.2?

I think you could only use them for storage or play older games ( obviously ) :

"The console will still support external USB 3.2 hard drives, and you can store Xbox Series X games on them, but you won’t be able to run them. They have to be run from either the internal hard drive or the custom units."
Limited by the port and the I/O chip inside the console. The speed of the SSD is not the only factor.


I see, thanks.
 

Evilms

Banned
Summary

  • If Sony can run their GPU at the high frequency of 2.23GHz, it's because the cooling solution they have designed must be effective this time around.
  • Despite the similarities between the two consoles, Sony and Microsoft have opted for two different strategies:
- The Xbox Series X has a larger SoC with 56 CUs (52 active) and a constant but lower frequency compared to the PS5, with variable power and temperatures.

- The Sony console is the opposite: a smaller SoC with 40 CUs (36 active), but in return much higher, variable frequencies, with constant power and temperatures.

  • The variable frequencies on PS5, whether for the CPU (up to 3.5GHz) or the GPU (up to 2230MHz), are neither an overclock nor a simple boost mode; it's simply the maximum level of the chip, and it will run at this theoretical limit when needed. According to NXG, the console will run at 10.3TF most of the time, and the drops will be only 50MHz here and there from this maximum frequency, depending on the needs of the games and the choices of the developers.
  • The cooling system seems to be designed around power-level consistency, which means chip performance will always be consistent on every PS5; every game will run at maximum performance most of the time and will not need to worry about power consumption.
  • On paper the XSX has ~1.8TF (+18%) more than the PS5, which means more pixels on the screen. On the other hand, a game designed around the RAM sub-system, one which exploits the possibilities of the PS5's SSD, can't be ported to XSX without drastic compromises.
  • It's just a theory: the PS5 could have up to 15.5GB of its 16GB of RAM available for games, with operating-system caching on the SSD freeing up RAM, but that remains to be verified in practice.
  • The PS5's SSD is 55 times faster than the PS4/XB1's hard drive (which was the bottleneck of the older generation) and more than twice as fast (+129%) as the Xbox Series X's.
  • On the whole, the available bandwidth of the XSX at its best is 25% faster. The bandwidth on PS5 is a constant 448GB/s across all eight 2GB memory chips; it is therefore 20% slower than the XSX's 560GB/s on its 10GB, but 33% faster than the 336GB/s on the remaining 6GB. For him, the PS5's 16GB of RAM with constant bandwidth remains a better and easier choice for developers.
  • On game resolution, the XSX will mostly have the better resolution, but in reality the difference will only be ~15% compared to the PS5 (versus 40% between the PS4 and the XBO), so the difference will be smaller.
  • Concerning ray tracing, he thinks the XSX will have the advantage; by how much, we'll have to wait and see in the games.
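The percentage claims in the summary can be verified with the spec-sheet numbers directly:

```python
# Quick arithmetic check of the percentage claims above (spec-sheet numbers).
xsx_fast, xsx_slow, ps5_bw = 560, 336, 448   # memory bandwidth, GB/s
xsx_tf, ps5_tf = 12.15, 10.28                # peak compute, TFLOPS
xsx_ssd, ps5_ssd = 2.4, 5.5                  # raw SSD throughput, GB/s

print(f"XSX fast pool vs PS5: +{xsx_fast / ps5_bw - 1:.0%}")   # +25%
print(f"PS5 vs XSX slow pool: +{ps5_bw / xsx_slow - 1:.0%}")   # +33%
print(f"XSX vs PS5 compute:   +{xsx_tf / ps5_tf - 1:.0%}")     # +18%
print(f"PS5 vs XSX raw SSD:   +{ps5_ssd / xsx_ssd - 1:.0%}")   # +129%
```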


--------------------------------------------------------------------------------


NX Gamer giving a class lesson to Alex Potato (dictator).
Geez, what a gap in knowledge.
 
Last edited:

geordiemp

Member
The problem with your argument is that the PS5 also needs some of that memory for other functions just as XSX does, the OS alone can take up 2.5GB of ram, then we're talking about using ray traced audio next gen which will also chew up memory. Even if Sony manages to be frugal with these system functions, I can't see them not using close to 6GB, and if by some sort of magic they only use up 5GB, that leaves them with a 1GB advantage? So let's not be disingenuous here, the PS5 will not have 6GB of extra VRAM if XSX sticks to just using the 10GB of its fast 560GB/s allocation.

The bottom line is, by the time we get to the point where we're exceeding 10GB of Vram, this whole squabble between PS5 and XSX will be irrelevant. At launch, consoles are the best value and is the point why I buy a cutting edge console at launch, then 2 years down the road upgrade my PC and will get the PS5 Pro which I'm sure there will be one, and at the perfect timing when Sony's exclusives are coming into stride.

Since I'm interested in performance per dollar, getting a PS5 at launch only to look forward to the point where games might exceed 10GB of Vram is dumb; by then I'll be rocking the PS5 Pro and playing the new Uncharted and GoW at its full glory.

You're missing the point: even if VRAM never goes above 10GB, and the CPU/audio/other RAM is, say, 4GB in slow RAM...

If that 4GB is accessed, say, 50% of the time, the average bandwidth is around 450 GB/s, as you average the two speeds based on their percentage of use; that is what posters are saying.

I am sure the average will be higher than the PS4's, but it's not such a clear-cut, massive difference, is it? Add in the PS5's higher clocks and who knows?

If the whole game (VRAM, audio, CPU, everything) fits in 10GB or less, then the Series X will be much better...
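The averaging the post describes can be sketched in a few lines of Python. The 560 GB/s and 336 GB/s figures are the publicly quoted speeds of the XSX's two memory pools; `slow_frac`, the share of bus time spent accessing the slow pool, is a hypothetical input chosen for illustration:

```python
def average_bandwidth(fast_gbps=560.0, slow_gbps=336.0, slow_frac=0.5):
    """Time-weighted average bus bandwidth in GB/s.

    slow_frac is the fraction of bus time spent on the slow
    (336 GB/s) pool; the remainder goes to the fast (560 GB/s) pool.
    """
    return (1.0 - slow_frac) * fast_gbps + slow_frac * slow_gbps

# 50/50 split between the pools, as in the post above:
print(average_bandwidth(slow_frac=0.5))   # 448.0, i.e. "about 450"

# If the slow pool is rarely touched (e.g. 10% of bus time):
print(average_bandwidth(slow_frac=0.1))   # 537.6
```

The takeaway is simply that the effective bandwidth is a weighted mean of the two pool speeds, so the real-world figure depends entirely on how often the slow pool is hit.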
 
Last edited:

Journey

Banned
You're missing the point: even if VRAM never goes above 10GB, and the CPU/audio/other RAM is, say, 4GB in slow RAM...

If that 4GB is accessed, say, 50% of the time, the average bandwidth is around 450 GB/s, as you average the two speeds based on their percentage of use; that is what posters are saying.

I am sure the average will be higher than the PS4's, but it's not such a clear-cut, massive difference, is it? Add in the PS5's higher clocks and who knows?

If the whole game (VRAM, audio, CPU, everything) fits in 10GB or less, then the Series X will be much better...


The operating system, audio, etc., do not rely on extremely fast bandwidth; in fact, the bandwidth of the memory allocated to them is already overkill. You do NOT need to fit anything other than VRAM within the 10GB limit.
 

Jigsaah

Gold Member
Since some people are already accusing me of making too many threads... eh, what the hell.
Anyway, interesting analysis by NX Gamer (btw, he praises the XSX too, so don't call him a Sony fanboy).

Love the intro. LOL


Man I needed this explanation. I didn't know what the hell Cerny was talking about.
 

Jigsaah

Gold Member
Can we have at least one, and then let's discuss?

And for your second part here:-

"Anybody knows this, Nvidia tflops are different from AMD as they use different architectures this is well known. Tflops from nvidia are only interesting towards nvidia products. "

This just proves my point AGAIN: if Nvidia TFLOPS are not the same as AMD TFLOPS, then TFLOPS are not a 100% accurate reference, are they?

Also, I thought the above logic (not mine, by the way) states that Nvidia gets more from fewer TFLOPS? Yet in the tests I did (which are 100% accurate and valid, by the way), AMD, in this instance, is getting more from them, further emphasising the issue with focusing on TFLOPS.

Your thoughts please?
Sorry, I'm trying to follow here. Wouldn't TFLOPS matter if the XSX and PS5 are both using AMD TFLOPS, though? Isn't this the case? Both are using RDNA 2, so they are comparable, correct?
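For what it's worth, the quoted TFLOP figures on both sides are just the standard peak-FP32 arithmetic (shader count x 2 ops per clock x clock speed), using the publicly stated CU counts and clocks for each console; the calculation says nothing about how efficiently each architecture turns that peak into frames:

```python
def peak_tflops(compute_units, clock_ghz, shaders_per_cu=64, ops_per_clock=2):
    """Theoretical peak FP32 TFLOPS: shaders x 2 FMA ops x clock (GHz)."""
    return compute_units * shaders_per_cu * ops_per_clock * clock_ghz / 1000.0

print(f"XSX: {peak_tflops(52, 1.825):.2f} TF")  # ~12.15 TF (52 CUs @ 1.825 GHz)
print(f"PS5: {peak_tflops(36, 2.23):.2f} TF")   # ~10.28 TF (36 CUs @ up to 2.23 GHz)
```

The same formula applied to an Nvidia card yields that card's own peak number, but since the two vendors' architectures schedule work differently, equal peak TFLOPS do not imply equal game performance; comparing the two RDNA 2 consoles by TFLOPS is more defensible than comparing across vendors.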
 

geordiemp

Member
Operating System, Audio, etc., do not rely on extremely fast bandwidth, in fact the allocated memory bandwidth it has now is already overkill. You do NOT need to fit anything other than VRAM within the 10GB limit.

The average bandwidth depends on the percentage of time spent accessing the slow RAM; IT DOES NOT MATTER WHAT IS STORED THERE.

It does not matter what is stored in slow RAM; it depends on how often it is accessed.

If you spend 50% of your time accessing fast RAM and 50% accessing slow RAM, that bus will on average land between the fast and slow speeds, at about 450 GB/s...

ALL APU RAM ACCESS USES THE SAME BUS. IT'S NOT A PC WITH SEPARATE BUSES FOR VRAM AND CPU DATA.
 
Last edited:

Journey

Banned
The average bandwidth depends on the percentage of time spent accessing the slow RAM; IT DOES NOT MATTER WHAT IS STORED THERE.

It does not matter what is stored in slow RAM; it depends on how often it is accessed.

If you spend 50% of your time accessing fast RAM and 50% accessing slow RAM, that bus will on average land between the fast and slow speeds, at about 450 GB/s...

ALL APU RAM ACCESS USES THE SAME BUS. IT'S NOT A PC WITH SEPARATE BUSES FOR VRAM AND CPU DATA.


The majority of it is reserved by the operating system, which is hardly going to be accessed during gameplay.
 