
Matt weighs in on PS5 I/O, PS5 vs XSX and what it means for PC.

chilichote

Member
1. There is no evidence to suggest there will be pop-ins.
2. You do not know whether people will notice resolution differences, as games at lower resolutions typically look softer and less crisp in general, and DF will point this out.
3. Notice how you conveniently left out framerates. I'm not talking only 1st party. I'm talking multiplats as well.
4. The rest of what you said is another topic entirely.

Oh, the differences in resolution are so hard to spot nowadays. Reconstruction technology is so good that even DF will have trouble finding any differences.

Regarding the frame rates: not all games have stable FPS, and some are nevertheless among the most successful games, e.g. Fallout 4.
 
Oh, the differences in resolution are so hard to spot nowadays. Reconstruction technology is so good that even DF will have trouble finding any differences.

Regarding the frame rates: not all games have stable FPS, and some are nevertheless among the most successful games, e.g. Fallout 4.

It depends on the engine, and devs use different engines and techniques. And again, softer resolutions, lower-quality shadows, framerates, and ray tracing are more impactful than a 1-3 second loading-time advantage. Improved lighting can drastically change the look of a game.

A steady framerate versus a fluctuating one is noticeable and can hinder the experience throughout the entirety of a game.

And GPU tasks go beyond just resolution and lighting. There are other settings, like shadows, that can be affected. A better GPU will also allow for better AF and RDNA2 features like mesh shaders.
 
You're making shit up. You DO NOT KNOW if the I/O in the XSX uses the CPU. Please stop making shit up. You read that speculative armchair nonsense on gaming forums and nothing else. And if you do know, provide a link with a dev saying so. Even MATT says XSX is more powerful in MANY WAYS. Thomas Mahler (head of Moon Studios) the other day said to expect XSX games to have the resolution and FRAMERATE advantage.

So your word vs an actual dev
Lol dude, it's in the Velocity Architecture presentation video. They say 10% of a CPU core is used for the Velocity Architecture.
 

Exodia

Banned
I have had two warnings I have ignored for being disingenuous, misrepresenting facts and making up fanfic to engage in console wars. I am a massive baby.
:LOL:, sure they move around by magic...

Keep telling yourself whatever you prefer; games on the platform will look better than that by the middle or the end of the generation ;).

Why talk about stuff you have no idea about with someone who uses UE4 every day and embarrass yourself? I simply don't understand.
Secondly, I don't have to wait for the middle/end of the gen. I get my hands on the demo in Q1 2021.

If a laptop full of bottlenecks can run it at 1440p at 40 fps using the bottlenecked path of external SATA SSD > USB > CPU > system RAM > CPU > VRAM > GPU (no DirectStorage, no hardware decompression, no unified RAM, no SFS),

then a machine with an NVMe 4.0 SSD with hardware decompression on board > unified RAM > GPU will be able to run it at equal or better quality.

It's just that simple.
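To put rough numbers on that reasoning: the sustained streaming rate of any chain like this is capped by its slowest link. A minimal back-of-envelope sketch (Python), with purely illustrative bandwidth figures rather than measured values for any real machine:

```python
# Toy model (not a benchmark): the sustained streaming rate of a storage path is
# capped by its slowest link. All bandwidth figures are illustrative placeholders,
# not measured values for the laptop or either console.

def path_throughput_gbps(stages: dict[str, float]) -> float:
    """Return the bottleneck bandwidth (GB/s) of a chain of transfer stages."""
    return min(stages.values())

laptop_path = {
    "external SATA SSD": 0.5,          # hypothetical GB/s
    "USB link": 0.4,
    "CPU copy to system RAM": 3.0,
    "CPU copy to VRAM over PCIe": 6.0,
}

console_style_path = {
    "NVMe 4.0 SSD + hardware decompression": 4.0,  # hypothetical GB/s
    "unified RAM visible to GPU": 40.0,
}

print(f"laptop-style path:       ~{path_throughput_gbps(laptop_path):.1f} GB/s (bottleneck)")
print(f"NVMe + unified RAM path: ~{path_throughput_gbps(console_style_path):.1f} GB/s (bottleneck)")
```

Whatever the exact figures, the point is that a direct NVMe-to-unified-RAM path removes several of those intermediate hops.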
 

Exodia

Banned
Your response is like something off N4G for sure.

My reasoning is that PS5 first-party games will be made for Zen 2 and the super-fast SSD and will use the highest-quality assets, and therefore look better. The PS5 SSD can stream very high-quality assets like no other SSD. Proof? Go watch the UE5 demo.

And your reasoning is what? Terrafloppies?

I will state it again: games made with the highest-quality ultra assets, streamed from a super-fast SSD and built for Zen 2, will look a gen above stuff targeted at an HDD and Jaguar.

Nah, you didn't say that. You said: "games that do made for ps5 only from sony will look a gen above anything on XSX."

Keyword: Anything

Let me help you break down what you said:
- games made for Ps5 only
- games made by Sony
- will look a gen above ANYTHING on XSX

Why are you trying to revise history? You no longer feel confident after being called out on that clear BS statement?
 

kuncol02

Banned
The difference between their CPUs is small. 3.5 GHz vs 3.8 GHz. It's basically nothing.
Only when the CPU is at full power. Considering that the GPU takes way more power (like 2 to 3 times more) and is probably further from its sweet spot, the CPU could be much, much slower. We still don't know how low the CPU will go to allow the GPU to run at 2.2 GHz.
 

chilichote

Member
And again, softer resolutions, lower-quality shadows, framerates, and ray tracing are more impactful than a 1-3 second loading-time advantage. Improved lighting can drastically change the look of a game.

A steady framerate versus a fluctuating one is noticeable and can hinder the experience throughout the entirety of a game.

You massively overestimate the impact of resolution and frame rate. Reality says otherwise: the majority of people either don't notice or don't care. Otherwise they would all be playing on PC.

And as far as jerky frame rates are concerned: in the future, with TV sets that can handle VRR, the topic is off the table anyway.
 

ToadMan

Member
That's your excuse? A cross-gen game? Bungie also had parity with the Destiny XB1 and PS4 versions. Wait for the DF head-to-head, as one version will probably be more capable of holding native 4K, steadier framerates, etc...
But again, Bungie aims for parity.

It's not an excuse for anything - you made a statement "multiplats will look better on xsex" - and the first major game to test your statement proves it to be wrong.

Destiny is indeed a cross-gen multiplat. Exactly the game that the most powerful lowest-common-denominator console should have been able to throw around.

So, either the xsex isn't able to explode the PS5, or developers are targeting getting it to run well on PS5 and porting to xsex, making zero use of its extra power...

You're making things up. For instance, you do not know when Hellblade 2 is releasing. It's a full-on next-gen exclusive that's been in development ever since Hellblade 1 released. Bloober's new horror title is a next-gen-only game, fully demonstrates why they went next-gen only, and is slated as an XSX launch title. Thus far, PS5 has only Godfall confirmed as a launch title, and that looks cross-gen.

Ok, so "most powerful" console, then best for multiplats, then BC is king, and now that all of that is swept away by the very first cross-gen multiplat to be announced, you've gone for "but exclusives".

Hehe. Exclusives are Sony's strongest suit. I mean hyping an indie Dev and Hellblade.... yeah.... no.
 

ToadMan

Member
Even MATT says XSX is more powerful in MANY WAYS.

Thomas Mahler (head of Moon Studios) the other day said to expect XSX games to have the resolution and FRAMERATE advantage.

Matt said that after being pushed to say anything nice about xsex - a very generic statement to make. It might be better for putting a drink on, or using to keep a door open and those are the many ways it's more powerful.


Thomas Mahler was the one who said xsex was the lowest common denominator... That's what MS should put on the box - Welcome to the Xbox Series X - lowest common denominator gaming for all lol

He also said those things on a gaming forum. The very thing you said shouldn't be used for an argument rofl

You really are tying yourself up in knots now.
 
Matt said that after being pushed to say anything nice about xsex - a very generic statement to make. It might be better for putting a drink on, or using to keep a door open and those are the many ways it's more powerful.


Thomas Mahler was the one who said xsex was the lowest common denominator... That's what MS should put on the box - Welcome to the Xbox Series X - lowest common denominator gaming for all lol

He also said those things on a gaming forum. The very thing you said shouldn't be used for an argument rofl

You really are tying yourself up in knots now.
More TFLOPS, more bandwidth, higher clock rates for the CPU and GPU, more CUs on the GPU, and somehow the lowest common denominator, because somehow a storage solution is now a GPU.
 
Thomas Mahler was the one who said xsex was the lowest common denominator... That's what MS should put on the box - Welcome to the Xbox Series X - lowest common denominator gaming for all lol

He also said those things on a gaming forum. The very thing you said shouldn't be used for an argument rofl

Please do not spread FUD. He never said that. He said:

I would be shocked if most third party developers would not just develop their games for the lowest common denominator. I mean, there's literally 0 chance that levels will get changed just because the PS5 can load them faster, simply because it's way too expensive and work intensive to do that.

Where exactly is he saying that this is the XSX? He is clearly talking about PC SSDs, which will of course still be way slower than the Xbox Series X SSD with all its fast I/O.
 

Handy Fake

Member
Apropos of nothing, but I was watching a playthrough/review of the C&C remakes last night. Apparently the frame rate was very juddery for the first 30 minutes of gameplay, and it seems to be a common issue that was solved by installing the game to an SSD instead of a mechanical drive. Has me wondering.
 

ToadMan

Member
More TFLOPS, more bandwidth, higher clock rates for the CPU and GPU and somehow the lowest common denominator.

That's what a developer says... take it up with him. I'm a humble messenger.

Still all that extra oomph and xsex can only manage Destiny 2 at 4k 60fps - identical to PS5....
 

Kenpachii

Member
Your response is like something off N4G for sure.

My reasoning is that PS5 first-party games will be made for Zen 2 and the super-fast SSD and will use the highest-quality assets, and therefore look better. The PS5 SSD can stream very high-quality assets like no other SSD. Proof? Go watch the UE5 demo.

And your reasoning is what? Terrafloppies?

And I gave reasons; no need to call people morons. Try something that argues the point instead of hyperbole and ad hominem, that just makes you look moronic.

I will state it again: games made with the highest-quality ultra assets, streamed from a super-fast SSD and built for Zen 2, will look a gen above stuff targeted at an HDD and Jaguar.

It's a tech demo, mate, it's not a realistic environment. Just think about it.
Why push multiple 8K textures for an object when the resolution is only 1440p?
What's the difference in quality between 4K and 8K textures at lower resolutions? Nothing.
What's the space requirement going from 2K to 4K to 8K textures? Each step is roughly 4x the data (see the sketch after this list).
When games are already 170 GB, have fun with 500 GB games on the PS5. Oh wait, nobody can download that from their digital platform, the PS5 SSD is completely filled after one game, and no Blu-ray can store it. Doesn't seem likely to happen, does it?
Why didn't they use 4K textures, since it doesn't matter? Tech demo.
Was the entire SSD speed used on the PS5, or only part of it? We don't know, because for some reason Tim doesn't want to talk about specifics here.
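For the texture-size point, here is the quick arithmetic. Each doubling of texture edge length roughly quadruples the texel count, and therefore the raw data. The sketch assumes uncompressed RGBA8 (4 bytes per texel) purely for illustration; real assets use block compression and Kraken-style packing, which shrink the absolute sizes but not the ratio between steps:

```python
# Rough texture-size arithmetic: texel count (and therefore raw storage) scales with
# the square of the edge length, so each step 2K -> 4K -> 8K is ~4x more data.
# Assumes uncompressed RGBA8 (4 bytes/texel) purely for illustration; real assets use
# block compression, which shrinks the absolute sizes but not the ratio.

BYTES_PER_TEXEL = 4

for edge in (2048, 4096, 8192):
    size_mib = edge * edge * BYTES_PER_TEXEL / (1024 ** 2)
    print(f"{edge}x{edge}: {size_mib:6.0f} MiB per texture (uncompressed)")

# 2048x2048: 16 MiB, 4096x4096: 64 MiB, 8192x8192: 256 MiB -> ~4x per step
```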

Why are the demo source files not released on PC so PC modders could experiment with them? Because for some reason he doesn't want people to sift through the data, and only provides bits of it for whatever reason, even though that demo would be up and running on PC within a day. All you can conclude from this is that he has signed something with Sony so he can't do or talk about specific parts, or he just doesn't want to.

All in all, the SSD performance, and the performance and quality you get out of it, is heavily overblown and not even realistic. You want more stuff on your screen? You also need more hardware to project that stuff: more memory, more CPU performance, and more GPU performance.

Guess why the demo ran at 1440p at 30 fps? Because the hardware can't keep up, and that's in a completely scripted environment.

I could go on for a while with this; the thing is, tech demos are tech demos and that's about it. They do specific things in order to demonstrate new ways of doing things under the most optimal conditions.

My guess for next gen is that the SSD will be used for a bit faster loading over the Xbox and that's about it. Maybe Sony first party will use it for a bit more, but frankly, who cares about first-party developers anyway; they will make games no matter how trash the box is at the end of the day. They could have released a PS2 and they would still be making games on it. With 3rd party, however, you will see lots of performance issues straight out of the gate, for the simple reason that the GPUs are simply not fast enough.

There is a reason why that GPU runs at 2.2+ GHz; Cerny knows this.
 
More TFLOPS, more bandwidth, higher clock rates for the CPU and GPU and somehow the lowest common denominator.
GPU is in favour of Xbox, easily, by 18%.

RAM is in favour of Xbox as well, until a game needs more than 10 GB of RAM. Then PS5 will perform better.

CPU is a toss-up. The XSX CPU needs 10% of a core for the Velocity Architecture, while the PS5 doesn't need that.
 

ToadMan

Member
Please do not spread FUD. He never said that. He said:



Where exactly is he saying that this is the XSX? He is clearly talking about PC SSDs, which will of course still be way slower than the Xbox Series X SSD with all its fast I/O.

He's saying it right there in the text you quoted. If the PS5 isn't the lowest common denominator, there's only one other next gen console... simple deduction.
 
GPU is in favour of Xbox, easily, by 18%.

RAM is in favour of Xbox as well, until a game needs more than 10 GB of RAM. Then PS5 will perform better.

CPU is a toss-up. The XSX CPU needs 10% of a core for the Velocity Architecture, while the PS5 doesn't need that.

This may be naive, but with the XSX CPU being 3800 MHz and the PS5 CPU being 3500 MHz, isn't that only about an 8.5% difference? They should be relatively on the same playing field, give or take ~2%, if that's the case. I'm only looking at clock speed, however.

It's been mentioned by Phil Spencer that the SSD in the XSX can be used as RAM. Clearly much slower than PS5's, but it still can be used as such.



That's what a developer says... take it up with him. I'm a humble messenger.

Still all that extra oomph and xsex can only manage Destiny 2 at 4k 60fps - identical to PS5....
Bungie's garbage engines are not a reliable way to calculate system performance.
 

ToadMan

Member
Bungie's garbage engines are not a reliable way to calculate system performance.

Still sore about being dumped by the one and only Halo maker?

Still, you make a good point - trash or not, PS5 throws it around with the same performance as the xsex. A cross-gen multiplat, no less - exactly PS5's weak spot according to this thread.... What happened to the monster of terror flops chewing up engines left and right?
 
This may be naive, but with the XSX CPU being 3800 MHz and the PS5 CPU being 3500 MHz, isn't that only about an 8.5% difference? They should be relatively on the same playing field, give or take ~2%, if that's the case. I'm only looking at clock speed, however.

It's been mentioned by Phil Spencer that the SSD in the XSX can be used as RAM. Clearly much slower than PS5's, but it still can be used as such.




Bungie's garbage engines are not a reliable way to calculate system performance.
Most games are designed for 8C/16T even this gen, so the CPU comparison is 3.5 vs 3.6 (3.8 is for 8C/8T).
But yeah, the CPUs should be close enough that it won't matter.

The issue with the XSX SSD is not speed, it's latency, but it remains to be seen, as many devs might learn how to optimize it and use it as virtual RAM.
 
Most games are designed for 8C/16T even this gen, so the CPU comparison is 3.5 vs 3.6 (3.8 is for 8C/8T).
But yeah, the CPUs should be close enough that it won't matter.

The issue with the XSX SSD is not speed, it's latency, but it remains to be seen, as many devs might learn how to optimize it and use it as virtual RAM.
If you're speaking to the SSD with regard to latency, I'm in agreement there. Sony clearly has an edge, and I'm still curious to see their SSD in action. On paper it makes sense, but surely TFLOPS matter to some degree in some ways. But the more I hear about asset streaming, the more I feel like Sony's SSD is that big of a deal. Either way, I'm excited for next gen and it can't come soon enough.
 
If you're speaking to the SSD with regard to latency, I'm in agreement there. Sony clearly has an edge, and I'm still curious to see their SSD in action. On paper it makes sense, but surely TFLOPS matter to some degree in some ways. But the more I hear about asset streaming, the more I feel like Sony's SSD is that big of a deal. Either way, I'm excited for next gen and it can't come soon enough.
Of course TF matters. That's why XSX will consistently have the higher framerate/resolution. The question is whether it's to a point where we notice it or not. That remains to be seen. Also, XSX will have better RT by 25% or so.

Each console has its own benefits, while both are close.
 

Elog

Member
When both consoles' overall technology specifications were released I was convinced that XSX was the more powerful machine with a small - but clear - margin. PS5 had an edge on sound. I was more or less convinced I would still buy a PS5 due to the games but felt a small worry regarding the hardware.

Then I started to spend time on the Cerny speech and started to read up on I/O in more detail. While I am no software engineer, I have done my fair share of coding in my spare time and have spent many hours over the years building PC systems with custom loops et cetera - it is one of my hobbies. At some point I started to realise the implications of a low latency/high bandwidth stream of graphical assets straight into VRAM.

Then the UE5 demo was shown.

After that I started to read up on GPU utilisation and internal bottlenecks. I had never thought about the sometimes quite low utilisation of GPU hardware even when bottlenecked from an FPS point of view at a certain resolution. Once the I/O between the GPU, the GPU caches and VRAM started to hit me, I had to go back to the Cerny speech again and started to realise how variable frequency plus 'cache scrubbers' (as he called them) can - on paper - un-bottleneck GPU hardware to a significant extent. Then I started to realise how TFLOPs can be very misleading if the TFLOPs actually do not hit your screen.

Maybe this is just a marketing narrative from Sony. After all the reading and the comments from developers, I am betting that Sony has created something quite special this generation and that we are in for a treat. And while MS has chosen a family approach to the hardware design - with the family defined as PC-Lockhart-XSX - they might also, by design, have kept many of the bottlenecks of the PC architecture in the process.

I fully acknowledge that I might be wrong in my analysis but I am very far from certain today that XSX is the more competent piece of hardware. Maybe I have just been fooled by Sony's marketing. We will all know over the course of the next 6 months.
 

DeepEnigma

Gold Member
Haha 🤣... yep, mad last-minute overclock (yep, sounds likely :rolleyes:)... it sounds more and more like your wet dream of domination met with an 18% FLOPS win is really disappointing you. You won, get over it ;).

I find it kind of cute trying to see if yet another narrative, this micro center stuff, sticks :).

Cerny mentioned they could have clocked it even higher, but had to cap it due to the logic not being able to keep up with the clock speeds. It was always planned for speed.
 

Croatoan

They/Them A-10 Warthog
I can't believe some idiot actually thinks that because of a storage solution the PS5 will have better graphics than the PC... FUCKING LOL.

3080 Ti: Ultra 4K/60fps+ (ALL games)
XBX: High/Ultra 4K/30fps (most games)
PS5: High (possibly some Ultra settings) 4K/30fps (most games), but really fast loading and better asset streaming for LOD

A hard drive does not make up for a weak GPU. That doesn't mean the PS5 isn't interesting or worthy of being purchased for those that don't want a PC. The odd man out here is the XBX, as there is zero reason to own one, since a PC gets you XBX exclusives and way better performance.

Of course the 4080 Ti will come out, and PC will adopt something similar to the PS5, and any advantage the consoles have will be gone within a few years.
 
Why talk about stuff you have no idea about with someone who uses UE4 every day and embarrass yourself? I simply don't understand.
Secondly, I don't have to wait for the middle/end of the gen. I get my hands on the demo in Q1 2021.

If a laptop full of bottlenecks can run it at 1440p at 40 fps using the bottlenecked path of external SATA SSD > USB > CPU > system RAM > CPU > VRAM > GPU (no DirectStorage, no hardware decompression, no unified RAM, no SFS),

then a machine with an NVMe 4.0 SSD with hardware decompression on board > unified RAM > GPU will be able to run it at equal or better quality.

It's just that simple.
It is even simpler.

Here is a Microsoft executive publicly acknowledging the huge SSD/I/O throughput difference 👇

 

geordiemp

Member
Nah, you didn't say that. You said: "games that do made for ps5 only from sony will look a gen above anything on XSX."

Keyword: Anything

Let me help you break down what you said:
- games made for Ps5 only
- games made by Sony
- will look a gen above ANYTHING on XSX

Why are you trying to revise history? You no longer feel confident after being called out on that clear BS statement?

I also posted numerous times on the same topic, with the wording changed for each post. What I meant, as clarified, was anything on XSX targeted for HDD (with last gen in mind), in a thread about the PS5 I/O topic, and you also know I meant big games that would need it, like a Halo or HZD, not a Tetris or chess.

Next time I will pass all statements through my lawyers to make you happy.
 

geordiemp

Member
It's a tech demo, mate, it's not a realistic environment. Just think about it.
Why push multiple 8K textures for an object when the resolution is only 1440p?
What's the difference in quality between 4K and 8K textures at lower resolutions? Nothing.
What's the space requirement going from 2K to 4K to 8K textures? Each step is roughly 4x the data.
When games are already 170 GB, have fun with 500 GB games on the PS5. Oh wait, nobody can download that from their digital platform, the PS5 SSD is completely filled after one game, and no Blu-ray can store it. Doesn't seem likely to happen, does it?
Why didn't they use 4K textures, since it doesn't matter? Tech demo.
Was the entire SSD speed used on the PS5, or only part of it? We don't know, because for some reason Tim doesn't want to talk about specifics here.

Why are the demo source files not released on PC so PC modders could experiment with them? Because for some reason he doesn't want people to sift through the data, and only provides bits of it for whatever reason, even though that demo would be up and running on PC within a day. All you can conclude from this is that he has signed something with Sony so he can't do or talk about specific parts, or he just doesn't want to.

All in all, the SSD performance, and the performance and quality you get out of it, is heavily overblown and not even realistic. You want more stuff on your screen? You also need more hardware to project that stuff: more memory, more CPU performance, and more GPU performance.

Guess why the demo ran at 1440p at 30 fps? Because the hardware can't keep up, and that's in a completely scripted environment.

I could go on for a while with this; the thing is, tech demos are tech demos and that's about it. They do specific things in order to demonstrate new ways of doing things under the most optimal conditions.

My guess for next gen is that the SSD will be used for a bit faster loading and that's about it. Maybe Sony first party will use it for a bit more, but frankly, who cares about first-party developers anyway; they will make games no matter how trash the box is at the end of the day. They could have released a PS2 and they would still be making games on it. With 3rd party, however, you will see lots of performance issues straight out of the gate, for the simple reason that the GPUs are simply not fast enough.

There is a reason why that GPU runs at 2.2+ GHz; Cerny knows this.

NO, you don't understand. Supersampling from very high detail down to a lower resolution is about the quality of the pixels that result; that's why movie-quality assets look amazing even at 720p.

It's not about native resolution; that is the point to be learned from the UE5 demo. 1440p upscaled with higher-quality assets looks better than native resolution with lower-quality ones by a long shot.

To take it to the extreme, go look at a PC playing old console games in 4K; it's not about the native resolution, is it?

If you want to brush the asset-quality-vs-resolution point aside as "it's just a demo", please do. You know Sony will push high-quality assets tomorrow, so get used to it; it's a long generation.

Also, God of War and HZD were 46 GB; add Kraken compression, remove all duplicates (which is a lot), and with discs now 100/128 GB, there you have it. Good luck does not come into it; Sony will push asset streaming and leverage the SSD IMO.
 

killatopak

Member
Hard to judge without the games. We saw even smaller differences in CPUs in current-gen systems, where in CPU-intensive scenes it was the weaker XB1 that had the stable framerate, or at least fewer frame drops. Not to mention PS5's 3.5 GHz assumes the CPU runs at its full capabilities, taking away some power from the GPU, so the difference might be even bigger if the devs focus on full GPU utilization instead. Cerny during his presentation mentioned that before they incorporated SmartShift the CPU had a hard time running at 3 GHz, so I don't know, I expect somewhere between 3.2-3.3 GHz in real-world applications. If/how much impact it will make? Only time will tell.
I'm not entirely sure either whether the difference will be big or small, since there are no games for them yet, but do consider that there are differences between XBO and PS4 games because their CPUs are so anemic in the first place. It's different this time, with them having very respectable CPUs.


Precisely. So if you're presuming the difference in clock speeds to be minimal, can you respond to why the PS5 is considered to be the fastest next-gen console?
The I/O system as a whole. Not only the SSD, but everything that surrounds it. The system seems to be made to put as little strain and as few bottlenecks on the system as possible. Traditionally, it's the CPU's task to decompress data and facilitate its transfer from storage to RAM and on to the corresponding part that needs it. The custom units on the PS5 remove this burden from the CPU.
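For a concrete picture of that traditional path, here is a minimal sketch of CPU-side loading: the file is read, then decompressed on a CPU thread before anything can use it. This is the step a dedicated decompression block takes off the CPU. Plain Python with zlib, purely for illustration; the consoles use Kraken/BCPack-style codecs and DMA engines, not anything like this:

```python
# Minimal illustration of the traditional CPU-side loading path (not console code):
# read compressed bytes from storage, then burn CPU time decompressing them before
# the data can be handed to the renderer. A hardware decompression block performs
# this middle step so the CPU cores stay free for game logic.
import time
import zlib

def load_asset_cpu_path(path: str) -> bytes:
    with open(path, "rb") as f:
        compressed = f.read()           # storage -> system RAM
    start = time.perf_counter()
    data = zlib.decompress(compressed)  # CPU does the decompression work
    print(f"decompressed {len(data)} bytes in {time.perf_counter() - start:.3f}s on the CPU")
    return data                         # would then be copied on to VRAM

if __name__ == "__main__":
    # hypothetical asset file, generated here just so the sketch is runnable
    blob = zlib.compress(b"texture data " * 1_000_000)
    with open("asset.bin", "wb") as f:
        f.write(blob)
    load_asset_cpu_path("asset.bin")
```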

Only when the CPU is at full power. Considering that the GPU takes way more power (like 2 to 3 times more) and is probably further from its sweet spot, the CPU could be much, much slower. We still don't know how low the CPU will go to allow the GPU to run at 2.2 GHz.
Correct.
 
The difference between their CPUs is small. 3.5 GHz vs 3.8 GHz. It's basically nothing.

TBF couldn't the same be said of the GPUs, then? If a 300 MHz difference is basically nothing, an extra 100 MHz difference on top of that isn't really much of a big deal, either.

I don't know if I necessarily believe that in either instance. We already know the faster GPU clock in PS5 will help with the rate of processing data through the caches, and pixel fillrate in particular (and potentially RT intersection/bounce tests). We know a faster CPU generally helps framerates, though of course in both instances a game's engine and programming determine how efficiently those types of advantages translate into something useful.

I think the CPU clock speed differences will have some impact on certain things such as framerates and the rate of work on the CPU caches (as well as the rate at which instructions can be issued to the GPU), and maybe a couple of other things. Will it produce a world of difference though? Nah. And FWIW I don't think the differences in GPU clocks between the systems will produce a world of difference, either. Clocks on the GPU only influence so much.

Actually, frame rates might end up smoother on PS5. The XSX CPU still has to get involved with I/O, and the PS5 CPU is completely off that task. So don't assume a better GPU will result in better frame rates. Of course, some game engines that are GPU dependent might, but this all remains to be seen in real-world performance.

Yes, overall ray tracing will be better on XSX, but the PS5 can shoot more rays per CU as the GPU is clocked much higher, and that's desirable for ray tracing as well as CU count.

It's 1/10th of a single core, and that would be the core the OS resides on (on PS5 a core is reserved for the OS as well). Everything else in terms of the I/O is handled by things like the decompression block, which FWIW we don't know everything about just yet.

There's an advantage to XSX's setup, though, although it's limited. On PS5 the dedicated processor in the I/O block has to share access to RAM with the CPU, GPU, and Tempest. In hUMA architectures there's bus contention, i.e., when one component accesses the bus, the others have to wait. They can still operate on data in their caches, but have to wait until the bus is free before they can access RAM again.

This applies to the I/O block; when it's accessing RAM, the CPU, GPU, etc. have to wait their turn. On XSX, since a fraction of a core is still being used for I/O access to/from RAM, the CPU-bound game logic can still access RAM across the bus while that is being done. The GPU would still have to wait its turn (even if a dev were to decide to use the 6 GB pool, the GPU would still have to wait, since it isn't the CPU in this instance), but the CPU could still access the bus in this way.

I also suspect this is where the OS could switch access between the two pools of GDDR6 memory for I/O access to/from the RAM. Now, the I/O accessing the RAM this way might limit the game logic's RAM access by a fraction, but the point is, it would still have access to the bus during the task, whereas on PS5 the CPU has to wait until the dedicated unit in the I/O block finishes its access before it gets back to RAM.
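To make the contention point concrete, here is a deliberately crude toy model: several clients share one bus, the bus bandwidth is split among whoever is active, so each client's effective bandwidth drops when another client (such as an I/O unit doing a streaming burst) joins in. This is not how either console's memory controller actually arbitrates; it only illustrates the "when one component holds the bus, the others wait" idea:

```python
# Crude even-split toy model of a shared memory bus: the total bandwidth is divided
# among whoever is requesting in a given period, so each client's effective share
# shrinks as more clients compete. Purely illustrative -- real GDDR6 controllers
# interleave and prioritize requests in far more sophisticated ways.

BUS_GBPS = 448.0  # total bus bandwidth used for the toy model

def effective_bandwidth(active_clients: list[str]) -> dict[str, float]:
    """Split the bus evenly across whoever is requesting in this period."""
    share = BUS_GBPS / len(active_clients)
    return {client: round(share, 1) for client in active_clients}

print(effective_bandwidth(["CPU", "GPU"]))               # two-way split
print(effective_bandwidth(["CPU", "GPU", "I/O block"]))  # adding a streaming burst
```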
 

killatopak

Member
TBF couldn't the same be said of the GPUs, then? If a 300 MHz difference is basically nothing, an extra 100 MHz difference on top of that isn't really much of a big deal, either.

I don't know if I necessarily believe that in either instance. We already know the faster GPU clock in PS5 will help with rate of processing through data in the caches, and pixel fillrate in particular (and potentially RT intersection bounce tests). We know generally a faster CPU helps in framerates, though of course in both instances a game's engine and programming determine how efficiently those types of advantages come into being useful.

I think the CPU clock speed differences will have some impacts on certain things such as framerates and rate of work speed on the CPU caches (as well as the rate of which instructions can be issued to the GPU), and maybe a couple of other things. Will it produce a worlds of difference though? Nah. And FWIW I don't think the differences in GPU clocks between the systems will produce a worlds difference, either. Clocks on the GPU only influence but so much.
I agree with a lot of what you said, though I believe that in the GPU's case, a frequency difference is a lot more impactful. The biggest difference is the fact that their GPUs aren't equal. It's 52 vs 36. An increase in XSX frequency is a humongous difference. Their CPUs, on the other hand, from what I've heard, are basically the same, just like this gen, with only a frequency difference.

I'm not saying there won't be differences in the way a CPU impacts a game, I'm just saying it will be a lot more negligible unless the game is very CPU intensive. Something like an RTS game being ported to consoles, or maybe a very high and complex AI count, which destroyed the console versions of AC Unity.
 

Thirty7ven

Banned
TBF couldn't the same be said of the GPUs, then? If a 300 MHz difference is basically nothing, an extra 100 MHz difference on top of that isn't really much of a big deal, either.

100 MHz over 3500 isn't the same % increase as 100 MHz over 1800. But that's honestly quite meaningless, I just wanted to say I'm very impressed how you managed to conflate them both, CPU and GPU, to basically end up saying that GPU clocks mean very little.
 
I agree with a lot of what you said, though I believe that in the GPU's case, a frequency difference is a lot more impactful. The biggest difference is the fact that their GPUs aren't equal. It's 52 vs 36. An increase in XSX frequency is a humongous difference. Their CPUs, on the other hand, from what I've heard, are basically the same, just like this gen, with only a frequency difference.

I'm not saying there won't be differences in the way a CPU impacts a game, I'm just saying it will be a lot more negligible unless the game is very CPU intensive. Something like an RTS game being ported to consoles, or maybe a very high and complex AI count, which destroyed the console versions of AC Unity.

As far as most benchmarks show, frequency increases on a GPU of similar architecture to another GPU generally affect pixel fillrates and, as I mentioned before, the speed of data working through the caches. But GPUs with more hardware on them (larger GPUs, basically) tend to hold the advantage in the other areas, as those aren't strongly reliant on pushing clocks.

I agree that with things having shifted to GPGPU programming the CPU isn't as much of a focus for handling tasks as it was in the past, but it's still obviously a vital component. Again, it's responsible for things outside of framerate, such as instructing the GPU on rendering tasks; that code still has to run through the CPU before the GPU takes it and does its thing. But as you mention, they're also useful for tasks like AI (even a lot of that is seeing a shift to GPGPU, though, through asynchronous programming) where the CPU is important.

Basically, what I'm trying to say is: if the CPUs are more or less the same in terms of cores, caches, threads, etc., and a 300 MHz difference won't make a world of difference, then by that same logic, with the GPUs being mostly the same in a lot of key technologies and features, a 400 MHz difference won't make things worlds apart either, at least not to the degree that one GPU simply being notably larger will, knowing that most GPU performance is based on things other than clock speed.

But even in that case, it really comes down to what the game demands in terms of resources, and how it's programmed.
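A back-of-envelope way to see the "width scales compute, clocks scale fillrate" point: peak FP32 throughput goes with CU count × clock, while peak pixel fillrate goes with ROP count × clock. The CU counts and clocks below are the publicly stated figures; the 64 ROPs for both GPUs is the commonly reported number and should be treated as an assumption here:

```python
# Back-of-envelope: peak FP32 compute scales with CU count * clock, peak pixel
# fillrate with ROP count * clock. CU counts and clocks are the publicly stated
# figures; 64 ROPs for both GPUs is the commonly reported number (an assumption).

def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0  # 64 FP32 lanes per CU, 2 ops per FMA

def gpixels_per_s(rops: int, clock_ghz: float) -> float:
    return rops * clock_ghz                   # one pixel per ROP per clock (peak)

for name, cus, clock in (("XSX", 52, 1.825), ("PS5", 36, 2.23)):
    print(f"{name}: {tflops(cus, clock):5.2f} TF, {gpixels_per_s(64, clock):6.1f} Gpixel/s peak")
```

The wider GPU wins the compute total even at the lower clock, while the higher clock pushes the per-clock stages (like fillrate) ahead, which is roughly the trade-off being argued about here.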

100 MHz over 3500 isn't the same % increase as 100 MHz over 1800. But that's honestly quite meaningless, I just wanted to say I'm very impressed how you managed to conflate them both, CPU and GPU, to basically end up saying that GPU clocks mean very little.

Percentage increases don't mean anything if you don't know the context WRT them. No two system architecture components are doing the same thing or have the same weight to the performance of the system in the first place.

I did no conflating; I mentioned that if a 300 MHz difference between two similar processor components is being seen as basically nothing, then a 400 MHz difference between two other processor components could similarly be seen as basically nothing. Especially knowing, in the latter case, that a frequency difference only genuinely influences a small fraction of the overall performance of that given part/component, as the vast majority of benchmarks of similar components of the same architecture show.

I'm very impressed how you didn't see that coming.
 

Thirty7ven

Banned
I'm very impressed how basic math is still flying over your head. And you did conflate the two.

How can a 100 MHz difference, in this case a 3.5 GHz CPU vs 3.6 GHz CPU of the same architecture, tell us anything about the difference between a 1.8 GHz GPU and 2.2 GHz GPU of the same architecture?

It really puts it into perspective let me tell you.
 
^You're one to talk; you seem willing to bend any logic to suit your purposes. Show me where the math is wrong. 3.5 GHz < 3.8 GHz, 300 MHz difference (granted that's with SMT disabled). 1.825 GHz < 2.23 GHz, ~ 400 MHz difference.

Not ignoring any percentage differential; it simply doesn't matter to the scope of what was being discussed. That's why I mentioned the context of the percentages you want to cling so desperately to. Once you look at the context, you realize the percentages don't tell you nearly as much as you want them to.

You simply want to flaunt off some percentages and numbers without factoring anything else towards them. There's no difference between you doing that and everyone who was obsessing over wanting the bigger TF number earlier in speculation phase. It's the same misguided mentality.
 

onQ123

Member
If no load screens & fast I/O become a major selling point, it would also help promote cloud gaming, because they can use even faster SSDs on the server side.
 
all of which will not be cross gen, but rather taking full advantage of the most powerful console ever

PS5 having a small advantage of a mere few seconds of faster load times. Having a few seconds faster loading times is not much of a selling point.

Aaron Greenberg is that you??

Wow, I can't believe you're so simple-minded as to still think that the only thing the SSD can bring is a mere few seconds of faster load times.
 

geordiemp

Member
^You're one to talk; you seem willing to bend any logic to suit your purposes. Show me where the math is wrong. 3.5 GHz < 3.8 GHz, 300 MHz difference (granted that's with SMT disabled). 1.825 GHz < 2.23 GHz, ~ 400 MHz difference.

Not ignoring any percentage differential; it simply doesn't matter to the scope of what was being discussed. That's why I mentioned the context of the percentages you want to cling so desperately to. Once you look at the context, you realize the percentages don't tell you nearly as much as you want them to.

You simply want to flaunt off some percentages and numbers without factoring anything else towards them. There's no difference between you doing that and everyone who was obsessing over wanting the bigger TF number earlier in speculation phase. It's the same misguided mentality.

The TF difference is 18%, the GPU clock speed difference is 22%, the CPU at 3.5 vs 3.6 with SMT is not worth calculating, the SSD raw speed difference is over 100%, and the I/O difference nobody knows, but everything suggests PS5 is in another league there. Bandwidth is 400 GB/s vs 480 GB/s if 48 GB/s is used for non-GPU work (CPU and sound etc.) and the GPU uses less than 10 GB for graphics...

Did I miss anything?
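For reference, those percentages can be reproduced straight from the commonly cited spec-sheet figures (a quick sketch; the bandwidth split above is a rough estimate and isn't included here):

```python
# Quick check of the relative differences from the publicly stated spec figures.
def pct_diff(a: float, b: float) -> float:
    """How much larger a is than b, as a percentage."""
    return (a / b - 1) * 100

print(f"TFLOPS:     XSX over PS5  {pct_diff(12.155, 10.28):5.1f}%")  # ~18%
print(f"GPU clock:  PS5 over XSX  {pct_diff(2.23, 1.825):5.1f}%")    # ~22%
print(f"SSD (raw):  PS5 over XSX  {pct_diff(5.5, 2.4):5.1f}%")       # ~129%
print(f"CPU clock:  XSX over PS5  {pct_diff(3.6, 3.5):5.1f}%")       # ~3% (both with SMT)
```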
 

Boxman

Banned


For example on a PC you can only read and write blocks of the SSD. With this console's storage you can read and write bits.

I found this interesting and I wonder whether or not the XSX can do the same thing. I wouldn't know how it affects games however.
 

NickFire

Member
If no load screens & fast I/O become a major selling point, it would also help promote cloud gaming, because they can use even faster SSDs on the server side.
What does that have to do with the conversation? Edit: Never mind. Maybe it relates to PC gaming somehow.
 

sinnergy

Member
You guys never get tired of using this strawman, do you?
Just like the other side uses the SSD? In the end the GPU draws the picture, which benefits from more CUs and more bandwidth. Or is there another component that draws the final image?
 

Yoshi

Headmaster of Console Warrior Jugendstrafanstalt
Basically what we have known. Both have their advantages.

PS5 faster/Series X stronger.
Why do people keep repeating this nonsense?

- XSX is faster at computing everything and transferring work data
- PS5 is faster at transferring static game data into the pool of work data

Both are faster than the other in some respect; what is "stronger" even supposed to mean here, other than being faster?
 

Boxman

Banned
If AMD didn't upgrade RDNA2 clock speeds a lot, then I wouldn't be surprised if the opposite is true. That clock is really at the end of the power curve for current RDNA cards and would lower yields significantly. We will see more when RDNA2 cards finally hit the market. In addition, if the final PS5 is anything like the devkit, then it will be much more expensive to manufacture than the Xbox.
That Bloomberg article, however, says that the XSX is more expensive than the PS5 in terms of BOM.
 

FranXico

Member
Just like the other side uses the SSD? In the end the GPU draws the picture, which benefits from more CUs and more bandwidth. Or is there another component that draws the final image?
No, there isn't, and nobody ever claimed otherwise. But let me put it like this: the GPU can only draw what it sees loaded in VRAM, and it can't draw it before it's there.

In one case you have more CUs clocked below 2 GHz, and a pretty fast I/O pipeline.
In the other case you have fewer CUs, but clocked faster, and an even faster I/O pipeline.

There are valid ways to debate either advantages or disadvantages without misrepresenting what others say. And I'm not saying you did that. I only took the time to reply like this because you argued in good faith.
 