
Let's Clear Up Some Misconceptions On PS5 & XSX Specs, Shall We...

Which system do YOU think holds the overall performance advantage?


  • Total voters: 275
  • Poll closed.

psorcerer

Banned
They literally confirmed the expansion card runs at the same spec as the internal one; there is no "very high" chance of otherwise.
Two things about this SSD viewport streaming comment. It's not exclusive to PS5, and I'll tell you why.

1. The XSeX is said to be capable of 4.8 GB/s compressed SSD loading. In half a second that gives you 2.4 GB of compressed data you can swap out, just 1.6 GB shy of what Cerny said would be required for next-gen games on the PS5.

2. The XSeX features machine learning assisted smart loading of PORTIONS of assets in view with the Sampler Feedback Streaming feature. See below.

Sampler Feedback Streaming (SFS) – A component of the Xbox Velocity Architecture, SFS is a feature of the Xbox Series X hardware that allows games to load into memory, with fine granularity, only the portions of textures that the GPU needs for a scene, as it needs it. This enables far better memory utilization for textures, which is important given that every 4K texture consumes 8MB of memory. Because it avoids the wastage of loading into memory the portions of textures that are never needed, it is an effective 2x or 3x (or higher) multiplier on both amount of physical memory and SSD performance.

I don't understand how so many are missing the point here.

This more or less means, if it works, that where the PS5 may require 4 GB of asset data to display a changing scene, the XSeX may need no more than 2 GB...

Meaning their streaming rate as the viewport changes would be able to keep up with what PS5 is doing because it's effectively leaner to display a scene.
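For what it's worth, that arithmetic can be checked with a quick back-of-envelope sketch. The 4 GB budget and the 2x SFS multiplier are the claims from this post, not confirmed specs:

```python
# Back-of-envelope viewport streaming budgets; all numbers are claims
# from the discussion, not confirmed hardware specs.

def streamed_gb(rate_gb_s: float, window_s: float) -> float:
    """Data that can be streamed in a given time window, in GB."""
    return rate_gb_s * window_s

need_gb = 4.0                    # Cerny's cited per-half-second asset budget
raw = streamed_gb(4.8, 0.5)      # XSeX compressed rate over half a second
effective = raw * 2              # applying the claimed 2x SFS multiplier

print(f"raw: {raw:.1f} GB (shortfall {need_gb - raw:.1f} GB)")
print(f"with 2x SFS: {effective:.1f} GB")
```

On these numbers, the raw rate falls 1.6 GB short of the 4 GB target, and a 2x effective multiplier would just cover it.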

Am I the only one to pick up on this, or did I completely misunderstand something in the I/O breakdowns?

Yep, you did misunderstand.
1. SFS can only load textures. It's actually the well-known Partially Resident Textures technique in disguise.
2. It can load only whole mip levels, IIRC. Meanwhile, PS5 can load any type of buffer.
3. It still uses a filesystem and requires a lot of effort to support, while PS5 loads are fully transparent: source, target, load.
4. The 2x/3x multiplier is a virtual multiplier. It doesn't make XSeX faster; it's the same. Peak load for the PS5 controller is 22 GB/s, which is roughly a 10x multiplier over the 2.4 GB/s on XSeX. It doesn't mean much.

Overall, the XSeX solution is a very partial implementation of what PS5 has, and more than 2x slower.
 

Journey

Banned
Why is it so hard for some people? I don't get it:

Historically, looking at any modern console, we've always focused on the three macros: CPU, GPU and memory.

CPU:

PS5 = Zen 2 @ 3.5 GHz (variable)
XSX = Zen 2 @ 3.8 GHz, or 3.6 GHz with simultaneous multithreading

Who has the more powerful CPU? Clearly the XSX; can this be denied? Not to mention that 3.5 GHz for the PS5 is its peak, which means it's not running simultaneous multithreading, and at that speed it draws power away from the GPU, preventing the GPU from sustaining 2.23 GHz. They actually say that if the CPU ever needs to hit its max, it will slow down the GPU frequency. This will NEVER happen on the XSX, which runs at a locked frequency.


GPU:

PS5: Custom RDNA2, 36 CUs @ 2.23 GHz variable, for 10.28 TF (variable)
XSX: Custom RDNA2, 52 CUs @ 1.825 GHz sustained, for 12.16 TF (constant)


Sure, the higher clock brings the TF number closer to the XSX's, but there's no denying that having 12.16 TF guaranteed at all times is better than 10.28 TF in the best-case scenario, when in other scenarios where more CPU power is required, that number could drop into the 9 TF range.
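Those TF figures fall out of the standard peak-FP32 formula: CUs × 64 shader ALUs per CU × 2 ops per clock (for fused multiply-add) × clock. A quick sketch:

```python
def peak_tflops(cus: int, clock_ghz: float) -> float:
    """Peak FP32 TFLOPS: CUs x 64 ALUs/CU x 2 ops/clock (FMA) x clock (GHz)."""
    return cus * 64 * 2 * clock_ghz / 1000.0

print(f"PS5: {peak_tflops(36, 2.23):.2f} TF")    # ~10.28 TF
print(f"XSX: {peak_tflops(52, 1.825):.2f} TF")   # ~12.15 TF
```

The formula gives 12.15 TF for the XSX; the 12.16 quoted above is a slight rounding difference.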

Also, having 52 compute units means better potential for ray tracing, where more CUs are available to handle the load.

Memory:

PS5: 16 GB of GDDR6 on a 256-bit bus, providing 448 GB/s of bandwidth
XSX: 16 GB of GDDR6; 10 GB on a 320-bit bus providing 560 GB/s of bandwidth, and 6 GB at 336 GB/s
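Those bandwidth figures follow from bus width × per-pin data rate ÷ 8 bits per byte, assuming the widely reported 14 Gbps GDDR6 modules (the 14 Gbps figure is not stated in this post):

```python
def bandwidth_gb_s(bus_bits: int, pin_gbps: float = 14.0) -> float:
    """Aggregate memory bandwidth: bus width (bits) x per-pin rate (Gbps) / 8."""
    return bus_bits * pin_gbps / 8

print(bandwidth_gb_s(256))  # PS5: 448.0 GB/s
print(bandwidth_gb_s(320))  # XSX fast 10 GB pool: 560.0 GB/s
print(bandwidth_gb_s(192))  # XSX slow 6 GB pool (192-bit effective): 336.0 GB/s
```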


There's just no question which is more powerful. PS5 does have a nice SSD though lol.

 
Firstly, I would like to congratulate you on the effort put into your OP!

I've listened to the parts of Cerny's talk on variable frequency a few times, and I think he was intentionally non-specific. He used a 2% clock drop as an example of saving 10% of power, but he didn't actually say it wouldn't drop more than that.

"We expect the GPU to spend most of its time at or close to that [2.23 GHz] frequency and performance."

Same story for the CPU:

"Similarly, running the CPU at 3 GHz was causing headaches with the old strategy. Now we run it as high as 3.5 GHz. In fact, it spends most of its time at that frequency."

Time will tell, but I think as games squeeze more and more out of the hardware - particularly as 256-bit AVX gets used more - we'll see increasing deviation from sustained peak boost for both the CPU and GPU at the same time.

In a way, this variable frequency setup front-loads the PS5 to perform most competitively early in the generation. It's a really good way to make the most of a 36 CU system, especially when it matters most to building up momentum in the market. There's not really a downside to implementing this once they had the chip that they did.

Yeah, this is partly why I sorta speculated a greater-than-2% drop in GPU clocks. But, I'm not going to commit to that for now, because it's mostly unfounded. We'll have to wait until more on their variable clock implementation is understood (and see real world results in action) before seeing how well it holds up under intense loads.
Two things about this SSD viewport streaming comment. It's not exclusive to PS5, and I'll tell you why.

1. The XSeX is said to be capable of 4.8 GB/s compressed SSD loading. In half a second that gives you 2.4 GB of compressed data you can swap out, just 1.6 GB shy of what Cerny said would be required for next-gen games on the PS5.

2. The XSeX features machine learning assisted smart loading of PORTIONS of assets in view with the Sampler Feedback Streaming feature. See below.

Sampler Feedback Streaming (SFS) – A component of the Xbox Velocity Architecture, SFS is a feature of the Xbox Series X hardware that allows games to load into memory, with fine granularity, only the portions of textures that the GPU needs for a scene, as it needs it. This enables far better memory utilization for textures, which is important given that every 4K texture consumes 8MB of memory. Because it avoids the wastage of loading into memory the portions of textures that are never needed, it is an effective 2x or 3x (or higher) multiplier on both amount of physical memory and SSD performance.

I don't understand how so many are missing the point here.

This more or less means, if it works, that where the PS5 may require 4 GB of asset data to display a changing scene, the XSeX may need no more than 2 GB...

Meaning their streaming rate as the viewport changes would be able to keep up with what PS5 is doing because it's effectively leaner to display a scene.

Am I the only one to pick up on this, or did I completely misunderstand something in the I/O breakdowns?

Great mention. Honestly I think it's down to this other stuff getting completely drowned out by the big info points MS posted on Monday (and in the other blog post a couple weeks before that). A lot of people just got fixated on the TF number and didn't really pay attention to these other things. Just like on Monday, most got fixated again on the TF number, the SSD raw speed, memory amount (and config) and bandwidth.

Cerny explicitly stressed their optimizations in the presentation, and in light of certain performance differentials in other areas, I think people clung even more to the specific SSD optimizations he mentioned. Plus, mentioning that kind of stuff verbally, in a video presentation people are paying close attention to for what they REALLY wanted to hear, helps it stick in their heads better. So I do give Cerny and Sony kudos on that.

Oh, the difference isn't that large then. Good. Do you know how the ray-tracing numbers compare to Nvidia GPUs? Thanks for the writeup.

Nah, I don't know much of anything about Nvidia's stuff. I just pay attention to the console stuff, and Nvidia isn't involved there aside from the Switch :(

There it is. This thread reminds me of the whole secret hidden GPU in the PSU.

Trust me that's not what I'm doing xD. I actually made the thread because I saw lots of people emphasizing the SSD as being able to do things it realistically won't be able to, like replacing GDDR6 memory bandwidth or CU compute throughput.

I mean people are probably gonna think what they think regardless but I just wanted to put the similarities and differences into better perspective and do so very neutrally (or as best as able).
 
Oh I absolutely think the PS5 will see out the generation. Probably a lot better (from a performance standpoint) than the X1/S did (and certainly from a sales POV!).

It's just that, as utilisation of the hardware increases, it's very likely that situations causing frequency drops will become more common, and the drops will be deeper.

If Sony have done the variable frequency thing right (and I really think they will have) then durability shouldn't be a problem. One of the biggest causes of GPU failure - and the cause of the famous 360 RRoD epidemic - was mechanical stress (caused by heating and cooling) on a new type of solder, and large, fast changes in temperature caused by inadequate cooling greatly exacerbated the problem. If Sony are maintaining a constant power level, and their cooling is good enough, then the stress caused by heating and cooling is minimised.

There is the whole thing about higher voltages degrading silicon faster, but within the lifespan of a console and within the safe limits of a process I'm betting that's not going to be a significant problem at all.

I think Sony can pull this off just fine!

I hope so, but I don’t have the same prediction.

I think the need for a Pro model is gonna show up far sooner than later
 
I hope so, but I don’t have the same prediction.

I think the need for a Pro model is gonna show up far sooner than later

Hopefully not; I think early adopters would feel VERY jaded if PS5 Pro comes around just two years after launch. Might even feel similar if it happens 3 years after, depending on where software/ecosystem maturity is by then, and if there were any steep price drops in the interim.

For 1st-party games, if performance starts to get too demanding, they'll certainly find ways to work around the potential issues. 3rd-party games may be more of an issue; it depends on how easy the dev tools are to use, and/or whether Sony specifically collaborates with 3rd parties on optimizations (even if they aren't benefited by exclusivity or timed exclusivity).
 

TBiddy

Member
Trust me that's not what I'm doing xD. I actually made the thread because I saw lots of people emphasizing the SSD as being able to do things it realistically won't be able to, like replacing GDDR6 memory bandwidth or CU compute throughput.

I mean people are probably gonna think what they think regardless but I just wanted to put the similarities and differences into better perspective and do so very neutrally (or as best as able).

Was not referring to you 😊 I meant the many attempts to refer to the secret sauce.
 

CJY

Banned
I hope so, but I don’t have the same prediction.

I think the need for a Pro model is gonna show up far sooner than later

I think the only time we'll see the need for Pro consoles is if there is a big push for 8K, which there very well may be, but I'm highly dubious about the sales potential of 8K.

With the rate at which AMD is progressing, I'd love Sony to go straight to PS6 after 5 years. Keep the generation shorter.
 
Yep, you did misunderstand.
1. SFS can only load textures. It's actually the well-known Partially Resident Textures technique in disguise.
2. It can load only whole mip levels, IIRC. Meanwhile, PS5 can load any type of buffer.
3. It still uses a filesystem and requires a lot of effort to support, while PS5 loads are fully transparent: source, target, load.
4. The 2x/3x multiplier is a virtual multiplier. It doesn't make XSeX faster; it's the same. Peak load for the PS5 controller is 22 GB/s, which is roughly a 10x multiplier over the 2.4 GB/s on XSeX. It doesn't mean much.

Overall, the XSeX solution is a very partial implementation of what PS5 has, and more than 2x slower.
1. How much of the 4 GB of assets in a PS5 scene is going to be textures? The majority? Why are they using a machine learning processor if it's just standard PRT?

2. Where is it stated it's limited to whole mip levels?

3. Can you clarify how you know the Series X memory file system requires a lot of effort? And are you saying it requires effort, or that it's not possible? My argument was that the Series X can also do the job.

4. I never claimed anything otherwise... except your math there. 22 GB/s peak compressed data versus 4.8 GB/s sustained is 5x, best case, for momentary instances. What does swapping out 22 GB achieve other than completely switching entire games out of RAM quicker?

We are talking about viewport memory management to keep GPUs fed, which is what everyone is losing their minds over. If the need is to manage 4 GB of assets for a single game in the viewport per half second, I was pointing out a way the XSeX may close that 1.6 GB gap.

But yeah, the PS5 SSD is fast enough to switch to a completely different game when you rotate the camera. This could be good stuff for a new version of PS Home or Dreams 2, but otherwise I fail to see the use case. It is awesome, though, that loading games will be as fast or faster than even the N64 days with digital games.
 
Yep, you did misunderstand.
1. SFS can only load textures. It's actually the well-known Partially Resident Textures technique in disguise.
2. It can load only whole mip levels, IIRC. Meanwhile, PS5 can load any type of buffer.
3. It still uses a filesystem and requires a lot of effort to support, while PS5 loads are fully transparent: source, target, load.
4. The 2x/3x multiplier is a virtual multiplier. It doesn't make XSeX faster; it's the same. Peak load for the PS5 controller is 22 GB/s, which is roughly a 10x multiplier over the 2.4 GB/s on XSeX. It doesn't mean much.

Overall, the XSeX solution is a very partial implementation of what PS5 has, and more than 2x slower.

Actually, there's something very important to SFS worth noting here: sampler feedback allows devs to write ideal mip levels, and the status bit can be triggered anytime to detect tile residency for loading in more texture data.

You're right that a filesystem is used, but an argument could also be made that allowing the implementation to be tailored per application (per developer's taste) can bring in some of its own benefits. PS5's solution, just from what you mention here, offloads a lot of that from dev hands but that also might take away choice of options from them in how to implement it too.

I don't think anyone expects a virtual multiplier to close up physical performance differentials. In bursts of activity though it could help increase targeted throughput of the SSD. Again, we don't know a lot on the SSDs for either system; I fully expect PS5's SSD to maintain the performance advantage but there are some potentially interesting benefits for what the XSX is doing on this front, too.
 

psorcerer

Banned
1. How much of the 4 GB of assets in a PS5 scene is going to be textures? The majority? Why are they using a machine learning processor if it's just standard PRT?

Because you will need to predict how to load them. You don't have the low-level exact placement that PS5 has.
And it should work on PC too.

2. Where is it stated its limited to whole mip levels?

In the whitepaper.

3. Can you clarify how you know the Series X memory file system requires a lot of effort? And are you saying it requires effort or its not possible? My argument was that the Series X can also do the job.

Same whitepaper above.

But yeah, the PS5 SSD is fast enough to switch to a completely different game when you rotate the camera.

Not really. You cannot ensure the best compression.
I would suspect that SFS will be used in XSeX and PC for PRT.
And that's it.
PS5 will load entire buildings while you are walking the street.

PS5's solution, just from what you mention here, offloads a lot of that from dev hands but that also might take away choice of options from them in how to implement it too.

I don't think so. They allow a simple low level interface to their loader: load this thing from here to here, using this priority.
I don't think you can have more freedom than that.
 

I think you're on the money with a lot of what you're saying, but I didn't see where it states explicitly that sampler feedback only allows the loading of whole mip levels. I think GlassAwful might have a point here.

In fact, this MS devblog seems to be talking up partial loading of mips as a benefit.

"When targeting 4k resolution, the entire MIP 0 of a high quality texture takes a lot of space! It is highly desirable to be able to load only the necessary portions of the most detailed MIP levels.

One solution to this problem is texture streaming as outlined below, where Sampler Feedback greatly improves the accuracy with which the right data can be loaded at the right times.

  • Render scene and record desired texture tiles using Sampler Feedback.
  • If texture tiles at desired MIP levels are not yet resident:
    • Render current frame using lower MIP level.
    • Submit disk IO request to load desired texture tiles.
  • (Asynchronously) Map desired texture tiles to reserved resources when loaded."
The DF interview with the XSX team basically says the same thing:

"We observed that typically, only a small percentage of memory loaded by games was ever accessed," reveals Goossen. "This wastage comes principally from the textures. Textures are universally the biggest consumers of memory for games. However, only a fraction of the memory for each texture is typically accessed by the GPU during the scene. For example, the largest mip of a 4K texture is eight megabytes and often more, but typically only a small portion of that mip is visible in the scene and so only that small portion really needs to be read by the GPU."

The way you get the "two to three times memory multiplier" is by being specific about which part of the mip you load in. An 8K texture on something like a floor or wall would typically only need a part of it to be sampled for rendering at any time.
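The quoted loop can be sketched in Python. Every name here (`stream_frame`, `on_tile_loaded`, the tile tuples) is a hypothetical stand-in for what a real engine would do with D3D12 sampler feedback and reserved resources:

```python
def stream_frame(desired_tiles, resident_tiles, io_queue):
    """One frame of the quoted loop: render with what's resident, queue the rest.

    desired_tiles: tiles the GPU sampled this frame, per sampler feedback,
                   e.g. (texture_id, mip_level, tile_coords) tuples.
    Returns the tiles that must be drawn from a lower mip this frame.
    """
    fallback = set()
    for tile in desired_tiles:
        if tile not in resident_tiles:
            fallback.add(tile)           # render from a coarser mip for now
            if tile not in io_queue:
                io_queue.append(tile)    # submit disk IO for just this tile
    return fallback

def on_tile_loaded(tile, resident_tiles):
    """Async IO completion: map the tile into the reserved resource."""
    resident_tiles.add(tile)
```

Frame one renders everything from coarser mips while the IO requests are in flight; once `on_tile_loaded` fires for each tile, the next frame samples full detail.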
 

psorcerer

Banned
I think you're on the money with a lot of what you're saying, but I didn't see where it states explicitly that sampler feedback only allows the loading of whole mip levels. I think GlassAwful might have a point here.

In fact, this MS devblog seems to be talking up partial loading of mips as a benefit.

"When targeting 4k resolution, the entire MIP 0 of a high quality texture takes a lot of space! It is highly desirable to be able to load only the necessary portions of the most detailed MIP levels.

One solution to this problem is texture streaming as outlined below, where Sampler Feedback greatly improves the accuracy with which the right data can be loaded at the right times.

  • Render scene and record desired texture tiles using Sampler Feedback.
  • If texture tiles at desired MIP levels are not yet resident:
    • Render current frame using lower MIP level.
    • Submit disk IO request to load desired texture tiles.
  • (Asynchronously) Map desired texture tiles to reserved resources when loaded."
The DF interview with the XSX team basically says the same thing:

"We observed that typically, only a small percentage of memory loaded by games was ever accessed," reveals Goossen. "This wastage comes principally from the textures. Textures are universally the biggest consumers of memory for games. However, only a fraction of the memory for each texture is typically accessed by the GPU during the scene. For example, the largest mip of a 4K texture is eight megabytes and often more, but typically only a small portion of that mip is visible in the scene and so only that small portion really needs to be read by the GPU."

The way you get the "two to three times memory multiplier" is by being specific about which part of the mip you load in. An 8K texture on something like a floor or wall would typically only need a part of it to be sampled for rendering at any time.

Yep, you're right. They can load parts of a mip. The whitepaper is so long, I wasn't reading the whole thing. :messenger_tears_of_joy:
 
Yep, you're right. They can load parts of a mip. The whitepaper is so long, I wasn't reading the whole thing. :messenger_tears_of_joy:

Hey, I only skimmed it and had to Google around to be sure. I wasn't confident until about five minutes before I posted that.

For once I'm sober enough to think and not drunk enough to be a dick. I get lucky some times!
 
PS5's superior SSD speed won't be utilized by any 3rd-party devs; they'll design their games around the slowest SSD spec, Lockhart's. So Series X will play all multiplats at slightly superior visual fidelity to PS5.

1st party is where PS5 will shine; they don't have to design their games around a lower-spec machine, aka a Lockhart equivalent.
I'm not a developer, but speed is speed. What do you mean, saying that it won't be utilized? If you have more power, you can make the same games smoother, with higher resolution and more effects.
Why should it be different with such a big difference in drive speed?
Even if they design with worse drives in mind, what stops their games from loading faster, having superior level of detail, booting up faster, or things like that?
If the lowest common denominator is the base of the design, then there's no way that design will be done with 12 TF in mind, right? Isn't this the same kind of argument?
Multiplatform games can (and should, imo) utilize each system's strong points; there's nothing bad about this. I can't wait to see which console gets better RT, can't wait to see the benefits of having more horsepower in Series X games, and can't wait to see what they can do with such SSD speed in PS5 games.
 
How is PS5 more interesting. XSX has VRS, machine learning, Velocity architecture. Seriously what is NOT interesting about the XSX?

The fact that we pretty much know everything about the Xbox and the tech you've mentioned, yet there's clearly more information to be had regarding the PS5.

You should've read the whole post instead of getting irate after a sentence because someone decides to not default to trashing Sony.
 

Tripolygon

Banned
How is PS5 more interesting. XSX has VRS, machine learning, Velocity architecture. Seriously what is NOT interesting about the XSX?
Standard RDNA 2 features. There are lots of interesting things about the XSX, but those are not it. Any modern GPU can do machine learning if it supports INT4/INT8/INT16/FP16, which RDNA 2 does.
 
Do we have to literally quote devs that are super pumped about the PS5 to make anyone believe?





Jason Schreier also has multiple articles where devs have stated to him how superior the ps5 is, link here: http://www.pushsquare.com/news/2020...devs_seem_disappointed_by_sonys_communication

I think we'll all have to wait and see, but I think it's going to be very close, and people should just enjoy the ride.


You are disingenuously misrepresenting those quotes, especially Schreier's. They were in reference to the PS5's SSD, which is clearly faster, but it's not like devs cannot be excited for the PS5 even if it is the lesser of the two in other areas such as raw GPU compute, CPU clocks and volatile memory bandwidth.

Also, the point of this thread was never to argue whether the PS5 is good or not, or to imply devs are not excited for it. The point was more to illustrate that some people supposing the SSD will make up for differences in areas like GPU compute and volatile memory bandwidth & speed do not understand how SSDs, particularly the NAND memory they are constructed from, actually operate. Due to those fundamental technological differences, you cannot directly compare SSD features and advantages against things that are extremely different from it in function.

Plus, I also wanted to go through some direct comparisons and show where the two have advantages and disadvantages in specific areas outside of the SSD, under certain conditions, and see how things compared between them on that note. That includes illustrating how they BOTH utilize aspects of narrow & fast and wide & slow where needed, instead of the misconception by some that PS5 is ALL narrow & fast (and the only one using smart optimizations of silicon; not implying this is inherent to narrow & fast designs, btw) while XSX is ONLY wide & slow (and throwing brute force at solutions; again, not implying this is inherent to wide & slow designs).

Standard RDNA 2 features. There are lots of interesting things about the XSX, but those are not it. Any modern GPU can do machine learning if it supports INT4/INT8/INT16/FP16, which RDNA 2 does.

MS has a patent on their implementation of VRS that Sony is not able to use, meaning Sony has to roll their own implementation of it via APIs. Just as an example.
 

Tripolygon

Banned
VRS has nothing to do with Virtual Reality :messenger_tears_of_joy: :messenger_tears_of_joy:

It stands for Variable Rate Shading. It just means rendering some parts of the screen at a lower quality to get performance increases.
SMH. Yes, I know what it is. On a regular TV screen it provides a nice performance gain, but in VR it provides major gains, because you only have to shade the center of focus in the display at higher resolution while everything else can be lower resolution. Coupled with foveated rendering, it is a major part of VR.
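As a toy illustration of the foveated case, a shading-rate picker might look like this. The thresholds and the 1x1/2x2/4x4 rates are made-up illustrative values; real VRS exposes a small fixed set of hardware rates:

```python
import math

def shading_rate(px, py, gaze_x, gaze_y, inner=0.2, outer=0.5):
    """Pick a VRS rate from distance to the gaze point (normalized coords).

    Returns a (w, h) coarseness: 1x1 = full rate near the fovea,
    coarser blocks toward the periphery. Thresholds are arbitrary.
    """
    d = math.hypot(px - gaze_x, py - gaze_y)
    if d < inner:
        return (1, 1)   # full shading rate at the center of focus
    if d < outer:
        return (2, 2)   # one shade per 2x2 block in the mid-periphery
    return (4, 4)       # one shade per 4x4 block at the edges

print(shading_rate(0.5, 0.5, 0.5, 0.5))  # center of gaze -> (1, 1)
print(shading_rate(0.9, 0.9, 0.5, 0.5))  # periphery -> (4, 4)
```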


Edit: Oh yea, before I forget again, great OP thicc_girls_are_teh_best
 
Audio is always overrated in my eyes. I don't think anyone will notice or comment on the differences between XSX and PS5 audio. They have both put a LOT of effort into it, yet the vast majority of us don't have good enough hearing to tell the difference.

VRS and ML are a big deal for MS. A massive win here.

The option for devs to use all cores without multithreading for a higher clock speed is an excellent decision. Lots of indie devs who don't have the resources to multithread will get a good kick out of this.

While Sony went all-out on the SSD and put together a crazy solution, the real world won't benefit from it.

How Sony managed to get the clock so high on the GPU is just mind-boggling.
If you had asked the internet whether it would be possible to clock a retail PS5 at 2.23 GHz, absolutely no one would have said it was possible. Heck, people said 2.0 GHz was impossible. So mad respect to Cerny for managing to do that. They got caught short and did everything they could to close the gap. Myself, I would have just enabled all 40 CUs and worn the cost of lower yields. But hey, they did what I thought impossible.
There will be some benefit to it, but in no way will it close the gap. The Xbone had higher GPU clocks and it didn't do shit.
 

Bo_Hazem

Banned
Two things about this SSD viewport streaming comment. It's not exclusive to PS5, and I'll tell you why.

1. The XSeX is said to be capable of 4.8 GB/s compressed SSD loading. In half a second that gives you 2.4 GB of compressed data you can swap out, just 1.6 GB shy of what Cerny said would be required for next-gen games on the PS5.

2. The XSeX features machine learning assisted smart loading of PORTIONS of assets in view with the Sampler Feedback Streaming feature. See below.

Sampler Feedback Streaming (SFS) – A component of the Xbox Velocity Architecture, SFS is a feature of the Xbox Series X hardware that allows games to load into memory, with fine granularity, only the portions of textures that the GPU needs for a scene, as it needs it. This enables far better memory utilization for textures, which is important given that every 4K texture consumes 8MB of memory. Because it avoids the wastage of loading into memory the portions of textures that are never needed, it is an effective 2x or 3x (or higher) multiplier on both amount of physical memory and SSD performance.

I don't understand how so many are missing the point here.

This more or less means, if it works, that where the PS5 may require 4 GB of asset data to display a changing scene, the XSeX may need no more than 2 GB...

Meaning their streaming rate as the viewport changes would be able to keep up with what PS5 is doing because it's effectively leaner to display a scene.

Am I the only one to pick up on this, or did I completely misunderstand something in the I/O breakdowns?

Your input is great as well. Forgot to say that all of that is done without the developer doing anything; it's integrated into how the system deals with the game. Overall, we shouldn't expect a big difference between the two systems. Other trickery like dynamic resolution and VRS should help with the matter as well. We just need to see 3rd-party games head to head so we can understand how each system performs. Overall, I think 3rd parties will shoot for a sweet average between the two to save time. Each 1st-party exclusive should push its platform's prowess, but with Xbox putting PC and its variables into account.
 
Just stopping by to say great OP thicc_girls_are_teh_best . This is the most informative run-down I've read thus far.

Despite that, I know this thread will still unfortunately be a trashfire of fanboy wars, so I won't even bother reading the rest of it.

Nah, it's been surprisingly civil. You got some folks who are sticking by certain predictions but at least they are rationalizing them and not personally attacking anybody. Tried keeping the OP fair and balanced to encourage that.
 

tmlDan

Member
You are disingenuously misrepresenting those quotes, especially Schreier's. They were in reference to the PS5's SSD, which is clearly faster, but it's not like devs cannot be excited for the PS5 even if it is the lesser of the two in other areas such as raw GPU compute, CPU clocks and volatile memory bandwidth.

Also, the point of this thread was never to argue whether the PS5 is good or not, or to imply devs are not excited for it. The point was more to illustrate that some people supposing the SSD will make up for differences in areas like GPU compute and volatile memory bandwidth & speed do not understand how SSDs, particularly the NAND memory they are constructed from, actually operate. Due to those fundamental technological differences, you cannot directly compare SSD features and advantages against things that are extremely different from it in function.

Plus, I also wanted to go through some direct comparisons and show where the two have advantages and disadvantages in specific areas outside of the SSD, under certain conditions, and see how things compared between them on that note. That includes illustrating how they BOTH utilize aspects of narrow & fast and wide & slow where needed, instead of the misconception by some that PS5 is ALL narrow & fast (and the only one using smart optimizations of silicon; not implying this is inherent to narrow & fast designs, btw) while XSX is ONLY wide & slow (and throwing brute force at solutions; again, not implying this is inherent to wide & slow designs).



MS has a patent on their implementation of VRS that Sony is not able to use, meaning Sony has to roll their own implementation of it via APIs. Just as an example.


It wasn't directed towards you; I totally understand what you're getting at, and you explained in good detail the avenues the two console manufacturers are taking. It's for the large majority that think this gap is so massive it'll create this great divide, which it won't.
 

Bo_Hazem

Banned
This was a good part of the presentation, but there are still caveats. Mainly, the way data is stored on SSDs. If the data is compressed, then it has to be decompressed in order to be used, and space on the drive has to be made available to write the decompressed data back to.

Then the data has to be read, but at page level, since that's the smallest read-addressable unit for data on NAND. A standard page size is about 4KB, so one of the smallest levels of granularity for data on NAND is 4KB. Comparatively, the smallest level of granularity for data read from volatile memory like GDDR6 is 1 byte. That's 4,096 times smaller.

Also, writing the data back to the SSD has to be done in blocks, which are MUCH larger than a 4KB page. (Speaking of which, if the NAND data is being read in decompressed state by the GPU, the amount of data the GPU can read in a single pass depends on how wide a bus it has. It also depends on the bus width of the flash memory controller and of the NAND ICs in the custom storage. This is why knowing the bandwidth and bit width of the chip buses is as important as knowing the overall speed. TBF, both Sony and MS have only mentioned speed rather than bandwidth, which is annoying.) So a lot of that write speed might be spent constantly replacing data that does not actually need to be replaced of its own accord, but must be rewritten because it sits in the same NAND block as data that DOES need to be replaced.

So for texture data that doesn't need to be modified very much and is fairly uniform in size relative to the page size of the NAND on the SSD, streaming the texture as v-cache will be MOST beneficial. Even then, this is mainly for decompressed data on the drive. Otherwise there will have to be programming tricks such as duplicating altered copies of modestly changed texture data at a safe "near proximity" to the player that is read when needed (and has to be read and decompressed at least once and then written back to the SSD), or a combination of that plus placing texture cache in the GDDR6 for data that is expected to be frequently altered at the bit-and-byte level (or even in cases where that granularity isn't needed, but the speed of the GDDR6 is more beneficial).
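To put rough numbers on the page/block granularity mismatch described above, here's a minimal sketch. The 4KB page and 1MB erase block are illustrative assumptions typical of NAND, not confirmed specs for either console's drive:

```python
# Illustrative NAND granularity math; page/block sizes are assumptions,
# not confirmed specs for either console's storage.
PAGE_SIZE = 4 * 1024          # smallest readable unit on typical NAND
BLOCK_SIZE = 256 * PAGE_SIZE  # erase blocks span many pages (1 MiB here)

def pages_to_read(num_bytes: int) -> int:
    """Whole pages that must be read to fetch num_bytes of data."""
    return -(-num_bytes // PAGE_SIZE)  # ceiling division

def write_amplification(dirty_bytes: int) -> float:
    """Bytes physically rewritten per logical byte changed, worst case:
    modifying any byte in a block forces the whole block to be rewritten."""
    blocks = -(-dirty_bytes // BLOCK_SIZE)
    return (blocks * BLOCK_SIZE) / dirty_bytes

# Reading a single byte still costs a full 4 KiB page...
print(pages_to_read(1))        # 1
# ...and touching 1 byte can force a 1 MiB block rewrite.
print(write_amplification(1))  # 1048576.0
```

This is why data that is uniform and page-aligned streams well, while frequently modified small data is better kept in GDDR6.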

I believe he mentions that it reads directly from the SSD with no processing, as I understand it, if you watch it all. The other details you've expressed are above what I can comprehend, so thanks for explaining in depth; I can't comment as it's beyond my knowledge.
 
I believe he mentions that it reads directly from the SSD with no processing, as I understand it, if you watch it all. The other details you've expressed are above what I can comprehend, so thanks for explaining in depth; I can't comment as it's beyond my knowledge.

When the data is on the drive the way it needs to be, I'm certain no processing is needed, but if it's compressed then at least SOME processing has to be done to decompress it. They could then probably rewrite the data, decompressed, to a portion of the v-cache partition and from there just read it raw with no processing, though.

Granted, Cerny went in-depth on the SSD for good reason, but from my own research on how NAND works there are still some caveats to the idea of just reading the info and using it straight-up. They've seemingly engineered around a lot of these, but the things I'm referring to in particular are just down to the nature of NAND, and there's nothing Sony or MS can do about that.

It's why I was hoping for a while some persistent memory like Optane was going to be present, even as a "small" (32GB) cache. Alas, that ain't the deal. Maybe next time tho 👍
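The decompress-once-then-read-raw idea above can be sketched in a few lines. This is only an illustration: `zlib` stands in for the consoles' actual codecs (Kraken/BCPack), which aren't publicly available, and a dict stands in for a v-cache partition on the drive:

```python
import zlib

# Hypothetical v-cache: decompress an asset once, keep the raw bytes,
# and serve later reads without touching the decompressor again.
# zlib is a stand-in for Kraken/BCPack; the dict is a stand-in for an
# SSD partition holding decompressed data.
vcache = {}

def read_asset(name: str, compressed: bytes) -> bytes:
    if name not in vcache:                 # first access: pay decompression cost
        vcache[name] = zlib.decompress(compressed)
    return vcache[name]                    # later accesses: raw read, no processing

texture = zlib.compress(b"RGBA pixel data" * 1000)
first = read_asset("rock_diffuse", texture)   # decompresses
second = read_asset("rock_diffuse", texture)  # served straight from cache
print(first == second, len(first))            # True 15000
```

The trade-off, as noted above, is drive space: the decompressed copy has to live somewhere.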

Audio is always overrated in my eyes. I don't think anyone will notice or comment on the differences between XSX and PS5 audio. They have both put a LOT of effort into it, yet the vast majority of us don't have good enough hearing to tell the differences.

VRS and ML for MS is a big deal. A massive win for MS here.

The option for devs to use all cores without multithreading for a higher clock speed is an excellent decision. Lots of indie devs who don't have the resources to multithread will get a good kick out of this.

While Sony went full retard on the SSD and put together a crazy solution, real world wont benefit from it.

How Sony managed to get the clock so high on the GPU is just mind boggling.
If you had asked the internet whether it would be possible to clock a retail PS5 at 2.23GHz, absolutely no one would have said it was possible. Heck, people said 2.0GHz was impossible. So mad respect to Cerny for managing to do that. They got caught short and did everything they could to close the gap. Myself, I would have just enabled all 40 CUs and worn the cost of lower yields. But hey, they did what I thought impossible.
There will be some benefit to it, but in no way will it close the gap. Xbone had higher GPU clocks and it didn't do shit.

I wouldn't go as far as to say Sony went full retard on the SSD; they had targets in place that shifted on the timeline for various reasons, and an entire APU redesign was out of the question. So they saw an opportunity to push harder on something possible in that time frame which could be changed and reasonably increased, the SSD, and did so.

The GPU clock is honestly a hell of a feat, but it's mainly thanks to RDNA2 efficiencies. I DO think PS5's cooling solution is pretty expensive tho; there's a good chance it might push MSRP to $499 if they want to sell at-cost.
 
Nah, it's been surprisingly civil. You got some folks who are sticking by certain predictions but at least they are rationalizing them and not personally attacking anybody. Tried keeping the OP fair and balanced to encourage that.
Yeah, I think the reckoning is burning out now. Points have been proven, crow eaten, and now everyone is deep diving into the specifics of each console. I think the mods did a good job of allowing people to vent without banning everyone.
 
When the data is on the drive the way it needs to be, I'm certain no processing is needed, but if it's compressed then at least SOME processing has to be done to decompress it. They could then probably rewrite the data, decompressed, to a portion of the v-cache partition and from there just read it raw with no processing, though.

Granted, Cerny went in-depth on the SSD for good reason, but from my own research on how NAND works there are still some caveats to the idea of just reading the info and using it straight-up. They've seemingly engineered around a lot of these, but the things I'm referring to in particular are just down to the nature of NAND, and there's nothing Sony or MS can do about that.

It's why I was hoping for a while some persistent memory like Optane was going to be present, even as a "small" (32GB) cache. Alas, that ain't the deal. Maybe next time tho 👍



I wouldn't go as far as to say Sony went full retard on the SSD; they had targets in place that shifted on the timeline for various reasons, and an entire APU redesign was out of the question. So they saw an opportunity to push harder on something possible in that time frame which could be changed and reasonably increased, the SSD, and did so.

The GPU clock is honestly a hell of a feat, but it's mainly thanks to RDNA2 efficiencies. I DO think PS5's cooling solution is pretty expensive tho; there's a good chance it might push MSRP to $499 if they want to sell at-cost.
What I mean by full retard is that they pushed the tech so hard on the SSD they were beating out companies whose whole business is SSD tech. That is a lot of money and resources for what I think will be little gain over the XSX solution. However, and this is my caveat, it might be that Sony is also investing in SSD tech for areas outside of just PS5, and so it involved a group effort from other Sony divisions. Maybe they intend to take on Samsung in this area? Which, if true, would make it a bit more understandable as to why so many resources got put into it.
 
What I mean by full retard is that they pushed the tech so hard on the SSD they were beating out companies whose whole business is SSD tech. That is a lot of money and resources for what I think will be little gain over the XSX solution. However, and this is my caveat, it might be that Sony is also investing in SSD tech for areas outside of just PS5, and so it involved a group effort from other Sony divisions. Maybe they intend to take on Samsung in this area? Which, if true, would make it a bit more understandable as to why so many resources got put into it.

They might just, Sony is that kind of company after all xD. I can see them pushing versions of PS5's SSD out onto the PC market in the near future, and maybe there are other reasons they've done this. They have a ReRAM R&D team for example; if by some chance they can actually develop any commercial product with it in the next few years, and the controller in PS5 can support it, they could design drives with ReRAM in them over traditional NAND.

Could bring the benefits of ReRAM, but the speeds would be capped at the rates of the built-in memory controller. Maybe something to keep an eye on. Hopefully MS is eyeing this with PCM/Optane-like memory as well; persistent memory could potentially replace NAND altogether.

I just wanna know... Who in the WORLD said PS5, especially with the hard facts and info we know thus far?

You might be surprised ;). Actually, I would say give the NX Gamer video a watch; it does a great job showing ways the SSD could (potentially) free up the GDDR6 for game operations. I think a lot of that can be done with the XSX's internal storage too, but it'd be slower since the flash memory controller is lower-specced than PS5's.

Obviously it doesn't close the gap on things like raw compute performance, peak memory bandwidth, or ray-tracing, but it really does sound like the SSD can lead to some crafty workflow solutions that provide great performance if done right, paired with PS5's other abilities. We're in for some fantastic stuff from both systems this gen, especially those of us who were worried they'd be boring twins.
 
What I mean by full retard is that they pushed the tech so hard on the SSD they were beating out companies whose whole business is SSD tech. That is a lot of money and resources for what I think will be little gain over the XSX solution. However, and this is my caveat, it might be that Sony is also investing in SSD tech for areas outside of just PS5, and so it involved a group effort from other Sony divisions. Maybe they intend to take on Samsung in this area? Which, if true, would make it a bit more understandable as to why so many resources got put into it.

I think Sony over-engineered the SSD part so they could have a marketing campaign hype point. Microsoft's is not as sexy with 'our console is so powerful and balanced'.
 

Bo_Hazem

Banned
Just for fun, Challenger Demon 840hp with semi slicks vs Nissan GT-R 600hp with regular tires (watch all rounds)




They stop before the turn so the Demon doesn't hit the wall, go off track, or smash into the GT-R.

Teraflops = Horsepower

It's not the whole story.
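For reference, the headline teraflop numbers themselves are just arithmetic from CU count and clock speed, much like horsepower is torque times RPM. A quick sketch using the publicly stated figures for both consoles (and, as the analogy says, this is peak math, not real-game performance):

```python
# Peak FP32 throughput for an RDNA-style GPU:
# CUs x 64 shader ALUs per CU x 2 ops per clock (FMA) x clock in GHz.
def tflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz / 1000.0

print(round(tflops(36, 2.23), 2))   # PS5: 36 CUs at 2.23 GHz (variable)
print(round(tflops(52, 1.825), 2))  # XSX: 52 CUs at 1.825 GHz (fixed)
```

That yields roughly 10.28 vs 12.15 TF, which is where the often-quoted ~18% gap comes from.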
 
Too many assumptions in your post. Cerny and Sony started work on this hardware over 5 years ago. Until we know real performance, not childish console-war flops, we have no idea which is the better designed console or who was caught with their pants down and dick up.
No we already know, you just don't want to accept it.

It's cut and dry, stop trying to rationalize your inability to accept that Sony dropped the ball.

Yes, let's keep talking about Jason Schreier shall we. The guy along with other BS "insiders" who for the last year have been clamoring on about the PlayStation 5 being more powerful when it clearly ended up not being the case. Also don't even for a second try to convince ANY of us as if they were not talking about Teraflops because that's the core of everything that was ever debated about this entire power struggle.

They were full of shit then, they're full of shit now, and they will continue to be full of shit. It also doesn't make things any better that the person whose tweet he's citing is the founder of Ready at Dawn, a PlayStation developer.

You people have lost the plot.
 

Bo_Hazem

Banned
No we already know, you just don't want to accept it.

It's cut and dry, stop trying to rationalize your inability to accept that Sony dropped the ball.

Yes, let's keep talking about Jason Schreier shall we. The guy along with other BS "insiders" who for the last year have been clamoring on about the PlayStation 5 being more powerful when it clearly ended up not being the case. Also don't even for a second try to convince ANY of us as if they were not talking about Teraflops because that's the core of everything that was ever debated about this entire power struggle.

They were full of shit then, they're full of shit now, and they will continue to be full of shit. It also doesn't make things any better that the person whose tweet he's citing is the founder of Ready at Dawn, a PlayStation developer.

You people have lost the plot.

He said above RTX2080 level, and that's true.
 
He said above RTX2080 level, and that's true.
Anybody could have made that connection; I did without any insight from any of these people. It's not some crazy prospect, a 2080 is basically an OC'd 1080 Ti with RT hardware.

The reality is the PlayStation 5 is about on par with a 5700 XT Anniversary Edition (sometimes) and the Series X is about on par with a 2080 Super. They're spaced pretty far apart.
 

Rentahamster

Rodent Whores
I'll wait until we see how this all works out when we can compare actual games head to head. Arguing about paper specs on the internet without even having the actual hardware in our hands yet is useless.
 