
Next-Gen PS5 & XSX |OT| Console tEch threaD


Lort

Banned
You guys should watch the following video from 14:59 to 20:42. It explains why Sony chose to sacrifice a higher number of CUs for I/O optimization.

1. Accessing and storing information is the biggest bottleneck in modern computer systems.

2. Performing arithmetic operations doesn't consume nearly as much energy and bandwidth as accessing and storing the information needed to perform the operations - and accessing and storing the information that results from the operations.

3. Hence, CU count and clock frequency, which together determine how many operations a GPU can perform per second, matter less than conveying and accessing the information those operations consume and produce (a rough roofline sketch follows this list).

4. This is why Sony opted to allocate more of the budget for the PS5's design to its I/O system rather than the number of CUs in its GPU.

5. Hence, the XSX GPU will be hindered by the massive bandwidth consumption of the larger number of computations that its larger number of CUs perform per cycle, and it will consume much more power.

6. Hence, the greater computational capacity of the XSX's GPU will counteract itself due to its greater consumption of bandwidth and energy - and it will also be counteracted by the PS5's significantly faster I/O system.
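To put rough numbers on points 1-3: a toy roofline model caps attainable throughput at min(peak compute, bandwidth x arithmetic intensity). A sketch, where the TF and bandwidth figures are the commonly cited ones and the intensity values are invented for illustration:

```python
# Toy roofline model: attainable throughput is capped either by peak
# compute or by how fast operands can be moved (bandwidth x intensity).
# Figures are the commonly cited ones, used purely for illustration.

def attainable_tflops(peak_tflops, bandwidth_gb_s, flops_per_byte):
    """Roofline: min(compute ceiling, memory ceiling)."""
    memory_ceiling = bandwidth_gb_s * flops_per_byte / 1000.0  # GB/s * FLOP/B -> TFLOPS
    return min(peak_tflops, memory_ceiling)

# "narrow" = fewer CUs / less bandwidth, "wide" = more CUs / more bandwidth
for intensity in (2, 10, 25, 50):  # FLOPs performed per byte fetched
    narrow = attainable_tflops(10.3, 448, intensity)
    wide = attainable_tflops(12.1, 560, intensity)
    print(f"intensity {intensity:>2} FLOP/B: narrow {narrow:5.2f} TF, wide {wide:5.2f} TF")
```

At low arithmetic intensity both boxes are bandwidth-bound and the compute gap never shows up, which is the argument the list is making.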


This guy knows more than you but not half as much as he thinks he does.

His confused arguments fall apart when you actually listen.

He says arithmetic doesn't matter (it actually does) because Nvidia says it's free, then two minutes later says "that's why it's so important to reduce precision", completely contradicting himself. The whole FP16 TFLOPS rating of the PS5 did not help, because games generally need FP32 most of the time (but not always). The Xbox One X had higher memory bandwidth and more CUs, which is why it easily won head-to-heads. He then goes on to say that communication costs are high (which they are) but fails to acknowledge the way things actually work in games (he probably doesn't know): games are generated using procedural variants of base textures; they are not statically read from the disk. This means hardware decompression, which both consoles have, helps but only to a limited extent (he didn't realise even the Xbox One had it on-chip; he's obviously new to this). He also seems to imply that the RDNA VRAM setup is something only Sony has, even though it's a system feature developed with Microsoft as part of its Velocity Architecture and talked about since before the PS5 reveal.

The total SSD speed doesn't make much difference, because to use it fully and stream static textures consistently you would need 5 gigabytes of data for every second of the game. A game is likely to be 100 gigabytes at most, and you can fit 10 gigabytes of that in RAM at any time. I'll do the math for you: that's about 20 seconds to load all 100 gigabytes, or 40 on the Xbox. Games simply will not be big enough to make this consistent-streaming fallacy real.
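The arithmetic roughly checks out against the commonly cited raw (uncompressed) SSD rates; a quick sketch:

```python
# Rough load-time math from the post: time to stream an entire game at
# each console's raw SSD rate, using the publicly cited figures.
GAME_SIZE_GB = 100
RAM_BUDGET_GB = 10  # portion of RAM assumed free for game assets

for name, raw_gb_per_s in (("PS5", 5.5), ("XSX", 2.4)):
    full_read_s = GAME_SIZE_GB / raw_gb_per_s
    refill_s = RAM_BUDGET_GB / raw_gb_per_s
    print(f"{name}: ~{full_read_s:.0f}s to read all {GAME_SIZE_GB}GB, "
          f"~{refill_s:.1f}s to refill a {RAM_BUDGET_GB}GB RAM budget")
```

That gives roughly 18s vs 42s for a full read, matching the post's "about 20 seconds... or 40 on the Xbox"; compressed throughput would be higher on both.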

Nobody has latency figures for the Xbox SSD vs the PS one; both are using the same RDNA 2 chip with likely the exact same VRAM implementation.

....what we can say is that the ESRAM low-latency memory of the Xbox One did NOT help much, because few coders leveraged it. Each CU has its own caches, so while the PS5 will be clocked higher, the Xbox will have more of them. Also, the memory on the Xbox is significantly faster (again), which by his own measure is very important.

Microsoft's patented enhanced VRS implementation for lighting adds to the simpler VRS the PS5 likely has, to significantly reduce internal cache misses (the communication penalty) and computational requirements.

If you think this video is the be-all and end-all, you're going to be sadly mistaken. But that's fine; that's your opinion, and it is not fact.
 

Dory16

Banned
The on-paper difference this time is 16%, too small to notice. BUT real-world performance between the two consoles is almost certainly even closer, because of the PS5's more customised design and absolute focus on features that remove bottlenecks for maximum performance, versus the Series X's brute-force approach and theoretical TFLOPS figure. (TFLOPS is theoretical because many factors can get in the way of reaching the quoted number, and it measures only one part of a GPU's performance, ignoring all other metrics.)

I think the difference in performance is about 5% either way and this has been alluded to by multiple developers working on both consoles.
If you think about it hard enough and embrace denial enough, it may become true.
 

Shio

Member
If the difference is native 4k vs upscaled, it is going to make people want native.

A lot of people apparently just brush over the XB1's pathetic marketing, and the fact that it was significantly weaker AND more expensive.



Oh look, it's irrelevant 1.3 vs 1.3 comparisons.

Maybe try it with 1.3 vs 1.8 next time.

They were just parroting Cerny's talking points.



I guess we're at the "selectively choosing the direction of percentage calculations to downplay the difference" stage.



So again, the XBone was the superior, more powerful console. It had fewer CUs and higher clocks. Apparently developers just decided to render their multiplats at a significantly lower resolution because... reasons... I guess?
So how much faster was the Xbox One CPU compared to the PS4 CPU? It must have been large, right?
 
PS5 is not going to sustain 10 TF; that's just a marketing number, which is theoretical anyway and has little to do with the real world. The power difference is still small.

All I care about is the noise. Sony should be called out if they can't make a quiet console. How many generations of noiseboxes is this now? Especially since MS clearly has the cooling tech sorted out.

You should watch Cerny's presentation again and listen to it carefully (if you can).
 

SatansReverence

Hipster Princess
So how much faster was the Xbox One CPU compared to the PS4 CPU? It must have been large, right?

Someone has missed the sarcasm in the obviously false statement that the XBone was the superior console. That said, it did have a CPU advantage: both had the same processor, but the Xbox had a higher clock.
 

Evilms

Banned
IGN

 

B_Boss

Member
But yeah, if Sony can somehow hit that $399.99 price point, then Microsoft had better have that Lockhart at the ready, because we already saw what happened when Microsoft came out with a console $100 more expensive than its direct competition.

To be fair and honest, this time is significantly different. You had MS's messaging at E3, you had the +$100 price, and you had NS with the technically weaker console as well. A number of factors (including the very important pricing) contributed to the relative disaster that was the XB1's unveiling. Aside from price (we do not have official prices for either console as far as I'm aware), we seem to have a positively different MS this time around 🤔.
 

llien

Member
But yeah, if Sony can somehow hit that $399.99 price point, then Microsoft had better have that Lockhart at the ready, because we already saw what happened when Microsoft came out with a console $100 more expensive than its direct competition.

But that was a much more nuanced clusterfuck:

1) 30% slower than PS4
2) that taste from the "digital only... and how to share games" push
3) +$100 on TOP of it

That being said, I doubt Microsoft would dare to price SeX beyond PS5+$50.
 

TTOOLL

Member
To be fair and honest, this time is significantly different. You had MS's messaging at E3, you had the +$100 price, and you had NS with the technically weaker console as well. A number of factors (including the very important pricing) contributed to the relative disaster that was the XB1's unveiling. Aside from price (we do not have official prices for either console as far as I'm aware), we seem to have a positively different MS this time around 🤔.

There's no way XSX will be cheaper than PS5, right? I mean, if it is so much more powerful it has to be more expensive.
 

Tqaulity

Member
No. Just no.



Cool, but 12.1 tflops of RDNA 2 > "10.2" tflops of RDNA 2.

I find it funny that some users are trying to act as if the XSX is GCN and PS5 is RDNA2.
See... someone had to do it. Where in my entire post did I mention or compare to Xbox? Xbox has more TFLOPs, we all know that bro. Not the point of my post :messenger_winking:
 

Nickolaidas

Member
So how much faster was the Xbox One CPU compared to the PS4 CPU? It must have been large, right?
It was noticeable. Games like Alien Isolation and Tekken ran at a pitiful 720p on the Xbox One, which looks like crap on a 4K TV, while the same games on PS4 ran at 1080p, looking much better and sharper.

So yeah, I can definitely see a lot of multiplat games not running at native 4K on the PS5, but rather at 1440p, which isn't nearly as bad as 720p on a 4K TV. But again, it sucks that the most impressive-looking games on the PS5 PROBABLY won't manage a constant 4K resolution.

But hey, this whole dynamic resolution thing makes the whole conversation kinda moot.
 

Shio

Member
Someone has missed the sarcasm in the obviously false statement that the XBone was the superior console. That said, it did have a CPU advantage: both had the same processor, but the Xbox had a higher clock.
I appreciate your comment, but I was looking for numbers, as just saying "more" or "less" doesn't help much. The numbers I found were:

Xbox One
CPU: 1.75GHz
GPU: 853MHz

PS4
CPU: 1.6GHz
GPU: 800MHz

Make of that what you will.
 

Kagero

Member
The XSX GPU chip will be substantially more costly than the PS5 GPU, but PS5's custom flash and I/O controllers, SSD drive and perhaps audio chip will probably offset most or all of that extra cost. We just have to see.
You forgot the exotic cooling solution. Last I heard, it was this that was driving up cost substantially.
 

B_Boss

Member
There's no way XSX will be cheaper than PS5, right? I mean, if it is so much more powerful it has to be more expensive.

If I had to purely guess based on at least what I think I know, I'd say it'll probably be a bit more expensive than the PS5, but I could (obviously) be wrong. Someone (sorry, I cannot recall who!) here made some interesting points about how the PS5 and its hardware could be costly for the consumer at the end of the day, but it was their opinion, an interesting one nonetheless. My point was that there is more to consider (alongside the +$100) when considering the dark ages at the beginning of Gen 8 for MS.
 

SamWeb

Member
The on-paper difference this time is 16%, too small to notice. BUT real-world performance between the two consoles is almost certainly even closer, because of the PS5's more customised design and absolute focus on features that remove bottlenecks for maximum performance, versus the Series X's brute-force approach and theoretical TFLOPS figure. (TFLOPS is theoretical because many factors can get in the way of reaching the quoted number, and it measures only one part of a GPU's performance, ignoring all other metrics.)

I think the difference in performance is about 5% either way and this has been alluded to by multiple developers working on both consoles.
A 5% difference? You are optimistic))
Closer to reality, the difference is RX 5700 XT vs RX 5600 XT.
 

BluRayHiDef

Banned
This guy knows more than you but not half as much as he thinks he does.

His confused arguments fall apart when you actually listen.

He says arithmetic doesn't matter (it actually does) because Nvidia says it's free, then two minutes later says "that's why it's so important to reduce precision", completely contradicting himself. The whole FP16 TFLOPS rating of the PS5 did not help, because games generally need FP32 most of the time (but not always). The Xbox One X had higher memory bandwidth and more CUs, which is why it easily won head-to-heads. He then goes on to say that communication costs are high (which they are) but fails to acknowledge the way things actually work in games (he probably doesn't know): games are generated using procedural variants of base textures; they are not statically read from the disk. This means hardware decompression, which both consoles have, helps but only to a limited extent (he didn't realise even the Xbox One had it on-chip; he's obviously new to this). He also seems to imply that the RDNA VRAM setup is something only Sony has, even though it's a system feature developed with Microsoft as part of its Velocity Architecture and talked about since before the PS5 reveal.

The total SSD speed doesn't make much difference, because to use it fully and stream static textures consistently you would need 5 gigabytes of data for every second of the game. A game is likely to be 100 gigabytes at most, and you can fit 10 gigabytes of that in RAM at any time. I'll do the math for you: that's about 20 seconds to load all 100 gigabytes, or 40 on the Xbox. Games simply will not be big enough to make this consistent-streaming fallacy real.

Nobody has latency figures for the Xbox SSD vs the PS one; both are using the same RDNA 2 chip with likely the exact same VRAM implementation.

....what we can say is that the ESRAM low-latency memory of the Xbox One did NOT help much, because few coders leveraged it. Each CU has its own caches, so while the PS5 will be clocked higher, the Xbox will have more of them. Also, the memory on the Xbox is significantly faster (again), which by his own measure is very important.

Microsoft's patented enhanced VRS implementation for lighting adds to the simpler VRS the PS5 likely has, to significantly reduce internal cache misses (the communication penalty) and computational requirements.

If you think this video is the be-all and end-all, you're going to be sadly mistaken. But that's fine; that's your opinion, and it is not fact.

1. Variable rate shading enables hardware to determine what portion of a 3D environment is in focus, what portion is in peripheral vision, and what portion is in the background. This lets a GPU render one part of a frame with high precision and the rest with lower precision. For the PS5 and XSX, then, computational costs will indeed be cheap, and consequently less important than memory-access costs, since only a portion of each frame is rendered at full precision. The assertion that computation is effectively free stands.

2. Your assertion that SSD speed doesn't matter because games max out at 100GB is unsubstantiated for games designed around the PS5 and XSX. The lightning-fast SSDs of both consoles, especially the former's, may encourage developers to make games even larger than 100GB, perhaps to an extent that gives the PS5 an advantage in multiplatform games and that the XSX couldn't handle in PS5 exclusives.

3. The XSX's RAM is faster across only 10GB of its 16GB total. If developers design a game that requires the GPU to touch more than 10GB of data within a given window of time, accesses spill into the remaining 6GB, which peaks at only 336GB/s. This is a potential bottleneck (rough blended-bandwidth math below).

4. Traditionally, the data loaded into RAM from storage is data that MAY be used before the system has to load new data. A portion of it can wind up unnecessary (e.g. a player decides not to travel the route whose data was loaded), which is wasted work.

Because the XSX takes longer than the PS5 to load data from its SSD into RAM and then render it, it has to load more data into RAM to cover what MAY be needed at any given time, since it will go longer than the PS5 between refills. This counteracts its larger bandwidth; in other words, its slower SSD is a bottleneck.
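A minimal sketch of the blended-bandwidth concern in point 3, assuming a frame's traffic splits between the two pools (the real memory controller interleaves far more cleverly; the split fractions are invented for illustration):

```python
# Effective bandwidth if a frame's traffic splits between XSX's fast
# (10GB @ 560GB/s) and slow (6GB @ 336GB/s) pools. A simplification:
# real memory controllers schedule far more cleverly than this.
def blended_bandwidth(frac_in_fast_pool, fast=560.0, slow=336.0):
    # Time per GB weighted by where the data lives, then inverted.
    time_per_gb = frac_in_fast_pool / fast + (1 - frac_in_fast_pool) / slow
    return 1.0 / time_per_gb

for frac in (1.0, 0.9, 0.75, 0.5):
    print(f"{frac:4.0%} of traffic in fast pool -> ~{blended_bandwidth(frac):.0f} GB/s effective")
```

Under these assumptions, even 25% of traffic landing in the slow pool drags the effective figure from 560 down to roughly 480GB/s.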
 

Felessan

Member
The XSX GPU chip will be substantially more costly than the PS5 GPU, but PS5's custom flash and I/O controllers, SSD drive and perhaps audio chip will probably offset most or all of that extra cost. We just have to see.
As far as I understand, audio is part of the APU (the GPU, to be exact); it's just some CUs customized for audio (compute) purposes.
The SSD itself will most probably be cheaper than MS's, as it uses lower-density chips, unlike MS's 1TB solution.
 

yewles1

Member
Personally, I believe the potential of the PS5 in multiplats, despite slightly worse visuals, will be in the superior loading and streaming of assets on the fly AND, considering the superior SSD speeds and a certain Sony patent, smaller install sizes. This is all just opinion, of course.
 

SamWeb

Member
1. Variable rate shading enables hardware to determine what portion of a 3D environment is in focus, what portion is in peripheral vision, and what portion is in the background. This lets a GPU render one part of a frame with high precision and the rest with lower precision. For the PS5 and XSX, then, computational costs will indeed be cheap, and consequently less important than memory-access costs, since only a portion of each frame is rendered at full precision. The assertion that computation is effectively free stands.

2. Your assertion that SSD speed doesn't matter because games max out at 100GB is unsubstantiated for games designed around the PS5 and XSX. The lightning-fast SSDs of both consoles, especially the former's, may encourage developers to make games even larger than 100GB, perhaps to an extent that gives the PS5 an advantage in multiplatform games and that the XSX couldn't handle in PS5 exclusives.

3. The XSX's RAM is faster across only 10GB of its 16GB total. If developers design a game that requires the GPU to touch more than 10GB of data within a given window of time, accesses spill into the remaining 6GB, which peaks at only 336GB/s. This is a potential bottleneck (rough blended-bandwidth math below).

4. Traditionally, the data loaded into RAM from storage is data that MAY be used before the system has to load new data. A portion of it can wind up unnecessary (e.g. a player decides not to travel the route whose data was loaded), which is wasted work.

Because the XSX takes longer than the PS5 to load data from its SSD into RAM and then render it, it has to load more data into RAM to cover what MAY be needed at any given time, since it will go longer than the PS5 between refills. This counteracts its larger bandwidth; in other words, its slower SSD is a bottleneck.
LOL!) NO, that's fiction! The minimum memory bandwidth of the XSX is 560GB/s across all 16GB of VRAM.
 

Shio

Member
I think people fail to realize how much of a difference custom silicon could make. Looking at the video about Advanced Mesh Shaders in Xbox Series X, you can see that the amount of model data in both scenes is the same, but the way the hardware renders it makes a huge difference to the render time. We assume hardware doesn't draw things that are hidden/not seen, but that is not always the case, and this is where a lot of power can be wasted. Just being more efficient when rendering, and being able to ignore faces/objects which do not need to be drawn, could help a lot (a toy illustration below).
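A toy CPU-side back-face test illustrates the kind of rejection being described; mesh shaders let this sort of culling happen per-meshlet on the GPU, and this snippet is only an illustration of the principle, not the API:

```python
# Toy back-face culling: a triangle facing away from the camera can be
# skipped before rasterization. This is a CPU illustration of the idea;
# mesh shaders perform this kind of rejection per-meshlet on the GPU.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def is_front_facing(v0, v1, v2, view_dir=(0.0, 0.0, -1.0)):
    normal = cross(sub(v1, v0), sub(v2, v0))  # triangle face normal
    return dot(normal, view_dir) < 0          # facing toward the camera

tri_front = ((0, 0, 0), (1, 0, 0), (0, 1, 0))
tri_back = ((0, 0, 0), (0, 1, 0), (1, 0, 0))  # same triangle, reversed winding
print(is_front_facing(*tri_front), is_front_facing(*tri_back))  # True False
```

Every triangle rejected this early never costs shading work or bandwidth, which is where the "wasted power" in the paragraph above goes.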
 
The on-paper difference this time is 16%, too small to notice. BUT real-world performance between the two consoles is almost certainly even closer, because of the PS5's more customised design and absolute focus on features that remove bottlenecks for maximum performance, versus the Series X's brute-force approach and theoretical TFLOPS figure. (TFLOPS is theoretical because many factors can get in the way of reaching the quoted number, and it measures only one part of a GPU's performance, ignoring all other metrics.)

I think the difference in performance is about 5% either way and this has been alluded to by multiple developers working on both consoles.
The on-paper difference is >20%.
The real-world difference will probably be much bigger in favor of the Series X, because it's a more balanced system with no bottlenecks and more modern GPU features.

Where does that come from, please?
The better question would be WTF that is even supposed to mean.
 

Evilms

Banned

SamWeb

Member
LOL. Correct if the game stays below 10GB. If the CPU and audio data sit in the slow part of RAM (read my previous post), the two are pretty similar, and PS5 might creep up on XSX to the point it makes timdog cry.
Not slower, but "less wide" (if I may say so). Both memory pools share one common bus: ten physical memory chips form a 320-bit bus (10 x 32 = 320) with a maximum bandwidth of 560GB/s.
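That bus description in numbers, assuming 14Gbps GDDR6 pins and the widely reported chip layout (both are public figures, but treat them as assumptions):

```python
# XSX memory per the description above: ten 32-bit GDDR6 chips on a
# 320-bit bus. Assuming 14 Gbps per pin, the widely reported speed.
GBPS_PER_PIN = 14

def bandwidth_gb_s(bus_bits):
    return bus_bits * GBPS_PER_PIN / 8  # bits/s -> bytes/s

print(bandwidth_gb_s(320))  # 560.0 GB/s: all ten chips (the 10GB "fast" span)
print(bandwidth_gb_s(192))  # 336.0 GB/s: the upper 6GB reportedly spans only six chips
```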
 

kensama

Member
The on-paper difference is >20%.
The real-world difference will probably be much bigger in favor of the Series X, because it's a more balanced system with no bottlenecks and more modern GPU features.


The better question would be WTF that is even supposed to mean.


I asked this because on a French forum guys are spreading the idea that anything above 10GB for the GPU is useless (because apparently the XSX will allow a maximum of 10GB for the GPU).
So I explained to them that the XSX's choice of non-unified RAM, unlike the PS5's, could be harmful if developers want more, given that RAM above 10GB is slower on the XSX.
 

Audiophile

Member
I'll reiterate some of my old posts, as more recent frequenters are assuming the Sony design is just overclocking to fix design flaws, when in reality the design inherently maximizes performance of all workloads by making maximum power draw available at all times, as opposed to only making that power available under rarer peak loads. High clocks and variability appear to be inherent to the design, not a band-aid solution.

-------

TF is the theoretical peak amount of floating point operations that one component of a GPU can do in a second. Think of this as the ceiling, not where you spend most of your time (or in reality, any of it).

Running all processors at full clocks doesn't equate to full utilisation and peak power draw. It's primarily the nature of the calculations being done and the utilisation of the threads that will draw more power.

Hence you can run at full clocks and "10.3TF" (but not really) without drawing peak power. When the GPU does approach peak utilisation, and subsequently peak power draw, it can use SmartShift so that the CPU lends some of its power to the GPU, getting it even closer to peak utilisation, which in turn allows it to throw a few more pixels/frames/fx at the screen. It's relatively rare that the CPU and GPU are both fully saturated at the same time in game software, so I expect this to be the dominant and preferable option rather than the GPU clocking down.

Sony are placing their power budget in a place where the system is likely to spend the vast majority of its time, rather than where the higher peaks are. This allows them to run everything except those peaks higher than they otherwise would have given the thermal/financial/power envelope inherent to their design. This also means they don't have to over-engineer the cooling solution for a state that the console is rarely in.

Don't think of this as a base clock and a boost clock. Think of the inverse, with the boost as the base clock: in certain circumstances (very high thread utilisation, power-hungry instructions such as AVX on the CPU) it will come down to a throttle clock.

This is based on power only and not random thermals (a set, uniform thermal ceiling will already be factored into the cooling); it is also deterministic and based around a model SoC. Developers will be able to determine when and where certain changes may occur and how to handle them. I'll hazard a guess they'll have this implemented in their profiling software and will likely be able to automate much of the process with algorithms; and throttling could likely be handled by a small dynamic resolution scale, more aggressive VRS, reduced LOD or something of that nature.

Also, a ~2% reduction in clocks gives a 10% reduction in power, so don't expect massive downclocks beyond that range.
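For reference, the reason a ~2% clock cut can buy ~10% power: dynamic power scales roughly with V²·f, and the top of the frequency curve needs disproportionate voltage. A sketch, where the 4% voltage drop is an assumed figure chosen to illustrate the claimed ratio:

```python
# Dynamic power scales roughly as P ~ C * V^2 * f. Near the top of the
# frequency curve the last few percent of clock need disproportionate
# voltage, so a small downclock buys a big power saving. The 4% voltage
# drop is an assumption picked to illustrate the ~10% figure.
def relative_power(freq_scale, volt_scale):
    return freq_scale * volt_scale ** 2

p = relative_power(0.98, 0.96)  # -2% clock allowing -4% voltage
print(f"relative power: {p:.3f} (~{(1 - p) * 100:.0f}% saved)")  # ~10%
```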

Why variable? Because it at least allows for more than would otherwise be possible with the given piece of hardware and surrounding components, with the caveat that in rarer scenarios decisions may have to be made about where to direct your power budget. It's a smart, economic compromise that turns an existing idea on its head and allows the system to punch a little above its weight while letting resources be put into (what are, in my opinion) more important areas... [Despite mitigating the gap somewhat, XSX is still the more powerful piece of hardware in terms of compute, but I feel there were greater gains to be made elsewhere.]

In addition to this, a constant, known power draw means a constant known thermal footprint, which means you can tailor your cooling system and fan speeds for a singular point; maximising efficiency and acoustics.

It takes a little while to get your head around, as the idea is a reversal of what we've seen elsewhere: the point from which it's built outwards is shifted from variable thermals to fixed power draw, and it's optimised for what the system is likely to do most as opposed to what it does rarely.

Edit: The only thing that points to a 9.2TF version of the PS5 GPU is the GitHub leak, which admittedly was right about the CU count, but was wrong about architecture and feature sets. GitHub also reported a different clock and CU count for the XSX. Clocks tend to be subject to change, as are disabled/active CU counts.

-------

Running at max clocks does not mean it's drawing maximum power. Utilization is being conflated with clock frequency; they are not the same thing.

Think...

PS5:
Constant Power Draw: Max Clockspeed = Common Workload
Constant Power Draw: Reduced Clockspeed = Complex Workload

XSX:
Reduced Power Draw: Constant Clockspeed = Common Workload
Increased Power Draw: Constant Clockspeed = Complex Workload

Now let's say you spend >90% of your time on the Common Workload, which box is making more efficient use of its total computational capability?

As console design convention dictates, MS' solution is over-engineered for the rarer Complex Workload but they could in theory increase the clocks for their Common Workload because there will be a surplus of power available in that state. They're literally leaving power (performance) on the table for the Common Workload.

Sony's solution is using all of the power (performance) available to them in both states, and more importantly, in the state where it's likely to spend most of its time. In addition, SmartShift can be used to mitigate pullbacks further when the CPU has power left over. Even if the GPU is at a full 2.23GHz already, it can still benefit from more power if you intend to run more power-hungry calculations that saturate more of the silicon.


The XSX is a box that will spend most of its time punching a little bit below its weight.

The PS5 is a box that will spend most of its time punching a little bit above its weight.

-------
 

BluRayHiDef

Banned
The on-paper difference is >20%.
The real-world difference will probably be much bigger in favor of the Series X, because it's a more balanced system with no bottlenecks and more modern GPU features.


The better question would be WTF that is even supposed to mean.

PlayStation 5 GPU clock speed = 2,230MHz
PlayStation 5 GPU = 2,304 shader cores
PlayStation 5 throughput = 2,230MHz x 2,304 cores = 5,137,920 million shader-cycles per second

Xbox Series X GPU clock speed = 1,825MHz
Xbox Series X GPU = 3,328 shader cores
Xbox Series X throughput = 1,825MHz x 3,328 cores = 6,073,600 million shader-cycles per second

(Strictly this is per-clock shader throughput rather than a texel fill rate; multiply each by 2 FLOPs per core per cycle and you get the familiar 10.28 and 12.15 TFLOPS figures.)

Percentage difference = (5,137,920 / 6,073,600) x 100 = 84.6% -> 100% - 84.6% = 15.4%

The PlayStation 5's shader throughput is 15.4% less than that of the Xbox Series X.
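The same gap reads differently depending on which way you divide, which is the "selectively choosing the direction of percentage calculations" point from earlier in the thread; a quick check of both directions:

```python
# The same gap expressed both ways, using the clock x core products above.
ps5 = 2230 * 2304  # 5,137,920
xsx = 1825 * 3328  # 6,073,600

print(f"PS5 is {(1 - ps5 / xsx) * 100:.1f}% below XSX")  # ~15.4%
print(f"XSX is {(xsx / ps5 - 1) * 100:.1f}% above PS5")  # ~18.2%
```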
 

SamWeb

Member
"The XSX is a box that will spend most of its time punching a little bit below its weight.

The PS5 is a box that will spend most of its time punching a little bit above its weight."


when it comes to power consumption.))
 

Audiophile

Member
"The XSX is a box that will spend most of its time punching a little bit below its weight.

The PS5 is a box that will spend most of its time punching a little bit above its weight."


when it comes to power consumption.))

I agree, as long as it's understood that it's not about eco-friendliness, but about that power being directed to the APU to give the system more performance.

Let's say, for example, that the PS5 and XSX both have a 180W limit for their respective APUs...

The PS5 APU can utilise that 180W at all times, regardless of load.

The XSX APU can only utilise that 180W in edge cases with extremely high load, and may use, say, ~150W most of the time in general workloads. The rest of the time the system is literally leaving power, and the performance it would buy, on the table.

PS5 is still less powerful but it is making more out of what it has.
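Putting the post's hypothetical watts into one comparison (all numbers are the post's illustrative assumptions, not measurements):

```python
# Fraction of the APU power budget exercised during a common workload,
# using the post's hypothetical 180W budget and ~150W typical XSX draw.
BUDGET_W = 180
ps5_common_w = 180  # variable clocks keep draw pinned at the cap
xsx_common_w = 150  # fixed clocks leave headroom below the cap

for name, watts in (("PS5", ps5_common_w), ("XSX", xsx_common_w)):
    print(f"{name}: {watts}/{BUDGET_W}W used -> {watts / BUDGET_W:.0%} of budget")
```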
 

splattered

Member

Well there goes the whole "I'll buy a ps5 because it will be cheaper" theory out the window...

A surprisingly close split between XSX and PS5, closer than I would have thought.

If MS comes out swinging, firing on all cylinders with Halo Infinite and others at launch, and Sony doesn't have anything new at launch besides enhanced versions of games that came out this spring and summer, they may very well be in trouble.
 

SamWeb

Member
I agree, as long as it's understood that it's not about eco-friendliness, but about that power being directed to the APU to give the system more performance.

Let's say, for example, that the PS5 and XSX both have a 180W limit for their respective APUs...

The PS5 APU can utilise that 180W at all times, regardless of load.

The XSX APU can only utilise that 180W in edge cases with extremely high load, and may use, say, ~150W most of the time in general workloads. The rest of the time the system is literally leaving power, and the performance it would buy, on the table.

PS5 is still less powerful but it is making more out of what it has.
Try recalculating this based on the ratio of the difference in computing power:

10.28 TF PS5 -> (conditionally) 180W power consumption
12.14 TF XSX -> (conditionally) 200W power consumption

(200 - 180) / 200 = 10%
(12.14 - 10.28) / 12.14 ≈ 15%

(plus 3.5 vs 3.6GHz processor speed)

The PS5 chip was simply overclocked. The PS5's SSD is also likely to consume more energy and generate more heat. In any case, we should wait for real tests.
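The ratio argument done explicitly (the wattages are the post's "conditional" assumptions, not measurements):

```python
# TFLOPS per watt under the post's assumed ("conditional") power figures.
ps5_tf, ps5_w = 10.28, 180
xsx_tf, xsx_w = 12.14, 200

print(f"PS5: {ps5_tf / ps5_w:.4f} TF/W")  # ~0.0571
print(f"XSX: {xsx_tf / xsx_w:.4f} TF/W")  # ~0.0607
# Gaps computed relative to the larger value, as in the post:
print(f"compute gap: {1 - ps5_tf / xsx_tf:.0%}, power gap: {1 - ps5_w / xsx_w:.0%}")
```

Under these assumptions the compute gap (~15%) exceeds the power gap (~10%), which is the post's point.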
 

Neo Blaster

Member
The on-paper difference is >20%.
The real-world difference will probably be much bigger in favor of the Series X, because it's a more balanced system with no bottlenecks and more modern GPU features.
When I see PC-focused YouTubers discussing how Cerny spent so much effort to mitigate bottlenecks and use resources efficiently, with not a single mention of anything similar for the XSX besides the godly BCPack, I just have to laugh at such posts.

And both are using RDNA 2 GPUs, so can you please tell me what more modern GPU features the XSX is using that the PS5 is not?
 

Chumpion

Member
Also, a ~2% reduction in clocks gives a 10% reduction in power, so don't expect massive downclocks beyond that range.

People are pointing that out like it's a good thing. It's actually a completely insane ratio.

Still, the only thing I don't like is the thermals. All this talk about TF will be forgotten when the games are shown, especially Sony first-party, because they will be mind-blowing.
 