
Next-Gen PS5 & XSX |OT| Console tEch threaD

Status
Not open for further replies.

azertydu91

Hard to Kill
No, The Rock is more than perfect, and he already has a good relationship with Sony Pictures; he even made a silly game on PS4 for Jumanji. But I'd rather have the voice actor of GOW PS4. He has a massive body and a majestic voice.
His face is too nice, too soft for the role, but he definitely has the body. And if there were a The Last of Us movie, we all know which actor would be perfect for Joel... Hugh Jackman, and for those disagreeing, go watch Prisoners instead of being wrong.
 

azertydu91

Hard to Kill
Jason Momoa seems like the better choice to me
giphy.gif
 

Bo_Hazem

Banned
I may eventually get a PS5 one day, and yeah, I am a pretty diehard Sony fan; I just don't care about anything they have for now.

I do plan on getting a new prebuilt PC with a 3080 and will probably play 90% of my games there.

Honestly it's hard for me to compare the D-pad because I don't use stock controllers. I am a big fan of the Razer Wolverine Ultimate, and I think it destroys anything Xbox has to offer.

I loved you, man. This is unfair...

mSGHltI.gif
 

PaintTinJr

Member
There are more parts dissipating heat than the GPU/CPU. The PS5 has a power-hungry, faster SSD and a few extra fixed-function units.
I considered that before floating my thought, but the XsX's SSD is a standard PCIe NVMe solution (the same class of setup Nvidia's RTX IO targets on PC), so it is either 10 watts or 25 watts for their 2.4 GB/s - and it is probably less power efficient than the PS5 SSD, given that Xbox made noise about cooling and performance when SSDs get hot from high throughput. So that still wouldn't offset the difference in PSU wattage IMHO, and the SSD in each is probably using less than 20 watts.

The TDP of the XsX chip was something Microsoft didn't want people to know; they deflected the Hot Chips question with an excuse akin to suggesting it was too complicated to explain, despite the audience. The additional units on the PS5 CPU side of the APU will increase the TDP, but IMO only by a small amount, in the 5-10 watt range on 7nm.

Ultimately the GPU silicon is by far the most power-hungry item in both consoles, and with the PS5 APU having just 70% of the CUs of the XsX, and the XsX GPU clock being 82% of the PS5's maximum clock, the power needed for work done in each system will have been chosen by their engineering teams based on real-world capability - the XsX presumably done the way Cerny explained the PS4 Pro's power/cooling was designed in the Road to PS5 talk.
Xbox have played the "fixed clocks" line, which, if sincere, should mean that at a maximum throughput comparable to the PS5's it would probably need a 450 watt PSU, because the PS5's power usage is a paradigm shift - deterministic power, as Cerny describes it, removing the unknowns so they aren't doing guesswork - and so it doesn't need the PSU headroom they needed in their other consoles (or the Xbox designs).

On that basis, the PS5 will be using proportionally more of its PSU power than the PS4 Pro, and deterministically using more all the time because of the constant boosting, and it needs to be transforming that power into actual work done to fit the new paradigm for cooling.

IMO, even if we account for some of the differences in external power output between the XsX and PS5 - the PS5 looks to have a USB 3.0 Type-A hub (10 watts max?) on the rear and a USB Type-C hub on the front (36 watts max?), and the internal NVMe upgrade can require 25 watts like PCIe.
If by comparison the XsX only offers up 40-50 watts of that 70 watts - by not doing the NVMe option or supplying the 36 watts for USB Type-C - then, if the actual internal XsX hardware were more power hungry for work done, I'd still expect those other 16 CUs (14 CUs if we say Tempest is 2 CUs) to push the XsX APU TDP 40-50 watts higher. But seemingly it doesn't, and it will have the same PSU headroom as the Pro or X1X designs did.

So, I think this is the biggest indicator that, despite the XsX's fixed clocks, the CU occupancy at those clocks will be low enough for that system to use less power than the lower-TFLOPs PS5.
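
For reference, a quick sketch of the ratios quoted above, using the publicly stated specs (PS5: 36 CUs at up to 2.23 GHz; XsX: 52 CUs fixed at 1.825 GHz) and the standard CUs x 64 lanes x 2 ops x clock formula for peak FP32 compute - the figures are public, the framing is mine:

ps5_cus, ps5_ghz = 36, 2.23
xsx_cus, xsx_ghz = 52, 1.825
tflops = lambda cus, ghz: cus * 64 * 2 * ghz / 1000  # 64 FP32 lanes per CU, 2 ops per FMA

print(f"CU ratio PS5/XsX:    {ps5_cus / xsx_cus:.0%}")   # ~69%, the "70% of the CUs" above
print(f"Clock ratio XsX/PS5: {xsx_ghz / ps5_ghz:.0%}")   # ~82%
print(f"Peak compute: PS5 {tflops(ps5_cus, ps5_ghz):.2f} TF vs XsX {tflops(xsx_cus, xsx_ghz):.2f} TF")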
 
Last edited:
Hitman is a game that runs at 1080p with identical graphics settings on both XB1 and PS4, and the frame-rate also happens to be uncapped. So we can see how both consoles perform:

ezdnBv0.jpg


The PS4 has 40% more TFLOPs, but real-world performance translates to a bit less than that in this instance.
Interesting example.

I posted a video earlier from DF with a test on Hitman, trying to illustrate both the performance difference clock speed produces AND the performance difference CUs produce.

- On a 36 CU RDNA GPU, going from 1.9 GHz to 2.1 GHz produces only a 6% advantage
- While a 40 CU RDNA card at 1.8x GHz already outperforms the 36 CU GPU at 2.1 GHz, with only 4 CUs more, AND at the same teraflops

(Edit: For those of you who are not able to understand and put smiley emoticons on this post: here, basically, DF disproves Cerny's "rising tide" bull-theorem.)

Although the entire video segment is important and quite eye-opening, you can jump to just before the 7-minute point for the test.
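
To make the "same teraflops" point concrete, here is a minimal arithmetic sketch, assuming the 40 CU card in the test was run at roughly 1.89 GHz (the "1.8x GHz" above), which is the clock that puts both configurations at identical paper TFLOPs:

tflops = lambda cus, ghz: cus * 64 * 2 * ghz / 1000  # peak FP32: CUs x 64 lanes x 2 ops x clock

narrow_fast = tflops(36, 2.10)  # 36 CU part pushed to 2.1 GHz  -> ~9.68 TF
wide_slow   = tflops(40, 1.89)  # 40 CU part at ~1.89 GHz       -> ~9.68 TF
print(narrow_fast, wide_slow)   # same paper spec, yet DF measured the wider card ahead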
 
Last edited:

geordiemp

Member
I've posted this before in this thread long ago.

pcy2z0g.png


I've pro-rated the memory on the XSX, as I'm not sure how else you can make the comparison.

Visualizing helps.

Memory bandwidth feeds the caches, and the speeds all the way to the CUs are also important.

You're missing how all the CUs are being fed work: the cache speeds, the cache sizes, and how wide or fast they are.

The XSX's L2 is 5 MB and feeds 4 shader arrays and 4 x L1; each L1 feeds 14 CUs.

The PS5's L2 size is unknown; it also feeds 4 shader arrays and 4 x L1, and each L1 feeds 10 CUs.

With 20% faster clocks you need to wait and see; they could be closer, they may not be...?
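
As a rough illustration of the "how the CUs are fed" point (a sketch using only the public array layout and main-memory bandwidth; the PS5's cache sizes are, as said, unpublished):

# 4 shader arrays per GPU; XSX has 56 physical CUs (52 active), PS5 has 40 (36 active)
xsx = {"cus_per_array": 56 // 4, "active_cus": 52, "bw_gbs": 560}  # fast 10 GB pool
ps5 = {"cus_per_array": 40 // 4, "active_cus": 36, "bw_gbs": 448}

for name, c in (("XSX", xsx), ("PS5", ps5)):
    print(name, f"- {c['cus_per_array']} CUs per L1/array,",
          f"{c['bw_gbs'] / c['active_cus']:.1f} GB/s of main-memory bandwidth per active CU")

By this crude per-CU measure the narrower, faster PS5 GPU is not obviously starved, which is the "wait and see" point.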
 

Kerlurk

Banned
Memory bandwidth feeds the caches, and the speeds all the way to the CUs are also important.

You're missing how all the CUs are being fed work: the cache speeds, the cache sizes, and how wide or fast they are.

The XSX's L2 is 5 MB and feeds 4 shader arrays and 4 x L1; each L1 feeds 14 CUs.

The PS5's L2 size is unknown; it also feeds 4 shader arrays and 4 x L1, and each L1 feeds 10 CUs.

With 20% faster clocks you need to wait and see; they could be closer, they may not be...?

I know, I've seen posts bringing up those differences.

Overall, I think both consoles are a bit of a wash. Different approaches, but similar results.

Just wanted to point out that the differences between the XSX and PS5 are smaller than between the PS4 Pro and X1X.
 

Aceofspades

Banned
I've posted this before in this thread long ago.

pcy2z0g.png


I've pro-rated the memory on the XSX, as I'm not sure how else you can make the comparison.
((10/16)*560) + ((6/16)*336) = 476 GB/s

Yes, I know you mentioned the original PS4 and X1X, but have not made the comparison.

Visualizing helps.

CPU differences should be 100/97.
You didn't use single-threaded as your base for the XSX?

Yup, you just compared the single-threaded 3.8 GHz to SMT at 3.5 GHz on PS5. It should have been 3.6 vs 3.5.
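
Spelling out the two calculations in this exchange - the quoted pro-rating formula and the clock ratios - as a quick check:

xsx_prorated_bw = (10 / 16) * 560 + (6 / 16) * 336  # = 476 GB/s, as in the quote
print(xsx_prorated_bw)

# CPU clocks: XSX runs 3.6 GHz with SMT on (3.8 GHz with SMT off), PS5 up to 3.5 GHz with SMT on
print(f"3.6 vs 3.5 (both SMT on):          PS5 at {3.5 / 3.6:.0%} of XSX")  # ~97%, the 100/97 above
print(f"3.8 vs 3.5 (mismatched SMT modes): PS5 at {3.5 / 3.8:.0%} of XSX")  # ~92%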
 
Last edited:

Kerlurk

Banned
you will be disappointed.

I already own a PS4 Pro rather than an X1X, and I have no regrets at all, so I fail to see how a smaller difference in the next generation is going to make me disappointed.

If it's all about power, I hope you're getting a high-end PC over the XSX, or you're going to be disappointed.

Nope. The differences are bigger.

Because you say so??? Explain how the differences are greater.
 
Last edited:
On a different process with a different architecture. You don't know how PS5's RDNA2 architecture scales on TSMC's more advanced 7nm node.
:messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy:
So performance will scale up only for the PlayStation 5? Not for the Xbox?
:messenger_tears_of_joy::messenger_tears_of_joy::messenger_tears_of_joy::messenger_tears_of_joy::messenger_tears_of_joy::messenger_tears_of_joy:
And this test is as sweet for Sony as it can be, because the new architecture indeed boasts some important changes that boost performance,
and while we know that Xbox includes them, official Sony has remained silent about them for 6 months. (A Sony architect already said they are not in, but "official Sony", aka Cerny, has simply remained dead silent.)
Oh, and to the above recipe for "success", add the variable clocks.
 

PaintTinJr

Member
To help you understand "what's next", just watch the video I posted above, and try to understand it IF you can
The video is a bit pointless IMO, as so much of what is being compared is opaque about what causes the problems. If this were on open-source Linux drivers then I suspect the results would be different, but we still can't rule out that AMD does all this type of testing long before DF gets a chance, and that they limit performance in firmware to ensure that a heavily clocked standard (non-XT) card doesn't outperform the XT version.

Nothing about what is being compared is scientifically sound, because we can't even rule out that closed-source DirectX has been specifically altered to play into a negative narrative like this. As mentioned, with open-source Linux and GPU drivers, OpenGL, even if using WINE, we'd be getting closer to testing that looked transparent - apart from the GPU firmwares.
 

Lethal01

Member
Interesting example.

I posted a video earlier from DF with a test on Hitman, trying to illustrate both the performance difference clock speed produces AND the performance difference CUs produce.

- On a 36 CU RDNA GPU, going from 1.9 GHz to 2.1 GHz produces only a 6% advantage
- While a 40 CU RDNA card at 1.8x GHz already outperforms the 36 CU GPU at 2.1 GHz, with only 4 CUs more, AND at the same teraflops

(Edit: For those of you who are not able to understand and put smiley emoticons on this post: here, basically, DF disproves Cerny's "rising tide" bull-theorem.)

Although the entire video segment is important and quite eye-opening, you can jump to just before the 7-minute point for the test.


What were their results when they did this test on RDNA2?

We don't know how much of a boost higher clocks will provide on the new architecture.
 
Last edited:

Kerlurk

Banned
CPU differences should be 100/97.
You didn't use single-threaded as your base for the XSX?

Yup, you just compared the single-threaded 3.8 GHz to SMT at 3.5 GHz on PS5. It should have been 3.6 vs 3.5.

Yeah I know, but that CPU difference is so small, it's going to make no difference in any game.

SMT usually gives you about 15% extra performance, making the comparison to the single-threaded XSX option more or less even.
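
A crude sketch of that reasoning - the ~15% SMT uplift is the rule of thumb above, and clock x uplift is only a rough proxy for aggregate throughput, so treat the numbers as illustrative:

smt_gain = 1.15                    # assumed multithreaded uplift from SMT
ps5_smt_on  = 3.5 * smt_gain       # ~4.03 "GHz-equivalents" of aggregate throughput
xsx_smt_on  = 3.6 * smt_gain       # ~4.14
xsx_smt_off = 3.8                  # SMT disabled: higher clock, one thread per core

print(f"PS5 (SMT on) vs XSX (SMT on):  {ps5_smt_on / xsx_smt_on:.0%}")   # ~97%, a tiny gap
print(f"PS5 (SMT on) vs XSX (SMT off): {ps5_smt_on / xsx_smt_off:.0%}")  # ~106%, i.e. roughly a wash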
 
Last edited:
The video is a bit pointless IMO, as so much of what is being compared is opaque about what causes the problems. If this were on open-source Linux drivers then I suspect the results would be different, but we still can't rule out that AMD does all this type of testing long before DF gets a chance, and that they limit performance in firmware to ensure that a heavily clocked standard (non-XT) card doesn't outperform the XT version.

Nothing about what is being compared is scientifically sound, because we can't even rule out that closed-source DirectX has been specifically altered to play into a negative narrative like this. As mentioned, with open-source Linux and GPU drivers, OpenGL, even if using WINE, we'd be getting closer to testing that looked transparent - apart from the GPU firmwares.
In the video, bandwidth is clearly described as one of the problems: even if you could clock higher, you'd still get close to zero benefit.
Bandwidth on the 5700 is the same as in the PlayStation 5.

P.S. I had a good laugh at your comment that "DirectX may have been specifically altered to play a role" :messenger_beaming:
 

jonnyp

Member
:messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy: :messenger_tears_of_joy:
So performance will scale up only for the PlayStation 5? Not for the Xbox?
:messenger_tears_of_joy::messenger_tears_of_joy::messenger_tears_of_joy::messenger_tears_of_joy::messenger_tears_of_joy::messenger_tears_of_joy:
And this test is as sweet for Sony as it can be, because the new architecture indeed boasts some important changes that boost performance,
and while we know that Xbox includes them, official Sony has remained silent about them for 6 months. (A Sony architect already said they are not in, but "official Sony", aka Cerny, has simply remained dead silent.)
Oh, and to the above recipe for "success", add the variable clocks.

What on earth are you even talking about anymore?
 
Is Jim Ryan becoming the Don Mattrick of Sony and why/why not
For Sony? In absolutely no way. He brings in the monies; as long as he does that, that's the only corpo measure of his success.
For customers? We shall see...


What on earth are you even talking about anymore?
Although I always try to write in a very simple and explanatory manner, it is not my obligation to make sure every reader understands.
Some people understand, some people just won't understand.
And that's life for you...
 
Last edited:
I considered that before floating my thought, but the XsX's SSD is a standard PCIe NVMe solution (the same class of setup Nvidia's RTX IO targets on PC), so it is either 10 watts or 25 watts for their 2.4 GB/s - and it is probably less power efficient than the PS5 SSD, given that Xbox made noise about cooling and performance when SSDs get hot from high throughput. So that still wouldn't offset the difference in PSU wattage IMHO, and the SSD in each is probably using less than 20 watts.

The TDP of the XsX chip was something Microsoft didn't want people to know; they deflected the Hot Chips question with an excuse akin to suggesting it was too complicated to explain, despite the audience. The additional units on the PS5 CPU side of the APU will increase the TDP, but IMO only by a small amount, in the 5-10 watt range on 7nm.

Ultimately the GPU silicon is by far the most power-hungry item in both consoles, and with the PS5 APU having just 70% of the CUs of the XsX, and the XsX GPU clock being 82% of the PS5's maximum clock, the power needed for work done in each system will have been chosen by their engineering teams based on real-world capability - the XsX presumably done the way Cerny explained the PS4 Pro's power/cooling was designed in the Road to PS5 talk.
Xbox have played the "fixed clocks" line, which, if sincere, should mean that at a maximum throughput comparable to the PS5's it would probably need a 450 watt PSU, because the PS5's power usage is a paradigm shift - deterministic power, as Cerny describes it, removing the unknowns so they aren't doing guesswork - and so it doesn't need the PSU headroom they needed in their other consoles (or the Xbox designs).

On that basis, the PS5 will be using proportionally more of its PSU power than the PS4 Pro, and deterministically using more all the time because of the constant boosting, and it needs to be transforming that power into actual work done to fit the new paradigm for cooling.

IMO, even if we account for some of the differences in external power output between the XsX and PS5 - the PS5 looks to have a USB 3.0 Type-A hub (10 watts max?) on the rear and a USB Type-C hub on the front (36 watts max?), and the internal NVMe upgrade can require 25 watts like PCIe.
If by comparison the XsX only offers up 40-50 watts of that 70 watts - by not doing the NVMe option or supplying the 36 watts for USB Type-C - then, if the actual internal XsX hardware were more power hungry for work done, I'd still expect those other 16 CUs (14 CUs if we say Tempest is 2 CUs) to push the XsX APU TDP 40-50 watts higher. But seemingly it doesn't, and it will have the same PSU headroom as the Pro or X1X designs did.

So, I think this is the biggest indicator that, despite the XsX's fixed clocks, the CU occupancy at those clocks will be low enough for that system to use less power than the lower-TFLOPs PS5.
And given the same power efficiency, more power consumption *usually* means more powerful. Pure speculation :messenger_halo:
 