
Next-Gen PS5 & XSX |OT| Console tEch threaD

Even funnier with all this real RDNA2 fuss ;)




Yeah, the test is interesting because they kept clocks the same on all 4 cards.

When compared to the full 80 CU Navi 21:
50% CUs = 60-68% FPS
75% CUs = 80-86% FPS
90% CUs = 93-94% FPS

So even at the same clocks, adding more CUs brings clear diminishing returns. They went from 40 to 60 CUs at the same 2.0 GHz (and increased bandwidth), so that's a 50% increase in shaders, yet only saw about a 33% increase in average performance. That implies a CU-scaling efficiency of roughly 1.33 / 1.5 ≈ 0.88.

Therefore, if one were to add, say, 44% more CUs but also lower the clock speed by almost 20%, one could expect almost no improvement in performance.

1.0 × 1.5 × 0.88 ≈ 1.33 (going from 40 to 60 CUs at the same clocks: just a 33% perf gain)
0.818 × 1.44 × 0.88 ≈ 1.04 (36 vs 52 CUs at 2.23 vs 1.825 GHz would give just a 4% perf gain, all else being equal)
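
As a quick sanity check, here is a minimal sketch of that same back-of-the-envelope arithmetic (the 0.88 efficiency factor is just the value inferred above from the 40 → 60 CU result, not a measured constant):

```python
# Back-of-the-envelope GPU scaling model built from the numbers above.
# Assumption: perf scales linearly with clock, while a CU increase is
# discounted by the ~0.88 efficiency observed in the 40 -> 60 CU test.

CU_EFFICIENCY = 0.88  # ~1.33x observed / 1.5x theoretical

def relative_perf(cus, clock_ghz, base_cus, base_clock_ghz):
    """Estimated perf of (cus, clock) relative to a baseline config."""
    return (cus / base_cus) * CU_EFFICIENCY * (clock_ghz / base_clock_ghz)

# 40 -> 60 CUs at the same 2.0 GHz: ~1.32x, matching the ~33% gain above.
print(relative_perf(60, 2.0, base_cus=40, base_clock_ghz=2.0))

# 52 CUs @ 1.825 GHz vs 36 CUs @ 2.23 GHz: ~1.04x, the ~4% figure above.
print(relative_perf(52, 1.825, base_cus=36, base_clock_ghz=2.23))
```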

And remember, when they went from 40 to 60 CUs for Navi 22/21, they kept the same 10 CUs per shader array, like PS5, instead of just adding more CUs into each shader array like they did for XSX. They doubled the shader engines and everything else when going from Navi 22 to Navi 21. Neither console has the 4 SEs of Navi 21. So that minor 4% perf gain could be even smaller, i.e. zero, if adding more CUs per SA decreases efficiency. Not to mention that various other things, like split memory bandwidth or slower I/O, could bottleneck performance.

[charts]


So, are we back to the discussion about this from around a month ago?

In fact, it is not! The clock would be higher if they went over 25%, right? And power consumption lower?

But anyway, the same from Anandtech:



and

Here is the official full HotChips conference for XSX.






Timestamped at 16:50

 
Yeah, the test is interesting because they kept clocks the same on all 4 cards.

When compared to the full 80 CU Navi 21:
50% CUs = 60-68% FPS
75% CUs = 80-86% FPS
90% CUs = 93-94% FPS

So even at the same clocks, adding more CUs brings clear diminishing returns. They went from 40 to 60 CUs at the same 2.0 GHz (and increased bandwidth), so that's a 50% increase in shaders, yet only saw about a 33% increase in average performance. That implies a CU-scaling efficiency of roughly 1.33 / 1.5 ≈ 0.88.

Therefore, if one were to add, say, 44% more CUs but also lower the clock speed by almost 20%, one could expect almost no improvement in performance.

1.0 × 1.5 × 0.88 ≈ 1.33 (going from 40 to 60 CUs at the same clocks: just a 33% perf gain)
0.818 × 1.44 × 0.88 ≈ 1.04 (36 vs 52 CUs at 2.23 vs 1.825 GHz would give just a 4% perf gain, all else being equal)

And remember, when they went from 40 to 60 CUs for Navi 22/21, they kept the same 10 CUs per shader array, like PS5, instead of just adding more CUs into each shader array like they did for XSX. They doubled the shader engines and everything else when going from Navi 22 to Navi 21. Neither console has the 4 SEs of Navi 21. So that minor 4% perf gain could be even smaller, i.e. zero, if adding more CUs per SA decreases efficiency. Not to mention that various other things, like split memory bandwidth or slower I/O, could bottleneck performance.

[charts]

I'm not sure this comparison is all that meaningful, simply because bandwidth is ultimately not taken into account in the equation.
They say the 6700 XT has less memory bandwidth and less Infinity Cache and that this can't be compensated for, but in reality that's not how it should be seen, because the 6700 XT is actually the least limited on that front. For the 40 CU GPU you have 384 GB/s of bandwidth + 96 MB of Infinity Cache (so 9.6 GB/s + 2.4 MB of IC per CU) vs 512 GB/s + 128 MB of IC for the 60 CU GPU (so 8.53 GB/s + 2.13 MB of IC per CU). For the 72 and 80 CU versions it's worse, because they keep the same memory bandwidth and the same IC amount for each. It's obvious from that point that they won't scale perfectly; it's the same problem you'll encounter if you increase the GPU frequency too much without increasing bandwidth. If you want a good comparison, you need to increase the bandwidth in step with the CU count.
I'm not here to argue against "a frequency increase has more impact than a simple CU increase" (I agree with that point), just commenting on how meaningful the comparison is.
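
For reference, the per-CU arithmetic above works out as follows (a quick sketch using the specs cited in the post):

```python
# Bandwidth and Infinity Cache per CU for the four Navi 21/22 configs
# cited above (384 GB/s + 96 MB IC for the 40 CU card; 512 GB/s +
# 128 MB IC shared by the 60, 72, and 80 CU cards).
configs = [
    ("40 CU", 40, 384, 96),
    ("60 CU", 60, 512, 128),
    ("72 CU", 72, 512, 128),
    ("80 CU", 80, 512, 128),
]

for name, cus, bw_gbs, ic_mb in configs:
    print(f"{name}: {bw_gbs / cus:.2f} GB/s per CU, "
          f"{ic_mb / cus:.2f} MB IC per CU")
```

The 40 CU card comes out ahead on both metrics (9.60 GB/s and 2.40 MB per CU vs 6.40 GB/s and 1.60 MB per CU for the 80 CU card), which is the poster's point about the bigger parts being the more bandwidth-starved ones.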
 

LiquidRex

Member

This is a big disappointment for me: with the CPU capabilities of the new consoles being less restrictive than last gen's, and the highly enjoyable destructive experience of Just Cause... they decide to take the franchise into mobile gaming. 🔥🤦‍♂️

I just hope there will be a Just Cause for consoles in the near future.
 

SlimySnake

Flashless at the Golden Globes
Even funnier with all this real RDNA2 fuss ;)
Well, that's a bit different, I think. If the XSX has mesh shaders, which are a more advanced form of primitive shaders, then there could be an advantage there. We have seen just how amazing those mesh shader benchmarks are. A 1700% performance increase? That's massive.

That said, these findings are very curious. You can see that the scaling gets better at higher resolutions, but that could just be due to the limited memory bandwidth (384 GB/s vs 512 GB/s) on the 6700 XT. Or the smaller Infinity Cache. But what's more curious is that this is with all the clocks being the same. I would love to see them do a test with the 60 CU 6800 at 1.6 GHz (~12.3 TFLOPs) and the 40 CU at 2 GHz (~10.2 TFLOPs) and see what results they get. That could help explain why the PS5 is able to keep up with the XSX in at least some of the games.

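For what it's worth, those TFLOPs figures fall out of the standard formula (a sketch assuming RDNA2's 64 shaders per CU and 2 FP32 FLOPs per clock):

```python
def fp32_tflops(cus, clock_ghz, shaders_per_cu=64, flops_per_clock=2):
    """FP32 TFLOPs = CUs * shaders/CU * FLOPs/clock * clock (GHz) / 1000."""
    return cus * shaders_per_cu * flops_per_clock * clock_ghz / 1000

print(fp32_tflops(60, 1.6))  # ~12.3 TFLOPs: hypothetical downclocked 6800
print(fp32_tflops(40, 2.0))  # ~10.2 TFLOPs: the 40 CU card at 2 GHz
```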
 

dwish

Member
BGs BGs, I guess you probably aren't allowed to say, but I feel the chances of PSVR2 supporting BC for PSVR1 are really slim due to the different tracking methods and controllers?
 

assurdum

Banned
Well, that's a bit different, I think. If the XSX has mesh shaders, which are a more advanced form of primitive shaders, then there could be an advantage there. We have seen just how amazing those mesh shader benchmarks are. A 1700% performance increase? That's massive.

That said, these findings are very curious. You can see that the scaling gets better at higher resolutions, but that could just be due to the limited memory bandwidth (384 GB/s vs 512 GB/s) on the 6700 XT. Or the smaller Infinity Cache. But what's more curious is that this is with all the clocks being the same. I would love to see them do a test with the 60 CU 6800 at 1.6 GHz (~12.3 TFLOPs) and the 40 CU at 2 GHz (~10.2 TFLOPs) and see what results they get. That could help explain why the PS5 is able to keep up with the XSX in at least some of the games.

I doubt the PS5 lacks mesh shader support entirely.
 
Well, that's a bit different, I think. If the XSX has mesh shaders, which are a more advanced form of primitive shaders, then there could be an advantage there. We have seen just how amazing those mesh shader benchmarks are. A 1700% performance increase? That's massive.

That said, these findings are very curious. You can see that the scaling gets better at higher resolutions, but that could just be due to the limited memory bandwidth (384 GB/s vs 512 GB/s) on the 6700 XT. Or the smaller Infinity Cache. But what's more curious is that this is with all the clocks being the same. I would love to see them do a test with the 60 CU 6800 at 1.6 GHz (~12.3 TFLOPs) and the 40 CU at 2 GHz (~10.2 TFLOPs) and see what results they get. That could help explain why the PS5 is able to keep up with the XSX in at least some of the games.

The 6700 XT has less L1 cache and Infinity Cache, fewer rasterizers, ROPs, tessellators, and shader arrays, and less memory bandwidth.

The PS5 has the same L1 cache, the same ROPs, the same shader arrays, and more rasterizers, if you compare it to the Xbox Series X.
 

kyliethicc

Member
BGs BGs, I guess you probably aren't allowed to say, but I feel the chances of PSVR2 supporting BC for PSVR1 are really slim due to the different tracking methods and controllers?

I hope you are wrong.

I assume it will vary by game, but I'd guess no, it won't be for most games. Of course, all PS5 VR games will require the new next-gen hardware. Maybe devs could patch an existing PS4 VR game to work with the new PSVR2 hardware? Otherwise, expect ports/remasters.

I could see Sony launching this headset alongside a new PS Plus Collection expansion of like 10-20 existing PSVR games with new native PS5 VR versions, so they can be played on the new hardware. (Like Astro Bot Rescue Mission, Blood & Truth, etc.)

Otherwise, they'll probably just recommend keeping your existing PSVR hardware in order to play PS4 VR games.
 

Elog

Member
PS5 has same L1 cache, same ROPs, same shader arrays, more rasterizers if you compare to Xbox Series X.
I do not want to nit-pick, but the PS5 has significantly more L1 and L2 cache per CU than the XSX, and in addition those caches are faster, since they are clocked higher (i.e. faster turnaround time for data). So even before taking cache scrubbers into account, the disparity in L1 and L2 cache per CU is one of the biggest differences between the two consoles. While the XSX has more CUs and a higher theoretical peak TFLOPs, it is a much more cache-starved design.
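
A rough per-CU comparison, assuming the commonly cited figures (5 MB of L2 for XSX per the Hot Chips talk, 4 MB for PS5 from die analysis rather than an official Sony spec, and RDNA2's 128 KB of graphics L1 per shader array with four arrays each):

```python
# Cache per active CU, using commonly cited (not all officially
# confirmed) figures: 128 KB of L1 per shader array, 4 arrays each.
consoles = {
    "PS5": {"cus": 36, "l1_kb": 4 * 128, "l2_kb": 4 * 1024},
    "XSX": {"cus": 52, "l1_kb": 4 * 128, "l2_kb": 5 * 1024},
}

for name, c in consoles.items():
    print(f"{name}: {c['l1_kb'] / c['cus']:.1f} KB L1 per CU, "
          f"{c['l2_kb'] / c['cus']:.1f} KB L2 per CU")
```

On those assumptions the PS5 ends up with roughly 45% more L1 and about 15% more L2 per CU, before the clock difference is even considered.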
 

sncvsrtoip

Member
Well, that's a bit different, I think. If the XSX has mesh shaders, which are a more advanced form of primitive shaders, then there could be an advantage there. We have seen just how amazing those mesh shader benchmarks are. A 1700% performance increase? That's massive.

That said, these findings are very curious. You can see that the scaling gets better at higher resolutions, but that could just be due to the limited memory bandwidth (384 GB/s vs 512 GB/s) on the 6700 XT. Or the smaller Infinity Cache. But what's more curious is that this is with all the clocks being the same. I would love to see them do a test with the 60 CU 6800 at 1.6 GHz (~12.3 TFLOPs) and the 40 CU at 2 GHz (~10.2 TFLOPs) and see what results they get. That could help explain why the PS5 is able to keep up with the XSX in at least some of the games.

I'm sure primitive shaders will be good enough, and of the missing features I think only INT8 could really matter, and only if a proper AMD equivalent of DLSS is created using machine learning. In yesterday's interview, AMD vice president Scott Herkelman said this about AMD Super Resolution:
You don’t need machine learning to do it, you can do this many different ways and we are evaluating many different ways.
 

ethomaz

Banned
Yeah, the test is interesting because they kept clocks the same on all 4 cards.

When compared to the full 80 CU Navi 21:
50% CUs = 60-68% FPS
75% CUs = 80-86% FPS
90% CUs = 93-94% FPS

So even at the same clocks, adding more CUs brings clear diminishing returns. They went from 40 to 60 CUs at the same 2.0 GHz (and increased bandwidth), so that's a 50% increase in shaders, yet only saw about a 33% increase in average performance. That implies a CU-scaling efficiency of roughly 1.33 / 1.5 ≈ 0.88.

Therefore, if one were to add, say, 44% more CUs but also lower the clock speed by almost 20%, one could expect almost no improvement in performance.

1.0 × 1.5 × 0.88 ≈ 1.33 (going from 40 to 60 CUs at the same clocks: just a 33% perf gain)
0.818 × 1.44 × 0.88 ≈ 1.04 (36 vs 52 CUs at 2.23 vs 1.825 GHz would give just a 4% perf gain, all else being equal)

And remember, when they went from 40 to 60 CUs for Navi 22/21, they kept the same 10 CUs per shader array, like PS5, instead of just adding more CUs into each shader array like they did for XSX. They doubled the shader engines and everything else when going from Navi 22 to Navi 21. Neither console has the 4 SEs of Navi 21. So that minor 4% perf gain could be even smaller, i.e. zero, if adding more CUs per SA decreases efficiency. Not to mention that various other things, like split memory bandwidth or slower I/O, could bottleneck performance.

[charts]
It is nice to have confirmation of common sense.

Clock speed, absent architectural limitations, gives more performance than an increase in processing units.
That is true for any processor... even a CPU.

20 units at 2 GHz perform better than 30 units at 1.5 GHz.
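
Plugging that example into the same rough model from earlier in the thread (the ~0.88 CU-scaling efficiency is an inferred assumption, not a measurement):

```python
CU_EFFICIENCY = 0.88  # inferred earlier from the 40 -> 60 CU test

# 30 units @ 1.5 GHz has 12.5% more theoretical throughput than
# 20 units @ 2.0 GHz (45 vs 40 unit-GHz), yet the model calls it a wash:
wide_and_slow = (30 / 20) * CU_EFFICIENCY * (1.5 / 2.0)
print(wide_and_slow)  # ~0.99 of the narrow-and-fast config's performance
```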
 

chilichote

Member

This is a big disappointment for me: with the CPU capabilities of the new consoles being less restrictive than last gen's, and the highly enjoyable destructive experience of Just Cause... they decide to take the franchise into mobile gaming. 🔥🤦‍♂️

I just hope there will be a Just Cause for consoles in the near future.
WTF? Mobile? What?
 

suEcide

Member
WiFi on the PS5 is fucking useless. It used to take a solid 15-20 minutes to recognize our 5 GHz WiFi; now it can't connect to it at all. A lot of troubleshooting has resulted in jack shit. Going to have to figure out how to run a hardline through the house, as the modem is located in the basement.
 

Garani

Member
WiFi on the PS5 is fucking useless. It used to take a solid 15-20 minutes to recognize our 5 GHz WiFi; now it can't connect to it at all. A lot of troubleshooting has resulted in jack shit. Going to have to figure out how to run a hardline through the house, as the modem is located in the basement.
Gaming on WiFi is kinda, ahem... not great? Things do get better with WiFi 6, but it doesn't do miracles.

Anyhow, try to run a hardline or get an AP closer to the console.
 

IntentionalPun

Ask me about my wife's perfect butthole
I still don't think Sony is going to buy any big studios any time soon; they seem just fine with buying exclusives, and likely get great deals on those due to being such a dominant leader in the AAA console space.

In the meantime, they'll keep working with smaller studios, growing them, and turning them into AAA studios after a cheap acquisition lol
 


Coooor.... phwoaaaar!!

Well, that's a bit different, I think. If the XSX has mesh shaders, which are a more advanced form of primitive shaders, then there could be an advantage there.

No no no no no... we've been through this a bajillion times.

Mesh Shaders == Primitive Shaders (RDNA2)

The software architecture is different but they achieve the same result.

How is that fairer? So instead of it releasing elsewhere after 24 months, it would never release elsewhere.

I think he's making fun of DarkMage619 DarkMage619
 
WiFi on the PS5 is fucking useless. It used to take a solid 15-20 minutes to recognize our 5 GHz WiFi; now it can't connect to it at all. A lot of troubleshooting has resulted in jack shit. Going to have to figure out how to run a hardline through the house, as the modem is located in the basement.
So "trash" that I'm playing the PS5 remotely by streaming to my laptop in bed, and playing online while we're at it xD
I do have WiFi 6 though, so maybe that's why I'm not complaining :messenger_winking:
 

skit_data

Member

Still living rent-free in some people's heads one gen later, I see. I see the benefit of Smart Delivery, but trying to equate the slightly (unless one is a complete retard) more cumbersome way of making sure the PS5 version is installed with DRM restrictions, always-online requirements, and no second-hand market for games says more about the creator of this clip than anything else.
 
I'm not sure this comparison is all that meaningful, simply because bandwidth is ultimately not taken into account in the equation.
They say the 6700 XT has less memory bandwidth and less Infinity Cache and that this can't be compensated for, but in reality that's not how it should be seen, because the 6700 XT is actually the least limited on that front. For the 40 CU GPU you have 384 GB/s of bandwidth + 96 MB of Infinity Cache (so 9.6 GB/s + 2.4 MB of IC per CU) vs 512 GB/s + 128 MB of IC for the 60 CU GPU (so 8.53 GB/s + 2.13 MB of IC per CU). For the 72 and 80 CU versions it's worse, because they keep the same memory bandwidth and the same IC amount for each. It's obvious from that point that they won't scale perfectly; it's the same problem you'll encounter if you increase the GPU frequency too much without increasing bandwidth. If you want a good comparison, you need to increase the bandwidth in step with the CU count.
I'm not here to argue against "a frequency increase has more impact than a simple CU increase" (I agree with that point), just commenting on how meaningful the comparison is.

After having a look at the complete article: the part comparing RDNA and RDNA2 at the same frequency is not really accurate, for the same reason I explained above (not the same memory management/architecture; the 5700 XT has more raw bandwidth than the 6700 XT, but the 6700 XT has Infinity Cache). Still, it seems to show that, yes, RDNA IPC is on par with RDNA2 IPC. Many people claimed a pure IPC increase of 25% and that RDNA2 is not sensitive to memory bandwidth; if that were the case, it should be visible in this comparison.
 
Anyone know how to clean an Xbox Series S?

Just asking for now because I'm going to have to clean the vents sometime in the future.

Do I just use a dust blower or compressed air on the vents? I pray to god I won't have to open this fucking thing up.
Please GAF, say it ain't so. Please don't say I have to open the console.
 

reksveks

Member
Did they say anything about the resolution?

Our understanding is that native resolution rendering replaces checkerboarding on the Series X performance mode, with the 4K60 target dropping to 4K30 on the quality mode. Meanwhile, on Series S, 1440p30 is the aim for the higher fidelity mode, dropping to 1080p for the 60fps performance alternative

From the DF article
 

Mr Moose

Member
Our understanding is that native resolution rendering replaces checkerboarding on the Series X performance mode, with the 4K60 target dropping to 4K30 on the quality mode. Meanwhile, on Series S, 1440p30 is the aim for the higher fidelity mode, dropping to 1080p for the 60fps performance alternative

From the DF article
Like the Pro and One X versions, interesting, and a bit weird.
 