
GhostRunner PS5/XSX/XSS DF Performance Analysis

onQ123

Member
O onQ123

PS5 having more ROPs does not equal a performance advantage; the reason it has double the ROPs is that it uses the older RDNA1 RBs.
XSX uses RDNA2 RB+, which doubles the output per unit, so only half as many are needed.

SMH it does when you're pushing the console up to 4K 120fps or 8K 60fps
 

Loxus

Member
O onQ123

PS5 having more ROPs does not equal a performance advantage; the reason it has double the ROPs is that it uses the older RDNA1 RBs.
XSX uses RDNA2 RB+, which doubles the output per unit, so only half as many are needed.

It's still 64 ROPs on XBSX.

It's not more ROPs, but the higher clocks that increase Rasterization performance.

VRS is kind of useless on a 2D screen; it's much more suited for VR. And you clearly don't know much about PSVR 2.

PSVR 2 uses VRS + eye tracking.
Plus a machine learning upscaling technique, which obviously uses the PS5 hardware for these features.
 
It's still 64 ROPs on XBSX.

It's not more ROPs, but the higher clocks that increase Rasterization performance.

VRS is kind of useless on a 2d screen, it's much more suited for VR. And you clearly don't know much about PSVR 2.

PSVR 2 uses VRS + eye tracking.
Plus a Machine Learning upscaling technique, which obviously use the PS5 hardware for these features.
Was eye tracking confirmed? And why would VRS be useless on a 2D screen? Sure, VRS is very beneficial in VR. Nvidia has literally the best variation of that, compared to AMD or the consoles. But why do you think it's not beneficial in 2D games?

I just wanna know if you are using speculation, or going off cold, hard facts with your previous statement?
 

Sosokrates

Report me if I continue to console war
SMH it does when you're pushing the console up to 4K 120fps or 8K 60fps

You claimed that the PS5 had a ROP advantage like it was fact; you were wrong.
Their ROP performance is the same, but XSX uses the more modern and efficient method from RDNA2, while PS5 sticks with the RDNA1 method.
It explains this in the text I quoted.
 
Last edited:

01011001

Banned
It's still 64 ROPs on XBSX.

It's not more ROPs, but the higher clocks that increase Rasterization performance.

VRS is kind of useless on a 2d screen, it's much more suited for VR. And you clearly don't know much about PSVR 2.

PSVR 2 uses VRS + eye tracking.
Plus a Machine Learning upscaling technique, which obviously use the PS5 hardware for these features.

the PS5 hardware doesn't even support Tier 2 VRS, and Tier 1 VRS sucks
 
Last edited:

onQ123

Member
You claimed that the ps5 had a ROP advantage like it was fact, you were wrong.
There ROP performance is the same, but XSX uses the more modern and efficient method from RDNA2, PS5 sticks with the RDNA1 Method.
It explains this in the text I quoted.
It is a fact that PS5 has the ROPs advantage: a small color ROP advantage from the clock rate and a large depth ROP advantage from using the older RBE setup plus the higher clock rate.
 

Arioco

Member
"higher geometry output , higher internal bandwidth or better data management?"

Where are you getting this from? Im pretty sure the PS5 does not have higher geometry output and higher internal bandwidth, apart from the ssd+io setup. (But series consoles have there own unique solutions (hardware based SFS) which may alleviate that advantage


Maths, I'm afraid, that's where. If you clocked the Series X GPU 20% higher, rasterization would be that much higher and all caches would have that much more bandwidth. Obviously the L2 cache will have higher bandwidth at 2.23 GHz than it has at 1.825 GHz.

Honestly, your statement doesn't make much sense. Imagine someone saying "I'm pretty sure Series X does not have a compute advantage", even though we know from the number of CUs and the clock speed that that console has almost 2 more TFLOPS than PS5. Well, that's how you sound right now. 🤷‍♂️

Series S and PS4 Pro both have roughly 4 TFLOPS GPUs, but Series S is clocked much higher, so even if they were both GCN, Series S would still have higher rasterization and internal bandwidth than PS4 Pro.


That does not magically turn PS5 into a more powerful console than Series X; it just means that it gives PS5 an advantage in some aspects. Remember that in a GPU there are a lot of units other than the ALUs we use to calculate the TFLOPS. If you clock them faster, those units will do their job faster.
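To put rough numbers on that point (a back-of-the-envelope sketch only, using the commonly cited figures of 36 CUs @ 2.23 GHz vs 52 CUs @ 1.825 GHz and 64 colour ROPs on both, the latter being disputed elsewhere in this thread): compute scales with CUs × clock, while fixed-function throughput like pixel fill scales with clock alone.

[CODE]
#include <cstdio>

// Rough theoretical-throughput sketch: ALU compute scales with CU count * clock,
// while fixed-function units such as ROPs scale with clock alone.
struct Gpu {
    const char* name;
    int cus;          // compute units
    int colorRops;    // colour ROPs (commonly cited figure; disputed in this thread)
    double clockGhz;  // GPU clock in GHz
};

int main() {
    const Gpu gpus[] = {
        {"PS5",      36, 64, 2.23},
        {"Series X", 52, 64, 1.825},
    };
    for (const Gpu& g : gpus) {
        // 64 FP32 lanes per CU, 2 FLOPs per lane per clock (FMA)
        double tflops   = g.cus * 64 * 2 * g.clockGhz / 1000.0;
        // one colour pixel per ROP per clock -> Gpixels/s
        double fillrate = g.colorRops * g.clockGhz;
        std::printf("%-9s %5.2f TFLOPS  %6.1f Gpixel/s fill\n", g.name, tflops, fillrate);
    }
}
[/CODE]

That prints roughly 10.3 TFLOPS / 142.7 Gpixel/s for PS5 and 12.1 TFLOPS / 116.8 Gpixel/s for Series X, which is exactly the "each console leads somewhere" situation described above.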
 

Sosokrates

Report me if I continue to console war
Maths, I'm afraid, that's where. If you clocked Series X GPU 20% higher rasterization would be that much higher and all caches would have that much bandwidth. Obviously L2 cache will have higher bandwidth at 2.23 Ghz than it has at 1.825 Ghz.

Honestly, your statement doesn't make much sense. Imagine someone saying "I'm pretty sure Series X does not have a compute advantage", even though we can know by the number of CUs and the clockspeed that that console has almost 2 more Tflops than PS5. Well, that's how you sound like right now. 🤷‍♂️

Series S and PS4 Pro have both a roughly 4 Tflops GPU, but Series S is clocked much higher, so even if they were both GCN Series S would still have higher rasterizations and internal bandwidth than PS4 Pro.


That does not magically turn PS5 into a more powerful console than Series X, it just means that it gives PS5 and advantage in some aspects. Remember than in a GPU there are a lot of units other than the ALUs we use to calculate the Tflops. If you clock them faster those units will do their job faster.

The only difference is that a modern GPU's performance always correlates with its compute performance (TFLOPS) and memory bandwidth.

You can clock an RX 5700 higher than an RX 5700 XT, but with equal TFLOPS they perform roughly the same. The higher frequency of the 5700 does not give any advantage.

There is simply more evidence supporting that the XSX's worse performance is an optimisation issue, not inferior hardware.
 

MonarchJT

Banned
Dude, you literally just proved my point by listing not one but five games that perform better on PS5.

But here is a proper list of games that perform better on PS5.
Updated October 31, 2021
Every PS5 Game That Outperforms The Xbox Series X Version
That's 16 games.
With that many games, how can one say XBSX has a performance advantage? Some games will perform better on either console depending on many other scenarios, not just 18% better teraflops.
this list is useless and completely wrong
 
Last edited:

Arioco

Member
The only difference is that modern gpus performance always corelates with its compute performance (tflops) and memory bandwidth.

You can clock a rx5700 higher then a rx5700xt but with equal tflops and they perform roughly the same. The higher frequency of the 5700 does not give any advantage.

There is simply more evidence supporting that the XsX's worse performance is an optimisation issue not inferior hardware.


I said higher clocks provide some advantages, not more compute power above the TFLOPS that the GPU has.

https://www.eurogamer.net/articles/digitalfoundry-the-complete-xbox-one-interview

As you can see Microsoft engineers themselves have explained this to Digital Foundry in the past.

Did we make the right balance decisions back then? And so raising the GPU clock is the result of going in and tweaking our balance. Every one of the Xbox One dev kits actually has 14 CUs on the silicon. Two of those CUs are reserved for redundancy in manufacturing. But we could go and do the experiment - if we were actually at 14 CUs what kind of performance benefit would we get versus 12? And if we raised the GPU clock what sort of performance advantage would we get? And we actually saw on the launch titles - we looked at a lot of titles in a lot of depth - we found that going to 14 CUs wasn't as effective as the 6.6 per cent clock upgrade that we did. Now everybody knows from the internet that going to 14 CUs should have given us almost 17 per cent more performance but in terms of actual measured games - what actually, ultimately counts - is that it was a better engineering decision to raise the clock. There are various bottlenecks you have in the pipeline that [can] cause you not to get the performance you want [if your design is out of balance].

That is, according to MS, an upgrade of 16.67% in CUs provided less performance than an upgrade of 6.6% in clock speed. That's their experience analysing the performance in many games.
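Just to make the percentages in that quote concrete (a quick sketch of the arithmetic, using the well-known 800 MHz to 853 MHz bump and 12-of-14 active CUs on the Xbox One die):

[CODE]
#include <cstdio>

int main() {
    // Xbox One launch GPU: 12 active CUs (14 on die), clock raised from 800 to 853 MHz.
    const double baseClockMhz = 800.0, finalClockMhz = 853.0;
    const int    activeCus = 12, fullCus = 14;

    double cuGain    = (fullCus / double(activeCus) - 1.0) * 100.0;   // ~16.7 %
    double clockGain = (finalClockMhz / baseClockMhz - 1.0) * 100.0;  // ~6.6 %

    std::printf("Enabling 2 extra CUs: +%.1f%% theoretical ALU throughput\n", cuGain);
    std::printf("853 vs 800 MHz clock: +%.1f%% across the whole GPU\n", clockGain);
    // MS measured that the smaller, whole-GPU clock bump helped real games more than
    // the larger ALU-only bump -- the point made in the interview quoted above.
}
[/CODE]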
 
Last edited:

onQ123

Member
How so? The article I posted states otherwise; XSX is literally using the newer, more efficient RDNA2 RB+ units.

Because the new RB+ doubles the color ROPs but doesn't double the depth ROPs, so when Xbox Series X uses half the RBEs to reach 64 color ROPs it only has 128 depth ROPs, while PS5 has 72 color ROPs & 288 depth ROPs.



Xbox Series X has 8 RB+ units, each with 8 color ROPs & 16 depth ROPs.

PS5 has 18 RBEs, each with 4 color ROPs & 16 depth ROPs (2 RBEs are rumored to be disabled, but I haven't seen anything to back that up yet).
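For what it's worth, here is the arithmetic behind those totals (a sketch only, taking the per-RBE configurations claimed in this post at face value; the PS5 figures are speculation and are disputed later in the thread):

[CODE]
#include <cstdio>

int main() {
    // Totals implied by the configurations claimed in this post (disputed in the thread).
    struct { const char* name; int rbes; int colorPerRbe; int depthPerRbe; double ghz; } cfg[] = {
        {"Series X (8x RB+)",  8, 8, 16, 1.825},
        {"PS5 (18x RBE)",     18, 4, 16, 2.23},
    };
    for (auto& c : cfg) {
        int colorRops = c.rbes * c.colorPerRbe;   // 64 vs 72
        int depthRops = c.rbes * c.depthPerRbe;   // 128 vs 288
        std::printf("%-18s %3d colour ROPs (%6.1f Gpix/s), %3d depth ROPs (%6.1f Gsamples/s)\n",
                    c.name, colorRops, colorRops * c.ghz, depthRops, depthRops * c.ghz);
    }
}
[/CODE]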
 
Last edited:

MonarchJT

Banned
I said higher clocks provides some advantages, not more compute power above the Tflops that GPU has.

https://www.eurogamer.net/articles/digitalfoundry-the-complete-xbox-one-interview

As you can see Microsoft engineers themselves have explained this to Digital Foundry in the past.



That is, according to MS, an increase of 16.67% in CUs provided less performance than an increase of 6.6% in clock speed. That's their experience analysing the performance in several games.
That was using exactly the same GPU tech... the two consoles are slightly different, and one should be (from what we have heard to date) slightly more advanced than the other. Wait for developers to start releasing next-gen-only games (and to become familiar with them) and we will see the real trend take shape. However, it is certain that with the same configuration and architecture, having the higher clock has its undoubted advantages.
 

Arioco

Member
this using exactly the same GPU tech...the two consoles are slightly different and one should be more (from what we have heard to date)albeit slightly more advanced than the other . We wait for the developers to start releasing only next gen games (and become familiar with these) and we will see the real trend take shape. However, it is certain that using the same configuration and architecture having the highest clock has its undoubted advantages.


I said higher clocks can have certain advantages, but obviously they don't provide a console with features its hardware doesn't support. It goes without saying.

Anyway, the story is the same this generation, this time on the CPU side. Why would MS allow developers to choose between 3.6 GHz with 16 threads or 3.8 GHz with 8 threads if fewer cores/threads at a higher frequency had no advantage whatsoever in any circumstance? That would be stupid, right?
 

01011001

Banned
Dude, you literally just proved my point by listing not one but five games the performs better on PS5.

But here is a proper list of games the perform better on PS5.
Updated October 31, 2021
Every PS5 Game That Outperforms The Xbox Series X Version
That's 16 games.
With that many games, how can one say XBSX has a performance advantage. Some games will perform better on either console depending on many other scenarios, not just 18% better teraflops.

Many of these are straight-up lies. For example, the supposed locked 4K in Immortals on PS5, that's simply a lie.
Or that Dirt 5 one, which was literally a bug that was fixed.
 

rnlval

Member
reVDrBB.jpg

Take a very good look at this.
How in the world can the PS5 be closer to a 5700XT?

CU are not the same,
ALU are not the same, and
ROPs are not the same.

PS5 components are laid out the same way as Navi 10, but when looking closely, the Xbox CUs and ALUs look the same as Navi 14.

If there was a 6700 (not XT), it would be a PS5.

RDNA2 ROPs are there for compatibility with DirectX VRS. PS5 does not use DirectX API, so why implement RB+ ROPs?

PS5 has their own VRS solution.
"Using primitive shaders on PlayStation 5 will allow for a broad variety of techniques including smoothly varying level of detail addition of procedural detail to close up objects and improvements to particle effects and other visual special effects."

Yes, according to their patent, Sony seems to have another strategy with VRS (particularly for VR purposes). Instead of reducing the resolution of the (ideally) less visible parts of the image, they want to increase the resolution of the most visible parts of the image and render only the visible polygons (thanks to their geometry engine).

You end up with varying levels of quality either way.

As for Machine Learning, this link says it all.
PlayStation 5 and Machine Learning: An Analysis

Being based on RDNA 2 means you're using the RDNA 2 instruction set.
"RDNA 2" Instruction Set Architecture: Reference Guide (PDF, developer.amd.com)
Ray Tracing support includes the following instructions:
-IMAGE_BVH_INTERSECT_RAY
-IMAGE_BVH64_INTERSECT_RAY

Dot-product ALU operations added to accelerate inferencing and deep learning:
V_DOT2_F32_F16 / V_DOT2C_F32_F16
V_DOT2_I32_I16 / V_DOT2_U32_U16
V_DOT4_I32_I8 / V_DOT4C_I32_I8
V_DOT4_U32_U8
V_DOT8_I32_I4
V_DOT8_U32_U4

Edit:
As for Infinity Cache.
You should read this.
Infinity Cache for APU? AMD is probably planning something better


Infinity Cache might not have been ready for APUs.
1. RX 5700 XT (NAVI 10) is NOT NAVI 14, i.e. NAVI 10 is missing the machine learning dot math feature from NAVI 14. NAVI 10 includes the baseline Rapid Packed Math feature.

NAVI 10 and NAVI 14 have different CU designs. Don't assume NAVI 10 and NAVI 14 share the same dot math feature.

2. Machine learning software can be run on any DirectX11/DirectX12 class GPU, i.e. INT4 and INT8 data formats can run on native INT16 hardware. DirectML can run on DirectX11 class GPUs.

Sony hasn't confirmed native hardware octa-rate INT4 and quad-rate INT8 TOPS that resolve into the 32-bit datatypes INT32/FP32.

Prove PS5 has native hardware octa-rate INT4 and quad-rate INT8 TOPS that resolve into the 32-bit datatypes INT32/FP32.

For the dot math feature, lesser data types (e.g. octa-rate 4-bit, quad-rate 8-bit, double-rate 16-bit) resolve into 32-bit datatype results. Tensor workload hardware accelerates lesser datatypes (INT4, INT8, INT16, FP16) that resolve into 32-bit datatype (INT32, FP32) results.

For the rapid packed math feature, the double-rate 16-bit data type resolves into 16-bit datatype results.

FNZaD2h.png


Dot product operations are optional features for some NAVI variants.
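To make concrete what that dot math feature computes, here is a scalar C++ sketch of the V_DOT4_I32_I8 semantics listed further up (an illustration of packed-INT8 dot-product math per lane, not AMD's implementation):

[CODE]
#include <cstdint>
#include <cstdio>

// Scalar sketch of what a packed-dot instruction such as V_DOT4_I32_I8 does per lane:
// four signed 8-bit values packed into each 32-bit source are multiplied pairwise and
// accumulated into a 32-bit integer. Doing this in one instruction instead of four
// multiply-adds is what "quad-rate INT8" refers to.
int32_t dot4_i32_i8(uint32_t a, uint32_t b, int32_t acc) {
    for (int i = 0; i < 4; ++i) {
        int8_t ai = int8_t(a >> (8 * i));
        int8_t bi = int8_t(b >> (8 * i));
        acc += int32_t(ai) * int32_t(bi);
    }
    return acc;
}

int main() {
    // A tiny slice of a neural-net layer: weights {1,-2,3,-4} . activations {5,6,7,8}
    uint32_t w = 0xFC03FE01u;   // packed bytes: 0x01, 0xFE(-2), 0x03, 0xFC(-4)
    uint32_t x = 0x08070605u;   // packed bytes: 5, 6, 7, 8
    std::printf("dot = %d\n", dot4_i32_i8(w, x, 0));   // 1*5 - 2*6 + 3*7 - 4*8 = -18
}
[/CODE]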
 
Last edited:

rnlval

Member
It is a fact PS5 has the ROPs advantage , a small Color ROPs advantage from the clock rate and a large depth ROPs advantage from using the older RBE setup + the higher clock rate.

XSX has the PC RDNA 2 per-RBE configuration.

For PC NAVI 21 and NAVI 22, each RBE has 8 Color ROPs and 16 Depth ROPs.


IPC comparison between RX 6700 XT vs RX 5700 XT

RX 6700 XT (at 1.85 Ghz) vs RX 5700 XT (at 1.85 Ghz)
9REzVcG.png


At the same 1.85 GHz clock speed, the RX 6700 XT has a slight edge over the RX 5700 XT despite having fewer depth ROPs.

RDNA 2's RBE units have higher efficiency when compared to RDNA 1's RBE units.
 
Last edited:

rnlval

Member
It's still 64 ROPs on XBSX.

It's not more ROPs, but the higher clocks that increase Rasterization performance.

1. VRS is kind of useless on a 2d screen, it's much more suited for VR. And you clearly don't know much about PSVR 2.

PSVR 2 uses VRS + eye tracking.
2. Plus a Machine Learning upscaling technique, which obviously use the PS5 hardware for these features.
1. Variable Rate Shading is useful for conserving pixel and ROP resources while preserving geometry edges.

2. Again, prove PS5 has native hardware octa-rate INT4 and quad-rate INT8 TOPS that resolve into the 32-bit datatypes INT32/FP32.

Machine learning software can run on DirectX11 class GPUs, hence I'm asking for the specific hardware feature to be declared.
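As a rough illustration of point 1 (a purely conceptual sketch of image-based VRS, not any console's actual implementation): a screen-space "shading rate image" tells the GPU how coarsely to shade each tile, so flat or low-contrast regions pay for fewer pixel-shader invocations while detailed edges keep full rate.

[CODE]
#include <cstdio>

// Conceptual VRS sketch: pick a per-tile shading rate from a cheap detail metric
// (e.g. luminance contrast from the previous frame). Hardware VRS Tier 2 consumes
// a small shading-rate image like this and applies it per screen tile.
enum ShadingRate { RATE_1X1, RATE_2X2, RATE_4X4 };

ShadingRate pickRate(float contrast) {
    if (contrast > 0.20f) return RATE_1X1;   // detailed/edge tile: full rate
    if (contrast > 0.05f) return RATE_2X2;   // mild detail: shade 1 of 4 pixels
    return RATE_4X4;                         // flat tile: shade 1 of 16 pixels
}

int main() {
    const float tileContrast[] = {0.30f, 0.12f, 0.02f};
    const char* names[] = {"1x1", "2x2", "4x4"};
    for (float c : tileContrast)
        std::printf("contrast %.2f -> shade at %s\n", c, names[pickRate(c)]);
}
[/CODE]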
 

onQ123

Member
XSX has PC's RDNA v2 per RBE configuration.

For PC NAVI 21 and NAVI 22, each RBE has 8 Color ROPs and 16 Depth ROPs.


IPC comparison between RX 6700 XT vs RX 5700 XT

RX 6700 XT (at 1.85 Ghz) vs RX 5700 XT (at 1.85 Ghz)
9REzVcG.png


At the same 1.85 Ghz clock speed, RX 6700 XT has a slight edge over RX 5700 XT despite RX 6700 XT having less depth ROPs.

RDNA 2's RBE units have higher efficiency when compared to RDNA 1's RBE units.

The RX 6700 XT has 96 MB of Infinity Cache, which could benefit it over the RX 5700 XT, & there is nothing showing that these games are ROP-limited.
 

MonarchJT

Banned
A unique case where PS5 outperforms XSX. Is it optimization? Or one of those games that favor high clock speeds?

No console warring or chest beating please.
Seeing results that reverse the hardware advantages the SX should have (RT) and the ones the PS5 should have (120fps mode): absolutely optimization.
 
Last edited:
The RX 6700XT has 96MB of Infinity Cache which could benefit it over the RX 5700 XT & there is nothing showing that these games are ROPs limited.
Indeed. I also think the Infinity Cache could be one of the reasons they were confident in halving the depth ROPs on RDNA2. The Xbox Series consoles are the only AMD GPUs with heavily cut-down RDNA2 ROPs and no Infinity Cache.
 

MonarchJT

Banned
Indeed. I also think the infinity cache could be one of the reason they were confident at halving the depth ROPs on RDNA2. Xbox Series are the only AMD GPUs having heavily cutdown RDNA2 ROPs without Infinity cache.
Probably because cost/performance favored this option... just like Sony halved the FPU of the CPU.
 
because probably cost / performance favored this option ... just like sony halved the fpu of the cpu
The reasons seem similar indeed. But for Sony the area cost of the whole FPU was very small. They probably did it because of the maximum power consumption cost. Cerny probably found that for gaming applications half the Zen 2 FPU was enough, and there was a risk of (avoidable) CPU downclocking if using the whole Zen 2 FPU.
 

Madog

Banned
XSX has PC's RDNA v2 per RBE configuration.

For PC NAVI 21 and NAVI 22, each RBE has 8 Color ROPs and 16 Depth ROPs.


IPC comparison between RX 6700 XT vs RX 5700 XT

RX 6700 XT (at 1.85 Ghz) vs RX 5700 XT (at 1.85 Ghz)
9REzVcG.png


At the same 1.85 Ghz clock speed, RX 6700 XT has a slight edge over RX 5700 XT despite RX 6700 XT having less depth ROPs.

RDNA 2's RBE units have higher efficiency when compared to RDNA 1's RBE units.
Console GPUs aren't off-the-shelf; they have different customizations, and Cerny explained this in depth. Their SmartShift technology isn't the same as AMD's off-the-shelf solution, and they even have cache scrubbers, which Cerny said were exclusive to PS5 and not in any other RDNA 2 card. Same with Series X: it's not just an off-the-shelf AMD GPU, they have their own solutions there. So comparing two different console GPUs the way you would on PC, simply because they are made by AMD and use a similar architecture, is very wrong. PS5 and Series GPUs aren't similar; the only similarity is being RDNA 2, and that's it. PC GPUs, on the other hand, are similar in each and every way; the only differences are how many shaders and what clock speeds they have, from the expensive lineup down to the lower-end GPUs, they simply scale the same design. Sony and Microsoft have different needs for their consoles; they aren't your average consumer just purchasing a GPU on Amazon.

We've had variable results from title to title, and we'll only know the true power and difference of these machines when they are pushed to the edge and have natively coded next-gen games made for them, not these cross-gen games. We've already seen the advantages of native code with The Touryst and Doom Eternal enhanced.
 

Loxus

Member
1. Variable Rate Shading is useful for conservating pixel and ROPS resources while preserving geometry edges.

2. Again, Prove PS5 has native hardware octa-rate INT4 and quad-rate INT8 TOPS that resolves into 32-bit datatype INT32/FP32.

Machine Learning software can run on DirectX11 class GPUs, hence I'm asking the specific hardware feature to be declared.
@ bolded, prove it doesn't.

It's an RDNA 2 GPU, you understand that, right?

Both PS5 and Xbox Series X's GPUs are based on RDNA 2 architecture. And we know that RDNA 2 (and even RDNA 1.1) supports Int4 and Int8 operations. There is no reason to assume that Sony specifically took out the ML feature.

PS5 doesn't have Infinity Cache, but it's still RDNA2.
That's because it supports RDNA2 instruction set.
AMD RDNA™ 2 Instruction Set Architecture

6WR16Ui.jpg

Look at this again and tell me how the PS5 is closer to a 5700XT?
CUs, ALUs and TMUs are different.

PS5 is doing more with machine learning than Microsoft. And now they are using machine learning upscaling for PSVR2.
zBklOgA.jpg

"Virtual reality headsets require much higher computing power to display a satisfactory image to a user than a conventional computer monitor. This is because the monitors of a virtual reality headset are much closer to a user's eyes, subtend a much larger angle, and operate at a higher and sustained frame rate. Providing a virtual reality headset configured according to the method as described above provides the advantage of requiring much less computing power without sacrificing image quality where it is needed most for maintaining user comfort and immersion."


Their VRS solution is as described by this patent for PSVR2.
7IuPMtf.png

Elog explains it well.
What Mark Cerny talked about is that the culling (mesh shader) and prioritisation (VRS) is done at the GE level. That is different - it is a very different GE. His idea - as he described it - is that this means that all functions downstream of that can utilise the now already culled geometry and apply work in accordance with the set priorities. This can be used for texture work, shader work, RT etc. And this calculation is only done once as opposed to more than once per the traditional pipeline. He also claims that this new GE can work with very small primitives without clogging.


------------------------
If you can't provide proper information as to why the PS5 doesn't support INT4/INT8 instructions and has its own VRS solution, then your claims will be dismissed as wishful thinking.
 

rnlval

Member
@ bolded, prove it doesn't.

It's a RDNA2 GPU, you understand that right?

Both PS5 and Xbox Series X's GPUs are based on RDNA 2 architecture. And we know that RDNA 2 (and even RDNA 1.1) supports Int4 and Int8 operations. There is no reason to assume that Sony specifically took out the ML feature.

PS5 doesn't have Infinity Cache, but it's still RDNA2.
That's because it supports RDNA2 instruction set.
AMD RDNA™ 2 Instruction Set Architecture

6WR16Ui.jpg

Look at this again and tell me how the PS5 is closer to a 5700XT?
CUs, ALUs and TMUs are different.

PS5 doing more with Machine Learning than Microsoft. And now they are using machine learning upscaling for PSVR2.
zBklOgA.jpg

"Virtual reality headsets require much higher computing power to display a satisfactory image to a user than a conventional computer monitor. This is because the monitors of a virtual reality headset are much closer to a user's eyes, subtend a much larger angle, and operate at a higher and sustained frame rate. Providing a virtual reality headset configured according to the method as described above provides the advantage of requiring much less computing power without sacrificing image quality where it is needed most for maintaining user comfort and immersion."


Their VRS solution is as describe by this patent for PSVR2.
7IuPMtf.png

Elog explains it good.



------------------------
If you can't provide proper information as you why the PS5 doesn't support INT4/INT8 instructions and has it own VRS solution, then your claims will be dismissed as wishful thinking.
1. From https://www.google.com/amp/s/wccfte...-powered-shader-cores-says-quantic-dream/amp/

The shader cores of the Xbox are also more suitable to machine learning, which could be an advantage if Microsoft succeeds in implementing an equivalent to Nvidia’s DLSS (an advanced neural network solution for AI). - Quantic Dream

-----------

Again, prove PS5 has the dot math feature that includes octa-rate INT4, quad-rate INT8 and double-rate INT16/FP16 that resolve into an FP32 result.

DirectX11 class GPUs can run machine learning software, but certain GPU designs are better suited for machine learning processing.

Don't assume NAVI 14's dot feature is on the PS5 GPU when Sony has reduced Zen 2's AVX resources by almost half. Sony has hidden the cut-down Zen 2 information, and that led to third-party disclosure.

As for VRS,

PlayStation 5 Former Principal Software Engineer Comments on Lack of Variable Rate Shading.​


Nvidia-defined VRS exists for the Vulkan API, not just the DirectX 12 Ultimate API. PC RDNA 2 and XSX/XSS follow Nvidia's RTX Turing-defined VRS tier 1 and tier 2.

Your Sony patent argument was disproven by the ex-Sony principal software engineer.


Before NVIDIA Turing RTX influenced RDNA 2 PC/XSX/XSS, AMD had presented its next-generation graphics pipeline via the primitive shader extension.

3tfprAq.jpg


Notice that RDNA v1 and VEGA already have multi-resolution rendering via the primitive shader extension.

AMD's primitive shader extension is not enough for compliance with the NVIDIA RTX-driven DirectX12U/Vulkan mesh shader/amplification shader/variable-rate shading features.

PS: Primitive Shaders on VEGA are broken and have been disabled, i.e. a f__kup by AMD. https://www.techpowerup.com/240879/amd-cancels-implicit-primitive-shader-driver-support
 
Last edited:

rnlval

Member
The RX 6700XT has 96MB of Infinity Cache which could benefit it over the RX 5700 XT & there is nothing showing that these games are ROPs limited.
The 96 MB L3 cache is there to mitigate the RX 6700 XT's 192-bit bus at 336 GB/s. The RX 6700 XT has 12 GB with 336 GB/s of bandwidth. At the same clock speed, with 40 CUs and 64 ROPs, the net effect is that the RX 6700 XT delivers RX 5700 XT-like results.

XSX has 10 GB with 560 GB/s via the 320-bit bus.

The RX 6700 XT has increased clock speed due to improved leakage mitigation. Current leakage results in heat generation.
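For reference, peak GDDR6 bandwidth is just bus width times per-pin data rate, which is why a narrow bus tends to get a big cache in front of it (a quick sketch; the 14 Gbps figure for the 192-bit card is the one used in the post above and should be treated as approximate):

[CODE]
#include <cstdio>

int main() {
    // Peak GDDR6 bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps.
    struct { const char* name; int busBits; double gbps; } mem[] = {
        {"RX 5700 XT (256-bit, 14 Gbps)",    256, 14.0},
        {"RX 6700 XT (192-bit, 14 Gbps)",    192, 14.0},  // figure quoted in the post above
        {"XSX fast pool (320-bit, 14 Gbps)", 320, 14.0},
    };
    for (auto& m : mem)
        std::printf("%-36s %5.0f GB/s\n", m.name, m.busBits / 8.0 * m.gbps);
    // The narrow 192-bit bus is what the 96 MB Infinity Cache compensates for.
}
[/CODE]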
 
Last edited:

Loxus

Member
1. From https://www.google.com/amp/s/wccfte...-powered-shader-cores-says-quantic-dream/amp/

The shader cores of the Xbox are also more suitable to machine learning, which could be an advantage if Microsoft succeeds in implementing an equivalent to Nvidia’s DLSS (an advanced neural network solution for AI). - Quantic Dream
Where in that article does it say PS5 doesn't support INT4/INT8 instructions?

It's obvious XBSX is more suited for Machine Learning because it has more ALUs. You don't have to be a brain surgeon to understand that statement.

Machine Learning patent
https://www.freepatentsonline.com/WO2021214485A1.html
11. The computer-implemented method of claim 10 wherein the machine learning inference process is implemented by a data model, said data model or associated with an artificial neural network (ANN).

12. The computer-implemented method as claimed in claim 11 wherein the ANN is a convolutional neural network (CNN).




Don't assume NAVI 14's dot feature is on PS5 GPU when Sony has reduced Zen 2's AVX resource by almost a half.

PS5 still supports full 256-bit native instructions, as described by Mark Cerny when discussing heat and power draw issues.

"Our process on previous consoles has been to try to guess what the maximum power consumption during the entire console lifetime might be, which is to say the worst case scene in the worst case game and prepare a cooling solution that we think will be quiet at that power level.



If we get it right, fan noise is minimal. If we get it wrong, the console will be quite loud for the higher power games and there's even a chance that it might overheat and shut down if we misestimate power too badly.

PlayStation 5 is especially challenging because the CPU supports 256 bit native instructions that consume a lot of power.

These are great here and there but presumably only minimally used, or are they. If we plan for major 256 bit instruction usage, we need to set the CPU clock substantially lower or noticeably increase the size of the power supply and fan."


And you can see when looking closely, it's still there but modified to produce less heat and draw less power.
jUBnyq2.jpg
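For anyone wondering what "256-bit native instructions" look like in practice, this is the kind of AVX math the quote is talking about (a generic x86 illustration, nothing PS5-specific; needs a CPU with AVX2/FMA, compile with e.g. -mavx2 -mfma):

[CODE]
#include <immintrin.h>
#include <cstdio>

// One AVX/FMA instruction operates on eight 32-bit floats at once (a 256-bit register).
// Wide units like this are fast but power-hungry, which is the clock/power trade-off
// Cerny describes above.
int main() {
    alignas(32) float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    alignas(32) float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    alignas(32) float c[8] = {1, 1, 1, 1, 1, 1, 1, 1};
    alignas(32) float out[8];

    __m256 va = _mm256_load_ps(a);
    __m256 vb = _mm256_load_ps(b);
    __m256 vc = _mm256_load_ps(c);
    __m256 r  = _mm256_fmadd_ps(va, vb, vc);   // r = a*b + c, 8 lanes in one instruction
    _mm256_store_ps(out, r);

    for (float v : out) std::printf("%.0f ", v);  // 9 15 19 21 21 19 15 9
    std::printf("\n");
}
[/CODE]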



PlayStation 5 Former Principal Software Engineer Comments on Lack of Variable Rate Shading.​


nVidia defined VRS exist for Vulkan API not just Direct12U API. PC RDN2 and XSX/XSS follows nVidia's defined VRS tier 1 and tier 2.

Your Sony patent argument was disproven by ex-Sony principal Software engineer.
On his Twitter profile, Matt Hargett addressed the lack of VRS discussion, highlighting how it's hard to talk about it because the degree to which it will help performance will vary not only per game but possibly per scene per game.

Where does he say the PS5 doesn't have its own VRS solution?
And how is the patent disproven exactly?

There are different VRS solutions and the PS5's is Foveated Rendering.
QzzZ9qf.jpg

This technique can also be applied to a normal 2D screen, as described by Mark Cerny again.
"Using primitive shaders on PlayStation 5 will allow for a broad variety of techniques including, smoothly varying level of detail, addition of procedural detail to close up objects and improvements to particle effects and other visual special effects."


Just stop, you're embarrassing yourself at this point.
 
Last edited:

Madog

Banned
Where in that article does it say PS5 doesn't support INT4/INT8 instructions?

It's obvious XBSX is more suited for Machine Learning because it has more ALUs. You don't have to be a brain surgeon to understand that statement.

Machine Learning patent
https://www.freepatentsonline.com/WO2021214485A1.html
11. The computer-implemented method of claim 10 wherein the machine learning inference process is implemented by a data model, said data model or associated with an artificial neural network (ANN).

12. The computer-implemented method as claimed in claim 11 wherein the ANN is a convolutional neural network (CNN).






PS5 still supports full 256 bit native instructions as describe by Mark Cerny when discussing heat and power draw issues.

"Our process on previous consoles has been to try to guess what the maximum power consumption during the entire console lifetime might be, which is to say the worst case scene in the worst case game and prepare a cooling solution that we think will be quiet at that power level.



If we get it right, fan noise is minimal. If we get it wrong, the console will be quite loud for the higher power games and there's even a chance that it might overheat and shut down if we miss estimate power too badly.

PlayStation 5 is especially challenging because the CPU supports 256 bit native instructions that consume a lot of power.

These are great here and there but presumably only minimally used, or are they. If we plan for major 256 bit instruction usage, we need to set the CPU clock substantially lower or noticeably increase the size of the power supply and fan."


And you can see when looking closely, it's still there but modified to produce less heat and draw less power.
jUBnyq2.jpg




On his Twitter profile, Matt Hargett addressed the lack of VRS discussion, highlighting how it's hard to talk about it because the degree to which it will help performance will vary not only per game but possibly per scene per game.

Where does he say the PS5 doesn't have it's own VRS solution?
And how is the patent disproven exactly?

There are different VRS solutions and the PS5's is Foveated Rendering.
QzzZ9qf.jpg

This technique should also be applied to a normal 2D screen as well, as describe by Mark Cerny again.
"Using primitive shaders on PlayStation 5 will allow for a broad variety of techniques including, smoothly varying level of detail, addition of procedural detail to close up objects and improvements to particle effects and other visual special effects."


Just stop, your embarrassing yourself at this point.
VRS doesn't make enough of a difference when DRS is available, because it's noticeable.
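For contrast, this is roughly how dynamic resolution scaling works (a generic sketch, not any particular engine): the render resolution is nudged down when the GPU misses its frame-time budget and back up when there is headroom, which is one reason it tends to be less visible than coarsening the shading rate.

[CODE]
#include <cstdio>
#include <algorithm>

// Generic DRS controller sketch: scale render resolution to chase a frame-time budget.
float updateResolutionScale(float scale, float gpuFrameMs, float budgetMs) {
    // Nudge the scale a few percent per frame towards the budget, clamped to a sane range.
    if (gpuFrameMs > budgetMs) scale -= 0.03f;
    else                       scale += 0.01f;
    return std::clamp(scale, 0.70f, 1.00f);
}

int main() {
    float scale = 1.0f;
    const float budgetMs = 16.6f;                      // 60 fps target
    const float frames[] = {18.0f, 17.2f, 16.9f, 15.8f, 15.5f};
    for (float ms : frames) {
        scale = updateResolutionScale(scale, ms, budgetMs);
        std::printf("frame %.1f ms -> resolution scale %.2f (%.0fp from 2160p)\n",
                    ms, scale, 2160 * scale);
    }
}
[/CODE]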
 
Dude, you literally just proved my point by listing not one but five games the performs better on PS5.

But here is a proper list of games the perform better on PS5.
Updated October 31, 2021
Every PS5 Game That Outperforms The Xbox Series X Version
That's 16 games.
With that many games, how can one say XBSX has a performance advantage. Some games will perform better on either console depending on many other scenarios, not just 18% better teraflops.
Ricky Gervais Lol GIF
 

Loxus

Member
I don't understand the joke.
Digital Foundry states many times that holding a better frame rate = better performance, not having a higher resolution.

It's like it's blasphemy to say the PS5 can outperform XBSX, even with analysis from DF, VGTech, etc.

No wonder developers stay away from gaming forums in general.
Developers can't say anything without it being an issue.
Console wars have ruined so many discussions.
 

01011001

Banned
I don't understand the joke.
Digital Foundry states many times holding a better frame rate = better performance, not having a higher resolution.

It's like it's blasphemy to say the PS5 can outperform XBSX, even with analysis form DF, VGTech, etc.

No wonder Developers say away form Gaming Forums in general.
Developers can't say anything without it being an issue.
Console wars have ruined so many discussions.

you literally posted a list with lies in it and talk about console wars ruining discussions... are you serious here?
 

Loxus

Member
State the lies then.
Nearly all those links use Digital Foundry as a source.

You're just mad that PS5 is outperforming your favorite plastic box in a few games.

Even if one or two games might be wrong, it doesn't change the fact that PS5 outperforms XBSX in a few games.

Like, what is wrong with you that you can't understand and accept that PS5 can outperform XBSX?

Lord help me understand console wars.
frustrated sherlock holmes GIF
 
Last edited:

rnlval

Member
Where in that article does it say PS5 doesn't support INT4/INT8 instructions?

It's obvious XBSX is more suited for Machine Learning because it has more ALUs. You don't have to be a brain surgeon to understand that statement.

Machine Learning patent
https://www.freepatentsonline.com/WO2021214485A1.html
11. The computer-implemented method of claim 10 wherein the machine learning inference process is implemented by a data model, said data model or associated with an artificial neural network (ANN).

12. The computer-implemented method as claimed in claim 11 wherein the ANN is a convolutional neural network (CNN).






PS5 still supports full 256 bit native instructions as describe by Mark Cerny when discussing heat and power draw issues.

"Our process on previous consoles has been to try to guess what the maximum power consumption during the entire console lifetime might be, which is to say the worst case scene in the worst case game and prepare a cooling solution that we think will be quiet at that power level.



If we get it right, fan noise is minimal. If we get it wrong, the console will be quite loud for the higher power games and there's even a chance that it might overheat and shut down if we miss estimate power too badly.

PlayStation 5 is especially challenging because the CPU supports 256 bit native instructions that consume a lot of power.

These are great here and there but presumably only minimally used, or are they. If we plan for major 256 bit instruction usage, we need to set the CPU clock substantially lower or noticeably increase the size of the power supply and fan."


And you can see when looking closely, it's still there but modified to produce less heat and draw less power.
jUBnyq2.jpg




On his Twitter profile, Matt Hargett addressed the lack of VRS discussion, highlighting how it's hard to talk about it because the degree to which it will help performance will vary not only per game but possibly per scene per game.

Where does he say the PS5 doesn't have it's own VRS solution?
And how is the patent disproven exactly?

There are different VRS solutions and the PS5's is Foveated Rendering.
QzzZ9qf.jpg

This technique should also be applied to a normal 2D screen as well, as describe by Mark Cerny again.
"Using primitive shaders on PlayStation 5 will allow for a broad variety of techniques including, smoothly varying level of detail, addition of procedural detail to close up objects and improvements to particle effects and other visual special effects."


Just stop, your embarrassing yourself at this point.

PS5 CPU's AVX capability, via the AMD 4700S APU (a recycled PS5 APU):

5E5knVx.png



Variable Rate Shading Tier 2 example,

ceNO7YW.jpg


PS5's VRS is implemented in software.


XSX/XSS/PC RDNA 2/RTX have hardware VRS Tier 2.

Just stop, you're embarrassing yourself at this point.
 

Darius87

Member
1. From https://www.google.com/amp/s/wccfte...-powered-shader-cores-says-quantic-dream/amp/

The shader cores of the Xbox are also more suitable to machine learning, which could be an advantage if Microsoft succeeds in implementing an equivalent to Nvidia’s DLSS (an advanced neural network solution for AI). - Quantic Dream

-----------

Again, prove PS5 has dot math feature that includes octa-rate INT4, quad-rate INT8 and double rate INT16/FP16 that resolved into FP32 result.
Are you blind or what? He just showed in the post before that RDNA 1.1 and RDNA 2 support INT4, INT8:
7n6YIIC.jpeg


Then you link a Quantic Dream interview as if it's supposed to deny PS5 support for INT4/INT8? It says XSX has CUs more suitable for ML, not that PS5 doesn't support it, so what exactly do you want to tell us with your QD interview?
And then again you start asking the same question.
Insomniac even said that they do Spider-Man muscle deformation with ML; what else do you need to stop repeating this nonsense?
 

rnlval

Member
:messenger_grinning_smiling: you're blind or what? he just showed in post before that RDNA 1.1 and RDNA 2 supports INT4, INT8:
7n6YIIC.jpeg


then you link quantic dream interview as it suppose do denie PS5 support for INT 4/8 ? :messenger_grinning_smiling: it says XSX has more suitable CU's for ML not that PS5 doesn't support so what exactly you wan't to tell with your QD interview?
and then again you start asking same question :messenger_tears_of_joy:.
Insomniac even told that they do S-M muscle deformation with ML, what else do you need to stop repeating this nonsense?
Again, prove the octa-rate INT4 and quad-rate INT8 with 32-bit results feature for PS5's GPU.

gbO0RLH.png
 
Last edited:

Mr Moose

Member
many of these are straight up lies, for example the supposed locked 4K in Immortals on PS5, that's simply a lie
or that Dirt 5 one was literally a bug that was fixed
Hmm... Got a source for this? I hope it's not Tom from DF.

The only resolution found on PS5 in Quality Mode was 3840x2160.
Xbox Series X in Quality Mode uses a dynamic resolution with the highest native resolution found being 3840x2160 and the lowest native resolution found being approximately 3328x1872. Drops in resolution below 3840x2160 on Xbox Series X in Quality Mode seem to be uncommon.

Xbox Series S in Quality Mode uses a dynamic resolution with the highest native resolution found being 2560x1440 and the lowest native resolution found being 1920x1080. Xbox Series S in Quality Mode uses a form of temporal reconstruction to increase the resolution up to 2560x1440 when rendering natively below this resolution.
 
Last edited:

rnlval

Member
prove that it doesn't.
I have already directed my specific hardware question to Sony's usual communication channels and they haven't returned an answer. Machine learning software can run on a DirectX11 class GPU.

Sony is less transparent when compared to the competition.
 
Last edited:
prove that it doesn't.
That's not how burden of proof works; you can't prove the absence of something.

From the outside looking in, we have no evidence to suggest that the PS5 supports this, while on the contrary there's very clear evidence on other platforms and PC.

If the PS5 does indeed support this on a hardware level, it behooves the manufacturer to make that clear.

That said, silence often speaks volumes in the engineering/tech space. No need for all the conjecture.
 

Zathalus

Member
Hmm... Got a source for this? I hope it's not Tom from DF.

The VGtech comparison was from the launch version of the games, while the DF comparison was after numerous performance patches to the game. The end result is that they are identical on both consoles.
 

Mr Moose

Member
The VGtech comparison was from the launch version of the games, while the DF comparison was after numerous performance patches to the game. The end result is that they are identical on both consoles.
It's Tom, he is wrong all the time. It's native 4k @ 30fps on PS5, always has been. It had little hitches at launch but it was a solid 30fps (I'm only talking about the PS5 version btw, not about any improvements on the Series consoles).
 

Zathalus

Member
It's Tom, he is wrong all the time. It's native 4k @ 30fps on PS5, always has been. It had little hitches at launch but it was a solid 30fps (I'm only talking about the PS5 version btw, not about any improvements on the Series consoles).
Well, they confirmed locked 4k on both for quality mode.
 

Darius87

Member
Thats not how burden of proof works, cant prove the absence of something.

From the outside looking in we have no evidence to suggest that the PS5 supports this while on the contrary there’s very clear evidence in other platforms and PC.

If the Ps5 does indeed support this on a hardware level it behooves the manufacturer to make that clear.

That said, silence often speaks volumes in the engineering/tech space. No need for all the conjecture.
It's been proven that RDNA 2.0 supports ML operations with mixed-precision INT4/INT8 ops, and PS5 is RDNA 2.0 based.
That's proof no. 1.
If you need direct proof, Insomniac said that Spider-Man does muscle deformation with ML; just google it:
Does PS5 use machine learning?

Insomniac confirmed that it was using machine learning inference at runtime on the PlayStation 5 to push the effect. New PlayStation 5 games like God of War Ragnarok and Naughty Dog's next project may leverage and push this new AI tech to greater heights.
That's proof no. 2.

Way before that, EA said PS5 supports ML:
"More generally, we're seeing the GPU be able to power machine learning for all sorts of really interesting advancements in the gameplay and other tools," says Laura Miele, chief studio officer for EA.
That's proof no. 3.

FIFA does it with its HyperMotion on both platforms:
https://www.tweaktown.com/news/8050...r-ultra-realism-on-xbox-series-ps5/index.html
That's proof no. 4.

I've myself heard on podcasts from devs that PS5 can upscale textures with ML.
What other proof do you need? Sony's silence doesn't confirm or deny anything.
I have already asked my specific hardware question to Sony's usual communication channels and they haven't returned the answer. Machine learning software can run on DirectX11 class GPU.

Sony is less transparent when compared to the competition.
Again, prove that PS5 doesn't support INT4/INT8 ops.
 
it's been proved that RDNA 2.0 supports ML operations with mixed precision INT4/INT8 ops and PS5 is RDNA 2.0 based
that's proof nr1.
If you need direct proof insomniac said that S-M does muscle deformation with ML just google it:

that's proof nr2

way before EA said PS5 support ML:

that's proof nr3

FIFA does it with it's hyper motion on both platforms:
https://www.tweaktown.com/news/8050...r-ultra-realism-on-xbox-series-ps5/index.html
that's proof nr4

i've myself heard on podcasts from devs that PS5 can upscale textures with ML.
what else proof do you need? sony silence doesn't confirm or deny anything.

again prove that PS5 doesn't support INT4/INT8 ops.
None of this proves anything. The original premise is still true; pointing at related technologies proves nothing.

If we look at the literature provided by each manufacturer, we can see which features are supported. The fact that you have to go on this Sherlock Holmes-level investigative quest to try and prove your point speaks for itself; meanwhile, you can just google "ML PC" or "ML Xbox" and the explanations are clear as day.
 