
Xbox Series X’s Advantage Could Lie in Its Machine Learning-Powered Shader Cores, Says Quantic Dream

Panajev2001a

GAF's Pleasant Genius
He said (two times) that it is Navi based (not Big Navi), and FOR THIS REASON it doesn't have ML... it's very clear English (not like mine, ahaha). I hope that the day we find out whether PS5 does or doesn't have HW support for INT4, it will be clear evidence of which architecture it is based on.
Navi is not an architecture, it is a GPU family that implements the RDNA arch. Big Navi is the first RDNA2-based GPU design. Unless you want to re-ignite the old RDNA 1.5 bit, but, well, be my guest.
 

MonarchJT

Banned
Navi is not an architecture, it is a GPU family that implements the RDNA arch. Big Navi is the first RDNA2-based GPU design. Unless you want to re-ignite the old RDNA 1.5 bit, but, well, be my guest.
I know exactly what it is, and it is the family of RDNA1 GPUs (coincidentally).

My take is that PS5 supports FP16 like PS4 Pro, and that's it.
 

BeardGawd

Banned
Sure, let's cherry pick and then complain about others cherry picking. So the other statement he made mirrors Cerny's Road to PS5 presentation, but it is invalid because reasons, in a thread where some are arguing for what is essentially a hidden AI accelerator, like the hidden dGPU in the Xbox One, because that would make the marketing PR more believable. C'mon...

Take both or take neither.
[image: XiZr3RB.jpg]
Cherry picking? Please show me where he mentions ML in your quote?

I'll wait 😉.
 

Panajev2001a

GAF's Pleasant Genius
Cherry picking? Please show me where he mentions ML in your quote?

I'll wait 😉.
You are cherry picking one quote and rejecting the other, despite the other one also matching what Cerny already mentioned in his Road to PS5 talk about their collaboration with AMD. This is goalpost shifting :p.

Either accept them both or refuse them both; do not cherry pick:
[image: Hqt7pbZ.jpg]
 

Panajev2001a

GAF's Pleasant Genius
I love when people use this but don't want to post the very next line of info that came with it.

Like:
Some variants of the dual compute unit expose additional mixed-precision dot-product modes in the ALUs, primarily for accelerating machine learning inference. A mixed-precision FMA dot2 will compute two half-precision multiplications and then add the results to a single-precision accumulator. For even greater throughput, some ALUs will support 8-bit integer dot4 operations and 4-bit dot8 operations, all of which use 32-bit accumulators to avoid any overflows.
[image: fFMwmtb.png]
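
For reference, here is a minimal Python sketch of what those dot4/dot8 instructions compute: a scalar emulation only, with lane order and signedness as illustrative assumptions, since the real instructions do all of this in one ALU operation.

```python
# Scalar emulation of the packed dot-product modes the RDNA manual describes.
# Lane order and signedness here are illustrative assumptions; the actual
# instructions operate on packed 32-bit registers in a single operation.

def unpack(word, lane_bits, lanes):
    """Split a 32-bit word into `lanes` signed integers of `lane_bits` each."""
    mask = (1 << lane_bits) - 1
    values = []
    for i in range(lanes):
        v = (word >> (i * lane_bits)) & mask
        if v >= 1 << (lane_bits - 1):  # sign-extend the lane
            v -= 1 << lane_bits
        values.append(v)
    return values

def dot4_i8(a, b, acc):
    """INT8 dot4: four 8-bit products summed into a 32-bit accumulator."""
    return acc + sum(x * y for x, y in zip(unpack(a, 8, 4), unpack(b, 8, 4)))

def dot8_i4(a, b, acc):
    """INT4 dot8: eight 4-bit products summed into a 32-bit accumulator."""
    return acc + sum(x * y for x, y in zip(unpack(a, 4, 8), unpack(b, 4, 8)))

print(dot4_i8(0x01020304, 0x01010101, 0))  # 4 + 3 + 2 + 1 = 10
```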
 

Hezekiah

Banned
Unless Sony themselves admit their own weaknesses, people will continue to spread disinformation.
To be fair there's a lot of misinformation in this thread with at least three people who don't know what they're talking about, telling everybody what Series X's GPU capabilities are.
 

Omni_Manhatten

Neo Member
We shared actual evidence and you keep ignoring it, not really bothering to read it, answering with the same portion of the PR statement of your choice or cherry picking the DF interview, and reading out of the quotes what you want to read into them.
I get your view. I see that you are missing ours. It's possible PS5 did stick to FP16 support for what many assume is BC at the HW level. I'm not too sure on that part; that is not something I understand. What gets me is that the evidence that keeps getting shared supports this theory: that Sony indeed only supported INT8/4 within its RT structure, meaning those RT cores have it. Do the other CUs, or is there reason to think they may not? This article by the Quantic dev. The Sony engineer. DF asking Cerny and him not wanting to say. I mean, it seems you are trying to generalize the architecture's capabilities with a just-because-it's-there reasoning. However, we are saying the CU support had to be done with AMD and MS together. Also, the AI/ML work that has been done over the years by MS and Nvidia through their Azure agreement was costly. The time it takes to train an AI to properly run an algorithm is tremendous in man-hours, often tens of thousands. That was just one thing MS and Nvidia did in the AI space, hence why MS understood the need to mimic Tensor cores for DirectML. Where is Sony getting its ML from? AMD? Possible. Again, just questions I think are fair. Not crazy, and very logical to ask.
 

MonarchJT

Banned
Maybe, but that is kind of stating the same thing, standard RDNA2 feature:
[image: GJ5RZR9.jpg]
The point is that the engineer didn't say Big Navi... he said fuckin' Navi. HIM, not me; his words, not ours... two different times, talking about different things. I don't even want to bring up the die shot that exposes exactly what the engineer said. I don't even want to bring Leviathan or Locuza into it; both think, for the same reason as Leonardi, that it will not support ML.
But I know it will never be accepted. If int4 support is missing, will you finally agree with me about everything?
 

Panajev2001a

GAF's Pleasant Genius
I get your view. I see that you are missing ours. It's possible PS5 did stick to FP16 support for what many assume is BC at the HW level. I'm not too sure on that part; that is not something I understand. What gets me is that the evidence that keeps getting shared supports this theory: that Sony indeed only supported INT8/4 within its RT structure, meaning those RT cores have it. Do the other CUs, or is there reason to think they may not? This article by the Quantic dev. The Sony engineer. DF asking Cerny and him not wanting to say. I mean, it seems you are trying to generalize the architecture's capabilities with a just-because-it's-there reasoning. However, we are saying the CU support had to be done with AMD and MS together. Also, the AI/ML work that has been done over the years by MS and Nvidia through their Azure agreement was costly. The time it takes to train an AI to properly run an algorithm is tremendous in man-hours, often tens of thousands. That was just one thing MS and Nvidia did in the AI space, hence why MS understood the need to mimic Tensor cores for DirectML. Where is Sony getting its ML from? AMD? Possible. Again, just questions I think are fair. Not crazy, and very logical to ask.

Asking the question about what Sony did or did not do is not crazy at all, but I think we are taking little evidence on Sony's side (David Cage is not an engineer/dev, the PS5 dev quote is sketchy and we can take it and its correction or neither) and forgetting that it is not the first time both MS and Sony improved on a feature but only MS made it a big PR bullet point (see the improvements in measured controller latency: improved in both XSX's controller and PS5's DualSense, but only MS went and made a PR sea out of it).

So far what I see is MS taking an optional RDNA1 configuration / seemingly standard RDNA2 feature AMD has designed to allow standard CU's to run ML code faster (see the quote I posted from their RDNA Architecture manual) without adding Tensor Cores or other additional custom HW (again, the chips are out for anyone to see; XSX does not have a massively bigger die, and it has a LOT more CU's and wider RAM I/O, etc... not sure why we would need to see additional Tensor Core-like units in there too). Then they went and made a nice big point of it in their DF interview, and both they and AMD are enjoying the cross-promotion.

I think RDNA2 (RDNA1 too) was a quite smart bet by AMD, where they added RT acceleration and made the CU's friendlier/faster at running ML code in compute shaders by making surgical changes to their HW: see the TMU modifications to add RT intersection HW and extensions to read and update the BVH, and the CU extensions to run INT8/INT4 at proportionally higher throughput (like RPM/paired floats for FP16) as well as new instructions (mixed-precision dot products).

DirectML will run just fine on Tensor Cores or nVIDIA shader ALU’s.
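
As a back-of-the-envelope check on that proportional throughput: packed math scales the peak rate by 32 divided by the lane width, which is how XSX's published 12.15 TFLOPS FP32 becomes the TOPS figures from the DF interview. A quick sketch (peak rates only, not sustained performance):

```python
# Packed math scales peak throughput by 32 / lane_width, since each 32-bit
# ALU lane holds 2 FP16, 4 INT8, or 8 INT4 values per clock.
fp32_tflops = 12.15  # XSX's published peak FP32 rate

for fmt, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    print(f"{fmt}: {fp32_tflops * 32 / bits:.1f} T(FL)OPS peak")
# FP16: 24.3, INT8: 48.6, INT4: 97.2 -- the ~49/97 TOPS figures MS quoted
```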
 

J_Gamer.exe

Member
Read the accompanying article, Milan stage they all drop into the forties.
You mean this accompanying article which mentions no such thing...

"Put simply, it's 60 frames per second... with just one exception in our hours of play. In the Mendoza mission set in Argentina, it is possible to see Xbox consoles run between 50 to 60fps around a field towards the outskirts of the level, while PlayStation 5 remains constant at 60fps. Hopefully IO will look at improving this for owners of the Microsoft machines, but everything else we played ran flawlessly - bar a very slight stutter in a cutscene at the beginning of the Miami stage from Hitman 2, where all consoles dip from to near 40fps. At this point though, it feels more like a nitpick rather than anything that would have an impact on any specific purchasing recommendation"


Find me the proof of what you're saying or it's a lie. There is no gameplay drop mentioned and you've been found out.
 

Omni_Manhatten

Neo Member
Asking the question about what Sony did or did not do is not crazy at all, but I think we are taking little evidence on Sony's side (David Cage is not an engineer/dev, the PS5 dev quote is sketchy and we can take it and its correction or neither) and forgetting that it is not the first time both MS and Sony improved on a feature but only MS made it a big PR bullet point (see the improvements in measured controller latency).

So far what I see is MS taking an optional RDNA1 configuration / seemingly standard RDNA2 feature AMD has designed to allow standard CU's to run ML code faster (see the quote I posted from their RDNA Architecture manual) without adding Tensor Cores or other additional custom HW (again, the chips are out for anyone to see; XSX does not have a massively bigger die, and it has a LOT more CU's and wider RAM I/O, etc... not sure why we would need to see additional Tensor Core-like units in there too). Then they went and made a nice big point of it in their DF interview, and both they and AMD are enjoying the cross-promotion.
Maybe. I get that MS does PR like Sony; just curious. You are correct, it's not HW, it's HW support. I'd just like Cerny to give us that deep dive. It would at least give clarity on his vision for the console's future.
 

Panajev2001a

GAF's Pleasant Genius
The point is that the engineer didn't say Big Navi... he said fuckin' Navi. HIM, not me; his words, not ours... two different times, talking about different things. I don't even want to bring up the die shot that exposes exactly what the engineer said. I don't even want to bring Leviathan or Locuza into it; both think, for the same reason as Leonardi, that it will not support ML.
I would not take an out-of-context, late-at-night private message without his follow-up (which matches Cerny's words in the Road to PS5 talk too)... take both statements or take neither.
[image: nlVaEmn.jpg]

But I know it will never be accepted. If int4 support is missing, will you finally agree with me about everything?
I would accept that XSX can run INT4/INT8 code faster like PS4 Pro could for FP16 :).
 

MonarchJT

Banned
I would not take an out-of-context, late-at-night private message without his follow-up (which matches Cerny's words in the Road to PS5 talk too)... take both statements or take neither.
[image: nlVaEmn.jpg]


I would accept that XSX can run INT4/INT8 code faster like PS4 Pro could for FP16 :).
RDNA2 CUs can run INT4, and I don't even want to think about a hypothesis where you want me to believe that Sony has disabled INT4 support for I-don't-know-what obscure reason)))
correct?
 

Topher

Gold Member
I would not take an out-of-context, late-at-night private message without his follow-up (which matches Cerny's words in the Road to PS5 talk too)... take both statements or take neither.
[image: nlVaEmn.jpg]


I would accept that XSX can run INT4/INT8 code faster like PS4 Pro could for FP16 :).

That engineer's entire message is suspect. Why are people repeating what he said when he plainly says it isn't accurate?
 
I love when people use this but don’t want to post the very next line of info that came with it.
“To ensure compatibility with the older GCN instruction set, the RDNA SIMDs in Navi support mixed-precision compute. This makes the new Navi GPUs suitable for not only gaming workloads (FP32), but also for scientific (FP64) and AI (FP16) applications. The RDNA SIMD improves latency by 2x in wave32 mode and by 44% in wave64 mode.” I mean how many more times is this stuff going to be argued on the internet?
Like I said, show me the actual evidence. I even shared the stuff his argument comes from. Did they indeed go as far as MS on the shaders to support it? DF has asked. The dev who wrote the article this topic is about isn't just talking out his butt. It's been an actual question for a while. Even Cerny said he would answer it later.

This is a perfect example of your poor reading comprehension.

If mixed-precision compute is added to support BC with GCN GPUs, then it means mixed precision was supported in those old GCN GPUs.

That said, frankly this quote offers nothing to support your argument. The ML argument is about mixed-precision INT ops. You keep talking about and posting quotes discussing FP... it shows you don't even grasp that floating point (FP) and integer (INT) ops are different.

If you can't even differentiate between something so basic, we're wasting our time arguing with you.
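
To make the FP-vs-INT distinction concrete, here is a small Python illustration: the same 32-bit register contents decode to entirely different lane values depending on whether the hardware treats them as packed FP16 or packed INT8 (the word value below is an arbitrary example):

```python
import struct
import numpy as np

# One 32-bit word, two interpretations: two FP16 lanes vs. four INT8 lanes.
# This is why a quote about FP16 support says nothing about packed-INT (ML) ops.
word = 0x3C004000

raw = struct.pack("<I", word)                # the same four bytes either way
print(np.frombuffer(raw, dtype=np.float16))  # [2.0, 1.0]   -> two FP16 lanes
print(np.frombuffer(raw, dtype=np.int8))     # [0, 64, 0, 60] -> four INT8 lanes
```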
 

Omni_Manhatten

Neo Member
RDNA2 CUs can run INT4, and I don't even want to think about a hypothesis where you want me to believe that Sony has disabled INT4 support for I-don't-know-what obscure reason)))
correct?
Don't think of it as disabled; you have to enable it instead, at the HW level, before your SoC goes into production. I mean, Sony didn't pick up all RDNA 2 features, and they added some of their own. So who knows if they felt they needed it outside of RT.
 

BeardGawd

Banned
I would not take an out of context late at night private message without his followup (which matches Cerny’s words in the Road to PS5 talk too)... take both statements or take neither.
[image: nlVaEmn.jpg]
It's weird you keep posting this quote. It does not contradict his first quote. He doesn't even mention ML here.
I would accept that XSX can run INT4/INT8 code faster like PS4 Pro could for FP16 :).

That's all anyone is talking about! Or at least that's all I was trying to say.

X1X also supported FP16, just like PS4 Pro. It just didn't have the rapid packed math support (PS4 Pro can execute FP16 commands twice as fast as the X1X).

Same situation here: due to INT8/INT4 rapid packed math support, XSX can execute INT8 twice as fast as PS5 and INT4 four times as fast.
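
For what it's worth, those 2x/4x ratios work out per ALU lane per clock if you assume PS5's CUs pack INT8/INT4 only at the double (FP16-style) rate while XSX gets the full dot4/dot8 modes. That assumption is exactly what this thread is disputing, so treat this as a sketch of the claim's arithmetic, not a confirmed spec:

```python
# Values processed per 32-bit ALU lane per clock under the (unconfirmed)
# assumption being argued: XSX has the dot4/dot8 packed-INT modes, PS5 tops
# out at double-rate packing.
xsx = {"FP16": 2, "INT8": 4, "INT4": 8}
ps5 = {"FP16": 2, "INT8": 2, "INT4": 2}

for fmt in ("INT8", "INT4"):
    print(f"{fmt}: XSX {xsx[fmt] // ps5[fmt]}x PS5 per lane per clock")
# INT8: 2x, INT4: 4x -- the figures quoted above
```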
 

Panajev2001a

GAF's Pleasant Genius
It's weird you keep posting this quote. It does not contradict his first quote. He doesn't even mention ML here.
It does since the first quote was the basis of the whole RDNA1.x circus. Take them both or leave them both.
That's all anyone is talking about! Or at least that's all I was trying to say.

X1X also supported FP16, just like PS4 Pro. It just didn't have the rapid packed math support (PS4 Pro can execute FP16 commands twice as fast as the X1X).
I know, FP16 support is not new, double throughput is.

Same situation here: due to INT8/INT4 rapid packed math support, XSX can execute INT8 twice as fast as PS5 and INT4 four times as fast.
We will see, possibly so, but we will see :).
 

J_Gamer.exe

Member
You mean this accompanying article which mentions no such thing...

"Put simply, it's 60 frames per second... with just one exception in our hours of play. In the Mendoza mission set in Argentina, it is possible to see Xbox consoles run between 50 to 60fps around a field towards the outskirts of the level, while PlayStation 5 remains constant at 60fps. Hopefully IO will look at improving this for owners of the Microsoft machines, but everything else we played ran flawlessly - bar a very slight stutter in a cutscene at the beginning of the Miami stage from Hitman 2, where all consoles dip from to near 40fps. At this point though, it feels more like a nitpick rather than anything that would have an impact on any specific purchasing recommendation"


Find me the proof of what you're saying or it's a lie. There is no gameplay drop mentioned and you've been found out.
I expect now he's been caught red-handed fabricating reality... riky will vanish... 🙄🤡 It's bad when you have to make up fake PS5 performance drops that don't exist to argue your preferred console is better.

Poor form lad, poor form.
 

Omni_Manhatten

Neo Member
This is a perfect example of your poor reading comprehension.

If mixed-precision compute is added to support BC with GCN GPUs, then it means mixed precision was supported in those old GCN GPUs.

That said, frankly this quote offers nothing to support your argument. The ML argument is about mixed-precision INT ops. You keep talking about and posting quotes discussing FP... it shows you don't even grasp that floating point (FP) and integer (INT) ops are different.

If you can't even differentiate between something so basic, we're wasting our time arguing with you.
I simply said this is the exact quote that follows the screenshot showing AMD RDNA2 INT support that is being shared. It's literally the very next paragraph. You said it was evidence being used to answer my questions, and I simply posted what the paragraph following that evidence says. Copied and pasted it. That's not some mystery. You keep saying it means nothing; it specifically says FP16 gaming support. We shouldn't have to keep explaining this over and over. We understand what is being shown; it's not our question. You don't understand what we are asking.
 