
Next-Gen PS5 & XSX |OT| Console tEch threaD


IntentionalPun

Ask me about my wife's perfect butthole
Why would you want a dev whose strength is in narrative-driven games (which is obviously their passion as devs) to make something completely different from this?

Because I actually think they also show some great talent that would lend itself to different kinds of games. I actually have found the shooting/gunplay/grenade stuff in their games has been great since Uncharted 2. They do some pretty awesome stuff with physics in general in their games (like the tweet I was responding to.) Their tech would be a crazy playground for like a Hitman style game for instance, and I think they'd excel at something like that (or whatever else they could come up with.)

ND makes the games they want to make. I'm fine with that. If you don't like the games ND wants to make, there are many other devs out there whose passions align more with the context-less, mindless gameplay-focused stuff you're looking for.

I don't get why anyone would want a game dev whose strengths are in making games in a genre you personally dislike to make something you do. What makes you think it would even be good?

See above. I'm not telling anyone what games to make though. Just expressing my personal opinion from my own perspective.
 
Last edited:

SynTha1

Member
Why does someone need to have been into Xbox before Gamepass? It's been around almost 4 years.

Beyond that, Xbox and Xbox Live built a big following in the 360 era. Some of that following has stuck around and is now playing any number of popular third-party games online for the bulk of their gaming time. I have multiple friend groups who still generally game on XBL. Many also have PlayStation consoles... some have PCs, but XBL on Xbox is still their main go-to.

Put your sword down warrior, and just..... game, bro 🫂
Hey, what people do with their time and money is on them. I just want to know why they continue to support Microsoft when they do the absolute minimum to get people's money. I make my reasons known and never bite my tongue about how I feel about Microsoft, so I just want to know why people still support them after all these years of straight-up lying, and they know they are lying. Like, I don't get it.
 

oldergamer

Member
All GPUs can do ML/DL math. It relies primarily on general floating point compute performance.

PS5 has considerable FP compute as well as integer math capability. And with Rapid Packed Math for INT ops not being exclusive to XSX (it's standard for RDNA2), there's no rational reason to believe the PS5 doesn't also include it... well, other than fanboy wet dreams.

Regardless, as clearly stated previously, all the RPM-for-INT dick-waving by the Xbox faithful is redundant, because DL computation is extremely latency sensitive (both operation latency and, especially, memory access latency), and so depends far more on data locality and memory bandwidth than anything else.
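
To make the data-locality and bandwidth point concrete, here's a rough roofline-style sketch in Python; the layer size and the compute/bandwidth figures below are illustrative placeholders, not measured numbers for either console.

```python
# Rough roofline-style estimate of why DL inference on a console GPU tends to be
# memory-bound rather than compute-bound. All numbers are illustrative placeholders.

def arithmetic_intensity(flops, bytes_moved):
    """FLOPs performed per byte of memory traffic."""
    return flops / bytes_moved

# Hypothetical fully-connected layer: 1024x1024 weights in FP16, batch of 1.
m, n = 1024, 1024
flops = 2 * m * n                    # one multiply-add per weight
bytes_moved = m * n * 2              # weights read once, FP16 = 2 bytes each
ai = arithmetic_intensity(flops, bytes_moved)    # ~1 FLOP per byte

# Illustrative hardware balance point: ~12 TFLOPS of compute, ~500 GB/s of bandwidth.
peak_flops = 12e12
peak_bw = 500e9
machine_balance = peak_flops / peak_bw           # ~24 FLOP per byte

# If the layer's arithmetic intensity sits far below the machine balance point,
# throughput is limited by memory bandwidth, not by ALU (or RPM) throughput.
print(f"layer intensity ≈ {ai:.1f} FLOP/B, machine balance ≈ {machine_balance:.0f} FLOP/B")
print("bandwidth-bound" if ai < machine_balance else "compute-bound")
```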



Why would you want a dev whose strength is in narrative-driven games (which is obviously their passion as devs) to make something completely different from this?

ND makes the games they want to make. I'm fine with that. If you don't like the games ND wants to make, there are many other devs out there whose passions align more with the context-less, mindless gameplay-focused stuff you're looking for.

I don't get why anyone would want a game dev whose strengths are in making games in a genre you personally dislike to make something you do. What makes you think it would even be good?
I'd disagree with you there. Not all GPUs can handle machine learning. If that were the reality, then we would see more than just specific cards with the ability to support the feature. You could say the same about the CPU: you can handle the math there too, but without dedicated hardware handling it, it's not fast. Also, didn't the Sony engineer who had his private messages posted for all to see confirm on Twitter months back that it doesn't have dedicated hardware for this?
 
Because I actually think they also show some great talent that would lend itself to different kinds of games. I actually have found the shooting/gunplay/grenade stuff in their games has been great since Uncharted 2. They do some pretty awesome stuff with physics in general in their games (like the tweet I was responding to.) Their tech would be a crazy playground for like a Hitman style game for instance, and I think they'd excel at something like that (or whatever else they could come up with.)



See above. I'm not telling anyone what games to make though. Just expressing my personal opinion from my own perspective.
Well if you already recognise the merits of the gameplay design in the games they currently make, how would stripping away the narrative elements make those games better for you?

Imho, Hitman-like games boasting very deep gameplay systems don't by definition preclude entertaining and engaging storytelling. Imho, the Hitman games themselves would be elevated by adding a better narrative back in.

The problem, as I see it, and please do feel free to disagree, is that many seem to think of "gameplay versus narrative" as some kind of mutually exclusive binary. It isn't. And I'd argue ND of all devs (among many of Sony's other first-party devs) get the recognition they do because their games boast both superb gameplay and story, offering the best of both worlds.
 
I'd disagree with you there. Not all GPUs can handle machine learning. If that were the reality, then we would see more than just specific cards with the ability to support the feature. You could say the same about the CPU: you can handle the math there too, but without dedicated hardware handling it, it's not fast. Also, didn't the Sony engineer who had his private messages posted for all to see confirm on Twitter months back that it doesn't have dedicated hardware for this?

No, he just said some vague BS that because it's Navi-based (same as XSX) it doesn't support ML.
 
Last edited:

IntentionalPun

Ask me about my wife's perfect butthole
Well if you already recognise the merits of the gameplay design in the games they currently make, how would stripping away the narrative elements make those games better for you?

It's not just the narrative elements. I'm not sure why this needs more explaining than what you've already quoted; I really dislike large sections of every ND game, at the same time I really love sections of their games. I don't know why I need to say more than that; it should be obvious that someone would prefer.. more of what they love, and less of what they hate lol

Imho, Hitman-like games boasting very deep gameplay systems don't by definition preclude entertaining and engaging storytelling. Imho, the Hitman games themselves would be elevated by adding a better narrative back in.

The problem, as I see it, and please do feel free to disagree, is that many seem to think of "gameplay versus narrative" as some kind of mutually exclusive binary. It isn't. And I'd argue ND of all devs (among many of Sony's other first-party devs) get the recognition they do because their games boast both superb gameplay and story, offering the best of both worlds.

I never said or implied any of this.
 
I'd disagree with you there. Not all GPUs can handle machine learning. If that were the reality, then we would see more than just specific cards with the ability to support the feature. You could say the same about the CPU: you can handle the math there too, but without dedicated hardware handling it, it's not fast. Also, didn't the Sony engineer who had his private messages posted for all to see confirm on Twitter months back that it doesn't have dedicated hardware for this?

Don't be obtuse. GPUs are much better than CPUs at DL computation for obvious reasons, but that doesn't mean they are particularly good at it in absolute terms. For anything other than inferencing using the simplest DL models, they're actually pretty bad. Which is why you haven't seen a single instance of DL inference used in GPU compute during the current gen.

Rapid Packed Math for INT ops isn't "dedicated hardware for DL/ML", contrary to the bollocks MS's marketing team has been peddling.

Integer math is used throughout the GPU graphics rendering and compute pipelines, for all sorts of workloads. It's an acceleration of generalised integer compute for lower-precision integer numbers. It's analogous to RPM for FP16 for floating point math.

If you're looking for dedicated hardware for DL/ML, you need something specifically targeted towards the dense-data matrix math of DL computation. You want highly parallel cores with short execution pipelines, more registers, cache and shared memory. You want features for sparse data compression to improve effective memory bandwidth usage, and lots and lots of bandwidth to main memory. You want Tensor cores.

Tensor cores on Nvidia's flagship cards provide well over 230 TFLOPs of floating point performance. Even with 2x or 4x RPM on the XSX's 12 TFLOPs, it isn't even coming close. And for those Nvidia cards, that 230+ TFLOPs is free, as it's dedicated hardware. So your DL computation isn't even touching the CUDA cores, and you still have 30+ TFLOPs of CUDA core performance for all your other game rendering workloads.

This is the problem with your perspective on this subject, and with that of the many others who push this ignorant narrative. You mistakenly believe faster low-precision integer performance is somehow going to be a game-changer and make the XSX ridiculously performant for DL computation. In reality, you're almost certainly looking at a low single-digit percentage speed-up at best. And that's a speed-up to a base DL compute performance that's already pathetic compared to other products with actual dedicated hardware providing the meaningful level of performance required for this type of computation, e.g. tensor cores or TPUs.
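
As a quick back-of-envelope check, the sketch below uses only the round figures quoted above (the XSX's ~12 TFLOPs shader peak with 2x/4x packed math, and the 230+ TFLOPs tensor-core figure); these are theoretical/marketing peaks rather than sustained throughput, so treat the ratio as a ballpark only.

```python
# Back-of-envelope comparison using only the round figures quoted in the post above.
# These are theoretical peaks, not sustained throughput.

xsx_fp32_tflops = 12.0                 # quoted XSX shader-core FP32 peak
rpm_2x = xsx_fp32_tflops * 2           # 2x packed (16-bit) ops -> ~24 TOPS, shares the shader ALUs
rpm_4x = xsx_fp32_tflops * 4           # 4x packed (8-bit) ops  -> ~48 TOPS, shares the shader ALUs

tensor_core_tflops = 230.0             # quoted flagship tensor-core figure, dedicated hardware

print(f"XSX packed-math peak: ~{rpm_2x:.0f} to ~{rpm_4x:.0f} TOPS (displaces rendering work)")
print(f"Tensor cores:         ~{tensor_core_tflops:.0f} TFLOPs (runs alongside the CUDA cores)")
print(f"Gap: roughly {tensor_core_tflops / rpm_4x:.0f}x even at the 4x packed rate")
```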
 
Last edited:

IntentionalPun

Ask me about my wife's perfect butthole
They are doing the best kinds of games.

Gameplay is perfection in TLOU2.
Admittedly, I have not played TLOU2 yet, although I do own it now.

I'm not in a hurry because I really struggled to find much enjoyment out of TLOU.

But I'll give it a shot; definitely seen some cool videos of TLOU2 gameplay. I'm just also dreading inevitably being forced through some boring shit lol
 
Last edited:

assurdum

Banned
I'd disagree with you there. Not all GPUs can handle machine learning. If that were the reality, then we would see more than just specific cards with the ability to support the feature. You could say the same about the CPU: you can handle the math there too, but without dedicated hardware handling it, it's not fast. Also, didn't the Sony engineer who had his private messages posted for all to see confirm on Twitter months back that it doesn't have dedicated hardware for this?
And what exactly does the Xbox have that's so special for handling ML that the PS5 doesn't? The magical logo?
 
It's not just the narrative elements. I'm not sure why this needs more explaining than what you've already quoted; I really dislike large sections of every ND game, at the same time I really love sections of their games. I don't know why I need to say more than that; it should be obvious that someone would prefer.. more of what they love, and less of what they hate lol



I never said or implied any of this.

Well, then just skip the cutscenes if you don't care about them?

I'm just trying to understand your core complaint.
 

assurdum

Banned
Don't be obtuse. GPUs are much better than CPUs at DL computation for obvious reasons, but that doesn't mean they are particularly good at it in absolute terms. For anything other than inferencing using the simplest DL models, they're actually pretty bad. Which is why you haven't seen a single instance of DL inference used in GPU compute during the current gen.

Rapid Packed Math for INT ops isn't "dedicated hardware for DL/ML", contrary to the bollocks MS's marketing team has been peddling.

Integer math is used throughout the GPU graphics rendering and compute pipelines, for all sorts of workloads. It's an acceleration of generalised integer compute for lower-precision integer numbers. It's analogous to RPM for FP16 for floating point math.

If you're looking for dedicated hardware for DL/ML, you need something specifically targeted towards the dense-data matrix math of DL computation. You want highly parallel cores with short execution pipelines, more registers, cache and shared memory. You want features for sparse data compression to improve effective memory bandwidth usage, and lots and lots of bandwidth to main memory. You want Tensor cores.

Tensor cores on Nvidia's flagship cards provide well over 230 TFLOPs of floating point performance. Even with 2x or 4x RPM on the XSX's 12 TFLOPs, it isn't even coming close. And for those Nvidia cards, that 230+ TFLOPs is free, as it's dedicated hardware. So your DL computation isn't even touching the CUDA cores, and you still have 30+ TFLOPs of CUDA core performance for all your other game rendering workloads.

This is the problem with your perspective on this subject, and with that of the many others who push this ignorant narrative. You mistakenly believe faster low-precision integer performance is somehow going to be a game-changer and make the XSX ridiculously performant for DL computation. In reality, you're almost certainly looking at a low single-digit percentage speed-up at best.
Doubt he has a single clue about what you're trying to explain to him, but I admire your patience. Anyway, the level of disinformation MS has spread just to promote its hardware is something else. The level of absurdity, to promise magical boosts with their solutions: 50% more performance per core with the Velocity Architecture, 300% with machine learning. Good Lord. At least Sony never claimed such sensationalist performance.
 
Last edited:

ethomaz

Banned
OK, let's clear this up here before it becomes a "thing".

EA when asked specifically about PS5 GPU:

“More generally, we’re seeing the GPU be able to power machine learning for all sorts of really interesting advancements in the gameplay and other tools.”

People keep falling for MS PR marketing.
 
Last edited:

JonkyDonk

Member
Don't be obtuse. GPUs are much better than CPUs at DL computation for obvious reasons, but that doesn't mean they are particularly good at it in absolute terms. For anything other than inferencing using the simplest DL models, they're actually pretty bad. Which is why you haven't seen a single instance of DL inference used in GPU compute during the current gen.

Rapid Packed Math for INT ops isn't "dedicated hardware for DL/ML", contrary to the bollocks MS's marketing team has been peddling.

Integer math is used throughout the GPU graphics rendering and compute pipelines, for all sorts of workloads. It's an acceleration of generalised integer compute for lower-precision integer numbers. It's analogous to RPM for FP16 for floating point math.

If you're looking for dedicated hardware for DL/ML, you need something specifically targeted towards the dense-data matrix math of DL computation. You want highly parallel cores with short execution pipelines, more registers, cache and shared memory. You want features for sparse data compression to improve effective memory bandwidth usage, and lots and lots of bandwidth to main memory. You want Tensor cores.

Tensor cores on Nvidia's flagship cards provide well over 230 TFLOPs of floating point performance. Even with 2x or 4x RPM on the XSX's 12 TFLOPs, it isn't even coming close. And for those Nvidia cards, that 230+ TFLOPs is free, as it's dedicated hardware. So your DL computation isn't even touching the CUDA cores, and you still have 30+ TFLOPs of CUDA core performance for all your other game rendering workloads.

This is the problem with your perspective on this subject, and with that of the many others who push this ignorant narrative. You mistakenly believe faster low-precision integer performance is somehow going to be a game-changer and make the XSX ridiculously performant for DL computation. In reality, you're almost certainly looking at a low single-digit percentage speed-up at best.
I think you are giving some people way too much credit with these detailed replies. Marketing lines get parroted around here with no knowledge of what any of it means. There has been so much nonsense all year about all these supposedly massive difference-makers for XSX, you'd think it was a $3000 machine the way these fanboys talk about it.
 

ethomaz

Banned
Admittedly, I have not played TLOU2 yet, although I do own it now.

I'm not in a hurry because I really struggled to find much enjoyment out of TLOU.

But I'll give it a shot; definitely seen some cool videos of TLOU2 gameplay. I'm just also dreading inevitably being forced through some boring shit lol
If you didn't like TLOU's gameplay, the chances that you won't like TLOU2's gameplay are very high.
TLOU had the best TPS gameplay of the PS3 gen... no other game had the same weapon recoil and feedback... shooting with a shotgun or bow is absolutely amazing.

I can see how the lower difficulties seem really easy, but the hardest difficulty makes you really appreciate the gameplay, and of course it shines most in the Factions MP.

It is like the Destiny of TPS gameplay.
 

IntentionalPun

Ask me about my wife's perfect butthole
Well, then just skip the cutscenes if you don't care about them?

I'm just trying to understand your core complaint.
Or, get this.. I could buy and play their games, and then offer my opinion on the things I don't like about them on the internet.

It's not just cutscenes that I dislike about ND's games. I do not enjoy their platforming, their puzzles, and they go well beyond "cutscenes" and generally have a lot of forced narrative sections.. long walks/talks for instance. It creates a balance of games that I find a mix of fun and frustrating.. I'm happy for people that enjoy it all, I don't.
 

IntentionalPun

Ask me about my wife's perfect butthole
If you didn't like TLOU's gameplay, the chances that you won't like TLOU2's gameplay are very high.
TLOU had the best TPS gameplay of the PS3 gen... no other game had the same weapon recoil and feedback... shooting with a shotgun or bow is absolutely amazing.

I can see how the lower difficulties seem really easy, but the hardest difficulty makes you really appreciate the gameplay, and of course it shines most in the Factions MP.

It is like the Destiny of TPS gameplay.
I play all games on whatever hard option is available to me at the start, and ND games are no different.. and as I said, I find the gunplay incredibly satisfying.

Their games, though, are generally balanced towards that being a fraction of what you are doing as you play. Is TLOU2 different? I REALLY disliked the "stealth" stuff in TLOU too (and I like a good stealth game, but found it janky as hell in TLOU.)
 
Last edited:

assurdum

Banned
I play all games on whatever hard option is available to me at the start, and ND games are no different.. and as I said, I find the gunplay incredibly satisfying.

Their games, though, are generally balanced towards that being a fraction of what you are doing as you play. Is TLOU2 different? I REALLY disliked the "stealth" stuff in TLOU too (and I like a good stealth game, but found it janky as hell in TLOU.)
It's probably more a refinement of the first. But I agree with you, ND should stop with those long sections about nothing, or at least give the option to replay the whole campaign without them. There is a mode in TLOU2 where you can choose the level to play, but it isn't exactly the same, and the loading times are insufferable.
 
Last edited:

ethomaz

Banned
I play all games on whatever hard option is available to me at the start, and ND games are no different.. and as I said, I find the gunplay incredibly satisfying.

Their games, though, are generally balanced towards that being a fraction of what you are doing as you play. Is TLOU2 different? I REALLY disliked the "stealth" stuff in TLOU too (and I like a good stealth game, but found it janky as hell in TLOU.)
TLOU2 does things exactly like TLOU did... you can beat each part the way you want... that means you can use any type of gunplay, stealth, or environmental approach you want.

On the easy difficulties you will feel good with whatever type of gameplay you choose... it will be easy.
On the hardest difficulty it will vary, because some playstyles will be easier than others (i.e. some parts will be less hard with stealth, but nothing forces you to go stealth).

What changes in TLOU2? The amount of options... in TLOU2 you have more options in terms of weapons, crafting, environment use, and stealth at your disposal... plus the enemy AI is smarter and there's more diversity.

Like I said, if you didn't find TLOU's gameplay engaging, then it will be hard to like TLOU2's gameplay.

Maybe you can wait for the much-"promised" MP, where the focus will be only on gameplay.
 
Last edited:
OK, let's clear this up here before it becomes a "thing".

EA when asked specifically about PS5 GPU:

“More generally, we’re seeing the GPU be able to power machine learning for all sorts of really interesting advancements in the gameplay and other tools.”

The caveat here is that no distinction is being made as to whether this is referencing offline or runtime performance.

I'm sure both PS5 and XSX can cope with inferencing with very simplistic DL models. But I do suspect what is actually being referenced here is the use of simple DL models used offline for things like procedural generation of assets and animation blending. Things on the "tools" side (as quoted) that aid more in games development.

I do hope we get to see DL stuff at runtime, particularly for things like gameplay systems AI and animation blending. But I won't hold my breath.
 

ethomaz

Banned
The caveat here is that no distinction is being made as to whether this is referencing offline or runtime performance.

I'm sure both PS5 and XSX can cope with inferencing with very simplistic DL models. But I do suspect what is actually being referenced here is the use of simple DL models used offline for things like procedural generation of assets and animation blending. Things on the "tools" side (as quoted) that aid more in games development.

I do hope we get to see DL stuff at runtime, particularly for things like gameplay systems AI and animation blending. But I won't hold my breath.
The biggest example of ML is still offline... DLSS.

I don’t see runtime ML being a thing.
 
Last edited:

TGO

Hype Train conductor. Works harder than it steams.
Forza is pre-baked... if you stay in the same place on the track, it will never change the weather.
You need to trigger it; it's set to some lap before the race starts... so after you do that number of laps, it will change the weather.
Like Outrun then
 
Don't be obtuse. GPUs are much better than CPUs at DL computation for obvious reasons, but that doesn't mean they are particularly good at it in absolute terms. For anything other than inferencing using the simplest DL models, they're actually pretty bad. Which is why you haven't seen a single instance of DL inference used in GPU compute during the current gen.

Rapid Packed Math for INT ops isn't "dedicated hardware for DL/ML", contrary to the bollocks MS's marketing team has been peddling.

Integer math is used throughout the GPU graphics rendering and compute pipelines, for all sorts of workloads. It's an acceleration of generalised integer compute for lower-precision integer numbers. It's analogous to RPM for FP16 for floating point math.

If you're looking for dedicated hardware for DL/ML, you need something specifically targeted towards the dense-data matrix math of DL computation. You want highly parallel cores with short execution pipelines, more registers, cache and shared memory. You want features for sparse data compression to improve effective memory bandwidth usage, and lots and lots of bandwidth to main memory. You want Tensor cores.

Tensor cores on Nvidia's flagship cards provide well over 230 TFLOPs of floating point performance. Even with 2x or 4x RPM on the XSX's 12 TFLOPs, it isn't even coming close. And for those Nvidia cards, that 230+ TFLOPs is free, as it's dedicated hardware. So your DL computation isn't even touching the CUDA cores, and you still have 30+ TFLOPs of CUDA core performance for all your other game rendering workloads.

This is the problem with your perspective on this subject, and with that of the many others who push this ignorant narrative. You mistakenly believe faster low-precision integer performance is somehow going to be a game-changer and make the XSX ridiculously performant for DL computation. In reality, you're almost certainly looking at a low single-digit percentage speed-up at best. And that's a speed-up to a base DL compute performance that's already pathetic compared to other products with actual dedicated hardware providing the meaningful level of performance required for this type of computation, e.g. tensor cores or TPUs.

Great lesson. For me, PS5/XSX is the generation with the biggest I/O improvement. PS6 and the future Xbox (whatever it's called) will be the generation with the biggest ML/DL improvement, with dedicated hardware. It will be very interesting.

DL/ML not only for DLSS, but also for facial and cloth animation, NPC generators like JALI, gameplay, and much else.
 
Last edited:
The level of absurdity, to promise magical boosts with their solutions: 50% more performance per core with the Velocity Architecture, 300% with machine learning. Good Lord. At least Sony never claimed such sensationalist performance.

I'm honestly wondering if some of that is just a rehash of Mr X's dual-GPU theory. But instead it's about a ton of features that need to be unlocked to boost the system's performance.

I understand as developers get familiar with both the PS5 and the XSX games will get better. But the notion that the XSX has a ton of performance boosting features that need to be unlocked seems a bit absurd to me.
 

SlimySnake

Flashless at the Golden Globes
I play all games on whatever hard option is available to me at the start, and ND games are no different.. and as I said, I find the gunplay incredibly satisfying.

Their games, though, are generally balanced towards that being a fraction of what you are doing as you play. Is TLOU2 different? I REALLY disliked the "stealth" stuff in TLOU too (and I like a good stealth game, but found it janky as hell in TLOU.)
The problem is that if you are playing on the hardest difficulty, they reduce the ammo drops and the drops for other resources, because it's primarily a stealth game.

If you want to enjoy the gunplay, you need to play on normal, where ammo is somewhat abundant and you can shoot your way out of everything.

In TLOU2, you can set the difficulty parameters separately, so you can set enemy AI to hardest while keeping other settings like ammo drops at moderate. Still, I would not recommend you play TLOU2 if the first one did nothing for you. It's over twice as long and far more boring than the first. The gameplay is the game's only saving grace.
 

ethomaz

Banned
What exactly do you doubt?

DLSS uses high-end server farms offline... the results are stored to be used for temporal construction of the image at runtime.

The runtime work is pretty light.
All the hard work... the "learning" part is done offline.

That is why the results are better than doing temporal construction at runtime alone... the "learning" data is way better.
 
Last edited:

IntentionalPun

Ask me about my wife's perfect butthole
What exactly do you doubt?

DLSS is rendered on high-end server farms... the results are stored in game assets to be used for temporal construction of the image at runtime.

DLSS is not "rendered" on server farms. Like literally all ML, the model is trained on data ahead of time (aka "offline".) That model is then used to make a prediction at run time; in the case of DLSS 2.0 this happens on the GPU against the current frame (and past frames) to refine their image reconstruction.
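
To make the offline-training versus runtime-inference split concrete, here's a minimal, framework-free sketch using a toy linear model and made-up data; the file name and shapes are purely illustrative, and the real DLSS network and training pipeline are of course far larger and proprietary.

```python
import numpy as np

# --- Offline ("server farm") phase: fit a model on data gathered ahead of time ---
rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 8))                  # stand-in for training examples
true_w = rng.normal(size=8)
y = X @ true_w + 0.01 * rng.normal(size=10_000)   # stand-in targets

w, *_ = np.linalg.lstsq(X, y, rcond=None)         # "training": expensive, done once
np.save("model_weights.npy", w)                   # ship the trained weights with the game/driver

# --- Runtime phase: cheap inference against brand-new data, every frame ---
w = np.load("model_weights.npy")

def infer(frame_features: np.ndarray) -> np.ndarray:
    # A single matrix-vector product: this is the only part paid for at runtime.
    return frame_features @ w

print(infer(rng.normal(size=(1, 8))))
```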
 
What exactly do you doubt?

DLSS is rendered on high-end server farms... the results are stored in game assets to be used for temporal construction of the image at runtime.
Not exactly, ethomaz.

The DLSS neural network model is trained offline (possibly on a server farm or cloud-based datacentre) using a metric crap load of gameplay frames captured from the game engine.

The model is then applied (i.e. inferencing) at runtime on the lower-resolution image to determine what a clean and sharp representation of a higher-resolution version of the image should look like; hence it's a type of image upscale.

"Temporal reconstruction" refers to something very different that isn't based on DL.
 

ethomaz

Banned
DLSS is not "rendered" on server farms. Like literally all ML, the model is trained on data ahead of time (aka "offline".) That model is then used to make a prediction at run time; in the case of DLSS 2.0 this happens on the GPU against the current frame (and past frames) to refine their image reconstruction.
The whole "learning" is rendering the game on server farms with scripted gameplay actions, to learn how to predict the graphics to be rendered at runtime.
 
Last edited:
DLSS is not "rendered" on server farms. Like literally all ML, the model is trained on data ahead of time (aka "offline".) That model is then used to make a prediction at run time; in the case of DLSS 2.0 this happens on the GPU against the current frame (and past frames) to refine their image reconstruction.
I'm not sure DLSS 2.0 even needs previous frame information at all. I don't think it takes into account temporal information at all (well, not directly😉).

As such, calling it image reconstruction may be a tad inaccurate.
 

oldergamer

Member
Don't be obtuse. GPUs are much better than CPUs at DL computation for obvious reasons, but that doesn't mean they are particularly good at it in absolute terms. For anything other than inferencing using the simplest DL models, they're actually pretty bad. Which is why you haven't seen a single instance of DL inference used in GPU compute during the current gen.

Rapid Packed Math for INT ops isn't "dedicated hardware for DL/ML", contrary to the bollocks MS's marketing team has been peddling.

Integer math is used throughout the GPU graphics rendering and compute pipelines, for all sorts of workloads. It's an acceleration of generalised integer compute for lower-precision integer numbers. It's analogous to RPM for FP16 for floating point math.

If you're looking for dedicated hardware for DL/ML, you need something specifically targeted towards the dense-data matrix math of DL computation. You want highly parallel cores with short execution pipelines, more registers, cache and shared memory. You want features for sparse data compression to improve effective memory bandwidth usage, and lots and lots of bandwidth to main memory. You want Tensor cores.

Tensor cores on Nvidia's flagship cards provide well over 230 TFLOPs of floating point performance. Even with 2x or 4x RPM on the XSX's 12 TFLOPs, it isn't even coming close. And for those Nvidia cards, that 230+ TFLOPs is free, as it's dedicated hardware. So your DL computation isn't even touching the CUDA cores, and you still have 30+ TFLOPs of CUDA core performance for all your other game rendering workloads.

This is the problem with your perspective on this subject, and with that of the many others who push this ignorant narrative. You mistakenly believe faster low-precision integer performance is somehow going to be a game-changer and make the XSX ridiculously performant for DL computation. In reality, you're almost certainly looking at a low single-digit percentage speed-up at best. And that's a speed-up to a base DL compute performance that's already pathetic compared to other products with actual dedicated hardware providing the meaningful level of performance required for this type of computation, e.g. tensor cores or TPUs.
What I said is truthful, not "obtuse". Yes, most GPUs so far (with the exception of certain Nvidia cards) wouldn't fall into the category of "better at it."

You can have hardware for specific purposes; it doesn't have to be general. Isn't that simply what tensor cores provide: additional hardware that can handle the task? (Compare with performance on Nvidia cards that don't have that hardware.)

You're making a lot of wild assumptions about things I haven't alluded to. Let me put it this way: I haven't made a single post talking about faster low-precision integer performance. Let me put this in a way you can understand. I've seen nothing from Sony, or any actual proof, that PS5 is capable of handling ML in the same way Xbox will. We've not seen Sony talk about it at all. MS, on the other hand, have talked specifically about what the feature can bring (increasing the resolution of textures, and more specifically DLSS-like image reconstruction). MS is already using this hardware right now in live games. What do you think they used to add HDR to older SDR games? They used machine learning to teach it how to apply HDR. I don't see this feature on PS5 with backwards-compatibility support.

Your argument that "of course it supports it the same way as Xbox" just reminds me of all those other GPU features people "claimed" or assumed PS5 supported that MS claims are part of the FULL RDNA 2 support. In the face of all the evidence, what MS has demonstrated, and what is currently in use, there is nothing to suggest it's the same on PS5.

I have no idea where you are going with the rest of your argument, but I hope you eventually get there...
 
Last edited:

IntentionalPun

Ask me about my wife's perfect butthole
The whole "learning" is rendering the game on server farms with scripted gameplay actions, to learn how to predict the graphics to be rendered at runtime.

- Literally all ML utilizes a "server" to train against data (aka this is done ahead of time). That's how it works, always. What you are saying makes absolutely no sense; you clearly just don't "get" ML, as nobody who did would ever talk about it the way you are talking about it... it's like pointing out that breathing utilizes air.. thanks bro?
- And actually, DLSS 2.0 no longer uses the game in question for its training.. it doesn't even necessarily use imagery from games at all. But when it did.. OF COURSE that training was done ahead of time on powerful servers.. that's kind of the point of ML: leverage a shit ton of processing that happens ahead of time to then create something that can quickly make predictions against new data.
 
Last edited:

IntentionalPun

Ask me about my wife's perfect butthole
I'm not sure DLSS 2.0 even needs previous frame information at all. I don't think it takes into account temporal information at all (well, not directly😉).

As such, calling it image reconstruction may be a tad inaccurate.
I did not call DLSS image reconstruction. DLSS 2.0 is used to refine the reconstruction done using other techniques.

(I do apologize if I used the wrong terms, but I did not call DLSS 2.0 image reconstruction. I'm not some expert specifically on DLSS, but ethomaz is just sort of... wrong in how he's discussing ML in general.)
 
Last edited:

onesvenus

Member
Hey, what people do with their time and money is on them. I just want to know why they continue to support Microsoft when they do the absolute minimum to get people's money. I make my reasons known and never bite my tongue about how I feel about Microsoft, so I just want to know why people still support them after all these years of straight-up lying, and they know they are lying. Like, I don't get it.
Should we pretend games like Forza and Gears, to name two of their more iconic sagas, don't exist?
 
What I said is truthful, not "obtuse". Yes, most GPUs so far (with the exception of certain Nvidia cards) wouldn't fall into the category of "better at it."

You can have hardware for specific purposes; it doesn't have to be general. Isn't that simply what tensor cores provide: additional hardware that can handle the task faster than Nvidia cards that don't have that hardware?

You're making a lot of wild assumptions about things I haven't alluded to. Let me put it this way: I haven't made a single post talking about faster low-precision integer performance. Let me put this in a way you can understand. I've seen nothing from Sony, or any actual proof, that PS5 is capable of handling ML in the same way Xbox will. We've not seen Sony talk about it at all. MS, on the other hand, have talked specifically about what the feature can bring (increasing the resolution of textures, and more specifically DLSS-like image reconstruction). MS is already using this hardware right now in live games. What do you think they used to add HDR to older SDR games? They used machine learning to teach it how to apply HDR. I don't see this feature on PS5 with backwards-compatibility support.

Your argument that "of course it supports it the same way as Xbox" just reminds me of all those other GPU features people "claimed" or assumed PS5 supported that MS claims are part of the FULL RDNA 2 support. In the face of all the evidence, what MS has demonstrated, and what is currently in use, there is nothing to suggest it's the same on PS5.

I have no idea where you are going with the rest of your argument, but I hope you eventually get there...

Clearly you have no idea what my main arguments are, as you evidently have no idea what the supposed dedicated hardware for ML on XSX that MS has been shouting about actually is (protip: it's Rapid Packed Math for Integer OPs).

You also clearly have very little grasp of the subject matter of the discussion, as evidenced by the fact that you're simply plagiarising my own statements and trying to pass them off as your own insight.

The rest of your incoherent diatribe is just tangential rambling.

At this point I think we can both agree you're clueless and call it a day.
 
Last edited:
What I said is truthful, not "obtuse". Yes, most GPUs so far (with the exception of certain Nvidia cards) wouldn't fall into the category of "better at it."

You can have hardware for specific purposes; it doesn't have to be general. Isn't that simply what tensor cores provide: additional hardware that can handle the task? (Compare with performance on Nvidia cards that don't have that hardware.)

You're making a lot of wild assumptions about things I haven't alluded to. Let me put it this way: I haven't made a single post talking about faster low-precision integer performance. Let me put this in a way you can understand. I've seen nothing from Sony, or any actual proof, that PS5 is capable of handling ML in the same way Xbox will. We've not seen Sony talk about it at all. MS, on the other hand, have talked specifically about what the feature can bring (increasing the resolution of textures, and more specifically DLSS-like image reconstruction). MS is already using this hardware right now in live games. What do you think they used to add HDR to older SDR games? They used machine learning to teach it how to apply HDR. I don't see this feature on PS5 with backwards-compatibility support.

Your argument that "of course it supports it the same way as Xbox" just reminds me of all those other GPU features people "claimed" or assumed PS5 supported that MS claims are part of the FULL RDNA 2 support. In the face of all the evidence, what MS has demonstrated, and what is currently in use, there is nothing to suggest it's the same on PS5.

I have no idea where you are going with the rest of your argument, but I hope you eventually get there...
You wrote on the last page that "PS5 doesn't have ML". Who is making a lot of wild assumptions here?

Xbox Series X could have some DLSS-like solution, but like he said, there isn't sufficient horsepower to bear any comparison with NVIDIA's solutions. We may get better reconstruction in the coming years, but let's not dream.

About backwards compatibility, everybody knows that PS5 handles it at the hardware level, while Xbox Series X|S uses a higher abstraction level (thanks to DirectX). So it's obvious that MS can apply some filters or ML implementations like Auto HDR; Gen9aware is there for it.
 
Last edited:
I did not call DLSS image reconstruction. DLSS 2.0 is used to refine the reconstruction done using other techniques.
I get you. But while I'm sure it can be, and has been, used to refine an existing upscale done using more traditional temporal reconstruction techniques, it doesn't have to be.

I've been discussing purely the DLSS portion of the image upscale equation, so as to avoid confusion.

That said, I don't think there's an issue with ethomaz's understanding, only his use of the proper terminology, for which we have to be forgiving as, clearly like many others here, English isn't his first language (and I intend no slight by this).
 
Last edited:

onesvenus

Member
So that's your reason? Those 2 games, that's it? Well, if that's all it takes, then have fun, good sir. It's just not enough for me to forget all the lies and manipulation they do; I value my money, time and self-respect a lil more than that.
LOL
I suppose you don't support any big company then 😂
 

IntentionalPun

Ask me about my wife's perfect butthole
I get you. But while I'm sure it can be, and has been, used to refine an existing upscale done using more traditional temporal reconstruction techniques, it doesn't have to be.

Well, the question is: does it improve the performance-vs-quality balance compared to just using the non-ML techniques a lot of companies are using?

I haven't seen any great analysis of that. Are you suggesting nVidia could have an option that didn't use ML that would be identical in quality and performance?
 

oldergamer

Member
You wrote on the last page that "PS5 doesn't have ML". Who is making a lot of wild assumptions here?

Xbox Series X could have some DLSS-like solution, but like he said, there isn't sufficient horsepower to bear any comparison with NVIDIA's solutions. We may get better reconstruction in the coming years, but let's not dream.

About backwards compatibility, everybody knows that PS5 handles it at the hardware level, while Xbox Series X|S uses a higher abstraction level (thanks to DirectX). So it's obvious that MS can apply some filters or ML implementations like Auto HDR; Gen9aware is there for it.
I've seen no evidence of this on PS5. If it can do it, I'll grant that I'm wrong on that. Even still, my old GTX 770 could also do it; however, it's an order of magnitude slower than current Nvidia cards.

Xbox is already applying HDR to backwards-compatible games without impacting the framerate. I'd go out on a limb and say there is a bit more to it than the people playing it down suggest.

Again, I've seen no proof that PS5 has the exact same setup as Xbox when it comes to machine learning. They revealed certain information at Hot Chips.
 
Last edited:

oldergamer

Member
Clearly you have no idea what my main arguments are, as you evidently have no idea what the supposed dedicated hardware for ML on XSX that MS has been shouting about actually is (protip: it's Rapid Packed Math for Integer OPs).

You also clearly have very little grasp of the subject matter of the discussion, as evidenced by the fact that you're simply plagiarising my own statements and trying to pass them off as your own insight.

The rest of your incoherent diatribe is just tangential rambling.

At this point I think we can both agree you're clueless and call it a day.
Bullshit. Wow, all the insults. I'm gonna put you on ignore for a while. You need to cool your ass down.

When I said it wasn't supported, again, that was based on those Sony engineer Twitter conversations we saw posted months back. If it can, cool. I just haven't seen any evidence of it. Where is the proof?

You have no idea what my grasp of the matter is, as I'm not wasting my time arguing about hidden meaning in what was written.
 
Last edited:
Well, the question is: does it improve the performance-vs-quality balance compared to just using the non-ML techniques a lot of companies are using?

I haven't seen any great analysis of that. Are you suggesting nVidia could have an option that didn't use ML that would be identical in quality and performance?
No. Where do you see me suggesting that?

I'm simply adding clarification to your statement that "the DL model inferencing of DLSS 2.0 is used to refine a traditional temporal upscaled image": it doesn't have to be the case.

Game devs can simply take the raw rendered image and apply the DLSS 2.0 model to do the neural-network based upscale. They don't have to do the temporal reconstruction using past frames first and then apply DLSS 2.0 to the reconstructed image.
 

IntentionalPun

Ask me about my wife's perfect butthole
No. Where are you seeing me make suggestion of that?

Sorry, I posed that as a question because I was legit confused.

I'm simply adding clarification to your statement that "the DL model inferencing of DLSS 2.0 is used to refine a traditional temporal upscaled image": it doesn't have to be the case.

Game devs can simply take the raw rendered image and apply the DLSS 2.0 model to do the neural-network based upscale. They don't have to do the temporal reconstruction using past frames first and then apply DLSS 2.0 to the reconstructed image.

Devs have this option? I thought what I described is what DLSS 2.0 is.. devs have the option to use it differently?
 
Sorry, I posed that as a question because I was legit confused.



Devs have this option? I thought what I described is what DLSS 2.0 is.. devs have the option to use it differently?

As I was trying to allude to previously, I'm pretty sure an initial non-DL-based temporal image reconstruction using past image data isn't strictly part of DLSS.

Traditional temporal reconstruction techniques will produce a softer image, and applying a DL image upscale model to that will likely not produce the sharpest end result. As a general principle, you want the sharpest possible native image to feed into your DL model.
 