
Insomniac used machine learning for character deformation in real time

Heisenberg007

Gold Journalism
Is he like the other guy who just spread the PR without explaining to me why there are old non-RDNA2 parts in the Series X's GPU?


Share the sauce.
 

ethomaz

Banned
The Italian guy said the PS5 doesn't have dedicated hardware; the XSX doesn't have dedicated hardware either. Dedicated hardware = tensor cores, where the code is not executing on your shaders, unlike how XSX ML or this Insomniac tech works.
There are just different approaches to hardware ML.

AMD chose to add hardware support for INT4/INT8 and lower-precision FP to support ML.
It is hardware based.
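Since the thread keeps circling INT4/INT8 support: the reason those formats matter for ML inference is that trained FP32 weights can be quantized down to small integers and run through the faster integer paths. A minimal Python sketch of symmetric INT8 quantization; all values are made up for illustration, this is not from any console SDK:

```python
import numpy as np

# Symmetric INT8 quantization sketch: map FP32 weights to int8 plus a
# single scale factor. Real schemes (per-channel scales, zero points,
# calibration) are more involved; this just shows the core idea.
w = np.array([0.12, -0.5, 0.33, 0.07], dtype=np.float32)
scale = np.abs(w).max() / 127.0          # largest weight maps to +/-127
w_q = np.round(w / scale).astype(np.int8)
w_back = w_q.astype(np.float32) * scale  # dequantize for comparison
print(np.abs(w - w_back).max() < scale)  # True: error bounded by one step
```

The quantized weights are a quarter the size of the FP32 originals, which is where the memory and throughput wins come from.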
 

Thirty7ven

Banned
I like these early celebrations from the PlayStation community. Reminds me of the launch multiplatform games :messenger_tears_of_joy:

You know it's the real stuff when folks like Rearmed start posting petty shit while avoiding the subject altogether. Your follow-up post is just more of the same trash.

Instead of asking everybody to feel sorry for you, how about you tone down the console fanboy warrior shit?
 
Last edited:

Buggy Loop

Member
There are just different approaches to hardware ML.

AMD chose to add hardware support for INT4/INT8 and lower-precision FP to support ML.
It is hardware based.

I mean, by that definition, we have had ML-capable hardware since the first programmable computer; it's just a question of approach and precision 🤷‍♂️

When you have to turn off a unit's main function to crunch maths (CUs' main function being to output as much rasterization as possible), you could crunch the maths with FP16, inefficiently, or you could accelerate them with 8-bit & 4-bit integer support.

I would say it's accelerated, not really dedicated, as you sacrifice the main strength of the CUs, and you probably cannot realistically get competitive ML ops while keeping a game running (i.e. with a strong majority dedicated to rasterization), versus a competitor's dedicated silicon for those operations.

Now, Pascal also had support for 8-bit int in the pipeline, just so we have a reference point of comparison in technology implementation, in 2016.
 
A shame that Microsoft have all the talk but no action.
A shame that they still don't have a strong First Party team to deliver on the promises, at least not yet. Meanwhile Sony is talking less and doing more, they already have a strong team and we can see that they're working hard. This game seems to be turning into a testing ground for many new technologies that the PS5 enables, and all this work will be shared internally. Folks, prepare your hearts for the wonders that Sony's first party games will show us.
Microsoft's "strongest console" narrative is already shaken. Can their new first party games rise above what the ICE Team is capable of, and prove that they have the better machine?
 
Last edited:

Outrunner

Member
A shame that Microsoft have all the talk but no action.
A shame that they still don't have a strong First Party team to deliver on the promises, at least not yet. Meanwhile Sony is talking less and doing more, they already have a strong team and we can see that they're working hard. This game seems to be turning into a testing ground for many new technologies that the PS5 enables, and all this work will be shared internally. Folks, prepare your hearts for the wonders that Sony's first party games will show us.
Microsoft's "strongest console" narrative is already shaken. Can their new first party games rise above what the ICE Team is capable of, and prove that they have the better machine?

Hey don't talk shit, Microsoft has new controllers.
 

Azurro

Banned
The amount of resources would be trivial as the game still performs the same and there are no changes to res, framerate, etc

MS is probably over marketing their hardware features

Pretty much. The whole hardware fud strategy by MS fanboys is them hanging on to whatever buzzword Microsoft marketed that will enable the Xbox Series X to have double or triple the performance of the PS5. Basically the second GPU in Xbox One, but with extra steps.

The comparisons at this point are 95% of the time quite boring, pretty much visual and framerate parity with smaller file sizes and faster loading on PS5.
 

Hobbygaming

has been asked to post in 'Grounded' mode.
Pretty much. The whole hardware fud strategy by MS fanboys is them hanging on to whatever buzzword Microsoft marketed that will enable the Xbox Series X to have double or triple the performance of the PS5. Basically the second GPU in Xbox One, but with extra steps.

The comparisons at this point are 95% of the time quite boring, pretty much visual and framerate parity with smaller file sizes and faster loading on PS5.
I couldn't have said it any better! And when it's revealed that the PS5 has the same feature, they say stuff like "the feature is hardware accelerated on the XSX".
 
Last edited:

RockOn

Member
You know what you did in your first post, and I immediately took you back before the FUD starts. The PS5 has no support for INT4 and INT8, as Leonardi (a PS5 engineer) said. This does not mean that you cannot do ML; you can always do it, in a less performant way, and this fantastic tech in Spider-Man is the proof.
Mixed-precision integer support is a standard part of RDNA2 (including PS5).
ZHL5xGq.png
 

Zeroing

Banned
I mean we had characters with boobs that bounced for quite some time in video games!

It’s about time the male characters had realistic muscle deformation!
 

HAL-01

Member
We're still waiting for the power of the cloud to show up.
You can’t seem to tell the difference between marketing speak and real CS terms for tech used in real world applications

Go mock blast processing and the emotion engine, you’re out of your element here
 

MrSec84

Member
yes basically physics! you can't come up with special sauce without the special sauce hardware.

it is known MS waited till the end to equip SX with modern rdna2 accelerators
The Machine Learning "Accelerators", as you call them, are just ALUs altered to handle lower-precision tasks. As stated in the introductory RDNA whitepaper, a major portion of the Stream Processors run 16-bit and 32-bit tasks, while a small fraction of the total Stream Processors are modified by AMD to handle 4-bit and 8-bit tasks.

These 4-bit and 8-bit "slots" are where Machine Learning is accelerated on RDNA GPUs. There was no need for Microsoft to wait around for RDNA2 to do this; they just asked AMD to include more 4/8-bit-capable ALUs than is standard for AMD's own PC GPU designs.

Both Sony and Microsoft could have built RDNA1 consoles with every ALU capable of running reduced-precision OPs, making both as capable as possible of running Machine Learning on every possible portion of their GPUs. RDNA2 wasn't needed for this; that much is clear from a cursory look at the RDNA1 whitepaper.
 

vkbest

Member
There are just different approaches to hardware ML.

AMD chose to add hardware support for INT4/INT8 and lower-precision FP to support ML.
It is hardware based.
That is not dedicated hardware; it's accelerated hardware, or support, or whatever you want to call it. Dedicated hardware is something specialized for only one or a few concrete tasks.
 
I take them both, in one he says that it’s between RDNA1 and 2, and in the other tries to say that RDNA is just a “commercial term” after seeing the outrage.
What else is RDNA 2 then? It's not a technical one for sure.

The other nonsense argument was about Sony not using some MS AI programming API (like DX12, or some extension of it)... As always, these "arguments" based on one company not integrating another company's buzzwords for a given technology don't say much about whether the competition uses the underlying tech better or worse; they just didn't license the buzzwords the competition uses for it.
 

CamHostage

Member
...Hey boys, can you HotChips PP deck posters go find a thread of your own to argue about ML with each other somewhere else?

We don't need to ruin every post that touches on machine learning with a slew of "YES it does!! / NO it doesn't!!" posts about whether or not there is special sauce waiting to be uncorked on the consoles out on the market. It's hard enough to keep a thread from turning into a Console War salvo when it's one side's internal development team we're talking about (I'm guilty of falling into that at times,) but at least that has heat. Some good memes come from that, so that's fun. This unceasing battle, though, over what RDNA features are/aren't in PS5/XSX just because ML came up really needs to be confined to its own thread. If you're here to help people understand how ML works and how it makes the animation we're talking about here work, then great; if not, there are other places on the internet for you.

This is what this thread is about:

 
Last edited:

Dampf

Member
Machine learning will be a huge thing this generation, maybe even more so than RT.

Exciting times ahead.

Thankfully, Nvidia GPUs are already more than prepared for that future, starting with Turing. And the Xbox and PS5 consoles will do decently too, thanks to INT8/4 instruction support.
 
Last edited:

Dampf

Member

ML can be done on RDNA onwards, FP4, FP8 and some variations between 16 and above 32 can be done on a select number of RDNA ALUs.
This is official, from AMD themselves.
Sony confirmed PS5 uses RDNA2 as a base for its GPU architecture; it would make no sense to remove ML capabilities, and we now see Insomniac Games using them.

Clearly this supposed principal graphics engineer is telling porkies, likely didn't have anything to do with developing PS5's GPU or have anything to do with the APU design at all.
What is referred to in the RDNA1 whitepaper as INT has nothing to do with the instructions used for machine learning, but rather for texturing.

RDNA1.0 (5700(XT)) does not support INT8 and INT4 compute instructions. Only RDNA1.1 and onwards does (which the PS5 is based on at minimum)

Screenshot_2020-11-18-RDNA-2-questions-areejs12-hardwaretimes-com-Hardware-Times-Mail.png


Now, Pascal also had support for 8-bit int in the pipeline, just so we have a reference point of comparison in technology implementation, in 2016.

No, Pascal only supports down to FP16 precision. You're confusing it with texturing as well.
Only Turing and onward support INT8/4 instruction set for compute.

Edit: seems like I was wrong, Pascal does support INT8. But this likely won't be much use in gaming, as Pascal has issues running INT and FP concurrently.
 
Last edited:

MrSec84

Member
What is referred to in the RDNA1 whitepaper as INT has nothing to do with the instructions used for machine learning, but rather for texturing.

RDNA1.0 (5700(XT)) does not support INT8 and INT4 compute instructions. Only RDNA1.1 and onwards does (which the PS5 is based on at minimum)

Screenshot_2020-11-18-RDNA-2-questions-areejs12-hardwaretimes-com-Hardware-Times-Mail.png




No, Pascal only supports down to FP16 precision. You're confusing it with texturing as well.
Only Turing and onward support INT8/4 instruction set for compute.

Edit: seems like I was wrong, Pascal does support INT8. But this likely won't be much use in gaming, as Pascal has issues running INT and FP concurrently.

From the AMD RDNA Whitepaper and I quote:

"Some variants of the dual compute unit expose additional mixed-precision dot-product modes in the ALUs, primarily for accelerating machine learning inference. A mixed-precision FMA dot2 will compute two half-precision multiplications and then add the results to a single-precision accumulator. For even greater throughput, some ALUs will support 8-bit integer dot4 operations and 4-bit dot8 operations, all of which use 32-bit accumulators to avoid any overflows."

So ML can be done using half precision, and the 8-bit and 4-bit support accelerates ML compute further on the GPU.
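For anyone wondering what that whitepaper passage means in practice, here's a toy Python model of the dot4 operation it describes: four 8-bit integer multiplies summed into a 32-bit accumulator so nothing overflows. The function name and values are mine, purely for illustration of the arithmetic, not AMD's API:

```python
import numpy as np

def dot4_i8_i32(a, b, acc=0):
    """Multiply four int8 pairs and add the results to a 32-bit accumulator."""
    a = np.asarray(a, dtype=np.int8)
    b = np.asarray(b, dtype=np.int8)
    # Widen to int32 *before* multiplying, as a 32-bit accumulator path
    # does; multiplying in int8 would wrap around.
    return int(acc) + int(np.dot(a.astype(np.int32), b.astype(np.int32)))

# Worst case: 4 * (-128 * -128) = 65536, which fits easily in 32 bits
# but would overflow an 8-bit or 16-bit accumulator.
print(dot4_i8_i32([-128] * 4, [-128] * 4))  # 65536
```

Inference inner loops are essentially long chains of these dot products, which is why packing four 8-bit (or eight 4-bit) multiplies into one ALU op raises throughput.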
 

CamHostage

Member
This thread is about machine learning in games. And INT instructions are what enables that at a much more performant level.

I think this is the right thread.

It isn't.
Dampf Dampf and M MrSec84 , you are both welcome to start that thread, if INT instructions and RDNA implementation are what you want to talk about. The hardware aspect of ML seems to interest you both, go ahead and form a discussion around it.

This is what this thread is about:

Source: https://support.insomniac.games/hc/...6532-Version-1-09-PS4-1-009-PS5-Release-Notes
Source: https://docs.zivadynamics.com/zivart/introduction.html

About ZivaRT: ZivaRT (Ziva Real Time) is a machine learning based technology that allows a user to get nearly film-quality shape deformation results in real time. In particular, the software takes in a set of representative high-quality mesh shapes and poses, and trains a machine-learning model to learn how to deform and skin the mesh. ZivaRT allows for near film quality characters to be deployed in applications in real time.

Source: https://www.vg247.com/2021/03/30/spider-man-miles-morales-new-suit-muscles/

kEJCue.jpg


New-Spider-Man-Mile-Morales-Update-Adds-Sleek-Advanced-Tech-Suit-2-scaled.jpg


According to Lead Character Technical Director Josh DiCarlo, it is something really exciting for those who love seeing realism brought to their characters.

Miles would be simulated completely from the inside-out, using techniques that were previously only possible in film. This will make the character less of a mannequin, and more akin to the actual muscle and structure you see in a real person.

So every deformation on the costumes is the actual result of muscle and cloth simulation.
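To make the "train on high-quality shapes offline, deform cheaply at runtime" idea from the ZivaRT description concrete, here's a toy NumPy sketch. A plain linear least-squares fit stands in for whatever model ZivaRT actually trains; all names and dimensions are invented for illustration:

```python
import numpy as np

# Toy version of learned deformation: fit a mapping from skeleton pose to
# per-vertex positions using example shapes, then evaluate it per frame.
rng = np.random.default_rng(0)
n_poses, pose_dim, n_verts = 200, 30, 500

# "Training set": representative poses and their high-quality mesh shapes
# (authored or simulated offline), flattened to (x, y, z) per vertex.
poses = rng.normal(size=(n_poses, pose_dim))
true_W = rng.normal(size=(pose_dim, n_verts * 3))
shapes = poses @ true_W + 0.01 * rng.normal(size=(n_poses, n_verts * 3))

# Offline "training": here a single least-squares solve.
W, *_ = np.linalg.lstsq(poses, shapes, rcond=None)

# Runtime: deforming the mesh for a new pose is one cheap matmul,
# which is why this style of model can run per-frame in a game.
new_pose = rng.normal(size=pose_dim)
deformed = (new_pose @ W).reshape(n_verts, 3)
print(deformed.shape)  # (500, 3)
```

The expensive part (gathering shapes, fitting the model) happens once, offline; the per-frame cost is just evaluating the learned function.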



 
Last edited:

Shai-Tan

Banned
There seems to be a lot of confusion in here about the online and offline parts of ML. Part of the point is to reduce computational complexity by making the online part less complex. There are a lot of examples of that on the YouTube channel "Two Minute Papers", where what would require expensive physics in real time is reduced via "style transfer" of the more complex offline simulation (the language used in that tech document is "morph" and "corrective blend"). Games are just a special case of this because they require actual real time.
 

CamHostage

Member
There seems to be a lot of confusion in here about the online and offline parts of ML. Part of the point is to reduce computational complexity by making the online part less complex ... what would require expensive physics in real time is reduced via "style transfer" of the more complex offline simulation (the language used in that tech document is "morph" and "corrective blend"). Games are just a special case of this because they require actual real time.

Right, it seems like much of this technology was baked in the Ziva VFX lab before implementation in Spider-Man. Which is great -- results look super smooth, the performance hit seems negligible to nothing, win/win, and it can be carried over into other games. The excitement here isn't the ML (or not) of it all, it's the end result that hey, games can do this now!

This video is from the provider of Insomniac's muscle tech, Ziva Dynamics (though I'm not sure this is the exact method implemented?), might show a bit more of what they're working with:




I wish I never saw this thread. LOL!

Now I am always noticing how odd the main (male) characters body looks at times in Outriders:
-Sometimes he looks like he has manboobs
-Sometimes he looks like he has no collarbone and his shoulder is sticking out further his chest while standing still
-etc.

Heh, if this catches on and everybody gets used to seeing muscle modeling, I wonder if we'll see a deluge of patches in current games to catch all these characters up...

It will also be interesting when we get characters with non-human proportions (though animals can also use muscle deformation) and animation-styled characters, once players have gotten used to seeing model accuracy down to the muscles. Would a developer build a "muscle model" to create a new Rayman or Mario? Would a Sonic run cycle be better if we could see his neck deforming "correctly" as his arms swing in a run?

mario1.jpg
 
Last edited:

CamHostage

Member
Diving more into Ziva Dynamics, they seem to be active in supporting Sony* for multiple PS5 game projects, beyond Spider-Man...



Ziva is a VFX studio that has had its technology used in GoT and movies like The Meg, Jumanji, Captain Marvel, and Venom. Now, it is exploring three areas where its ZivaRT (Ziva Real-Time) tech will bring aspects of the Ziva VFX technology to realtime game applications: ZivaRT Bodies, ZivaRT Faces, and ZivaRT Clothing.

There's no demo up yet for Clothing, and we've already seen some of what they're doing with Bodies, but you can see their Faces technology a bit. It... didn't blow me away in the demo footage, but check it out. Maybe we'll get a better look at all the ZivaRT features now that Spider-Man has put it in the spotlight.


(*Hopefully mentioning Sony's relationship here doesn't drag us too far into Console Wars territory; Microsoft is also looking at stuff like muscle deformation and body animation simulation, just like everybody else is, as next-gen projects push boundaries.)

Where does this come from? Those FB posts talk about being working with multiple studios on PS5 games. Why do you assume it's a Sony collaboration?
Well, I guess I made a bit of an extrapolation. They do say "busy partnering with AAA studios to bring #ZivaRT bodies and suits to the leading PS5 titles", not just leading console titles, so it's not just Spider-Man and it's only PS5 titles (at least, that's all they're announcing so far), and Insomniac is their debut partner, so that is already Sony. I thought maybe they might only be experimenting with Insomniac, as in Spider-Man and Ratchet use this technology so far, but they say "studios", plural... Also, Sony Pictures is a consistent client of FX houses using Ziva VFX (the non-realtime predecessor to ZivaRT); the two Sony arms are not the same company per se, but there'd be ways of getting an introduction through that corporate relationship. So then, what else makes sense? It'd be weird for them to have multiple titles already in partnerships that all happen to be made only for PS5, yet none of the rest happen to be from Sony itself. (If their other lead partners were Arkane with Deathloop and Tango with Ghostwire, they'd kind of give their partners' new owners some leeway and not mention platform, right?)

If it bugs you, I can take out the mention. I only made the reference because, if you own a PS5, their announcement is a pretty clear lure aimed at those owners specifically to take note of ZivaRT for future games on that console, so you may want to follow them. If you own a different box, there's not yet a title using this tech scheduled for you, but hopefully stay tuned. (And either way, there are plenty of other VFX houses working on bridging the RT gap like this one is, so learn what's up here and look forward to similar announcements in titles you may be planning on purchasing.)
 
Last edited:

mckmas8808

Mckmaster uses MasterCard to buy Slave drives
Diving more into Ziva Dynamics, they seem to be in good with Sony* for multiple PS5 game projects, beyond Spider-Man...



Ziva is a VFX studio that has had its technology used in GoT and movies like The Meg, Jumanji, Captain Marvel, and Venom. Now, it is exploring three areas where its ZivaRT (Ziva Real-Time) tech will bring aspects of the Ziva VFX technology to realtime game applications: ZivaRT Bodies, ZivaRT Faces, and ZivaRT Clothing.

There's no demo up yet for Clothing, and we've already seen some of what they're doing with Bodies, but you can see their Faces technology a bit. It... didn't blow me away in the demo footage, but check it out. Maybe we'll get a better look at all the ZivaRT features now that Spider-Man has put it in the spotlight.


(*Hopefully mentioning Sony's relationship here doesn't drag us too far into Console Wars territory; Microsoft is also looking at stuff like muscle deformation and body animation simulation, just like everybody else is, as next-gen projects push boundaries.)


That's actually REALLY REALLY good though. Just needs a touch of hand animation added onto it.
 

Azurro

Banned
I'm a little bit out of my element here, but I'd like to know if it's possible to properly simulate the movement of your character's equipment using ML. For example, take the average game with a Space Marine and his big armor: the problem is that when you see the character moving, you don't get the sense that it's actually armour the character is wearing. It moves along with the arm all the time, as if it were grafted to it, and bends in all sorts of unrealistic ways. I'd like ML to be used to simulate how a breastplate, gauntlets and such would actually move; maybe it could even solve the clipping problem.
 

CamHostage

Member
I'm a little bit out of my element here, but I'd like to know if it's possible to properly simulate the movement of your character's equipment using ML. For example, take the average game with a Space Marine and his big armor: the problem is that when you see the character moving, you don't get the sense that it's actually armour the character is wearing. It moves along with the arm all the time, as if it were grafted to it, and bends in all sorts of unrealistic ways. I'd like ML to be used to simulate how a breastplate, gauntlets and such would actually move; maybe it could even solve the clipping problem.

Hmm... I'm not technical in that way (not a developer, just a fan and a bit of a student of interesting technical developments). I would say ML could probably help (it seems to help everything), but maybe not in the ways you're thinking, because you've pointed out the problems of this animation and I think the reasons for the problems may be counter-intuitive...

(*Apologies if this all starts too basic, but I'm building towards something, and hopefully this helps everybody, whatever their understanding is...)


tenor.gif


So, here's Cloud Strife, and here he is in 2D. He of course has a big-ass, wonderfully stupid sword. His sword looks heavy, so how heavy do you think it is? The answer is "zero grams", because it's a videogame and these are just pixels, so it doesn't matter how much the sword weighs. They just draw him once, and it doesn't matter if he's twirling a tiny dagger, a Buster Sword, or a sword the size of the Empire State Building; it'd still be the same four frames to spin it 360 degrees over his head. They probably spent as much time animating his hair as they did his fingers spinning the blade.

d9w28pb-a747f433-c6dc-42da-b169-aaf71695b077.gif


Now, here Cloud is 3D, but a relatively simple 3D. His animation almost doesn't change whether he's holding a sword or not; it's the same striking motion. (Again, the hair probably got an undue amount of love from the animators.) It is, however, a fully-built 3D model with some detailed physicality modeled into the attack motion; there's realism thought into it, even if it's a canned motion. He shifts his weight onto his front leg, he uses his back foot to lunge and slide as he moves from a strike to a stance, his non-active arm counter-balances his body. There's lots of motion here, with lots of bangles and bits of armor on his body, and all of that needs to be animated in this attack motion (which will then be repeated over and over and over again in the game, so it had better look good!) so that it looks good in motion and hopefully-but-not-necessarily correct under closer inspection. We can add some stretch or adjustment to vary it a bit (maybe if he's on a staircase, his knees bend to keep his feet on the stair as best as possible), but it's probably not going to look perfect unless we make a separate "Staircase Attack" animation. And if we wanted the armband or his shoulder pad to slide a bit, we could animate that by hand (that would take extra time, but it would look good if we put the time in to do it right, although then it would always look the same, which takes away from the "physicality" of physics), or we could assign them physics systems. But then we'd have to be real careful (and would probably see some clipping or other failures), because those moving pieces won't be in the expected place when the next animation starts.

BQ94.gif


Now this Cloud is really complex. His hair moves in the wind and as he falls, his pants don't stay inflated like two balloons strapped to his legs, and by god, is that sword heavy-looking!! Maybe there's some animator pulling strings on all the little bits of tassel and buckles and muscles to make them move right, because applying physics to everything is A) hard and B) maybe not the best thing for a videogame (where you may want reliable and unrealistic motion so the game plays fast), but you also really want the computer to help do some of the detail work where there are untold numbers of details to simulate. Luckily, this is a rendered cinematic, so he doesn't have to be done in realtime. We can give the Buckle Program enough time to determine how the buckles bounce and sway, and we can have the Hair Manager do its work in a pass after the Body Mover does its thing.

596998ddcce31012f310eb9cd18377ab.gif


And finally, here's our Cloud today in a videogame, and he's got it so danged hard! He's got this 'heavy'-assed sword (well, it still weighs the same 0 as the pixel Buster sword, but the game is telling him it's supposed to be heavy,) he's got armor and hair and muscles and all these moving parts, he has to accomplish his animations without knowing where in the environment he is (whoops, he can't unsheathe his sword because the damned ceiling is too low to clear the Buster over his shoulder,) he's supposed to still look cool at any given second because videogame characters are supposed to always look amazingly cool, and he's got to do all this while you watch him in realtime.

Cloud has it tough today, but he's also got more power than ever to help him. There are computer routines to simulate the physics of his moving parts, there are awareness systems to help tell him that he needs to adjust his animation to account for variables in an environment (like ceiling columns,) and there are animation adjustment systems that can take a sword-unsheathing animation and tweak it enough that the sword clears the column. Some of those systems may be realtime AI, some might be run by AI in the lab and then baked into the character system once the machine-learned behaviors are optimized, and some things are still designed and modified by creators and just sort of fudged where they can't be perfected.

In the past, we had a lot of Cloud #2, where he just moved how he moved and the rest of the systems had to put up with it. Sometimes we would put something physics-based into the animation (most often hair or armbands, sometimes a holstered gun or a shoulderpad or something,) and we would put some weights and resistances and limits to those things based on real-world equivalents and that'd be doable for a little bit, but we've got to be real careful with that because the character is what matters above all else in gameplay and so if a character needs to jump, he will go through his jump motion, even if his slung rifle clips through his leg or he's standing on his cape. Our current Cloud can learn to do some things to avoid those problems (maybe he can take a half-step off his cape before jumping, or maybe he can throw his shoulders to swing the rifle a bit so he's clear to jump,) but he's still got to be responsive and fun to play, because it doesn't matter how good Cloud looks in motion if it sucks to play as him.

Unpredictability used to be an enemy to character animation in gameplay design (you literally design attacks and enemy tactics around X frames of motion and Y pixels of length on the extended hitbox,) but now we are teaching our computers to predict unpredictability.

And so, to your question: new games are handling unpredictable elements like physics with both what they've learned and what tools they've been given. Where will Cloud's shoulders be if his muscles do this, and this, and this? If we know where his shoulders will be, we can know where his shoulderpads will probably be if we assign physics to them. We can tell if the shoulderpads need to be redesigned because they look dumb sometimes, or hinder a motion, or whatever. We can also give him a shrugging motion smoothly integrated in the middle of other animations if we need a reset-type cheat to put the shoulderpad back at position 1. They may still clip sometimes - because gameplay comes first - but they may also avoid clipping in most cases because the game engine and even the character are aware that there are shoulderpads there. In the past, we couldn't trust the shoulderpads to be like shoulderpads, and our characters couldn't do anything about it if the shoulderpads started being unshoulderpad-like.

We'll ultimately still get plenty of Cloud #2s or Master Chief #2s or Samus #2s, with armor stapled onto their bodies, because it may look off-model if you look real close but it'll still play right, and it's only worth so much money and debugging to add in tons more variables versus just paying an animator to get in enough of the picky bits only real sharp eyes will notice. But it's becoming more doable in realtime for even player-controlled characters to bear the complexity (including more physics-active elements) of robust animation.

tenor.gif
 
Last edited:
S

Shodan09

Unconfirmed Member
It's a shame they couldn't have used it to make the gameplay less repetitive. I enjoyed the story but Christ, grinding for the platinum is boring as sin.
 

onesvenus

Member
If it bugs you, I can take out the mention. I only made the reference because, if you own a PS5, their announcement is a pretty clear lure aimed at those owners specifically to take note of ZivaRT for future games on that console, so you may want to follow them. If you own a different box, there's not yet a title using this tech scheduled for you, but hopefully stay tuned. (And either way, there are plenty of other VFX houses working on bridging the RT gap like this one is, so learn what's up here and look forward to similar announcements in titles you may be planning on purchasing.)
It doesn't bother me at all. I always find it amusing how everything is supposed to be a collaboration when Sony is involved.
Maybe they only have a PS5 SDK for now, and that's why they are only talking about PS5.
In any case, it's good tech; let's see if we see it in a lot more games.
 

YOU PC BRO?!

Gold Member
Geez, this thread. Of course the PS5 can perform ML; anyone saying it can't is a fool. The question we have is about performance: unlike on the Series X, is Sony forced to cram INT8/INT4 calculations through the FP16 pipe?

"We knew that many inference algorithms need only 8-bit and 4-bit integer positions for weights and the math operations involving those weights comprise the bulk of the performance overhead for those algorithms," says Andrew Goossen. "So we added special hardware support for this specific scenario. The result is that Series X offers 49 TOPS for 8-bit integer operations and 97 TOPS for 4-bit integer operations. Note that the weights are integers, so those are TOPS and not TFLOPs. The net result is that Series X offers unparalleled intelligence for machine learning."
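For what it's worth, the quoted TOPS numbers line up with simple rate-doubling arithmetic. Assuming the commonly cited 12.15 TFLOPS FP32 figure for Series X and that throughput doubles at each halving of precision (both assumptions on my part, not stated in the quote):

```python
# Back-of-the-envelope check of the TOPS figures in the Goossen quote.
# Assumptions: 12.15 TFLOPS FP32 for Series X, and throughput doubling
# at each halving of precision (FP32 -> FP16 -> INT8 -> INT4).
fp32_tflops = 12.15
fp16_tflops = fp32_tflops * 2   # 24.3 (rate-doubled FP16)
int8_tops = fp16_tflops * 2     # 48.6, quoted as "49 TOPS"
int4_tops = int8_tops * 2       # 97.2, quoted as "97 TOPS"
print(round(int8_tops), round(int4_tops))  # 49 97
```

In other words, the "49/97 TOPS" figures describe the same silicon running at lower precision, not a separate ML engine.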
 

Audiophile

Member
Would be very surprised if PS5 didn't have extended ML capabilities by way of INT4/8 (as opposed to just FP). It's a standard feature of RDNA2 and Sony would have had to decline the feature for their silicon, which would be utter madness given the direction things are heading.

I think MS are just referring to them as accelerators for both clout and simplicity (they do, after all, accelerate such workloads considerably relative to FP, even if it is an extension of general compute vs the even faster, dedicated tensor approach). As for Sony, they're probably just being annoyingly silent about this stuff, as is their way...

On a slight tangent, would love to see a hybrid temporal + AI approach to AA/Reconstruction..
 
Last edited: