
Next-Gen PS5 & XSX |OT| Console tEch threaD

Well, it's obvious that the Tempest 3D is way, way stronger than AMD's TrueAudio. I'm not saying XSX will use 20% for audio; it'll actually use half-baked, last-gen audio tech. So yes, it'll be around 1% at best.

But if you wanna match what PS5 is going to produce, hundreds of detailed audio sources with all algorithms calculated, capable of going up to 5,000 sound sources, then you probably need much more than 4 CUs on XSX, which they won't do.




The audio quality is going to be massive.

Wait, so this is where you got the 20% claim? :pie_roffles: Don't take it the wrong way but, again, when someone from the XSX design team themselves refutes the claim, I think that puts such a claim, made in a speculative, opinionated article using incomplete info, into question.

So whether XSX's audio setup matches PS5's or not can't be definitively answered at this time. I'm willing to assume it won't. But the delta between them, even in the case where XSX needs to produce a 3D RT audio solution equivalent to PS5's, won't be eating around 20% of the GPU budget on the APU. I think that assumption can be peacefully laid to rest.
 

Bo_Hazem

Banned
Wait, so this is where you got the 20% claim? :pie_roffles: Don't take it the wrong way but, again, when someone from the XSX design team themselves refutes the claim, I think that puts such a claim, made in a speculative, opinionated article using incomplete info, into question.

So whether XSX's audio setup matches PS5's or not can't be definitively answered at this time. I'm willing to assume it won't. But the delta between them, even in the case where XSX needs to produce a 3D RT audio solution equivalent to PS5's, won't be eating around 20% of the GPU budget on the APU. I think that assumption can be peacefully laid to rest.

Yes, it'll not be 20%, because it will not match PS5 in audio quality. Not even in their dreams. If they attempt to, then they'd need to push up to 20 GB/s of high-quality audio data, as stated by Mark Cerny, into that 10 GB of RAM and use the GPU to compensate for that quality gap.

Could you link the source from those Xbox engineers? They probably need to improve that 4.6x speed over Xbox One's HDD before worrying about audio, I would suggest.
 
T

Three Jackdaws

Unconfirmed Member
Well, it's obvious that the Tempest 3D is way, way stronger than AMD's TrueAudio. I'm not saying XSX will use 20% for audio; it'll actually use half-baked, last-gen audio tech. So yes, it'll be around 1% at best.

But if you wanna match what PS5 is going to produce, hundreds of detailed audio sources with all algorithms calculated, capable of going up to 5,000 sound sources, then you probably need much more than 4 CUs on XSX, which they won't do.




The audio quality is going to be massive.
I get where you guys are coming from, but a few devs have already come out and said they don't actually use teraflops as a metric when designing games. I think we as fans have read too deeply into teraflops and the impact they'll have on gaming, largely thanks to marketing and ignorance. Hence saying the audio tasks consume 20% of the 12 teraflops, leaving behind only 9.6 teraflops, is just not accurate, especially from a game development standpoint.
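Quick back-of-the-envelope on that disputed figure, assuming the commonly cited round 12 TF number for XSX (illustrative only, not how devs actually budget anything):

```python
# Rough check of the "20% of the GPU for audio" claim (assumed round numbers).
xsx_tf = 12.0               # commonly cited peak FP32 teraflops for XSX
audio_share = 0.20          # the disputed "20% for audio" claim
print(f"{xsx_tf * (1 - audio_share):.1f} TF left for rendering")   # 9.6 TF
```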
 
Last edited by a moderator:

Bo_Hazem

Banned
I get where you guys are coming from, but a few devs have already come out and said they don't actually use teraflops as a metric when designing games. I think we as fans have read too deeply into teraflops and the impact they'll have on gaming, largely thanks to marketing and ignorance. Hence saying the audio tasks consume 20% of the 12 teraflops, leaving behind only 9.6 teraflops, is just not accurate, especially from a game development standpoint.

Not accurate, indeed. But those TFs come from CUs, don't they? So an old audio tech that needed 4 full CUs to process, what does that mean? PS3 sound quality was so superior to PS4/XB1; people are not taking notes here:

Anyone else notice that the PS4's overall sound quality seems to pale in comparison to the PS3? I'm not talking about the options and flexibility that the PS3 offers over the PS4, I'm talking about the overall quality of the output, particularly via LPCM over HDMI. The best demonstration of this is listening to tracks through Music Unlimited. On the PS3 with the HQ audio setting turned on, tracks sound pretty close to CD quality with plenty of dynamic range and fullness to the sound. But on PS4, the same track (I literally have done an A-B comparison) sounds compressed, thin, and low-res (yes, even with HQ mode on). My wife even noticed how low it sounded and how much of the range was missing. Whole frequency ranges, especially at the high end, are just missing on PS4. It's most noticeable with Music Unlimited since it's a lot easier to evaluate music, but it's not just an MU issue. Several cross-gen games I've tried sound "thin", low, and more compressed on PS4.

Anyone else notice this? Again, my AV pre/pro and general system configs between the two systems are identical. Is it just something Sony needs to address via firmware? Or could it be the difference between using the TrueAudio processing on the GPU on PS4 vs the Cell on the PS3?



Take it from Markitect:

PS5 lead system architect Mark Cerny says the new focus on audio in PS5 is about finding "new ways to expand and deepen gaming." Where the PS3 was "a beast when it came to audio," Cerny says "it's been tough going making forward progress on audio with PS4."


Other discussions:

 
Last edited:

DForce

NaughtyDog Defense Force
How does a .4 tflop difference result in a 900p to 1080p disparity but a 1.9 tflop difference will hardly be noticeable?
Round the numbers.

1 TF vs 2 TF: would PS4 get double the performance if those were the numbers? Now, with a .4 advantage, you can see how performance would scale.

On next-gen consoles, you have one similar to a 2070 Super and the other to a 2080 Super. Not exact performance levels, but close.

You'll get around an 11fps advantage with the RTX 2080 Super over the RTX 2070 Super. This will matter when hitting that 4K 30/60fps target, so a small reduction in pixel count will be required, and the RTX 2070 Super wouldn't need to drop significantly in comparison to the RTX 2080 Super.
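To illustrate the "small reduction in pixel count" point, here's the rough math, assuming performance scales roughly linearly with pixel count (it doesn't exactly, so treat this as a sketch):

```python
# Illustrative only: absorbing an ~18% compute gap with resolution instead of
# frame rate, assuming cost scales roughly linearly with pixel count.
compute_gap = 0.18                        # ~12.15 TF vs ~10.28 TF
w, h = 3840, 2160                         # native 4K target on the faster GPU
scale = (1 / (1 + compute_gap)) ** 0.5    # per-axis scale factor
print(f"Equivalent resolution: {round(w * scale)}x{round(h * scale)}")
# ~3535x1988 -- a modest drop, nothing like the 900p-vs-1080p gap of last gen
```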

 
Yes, it'll not be 20%, because it will not match PS5 in audio quality. Not even in their dreams. If they attempt to, then they'd need to push up to 20 GB/s of high-quality audio data, as stated by Mark Cerny, into that 10 GB of RAM and use the GPU to compensate for that quality gap.

Could you link the source from those Xbox engineers? They probably need to improve that 4.6x speed over Xbox One's HDD before worrying about audio, I would suggest.

Bo...Bo...Bo...BoBoBo. C'mon :pie_smirking:
Also wanted to make a BoBoBo Bo-Bo BoBo reference

Now, are you talking 20% GPU or 20 GB/s bandwidth? Because from what Cerny has said, TE also uses the main system bandwidth, and his 20 GB/s figure was in reference to devs using it unchecked. But as to the rest of what you mentioned? It's kind of hogwash.
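For context on that 20 GB/s figure, here's the rough math, assuming the reported 448 GB/s total PS5 memory bandwidth (and remembering the 20 GB/s was Cerny's worst-case, devs-running-it-unchecked number):

```python
# Putting the "up to 20 GB/s" Tempest figure in context (assumed/reported numbers).
tempest_bw = 20.0     # GB/s, worst-case audio traffic per Cerny's talk
system_bw = 448.0     # GB/s, reported total PS5 memory bandwidth
print(f"Audio share of system bandwidth: {tempest_bw / system_bw:.1%}")  # ~4.5%
```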

We don't know where these systems will actually fall in-practice when it comes to audio. If it mirrors what some are saying when it comes to the diminishing returns of differences in the GPU delta, then the same can be applied when it comes to whatever audio delta there will end up being, and that's assuming it's anything significant.

Which, for wanting the best quality of audio support between both platforms to encourage wider support by 3rd-party devs, I think many would want that gap to be negligible at best and without serious hardware performance penalties on either side. It all comes down to optimizations and it's very apparent that MS and Sony are both doing some big-league moves when it comes to customization and optimization.

The link in question I can't post here, but the guy in question (Bill something; he's on MS's VR/AR team now and was working on xCloud previously) had the figures mentioned/alluded to in a Dealer stream (and please, no one try turning this into an argument about character integrity or using that as a means of dismissal; I sat through the stream and Bill kept it 100% professional, and Dealer's ribbings were negligible at best. Plus I've seen them talk about PS positively, particularly the games, especially people like Dee Batch. FWIW I also watch some PS-focused streams too, like MarlonGaming), and he pretty much laughed it off as baseless speculation.

Like I said, we can pretty much put the 20% GPU-for-audio ridiculousness to rest. We should probably assume that the systems are within a margin of error (1% - 3%), which I would also assume would be the GPU overhead on XSX needed (if needed) to match the performance metrics of TE that Sony have divulged. This is assuming no further info on the XSX's audio setup comes out or is clarified, mind.

Really, it'd be a win-win for everyone who wants performance differences in key areas between the systems to be pretty much on-par for the big tasks that might define next-gen, at least for the things we do not have a lot of hard numbers and disclosure on, anyway. And as advanced as audio development and design can be, it's not in the same league as GPU design and tasks, so theoretically I don't see too much of a reason or expectation of a massive delta in audio performance unless one side got stupidly lazy for no good reason at all xD.
 
Last edited:

Bo_Hazem

Banned
Round the numbers.

1 TF vs 2 TF: would PS4 get double the performance if those were the numbers? Now, with a .4 advantage, you can see how performance would scale.

On next-gen consoles, you have one similar to a 2070 Super and the other to a 2080 Super. Not exact performance levels, but close.

You'll get around an 11fps advantage with the RTX 2080 Super over the RTX 2070 Super. This will matter when hitting that 4K 30/60fps target, so a small reduction in pixel count will be required, and the RTX 2070 Super wouldn't need to drop significantly in comparison to the RTX 2080 Super.



That's actually being very generous with XSX; it's an 18% difference, so it's 2080 vs 2080 Super, and that's only GPU power, not accounting for customizations and bottlenecks between very different consoles.

At best, it'll be 5 fps for XSX, or the other way around with a faster, smarter SSD and fewer assets rapidly offloaded outside the field of view.
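(For reference, the ~18% comes from the reported peak numbers only; rough math, ignoring clocks-vs-CU-count trade-offs and any console-specific customization:)

```python
# Where the ~18% figure comes from (reported peak TF numbers, nothing more).
xsx_tf, ps5_tf = 12.15, 10.28
print(f"XSX peak compute advantage: {xsx_tf / ps5_tf - 1:.1%}")   # ~18.2%
```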
 

Bo_Hazem

Banned
Bo...Bo...Bo...BoBoBo. C'mon :pie_smirking:
Also wanted to make a BoBoBo Bo-Bo BoBo reference

Now, are you talking 20% GPU or 20 GB/s bandwidth? Because from what Cerny has said, TE also uses the main system bandwidth, and his 20 GB/s figure was in reference to devs using it unchecked. But as to the rest of what you mentioned? It's kind of hogwash.

We don't know where these systems will actually fall in-practice when it comes to audio. If it mirrors what some are saying when it comes to the diminishing returns of differences in the GPU delta, then the same can be applied when it comes to whatever audio delta there will end up being, and that's assuming it's anything significant.

Which, for wanting the best quality of audio support between both platforms to encourage wider support by 3rd-party devs, I think many would want that gap to be negligible at best and without serious hardware performance penalties on either side. It all comes down to optimizations and it's very apparent that MS and Sony are both doing some big-league moves when it comes to customization and optimization.

The link in question I can't post here, but the guy in question (Bill something; he's on MS's VR/AR team now and was working on xCloud previously) had the figures mentioned/alluded to in a Dealer stream (and please, no one try turning this into an argument about character integrity or using that as a means of dismissal; I sat through the stream and Bill kept it 100% professional, and Dealer's ribbings were negligible at best. Plus I've seen them talk about PS positively, particularly the games, especially people like Dee Batch. FWIW I also watch some PS-focused streams too, like MarlonGaming), and he pretty much laughed it off as baseless speculation.

Like I said, we can pretty much put the 20% GPU-for-audio ridiculousness to rest. We should probably assume that the systems are within a margin of error (1% - 3%), which I would also assume would be the GPU overhead on XSX needed (if needed) to match the performance metrics of TE that Sony have divulged. This is assuming no further info on the XSX's audio setup comes out or is clarified, mind.

Really, it'd be a win-win for everyone who wants performance differences in key areas between the systems to be pretty much on-par for the big tasks that might define next-gen, at least for the things we do not have a lot of hard numbers and disclosure on, anyway. And as advanced as audio development and design can be, it's not in the same league as GPU design and tasks, so theoretically I don't see too much of a reason or expectation of a massive delta in audio performance unless one side got stupidly lazy for no good reason at all xD.

The sources are kinda "laughable", with all due respect to you. We will see them both in action, and DICE will take full advantage of the Tempest Engine, so expect some juicy comparisons. They can still enjoy weak, software-based 3D audio though.
 
Last edited:
T

Three Jackdaws

Unconfirmed Member
That's actually being very generous with XSX; it's an 18% difference, so it's 2080 vs 2080 Super, and that's only GPU power, not accounting for customizations and bottlenecks between very different consoles.

At best, it'll be 5 fps for XSX, or the other way around with a faster, smarter SSD and fewer assets rapidly offloaded outside the field of view.
Correct me if I'm wrong, but GeForce is a completely different architecture to AMD's RDNA 2, so it's not really a fair comparison; the comparison also fails to take into account the heavy customisations and other features both PS5 and Series X contain.
 

Bo_Hazem

Banned
Let's appreciate this 7-year-old TrueAudio tech; that thing gave me shivers when the sound went behind my head with the surround, using my cheap headphones :lollipop_tears_of_joy: You can easily spot it when it shifts to surround; that's what "quality for everyone" means:

 
Last edited:

SonGoku

Member
It is just that a custom command processor, as an actual hardware change, is really hard to believe
I remember something about XB1X having two command processors, not sure if hearsay or confirmed by a presentation
Well here's the thing: what makes you think features like cache scrubbers are only beneficial to the console side of things?
Two reasons
  1. Because Cerny used it as an example of a console-specific feature that isn't widely useful outside their specific design, pretty much implying it wouldn't be implemented globally as an RDNA2 feature.
  2. There are no other designs in the PC space that rely on streaming from storage at 8-9 GB/s directly into VRAM, so they likely don't have any need for it?
XSX might well have their own custom version of cache scrubbers if their velocity architecture benefits from it.
 
The sources are kinda "laughable", with all due respect to you. We will see them both in action, and DICE will take full advantage of the Tempest Engine, so expect some juicy comparisons. They can still enjoy 20-32 sound sources though.

Wait, so you think an Xbox engineer who worked on xCloud and now works on the VR/AR solutions team is "laughable"? And somehow Mark Cerny is not, with some of the claims he somewhat embellished during his presentation? :pie_thinking: ...

That's not even going into some of the misconceptions folks seem to have about the GPU differences in the two systems. Anyone simply viewing it in terms of extra resolution or pixels is looking at it wrong. GPGPU programming and compute are very powerful tools for next-gen, and having extra CUs to spare for the task while maintaining visual fidelity in key aspects relative to PS5 is a pretty interesting thing to look at when it comes to XSX, similar to the SSD advantages PS5 will most likely maintain regardless of optimizations MS makes to theirs (since the raw physical specs are just that different).

We'll see indeed but you might want to temper expectations on the audio delta. Just saying.
I remember something about XB1X having two command processors, not sure if hearsay or confirmed by a presentation

Two reasons
  1. Because Cerny used it as an example of a console-specific feature that isn't widely useful outside their specific design, pretty much implying it wouldn't be implemented globally as an RDNA2 feature.
  2. There are no other designs in the PC space that rely on streaming from storage at 8-9 GB/s directly into VRAM, so they likely don't have any need for it?
XSX might well have their own custom version of cache scrubbers if their velocity architecture benefits from it.

That's just it though; the very concept of cache scrubbers sounds pretty beneficial outside of a single system design. Maybe Sony has an implementation of it that is particularly useful for their system, and that would be something worth stressing, but I can't think of a GPU that wouldn't benefit from having a massively efficient way of clearing out its local caches to maintain data throughput for calculations.

Streaming would not be the only reason for the potential benefit of cache scrubbers, though it is one such reason. And with the expectation of SSDs becoming more commonplace in PCs going forward, it would seem like a forward-thinking feature to benefit other platforms (especially for the sake of 3rd parties), not simply Sony.

Though, again, they could have a specific implementation of it which is particularly beneficial to their system that you wouldn't see elsewhere. That much seems logical to assume imho.
 

Gamernyc78

Banned
The sources are kinda "laughable", with all due respect to you. We will see them both in action, and DICE will take full advantage of the Tempest Engine, so expect some juicy comparisons. They can still enjoy 20-32 sound sources though.

Bro, I remember playing Medal of Honor and damn, that audio was sooooo good. DICE are wizards with audio in their FPS military games, so I can't wait!!
 

sircaw

Banned
Sorry if this is a daft question, I am getting slightly confused by all the posts.

In regards to the PS5: the audio chip, the Tempest chip, does not burden the PS5 by using CPU or GPU resources?

I thought Microsoft had their own dedicated audio chip too, or am I mistaken?
Thank you for answering.
 

Bo_Hazem

Banned
Bro, I remember playing Medal of Honor and damn, that audio was sooooo good. DICE are wizards with audio in their FPS military games, so I can't wait!!

Indeed! Even with current-gen's weaker sound quality, BF4's far-off explosions were insane! I can say EA in general are at the top when it comes to audio, although most people hate them overall.
 

SonGoku

Member
thicc_girls_are_teh_best I won't pretend to understand or comprehend the reach of cache scrubbers beyond PS5's design needs.
All I have to go on is Cerny's statement:
Mark Cerny said:
If we bring concepts to AMD that are felt to be widely useful, then they can be adopted into RDNA2 and used broadly, including in PC GPUs
Mark Cerny said:
if the ideas are sufficiently specific to what we are trying to accomplish, like the GPU cache scrubbers, then they end up being just for us

Maybe they'll be incorporated in RDNA3?
 
Last edited:

Bo_Hazem

Banned
Sorry if this is a daft question, I am getting slightly confused by all the posts.

In regards to the PS5: the audio chip, the Tempest chip, does not burden the PS5 by using CPU or GPU resources?

I thought Microsoft had their own dedicated audio chip too, or am I mistaken?
Thank you for answering.

PS5's solution is GPU-based, with other customizations to it. It offloads both the GPU and CPU; I'm not sure if it needs the main GPU's RT for audio raytracing. It can even assist other algorithms in the game that are computationally expensive or need high bandwidth, if devs want, as Mark Cerny stated:

Timestamped:




XSX uses a chip that offloads only the CPU; the GPU still does its regular work. And it's an overwhelmingly weaker solution.
 
Last edited:
Not the 20% audio again. :messenger_tears_of_joy:

I can all but guarantee that MS would not need 11 CUs of GPU power to match the output of 1 altered CU on the PS5.
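For a sense of scale, here's the standard peak-FLOPS arithmetic with the reported clocks. It's purely illustrative: the Tempest unit is a reworked CU rather than a stock one, and peak TF says nothing about what audio workloads actually need.

```python
# Standard peak-FLOPS arithmetic (reported clocks; rough scale only).
def peak_tf(cus, clock_ghz, lanes=64, ops_per_clock=2):
    """Peak FP32 teraflops: CUs x 64 shader lanes x 2 ops (FMA) x clock."""
    return cus * lanes * ops_per_clock * clock_ghz / 1000

print(f"11 XSX CUs @ 1.825 GHz: {peak_tf(11, 1.825):.2f} TF")  # ~2.57 TF
print(f" 1 PS5 CU  @ 2.23 GHz:  {peak_tf(1, 2.23):.2f} TF")    # ~0.29 TF
```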

That is the best (worst?) part of it all; either Cell was just that efficient (if so, why not just use a Cell-based GPU like they intended for PS3?), or AMD fucked up making RDNA2 efficient in any real way (if so, why use AMD in the first place?). It's baseless speculation that's already been essentially debunked by people on the hardware team.

If we're willing to give Sony leniency on the inclusion of certain features even if they themselves have not come out and said anything regarding their inclusion directly, I don't see why there's a need to entertain this 20% idea when it has been essentially debunked by someone who's directly working within the Xbox division.

thicc_girls_are_teh_best I won't pretend to understand or comprehend the reach of cache scrubbers beyond PS5's design needs.
All I have to go on is Cerny's statement:



Maybe they'll be incorporated in RDNA3?

They could have been referring to their own implementation of cache scrubbers.

I stand by what I said to you in the other reply: what makes something like cache scrubbers, a feature that sounds immensely beneficial for GPU design in general, something only used by a single company for a single product, when you yourself already alluded that such a thing would probably be akin to a core feature and, as such, AMD would probably feature it as standard within the RDNA2 architecture spec?

There could be a certain implementation of the feature at the access/abstraction level exclusive to this vendor or that vendor; we'll need to see. But there's no reason to shut the door on the possibility it could be a ubiquitous feature in RDNA2, since, again, we don't know all of the features of RDNA2 as it is.

I know what Cerny said in the presentation, but GPU cache scrubbers as a feature isn't necessarily something guaranteed exclusive to them. Or to put it another way, there might be other means more general to RDNA2 baked into the silicon that achieve a similar function, and the GPU cache scrubbers Cerny mentioned are a specific alteration and implementation of such on their end.

EDIT: Actually I just remembered something in relation to all of this. MS made ECC (error-correction) modifications to the GDDR6 memory IIRC, as this was mentioned at some point (but I can't recall specifically where).

In concept cache scrubbing and ECC memory are analogous to one another, just at different hierarchies of the memory stack. Basically, Sony's is SRAM-based while MS's is DRAM-based, just going very basically off the info they've both provided, and making some assumptions.

So in essence both systems have a form of memory/data scrubbing but at different implementation levels.
 
Last edited:

ethomaz

Banned
Is geometry processor = geometry engine ?
From what I understand it is because AMD in the same presentation used both names referring to the same silicon part.

But I believe the Geometry Engine is a high level term that combines the Geometry Processors (GCN used to have one per Shader Engine), Primitive Units, and something else.

Anyway the Geometry Processor/Engine is very important to the AMD Arch.
 

Bo_Hazem

Banned
That is the best (worst?) part of it all; either Cell was just that efficient (if so, why not just use a Cell-based GPU like they intended for PS3?), or AMD fucked up making RDNA2 efficient in any real way (if so, why use AMD in the first place?). It's baseless speculation that's already been essentially debunked by people on the hardware team.

If we're willing to give Sony leniency on the inclusion of certain features even if they themselves have not come out and said anything regarding their inclusion directly, I don't see why there's a need to entertain this 20% idea when it has been essentially debunked by someone who's directly working within the Xbox division.

So you think AMD engineers don't know what they're talking about? With their graph about 3D audio 4* years ago?

[Image: AMD TrueAudio Next slide]


If anything, this confirms that XSX 3D audio is software-based only, and PS5's 3D audio is hardware+software-based.




And this is the new AMD TrueAudio Next (a 2016 revision), which supports only 32 sources. (Please, people, it's less than 2 minutes; listen carefully before laughing and throwing uneducated jokes into the debate to mislead readers. It's OK to be wrong, I've been wrong sometimes as well, it doesn't make you a bad person.)




“We finally won’t have to fight with programmers and artists for memory and CPU power,” says senior sound designer Daniele Galante.


EDIT: 4 years ago.
 
Last edited:
So you think AMD engineers don't know what they're talking about? With their graph about 3D audio 7 years ago?

[Image: AMD TrueAudio Next slide]


If anything, this confirms that XSX 3D audio is software-based only, and PS5's 3D audio is hardware+software-based.



And this is the new AMD TrueAudio Next: (Please, people, it's less than 2 minutes; listen carefully before laughing and throwing uneducated jokes into the debate to mislead readers. It's OK to be wrong, I've been wrong sometimes as well, it doesn't make you a bad person.)



...seven years ago. :pie_thinking:

Anyway, these companies can make alterations to these features, keep that in mind as well.

I'm not misleading anyone, and certainly not trying to. But when you have someone from a next-gen console dev team dismiss outrageous claims of 20% GPU resources dedicated to audio processing to match PS5's audio setup, that's probably worth taking into consideration.

That's what I've been mentioning more or less. Like I said, it's probably time to put that one to rest. Can we agree to that?
 
Last edited:
Sounds more like your own perspective looking at it this way. There's bad-faith speculation coming from both camps, but in certain places, or certain channels, you'll probably see it directed more at one side than the other.

Honestly as long as you know the truth of the matter or have good reason to speculate on what you THINK may be the truth of things, why let it upset you? If you get the chance to debate with someone to disprove their argument, then that's great. But if not, then you can just laugh it off as being unfounded speculation that will change with the winds.

Also I do want to point out that just because you have some people saying some of these things does not mean they are "for" another "team". Not all of us treat this like an election. For example, to answer the points you yourself bring up:

1: I don't think PS5 (devkits or retail) units are overheating. However, I do think the cooling solution is very intricate by design, and I do think they are still pushing the chip beyond the upper limit of the sweet spot on their given node process (7nm DUV enhanced), even with whatever cooling solution they have (taking form factor into account as well), which could be stressing the heat-to-performance balance they are looking to nail down completely, if they haven't done so already.

2: RDNA2 is a feature set that can be customized at the silicon level to the client's needs. If someone were to say that the PS5 doesn't have RDNA2 100% to the feature set PC GPUs will have, they aren't wrong. Because it won't. And neither will the XSX. Both are using customized RDNA2-based APUs, and the extent of those customizations is still not fully known, particularly regarding PS5.

3: This one is a bit silly to debate, because it's pretty much confirmed PS5 has hardware-based RT. It's a feature baked into RDNA2 at the silicon level; they would have to actively go through the hassle of removing it to not have it in there. While the systems will have hardware-based RT, that doesn't mean devs won't use complementary software-based RT techniques alongside the hardware-based one if they feel a need to.

4: PS5 will have a version of VRS, but it won't be called VRS, as that particular implementation and name is patented by Microsoft. Nothing prevents Sony from having their own abstraction and API stack for an implementation of their own version of VRS, however, and I'm certain they have just that.

On the question of VRS and PlayStation 5... I will say this.

Microsoft's VRS patent actually references Mark Cerny's 2015 patent on what would eventually be called Variable Rate Shading.

VRS as a technique was always gonna be in PS5. Sometimes if you look without bias you'll learn something.
 
Last edited:

SonGoku

Member
I stand by what I said to you in the other reply: what makes something like cache scrubbers, a feature that sounds immensely beneficial for GPU design in general, something only used by a single company for a single product, when you yourself already alluded that such a thing would probably be akin to a core feature and, as such, AMD would probably feature it as standard within the RDNA2 architecture spec?
Rewatching the cache scrubbers bit:
Coherency engines assist the two I/O co-processors. The coherency engine informs the GPU of overwritten address ranges before the cache scrubbers do pinpoint evictions.
Timestamped here


Cache scrubbers seem intimately integrated with PS5's custom I/O and SSD, which is likely why they were seen as not widely useful outside their design.
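To make the "pinpoint eviction" idea concrete, here's a toy sketch of the concept as Cerny described it. All names and sizes are made up; this is a model of the idea, not of the actual silicon: instead of flushing whole GPU caches when the I/O unit overwrites memory, only the lines overlapping the reported address range get dropped.

```python
LINE_SIZE = 64  # bytes per cache line (assumed typical size)

class ToyGpuCache:
    """Toy model only -- hypothetical names, not the real PS5 hardware."""
    def __init__(self):
        self.lines = {}   # line-aligned address -> cached bytes

    def scrub(self, start, length):
        """Pinpoint-evict only the lines overlapping [start, start+length)."""
        first = start // LINE_SIZE
        last = (start + length - 1) // LINE_SIZE
        for line in range(first, last + 1):
            self.lines.pop(line * LINE_SIZE, None)   # drop only stale lines

cache = ToyGpuCache()
cache.lines = {0: b"stale", 64: b"stale", 4096: b"hot"}
cache.scrub(start=0, length=128)   # coherency engine reported an overwrite of 0..127
print(cache.lines)                 # {4096: b'hot'} -- unrelated data stays cached
```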
So in essence both systems have a form of memory/data scrubbing but at different implementation levels.
So both are unique implementations not part of RDNA2 core features?
 
Last edited:
T

Three Jackdaws

Unconfirmed Member
you're really gonna post this fanboy. He's the same as timdog or dealer gaming...
He's got his faults, don't get me wrong, but what's funny is he's nowhere near as bad as some of the Xbox fans; he doesn't come up with fake rumours and leaks in an attempt to sabotage and create FUD among fans, which is just another level of sadness if you ask me. MBG has a strong bias towards PlayStation, sure, but he runs a PlayStation-centric channel, not so much different than several other console-centric YT channels.
 

Bo_Hazem

Banned
...seven years ago. :pie_thinking:

Anyway, these companies can make alterations to these features, keep that in mind as well.

I'm not misleading anyone, and certainly not trying to. But when you have someone from a next-gen console dev team dismiss outrageous claims of 20% GPU resources dedicated to audio processing to match PS5's audio setup, that's probably worth taking into consideration.

That's what I've been mentioning more or less. Like I said, it's probably time to put that one to rest. Can we agree to that?

I was generalizing, you know I like your avatars (the last one is meh though). :lollipop_tears_of_joy:

The AMD TrueAudio Next (the 2016 revision) video I've just posted explains that it's impossible to achieve 3D audio at the CPU level, which explains why XSX is going with a software solution rather than a dense, hardware-based solution. And that AMD TrueAudio Next (not the old one that takes a full 4 CUs) is only capable of 32 sound sources, while PS5's Tempest Engine is capable of delivering hundreds, up to 5,000, unique sound sources with overwhelming processing power, as Mark Cerny stated.
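Rough scaling intuition on why source count is a compute problem at all. The numbers below (sample rate, filter length) are assumed for illustration and say nothing about how either console's audio pipeline actually works:

```python
# Assumed, illustrative numbers only -- naive per-source HRTF convolution cost
# grows linearly with the number of sources.
SAMPLE_RATE = 48_000   # Hz (assumed)
HRTF_TAPS = 256        # filter taps per ear (assumed)
EARS = 2

def macs_per_second(sources):
    """Multiply-accumulates/sec for naive time-domain HRTF convolution."""
    return sources * SAMPLE_RATE * HRTF_TAPS * EARS

for n in (32, 500, 5000):
    print(f"{n:>5} sources: {macs_per_second(n) / 1e9:.1f} GMAC/s")
# 32 -> ~0.8, 500 -> ~12.3, 5000 -> ~122.9 (before occlusion, reverb, etc.)
```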
 
Last edited:

Lone Wolf

Member
He's got his faults, don't get me wrong, but what's funny is he's nowhere near as bad as some of the Xbox fans; he doesn't come up with fake rumours and leaks in an attempt to sabotage and create FUD among fans, which is just another level of sadness if you ask me. MBG has a strong bias towards PlayStation, sure, but he runs a PlayStation-centric channel, not so much different than several other console-centric YT channels.
He used to be an Xbox fanboy and switched sides. He was shitty then and he's shitty now.
 
On the question of VRS and PlayStation 5... I will say this.

Microsoft's VRS patent actually references Mark Cerny's 2015 patent on what would eventually be called Variable Rate Shading.

VRS as a technique was always gonna be in PS5. Sometimes if you look without bias you'll learn something.

Does it actually reference the patent (that would be grounds for litigation, would it not? Companies don't take kindly to other companies directly referencing their patented technology for development of their own stuff)? Or was it a case of two companies taking somewhat analogous approaches to a particular feature? That happens a good bit in the tech space, tbh.

Otherwise yes, VRS as a technique was pretty much a lock for PS5. It might fall under a different naming, but the basic idea and feature itself will be there.

Rewatching the cache scrubbers bit:
Coherency engines assist the two I/O co-processors. The coherency engine informs the GPU of overwritten address ranges before the cache scrubbers do pinpoint evictions.
Timestamped here


Cache scrubbers seem intimately integrated with PS5's custom I/O and SSD, which is likely why they were seen as not widely useful outside their design.


That particular setup and implementation does seem vital to their design process, I agree. The concept itself, however, isn't something I see being only beneficial for PS5, since again, it sounds like something very useful. Conceptually it's something I can see the XSX using as well; they might already be doing so with the ECC modifications to their GDDR6 memory modules.

The concept, of course, just in reference to correction of data through means of coherency. The implementations on both systems will likely vary greatly, but they seem to be taking some form of that concept to heart at the very least.

I was generalizing, you know I like your avatars (the last one is meh though). :lollipop_tears_of_joy:

The AMD TrueAudio Next (the 2016 revision) video I've just posted explains that it's impossible to achieve 3D audio at the CPU level, which explains why XSX is going with a software solution rather than a dense, hardware-based solution. And that AMD TrueAudio Next (not the old one that takes a full 4 CUs) is only capable of 32 sound sources, while PS5's Tempest Engine is capable of delivering hundreds, up to 5,000, unique sound sources with overwhelming processing power, as Mark Cerny stated.

That's fair enough (I'm gonna switch this avatar up soon btw ;) ).

The one thing I'm looking forward to seeing with PS5's implementation is whether all of the additional sources actually add any perceivable differences and advantages for the average person, or if it comes into diminishing-returns territory. Again, audiophiles can probably note the difference, especially if they have high-quality stereo equipment.

But will the average Joe or Jane really perceive the difference in practice? A lot of people still can't tell the difference between compressed audio and CD-quality lossless formats like FLAC and AIFF :S. We'll see how that shapes out in due time.
 

SonGoku

Member
The concept itself, however, isn't something I see being only beneficial for PS5, since again, it sounds like something very useful. Conceptually it's something I can see the XSX using as well; they might already be doing so with the ECC modifications to their GDDR6 memory modules.

The concept, of course, just in reference to correction of data through means of coherency. The implementations on both systems will likely vary greatly, but they seem to be taking some form of that concept to heart at the very least.
Fair enough; maybe it'll be used as inspiration for an RDNA3 feature. I think it's reasonable to assume, given Cerny's comments, that it isn't a core RDNA2 feature at least?

BTW, we need more precise language to make a distinction between hardware features that devs program against and see (RT, TSS, Mesh Shaders, VRS, etc.) and hardware features that are invisible to the developer and happen automatically, like cache scrubbers or the GCN->RDNA changes that made it more efficient.
 

M-V2

Member
He used to be an Xbox fanboy and switched sides. He was shitty then and he's shitty now.
I watch his videos; he isn't shitty at all. Each video has "rumor" in the title, so take it with a grain of salt, or he's reporting by reading from articles like anybody else on YouTube. Does he have biases? Yes, like other YouTubers; the difference is some show their biases (like MBG) and some don't.

Not trying to defend him but speaking the truth.
 
Last edited:

ethomaz

Banned
I mean, the idea of cache scrubbers also sounded kind of "believable" until it was actually mentioned by Cerny. I wouldn't put this type of stuff out of the realms of possibility until we get all the details on the APUs.

Another way to look at it is: if it's a potentially commonplace feature on RDNA2 systems, then by some logic so would something like cache scrubbers be, since that seems like something which would benefit the PC side of things, and Sony have said that some PS5 custom features may see implementation in PC GPU cards through successful initiatives between them and AMD; this was mentioned in the presentation.
But the cache scrubbers were added, no? I meant that neither Sony nor MS will probably change the core units of RDNA 2.
Adding more stuff is reasonable.

Of course I’m guessing.
 

Bo_Hazem

Banned
That's fair enough (I'm gonna switch this avatar up soon btw ;) ).

The one thing I'm looking forward to seeing with PS5's implementation is whether all of the additional sources actually add any perceivable differences and advantages for the average person, or if it comes into diminishing-returns territory. Again, audiophiles can probably note the difference, especially if they have high-quality stereo equipment.

But will the average Joe or Jane really perceive the difference in practice? A lot of people still can't tell the difference between compressed audio and CD-quality lossless formats like FLAC and AIFF :S. We'll see how that shapes out in due time.

I had a 2007 Infiniti M45 with a great Bose system. I was used to regular MP3 CDs, and the sound quality was great and amazing... until I put in an original CD...



The sound quality is MASSIVELY on another level, as those tend to have up to 1,411 kbps vs 120-320 kbps MP3s. You can hear every touch on the oud and other instruments.
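(For reference, that 1,411 kbps figure is just the uncompressed CD, i.e. Red Book, math:)

```python
# Where the 1,411 kbps figure comes from: plain uncompressed CD audio math.
sample_rate = 44_100   # Hz
bit_depth = 16         # bits per sample
channels = 2           # stereo
print(f"{sample_rate * bit_depth * channels / 1000:.0f} kbps")   # 1411 kbps
```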



It's just something you need to experience to comprehend, and that's not even a headset, only a car sound system (although a good Bose one).

Casuals will ride the bandwagon even if they use a cheap headset, as you can notice it even with crap headsets in YouTube 3D sound tests. Of course, pay more and you'll get more, or stick to PlayStation headsets.
 
Last edited:
Does it actually reference the patent (that would be grounds for litigation, would it not? Companies don't take kindly to other companies directly referencing their patented technology for development of their own stuff)? Or was it a case of two companies taking somewhat analogous approaches to a particular feature? That happens a good bit in the tech space, tbh.

With patents you must always reference prior art, or else your search was not exhaustive, and it has to be exhaustive.

MS' VRS patent 2018 - 10,147,227

Patent by a certain Mark Evan Cerny 2015

Now, Cerny's patent looks like it was specifically for VR headsets, but it is an application of the VRS technology, which makes sense. VRS enables the system to direct shading resources to areas of the screen where the gamer's eye is directed... a perfect application of VRS, really.
 

DaGwaphics

Member

"That’s unprecedented in the field of the console market. While up until this point, developers have had to use the same pool of resources and power while working with all aspects of a game, from programming to visuals to, of course, audio, having a dedicated audio chip means not just better audio in games – what with sound engineers having a chip all of their own to work with – but it also means that there’s extra space in the GPU for all the other stuff. "

Might want to read the article again. XSX will use the RT pipeline for raycasting audio; it doesn't sound like audio will be processed there outside of that. YOU DO NOT NEED A GPU for 3D audio, and never have. This is standard tech: 80% of motherboards on the market have a dedicated audio chip for sound, no GPU required, as do the majority of tech devices that handle 3D audio. GPU audio is a feature offered by most GPUs (to simplify audio pass-through), but not a requirement.
 
Last edited:

Bo_Hazem

Banned
"That’s unprecedented in the field of the console market. While up until this point, developers have had to use the same pool of resources and power while working with all aspects of a game, from programming to visuals to, of course, audio, having a dedicated audio chip means not just better audio in games – what with sound engineers having a chip all of their own to work with – but it also means that there’s extra space in the GPU for all the other stuff. "

Might want to read the article again. XSX will use the RT pipeline for raycasting audio; it doesn't sound like audio will be processed there outside of that. YOU DO NOT NEED A GPU for 3D audio, and never have. This is standard tech: 80% of motherboards on the market have a dedicated audio chip for sound, no GPU required, as do the majority of tech devices that handle 3D audio. GPU audio is a feature offered by most GPUs (to simplify audio pass-through), but not a requirement.

Yes, so they'll go with a software solution then, not true, deep hardware 3D audio, just like cellphones and other average devices. That would give more room on the GPU by utilizing the CPU, which would need to do extra work to preload assets.
 

Grodiak

Member
Casuals will ride the bandwagon even if they use a cheap headset, as you can notice it even with crap headsets in YouTube 3D sound tests. Of course, pay more and you'll get more, or stick to PlayStation headsets.

My honest recommendation is - don't train your ears. You will forever regret it :messenger_grinning_squinting: Just jump on the ride, grab the blue pill and enjoy :messenger_winking_tongue:
 

SonGoku

Member
Yes, so they'll go with a software solution then, not true, deep hardware 3D audio, just like cellphones and other average devices. That would give more room on the GPU by utilizing the CPU, which would need to do extra work to preload assets.
They have an audio chip though... we don't know how capable it is. Am I missing something?
 

Bo_Hazem

Banned
They have an audio chip though... we don't know how capable it is. Am I missing something?

Yes, as explained, Sony's solution is GPU-based and capable of handling hundreds of computationally expensive sound sources while freeing up the CPU/GPU. The XSX chip is a great solution to free up the CPU, but it has no GPU power to deliver true 3D audio. Compensating for that using the main GPU can be computationally expensive, as Mark Cerny stated. How much? We don't know; AMD TrueAudio Next needed 4 full CUs for that:

[Image: AMD TrueAudio Next slide]



As stated by AMD in the video, it is IMPOSSIBLE to do true 3D audio using ONLY the CPU, and that's for ONLY 32 sources, not hundreds or 5,000:

 
Last edited: