
Next-Gen PS5 & XSX |OT| Console tEch threaD


pasterpl

Member
Like others know better about massive, unplanned AAA games than Matt Booty, head of Xbox Studios.

Matt Booty literally said that they will release lots of AA games between AAA titles. Not sure how much clearer this can be. I know the SDF took part of the quote out of context and ran with it, spreading FUD across the internet, but you are wrong. Look at the job postings on some of these XGS websites; they are mostly looking for AAA-experienced workers.
 

Bo_Hazem

Banned
This is the same kind of speculation that gave us RDNA1 and "NO HARDWARE RAYTRACING IN PS5" based on the GitHub leak.

PS5 has dedicated audio hardware and can stream from SSD very fast.

Xbox Series X has dedicated audio hardware and can stream from SSD very fast.

It is getting sad with you! Seriously.

PS5: We have 3D Audio (hardware, GPU-based, hundreds of sources calculated) - Based on what Cerny said, they are still working on making this work, so Sony's solution is still "half-baked".

XSX: We have 3D Audio (software, half-baked) - Wrong. You are knowingly spreading lies. Explain why the XSX's 3D audio would be "half-baked".




PS5: We can stream directly from SSD.
XSX: We can stream directly from SSD (well, 11-second loading screen) - WTF? What 11 seconds? Are you referring to the State of Decay loading demo? That demo has nothing to do with streaming.

You deserve this to be your avatar. You are spreading FUD and lies, and I just don't understand why. Trolling?


Yes and no. 3D audio can't be calculated on the CPU alone; you either reserve CUs from the GPU or settle for mediocre 3D audio. As AMD says:




And no, the XSX demo shows only a 4.6x speedup over the Xbox One HDD. That's way too slow for ideal streaming directly from the SSD without preloading into the CPU/GPU. The margin is massive.
 

SgtCaffran

Member
Bo_Hazem pasterpl

Please be careful with all the misinformation! I will copy one of my earlier posts here to explain XsX vs PS5 audio (at least the parts mentioned in public).

Both PS5 and XsX have an audio chip but both appear to be for very different purposes. Now remember we do not have the full story for both!

The PS5 chip, called the Tempest Engine, is a modified RDNA2 (I assume) CU that works similarly to a PS3 SPU. Cerny says it is as powerful as all eight Jaguar cores of the PS4 CPU combined. The goal is to process hundreds of audio sources with HRTF (head-related transfer functions). This is a method of simulating how our ears change incoming sounds so that we can pinpoint the location of their source. It works best with simple stereo headphones, where the Tempest Engine will convert mono audio from all directions and locations into a combined stereo mix in which you should be able to hear the direction and feel the presence (3D audio or binaural audio). However, this does not have anything to do (at least from what has been shared so far) with sound reflections and reverb in rooms.

The Xbox solution already exists and is called Project Acoustics. The XsX appears to have a new hardware block meant to work on this solution. The goal of Project Acoustics is to calculate the reflections, absorptions and wave interactions of sound waves in a particular room by means of voxels and the Azure cloud compute servers. These calculations are then simplified using probes and interpolation for the actual game so that the full calculations do not have to be run at runtime. This should give a good representation of actual sound behaviour in rooms and worlds. However, it does not have anything to do with how we perceive 3D audio or binaural audio. The resulting mix will be a "simple" stereo, 5.1, 7.1 or Dolby Atmos signal.

TL;DR
PS5 Tempest Engine
- DOES provide real 3D audio simulation of our ears (sound direction, locality, presence, PSVR)
- DOES NOT provide room reflections, reverb (indoor, outdoor, caves, etc)
- DOES provide computational room for developers to use on audio

XsX Project Acoustics
- DOES provide room simulation for reflections, reverb, etc (indoor, outdoor, caves, etc)
- DOES NOT provide real 3D audio simulation of our ears (sound direction, locality, presence)
- PROBABLY provides computational room for developers to use on audio

I think the XsX solution can be considered a replacement of raytracing for audio.

Hope this clears some stuff up. If anybody has some more insight please let me know!
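To make the HRTF idea above concrete, here is a minimal numpy sketch (not Sony's actual pipeline): a mono source is convolved with a left and a right head-related impulse response (HRIR) for its direction, and the two results form the binaural stereo mix. The HRIRs here are made-up placeholders; a real implementation would index a measured HRIR database by azimuth and elevation.

```python
import numpy as np

def binauralize(mono, hrir_left, hrir_right):
    """Render a mono source to binaural stereo by convolving it with
    a left/right head-related impulse response (HRIR) pair."""
    left = np.convolve(mono, hrir_left)
    right = np.convolve(mono, hrir_right)
    return np.stack([left, right], axis=-1)  # shape: (samples, 2)

# Toy example: white-noise "source" and made-up 256-tap HRIRs.
rng = np.random.default_rng(0)
mono = rng.standard_normal(48_000)                               # 1 s at 48 kHz
hrir_l = rng.standard_normal(256) * np.exp(-np.arange(256) / 32.0)
hrir_r = np.roll(hrir_l, 8)                                      # crude interaural delay
stereo = binauralize(mono, hrir_l, hrir_r)
print(stereo.shape)                                              # (48255, 2)
```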
 

Bo_Hazem

Banned
Matt Booty literally said that they will release lots of AA games between AAA titles. Not sure how much clearer this can be. I know the SDF took part of the quote out of context and ran with it, spreading FUD across the internet, but you are wrong. Look at the job postings on some of these XGS websites; they are mostly looking for AAA-experienced workers.

Yes, Halo, Forza, Gears, repeat.
 

Bo_Hazem

Banned
[Quoting SgtCaffran's post above on PS5 Tempest Engine vs. XsX Project Acoustics.]

That's exactly what I've been trying to explain, but yours comes with great additional detail. Thanks a lot. (y)
 

nosseman

Member
Yes and no. 3D audio can't be calculated on the CPU alone; you either reserve CUs from the GPU or settle for mediocre 3D audio. As AMD says:




And no, the XSX demo shows only a 4.6x speedup over the Xbox One HDD. That's way too slow for ideal streaming directly from the SSD without preloading into the CPU/GPU. The margin is massive.


Again - baseless speculation.

How can you know that the Xbox Series X does not have dedicated audio hardware? You don't!

How do you know the XSX is only 4.6x faster than the Xbox One? You don't! You simply watched a demo of an existing game, compared the loading times, and ignored the fact that new games will use the Velocity Architecture to stream assets.

In reality the XSX SSD is 40 times faster than the Xbox One HDD. That does not mean it could load an existing game 40 times faster, but a new game that leverages the Velocity Architecture could approach that number when streaming assets.
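As a rough sanity check on those two ratios (the 2.4/4.8 GB/s figures are Microsoft's published numbers; the HDD throughput and the load times behind the quoted 4.6x figure are assumptions):

```python
# Back-of-the-envelope comparison. The XSX SSD figures are the published
# 2.4 GB/s raw / 4.8 GB/s compressed numbers; the HDD figure is assumed.
xbox_one_hdd_mb_s = 60           # assumed effective HDD throughput
xsx_ssd_raw_mb_s = 2400          # published raw SSD throughput
xsx_ssd_comp_mb_s = 4800         # published effective (compressed) throughput

print(xsx_ssd_raw_mb_s / xbox_one_hdd_mb_s)    # 40.0x raw throughput
print(xsx_ssd_comp_mb_s / xbox_one_hdd_mb_s)   # 80.0x with compression

# A back-compat title that loads in ~50 s on Xbox One and ~11 s on XSX
# shows only a ~4.6x speedup, likely because CPU-side work rather than
# raw I/O dominates the load time of an unmodified game.
print(50 / 11)                                 # ~4.5x observed
```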
 
[Quoting SgtCaffran's post above on PS5 Tempest Engine vs. XsX Project Acoustics.]


PS5 can also do ray-traced audio in addition to 3D audio. Cerny already talked about ray-traced audio.
 

ZywyPL

Banned
Yes, Halo, Forza, Gears, repeat.

Still more variety than 3rd person walking sims lol.

That's putting it mildly. It's rampant trolling, that's what it is. He knows very well that what he's writing isn't true. Just report and move on.

I'm actually curious who is paying him, and how much. Because clearly he's not doing it for free, no way.
 

Darius87

Member
Do workgroup processors do anything different in RDNA 2?

Since they are just two CUs grouped together, I wondered whether they do anything differently or work the same way?
The more powerful dual compute unit starts with a dedicated front-end as illustrated in Figure 6. The L0 instruction cache is shared between all four SIMDs within a dual compute unit, whereas prior instruction caches were shared between four CUs – or sixteen GCN SIMDs. The instruction cache is 32KB and 4-way set-associative; it is organized into four banks of 128 cache lines that are 64-bytes long. Each of the four SIMDs can request instructions every cycle and the instruction cache can deliver 32B (typically 2-4 instructions) every clock to each of the SIMDs – roughly 4X greater bandwidth than GCN.
The fetched instructions are deposited into wavefront controllers. Each SIMD has a separate instruction pointer and a 20-entry wavefront controller, for a total of 80 wavefronts per dual compute unit. Wavefronts can be from a different work-group or kernel, although the dual compute unit maintains 32 work-groups simultaneously. The new wavefront controllers can operate in wave32 or wave64 mode.
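The figures in that excerpt are internally consistent, which a quick check confirms:

```python
# Sanity-check the RDNA dual compute unit figures quoted above.
banks, lines_per_bank, line_bytes = 4, 128, 64
icache_kb = banks * lines_per_bank * line_bytes / 1024
print(icache_kb)                               # 32.0 KB L0 instruction cache

simds_per_dcu, wavefronts_per_simd = 4, 20
print(simds_per_dcu * wavefronts_per_simd)     # 80 wavefronts in flight

bytes_per_simd_per_clk = 32
print(simds_per_dcu * bytes_per_simd_per_clk)  # 128 B of instructions per clock
```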
 

geordiemp

Member
Still more variety than 3rd person walking sims lol.

You mean the scenes where there's a break, a corridor or a door to hide loading? Hopefully that will no longer be necessary.

Or do you mean the game actually having a narrative and story around gameplay ?
 

SgtCaffran

Member
PS5 can also do ray-traced audio in addition to 3D audio. Cerny already talked about ray-traced audio.
Correct, but that is a whole other story and separate from the Tempest Engine/Project Acoustics. Although at some point the ray-traced audio will be input for the PS5 audio chip.

The question is how well ray tracing handles audio, since ray tracing works with... rays, and sound is a wave-based phenomenon. The Xbox solution, Project Acoustics, is actually an offline wave simulation. So when that is used, there is no need for audio ray tracing anymore.
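As an illustration of the rays-versus-waves point, here is a toy geometric-acoustics sketch (purely illustrative, not either console's actual method): each traced ray path contributes a delayed, attenuated spike to a room impulse response, which is then convolved with the dry sound. The `paths` list stands in for whatever an audio ray tracer would output.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s
SAMPLE_RATE = 48_000

def impulse_response_from_paths(paths, length_s=0.5):
    """paths: list of (path_length_m, attenuation) pairs from a ray tracer."""
    ir = np.zeros(int(length_s * SAMPLE_RATE))
    for length_m, gain in paths:
        delay = int(length_m / SPEED_OF_SOUND * SAMPLE_RATE)
        if delay < len(ir):
            ir[delay] += gain          # each ray becomes a delayed echo
    return ir

# Hypothetical ray-tracer output: direct path plus two bounces.
paths = [(5.0, 1.0), (12.0, 0.4), (21.0, 0.15)]
ir = impulse_response_from_paths(paths)
dry = np.random.default_rng(1).standard_normal(SAMPLE_RATE)  # 1 s of noise
wet = np.convolve(dry, ir)            # the "room" applied to the dry signal
```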
 

PaintTinJr

Member
Paging Bo_Hazem, @thicc_girls_are_teh_best, SonGoku

When Cerny talked about audio, he mentioned the SPUs in the PS3 were a "beast" at audio rendering and that "simple, pipeline algorithms could really take advantage of asynchronous DMA."

I know very little about audio - other than whether it sounds good or not - and was wondering what about the SPUs in the PS3 made it a beast at rendering audio as opposed to the PS4. Is it just as Cerny says, "simple, pipeline algorithms"? Additionally, Cerny mentioned that they modified an AMD CU to work more like an SPU; given the custom nature of this, do you think developers will take advantage of it? Also, are there any examples that come to mind? I know Cerny brought up rain droplets in his presentation; are there any others that stand out? Maybe wind?



Also, what is this thing?

[attached image: aeaNrEY.png]

It is the versatility of algorithms they can accelerate that makes the SPUs important – SPUs, or something very similar, were almost certainly on space projects in the early 2000s.

If an ASIC (fixed-function path) can reduce a predetermined compute problem to a minimal number of clock cycles, then depending on the problem type a CU or a general-purpose CPU core 'might' individually get within single-digit orders of magnitude of that work - at least until the problem becomes well enough understood to be subsumed into an ASIC solution. But not all compute problems sit close to CU or CPU-core capability, and that is where the SPU is the perfect candidate, providing maximum versatility versus performance at the expense of the difficulty of crafting optimal solutions.

Thinking about how early we are into 3D audio and realtime RT, having such versatility in the Tempest Engine seems essential for future-proofing the PS5 against GPU ASIC solutions that emerge during the generation. PlayStation will be hoping that the problem domain of 3D audio will be mapped out and the processing burden lowered, so that the Tempest Engine resources used for 3D audio processing can be reclaimed and diverted to newer problems without hampering the 3D audio solution.
 

longdi

Banned
The whole ear-mapping binaural virtualization is not that special a tech.
Not enough to overcome 12TF of pure flops.

Other companies already have it on the market.

Creative now has a coupon for $50 off all the products in the SXFI line:

Coupon Code: SXFIMOVIES
 
Okay, so based on some poking around I did, I would have to concede now that the PS5 definitely appears to support Sampler Feedback Streaming.

Checked the DirectX specs, which publish the engineering specs for a lot of DirectX features and are packed full of detail on what it takes for them to work, in addition to often revealing other names they go by or what a piece of hardware needs in order to support the feature or a specific tier of feature support. They also make sure to caution us with the line:

":Note that some of this material (especially in older specs) may not have been kept up to date with changes that occurred since the spec was written. "

Sampler Feedback Streaming goes by a few other names, and one IMMEDIATELY jumped out at me: partially resident textures. So did Sparse Feedback Textures.


Terminology
Use of sampler feedback with streaming is sometimes abbreviated as SFS. It is also sometimes called sparse feedback textures, or SFT, or PRT+, which stands for “partially resident textures”.

If nothing has changed, and I'm assuming it hasn't, that would mean this is a hardware feature introduced way back with the Radeon HD 7970, two years before the PS4 and Xbox One were released. It was only supported via OpenGL, not DirectX, until very recently with DirectX 12 Ultimate, which greatly limited its adoption. That would have no effect at all on PlayStation's ability to use it much sooner, depending on the tier of tiled resources it supported, since the feature is best utilized alongside that; even the PS4 would have had some support for it.

So the two folks ethomaz and someone else who were telling me that this was the case were absolutely correct. This fully explains why Mark Cerny would have no need to talk about it, as it's not a new feature at all, but it is a far more important feature this gen due to the extra RAM. It doesn't exactly explain why he didn't mention VRS, since that's a totally new feature to AMD GPUs, only just added in RDNA2 and thus newer even than Primitive Shaders (introduced in Vega, which is also where the next-generation Geometry Engine was introduced, though the feature wasn't fully activated for use until RDNA1). I'm also going to assume Primitive Shaders is Mesh Shaders, but Primitive Shaders, or the Next-Generation Geometry Engine, is what AMD calls it.

Would further explain why Anandtech focused specifically just on Ray Tracing and Variable Rate Shading in their 5700XT review as the big new features that are missing. No mention of Mesh Shaders, likely because Anandtech assumes Primitive Shaders is it.

https://www.anandtech.com/show/14618/the-amd-radeon-rx-5700-xt-rx-5700-review

AMD in turn comes in with the edge on manufacturing process, as they’re using TSMC 7nm versus the 16nm offshoot that NVIDIA uses, however NVIDIA comes in with a notable feature advantage thanks to ray tracing and variable rate shading support. AMD and NVIDIA’s cards are not equal in features, and that will play a big part in their value.



The perfect complement to this is tiled resources at the highest tier, which would have missed the PS4 and Xbox One but would be available in the Xbox One X and PS4 Pro for sure. So both the PS4 Pro and Xbox One X can take advantage of this as well.

Assuming Sampler Feedback was never active even in DirectX 12 on Xbox One, the PS4, which I believe has an API closer to OpenGL, would have had some level of access to Sampler Feedback, but perhaps due to the slowness of the hard drives it wasn't yet possible to fully benefit from the feature on current-gen consoles. Anyway, check these out.


Among the features added to Graphics Core Next that were explicitly for gaming, the final feature was Partially Resident Textures, which many of you are probably more familiar with in concept as Carmack’s MegaTexture technology. The concept behind PRT/Megatexture is that rather than being treated as singular entities, due to their size textures should be broken down into smaller tiles, and then the tiles can be used as necessary. If a complete texture isn’t needed, then rather than loading the entire texture only the relevant tiles can be loaded while the irrelevant tiles can be skipped or loaded at a low quality. Ultimately this technology is designed to improve texture streaming by streaming tiles instead of whole textures, reducing the amount of unnecessary texture data that is streamed.

Currently MegaTexture does this entirely in software using existing OpenGL 3.2 APIs, but AMD believes that more next-generation game engines will use this type of texturing technology. Which makes it something worth targeting, as if they can implement it faster in hardware and get developers to use it, then it will improve game performance on their cards. Again this is similar to volume shadows, where hardware implementations sped up the process.

Wrapping things up, for the time being while Southern Islands will bring hardware support for PRT software support will remain limited. As D3D is not normally extensible it’s really only possible to easily access the feature from other APIs (e.g. OpenGL), which when it comes to games is going to greatly limit the adoption of the technology. AMD of course is working on the issue, but there are few ways around D3D’s tight restrictions on non-standard features.


If this doesn't near exactly match the description of Sampler Feedback Streaming, I don't know what does.
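For a sense of scale on the tile idea described in that excerpt (illustrative numbers only, not from either console's documentation): Direct3D tiled resources use 64 KB tiles, so with a block-compressed format at 1 byte per texel a tile covers a 256x256 texel region, and only the tiles a frame actually samples need to be resident.

```python
# Rough tile math for partially resident textures (illustrative only).
TILE_BYTES = 64 * 1024          # D3D tiled-resource tile size
BYTES_PER_TEXEL = 1             # e.g. BC7 block compression (8 bits per texel)

tile_dim = int((TILE_BYTES / BYTES_PER_TEXEL) ** 0.5)   # 256x256 texels per tile
tex_dim = 16_384                                         # a 16K x 16K texture
tiles_total = (tex_dim // tile_dim) ** 2                 # 4096 tiles
full_size_mb = tex_dim * tex_dim * BYTES_PER_TEXEL / 2**20

print(tile_dim, tiles_total, full_size_mb)               # 256, 4096, 256.0 MB

# If a frame only samples, say, 300 of those tiles, resident memory is
# ~18.75 MB instead of the full 256 MB.
print(300 * TILE_BYTES / 2**20)
```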

Another bit of info from a 2012 blog post.


AMDs latest GPUs have one interesting new feature: Partially resident textures, PRT for short are a new way to handle textures too large for the graphics memory.

But let’s start at the beginning: Up till now GPUs didn’t have a memory management unit (MMU), so all the data for rendering had to be present in the graphics memory. That’s especially problematic for textures, on the one hand you want high resolution textures to create a world with lots of detail, on the other hand you see most of the scene most of the time from a larger distance. While you might see all details eventually, you can never see everything at once: your screen size is too limited (and tiny compared to the amount of textures in large virtual worlds like games).

Megatexturing or virtual texturing as it’s used in ID Softwares latest shooter Rage is exploiting the fact, that you can’t see all the details at once and manages to handle around 20GB of textures – or better, one gigantic texture. It analyzes which parts are needed and swaps those parts in. All of this is done in software as a CPU or CUDA implementation (depending on the users settings and the platform).

Partially resident textures promise to give the programmer a simple way to use gigantic textures but this time the detection of missing data and the fetching will be done by the MMU of the GPU. John Carmack already announced, that Doom 4 will get optional support for this technique.

AMD has published a demo that presents this technique, the demo is written in OpenGL using an extension. No Direct3D support was announced yet, but I guess it will become a part of one of the next DX versions. The OpenGL extension used internally by AMD was announced, but it is not yet public.

This means that the most interesting questions are still open: What happens in case of missing data? Does the pipeline stall or will data of a lower MipMap level be used?

This line makes more sense when combined with this from the Series X Digital Foundry article.


A technique called Sampler Feedback Streaming - SFS - was built to more closely marry the memory demands of the GPU, intelligently loading in the texture mip data that's actually required with the guarantee of a lower quality mip available if the higher quality version isn't readily available, stopping GPU stalls and frame-time spikes. Bespoke hardware within the GPU is available to smooth the transition between mips, on the off-chance that the higher quality texture arrives a frame or two later.

It all lines up pretty well. Considering the cautionary line that older specs may not have been kept up to date, it makes me wonder if something further was done to SFS for Series X, seeing as the Microsoft architect said it wasn't a standard RDNA2 feature and that it was custom-made for Series X. Maybe it got enhanced somehow since the original hardware implementation roughly 9 years ago. Not sure.
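The fallback behaviour described in the Digital Foundry quote above (drop to a lower-quality mip if the requested one isn't resident yet, and record what was missing so it can be streamed) can be sketched roughly like this; the residency-set representation here is a hypothetical simplification, not Microsoft's actual API:

```python
def sample_with_fallback(resident_mips, requested_mip, feedback_log):
    """Return the best available mip at or below the requested quality.

    resident_mips: set of mip levels currently in memory (0 = highest detail).
    requested_mip: mip level the sampler wants this frame.
    feedback_log:  set collecting mips to stream in for later frames.
    """
    if requested_mip not in resident_mips:
        feedback_log.add(requested_mip)       # the "sampler feedback" part
    for mip in range(requested_mip, 16):      # walk toward coarser mips
        if mip in resident_mips:
            return mip                        # no stall, just lower quality
    raise RuntimeError("not even the coarsest mip is resident")

resident = {4, 5, 6}           # only coarse mips loaded so far
to_stream = set()
print(sample_with_fallback(resident, 1, to_stream))   # 4 (coarser fallback)
print(to_stream)                                      # {1} -> stream mip 1 in
```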
 

Aceofspades

Banned
[Quoting SgtCaffran's post above on PS5 Tempest Engine vs. XsX Project Acoustics.]

Watch Cerny's speech again; he said something about virtual surround systems. I think the Tempest Engine covers the room-simulation part as well.
 

FeiRR

Banned
[Quoting PaintTinJr's post above on SPU versatility and the Tempest Engine.]
As I understand Cerny's remarks on HRTF, the problem is a lack of statistical data right now. Measuring HRTFs isn't exactly cheap, and it's a cumbersome process that requires expensive hardware. He said there will be several profiles (I think the number 4 or 5 was mentioned) to choose from at the start, with the possibility of revisions and more choice later. Isn't that similar to how the MP1/2/3 compression algorithms were designed by the Fraunhofer Institute? MP3 compression works well for the majority of the population, but if you are an outlier, you'll suffer. I don't know if I got it right; I don't know much about audio engineering.
 

xacto

Member
The whole ear-mapping binaural virtualization is not that special a tech.
Not enough to overcome 12TF of pure flops.

Other companies already have it on the market.

Creative now has a coupon for $50 off all the products in the SXFI line:

Coupon Code: SXFIMOVIES

That bolded text right there is what's wrong with fanboyism... on both sides of the fence.
 
[Quoting longdi's post above.]

And again - you won't have to buy new stuff to use Sony's solution. You'll be able to hear better sound using your current sound system, be it headphones, TV speakers or anything else...
 

BluRayHiDef

Banned
Does anyone have the following values for the PlayStation 5's and the Xbox Series X's GPUs? I know only that the PS5's GPU has 144 texture mapping units.

1. Stream Processors
2. Texture mapping units
3. Raster operators
4. Compute units
5. Asynchronous compute units
 

Darius87

Member
PS5 Tempest Engine
- DOES provide real 3D audio simulation of our ears (sound direction, locality, presence, PSVR)
- DOES NOT provide room reflections, reverb (indoor, outdoor, caves, etc)
- DOES provide computational room for developers to use on audio

Hope this clears some stuff up. If anybody has some more insight please let me know!


Also, the PS5 can do RT audio, and that's literally how reverb and reflections (they're the same thing) are made: sound from a source bouncing off objects.
 

nosseman

Member
Does anyone have the following values for the PlayStation 5's and the Xbox Series X's GPUs? I know only that the PS5's GPU has 144 texture mapping units.

1. Stream Processors
2. Texture mapping units
3. Raster operators
4. Compute units
5. Asynchronous compute units

Some of these numbers are confirmed but not all.

Here are the unconfirmed numbers from TechPowerUp (https://www.techpowerup.com/):

PS5:
Shading Units 2304 (Stream Processors?)
TMUs 144 (Texture mapping units)
ROPs 64 (Raster operators?)
Compute Units 36 (Compute units)
L2 Cache 4 MB

Xbox Series X:
Shading Units 3328
TMUs 208
ROPs 80
Compute Units 52
L2 Cache 5 MB

The only thing we know for sure as far as I know is the CU count.
 

PaintTinJr

Member
As I understand Cerny's remarks on HRTF, the problem is a lack of statistical data right now. Measuring HRTFs isn't exactly cheap, and it's a cumbersome process that requires expensive hardware. He said there will be several profiles (I think the number 4 or 5 was mentioned) to choose from at the start, with the possibility of revisions and more choice later. Isn't that similar to how the MP1/2/3 compression algorithms were designed by the Fraunhofer Institute? MP3 compression works well for the majority of the population, but if you are an outlier, you'll suffer. I don't know if I got it right; I don't know much about audio engineering.
That analogy works pretty well at an overview level IMO - and for lots of similar technologies and issues. However, the MP3 problem is one of reconstructing a signal from lossy compression at the point where the ear converts between audio wave and brain signal (AFAIK), whereas the Tempest 3D audio problem (for headphones) looks to be one of highly targeted superposition of waves, reconstructing sound waves at specific points, where the targets are the entrances to caves of unknown shape, with an unknown entrance size, known only to lie within a limited range. If using TV speakers it would seem like the same problem, but with the caves on opposite sides of hills.
 

CrysisFreak

Banned
Most of the new power discussion is misguided IMHO, in the sense that fanboys want to see a significant difference where there is none.
The PS4 was quite a bit stronger than the XBone, and that meant 1080p became 900p or 900p became 720p. Sometimes there was actually parity; as you know, things are complicated.
Now the relative difference is much smaller AND the resolutions we're dealing with are so high that image quality is great anyway. So these machines will provide about the same multiplat experience.
One wildcard is the PS5 I/O, which will probably benefit exclusives greatly.


Cool, if true.🥴
No Forza Horizon 5 = No excitement from me
I need that shit asap.
 

BluRayHiDef

Banned
[Quoting nosseman's unconfirmed spec figures above.]

Thanks.

I know how to calculate texture fillrate (# of TMUs x GPU frequency), but how do you calculate the aspects of GPU performance associated with the other specifications? I'd like to know because I've read that the higher frequency of the PS5's GPU makes it faster than the XSX's GPU in some regards, such as rasterization.
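For reference, the usual back-of-the-envelope formulas, plugged with the unconfirmed TechPowerUp unit counts above and the announced clocks (2.23 GHz PS5 boost, 1.825 GHz XSX); treat the outputs as theoretical peaks only:

```python
# Theoretical throughput from the (partly unconfirmed) unit counts above.
# texture fillrate = TMUs * clock, pixel fillrate = ROPs * clock,
# FP32 compute     = shader ALUs * 2 (FMA) * clock.
def gpu_theoreticals(shaders, tmus, rops, clock_ghz):
    return {
        "texture_fillrate_gtexel_s": tmus * clock_ghz,
        "pixel_fillrate_gpixel_s":   rops * clock_ghz,
        "fp32_tflops":               shaders * 2 * clock_ghz / 1000,
    }

print(gpu_theoreticals(2304, 144, 64, 2.23))    # PS5 (boost clock)
print(gpu_theoreticals(3328, 208, 80, 1.825))   # XSX
# The higher PS5 clock narrows the pixel-fillrate gap far more than the
# TFLOPs gap, so clock-sensitive stages sit closer than raw TFLOPs suggest.
```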
 

SgtCaffran

Member
Watch Cerny's speech again; he said something about virtual surround systems. I think the Tempest Engine covers the room-simulation part as well.
He mentioned typical audio calculations that are performed in games right now that can be run on the Tempest Engine; see point 3, where I stated that resources are available for other audio calculations on the chip. However, that is not equal to the full room simulation that Xbox's Project Acoustics is trying to achieve.

The advantage of the Tempest Engine in this case (compared to PS4) is that these calculations do not have to be performed on the CPU.
 
I’m sure you’re very well aware but at least if you wanted to, you could purchase a detachable mic 🍻. I have a beloved pair of audio-technica cans and the mod mic. Perfection.
Yeah, and that's probably the best solution for a mic, it's just that I wouldn't use it. I don't do much multiplayer, I've always been more of a single player guy, so that's mainly the reason why I don't do headsets, or anything with a mic.

And Audio-Technica is GOOD, some of the best mid-range priced headphones. What model do you have?
 

SgtCaffran

Member


Also, the PS5 can do RT audio, and that's literally how reverb and reflections (they're the same thing) are made: sound from a source bouncing off objects.

Yes, like I mentioned before, both machines are able to use their RT cores for audio computations. That is a very separate topic; the discussion was about the custom audio chips.

Audio is a wave effect and ray tracing simulates rays. It will be very interesting to see how close Sony can approximate realistic room audio with RT.
 

ZywyPL

Banned
No Forza Horizon 5 = No excitement from me
I need that shit asap.

I'm on the sim/realistic side, so FM8 and GT7 will decide which console I'll get first, but I have low faith in the latter judging by its last three installments... Still, I hope Kaz/PD can redeem themselves; they have all the processing power and memory they need, so no more excuses this time around.
 

Darius87

Member
He mentioned typical audio calculations that are performed in games right now that can be run on the Tempest Engine; see point 3, where I stated that resources are available for other audio calculations on the chip. However, that is not equal to the full room simulation that Xbox's Project Acoustics is trying to achieve.

The advantage of the Tempest Engine in this case (compared to PS4) is that these calculations do not have to be performed on the CPU.
How so? Is there something more beyond reverberation and delay for acoustic effects?
Also, this is what Sony is doing for sound:
  • HRTF
  • < 100 HQ sound sources
  • ambisonics
  • convolution reverb (more realistic than traditional reverb, but compute-expensive; see the sketch below), echo
  • sound locality
  • RT audio
Your term "full room simulation" is just reverb, echo and locality. You can argue about the quality of these effects, but there's nothing more to sound waves and acoustics than that.
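On the convolution reverb item above: the reason it is compute-expensive is that naive convolution of an N-sample signal with an M-tap room impulse response costs on the order of N*M multiply-adds, which is why FFT-based convolution is normally used. A rough illustration with a made-up impulse response:

```python
import numpy as np
from scipy.signal import fftconvolve

SAMPLE_RATE = 48_000
dry = np.random.default_rng(2).standard_normal(SAMPLE_RATE)       # 1 s signal
room_ir = np.exp(-np.arange(SAMPLE_RATE // 2) / 6000.0)           # fake 0.5 s decay
room_ir *= np.random.default_rng(3).standard_normal(room_ir.size) * 0.1

wet = fftconvolve(dry, room_ir)          # FFT convolution: roughly N log N work
direct_cost = dry.size * room_ir.size    # naive convolution: N*M multiply-adds
print(f"{direct_cost:.2e} multiply-adds if done naively")          # ~1.15e+09
```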
 
Yup, I read that LDAC is way better than standard Bluetooth codecs and can carry up to 990 kbps of audio compared to ~320 kbps on regular Bluetooth. It would be smart to use it for the 1000XM3/4 or the next PS headset to take advantage of it:


aptX Adaptive is nice as well, at up to 420 kbps, for other applications that might support it but not LDAC. I found a Sony amp while looking; what do you think of this:


USB DAC Headphone Amplifier
PHA-1A




Looks small but probably exceeds the 1000XM3's specs (meaning it needs even better headsets to take advantage of it?) and is as blasphemous as your $1099 amp in terms of price.

Would it be smart to buy it and use it with, let's say, the XM3 (40 kHz max) or XM4, given it's rated at 10 Hz-100 kHz? I see it advertised with both the Walkman and Xperia (I'm a 15-year-streak Sony phone user).



Sorry if I'm asking too many questions.🤓
Not a problem, my man. Let's just break it down a bit:

10Hz-100kHz is the frequency range supported by that particular amp, which is nice, but most headphones won't have that kind of frequency range (for example, my AKG K-872s peak at 54kHz, and they're not exactly cheap or bad headphones). So I don't want to say it's overkill, because having a broader frequency range to work with always results in richer detail, but it doesn't necessarily have to be a deal breaker when looking for an amp; you could be okay with something lesser.

Now, for the real deal of all of this: you have to carefully consider the sources you're going to use for sound, music, and so on. Meaning, if the source you're going to use, let's say the PS5, peaks at 48kHz (it's just a number commonly used; I don't remember right now if that's what Mark Cerny said in the presentation), then that's the range you need to cover, at least. If you want to use the amp for, say, your PC, then it's the same thing: you have to think about how you listen to music. Do you listen to MP3? Compressed formats? If you have some hi-res audio, what is the frequency peak, the bit depth? You have to look at that, know what you need to cover, and then, with that in mind, go for something that suits your needs. This is something I talk about with a lot of bands I record: just know what you need and what suits you best, and then buy that. It serves no purpose buying crazy good equipment if you're not going to take advantage of it.

And, at any point, feel free to hit me up on PM for anything, I don't want to derail the thread too much with all this audio talk haha
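One reference point for the numbers in that exchange (standard signal-processing relationships, nothing console-specific): a source's sample rate caps the highest audio frequency it can carry at half that rate (Nyquist), and bit depth maps to dynamic range.

```python
# Nyquist limit and ideal dynamic range for common digital source formats.
def nyquist_khz(sample_rate_hz):
    return sample_rate_hz / 2 / 1000          # highest representable frequency

def dynamic_range_db(bit_depth):
    return 6.02 * bit_depth + 1.76            # ideal quantization SNR

print(nyquist_khz(48_000), dynamic_range_db(16))   # 24.0 kHz, ~98 dB
print(nyquist_khz(96_000), dynamic_range_db(24))   # 48.0 kHz, ~146 dB
# So a "48 kHz" source tops out at 24 kHz of audio bandwidth, already at the
# edge of human hearing; headphone specs quoted in kHz describe analog
# frequency response, which is a different measurement.
```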
 

Darius87

Member
Yes, like I mentioned before, both machines are able to use their RT cores for audio computations. That is a very separate topic; the discussion was about the custom audio chips.

Audio is a wave effect and ray tracing simulates rays. It will be very interesting to see how close Sony can approximate realistic room audio with RT.
You still think the PS5 won't provide reverb? That's why I told you about audio RT.
 

scie

Member

Cool, if true.🥴

Well, the part "Their new games are sharing a lot of tech, built for UE4" would make sense. A lot of game studios have their go-to engine, like Frostbite at EA for BF, FIFA and NFS, or Capcom with its engine for RE2, RE3 and DMC. They can share the expertise across all their studios and build synergies on it.
 

B_Boss

Member
Yeah, and that's probably the best solution for a mic, it's just that I wouldn't use it. I don't do much multiplayer, I've always been more of a single player guy, so that's mainly the reason why I don't do headsets, or anything with a mic.

And Audio-Technica is GOOD, some of the best mid-range priced headphones. What model do you have?

Ahhh I understand! Oh hands down my favorite headphone manufacturer period. I have my trusted and amazing pair of ATH-MSR7BK cans. I absolutely love them.
 
Ahhh I understand! Oh hands down my favorite headphone manufacturer period. I have my trusted and amazing pair of ATH-MSR7BK cans. I absolutely love them.
Oh, the MSR7BK are probably one of the very best headphones they've ever done. I tried them a couple times, and again, considering the price range, I was impressed. The soundstaging is really good, and everything is very clear. Treble is a bit too sharp, but that's just old whiny me talking haha
 
I personally think $449-$499 for Series X and $399-$449 for PS5.

Why specifically tomorrow? I thought we were expecting something maybe at the end of the month or in early May. Did I miss something?

Some seem to think Sony will always push news on Tuesdays or on a three-week cadence (which would be April 28th?).
 

Redlight

Member
People know it. It is annoying, as it seems you are either still trying to gloat or are upset that people are not all renouncing their planned PS5 purchases. People get that it's got a higher FLOPS rating and that they are still very close (within about 15% of each other, realistically), and they are still excited for their own reasons.

Some people know it, sure; others have invested a lot of time into fudging the facts to create the impression that the PS5 is technically superior hardware, by zeroing in on very specific features and downplaying any Series X advantage. It goes on for page after page as if the spec reveals never happened.

It would be a shame if anyone was misled by that spin - and they well could be if they came to this thread late.

I'm not trying to get people to 'renounce their PS5 purchase' at all; the PS5 will be awesome on its own terms. Nor am I 'gloating'. All I'm doing is restating the known facts when it seems they've been forgotten or buried under an avalanche of denials.

Restating those facts is not something that should upset anyone.
 