
Next-Gen PS5 & XSX |OT| Console tEch threaD


BluRayHiDef

Banned
Because increasing the number of Shader Arrays increases the complexity, power draw, and heat of the chip.

I can only see a single option:

- 4 Shader Arrays with 7 WGP each

How can you match 56 CUs with 6 SAs or 8 SAs? (Each WGP contains 2 CUs.) 6 SAs with 5 WGPs each = 60 CUs, 6 SAs with 4 WGPs = 48 CUs, 8 SAs with 4 WGPs = 64 CUs, 8 SAs with 3 WGPs = 48 CUs.

IMO only 4 SAs with 7 WGP makes sense.

PS5 is 4 SAs with 5 WGPs each, identical to Navi 10. AMD has already shipped more WGPs per SA: Navi 14 has 2 SAs with 6 WGPs each.
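The WGP arithmetic above can be brute-forced; a quick sketch in Python (each RDNA WGP holds 2 CUs, and the 56-CU target is XSX's full die, of which 52 CUs are active):

```python
# total CUs = shader_arrays * wgps_per_array * 2 (2 CUs per RDNA WGP).
# Check which SA/WGP combinations land exactly on a 56-CU die.
TARGET_CUS = 56

configs = [
    (sa, wgp)
    for sa in (4, 6, 8)          # the SA counts discussed above
    for wgp in range(1, 10)
    if sa * wgp * 2 == TARGET_CUS
]
print(configs)  # [(4, 7)] -> only 4 SAs x 7 WGPs each hits 56 CUs
```

As the post argues, with 6 or 8 SAs the WGP count per array would have to be fractional, so 4 SAs x 7 WGPs is the only fit.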

Thanks. This clarifies things for me.
 

BluRayHiDef

Banned
If the Xbox APU has 4 Shader Arrays, then it indeed has 64 ROPs.

I'll say more: if it's not 64 ROPs, then 96 ROPs is much more likely.

80 ROPs is really weird.

96 ROPs would increase the difference in the rasterization rates between the consoles from 2.25% to 18.54%.

PS5's GPU:
64 ROPs x 2,230 MHz = 142,720 million pixels per second

XSX's GPU:
96 ROPs x 1,825 MHz = 175,200 million pixels per second

142,720 / 175,200 = 0.814612 -> 81.46%

The PS5 would have a rasterization rate that would be 81.46% of the XSX's.

100% - 81.46% = 18.54%

By the way, would these percentage differences that I've calculated result in any significant perceivable differences in performance between the consoles?

Also, to accommodate these differences, would developers be more likely to proportionally decrease polygon count and texture resolutions for the PS5 to match its framerates with those of the XSX or would they keep the polygon count and texture resolutions the same and therefore cause the PS5 to have lower framerates?

How perceivable would the differences in polygon count and texture resolutions or framerates be?
 

nosseman

Member
If the Xbox APU has 4 Shader Arrays, then it indeed has 64 ROPs.

I'll say more: if it's not 64 ROPs, then 96 ROPs is much more likely.

80 ROPs is really weird.

This has been asked on /r/amd, and some people believe the number of ROPs can change.



Yes it is possible.

In RDNA parts the ROPs come in clusters of 4 units, themselves grouped in fours, for 16 ROPs per cluster. There are two of these clusters per Shader Engine, meaning 32 ROPs for 1-SE designs (Navi 14) and 64 ROPs for 2-SE designs (Navi 10).

I believe it is all modular, so it is possible for each SE to contain a pair of 20-ROP clusters rather than 16-ROP ones, leading to 80 ROPs across a 2-SE GPU.

I believe AMD changed the layout in RDNA 2 or the APU is so custom that they can put in more ROPs.
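The layout described above reduces to a one-line formula; a sketch (note the 20-ROP cluster is the poster's speculation, not a confirmed RDNA 2 configuration):

```python
def total_rops(shader_engines, clusters_per_se=2, rops_per_cluster=16):
    """RDNA 1 layout: each Shader Engine carries two clusters of 16 ROPs."""
    return shader_engines * clusters_per_se * rops_per_cluster

print(total_rops(1))                        # Navi 14 (1 SE):  32 ROPs
print(total_rops(2))                        # Navi 10 (2 SEs): 64 ROPs
print(total_rops(2, rops_per_cluster=20))   # speculative 20-ROP clusters: 80
```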
 

Shmunter

Member
96 ROPs would increase the difference in the rasterization rates between the consoles from 2.25% to 18.54%.

PS5's GPU:
64 ROPs x 2,230 MHz = 142,720 million pixels per second

XSX's GPU:
96 ROPs x 1,825 MHz = 175,200 million pixels per second

142,720 / 175,200 = 0.814612 -> 81.46%

The PS5 would have a rasterization rate that would be 81.46% of the XSX's.

100% - 81.46% = 18.54%

By the way, would these percentage differences that I've calculated result in any significant perceivable differences in performance between the consoles?

Also, to accommodate these differences, would developers be more likely to proportionally decrease polygon count and texture resolutions for the PS5 to match its framerates with those of the XSX or would they keep the polygon count and texture resolutions the same and therefore cause the PS5 to have lower framerates?

How perceivable would the differences in polygon count and texture resolutions or framerates be?
I think the more likely scenario is 64 ROPs on XSX, based on tech we are familiar with. 96 is almost science fiction for a console at this point. What are the calcs using the 64-ROP scenario?

Edit: the previous poster may be bringing something new to the table. Whether legitimate is anyone’s guess.
 
96 ROPs would increase the difference in the rasterization rates between the consoles from 2.25% to 18.54%.

PS5's GPU:
64 ROPs x 2,230 MHz = 142,720 million pixels per second

XSX's GPU:
96 ROPs x 1,825 MHz = 175,200 million pixels per second

142,720 / 175,200 = 0.814612 -> 81.46%

The PS5 would have a rasterization rate that would be 81.46% of the XSX's.

100% - 81.46% = 18.54%

By the way, would these percentage differences that I've calculated result in any significant perceivable differences in performance between the consoles?

Also, to accommodate these differences, would developers be more likely to proportionally decrease polygon count and texture resolutions for the PS5 to match its framerates with those of the XSX or would they keep the polygon count and texture resolutions the same and therefore cause the PS5 to have lower framerates?

How perceivable would the differences in polygon count and texture resolutions or framerates be?
ROPs are tied to the number of shader arrays (4). Xbox consoles since the XB1 have always had fewer ROPs and less pixel performance than PlayStation consoles (which could be seen in a few games, XBX included). I expect the same for next gen, when there will be lots of alpha effects.
 

SgtCaffran

Member
Project Acoustics is an API/engine devtool and the TE is an audio processor.
Both solutions actually encompass software and hardware. In that sense they are comparable but in other aspects the goals are very different.

Project Acoustics has a software side that does a room acoustics model based on offline calculations. The Xbox Series X also has a hardware chip dedicated to implementing this model. Now unfortunately we do not have details yet on this chip so it's difficult to compare with the PS5 Tempest Engine. I personally expect the TE to be more powerful.

That being said, the Tempest solution by Sony also has a software and hardware side. The Engine itself is an audio processor, like you said, and its primary goal is the Tempest HRTF software (they use the Tempest name for this as well, if I recall correctly).

Now we know that the PS5 solution allows devs to use any leftover computation for other audio calculations and I expect the Xbox chip to allow the same (would make sense but there is no confirmation on that).

Also, I don't think 3rd-party devs are prohibited from implementing offline calculations done with Project Acoustics on PS5; similarly, MS owns Havok, but that doesn't stop devs from using it on competing consoles. Otherwise they would just use their in-house solution for offline simulations across all platforms.

As far as RT hardware goes, can there be any modifications to the API to do waves or similar behavior?
Interesting question! Would devs on the PS5 be able to use Project Acoustics? I haven't thought of that!

Regarding the audio ray tracing, I think the only way to deal with waves is to approximate them. I don't think it's possible to actually simulate waves and their constructive/destructive behaviour.

However, if many rays are used combined with some smart tricks, I'm sure we will get a decent substitute!
What Darius87 is trying to tell you is that you are misunderstanding what the Tempest Engine is.
The Tempest Engine is just a purpose-built compute unit designed to be good at the fast Fourier transform, which is how convolution is calculated: simulating how sound moves, behaves, and reacts in an environment. All of this includes sound occlusion, reverberance, decay, etc.

Project Acoustics is a wave-acoustics engine that uses pre-computed forms of the same effects that the Tempest Engine can do in real time.
I'm sorry but you are mistaken about the Tempest Engine.

The TE is indeed a custom audio chip that will be very good in handling the Tempest HRTF calculations and other (standard) audio calculations. However, what you shouldn't expect is full room acoustics simulation with wave construction/deconstruction in real time. That is in no way possible even with the relatively powerful TE audio chip.

Also, the XsX uses a custom audio chip to implement the Project Acoustics models. We have no idea how powerful it will be, though.
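The FFT/convolution relationship mentioned above (the core of convolution reverb) can be sketched in Python with NumPy: multiplying spectra is equivalent to convolving the signals, which is why FFT-friendly hardware is good at this. The signal values here are a toy example, not real audio data.

```python
import numpy as np

def fft_convolve(signal, impulse_response):
    """Convolve via the convolution theorem: multiply spectra, inverse FFT."""
    n = len(signal) + len(impulse_response) - 1
    size = 1 << (n - 1).bit_length()          # next power of two for the FFT
    spectrum = np.fft.rfft(signal, size) * np.fft.rfft(impulse_response, size)
    return np.fft.irfft(spectrum, size)[:n]   # trim the zero-padding

# Toy example: a 3-tap "room" impulse response applied to a short dry signal.
dry = np.array([1.0, 0.5, 0.25, 0.0])
ir = np.array([1.0, 0.0, 0.3])               # direct sound plus one echo
wet = fft_convolve(dry, ir)

# Matches direct O(n^2) convolution to within floating-point error.
assert np.allclose(wet, np.convolve(dry, ir))
```

For long impulse responses the FFT route is dramatically cheaper than direct convolution, which is the whole point of building hardware around it.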
 

nosseman

Member
I think the more likely scenario is 64 ROPs on XSX, based on tech we are familiar with. 96 is almost science fiction for a console at this point. What are the calcs using the 64-ROP scenario?

Edit: the above poster may be bringing something new to the table. Whether legitimate is anyone’s guess.

The thing is that slow-and-wide GPUs need more ROPs to keep up with the rest of the rendering configuration.


Historically the number of ROPs, TMUs, and shader processing units/stream processors have been equal. However, from 2004, several GPUs have decoupled these areas to allow optimum transistor allocation for application workload and available memory performance. As the trend continues, it is expected that graphics processors will continue to decouple the various parts of their architectures to enhance their adaptability to future graphics applications. This design also allows chip makers to build a modular line-up, where the top-end GPUs are essentially using the same logic as the low-end products.

The current top-of-the-line Navi GPU for gaming is the 5700 XT, which has 40 CUs. The Xbox Series X has 52 CUs and, along with the PS5, is the first product with RDNA 2.
 

BluRayHiDef

Banned
ROPs are tied to the number of shader arrays (4). Xbox consoles since the XB1 have always had fewer ROPs and less pixel performance than PlayStation consoles (which could be seen in a few games, XBX included). I expect the same for next gen, when there will be lots of alpha effects.
Why has Microsoft traditionally gone with a conservative number of ROPs in their consoles relative to Sony, considering that they typically outclass Sony with regard to every other corresponding component?
 

Panajev2001a

GAF's Pleasant Genius
Why has Microsoft traditionally gone with a conservative number of ROPs in their consoles relative to Sony, considering that they typically outclass Sony with regard to every other corresponding component?

MS had fewer ROPs in the Xbox One because it had considerably fewer CUs, and because of the chip configuration that implied.
The PS4 Pro has way more ROPs because Sony went with a duplicated/mirrored ("butterfly") and improved PS4 GPU design instead of just adding more improved CUs, and thus got more ROPs than it can really even use in most scenarios. It was likely a lot more expensive to remove them than to leave them there, hehe.
 

Darius87

Member
I have not made this claim. I have, however, explained that the primary goal of the Tempest Engine is to provide 3D Audio by means of HRTF and not room acoustics simulation. And I have also stated that any leftover computational power can be used by developers as they see fit. Can you now please stop this useless crusade?
Whenever you say that, I'll show your post where you contradict yourself.

PS5 Tempest Engine
- DOES provide real 3D audio simulation of our ears (sounds direction, locality, presence, PSVR)
- DOES NOT provide room reflections, reverb (indoor, outdoor, caves, etc)
- DOES provide computational room for developers to use on audio

I'm sorry but you are mistaken about the Tempest Engine.

The TE is indeed a custom audio chip that will be very good in handling the Tempest HRTF calculations and other (standard) audio calculations. However, what you shouldn't expect is full room acoustics simulation with wave construction/deconstruction in real time. That is in no way possible even with the relatively powerful TE audio chip.
It's the same thing as using RT audio + convolution reverb, and that's real time; RT audio is cheap compared to other RT effects like shadows, reflections, etc.
So you're wrong again :messenger_grinning_squinting: We can expect full room acoustics simulation, and devs have the choice to do the same offline prebaking for the PS5, because there's no secret sauce needed in an audio chip to do those calculations. Like I said, everything you hear coming from headphones or speakers is just DSP signals, which the Tempest chip excels at.

Also, the XsX uses a custom audio chip to implement the Project Acoustics models. We have no idea how powerful it will be, though.
So powerful that it will need prebaked calculations for acoustics. Pretty dope.
 

StreetsofBeige

Gold Member
Does PS5 allow you to hook up a boring USB HDD with PS4 games on it and it can play them (you just don’t get PS5 perks)?

SeX allows you to. The caveat is no Series X perks, and any game designed specifically for the SeX has to run off the internal or Seagate SSD.
 

SgtCaffran

Member
Whenever you say that, I'll show your post where you contradict yourself.
Again you are taking things out of context. The post you quoted is made as a response to a discussion where false statements were made about Xbox and Playstation having very similar audio projects. Basically, Project Acoustics was brought up as a "counter" to PS5 Tempest. The point I made was that the goals of both are completely different. Reiterating: Xbox wants to do room acoustics simulation and Playstation wants to do 3D binaural audio by means of HRTF. Is that their sole purpose? No. Is it their primary purpose as described by Sony and Microsoft? Yes.

I have never claimed that the Tempest audio chip is not capable of other audio calculations. In fact, I have even stated this as an added benefit. So again, stop this crusade and just focus on stuff that actually matters.
It's the same thing as using RT audio + convolution reverb, and that's real time; RT audio is cheap compared to other RT effects like shadows, reflections, etc.
So you're wrong again :messenger_grinning_squinting: We can expect full room acoustics simulation, and devs have the choice to do the same offline prebaking for the PS5, because there's no secret sauce needed in an audio chip to do those calculations. Like I said, everything you hear coming from headphones or speakers is just DSP signals, which the Tempest chip excels at.
It's not the same thing. RT audio+effects will always be an approximation of room acoustics (which might be very good, don't misunderstand me!). What Xbox wants to achieve is actual room acoustics simulation. There is a distinction, whether or not we will notice it in games, that's a very different question.

Apparently Microsoft has decided that there is merit to an offline baked audio solution, even though the XSX's raytracing capabilities will probably be slightly more powerful than the PS5's. So the Xbox will be able to do RT audio easily, but they still chose to use Project Acoustics. It will be interesting to find out why!

So I'm asking you again: stop the pettiness and let us have a technical discussion without all the wanting to prove me wrong on semantics and wanting one console to be better. I invite you.
 
D

Deleted member 775630

Unconfirmed Member
Does PS5 allow you to hook up a boring USB HDD with PS4 games on it and it can play them (you just don’t get PS5 perks)?

SeX allows you to. The caveat is no Series X perks, and any game designed specifically for the SeX has to run off the internal or Seagate SSD.
Yes it does, but the PS5 doesn't have the backwards-compatibility upgrades that the XSX offers. Like, for example, the full HDR implementation for every game, so games that didn't support HDR suddenly do, because they use machine learning to enhance the experience, all at a hardware level, so developers don't need to do anything for this. It was amazing to see this in DF's video; they used a Fusion Frenzy example. (Timestamped)
 

sinnergy

Member
ROPs are tied to the number of shader arrays (4). Xbox consoles since the XB1 have always had fewer ROPs and less pixel performance than PlayStation consoles (which could be seen in a few games, XBX included). I expect the same for next gen, when there will be lots of alpha effects.
Xbox could end up with 80 ROPs, right? The number of ROPs isn't confirmed yet.
 

Shmunter

Member
Yes it does, but the PS5 doesn't have the backwards compatibility upgrades that XSX offers. Like for example the full HDR implementation for every game, so games that didn't support HDR, suddenly do. Because they use machine learning to enhance this experience, and all on a hardware level so developers don't need to do anything for this. Was amazing to see this in the video of DF, they used a Fusion Frenzy example. (Timestamped)

I'm not expecting Xbox-class BC from Sony either. BUT to be fair, we haven't heard anything about what the PS5's boost mode offers for PS4 games yet. Indeed, we haven't heard about any system or quality-of-life features yet. Blue-balling seems to be the Japanese way.
 

BluRayHiDef

Banned
I think the more likely scenario is 64 ROPs on XSX, based on tech we are familiar with. 96 is almost science fiction for a console at this point. What are the calcs using the 64-ROP scenario?

Edit: the previous poster may be bringing something new to the table. Whether legitimate is anyone’s guess.

PS5's GPU:
64 ROPs x 2,230 MHz = 142,720 million pixels per second

XSX's GPU:
64 ROPs x 1,825 MHz = 116,800 million pixels per second

142,720 / 116,800 = 1.221918 -> 122.19%

Or...

116,800 / 142,720 = 0.818386 -> 81.84%

The rasterization rate of the PS5's GPU would be 122.19% of that of the XSX's GPU or - in other words - the rasterization rate of the XSX's GPU would be 81.84% of that of the PS5's GPU.
 

RaySoft

Member
Not accurate, indeed. But those TFs are made up of CUs, aren't they? So, an old audio tech that needed 4 full CUs to process: what does that mean? PS3 sound quality was so superior to PS4/XB1, and people are not taking notes here:

Anyone else notice that the PS4's overall sound quality seems to pale in comparison to the PS3's? I'm not talking about the options and flexibility that the PS3 offers over the PS4; I'm talking about the overall quality of the output, particularly via LPCM over HDMI. The best demonstration of this is listening to tracks through Music Unlimited. On the PS3 with the HQ audio setting turned on, tracks sound pretty close to CD quality, with plenty of dynamic range and fullness to the sound. But on PS4, the same track (I literally have done an A-B comparison) sounds compressed, thin, and low-res (yes, even with HQ mode on). My wife even noticed how low it sounded and how much of the range was missing. There are whole frequency ranges, including the highest, that are just missing on PS4. It's most noticeable with Music Unlimited since it's a lot easier to evaluate music, but it's not just a MU issue. Several cross-gen games I've tried sound "thin", low, and more compressed on PS4.

Anyone else notice this? Again, my AV pre/pro and general system configs between the two systems are identical. Is it just something Sony needs to address via firmware? Or could it be the difference between using the TrueAudio processing on the GPU on PS4 vs the Cell on the PS3?



Take it from Markitect:

PS5 lead system architect Mark Cerny says the new focus on audio in PS5 is about finding "new ways to expand and deepen gaming." Where the PS3 was "a beast when it came to audio," Cerny says "it's been tough going making forward progress on audio with PS4."


Other discussions:


The Cell could handle audio like a champ. SPEs were perfect for these types of tasks.
 
Looking at RT on/off comparison videos on YouTube, I would say RT's benefit is minimal at best, and sometimes it even looks worse.

  1. Is there a benefit to implementing RT at all in the next-gen systems, then?
  2. Why waste computational resources on an inefficient process?
  3. What are your expectations for AA and AAA games next gen regarding RT?
  4. Those games that try to push the best-looking visuals possible, with better realism and more eye-popping graphics: do you see them implementing RT?
  5. Also, am I correct in my understanding that devs can use offline RT to approximate the kind of lighting they need to bake, so that in-game lighting will not be too computationally intensive? Same effect but fewer GPU resources?
Edit: Also, is baked lighting (e.g. lightmaps) a data-driven technique? I mean, the better the baked lighting, the bigger the data?


Sorry if my questions are too basic.

1. Yeah, I believe it is, though I also think they felt the need to implement RT because it's the hot new stuff everyone talks about, even if it's still not really effective.

2. Because of marketing buzzwords, etc.

3. Some will use RT heavily, others will use it in moderation. I don't expect to be blown away in AAA titles because they already had good lighting. AA titles might gain more of an advantage from RT; see Minecraft as a reference, which really looks a lot better with RT.

4. Yeah, I think so. The best case would be to use RT where it's possible and saves resources (be it human resources in production or performance resources while running the game).

5. Probably possible, but it wouldn't be as good as full use of RT in every possible way. We will probably have to wait for RT hardware architecture to mature over time. My guess is RT will mature alongside 4K before anybody is willing to jump to 8K. Most people still use Full HD, so 4K will only mature now with the new consoles, and so will RT usage. Maybe 8K will come after next gen (so in about 8 years, 2029-2030?), although such predictions are hard to make.

btw, just recently some research group finally made a first step towards photon-based microchips.
Wonder how this will turn out. Could be one of those future technologies that raises computation to a new level.


PlayStation 5's GPU:
2,230 MHz (frequency)
2,304 SUs (shading units)
144 TMUs (texture mapping units)
64 ROPs (render output units)
36 CUs (compute units)
4 MB of L2 cache

Xbox Series X's GPU:
1,825 MHz (frequency)
3,328 SUs (shading units)
208 TMUs (texture mapping units)
80 ROPs (render output units)
52 CUs (compute units)
5 MB of L2 cache

-------------------------- Shading Rate Difference

PS5's GPU:
2,304 SUs x 2,230 MHz = 5,137,920 million shading operations per second

XSX's GPU:
3,328 SUs x 1,825 MHz = 6,073,600 million shading operations per second

5,137,920 / 6,073,600 = 0.845943 -> ≈ 84.59%

The PlayStation 5's shading rate is 84.59% of the Xbox Series X's shading rate.

-------------------------- Fillrate Difference

PS5's GPU:
144 TMUs x 2,230 MHz = 321,120 million texels per second

XSX's GPU:
208 TMUs x 1,825 MHz = 379,600 million texels per second

321,120 / 379,600 = 0.845943 -> ≈ 84.59%

The PlayStation 5's fill rate is 84.59% of the Xbox Series X's fill rate.

-------------------------- Render Output Rate Difference

PS5's GPU:
64 ROPs x 2,230 MHz = 142,720 million pixels per second

XSX's GPU:
80 ROPs x 1,825 MHz = 146,000 million pixels per second

142,720 / 146,000 = 0.977534 -> ≈ 97.75%

The PS5's render output rate is 97.75% of the XSX's render output rate.

-------------------------- Compute Rate Difference

PS5's GPU:
36 CUs x 2,230 MHz = 80,280 million CU-cycles per second (equivalently, 10.28 TFLOPs at 64 lanes x 2 FLOPs per CU per cycle)

XSX's GPU:
52 CUs x 1,825 MHz = 94,900 million CU-cycles per second (equivalently, 12.15 TFLOPs)

80,280 / 94,900 = 0.845943 -> ≈ 84.59%

The PS5's compute rate is 84.59% of the XSX's compute rate.

-------------------------- L2 Cache Difference

PS5:
4 MB x 2,230 MHz = 8,920 (capacity x clock)

XSX:
5 MB x 1,825 MHz = 9,125 (capacity x clock)

8,920 / 9,125 = 0.977534 -> ≈ 97.75%

The PS5's L2 capacity x clock product is 97.75% of the XSX's (note this is only a rough proxy; actual L2 bandwidth depends on bus width, not capacity).

________

So, ah, are these calculations correct?
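They check out as ratios: each stage's throughput scales as unit count times clock, so every comparison reduces to (units_ps5 x clk_ps5) / (units_xsx x clk_xsx). A quick sketch in Python, using the spec numbers from the post above:

```python
# Spec numbers quoted in the thread (clocks in MHz).
PS5 = {"clk": 2230, "SUs": 2304, "TMUs": 144, "ROPs": 64, "CUs": 36}
XSX = {"clk": 1825, "SUs": 3328, "TMUs": 208, "ROPs": 80, "CUs": 52}

def ratio(unit):
    """PS5 throughput as a fraction of XSX throughput for one pipeline stage."""
    return (PS5[unit] * PS5["clk"]) / (XSX[unit] * XSX["clk"])

for unit in ("SUs", "TMUs", "ROPs", "CUs"):
    print(f"{unit}: PS5 = {ratio(unit):.2%} of XSX")
# SUs, TMUs and CUs all come out at ~84.59%; ROPs (64 vs 80) at ~97.75%.
```

SUs, TMUs, and CUs land on the same 84.59% because all three counts scale together with the CU count; only the ROP ratio differs.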

Finally, someone who presents next-gen data in good shape.
Thanks for the effort!

96 ROPs would increase the difference in the rasterization rates between the consoles from 2.25% to 18.54%.

PS5's GPU:
64 ROPs x 2,230 MHz = 142,720 million pixels per second

XSX's GPU:
96 ROPs x 1,825 MHz = 175,200 million pixels per second

142,720 / 175,200 = 0.814612 -> 81.46%

The PS5 would have a rasterization rate that would be 81.46% of the XSX's.

100% - 81.46% = 18.54%

By the way, would these percentage differences that I've calculated result in any significant perceivable differences in performance between the consoles?

Also, to accommodate these differences, would developers be more likely to proportionally decrease polygon count and texture resolutions for the PS5 to match its framerates with those of the XSX or would they keep the polygon count and texture resolutions the same and therefore cause the PS5 to have lower framerates?

How perceivable would the differences in polygon count and texture resolutions or framerates be?

1. Yes and no. This will probably depend on first-party titles.

2. I fear they'll focus on polygon count and so forth instead of framerates, which I would prefer.
Hopefully they'll give us the option to choose between both versions.

3. My guess is that in multiplatform titles those differences will be minor, and only those who look for them will see something that might be off. Most people won't notice any significant difference.
This might, however, depend on the usage of RT: some titles might depend so heavily on certain RT features that one of the two consoles has serious issues with them. (Probably the PS5's RT is weaker.) However, this remains to be seen in action once next gen gets started.

I haven't seen any AAA title yet that would look distinctly better because of RT.
 

psorcerer

Banned
Looking at RT on/off comparison videos on YouTube, I would say RT's benefit is minimal at best, and sometimes it even looks worse.

  1. Is there a benefit to implementing RT at all in the next-gen systems, then?
  2. Why waste computational resources on an inefficient process?
  3. What are your expectations for AA and AAA games next gen regarding RT?
  4. Those games that try to push the best-looking visuals possible, with better realism and more eye-popping graphics: do you see them implementing RT?
  5. Also, am I correct in my understanding that devs can use offline RT to approximate the kind of lighting they need to bake, so that in-game lighting will not be too computationally intensive? Same effect but fewer GPU resources?
Edit: Also, is baked lighting (e.g. lightmaps) a data-driven technique? I mean, the better the baked lighting, the bigger the data?


Sorry if my questions are too basic.

1. There's no benefit, IMHO. But there is a lot of hype, and some might want to ride it.
2. Usually it's when all other options are out. And on a small scale it may not be that bad, if used for specific things like shadows.
3. No idea, but my impression from Cerny's talk was that he doesn't think there'll be much of it. On the other hand, MSFT will push it hard, because "power" bullshit.
4. Obviously, but for some specific effects, like CoD these days, for example.
5. They can use much more realistic techniques offline; after all, RT is not even remotely the best one. Real-time graphics is all about faking the light equation as closely as possible, i.e. you need to use a specific solution for a specific problem. Any global solution will work too if it's linear enough. RT is non-linear stochastic crap (only primary rays can be linearized).
6. Everything is data-driven; naive RT with no data will still look like shit.

I understand why NV does it: their giant GPUs are heavily underutilized because they inserted a lot of smaller cores (tensor, integer, etc.), and now they need to use them.
Using stochastic methods and then smoothing the result with these cores is what they will do. I don't think it's a good path for AMD, but MSFT seems to like RT (see DX12U), so AMD needs to do it too.
 

BluRayHiDef

Banned
1. Yeah, I believe it is, though I also think they felt the need to implement RT because it's the hot new stuff everyone talks about, even if it's still not really effective.

2. Because of marketing buzzwords, etc.

3. Some will use RT heavily, others will use it in moderation. I don't expect to be blown away in AAA titles because they already had good lighting. AA titles might gain more of an advantage from RT; see Minecraft as a reference, which really looks a lot better with RT.

4. Yeah, I think so. The best case would be to use RT where it's possible and saves resources (be it human resources in production or performance resources while running the game).

5. Probably possible, but it wouldn't be as good as full use of RT in every possible way. We will probably have to wait for RT hardware architecture to mature over time. My guess is RT will mature alongside 4K before anybody is willing to jump to 8K. Most people still use Full HD, so 4K will only mature now with the new consoles, and so will RT usage. Maybe 8K will come after next gen (so in about 8 years, 2029-2030?), although such predictions are hard to make.

btw, just recently some research group finally made a first step towards photon-based microchips.
Wonder how this will turn out. Could be one of those future technologies that raises computation to a new level.




Finally, someone who presents next-gen data in good shape.
Thanks for the effort!



1. Yes and no. This will probably depend on first-party titles.

2. I fear they'll focus on polygon count and so forth instead of framerates, which I would prefer.
Hopefully they'll give us the option to choose between both versions.

3. My guess is that in multiplatform titles those differences will be minor, and only those who look for them will see something that might be off. Most people won't notice any significant difference.
This might, however, depend on the usage of RT: some titles might depend so heavily on certain RT features that one of the two consoles has serious issues with them. (Probably the PS5's RT is weaker.) However, this remains to be seen in action once next gen gets started.

I haven't seen any AAA title yet that would look distinctly better because of RT.

Like you, I'd prefer that they lower polygon count and texture resolutions in order to maintain high framerates (60 fps).

As for ray tracing, I honestly don't think that either console will be powerful enough to implement it to a significant degree. The most powerful RTX card from Nvidia can render games at decent framerates with ray tracing activated only at 1080p. So I doubt that the consoles will be able to do better, even if their GPUs are based on a new architecture from AMD. The XSX had to run Minecraft at 1080p with ray tracing turned on in order to perform well, and Minecraft isn't a graphically demanding game.
 
6. Everything is data-driven; naive RT with no data will still look like shit.

Can a baked lighting solution look better than real-time RT? I'm thinking RT's advantage is dynamism? Meaning it will allow for more destructible environments, because the lighting is not baked?

Using stochastic methods and then smoothing the result with these cores is what they will do. I don't think it's a good path for AMD, but MSFT seems to like RT (see DX12U), so AMD needs to do it too.

I had to google stochastic and I still didn't understand. lol. Can you explain to me like I'm 5? Are you referring to DLSS 2.0?
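Rough ELI5: "stochastic" just means using randomness. You trace a handful of random rays per pixel, average them, and then denoise the grainy result (that's the "smoothing" part; it's not DLSS 2.0, which is an upscaler). A toy Python sketch of why averaging more random samples gives a smoother answer — estimating π by throwing random darts, purely illustrative and nothing console-specific:

```python
import random

def estimate_pi(num_samples, rng):
    """Stochastic (Monte Carlo) estimate: throw random darts at a unit
    square and count how many land inside the quarter circle."""
    inside = sum(
        1 for _ in range(num_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / num_samples

rng = random.Random(42)
# Few samples -> noisy ("grainy") result, like 1 ray per pixel.
rough = estimate_pi(100, rng)
# Many samples -> smooth result, like many rays (or a denoiser's average).
smooth = estimate_pi(100_000, rng)
print(rough, smooth)
```

The denoising cores are effectively a shortcut: instead of paying for the "many samples" case, you take few samples and let hardware smooth out the noise.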
 

ZywyPL

Banned
it didn't age well :D

That's exactly what I mean and that's exactly proving my point - the XBX SSD is up to 4800% faster than an ordinary HDD, I repeat - 4800%, but it doesn't matter; what matters for the devs is the actual raw numbers, which apparently aren't good enough to do some new fancy things. In the same way, Pro is 230% stronger than the base PS4, but it doesn't matter; what matters is those raw 4.2TF, which are simply not sufficient for 4K resolution.
 

Darius87

Member
Again you are taking things out of context. The post you quoted is made as a response to a discussion where false statements were made about Xbox and Playstation having very similar audio projects. Basically, Project Acoustics was brought up as a "counter" to PS5 Tempest. The point I made was that the goals of both are completely different. Reiterating: Xbox wants to do room acoustics simulation and Playstation wants to do 3D binaural audio by means of HRTF. Is that their sole purpose? No. Is it their primary purpose as described by Sony and Microsoft? Yes.

I have never claimed that the Tempest audio chip is not capable of other audio calculations. In fact, I have even stated this as an added benefit. So again, stop this crusade and just focus on stuff that actually matters.

It's not the same thing. RT audio+effects will always be an approximation of room acoustics (which might be very good, don't misunderstand me!). What Xbox wants to achieve is actual room acoustics simulation. There is a distinction, whether or not we will notice it in games, that's a very different question.

Apparently Microsoft has decided that there is merit to an offline baked audio solution, even though the XsX raytracing capabilities will probably be slightly more powerful than the PS5's. So the Xbox would be able to do RT audio easily, but they still chose to use Project Acoustics. It will be interesting to find out why!

So I'm asking you again: stop the pettiness and let us have a technical discussion without all the wanting to prove me wrong on semantics and wanting one console to be better. I invite you.

i'm not crusading against you or anything. if your comparison was out of context, why not edit and correct it? that would have stopped me from repeatedly pointing it out as a false statement. either way, it's still a false statement about full room simulation, and i think the problem is that you don't really know what convolution reverb is or how it's made. that's why you're saying it's not the same thing as RT audio, when in reality it is the same: it works exactly like RT. even without reverberation effects on the audio sources, RT by itself should produce a convolved reverb, and its quality depends on how many bounces the RT can achieve.
even without RT audio, there's nothing stopping devs from doing the same thing as MS's approach to save compute cycles (like prebaked shadows, etc.), though it's more of a hurdle for devs than RT audio. that's all there is to it: saving cycles.
so if you know some real advantage of MS's approach over Sony's, i'm all ears.
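For what it's worth, convolution reverb really is just convolution: the dry signal convolved with an impulse response, which an RT audio system can estimate by tracing ray bounces through the room. A minimal pure-Python sketch with toy numbers (not a real impulse response):

```python
def convolve(dry, impulse_response):
    """Direct convolution: each IR tap is a delayed, attenuated echo
    of the dry signal; summing all taps yields the reverberated signal."""
    out = [0.0] * (len(dry) + len(impulse_response) - 1)
    for i, sample in enumerate(dry):
        for j, tap in enumerate(impulse_response):
            out[i + j] += sample * tap
    return out

# Toy impulse response: direct sound plus two decaying "bounces".
ir = [1.0, 0.0, 0.5, 0.0, 0.25]
dry = [1.0, 0.0, 0.0, 0.0]   # a single click
wet = convolve(dry, ir)
print(wet)  # prints [1.0, 0.0, 0.5, 0.0, 0.25, 0.0, 0.0, 0.0]
```

The quality argument above maps directly onto the IR: more traced bounces means a longer, more detailed impulse response, and therefore a more convincing reverb tail.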
 

FeiRR

Banned
It would amuse me greatly if Sony revealed that the PS5 had 2-4GB of dedicated OS RAM, leaving the full 16GB available to developers. Unlikely to happen, however. RAM might not be all that expensive, but it's expensive enough to push the price up.
I don't think it's necessary to have any RAM buffer for the OS. When you switch from the OS to a game/app, its state can be pushed to the SSD in half a second. Add a fade transition and you'll have the impression it's been smooth.
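The half-second figure roughly checks out, assuming an OS working set of around 2-3GB and the announced ~5.5GB/s raw read speed (both numbers are assumptions here, not confirmed OS specs):

```python
def swap_time_seconds(state_gb, read_gb_per_s):
    """Time to stream a suspended OS/app state back in from the SSD."""
    return state_gb / read_gb_per_s

# Assumed OS state sizes vs the announced ~5.5 GB/s raw read speed.
for state in (2.0, 3.0):
    t = swap_time_seconds(state, 5.5)
    print(f"{state:.0f} GB at 5.5 GB/s -> {t:.2f} s")
```

So even without a dedicated RAM buffer, the swap would sit in the 0.4-0.5s range — short enough to hide behind a fade.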
Like you, I'd prefer that they lower polygon count and texture resolutions in order to maintain high framerates (60 fps).

As for ray tracing, I honestly don't think that either console will be powerful enough to implement ray tracing to a significant degree. The most powerful RTX card by Nvidia can render games with decent framerates when ray tracing is activated at only 1080p. So, I doubt that consoles would be able to do better, even if their GPUs are based on a new architecture from AMD. XSX had to run Minecraft at 1080p with ray tracing turned on in order to perform well, and Minecraft isn't a graphically demanding game.
I like my textures sharp. It's been a huge problem this generation: I have a game at 4K, but textures pop in or just remain blurred up close. I hope we'll be able to forget about that soon. I also don't think getting stable 60 FPS should be a problem with the CPUs we're getting.

RayTracing, on the other hand, seems like the next gimmick we're going to hear a lot about with very little effect in practice. The tech isn't there quite yet for a full implementation. But you can use it in certain elements (window/puddle reflections, mirrors in fixed viewpoint scenes, shadows, etc.) and still get good results. Good devs will find their ways. This is 20 years old, no GPU acceleration.
 

Gediminas

Banned
That's exactly what I mean and that's exactly proving my point - the XBX SSD is up to 4800% faster than an ordinary HDD, I repeat - 4800%, but it doesn't matter; what matters for the devs is the actual raw numbers, which apparently aren't good enough to do some new fancy things. In the same way, Pro is 230% stronger than the base PS4, but it doesn't matter; what matters is those raw 4.2TF, which are simply not sufficient for 4K resolution.
your comment was :

It's basically Bugatti Veyron vs Chiron - sure, the latter has 50% more HP, more VMax etc., but that doesn't mean Veyron is slow by any means.

But on the other hand, I'm pretty sure any dev whenever asked would take extra 2TF instead of half, extra 8GB RAM instead of 256MB, additional 8 threads instead of 2, and so on, because ultimately that's what they are working on, an actual hardware with actual specs, not on percentages or X-times multipliers.

Those 2TF might be as worthy as 2 PS4, enough to render TLoU2 and GoW4 for example, that's a lot if you ask me, not "just XX%". But then again, Sony might still stick to 30FPS, whereas MS clearly wants to deliver 60 in their games, so in simple math Sony's games can potentially have more TF/frame, and as a result have actually better visuals at the cost of framerate.


especially this part : " I'm pretty sure any dev whenever asked would take extra 2TF instead of half, extra 8GB RAM instead of 256MB, additional 8 threads instead of 2, and so on, because ultimately that's what they are working on, an actual hardware with actual specs, not on percentages or X-times multipliers."

the point is: the developer came out and said otherwise. He is not even taking that extra RAM or CPU power; instead he is choosing the PS5's revolutionary next-gen SSD over extra power.

you can't even own your own words.
 

Neo Blaster

Member
What I want to hear is instant resume of multiple titles on PS5. Killer feature for XsX.
That Crytek engineer did confirm that for PS5. He said that while XSX took 6 seconds to switch games, PS5 would do it in less than a second. There are even conspiracy theories that it was Sony who dropped the NDA hammer on that article, because he talked about a console feature not yet revealed by them.
 

Neo Blaster

Member
Yes it does, but the PS5 doesn't have the backwards compatibility upgrades that XSX offers, like the full HDR implementation for every game, so games that didn't support HDR suddenly do. They use machine learning to enhance this, all on a hardware level, so developers don't need to do anything for it. It was amazing to see this in DF's video; they used a Fuzion Frenzy example. (Timestamped)

Sony barely talked about BC, I think it's too early to claim anything PS5 has or hasn't.
 

azertydu91

Hard to Kill
Talking about things the PS5 does not have without knowing seems to be a thing nowadays. By default, we assumed things such as no RDNA2, no RT, no VRS. All things considered, I think we should just assume zero OS features.
It doesn't have a power cable ... because if it had one, they would've mentioned it in their GDC video.

As of now, it is just a controller plugged into an SSD, according to some.
 

ZywyPL

Banned
Where are you guys getting the TMU, ROP etc. numbers from? As far as I know neither Sony nor MS nor AMD provided full specs of the next-gen consoles GPUs, I don't think they ever will, but maybe I missed something?

the point is: the developer came out and said otherwise. He is not even taking that extra RAM or CPU power; instead he is choosing the PS5's revolutionary next-gen SSD over extra power.

The dev prefers an extra 8-9GB/s vs 100MB/s instead of an extra 4.7GB/s vs 100MB/s, despite the latter being a whole 4800% difference vs an HDD. More is always better, but more ACTUAL bandwidth, memory, and processing power, not more percentage points. And the way you and some others paint it, a car with an extra 20HP is the same as a car with an extra 200HP, if not better, which is just plain wrong. That's what that "most profound moment" in MGS2 was talking about: people are being given content without context and are making everything up themselves. Like I've said a few times already, the first batch of 3rd-party titles will show what's what.
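The absolute-vs-relative point can be made concrete. Using the commonly cited compressed throughput figures (~4.8GB/s for XSX, ~9GB/s for PS5, against a ~0.1GB/s HDD; treat all three as assumptions, not confirmed sustained numbers):

```python
# Assumed compressed throughput figures, in GB/s.
HDD, XSX_SSD, PS5_SSD = 0.1, 4.8, 9.0

def multiplier(new, old):
    """How many times faster 'new' is than 'old'."""
    return new / old

print(f"XSX vs HDD: {multiplier(XSX_SSD, HDD):.0f}x")   # the '4800%' figure
print(f"PS5 vs HDD: {multiplier(PS5_SSD, HDD):.0f}x")
# Both multipliers are enormous, but a streaming budget sees the
# absolute gap, not the percentage over a baseline nobody targets:
print(f"Absolute gap PS5 - XSX: {PS5_SSD - XSX_SSD:.1f} GB/s")
```

Which is exactly the argument: against an HDD baseline both look absurdly fast, but devs budget against the actual GB/s available per frame.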
 

BluRayHiDef

Banned
your comment was :

It's basically Bugatti Veyron vs Chiron - sure, the latter has 50% more HP, more VMax etc., but that doesn't mean Veyron is slow by any means.

But on the other hand, I'm pretty sure any dev whenever asked would take extra 2TF instead of half, extra 8GB RAM instead of 256MB, additional 8 threads instead of 2, and so on, because ultimately that's what they are working on, an actual hardware with actual specs, not on percentages or X-times multipliers.

Those 2TF might be as worthy as 2 PS4, enough to render TLoU2 and GoW4 for example, that's a lot if you ask me, not "just XX%". But then again, Sony might still stick to 30FPS, whereas MS clearly wants to deliver 60 in their games, so in simple math Sony's games can potentially have more TF/frame, and as a result have actually better visuals at the cost of framerate.


especially this part : " I'm pretty sure any dev whenever asked would take extra 2TF instead of half, extra 8GB RAM instead of 256MB, additional 8 threads instead of 2, and so on, because ultimately that's what they are working on, an actual hardware with actual specs, not on percentages or X-times multipliers."

the point is: the developer came out and said otherwise. He is not even taking that extra RAM or CPU power; instead he is choosing the PS5's revolutionary next-gen SSD over extra power.

you can't even own your own words.

As has been said many times, teraflops don't tell the entire story. For example, despite the Xbox One X having a peak of 6 teraflops to the PS4 Pro's 4.2, in order to run Resident Evil 3 at 60 frames per second the Xbox One X had to be patched to drop its rendering resolution from native 4K to 2880 x 1620... which is the same resolution at which the PS4 Pro renders the game to maintain 60 frames per second. The image quality is still better on the Xbox One X, but insignificantly so (Digital Foundry had to zoom in on frames from each version to show the difference). So, if a difference of 1.8 teraflops (6 Tf - 4.2 Tf = 1.8 Tf) is practically imperceptible within the range of graphical fidelity the current generation of consoles is capable of, why would a practically identical difference (12.155 Tf - 10.28 Tf = 1.875 Tf) be any more perceptible, especially at or near 4K, where pixels are so numerous and tiny that it's more difficult to discern any appreciable difference?
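The arithmetic in that comparison is easy to sanity-check (console TFLOPS figures as publicly stated; everything else is simple ratio math):

```python
# Pixel counts: native 4K vs the patched 2880x1620 ("1620p").
native_4k = 3840 * 2160        # 8,294,400 px
patched   = 2880 * 1620        # 4,665,600 px
print(f"1620p is {patched / native_4k:.0%} of 4K's pixel count")

# TFLOPS gaps, current gen vs next gen.
cur_gap  = 6.0 - 4.2            # XB1X vs PS4 Pro
next_gap = 12.155 - 10.28       # XSX vs PS5
print(f"Absolute TF gap: {cur_gap:.2f} now vs {next_gap:.3f} next gen")
print(f"Relative gap: {6.0 / 4.2:.2f}x now vs {12.155 / 10.28:.2f}x next gen")
```

Notably, while the absolute gap is nearly identical (~1.8-1.9 TF), the relative gap actually shrinks next gen (about 1.18x vs 1.43x now), which supports the point that the difference should be even harder to perceive.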
 