
Next-Gen PS5 & XSX |OT| Console tEch threaD


Bo_Hazem

Banned
Of course it's needed. Are all the games using UE5? Do all the engines need to work with virtualized geometry from now on?

I don't know why Sony fans are so eager to buy into things like the Tempest engine yet criticise everything Microsoft does. VRS, when correctly implemented, will not be noticeable, and we will gain performance from it. It's not bad at all.

We've only seen bad examples so far; that's why. But we'll see how it pans out.
 
Last edited:

ethomaz

Banned
Sony showing the decals on iPhone instead of Xperia.

I can understand it; they sell more decals for iPhone than for Xperia phones.
They haven't had a hit with Xperia so far.

It is what it is... even so, some Xperia phones in the past had amazing builds and features.
 
Last edited:

onesvenus

Member
Do we know it is using VRS Tier 1 for sure btw?
No, in fact it uses Tier 2 VRS; I was wrong. While looking for the AMD announcement video I watched it again, and it says it's using VRS Tier 2.


It's obvious we are missing something, because when it's used well (see the Gears 5 image) it shows almost no perceptible difference. Those artifacts could be due to a number of different things; I don't know why we are assuming they're from VRS.
 

Tmack

Member

Sony Launches “PlayStation 5 Launch Collection Merchandise Bundle;” Includes Water Bottle, Hat & More


[Image: PS5 Launch Collection merchandise bundle]

PlayStation 5 Launch Collection Merchandise Bundle Contents:

  • Merchandise approved by PlayStation for high quality standards and child safety
  • Dad hat: polyester-wool blend with embossed adjustable clasp
  • 17 oz. stainless steel water bottle: double-wall insulation keeps drinks hot/cold for hours; wide-mouth and leak-proof; BPA-free
  • Knit socks: high-quality polyester-spandex blend; one size fits most
  • Tech decals: premium 3M material; scratch- and water-resistant; leave no residue


Number one best seller in Amazon's “Video Games” “New Releases” chart.

Those color-coded buttons...
 

sinnergy

Member
No, in fact it uses Tier 2 VRS; I was wrong. While looking for the AMD announcement video I watched it again, and it says it's using VRS Tier 2.


It's obvious we are missing something, because when it's used well (see the Gears 5 image) it shows almost no perceptible difference. Those artifacts could be due to a number of different things; I don't know why we are assuming they're from VRS.

Tier 2 is not used in Dirt 5 on Series X as far as I know. Maybe Tier 1; Tier 2 is used on PC, as showcased.
 

Bo_Hazem

Banned
No, in fact it uses Tier 2 VRS; I was wrong. While looking for the AMD announcement video I watched it again, and it says it's using VRS Tier 2.


It's obvious we are missing something, because when it's used well (see the Gears 5 image) it shows almost no perceptible difference. Those artifacts could be due to a number of different things; I don't know why we are assuming they're from VRS.


Because even the PS4 Pro has better far-distance graphics; I believe the XOX does as well.




[Image: XSX vs PS4 Pro distance rendering comparison]


So AMD and the Dirt 5 devs confirming it as the "holy grail" Tier 2 makes it pretty embarrassing to talk about VRS as a "feature".
 
Last edited:

ethomaz

Banned
No, in fact it uses Tier 2 VRS; I was wrong. While looking for the AMD announcement video I watched it again, and it says it's using VRS Tier 2.


It's obvious we are missing something, because when it's used well (see the Gears 5 image) it shows almost no perceptible difference. Those artifacts could be due to a number of different things; I don't know why we are assuming they're from VRS.

If you read MS's own article about VRS Tier 2, it is very case-by-case and the results can vary a lot... so the devs need to see whether it is worth it or not.

Is it worth implementing Tier 2 VRS for my game?

Every engine is different and not all games will benefit equally from VRS. There are 2 things to keep in mind when evaluating VRS:

  1. VRS is an optimization that reduces the amount of pixel shader invocations. As such, it will only see improvement on games that are GPU bound due to pixel shader work.
  2. Tier 2 VRS sees higher performance gains when running at higher resolutions. While actual results will vary based on engine and content, we found that resolutions of 1080p or lower generally saw diminishing returns from Tier 2 VRS.
One of the perks of the VRS API is the ease of integration. By using Tier 1 VRS and adding RSSetShadingRate to the start of all command lists to set the shading rate to 2×2, you can quickly get a sense of the upper bound of the performance gain from VRS. We recommend taking 30-50% of the savings as an estimate of what you’d expect to get back from a proper Tier 2 implementation. It’s also important to look only at the savings of individual passes rather than the whole frame time, ignoring passes that Tier 2 VRS might not apply to. For example, our Tier 2 VRS texture couldn’t be used with a shadow pass since it’s generated from the point of view of the player camera, not the light.
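That quick estimate is literally one call at the top of the command list. A minimal sketch against the D3D12 API, assuming an existing device and command list (everything around the call is assumed, not from the article):

```cpp
// Tier 1 "upper bound" estimate: force one coarse rate for the whole
// command list and measure the frame-time saved. That saving is the
// theoretical ceiling a proper Tier 2 implementation could approach.
#include <d3d12.h>

void BeginFrame(ID3D12GraphicsCommandList5* cmdList, bool estimateVrsGain)
{
    if (estimateVrsGain)
    {
        // 2x2 coarse shading: one pixel-shader invocation covers a
        // 2x2 quad, roughly quartering pixel-shader work.
        cmdList->RSSetShadingRate(D3D12_SHADING_RATE_2X2, nullptr);
    }
    // ... record the frame's draws as usual ...
}
```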


What's the hot take?

- You only see a performance improvement at higher resolutions; in most cases it is useless at 1080p or below.
- You only see an improvement in GPU-bound games.
- It is very limited in where you can use it, and it does not improve frametime everywhere.

About the look... while in some cases it is very close to the original image, you can still spot reduced resolution where VRS Tier 2 was applied (even in Gears 5 you can see it), but if you get the performance boost in frametime it is worth using.
 
Last edited:

ethomaz

Banned
Guys, the point is not the quality of VRS; it does indeed degrade the overall image.
VRS was made to gain frametime, and it really does a great job of that in some cases, where it is worth the image degradation.

It is a trade-off option for devs.

I can see a 4K 30fps Quality Mode with VRS turned off and a 4K 60fps Performance Mode with VRS turned on, on console hardware.
1080p Performance Modes, though, won't see gains from VRS.
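For reference: 30fps means a 33.3 ms frame budget and 60fps means 16.7 ms, so a Performance Mode has to roughly halve the frame cost; VRS would be one contributor to that, alongside the usual cuts.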
 
Last edited:

Allandor

Member
Guys, the point is not the quality of VRS; it does indeed degrade the overall image.
VRS was made to gain frametime, and it really does a great job of that in some cases, where it is worth the image degradation.

It is a trade-off option for devs.

I can see a 4K 30fps Quality Mode with VRS turned off and a 4K 60fps Performance Mode with VRS turned on, on console hardware.
1080p Performance Modes, though, won't see gains from VRS.
Almost exactly, but I must correct the last sentence and the beginning (I just have to :) ... don't ask me why):
- it will not degrade the overall image quality; it should only degrade parts which are not "visible" (due to darkness or generally blurred-out details), if implemented correctly
- 1080p won't see a huge gain on a big GPU, yes. A big GPU will most of the time not be limited by the shading resolution. But the smaller the GPU, the higher the gain ;)

It is just a feature to reduce the work to be done, so the saved cycles can be used elsewhere (e.g. more frames).
 

DeepEnigma

Gold Member
Not sure if seen/discussed.

Website dedicated to PS5 BC games and what they run at.

That's a nifty little site, thanks for sharing.

It would be nice if they would patch The Last Guardian to unlock the framerate so you can keep HDR or use the digital version.
 

Panajev2001a

GAF's Pleasant Genius
No, in fact it uses Tier 2 VRS; I was wrong. While looking for the AMD announcement video I watched it again, and it says it's using VRS Tier 2.


It's obvious we are missing something, because when it's used well (see the Gears 5 image) it shows almost no perceptible difference. Those artifacts could be due to a number of different things; I don't know why we are assuming they're from VRS.

Because it seems to show the artefacts you would expect from essentially "localised lower-resolution rendering". I am not saying it cannot be used well, just that the dev in this case may have been a bit aggressive: "oh, who is going to zoom in and worry about that when the game is 120 Hz, SHIP SHIP SHIP!".
Getting that effect with Tier 1 would have meant more draw calls, as Tier 1 sets the shading rate per draw call only ( https://devblogs.microsoft.com/directx/gears-vrs-tier2/ ), which would seem a bit difficult with a 120 Hz framerate target.
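For contrast, the Tier 2 path takes the rate from a screen-space image instead of the draw call, so a single draw can mix rates. A minimal sketch against the D3D12 API, assuming the shading-rate image (an R8_UINT texture with one texel per hardware tile) is generated each frame; the names here are illustrative:

```cpp
#include <d3d12.h>

void ApplyTier2Rates(ID3D12GraphicsCommandList5* cmdList,
                     ID3D12Resource* vrsImage) // R8_UINT, one texel per tile
{
    // Combiners decide how base, per-primitive, and screen-space rates
    // are merged; here the screen-space image simply wins.
    D3D12_SHADING_RATE_COMBINER combiners[2] = {
        D3D12_SHADING_RATE_COMBINER_PASSTHROUGH, // base vs per-primitive
        D3D12_SHADING_RATE_COMBINER_OVERRIDE,    // result vs image
    };
    cmdList->RSSetShadingRate(D3D12_SHADING_RATE_1X1, combiners);
    cmdList->RSSetShadingRateImage(vrsImage);
}
```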

Thanks for the correction btw, I appreciate the link and updated notes.
 
Last edited:

Dabaus

Banned
Not that I'd use it, but I'm surprised Sony didn't try to do a new, rebooted PS Home or whatever it was called. Not that I want them to nickel-and-dime people, but the potential to sell microtransactions for your own personal hub seems like a no-brainer when your brand awareness is at an all-time high and you have something like a 120 million installed base.
 

Shmunter

Member
Not that I'd use it, but I'm surprised Sony didn't try to do a new, rebooted PS Home or whatever it was called. Not that I want them to nickel-and-dime people, but the potential to sell microtransactions for your own personal hub seems like a no-brainer when your brand awareness is at an all-time high and you have something like a 120 million installed base.
Would be nice. Unfortunately, they must've run it through the computer and concluded that taking time away from games, their DLC, and microtransactions is less profitable.

Hope I’m wrong
 

LucidFlux

Member
Your still image of a single frame? Yeah, my point exactly. No way anyone notices that while playing. That also may have nothing to do with VRS.

I get what you're saying but the asset(s) in question aren't zipping by the player's field of view where the loss of image quality wouldn't be easily noticed. These are prominent areas of focus that hang right in the middle of the screen for long stretches of time.

Now, I agree whatever is causing this may in fact have absolutely nothing to do with VRS, and if it does, it's just a horrible implementation.
 

ethomaz

Banned
Almost exactly, but I must correct the last sentence and the beginning (I just have to :) ... don't ask me why):
- it will not degrade the overall image quality; it should only degrade parts which are not "visible" (due to darkness or generally blurred-out details), if implemented correctly
- 1080p won't see a huge gain on a big GPU, yes. A big GPU will most of the time not be limited by the shading resolution. But the smaller the GPU, the higher the gain ;)

It is just a feature to reduce the work to be done, so the saved cycles can be used elsewhere (e.g. more frames).
A feature that reduces the work to be done at the cost of a degraded image.
It is a trade-off feature... or lossy, if you want to give it a name.

But that is where the industry is moving... the same happened with AA techniques... dynamic resolution and/or temporal reconstruction, etc.
 
Last edited:
I don't know if Dirt 5 is using VRS, but the type of artefacting exhibited on the screens taken from the 120Hz mode is certainly indicative of it. Usually, if it quacks like a duck, it's probably a duck.

That said, it would seem like a poor implementation. Everyone needs to remember that VRS usage among developers is still quite a new thing. And given there's no one all-encompassing implementation (devs have to roll their own), there's very likely to be some iteration and exploration before they get it right.

So a few poor examples of VRS Tier 2 implementations don't mean VRS Tier 2 is shit. It just means it'll take some time for devs to figure it out. As has been said many times from the beginning, VRS Tier 2 conceptually SHOULDN'T perceptibly degrade IQ. If it is, the dev is doing something wrong. And at this stage, that's OK and we should expect it.

If we're still getting weird macro-blocking artefacts on texture detail for large polys four years into the gen, then we need to start asking some deeper questions about the API or hardware setup of MS's VRS implementation. At this stage, however, teething issues are to be expected.

A feature that reduces the work to be done at the cost of a degraded image.
It is a trade-off feature... or lossy, if you want to give it a name.
This is a bit of a gross over-simplification, ethomaz.

Yes, the cost of the performance optimization is degradation of IQ, but the question is, and should be, "by how much". If it's perceptible, then throw the optimization in the bin and look for something else. If it's not, let's go bitches!

VRS Tier 2 SHOULD NOT produce a perceptible degradation in IQ, if implemented as intended. At least for now, Gears 5 provides a clear positive data point for an implementation that achieves the stated goal of the VRS technique. So nobody should be making sweeping statements about the viability of the VRS technique as a whole based on examples of bad implementation.

VRS isn't like some quick-and-dirty AA method where the dev just flips a switch in the renderer to enable it. The dev needs to write their own algorithmic approach to its utilization. Therefore, some implementations will be better or worse than others.
 
Last edited:

onesvenus

Member
A feature that reduces the work to be done at the cost of a degraded image.
It is a trade-off feature... or lossy, if you want to give it a name.
Would you say that checkerboarding or temporal reconstruction is also lossy 4K?
Just checking if that goes both ways
 

ethomaz

Banned
Would you say that checkerboarding or temporal reconstruction is also lossy 4K?
Just checking if that goes both ways
Yes... I have always said that.
The fact that I don't like the PS4 Pro's CBR already tells you a lot.

Temporal reconstruction generates more detail. It's the opposite in function.
Actually, it can't fill in all parts, and the reconstruction falls apart due to artifacts and blur that make the image less detailed.
 
Last edited:
Lossy isn't inherently bad... I think that is the point onesvenus is trying to make.

GPUs internally have been using lossy texture compression formats for decades. Should they throw them in the bin because they're lossy? No... they're valuable because they're freaking fast to compress/decompress.
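To put numbers on that: BC1, for example, packs each 4×4 texel block into 8 bytes versus 64 bytes for raw RGBA8, an 8:1 ratio, and that fixed block layout is what lets the texture units decode it essentially for free at sample time.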

There's always a trade-off between performance vs quality with any lossy technique, and lossless techniques are often an order of magnitude more costly in terms of performance than lossy methods.

The important question is never, "is it lossy". It's "by how much".
 
Last edited:

ethomaz

Banned
Lossy isn't inherently bad... I think that is the point onesvenus is trying to make.

GPUs internally have been using lossy texture compression formats for decades. Should they throw them in the bin because they're lossy? No... they're valuable because they're freaking fast to compress/decompress.

There's always a trade-off between performance vs quality with any lossy technique, and lossless techniques are often an order of magnitude more costly in terms of performance than lossy methods.

The important question is never, "is it lossy". It's "by how much".
Yep, I agree... lossy gives benefits in rendering time.
It is a trade-off, like you said.

It is a dev-by-dev choice of what is worth it for them.

The point is a lot of people try to pass off CBR, DLSS, VRS, etc. as something that gives better IQ, and that is false... they give close-enough IQ with better performance... devs choose them in the cases where the performance gains outweigh the loss in IQ.
 
Last edited:

Shmunter

Member
Yes... I have always said that.
The fact that I don't like the PS4 Pro's CBR already tells you a lot.


Actually, it can't fill in all parts, and the reconstruction falls apart due to artifacts and blur that make the image less detailed.
I don’t know, there are some stellar results out there. If reconstruction served no purpose, it wouldn’t exist.

In all honesty, any dev not doing reconstruction and wasting resources on native resolution is doing it wrong.
 

onesvenus

Member
Temporal reconstruction generates more detail. It's the opposite in function.
It doesn't generate more detail; it uses already-existing detail to fill in what's missing. Can you point me to a temporal reconstruction that uses full 4K images? You won't, because temporal reconstruction is done using partial 4K images, using the previous ones to create a complete 4K image.

It falls completely within ethomaz's definition of lossy: "A feature that reduces the work to be done at the cost of a degraded image."
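As a toy sketch of that "partial image + history" idea (nothing like a shipping CBR pipeline, which reprojects with motion vectors and rejects stale samples; all names here are made up):

```cpp
#include <cstdint>
#include <vector>

struct Frame { int w = 0, h = 0; std::vector<uint32_t> px; };

// Each frame shades only half the target's pixels in a checkerboard
// pattern (currentHalf is full-sized, but only that half is valid);
// the other half is carried over from the previous output.
void ReconstructCheckerboard(const Frame& currentHalf,  // newly shaded
                             const Frame& previousFull, // last output
                             bool evenPhase,            // which half is new
                             Frame& output)
{
    for (int y = 0; y < output.h; ++y)
        for (int x = 0; x < output.w; ++x)
        {
            const size_t i = size_t(y) * output.w + x;
            const bool shadedNow = (((x + y) & 1) == 0) == evenPhase;
            output.px[i] = shadedNow ? currentHalf.px[i]
                                     : previousFull.px[i];
        }
}
```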
 

ethomaz

Banned
About the Gears 5 vs Dirt 5 VRS comparison.

Keep in mind that Gears 5 uses a higher base resolution for VRS than Dirt 5... that, plus the engine implementation, can explain the difference in overall quality.
 
Last edited:

Shmunter

Member
It doesn't generate more detail; it uses already-existing detail to fill in what's missing. Can you point me to a temporal reconstruction that uses full 4K images? You won't, because temporal reconstruction is done using partial 4K images, using the previous ones to create a complete 4K image.

It falls completely within ethomaz's definition of lossy: "A feature that reduces the work to be done at the cost of a degraded image."
Reconstruction constructs a higher-resolution image from a lower-resolution one, whatever the underlying technique.

Lossy compression takes a top-quality source image and removes things to compress it.

It's the direct opposite.
 
Last edited:
Yep, I agree... lossy gives benefits in rendering time.
It is a trade-off, like you said.

It is a dev-by-dev choice of what is worth it for them.

The point is a lot of people try to pass off CBR, DLSS, VRS, etc. as something that gives better IQ, and that is false... they give close-enough IQ with better performance... devs choose them in the cases where the performance gains outweigh the loss in IQ.

CBR and DLSS are fundamentally different beasts from VRS. CBR/Temporal Reconstruction/DLSS improve IQ by increasing resolution. By definition you're generating new information (new pixels). Yes, the IQ is degraded compared to native 4K, but the techniques aren't using a native 4K framebuffer as an input in the first place; they're using one or more lower-resolution buffers. So they do indeed increase IQ by increasing pixel density.

VRS is a pure performance optimization that reduces shading work in areas where you can get away with a lower shading rate (thus degraded IQ). You're not creating new information, so it's by definition lossy and thus very different from the former.

It doesn't generate more detail; it uses already-existing detail to fill in what's missing. Can you point me to a temporal reconstruction that uses full 4K images? You won't, because temporal reconstruction is done using partial 4K images, using the previous ones to create a complete 4K image.

It falls completely within ethomaz's definition of lossy: "A feature that reduces the work to be done at the cost of a degraded image."

Of course they generate new detail. You're creating whole new pixels, increasing the resolution. None of CBR/Temporal Reconstruction/DLSS falls into your definition, as none of them reduces work (you are doing more work than simply rendering the base-resolution image), and all of them increase IQ by increasing pixel density (resolution) over the base image.

The quality of the newly generated pixels is immaterial. The IQ isn't degraded from the base image, as the original base-image pixels are unchanged and are joined by a collection of newly generated pixels to increase resolution. Instead of comparing to a native 4K image, compare with a basic 4K upscale using some shitty bilinear or bicubic filter; then it's more evident that these methods increase IQ over the base image.

It's the opposite of lossy, as has been said.

About the Gears 5 vs Dirt 5 VRS comparison.

Keep in mind that Gears 5 uses a higher base resolution for VRS than Dirt 5... that, plus the engine implementation, can explain the difference in overall quality.

I don't know what the base resolution is for each game, nor have I looked all that closely at the Gears 5 implementation of VRS to see if or where any artifacts appear. However, from the Dirt 5 screens it's pretty clear the issue isn't base rendering resolution. It's that VRS is applied to reduce the shading rate on brightly lit, moderate-contrast areas of the screen. That is the polar opposite of how and where it should be applied.

VRS should ONLY be applied to low-contrast, high-color-uniformity areas of the image; e.g. areas in shadow, or areas of uniform color such as a clear blue sky.

The base rendering resolution will impact the performance improvement gained by using VRS more than the quality of the result, provided VRS is implemented properly. Again I must stress, provided VRS is implemented properly.
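A sketch of that selection rule, assuming a luminance buffer from the previous frame (real engines do this in a compute pass and often also weigh motion and depth; the thresholds here are purely illustrative):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Classify one tile of a luminance buffer into a D3D12 shading rate
// value (0x0 = 1x1, 0x5 = 2x2, 0xA = 4x4) based on min/max contrast:
// coarse rates only where the content is close to uniform.
uint8_t RateForTile(const std::vector<float>& luma, int width,
                    int tileX, int tileY, int tileSize)
{
    float lo = 1.0f, hi = 0.0f;
    for (int y = tileY * tileSize; y < (tileY + 1) * tileSize; ++y)
        for (int x = tileX * tileSize; x < (tileX + 1) * tileSize; ++x)
        {
            const float l = luma[size_t(y) * width + x];
            lo = std::min(lo, l);
            hi = std::max(hi, l);
        }
    const float contrast = hi - lo;
    if (contrast < 0.02f) return 0xA; // near-uniform (sky, shadow): 4x4
    if (contrast < 0.10f) return 0x5; // low contrast: 2x2
    return 0x0;                       // detailed tile: full 1x1 rate
}
```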
 
Last edited:

ethomaz

Banned
CBR and DLSS are fundamentally different beasts from VRS. CBR/Temporal Reconstruction/DLSS improve IQ by increasing resolution. By definition you're generating new information (new pixels). Yes, the IQ is degraded compared to native 4K, but the techniques aren't using a native 4K framebuffer as an input in the first place; they're using one or more lower-resolution buffers. So they do indeed increase IQ by increasing pixel density.

VRS is a pure performance optimization that reduces shading work in areas where you can get away with a lower shading rate (thus degraded IQ). You're not creating new information, so it's by definition lossy and thus very different from the former.

Of course they generate new detail. You're creating whole new pixels, increasing the resolution. None of CBR/Temporal Reconstruction/DLSS falls into your definition, as none of them reduces work (you are doing more work than simply rendering the base-resolution image), and all of them increase IQ by increasing pixel density (resolution) over the base image.

The quality of the newly generated pixels is immaterial. The IQ isn't degraded from the base image, as the original base-image pixels are unchanged and are joined by a collection of newly generated pixels to increase resolution. Instead of comparing to a native 4K image, compare with a basic 4K upscale using some shitty bilinear or bicubic filter; then it's more evident that these methods increase IQ over the base image.

It's the opposite of lossy, as has been said.

I don't know what the base resolution is for each game, nor have I looked all that closely at the Gears 5 implementation of VRS to see if or where any artifacts appear. However, from the Dirt 5 screens it's pretty clear the issue isn't base rendering resolution. It's that VRS is applied to reduce the shading rate on brightly lit, moderate-contrast areas of the screen. That is the polar opposite of how and where it should be applied.

VRS should ONLY be applied to low-contrast, high-color-uniformity areas of the image; e.g. areas in shadow, or areas of uniform color such as a clear blue sky.

The base rendering resolution will impact the performance improvement gained by using VRS more than the quality of the result, provided VRS is implemented properly. Again I must stress, provided VRS is implemented properly.
You are right.

My mistake was assuming that most people compare the resolution techniques with the native resolution they are supposed to replace (the target resolution)... in which case the native option has better IQ and more detail.

But what you said is right... a 1800p base resolution with a resolution technique to reach 4K will indeed look better than native 1800p... not just look better, it will also be more expensive for the hardware to render... of course, that doesn't change the fact that the target resolution (4K) will have better IQ and be hungrier for processing power than 1800p + reconstruction.
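Rough numbers for scale: native 4K is 3840×2160 ≈ 8.29M pixels, while 1800p is 3200×1800 = 5.76M, so the native target shades about 44% more pixels per frame; 1800p + reconstruction sits between the two in both cost and IQ.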

It is a trade-off when compared with the target resolution... but it is an improvement when compared with the base resolution.

Lossy is really not the best word here.
 
Last edited:

Ramz87

Member
What's happening with GT7 and GoW Ragnarok? Haven't heard anything on those for months now. Surely we're due a State of Play or something soon!
 

ethomaz

Banned
What's happening with GT7 and GoW Ragnarok? Haven't heard anything on those for months now. Surely we're due a State of Play or something soon!
I believe the focus is to release R&C in March/April.

After that, they will probably have two more first-party titles this year... GT7 in October? And a big one for the holidays.

But that is all guesswork.

I'm convinced the next time they talk about GT7 it will be with the release date.
 
Last edited:
What's happening with GT7 and GoW Ragnarok? Haven't heard anything on those for months now. Surely we're due a State of Play or something soon!
Sony were drip-feeding PS5 info throughout last year, and the fans just about managed to put up with it. As HeisenbergFX4 pointed out, this was part of their strategy to “bleed” Microsoft out.

I think they'll do something similar this year with the big game announcements and trailers/gameplay, because Microsoft should likewise have some ammunition ready to fire this year. I don't think that'll be in the form of triple-A releases, but more likely triple-A exclusive announcements, gameplay, and trailers. I cannot wait for Hellblade 2!
 

onesvenus

Member
Of course they generate new detail. You're creating whole new pixels, increasing the resolution. None of CBR/Temporal Reconstruction/DLSS falls into your definition, as none of them reduces work (you are doing more work than simply rendering the base-resolution image), and all of them increase IQ by increasing pixel density (resolution) over the base image.
Compared to a native 4K image you are not generating new detail. And yes, it reduces work compared to rendering a native 4K framebuffer. That's why I said it fits ethomaz's definition of lossy. You are doing less work than you would to get a native 4K framebuffer.
 

Dabaus

Banned
Sony were drip-feeding PS5 info throughout last year, and the fans just about managed to put up with it. As HeisenbergFX4 pointed out, this was part of their strategy to “bleed” Microsoft out.

I think they'll do something similar this year with the big game announcements and trailers/gameplay, because Microsoft should likewise have some ammunition ready to fire this year. I don't think that'll be in the form of triple-A releases, but more likely triple-A exclusive announcements, gameplay, and trailers. I cannot wait for Hellblade 2!
I'd say with the Bethesda acquisition Microsoft bled Sony out, and I'm not even a Microsoft fan.
 