
Software-based Variable Rate Shading in Call of Duty: Modern Warfare

Dv24Fm5.jpg

It's funny when it makes it look worse than PS4 Pro.
 
I don't think the page numbers are right; page 12 is just an image, and page 18 only talks about being able to match the flexibility of VRS Tier 1.
Anyway, having read the presentation, they are using an image-based mask instead of specifying the shading rate per primitive. If that's how VRS will be used in the future (which, frankly, I don't know), it seems like Microsoft really missed the mark by only allowing 8x8 and larger tile sizes.
The main point of that presentation is that VRS has been used even on the PS4 without the people on this forum knowing, making them look like fools when they claim VRS, in general, is the worst thing to happen to computer graphics.
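The image-based mask approach described above can be sketched in a few lines. This is a toy illustration (the `build_vrs_mask` name, the contrast threshold, and the 1x1/2x2-only rates are all my assumptions), not the actual COD implementation:

```python
# Toy sketch of a screen-space shading-rate mask: one rate per 8x8 tile,
# chosen from local luminance contrast. Flat tiles can be shaded coarsely
# with little visible loss; detailed tiles keep full rate.

def build_vrs_mask(luma, tile=8, threshold=0.1):
    """luma: 2D list of floats in [0, 1]; returns a 2D list of rate strings."""
    h, w = len(luma), len(luma[0])
    mask = []
    for ty in range(0, h, tile):
        row = []
        for tx in range(0, w, tile):
            vals = [luma[y][x]
                    for y in range(ty, min(ty + tile, h))
                    for x in range(tx, min(tx + tile, w))]
            contrast = max(vals) - min(vals)
            # Low contrast -> the eye won't notice coarser shading here.
            row.append('1x1' if contrast > threshold else '2x2')
        mask.append(row)
    return mask
```

A real implementation would build something like this on the GPU from the previous frame and feed it either to the rasterizer (HW VRS) or to the shaders themselves (SW VRS).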
Another one for the tech guys in Neogaf :LOL:
Single-pass multi-resolution render targets were actually quite important for the PS4 Pro and PSVR in general, and you can bet Sony wants a good approach to fast and powerful foveated rendering for VR purposes. (I do believe they are looking at making a new headset; the Tempest Engine and video out over USB seem to indicate they plan to make it a lot simpler than on PS4, no longer requiring a breakout box.)

HW VRS has much finer granularity than MRRT did in terms of tile size on the PS4/PS4 Pro, although the latter approach works better with long, thin triangles, btw... not that I think Sony would stick with the old approach and not improve on it. This is why people who doubt that Sony has HW acceleration for this in their pipeline are mistaken.
 
Xbox can do both HW and SW VRS; the PS5 can't. As far as VRS goes, the PS5 cannot compete with the Xbox: the Xbox could in fact use hardware and software VRS in tandem.

I am not sure why we are so hell-bent on thinking that Sony, who keeps pushing VR R&D, would not improve on their MRRT solution and have something for fast HW-accelerated foveated rendering. It seems like getting stuck on a marketing-material name and using it to one-up each other.
 

So Activision is wrong, that MS engineer is wrong, and even Matt Hargett, who worked directly on the PS5 before going to Roblox, is wrong, but those gaffers are right...

 
Xbox can do both HW and SW VRS; the PS5 can't. As far as VRS goes, the PS5 cannot compete with the Xbox: the Xbox could in fact use hardware and software VRS in tandem.
Cannot compete? So is that the main point of this discussion, finding something about the Xbox that the PS5 cannot compete with? Ah.
 
Totally disingenuous from you, as usual; the devs said that VRS wasn't used until post-launch, when the settings patch for 120 Hz was released. That's from the launch comparison, pre-patch.

The biggest improvements in quality were to the cars and roadside details, and there is not really a definitive word on them not using VRS at launch, or toning it down or removing it post-patch (from the Dirt 5 devs, I think):
SuWT0uX.jpg


News on this seems mixed... it is possible that their first pass at it went sour and they removed or improved it (still, in the latest screenshots, the 120 Hz mode had more detailed textures on far objects on the other platforms than on the XSX).
 

Yes, and with that patch the situation in the 60 Hz mode changed; you can see this quite clearly in the NX Gamer post-patch comparison, where despite all the settings being restored, the performance went up. There is no point showing screens from the launch version anymore; it's irrelevant.
ZFdntBx.jpg
 
And for some reason it looks and performs worse.
As with most of the times you talk about technical things, it seems you don't have a single clue on the matter, plus you are 99.9% biased in favour of PlayStation; this makes your contributions virtually irrelevant and your credibility equal to zero. You should read up on B3D about this before embarrassing yourself like this again.
 
As with most of the times you talk about technical things, it seems you don't have a single clue on the matter, plus you are 99.9% biased in favour of PlayStation; this makes your contributions virtually irrelevant and your credibility equal to zero. You should read up on B3D about this before embarrassing yourself like this again.
Yeah that guy doesn't know his arse from his elbow lol
 
VRS is a pretty big letdown so far.

At first it was described as something that would speed up calculations in areas of the screen that don't really need them (dark/barely visible, or skybox-like).

Because it is an efficiency technique for slower GPUs, not top-of-the-line ones. Much like DX12/Vulkan made it viable to play games on older CPUs that would normally get choked, it is a feature for slower GPUs that have issues shading their geometry. Moreover, both Gears and Call of Duty are software implementations, while the true one is hardware-based.

Most GPUs actually have multiple bottlenecks, and if one of them is saturated, the rest can't move any faster. So if your resolution is too high for your ROPs, your shaders and geometry engines will barely be utilized, as the ROPs will struggle to output the image. If you have geometry fill issues, then you can raise the resolution without an FPS hit. Same with shading.

It is not a magic bullet. It is meant to fix shading bottlenecks, which are most common on lower-end GPUs, not on high-end ones.
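To make the bottleneck point concrete, here is a rough back-of-envelope sketch (the rate costs are mine; it models only pixel-shader invocations, not ROP or geometry work):

```python
# Fraction of full-rate pixel-shader invocations for each coarse rate:
# a 2x2 rate shades one pixel per 2x2 block (1/4), 4x4 one per 16 (1/16).
COST = {'1x1': 1.0, '2x2': 0.25, '4x4': 0.0625}

def shading_cost(mask):
    """Average fraction of full-rate shader invocations for a rate mask."""
    tiles = [rate for row in mask for rate in row]
    return sum(COST[r] for r in tiles) / len(tiles)
```

For example, a mask with half its tiles at 2x2 runs ~62.5% of the full-rate invocations, but that only turns into frame time when the shaders are the bottleneck, which is exactly the lower-end-GPU case described above.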
 
I don't know why they are saying the SW VRS implementation allows smaller tile sizes when VRS Tier 2 allows a 2x2 tile size when doing it per primitive.
'Tier 2' is the screen-space mask (nothing is done per primitive), and that has the granularity mentioned in the paper.
Per-primitive work is specified in Tier 1. The numbers you are referencing are sub-sample arrangements inside the pixel quad (essentially, the level of detail for VRS); these are the same on every piece of hardware that has supported MSAA for the last 20 years.

If that's how VRS will be used in the future (which, frankly, I don't know), it seems like Microsoft really missed the mark by only allowing 8x8 and larger tile sizes.
That (Tier-2) was always the most interesting aspect of VRS use, so yes, it'll likely be the most commonly used one.
And this isn't some arbitrary roll of the dice: VRS is basically clever reuse of what's already in hw, so changing the granularity of the selection mask isn't some free-for-all. Note that Turing supported 32x32, and Intel offered 16x16 a year later.

You don't keep optimizing once you reached your performance target.
Sure - you keep optimizing all the way until launch. Sometimes also after.

software VRS is shit on forward and F+; it's doable only on deferred rendering.
Well - no. We've been doing 'VRS' with deferred shading for over a decade - since PS3 era and even before.
The whole point/interest in this COD implementation is that it's not a deferred renderer.
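A minimal sketch of what software coarse shading in a forward pass boils down to (the `shade_with_mask` helper is hypothetical, and only 2x2 rates are handled): run the shader once per coarse block and broadcast the result, instead of letting the rasterizer do it.

```python
def shade_with_mask(width, height, mask, tile, shade):
    """shade(x, y) -> color; runs shade once per 2x2 block inside '2x2'
    tiles and broadcasts the result, once per pixel everywhere else."""
    out = [[None] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            rate = mask[y // tile][x // tile]
            if rate == '2x2':
                sx, sy = x - x % 2, y - y % 2    # anchor pixel of the 2x2 block
                if out[sy][sx] is None:
                    out[sy][sx] = shade(sx, sy)  # shade once per block
                out[y][x] = out[sy][sx]          # broadcast to the block
            else:
                out[y][x] = shade(x, y)
    return out
```

On a GPU this would be done per pixel-quad in the shader rather than in a loop, but the invocation savings are the same idea.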
 
So you think SW VRS is better than HW VRS? ...curious about your answer )) (I'm not talking about this specific case, obviously)
 
'Tier 2' is the screen-space mask (nothing is done per primitive), and that has the granularity mentioned in the paper.
Per-primitive work is specified in Tier 1.
Well, the AMD presentation on VRS clearly says that Tier 2 can also support a per-primitive shading rate
Look at the image here:

 
Well, the AMD presentation on VRS clearly says that Tier 2 can also support a per-primitive shading rate
Yeah, sorry - that's where Tier 2 is a superset of Tier 1 (per-primitive is inclusive of per-draw-call, obviously). Anyway, it's irrelevant to the conversation at hand: when you control the rate per primitive, that's your granularity (pixel quads will be clipped against primitive bounds, so you'll get shading controlled only inside the interior of each primitive, excluding edges). Pixel granularity is essentially unbounded here, and it's just much harder to control (small enough polygons will yield no returns; large ones will... obviously have a much more visible impact on quality).
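For reference, a sketch of how Tier 2 resolves its rate sources (per-draw, per-primitive, screen-space image) with D3D12-style MAX combiners; the (log2 width, log2 height) rate encoding mirrors D3D12's, but the helper names here are made up:

```python
# Rates as (log2 coarse width, log2 coarse height):
# (0, 0) = 1x1, (1, 1) = 2x2, (2, 2) = 4x4, (1, 0) = 2x1, etc.

def combine_max(a, b):
    """MAX combiner: take the coarser rate per axis."""
    return (max(a[0], b[0]), max(a[1], b[1]))

def resolve_rate(per_draw, per_primitive, screen_space):
    """Tier 2 pipeline with both combiner stages set to MAX:
    first merge per-draw with per-primitive, then with the mask image."""
    c0 = combine_max(per_draw, per_primitive)
    return combine_max(c0, screen_space)
```

D3D12 also offers PASSTHROUGH, OVERRIDE, MIN, and SUM combiners, so a game can, say, let the screen-space mask only ever coarsen what the per-primitive rate requested.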
 
So you think SW VRS is better than HW VRS? ...curious about your answer )) (I'm not talking about this specific case, obviously)
Different trade-offs; it may depend on the specific hw, but more so on the content itself.
At a glance, it's one of those things you'd probably want to measure for each game and make decisions based off of that (or even look for hybrid solutions like the MS engineer suggested).

To be clear, it's true we've had numerous GPU features over the years where the 'hardware' implementation turned out not to be very practical and people worked with alternatives instead (I can list some for pretty much every generation over the last two decades). But it's way too early to talk about that here, at least from my perspective.
 
5120x2160 Ultra (GI 16 rays)

VRS Off

fUNun4B.png


VRS Performance

0mWFsJl.png



~14%, not too bad, but IDK if there are any IQ differences thanks to the downsampling
 
So Activision is wrong, that MS engineer is wrong, and even Matt Hargett, who worked directly on the PS5 before going to Roblox, is wrong, but those gaffers are right...


5pYy7cg.jpg


Notice that Primitive Shaders can be used for "multi-resolution rendering". These features already exist on the RX 5700 XT (Navi 10), but it failed DirectX12U compliance and doesn't match Turing GTX and RTX.

NVIDIA Turing's VRS is bundled with the Mesh/Amplification shader Next-Generation Geometry Pipeline (NGGP).

AMD presented their NGGP and it was rejected by MS. AMD's NGGP wasn't working on Vega and was later disabled with driver updates.

NVIDIA presented their NGGP and it was accepted by MS.

XSX vs PS5 is just a side show for the larger AMD vs NVIDIA standard battles.
 
I'm adding NVIDIA's hardware VRS into the mix.
I have noticed this in pretty much every console-related thread (software VRS is of chief interest for consoles like the PS4 and Xbox One, which lack it completely, but also for consoles like the XSX in terms of the image quality of areas with a lower shading rate); maybe you are nVIDIA's better-informed Leonidas. You bumped an old thread to post nVIDIA press material, as if nVIDIA was not given enough spotlight.

or maybe…
XSX vs PS5 is just a side show for the larger AMD vs NVIDIA standard battles.

This is more PCMR territory, where people discussing consoles ignore nVIDIA since they are out of the AAA console race, or mobile, or Apple HW in general, like the M1 Pro / Max thread.
 