
Software-based Variable Rate Shading in Call of Duty: Modern Warfare

The idea that VRS is useless because of the level of performance gain you get from it shows that people don't understand its purpose. When VRS is done correctly, you notice zero perceptible change in image quality.
 
VRS is basically a further optimization step: after you've done all the necessary optimizations to reach your desired performance target, it nets you essentially free performance with no visual impact, helping keep you above your performance budget.
 
No, I understand perfectly well what they do. VRS has been a dud and DLSS isn't. If you had to choose, DLSS is the way to go. Wolfenstein: Youngblood doesn't allow both; maybe some other game might, but most won't. Even AMD's adaptive sharpening looks better. VRS might be useful for VR. Meh.
You claim to understand what they do and then go on to seemingly imply that VRS is intended to improve the visuals of a game.

VRS is a performance optimisation, intended to make rendering faster with very little perceptual visual degradation (provided it's used correctly). DLSS is an image upscaling technique... apples to oranges.
 
VRS is basically a further optimization step: after you've done all the necessary optimizations to reach your desired performance target, it nets you essentially free performance with no visual impact, helping keep you above your performance budget.

Gains are unpredictable, though, given it's going to depend heavily on the scene composition. At the end of the day it's an algorithmic solution, so the input data is really important, both in terms of visual result and effectiveness at saving bandwidth.

It really doesn't seem like a big deal to me, just another tool in the engine coder's arsenal.
 
Gains are unpredictable, though, given it's going to depend heavily on the scene composition. At the end of the day it's an algorithmic solution, so the input data is really important, both in terms of visual result and effectiveness at saving bandwidth.

It really doesn't seem like a big deal to me, just another tool in the engine coder's arsenal.

You want free performance wherever possible, and this feature gives it to you. So long as they optimize well, this extra bit is very useful for keeping you above budget.
 
It sounds like we have a situation similar to the checkerboard rendering feature of the PS4 Pro, but at least checkerboard rendering is a pretty good upscaling technique; unless you have access to DLSS, it's at least as good as the best resolution-scaling techniques out there.

The problem is that, unlike say DLSS, enabling VRS is akin to lowering the details so much that you may as well be watching a low-quality YouTube stream, and that's in the good scenarios. I'd say any developer worth their salary would be better off cutting shadow quality to get that 8 to 20% frame-time saving.

VRS is the worst panacea of a rendering technique I have ever seen; I'm trying to think of an equivalent. It gives you the promised performance boost, but at a clear sacrifice in IQ, the very thing it's meant to avoid. I can't think of anything like this in the past.
You don't know what you're talking about. Stop shitting on features you can't even pick out when they're being used. Gears Tactics used Tier 1 VRS because it could get away with it, with the camera being far away from surfaces.

Tier 2 has no discernible visual differences and still delivers a performance improvement.

The only people saying it's a dud are Sony system warriors, because it's not supported in PS5 hardware. If it were, you would claim it's the best feature ever.
 
XSX HW VRS


PS5 SW VRS ?
That isn't an example of VRS. We have been over this in the past.
 
So, after all the crowing about VRS hardware in XSX/XSS and DX12U that MS and their fans have been harping on, we see here that a software-based approach, being more flexible, actually provides superior performance and overall IQ?

Who would have thought... lol.

Isn't that a funny thing that HW-based VRS is actually inferior? :lollipop_tears_of_joy:

Honestly, I didn't expect that. But it seems the more educated devs/architects knew that in advance while shaping their games/hardware.

 
It sounds like we have a situation similar to the checkerboard rendering feature of the PS4 Pro, but at least checkerboard rendering is a pretty good upscaling technique; unless you have access to DLSS, it's at least as good as the best resolution-scaling techniques out there.

The problem is that, unlike say DLSS, enabling VRS is akin to lowering the details so much that you may as well be watching a low-quality YouTube stream, and that's in the good scenarios. I'd say any developer worth their salary would be better off cutting shadow quality to get that 8 to 20% frame-time saving.

VRS is the worst panacea of a rendering technique I have ever seen; I'm trying to think of an equivalent. It gives you the promised performance boost, but at a clear sacrifice in IQ, the very thing it's meant to avoid. I can't think of anything like this in the past.
What you are saying is absolute nonsense, and can be demonstrably shown to be nonsense by anyone who runs the game and enables these features. It is also pairable with a temporal reconstruction technique for AA and dynamic resolution, so it has further benefits in that way.
Finally, DLSS has serious problems upscaling certain effects like reflections, so what are you on about, talking about DLSS like it has no visual downgrades? ALL techniques sacrifice IQ for performance; there's no free lunch. There is no substitute for native.

 
It sounds like we have a situation similar to the checkerboard rendering feature of the PS4 Pro, but at least checkerboard rendering is a pretty good upscaling technique; unless you have access to DLSS, it's at least as good as the best resolution-scaling techniques out there.

The problem is that, unlike say DLSS, enabling VRS is akin to lowering the details so much that you may as well be watching a low-quality YouTube stream, and that's in the good scenarios. I'd say any developer worth their salary would be better off cutting shadow quality to get that 8 to 20% frame-time saving.

VRS is the worst panacea of a rendering technique I have ever seen; I'm trying to think of an equivalent. It gives you the promised performance boost, but at a clear sacrifice in IQ, the very thing it's meant to avoid. I can't think of anything like this in the past.


This is correct because VRS, or foveated rendering, has been researched for over a decade, but it wasn't originally meant for use on 2D panels; it has always been developed for use in VR headsets.
Here's a more in-depth explanation:



It's not like foveated rendering isn't important. It just isn't that important / impactful for non-VR.
 
It's not like foveated rendering isn't important. It just isn't that important / impactful for non-VR.
I recall Sony talking about something similar for objects that are outside the point of attention in PSVR on the PS4 Pro, which makes sense because our eyesight has a pretty low resolution in the periphery (it's basically just good enough for us to perceive movement and then look to see what's going on). Is it really used in PSVR titles? Do we have any analysis of how it impacts the games?

Finally, DLSS has serious problems in games with upscaling certain effects like reflections, so what are you on about talking of DLSS like it has no visual downgrades. ALL techniques sacrifice IQ for performance, there's no free lunch. There is no substitute for native.
Thanks, I had not seen the counter-examples for DLSS (in the good examples it looks pretty neat)... The problem with what I have seen of VRS so far is that it doesn't seem like a worthwhile sacrifice; you may as well just upscale the image from a lower resolution, and depending on the upscaler it could be just as good or even better.

Maybe the AI thought that since the reflections were on water puddles they should be blurrier.
 
The idea that VRS is useless because of the level of performance gain you get from it shows that people don't understand its purpose. When VRS is done correctly, you notice zero perceptible change in image quality.
Well, would you notice any perceptible change to image quality if the output was 1900p vs native 4K? Most people wouldn't, and that would provide a bigger performance gain than VRS, I would wager.
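For what it's worth, the pixel-count arithmetic behind that comparison is easy to check (assuming 16:9 at both resolutions; the function name here is just for illustration):

```python
def pixel_ratio(height_a, height_b, aspect=16 / 9):
    """Ratio of total pixel counts between two resolutions of the
    same aspect ratio, identified by their heights (e.g. 1900 vs 2160)."""
    return (height_a * height_a * aspect) / (height_b * height_b * aspect)

# 1900p shades roughly 77% of the pixels of native 2160p,
# i.e. a ~23% cut in pixel-shading work before any VRS is applied.
ratio = pixel_ratio(1900, 2160)
print(f"{ratio:.1%}")  # 77.4%
```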
 
Is it really used in PSVR titles? Do we have any analysis of how it impacts the games?
I don't know of any specific game that is using foveated rendering at the moment. Ideally, foveated rendering is used with eye-tracking, which the PSVR1 doesn't have. So far, I believe only the Vive Pro Eye has eye-tracking.
We're supposed to be able to move our eyes around when using VR. If devs applied foveated rendering at the center by default, then we'd be looking at ugly graphics every time we looked up, down, or to the sides. Eye-tracking serves the purpose of determining which part of the image we're focusing on, so render quality can be reduced everywhere else.
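A minimal sketch of that idea: pick a coarser shading rate the farther a tile is from the tracked gaze point. The radii and rates below are arbitrary examples, not values from any shipping headset:

```python
def foveated_rate(tile_center, gaze, radii=(0.15, 0.35)):
    """Choose a shading rate from a tile's distance to the gaze point.

    Coordinates are normalized screen space (0..1). Inside the inner
    radius we shade at full 1x1 rate; in the annulus we drop to 2x2;
    beyond that, 4x4. Radii are illustrative only.
    """
    dx = tile_center[0] - gaze[0]
    dy = tile_center[1] - gaze[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist < radii[0]:
        return "1x1"   # full rate in the fovea
    if dist < radii[1]:
        return "2x2"   # a quarter of the shading work
    return "4x4"       # 1/16th of the shading work in the periphery

print(foveated_rate((0.5, 0.5), (0.5, 0.5)))  # 1x1
print(foveated_rate((0.9, 0.5), (0.5, 0.5)))  # 4x4
```

Without eye-tracking, `gaze` has to be fixed at the lens center, which is exactly the "ugly graphics when you glance sideways" problem described above.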
 
Modern Warfare's graphics are noticeable because, since 2007, they hadn't accomplished much in that department. Modern Warfare 2 and 3: ordinary graphics. Black Ops 1-3: ordinary graphics, etc. They worked their tails off on MW.
This is not entirely true. The first Black Ops was one of the first games to introduce Physically Based Shading in its lighting pipeline, as seen in the paper Physically Based Lighting in Call of Duty: Black Ops. (PowerPoint slides.) Do note this is different from the full Physically Based Rendering pipeline that came soon after.

Black Ops 2 introduced refinements to the PBS shading solution, as shown in the paper Getting More Physical with Call of Duty: Black Ops 2 (course notes) (notebook).

Lastly, as an addendum, The Self Shadow shading courses are worth a read.
 
It's simple. Software VRS is superior to Hardware VRS. Interesting results from a massive publisher.
A software solution always has more flexibility than a hardware one. Moving a given algorithm into fixed-function hardware generally buys speed, but at the cost of that flexibility.
HW VRS Tier 2 (8*8) VS SW VRS (2*2)
Can you share the full presentation? I don't know why they say the SW VRS implementation allows smaller tile sizes, when VRS Tier 2 allows a 2x2 tile size when doing it per primitive.
Could it be that they are comparing it with VRS Tier 1, which only allowed 8x8 tile sizes, making these stats moot?
 
It's all here, direct from the horse's mouth.

A software solution always has more flexibility than a hardware one. Moving a given algorithm into fixed-function hardware generally buys speed, but at the cost of that flexibility.

Can you share the full presentation? I don't know why they say the SW VRS implementation allows smaller tile sizes, when VRS Tier 2 allows a 2x2 tile size when doing it per primitive.
Could it be that they are comparing it with VRS Tier 1, which only allowed 8x8 tile sizes, making these stats moot?


On Tier 2 and higher, pixel shading rate can be specified by a screen-space image.

The screen-space image allows the app to create an "LOD mask" image indicating regions of varying quality, such as areas which will be covered by motion blur, depth-of-field blur, transparent objects, or HUD UI elements. The resolution of the image is in macroblocks, not the resolution of the render target. In other words, the subsampling data is specified at a granularity of 8x8 or 16x16 pixel tiles as indicated by the VRS tile size.

Tile size

The app can query an API to know the supported VRS tile size for its device.

Tiles are square, and the size refers to the tile's width or height in texels.

If the hardware does not support Tier 2 variable rate shading, the capability query for the tile size will yield 0.

If the hardware does support Tier 2 variable rate shading, the tile size is one of

  • 8
  • 16
  • 32
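To put those numbers in context, the size of the Tier 2 screen-space rate image follows directly from the render-target size and the queried tile size. A quick sketch (the function name is illustrative, not an actual D3D12 API):

```python
import math

def vrs_mask_dimensions(rt_width, rt_height, tile_size):
    """Size of the Tier 2 screen-space shading-rate image.

    One texel of the mask covers one tile_size x tile_size block of
    the render target, so dimensions are rounded up to cover the edges.
    """
    if tile_size not in (8, 16, 32):
        raise ValueError("Tier 2 hardware reports a tile size of 8, 16 or 32")
    return (math.ceil(rt_width / tile_size),
            math.ceil(rt_height / tile_size))

# A 3840x2160 target with 8x8 tiles needs only a 480x270 mask;
# with 16x16 tiles, a 240x135 mask.
print(vrs_mask_dimensions(3840, 2160, 8))   # (480, 270)
print(vrs_mask_dimensions(3840, 2160, 16))  # (240, 135)
```

This is also why the tile size matters for quality: each mask texel forces a whole 8x8 (or coarser) block of pixels to share one shading rate.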
 
On Tier 2 and higher, pixel shading rate can be specified by a screen-space image.

The screen-space image allows the app to create an "LOD mask" image indicating regions of varying quality, such as areas which will be covered by motion blur, depth-of-field blur, transparent objects, or HUD UI elements. The resolution of the image is in macroblocks, not the resolution of the render target. In other words, the subsampling data is specified at a granularity of 8x8 or 16x16 pixel tiles as indicated by the VRS tile size.

Tile size

The app can query an API to know the supported VRS tile size for its device.

Tiles are square, and the size refers to the tile's width or height in texels.

If the hardware does not support Tier 2 variable rate shading, the capability query for the tile size will yield 0.

If the hardware does support Tier 2 variable rate shading, the tile size is one of

  • 8
  • 16
  • 32
From the same document

What you pasted only relates to the screen-space image mode of VRS Tier 2.

The AMD documentation says this:


And from the Gears 5 VRS Tier 2 implementation:
 
I don't think the page numbers are right; page 12 is just an image, and page 18 only talks about being able to match the flexibility of VRS Tier 1.
Anyway, having read the presentation, they are using an image-based mask instead of specifying the shading rate per primitive. If that's the way VRS will be used in the future, which frankly I don't know, it seems like Microsoft really missed the mark by only allowing 8x8 and larger tile sizes.
The main point of that presentation is that VRS has been used even on the PS4 without the people on this forum knowing, making them look like fools when they claim VRS, in general, is the worst thing to happen to computer graphics.
Another one for the tech guys in Neogaf :LOL:
 
I don't think the page numbers are right; page 12 is just an image, and page 18 only talks about being able to match the flexibility of VRS Tier 1.
Anyway, having read the presentation, they are using an image-based mask instead of specifying the shading rate per primitive. If that's the way VRS will be used in the future, which frankly I don't know, it seems like Microsoft really missed the mark by only allowing 8x8 and larger tile sizes.
The main point of that presentation is that VRS has been used even on the PS4 without the people on this forum knowing, making them look like fools when they claim VRS, in general, is the worst thing to happen to computer graphics.
Another one for the tech guys in Neogaf :LOL:
You need to display the page notes.
Page 12:
Call of Duty: Infinite Warfare & Call of Duty: World War 2 already used a form of VRS.

IW7 used a software-based implementation of what would become VRS Tier 1 in DX12
Per draw-call basis
Primary use case
Multi resolution VFX blending
Allowed for transparencies
Glass
Visors
Art controlled per-Material

Here you can see a typical scene from Infinite Warfare, with a heavy VFX pass.

Page 18:
We already had a pipeline setup similar to VRS Tier 1 (could change sampling rate on a per-draw basis) in VFX and transparency draws only.
What if we extend this further to opaque draws?
Experiment 1 – where we extend the pipeline to match VRS Tier 1.0 capabilities

Render pre-pass in 4xMSAA (at ½ res to match original res target)
Render opaque in 4xMSAA
Allow draws to choose 4s/4p, 2s/4p, 1/4p
Automatically rotate subsampling patterns
Resolve pixels to patch up 'gaps' between pixels before postFx pass

Massive speedup to pre-pass rendering
Especially foliage (minimized wavefront counts, more efficient alpha cutout due to combined SV_Coverage output)

Flexibility matching DX12 VRS Tier 1.0
Actually better because we have custom 'patterns' like 3s/4p and others
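As a back-of-the-envelope sketch of what those modes buy you, reading "Ns/4p" as N samples shaded per 4 pixels covered (my own reading of the slides' notation, so 4s/4p is full rate), the frame-level saving depends on how much of the screen each mode covers. The coverage numbers below are made up for illustration:

```python
def frame_shading_fraction(coverage):
    """Overall pixel-shading work relative to full rate, given the
    fraction of the screen drawn in each Ns/4p mode, expressed as
    {samples_shaded_per_4_pixels: screen_fraction}."""
    assert abs(sum(coverage.values()) - 1.0) < 1e-6, "fractions must sum to 1"
    return sum(frac * samples / 4 for samples, frac in coverage.items())

# Hypothetical frame: 60% at full rate (4s/4p), 25% at 2s/4p, 15% at 1s/4p.
work = frame_shading_fraction({4: 0.60, 2: 0.25, 1: 0.15})
print(f"{work:.1%} of full-rate shading work")
```

This is also why gains are scene-dependent, as noted earlier in the thread: a frame dominated by full-rate surfaces saves almost nothing, while a VFX-heavy frame can save a lot.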
 

Jacques van Rhyn, Program Manager on the Graphics Team at Microsoft:

"Software-Based Variable Rate Shading in Call of Duty", presented at SIGGRAPH 2020 (http://advances.realtimerendering.com/s2020/index.htm), has some interesting thoughts on this topic as well. They present a method leveraging how console hardware handles MSAA to emulate VRS on platforms without hardware VRS support, with extra flexibility such as smaller tile sizes. In addition, they present an optimized way to apply VRS to compute shaders that uses ExecuteDispatchIndirect to ensure only waves with actual work are dispatched, in contrast to our brute-force method. However, software-based VRS also has some trade-offs, including implementation complexity and the overhead of a de-blocking pass. One possibility is to use a hybrid of both techniques, switching between VRS techniques based on the characteristics of the rendering pass.

This should end this thread.
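The indirect-dispatch trick described there boils down to compacting the tiles that actually need the coarse-rate compute pass into a dense work list, so no waves are launched for tiles with nothing to do. A CPU-side sketch of that compaction (the names and mask format are mine, not from the talk):

```python
def compact_tiles(rate_mask, coarse_rates=frozenset({"2x2", "4x4"})):
    """Build a dense list of tile coordinates that need the coarse-rate
    compute pass, plus the dispatch count an indirect dispatch would
    consume. Tiles shaded at full rate contribute no work items."""
    work_list = [(x, y)
                 for y, row in enumerate(rate_mask)
                 for x, rate in enumerate(row)
                 if rate in coarse_rates]
    return work_list, len(work_list)

mask = [["1x1", "2x2", "1x1"],
        ["4x4", "1x1", "1x1"]]
tiles, dispatch_count = compact_tiles(mask)
print(tiles, dispatch_count)  # [(1, 0), (0, 1)] 2
```

On the GPU the work list lives in a buffer and `dispatch_count` feeds the indirect dispatch arguments, which is what avoids the brute-force "launch everything, early-out in the shader" approach.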
 

Jacques van Rhyn, Program Manager on the Graphics Team at Microsoft:

"Software-Based Variable Rate Shading in Call of Duty", presented at SIGGRAPH 2020 (http://advances.realtimerendering.com/s2020/index.htm), has some interesting thoughts on this topic as well. They present a method leveraging how console hardware handles MSAA to emulate VRS on platforms without hardware VRS support, with extra flexibility such as smaller tile sizes. In addition, they present an optimized way to apply VRS to compute shaders that uses ExecuteDispatchIndirect to ensure only waves with actual work are dispatched, in contrast to our brute-force method. However, software-based VRS also has some trade-offs, including implementation complexity and the overhead of a de-blocking pass. One possibility is to use a hybrid of both techniques, switching between VRS techniques based on the characteristics of the rendering pass.

This should end this thread.
You can read posts with similar opinions on Beyond3D.
 
In line with what I think.



HW VRS is a fixed, generic solution that speeds things up for most developers.
SW VRS is a more flexible solution that can beat HW VRS if the team takes enough time ($$$) to do it... so only big developers will go down that route.

The two examples right now are CoD's devs and Metro's devs... they have their own SW VRS solutions that are probably better for them than the DX12U HW VRS solution.

How can a software-based solution be faster than a hardware one?
 


Most engines are deferred, except id and ?

I still think that with the move to small triangles, VRS will fade away... so who cares?
 
VRS is basically a further optimization step: after you've done all the necessary optimizations to reach your desired performance target, it nets you essentially free performance with no visual impact, helping keep you above your performance budget.
You don't keep optimizing once you've reached your performance target.

And so far, the only place it did not sacrifice image quality is in PR communications.

I mean, it's another option to balance how your games look and perform, but I fail to see how relevant it is compared to just upscaling a slightly lower-resolution image, for example.
 
You don't keep optimizing once you've reached your performance target.

And so far, the only place it did not sacrifice image quality is in PR communications.

I mean, it's another option to balance how your games look and perform, but I fail to see how relevant it is compared to just upscaling a slightly lower-resolution image, for example.

There will always be things that push you below your frame-time budget; something like VRS helps keep you above it. And there was no sacrifice to image quality in Gears 5 on Series X, the Hivebusters DLC, or in Gears Tactics with the upgraded Tier 2 VRS.


You're free to not believe it, but the results are there. Tier 2 VRS in Gears Tactics on Series X no longer has the image quality issues Tier 1 did on PC.
 
Well, would you notice any perceptible change to image quality if the output was 1900p vs native 4K? Most people wouldn't, and that would provide a bigger performance gain than VRS, I would wager.
You must be new here. Everyone here can see the difference with their eyes closed.
 
Just tried Gears 5's SS VRS mode at 1440p/Ultra on a 2060 Super and got a ~3.5% boost with VRS 'Quality' and ~7% with VRS 'Performance'. This is pretty limited compared to what you'd see from other sources.
 
It's simple. Software VRS is superior to Hardware VRS. Interesting results from a massive publisher.
No, you cannot do that with all engines... deferred, forward and forward+ behave in different manners. Software VRS is shit on forward and forward+, doable only on deferred rendering. With hardware VRS you are free from that problem.
 
It states clearly that hardware Tier 2 VRS has no discernible impact on image quality; you can't get better than that no matter how you spin it. It's free as it has hardware support, it has no impact on image quality, and it saves frame-time budget.
 