Please keep in mind that G-Sync is for gaming below your monitor's refresh rate (e.g., under 60 fps on a 60hz panel)
I need to change my avatar as this has happened many times over the years lol...
Is this something that would negate microstutter as well? Ever since I upgraded to Win 10, one of the games I play will microstutter whenever I have a flight stick or controller plugged in, and so far, none of the other fixes I've tried have worked.
Using Fast Sync at a frame rate below your monitor's refresh rate is functionally identical to using vsync, and you gain no latency advantage. Once your frame rate exceeds your monitor's refresh rate, you begin to see latency improvements. Once you exceed twice your monitor's refresh rate (120 fps for 60hz, 240 fps for 120hz, etc.), you see the full benefits of Fast Sync.
So basically this has 2 use cases:
1) Low to moderate frame rates, where the user wants driver-level triple buffering and is willing to put up with judder and/or frame pacing issues. The latency improvement over standard vsync will be minimal or nonexistent.
2) Frame rates approaching or exceeding twice the refresh rate of the user's monitor. The user gains a latency improvement; Nvidia claims it averages only 8ms higher than running the game with vsync off.
I'm sure there is a middle ground between 1x and 2x the refresh rate where people will see a mixture of 1 and 2. I'd go further and say that the 8ms claim probably only holds for gamers with a 144hz monitor. The advantage over vsync off on a 60hz display is likely to be far smaller; probably 20ms+ over vsync off.
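Some rough numbers for that middle ground. Here's a toy model I threw together (the assumptions are all mine, not Nvidia's: perfectly even frame times and a made-up half-frame phase offset) that computes how old the newest completed frame is at each scanout:

```cpp
// Toy model of the fps-vs-refresh tradeoff: average age of the newest
// completed frame at each scanout, assuming perfectly even frame times
// (which real games never have). Purely illustrative numbers.
#include <cmath>
#include <cstdio>

int main() {
    const double hz  = 60.0;    // monitor refresh rate
    const double fps = 120.0;   // steady render rate -- try 60, 74, 120, 180
    const int scanouts = 10000;

    double totalAgeMs = 0.0;
    for (int i = 1; i <= scanouts; ++i) {
        const double scanout = i / hz;
        // Index of the newest frame completed before this scanout; frames
        // are assumed to finish at (k + 0.5) / fps so phases don't align.
        const double k = std::floor(scanout * fps - 0.5);
        const double newest = (k + 0.5) / fps;
        totalAgeMs += (scanout - newest) * 1000.0;
    }
    std::printf("avg age of displayed frame at scanout: %.2f ms\n",
                totalAgeMs / scanouts);
}
```

At 60hz it spits out roughly 8.3ms at 60 fps, 4.2ms at 120 fps, and 2.8ms at 180 fps, which lines up with the "full benefit at 2x refresh" intuition, though it ignores the rest of the input/render pipeline entirely.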
Is this something that would negate microstutter as well? Ever since I upgraded to Win 10, one of the games I play will microstutter whenever I have a flight stick or controller plugged in, and so far, none of the other fixes I've tried have worked.
From what I'm reading here, Fast Sync is useless for 144hz monitors then, since outside of some old or Valve/Blizzard games, nothing will render at 150+ fps even on a GTX 1080, so I might as well keep using triple buffering in borderless window with RTSS.
No. This ignores frame pacing entirely and always displays the last rendered frame. It basically tells the program that vsync is off, then uses 3 buffers to display the most recently completed frame at each monitor refresh. It discards any unused frames.
So if you are using a 60hz monitor and the game is averaging 74 fps, the game engine really is running at 74 fps. The driver displays the latest 60 of the 74 frames the engine rendered each second and throws the rest away.
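If it helps, here's a tiny single-threaded toy of that 74-in, 60-out behavior (everything here is invented for illustration; the real logic lives in the driver and syncs to the actual vblank):

```cpp
// "Game" finishes a frame every 1/74 s; "display" flips every 1/60 s to the
// newest completed frame and silently drops anything that got superseded.
#include <cstdio>

int main() {
    const double hz = 60.0, fps = 74.0;
    double nextVBlank = 1.0 / hz, nextFrameDone = 1.0 / fps;
    int renderedFrame = -1, newestComplete = -1;

    for (int vblank = 0; vblank < 10; ) {
        if (nextFrameDone < nextVBlank) {
            newestComplete = ++renderedFrame;   // a frame just finished
            nextFrameDone += 1.0 / fps;
        } else {
            // Fast Sync behavior: show the newest frame, never wait for one.
            std::printf("vblank %2d shows frame %2d\n", vblank, newestComplete);
            ++vblank;
            nextVBlank += 1.0 / hz;
        }
    }
}
```

Run it and the display shows frames 0, 1, 2, 3, 5, ... -- frame 4 finishes but is superseded before the next refresh, so it's simply never shown.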
I've tried this out. It's possible to enable it through Nvidia Profile Inspector when using the current release driver: set Vsync to 0x18888888.
It seems to work as advertised. I ran a game at arbitrary frame rates above the display refresh rate with no tearing. Vsync was disabled in-game.
Input latency appeared decent, but there was absolutely a fair amount of judder as the frame rate approached the refresh rate. I need to test more thoroughly. It does not play nicely with SLI; extremely stuttery.
Just tested 0x18888888 on my end; it doesn't seem to do anything on laptops with Nvidia Optimus. Not surprising, but disappointing nonetheless.
So in theory on my 60hz IPS screen, the best way for me to play Rocket League is with Fast Sync and the frame rate locked at 120 fps? Or 180 fps? Somewhere in between? Or is higher always better, even if it's variable?
So many posts with explanations, but I still have no idea what the correct answer is, haha.
Depends on what you're looking for.
So Nvidia hasn't said when they're officially adding it to the control panel's vsync options, have they?
Since you can already enable it from Inspector, maybe in the next driver update.
Like I said in another thread, it's "just" triple buffering.
Actual triple buffering as you can see described in textbooks from the 90s, not what has come to be called "triple buffering" recently but is really just a 3-buffer queue.
That said, it's a very good thing to get an option for it in the driver, since on a modern system it's basically impossible to enforce real triple buffering in fullscreen mode.
What bugs me about all of this is that the Flip Presentation Model in DXGI 1.2 (Windows 8) already has this functionality: n-buffering with late frame discard.
Instead of the typical 1->2->3 sequence of buffer swaps, you can go 1->3 (with 2 simply dropped) if you've managed to pump out two frames in-between swap intervals. This has tremendous input latency advantages and I've modded many D3D11 games to use it to great success.
Waitable swapchains (Windows 8.1) can even mitigate the loss of frame pacing you'd get from vsync. Apparently NVIDIA has no solution for that problem, but Microsoft already solved it.
We don't need a stupid proprietary driver feature; we need developers to utilize features that already exist in Windows.
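For anyone curious, the gist of it in code (a hedged sketch, not a drop-in implementation: `factory`, `device`, and `hwnd` are assumed to already exist, error handling is omitted, and FLIP_DISCARD needs Windows 10 -- use FLIP_SEQUENTIAL on Windows 8):

```cpp
#include <d3d11.h>
#include <dxgi1_3.h>

// Flip-model swap chain plus the Windows 8.1 frame-latency waitable object.
IDXGISwapChain2* MakeFlipModelSwapChain(IDXGIFactory2* factory,
                                        ID3D11Device* device, HWND hwnd) {
    DXGI_SWAP_CHAIN_DESC1 desc = {};       // Width/Height 0 = use window size
    desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.BufferUsage      = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount      = 3;             // headroom so a stale frame can drop
    desc.SwapEffect       = DXGI_SWAP_EFFECT_FLIP_DISCARD;
    desc.Flags            = DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT;

    IDXGISwapChain1* swapChain1 = nullptr;
    factory->CreateSwapChainForHwnd(device, hwnd, &desc, nullptr, nullptr,
                                    &swapChain1);

    IDXGISwapChain2* swapChain2 = nullptr;
    swapChain1->QueryInterface(__uuidof(IDXGISwapChain2),
                               reinterpret_cast<void**>(&swapChain2));
    swapChain1->Release();

    // Never queue more than one frame ahead; backpressure is what hurts.
    swapChain2->SetMaximumFrameLatency(1);
    return swapChain2;
}

// Each frame, before doing any work:
//   WaitForSingleObjectEx(swapChain->GetFrameLatencyWaitableObject(),
//                         1000, TRUE);   // paces the CPU to the display
```

As described above, the flip model is what makes the 1->3 style skip possible; the waitable object handles the frame pacing side.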
I know. When I said "enforce" I meant for the average user, not developers.
But in the absence of developers using it -- and face it, that's never going to happen across the board -- a driver-level switch is very convenient. I also don't see a reason to whine about it being "stupid proprietary" in this instance, it doesn't even have a developer-facing API! It's like complaining that DSR is proprietary.
Well, in the sense that this will be exposed through NvAPI while AMD never exposes it at all, it kind of is proprietary.
All of these driver features tend to work out that way: NvAPI exposes them and makes them something you can turn on/off in-game (which is where settings ideally belong), while AMD sits on their butts.
Arkham Knight, for example, has Adaptive VSYNC as an option in-game for NV GPUs. AMD offers Adaptive VSYNC too but the only way you're going to engage it is fiddling around with Catalyst Control Center.
It's not exposed through anything, it's just a driver level "hack" of applications swap chains. It's no more "proprietary" than a driver level HBAO injection.
It will have an NvAPI setting that can be changed. Therefore it is exposed on the NVIDIA side of the equation.
AMD's ADL API has very little in the way of driver control, so developers are unable to make the option available in-game for users.
Recall that in OpenGL, Adaptive VSYNC is a standard extension supported by all three vendors. Adaptive VSYNC in Direct3D didn't exist until DXGI 1.5 (Windows 10), and it has to be turned on through driver settings. NVIDIA has an API game developers can use to extend D3D and manipulate driver settings; AMD doesn't.
This feature will be no different. NvAPI profile setting? Check. ADL API call to turn it on/off? Nope.
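For reference, the OpenGL path really is that simple (a minimal WGL sketch; it assumes a current GL context on Windows and skips error handling):

```cpp
#include <windows.h>
#include <cstring>

typedef BOOL        (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);
typedef const char* (WINAPI *PFNWGLGETEXTENSIONSSTRINGEXTPROC)(void);

// Enable adaptive vsync if the driver advertises WGL_EXT_swap_control_tear;
// a negative swap interval means "sync to vblank, but tear if we run late".
void EnableAdaptiveVsync() {
    auto getExtensions = (PFNWGLGETEXTENSIONSSTRINGEXTPROC)
        wglGetProcAddress("wglGetExtensionsStringEXT");
    auto swapInterval = (PFNWGLSWAPINTERVALEXTPROC)
        wglGetProcAddress("wglSwapIntervalEXT");
    if (!getExtensions || !swapInterval)
        return;                             // extension loading failed

    if (std::strstr(getExtensions(), "WGL_EXT_swap_control_tear"))
        swapInterval(-1);                   // adaptive vsync
    else
        swapInterval(1);                    // plain vsync fallback
}
```

Same call on any vendor's driver; no NvAPI or ADL involved.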
Well, what's the problem here? If you want uniform support, use the common API interfaces that already exist, as you've said (though that limits the feature to Win8+). If not, use NVAPI for NV cards (which makes the feature work on basically any platform with an NV card). AMD doesn't have the same feature exposed in their ADL? How is that anyone's problem but AMD's?
I don't think many developers will use this feature anyway, as most don't make games that are expected to run at 120+ fps at launch. So a driver-level "hack" is quite useful, be it "proprietary" or not.
It will have an NvAPI setting that can be changed.
How do you know?
Even if true, clearly the primary purpose of this setting is to allow users to override the behavior of programs, and I fully support that.
It's just baffling to me why it has taken this long to get this basic feature back and why a GPU vendor had to implement it in their driver.
The reason we have this silly Fast Sync nomenclature is that the user base has long since lost visibility of the concept. This is just triple buffering, and you can read about it in an AnandTech article from 7 years ago.
Either developers mostly didn't think it was important or DirectX makes it difficult to achieve, which I find hard to believe.
I suppose most developers used DirectX's buffer queue and didn't think anything more of it. Backpressure in that queue is what causes shitty latency on 60hz displays with vsync enabled.
So boo to Nvidia for not just calling this what it is, but yay for doing it and giving users more options. I've never owned a 60Hz LCD monitor and I don't need to play games at 300fps, so it doesn't affect me much. I suppose it might be good for latency improvement in the games where I use Vsync and monitor backlight strobing.
You asked how it was proprietary. I simply let you know that when a feature gets relegated to a driver setting, that means that one vendor will let developers finagle it while the other does not. It becomes the end-user's responsibility to figure out how to turn the thing on/off and the procedure therefore differs depending on who manufactures their hardware.
You cannot say that it is not proprietary when there is no standard. Maybe if NVIDIA released this as a GameWorks feature that wraps DXGI you could make the claim that it is not proprietary.
It's nobody's problem in the end, because DX12, Windows Store and UWP games are required to utilize the Flip Presentation Model so Microsoft is effectively planning the obsolescence of this feature (slowly).
If that same setting (or should I say behavior) can be controlled via D3D/OGL then how is it proprietary? The only proprietary part here is the NVAPI interface but it's up to the developer to decide how to implement the feature and what interface to use.
And again, I'm not expecting a lot (or any, tbh) of developers to even implement this feature at the game-settings level. It seems way too forward-looking for a market where a simple AF control is still missing from the majority of released titles.
That said, it's a very good thing to get an option for it in the driver, since on a modern system it's basically impossible to enforce real triple buffering in fullscreen mode.
What would it take for that to happen?
Classic triple buffering is just vsync with three buffers instead of two; nothing more, nothing less. DirectX has let devs specify how many buffers they want in the queue pretty much forever. You could even have quadruple or quintuple buffering if you wanted!
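To illustrate the distinction (the struct and flags below are the real D3D11/DXGI ones; the commentary is just my reading of how they behave):

```cpp
#include <d3d11.h>

// "Triple buffering" as most modern games ship it: a 3-deep present queue.
// Every frame still waits its turn behind the two before it, so you get
// smoother throughput but also more latency -- the queue/backpressure
// behavior, not the drop-stale-frames behavior of classic triple buffering.
DXGI_SWAP_CHAIN_DESC QueuedTripleBufferDesc(HWND hwnd) {
    DXGI_SWAP_CHAIN_DESC desc = {};
    desc.BufferCount       = 3;                        // FIFO depth, not a ring
    desc.BufferDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.BufferUsage       = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.OutputWindow      = hwnd;
    desc.SampleDesc.Count  = 1;
    desc.Windowed          = TRUE;
    desc.SwapEffect        = DXGI_SWAP_EFFECT_DISCARD; // classic blt model
    return desc;
}
```

Bumping BufferCount to 4 or 5 gives you the quadruple/quintuple case; it just makes the queue deeper, and the latency worse.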
Does FAST Sync even matter for 60Hz monitors?
What would it take for that to happen?
What would it take for implementing triple buffering to happen, or what would it take for enforcing triple buffering from the user side to happen?
The former was answered by Kaldaien above; the latter was answered by Nvidia (though they could have just named it "triple buffering").
By the way, do we know if this works for DX9 games? Because it's practically impossible to get triple buffering with those in fullscreen.
Hey guys, I need help with this. I have a 980ti and I run most games at 1080p on my TV (60hz) so this tech sounds perfect for me. Getting rid of V-Sync would be a dream. When is this coming out? Do I enable it on the NVidia control panel? And then it transfers into the game I'm playing?
Sorry for not knowing anything and thanks for all the help!
You have to use Nvidia Profile Inspector. The option is under the Vertical Sync dropdown list and the value is called 0x18888888.
It'll probably be in the control panel for the next driver release.
Though you'd probably be just as well off forcing v-sync and capping at 60.
It is probably worth mentioning that this is the polar opposite of Adaptive VSYNC. That alleviates some of the vsync performance penalty without letting the GPU run at unbounded frame rates (which helps with both power and thermals versus no vsync at all).
With this, you'll always run at your GPU's ceiling and sit at the thermal/power limit constantly. If you're used to running games at a 60 fps cap on your 60hz monitor, be prepared for a GPU that runs hotter and uses more electricity.