
Nvidia FAST Sync, what PC gamers have needed for decades

Qassim

Member
Please keep in mind that Gsync is for under 60fps gaming

This is total and utter nonsense. '60fps' is a meaningless benchmark on a monitor with adaptive refresh; it's totally arbitrary. G-Sync is beneficial all the way through the refresh rate range (that's kind of the point..). The only reason people use the old 60fps benchmark when talking about G-Sync is to give people a reference point to understand the benefits. People understand what happens on a standard display when they drop under 60fps (or another set refresh rate), so telling them that fluctuating around that refresh rate no longer has the problems it did before gives them something to relate to.

But don't mistake that for 'is for under 60fps', because it's so, so far from the truth, and 60fps is such an arbitrary number on a G-Sync monitor that it doesn't even make sense. It makes as much sense as saying "it is for 56fps gaming" or "it is for 89fps gaming" or "between 45-55fps gaming", "90-112fps gaming", and so on - the point is that it's for all of that and more.

FastSync is cool and I'm really glad NVIDIA is still trying to solve one of the worst problems with computer game graphics in as many ways as possible (as they have different advantages), but I can't see any real reason to use it if I have a G-Sync monitor.
 
Is this something that would negate microstutter as well? Ever since I upgraded to Win 10, one of the games I play will microstutter whenever I have a flight stick or controller plugged in, and so far, none of the other fixes I've tried have worked.

No. If anything, this has the potential to add stutter depending on the framerate.
 
It would be nice if game devs would just implement real (!) triple buffering into their games natively, but since they don't, this seems fantastic.

I was going to switch to AMD for my next GPU, but now I'm probably going to stick with nVidia. This just seems fantastic.
 
Using fastsync at a frame rate below your monitor's refresh rate is functionally identical to using vsync, and you gain no latency advantage. Once your frame rate exceeds your monitor's refresh rate you begin to see improvements in your latency. Once you exceed twice your monitor's refresh rate (120 fps for 60 Hz, 240 fps for 120 Hz, etc.) you will see the full benefits of fastsync.

So basically this has 2 use cases:

1) Low to moderate frame rates where the user wants driver-level triple buffering. They are willing to put up with judder and/or frame pacing issues. The latency improvement will be minimal or nonexistent over standard vsync.

2) Frame rates approaching or exceeding twice the refresh rate of the user's monitor. The user gains a latency improvement; Nvidia claims latency averages only 8ms higher than running the game with vsync off.

I'm sure there is a middle ground at frame rates between 1x and 2x the refresh rate where people will see a mixture of 1 and 2. I'd go even further and say that their 8ms claim only holds for gamers with a 144Hz monitor. The advantage over vsync off on a 60Hz display is likely to be far, far smaller - probably 20ms+ over vsync off.

Considering that Tom Petersen from Nvidia says that 2-3x is the minimum (he's even mentioning 200-300 fps), I wouldn't really expect much potential here at all.

https://www.youtube.com/watch?v=xtely2GDxhU&feature=youtu.be&t=2h25m
 

TSM

Member
Is this something that would negate microstutter as well? Ever since I upgraded to Win 10, one of the games I play will microstutter whenever I have a flight stick or controller plugged in, and so far, none of the other fixes I've tried have worked.

No. This will completely eliminate frame pacing and always display the last rendered frame. It basically tells the program that vsync is off. Then it uses 3 buffers to display the last completed frame before the next monitor refresh. It discards any unused frames.

So if you are using a 60 hz monitor and the game is averaging 74 fps, the game engine is running at 74 fps. The driver will display the latest rendered 60 frames from the 74 that the engine rendered. This means you may see quite a bit of judder, and any latency improvements will be minimal. If the game is averaging 46 fps, then all 46 frames will be delivered as they are available to display and 14 of those frames will be displayed twice to achieve 60 hz. You will have quite a bit of judder, and there will be no latency improvement over vsync. The only positive is that the driver level triple buffering may be better than having it done in software in terms of latency or having your frame rate jump between 30 fps and 60 fps constantly.
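
To make the judder concrete, here's a throwaway sketch (my own illustration, nothing official from Nvidia) that just computes which engine frame is the newest completed one at each 60 Hz refresh when the engine renders at a steady 74 fps. It assumes perfectly even frame times, which no real game ever has:

[code]
// Hypothetical illustration: which engine frame does Fast Sync show at each refresh?
// Assumes perfectly even frame times; real games are far messier.
#include <cstdio>

int main() {
    const double refresh_hz = 60.0;  // monitor refresh rate
    const double engine_fps = 74.0;  // uncapped engine render rate ("vsync off" as far as the game knows)

    int last_shown = 0;
    for (int refresh = 1; refresh <= 10; ++refresh) {
        double vblank_time = refresh / refresh_hz;
        // Newest frame fully rendered before this refresh begins.
        int newest_done = static_cast<int>(vblank_time * engine_fps);
        std::printf("refresh %2d: show frame %2d (advanced by %d)\n",
                    refresh, newest_done, newest_done - last_shown);
        last_shown = newest_done;
    }
}
[/code]

The displayed frame advances by one engine frame on most refreshes and by two on roughly every fourth or fifth, and that uneven cadence is the judder.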

Fastsync is not a solution even beginning to approach gsync, nor was it intended to be. What's funny is that before gsync/freesync this would have been seen as huge. In a variable refresh rate world this is a small bandaid on a gaping 60 Hz wound.
 

taimoorh

Member
From what I'm reading here, FAST Sync is useless for 144Hz monitors: outside of some old games or Valve/Blizzard titles, nothing will render at 150+ fps even on a GTX 1080, so I'd still be using triple buffering in borderless window with RTSS.
 

TSM

Member
From what I'm reading here, FAST Sync is useless for 144Hz monitors: outside of some old games or Valve/Blizzard titles, nothing will render at 150+ fps even on a GTX 1080, so I'd still be using triple buffering in borderless window with RTSS.

Not useless, but it will only be useful for specific cases like games with terrible or nonexistent triple buffer solutions, if you demand such. People with gsync monitors have it really good now. If a game runs under 140 fps, use gsync. If it runs over 140 fps, turn on fastsync and frame pacing issues are unlikely to be noticeable. If a game runs over 200 fps then you win for sure. This basically eliminates tearing on gsync monitors for anyone that isn't hyper competitive and demands that last 8ms of input lag be gone by turning off vsync.
 

bwat47

Neo Member
No. This will completely eliminate frame pacing and always display the last rendered frame. It basically tells the program that vsync is off. Then it uses 3 buffers to display the last completed frame before the next monitor refresh. It discards any unused frames.

So if you are using a 60 hz monitor and the game is averaging 74 fps, the game engine is running at 74 fps. The driver will display the latest rendered 60 frames from the 74 that the engine rendered


So (correct me if I'm wrong), this sounds basically like playing a game with vsync disabled in borderless windowed mode?
 

riflen

Member
I've tried this out. It's possible to enable it through Nvidia Profile Inspector when using the current release driver. Set Vsync to 0x18888888.
It seems to work as advertised. I ran a game with arbitrary frame rates above display refresh rate and with no tearing. Vsync was disabled in game.
It appeared that input latency was decent, but there was absolutely a fair amount of judder as the frame rate approached that of the refresh rate. I need to test more thoroughly. It does not play nicely with SLI; extremely stuttery.
 

ValfarHL

Member
So in theory on my 60Hz IPS screen, the best way for me to play Rocket League is with fast sync and the framerate locked at 120 fps? Or 180 fps? In between those? Or is higher always better, even if it's variable?

So many posts with explanations, but I still have no idea what the correct answer is, haha.
 

shockdude

Member
I've tried this out. It's possible to enable it through Nvidia Profile Inspector when using the current release driver. Set Vsync to 0x18888888.
It seems to work as advertised. I ran a game with arbitrary frame rates above display refresh rate and with no tearing. Vsync was disabled in game.
It appeared that input latency was decent, but there was absolutely a fair amount of judder as the frame rate approached that of the refresh rate. I need to test more thoroughly. It does not play nicely with SLI; extremely stuttery.
Just tested 0x18888888 on my end, it doesn't seem to do anything on laptops with Nvidia Optimus. Not surprising, but disappointing nonetheless.
So in theory on my 60Hz IPS screen, the best way for me to play Rocket League is with fast sync and the framerate locked at 120 fps? Or 180 fps? In between those? Or is higher always better, even if it's variable?

So many posts with explanations, but I still have no idea what the correct answer is, haha.
Depends on what you're looking for.
Minimum latency without tearing = FastSync, VSync off, framerate unlocked. Lock your framerate only if you're concerned about GPU temperatures and/or don't want the GPU running at 100% all the time.
Perfect fluidity with low latency = VSync, framerate locked to equal refresh rate.
Both = GSync, framerate locked a couple FPS below monitor's maximum refresh rate (e.g. 140FPS at 144Hz)
 
Thanks for this thread. It's the real deal-- playing CSGO vsynced with almost no input latency! I will take stuttering vsync over screen tearing any day. Awesome. Very nice that it works on my old 560Ti 448 core. I used nvidia inspector and it already had the 0x18888888 hex value listed in the vsync dropdown menu.

Using fast sync on my system with CSGO, my fps flips between 60 and 120 depending on how demanding things get in the game. But even at 60fps with fast sync, I don't notice NEARLY as much latency as with regular driver-based vsync, which makes it feel like I'm playing CSGO underwater.

Super cool. But, of course, this is the only game I play with such high fps on a 60Hz monitor. It's basically "CSGO vsync" for me, but that's still great.
 

Parsnip

Member
So nvidia hasn't said when they are officially adding it to the control panel vsync options, have they?
Since you can already enable it from inspector, maybe next driver update.
 

dr_rus

Member
So nvidia hasn't said when they are officially adding it to the control panel vsync options, have they?
Since you can already enable it from inspector, maybe next driver update.

It's enabled in Pascal drivers. I'd say that the next common driver will add it officially to other cards as well.
 

Kaldaien

Neo Member
Like I said in another thread, it's "just" triple buffering.
Actual triple buffering as you can see described in textbooks from the 90s, not what has come to be called "triple buffering" recently but is really just a 3-buffer queue.

That said, it's a very good thing to get an option for it in the driver, since on a modern system it's basically impossible to enforce real triple buffering in fullscreen mode.


What bugs me about all of this is that the Flip Presentation Model in DXGI 1.2 (Windows 8) already has this functionality. n-buffering with late frame discard.

Instead of the typical 1->2->3 sequence of buffer swaps, you can go 1->3 (with 2 simply dropped) if you've managed to pump out two frames in-between swap intervals. This has tremendous input latency advantages and I've modded many D3D11 games to use it to great success.

Waitable Swapchains (Windows 8.1) can even mitigate the loss of framepacing you'd get from VSYNC. Apparently NVIDIA has no solution for that problem, but Microsoft already solved it :p

We don't need a stupid proprietary driver feature, we need developers to utilize features that already exist in Windows :)
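
For anyone curious what that looks like in practice, here's a bare-bones sketch of a flip-model swap chain with the 8.1-era waitable object. It assumes you already have a D3D11 device, an IDXGIFactory2 and an HWND, uses FLIP_DISCARD (Windows 10; use FLIP_SEQUENTIAL on 8/8.1), and skips all error handling:

[code]
// Sketch only: flip-model swap chain + frame latency waitable object.
#include <windows.h>
#include <d3d11.h>
#include <dxgi1_4.h>

IDXGISwapChain2* CreateFlipSwapChain(ID3D11Device* device, IDXGIFactory2* factory, HWND hwnd)
{
    DXGI_SWAP_CHAIN_DESC1 desc = {};                 // Width/Height left at 0 = take the window's size
    desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
    desc.SampleDesc.Count = 1;
    desc.BufferUsage = DXGI_USAGE_RENDER_TARGET_OUTPUT;
    desc.BufferCount = 3;                            // the "n" in n-buffering
    desc.SwapEffect = DXGI_SWAP_EFFECT_FLIP_DISCARD; // flip model (FLIP_SEQUENTIAL on Win 8/8.1)
    desc.Flags = DXGI_SWAP_CHAIN_FLAG_FRAME_LATENCY_WAITABLE_OBJECT;

    IDXGISwapChain1* sc1 = nullptr;
    factory->CreateSwapChainForHwnd(device, hwnd, &desc, nullptr, nullptr, &sc1);

    IDXGISwapChain2* sc2 = nullptr;
    sc1->QueryInterface(__uuidof(IDXGISwapChain2), reinterpret_cast<void**>(&sc2));
    sc1->Release();

    sc2->SetMaximumFrameLatency(1);                  // keep at most one frame queued
    return sc2;
}

void RenderLoop(IDXGISwapChain2* swapChain)
{
    HANDLE frameWait = swapChain->GetFrameLatencyWaitableObject();
    for (;;) {
        WaitForSingleObjectEx(frameWait, 1000, TRUE); // block until DXGI can accept another frame
        // ... render the frame ...
        swapChain->Present(0, 0);                     // SyncInterval 0 on flip model: a newer frame can replace a queued one
    }
}
[/code]

Presenting with SyncInterval 0 on a flip-model chain in a window is what gives you the "newest frame wins, no tearing" behaviour, and the waitable object plus SetMaximumFrameLatency(1) is what keeps the queue from building up and wrecking input latency.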
 

Durante

Member
What bugs me about all of this is that the Flip Presentation Model in DXGI 1.2 (Windows 8) already has this functionality. n-buffering with late frame discard.

Instead of the typical 1->2->3 sequence of buffer swaps, you can go 1->3 (with 2 simply dropped) if you've managed to pump out two frames in-between swap intervals. This has tremendous input latency advantages and I've modded many D3D11 games to use it to great success.

Waitable Swapchains (Windows 8.1) can even mitigate the loss of framepacing you'd get from VSYNC. Apparently NVIDIA has no solution for that problem, but Microsoft already solved it :p

We don't need a stupid proprietary driver feature, we need developers to utilize features that already exist in Windows :)
I know. When I said "enforce" I meant for the average user, not developers.

But in the absence of developers using it -- and face it, that's never going to happen across the board -- a driver-level switch is very convenient. I also don't see a reason to whine about it being "stupid proprietary" in this instance, it doesn't even have a developer-facing API! It's like complaining that DSR is proprietary.
 

Kaldaien

Neo Member
I know. When I said "enforce" I meant for the average user, not developers.

But in the absence of developers using it -- and face it, that's never going to happen across the board -- a driver-level switch is very convenient. I also don't see a reason to whine about it being "stupid proprietary" in this instance, it doesn't even have a developer-facing API! It's like complaining that DSR is proprietary.

Well, in the sense that this will be exposed through NvAPI while AMD never exposes it at all, it kind of is proprietary.

All of these driver features tend to work out that way, NvAPI exposes them and makes them something you can turn on/off in-game (which is where settings ideally belong), meanwhile AMD sits on their butts :)

Arkham Knight, for example, has Adaptive VSYNC as an option in-game for NV GPUs. AMD offers Adaptive VSYNC too but the only way you're going to engage it is fiddling around with Catalyst Control Center.
 

dr_rus

Member
Well, in the sense that this will be exposed through NvAPI while AMD never exposes it at all, it kind of is proprietary.

All of these driver features tend to work out that way, NvAPI exposes them and makes them something you can turn on/off in-game (which is where settings ideally belong), meanwhile AMD sits on their butts :)

Arkham Knight, for example, has Adaptive VSYNC as an option in-game for NV GPUs. AMD offers Adaptive VSYNC too but the only way you're going to engage it is fiddling around with Catalyst Control Center.

It's not exposed through anything, it's just a driver-level "hack" of applications' swap chains. It's no more "proprietary" than driver-level HBAO injection.
 

Kaldaien

Neo Member
It's not exposed through anything, it's just a driver-level "hack" of applications' swap chains. It's no more "proprietary" than driver-level HBAO injection.

It will have an NvAPI setting that can be changed. Therefore it is exposed on the NVIDIA side of the equation.

AMD's ADL API has very little in the way of driver control, so developers are unable to make the option available in-game for users.

Recall that in OpenGL, Adaptive VSYNC is a standard extension supported by all three vendors. Adaptive VSYNC in Direct3D didn't exist until DXGI 1.5 (Windows 10) and it has to be turned on with driver settings. NVIDIA has an API game developers can use to extend D3D and manipulate driver settings. AMD doesn't.


This feature will be no different. NvAPI profile setting? Check. ADL API call to turn it on/off? Nope.
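
For what it's worth, flipping that profile setting from game code is only a handful of lines through NvAPI's DRS interface. A rough sketch, assuming the NvAPI SDK headers; VSYNCMODE_ID is what I recall NvApiDriverSettings.h calling the setting, and 0x18888888 is the Fast Sync value Inspector exposes, so treat both as assumptions rather than anything documented for this feature:

[code]
// Sketch: forcing a vsync-mode value into the driver profile via NvAPI DRS.
// The setting-ID name and the 0x18888888 value are assumptions, not documented API for Fast Sync.
#include <nvapi.h>
#include <NvApiDriverSettings.h>   // for the setting-ID constants (assumed)

bool ForceVsyncMode(NvU32 value)   // e.g. 0x18888888 for Fast Sync
{
    if (NvAPI_Initialize() != NVAPI_OK)
        return false;

    NvDRSSessionHandle session = nullptr;
    NvAPI_DRS_CreateSession(&session);
    NvAPI_DRS_LoadSettings(session);

    NvDRSProfileHandle profile = nullptr;
    NvAPI_DRS_GetBaseProfile(session, &profile);   // global profile; a game would target its own profile

    NVDRS_SETTING setting = {};
    setting.version = NVDRS_SETTING_VER;
    setting.settingId = VSYNCMODE_ID;              // name assumed from NvApiDriverSettings.h
    setting.settingType = NVDRS_DWORD_TYPE;
    setting.u32CurrentValue = value;

    NvAPI_Status status = NvAPI_DRS_SetSetting(session, profile, &setting);
    if (status == NVAPI_OK)
        NvAPI_DRS_SaveSettings(session);

    NvAPI_DRS_DestroySession(session);
    return status == NVAPI_OK;
}
[/code]

That's the entire "exposed on the NVIDIA side" story; there's no equivalent call in AMD's ADL, which is the asymmetry I'm complaining about.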
 

paskowitz

Member
So in theory on my 60Hz IPS screen, the best way for me to play Rocket League is with fast sync and the framerate locked at at 120 fps? Or 180 fps? In between those? Or is higher always better, even if it's variable?

So many posts with explainations, but I still have no idea what the correct answer is, haha.

You could also use DSR and no vsync. I run at 4K just to keep the FPS near 60.
 

dr_rus

Member
It will have an NvAPI setting that can be changed. Therefore it is exposed on the NVIDIA side of the equation.

AMD's ADL API has very little in the way of driver control, so developers are unable to make the option available in-game for users.

Recall that in OpenGL, Adaptive VSYNC is a standard extension supported by all three vendors. Adaptive VSYNC in Direct3D didn't exist until DXGI 1.5 (Windows 10) and it has to be turned on with driver settings. NVIDIA has an API game developers can use to extend D3D and manipulate driver settings. AMD doesn't.


This feature will be no different. NvAPI profile setting? Check. ADL API call to turn it on/off? Nope.

Well, what's the problem here? If you want uniform support use the common API interfaces as they are present already as you've said (this will lead to the feature only working in Win8+ however). If not then you can use NVAPI for NV cards (which will make the feature work on any platform with an NV card basically). AMD don't have the same feature exposed in their ADL? How's that a problem of anyone but AMD?

I don't think that many developers will use this feature anyway as most developers don't make games which are expected to run at 120+ fps on launch. So a driver level "hack" is quite useful, be it "proprietary" or not.
 

Kaldaien

Neo Member
Well, what's the problem here? If you want uniform support use the common API interfaces as they are present already as you've said (this will lead to the feature only working in Win8+ however). If not then you can use NVAPI for NV cards (which will make the feature work on any platform with an NV card basically). AMD don't have the same feature exposed in their ADL? How's that a problem of anyone but AMD?

I don't think that many developers will use this feature anyway as most developers don't make games which are expected to run at 120+ fps on launch. So a driver level "hack" is quite useful, be it "proprietary" or not.

You asked how it was proprietary. I simply let you know that when a feature gets relegated to a driver setting, that means that one vendor will let developers finagle it while the other does not. It becomes the end-user's responsibility to figure out how to turn the thing on/off and the procedure therefore differs depending on who manufactures their hardware.

You cannot say that it is not proprietary when there is no standard. Maybe if NVIDIA released this as a GameWorks feature that wraps DXGI you could make the claim that it is not proprietary.

It's nobody's problem in the end, because DX12, Windows Store and UWP games are required to utilize the Flip Presentation Model so Microsoft is effectively planning the obsolescence of this feature (slowly).
 
Well, what's the problem here? If you want uniform support use the common API interfaces as they are present already as you've said (this will lead to the feature only working in Win8+ however). If not then you can use NVAPI for NV cards (which will make the feature work on any platform with an NV card basically). AMD don't have the same feature exposed in their ADL? How's that a problem of anyone but AMD?

I don't think that many developers will use this feature anyway as most developers don't make games which are expected to run at 120+ fps on launch. So a driver level "hack" is quite useful, be it "proprietary" or not.

Not to mention most people don't give a shit about any of these kinds of features.
 

Kaldaien

Neo Member
How do you know?

Even if true, clearly the primary purpose of this setting is to allow users to override the behavior of programs, and I fully support that.

Because there's not a single driver setting that cannot be changed through NvAPI :) The NVIDIA Control Panel uses it to manipulate its settings. NVIDIA Inspector can be used to show you the myriad of options you didn't even know you had (many of them undocumented and specific to one or two games).

I fully support this as a driver option as well, especially after seeing how profoundly modding Fallout 4 from the BitBlt model to the Flip Model fixed its frame pacing issues.

I'm just not pleased with the way stuff in Direct3D always works out :(

Driver settings are the go-to solution, and that puts a huge gulf between the two vendors: one is open to having their driver settings changed through software and the other requires the end-user to use their clunky software. I can't even find my way around the new AMD driver settings app.
 

Durante

Member
Ok, if that's what you meant. But I'd consider a game which actively changes driver settings (ones clearly intended for the end-user to set) misbehaved.
 

Kaldaien

Neo Member
I don't know about that.

Like I said, Adaptive VSYNC is only available in D3D through driver settings. Many AAA games like Far Cry 4, Arkham Knight, etc. offer the end-user the ability to turn Adaptive VSYNC on in-game and they do it by manipulating driver settings (only on NV hardware).
 

M3d10n

Member
It's just baffling to me why it has taken this long to get this basic feature back and why a GPU vendor had to implement it in their driver.
The reason we have this silly Fast Sync nomenclature is because the user base has long since lost visibility of the concept. This is just triple buffering and you can read about it in this AnandTech article from 7 years ago.

Either developers mostly didn't think it was important or DirectX makes it difficult to achieve, which I find hard to believe.
I suppose most developers used DirectX's buffer queue and didn't think anything more of it. Backpressure is what causes shitty latency on 60Hz displays with Vsync enabled.

So boo to Nvidia for not just calling this what it is, but yay for doing it and giving users more options. I've never owned a 60Hz LCD monitor and I don't need to play games at 300fps, so it doesn't affect me much. I suppose it might be good for latency improvement in the games where I use Vsync and monitor backlight strobing.

Classic triple buffering is just vsync with three buffers instead of two. Nothing more, nothing less. DirectX has let devs specify how many buffers they want in the queue since pretty much forever. You could even have quadruple or quintuple buffering if you wanted!
 

dr_rus

Member
You asked how it was proprietary. I simply let you know that when a feature gets relegated to a driver setting, that means that one vendor will let developers finagle it while the other does not. It becomes the end-user's responsibility to figure out how to turn the thing on/off and the procedure therefore differs depending on who manufactures their hardware.

You cannot say that it is not proprietary when there is no standard. Maybe if NVIDIA released this as a GameWorks feature that wraps DXGI you could make the claim that it is not proprietary.

It's nobody's problem in the end, because DX12, Windows Store and UWP games are required to utilize the Flip Presentation Model so Microsoft is effectively planning the obsolescence of this feature (slowly).

If that same setting (or should I say behavior) can be controlled via D3D/OGL then how is it proprietary? The only proprietary part here is the NVAPI interface but it's up to the developer to decide how to implement the feature and what interface to use.

And again, I'm not expecting that a lot (or any, tbh) of developers will even implement this feature on a game settings level. Seems way too forward-looking for a market where a simple AF control is still missing from the majority of titles released.
 

Kaldaien

Neo Member
If that same setting (or should I say behavior) can be controlled via D3D/OGL then how is it proprietary? The only proprietary part here is the NVAPI interface but it's up to the developer to decide how to implement the feature and what interface to use.

And again, I'm not expecting that a lot (or any, tbh) of developers will even implement this feature on a game settings level. Seems way too forward-looking for a market where a simple AF control is still missing from the majority of titles released.

This actually cannot be enabled in OpenGL. OpenGL lacks a lot of low-level control over the way it presents stuff, largely because of its GDI legacy baggage :-\ You more or less assume, however, that when you turn on the driver setting for Triple Buffering in OpenGL games it works this way (not D3D's rigid 1->2->3 presentation order, but rather discarding the oldest image in the two backbuffers).

Anisotropic Filtering cannot be enabled in certain games because the textures are packed in such a way that bleeding would occur if the sample pattern changed. That's not the reason for ALL games lacking AF settings of course, but it does happen for a handful of them :)

Driver-overridden AF is a bad thing a lot of the time; it will cause bleeding on lightmaps, spritesheets and other things. No developer in their right mind would ever use a driver setting rather than deciding which textures are acceptable to be anisotropically filtered.

But this feature is more or less benign. It would take 5 minutes to add the option to a game and would shut up a lot of people who complain about bad input latency when framerate is unstable. There are benefits to FastSync beyond just the double-FPS use-case you've been discussing.

To reap those benefits without using a driver setting would actually be weeks worth of work. 5 minutes vs. weeks? I think it's worth it :)
 

Calm Mind

Member
Like I said in another thread, it's "just" triple buffering.
Actual triple buffering as you can see described in textbooks from the 90s, not what has come to be called "triple buffering" recently but is really just a 3-buffer queue.

That said, it's a very good thing to get an option for it in the driver, since on a modern system it's basically impossible to enforce real triple buffering in fullscreen mode.

What would it take for that to happen?
 

riflen

Member
Classic triple buffering is just vsync with three buffers instead of two. Nothing more, nothing less. DirectX has let devs specify how many buffers they want in the queue since pretty much forever. You could even have quadruple or quintuple buffering if you wanted!

I'm going to ask if you read the article I linked. Triple buffering doesn't implement a queue. This has been discussed ad nauseam in this thread already. What you're describing is a render-ahead or flip queue.
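
If it helps, the difference is just a few lines of scheduling policy. Purely an illustration (no real API involved): a render-ahead/flip queue blocks the game once its slots are full, while real triple buffering never blocks and simply shows the newest completed frame at each vblank.

[code]
// Illustration only: the scheduling policy, not any real graphics API.
#include <deque>

struct Frame { int id; };

// What DirectX games usually ship as "triple buffering": a FIFO render-ahead (flip) queue.
// Once the queue is full the engine has to wait -> back-pressure -> input latency.
struct FlipQueue {
    std::deque<Frame> queued;
    bool engineMayRender() const { return queued.size() < 3; } // otherwise the engine stalls
    void frameFinished(Frame f)  { queued.push_back(f); }
    Frame onVBlank() {                                          // always shows the OLDEST queued frame
        if (queued.empty()) return Frame{0};
        Frame f = queued.front();
        if (queued.size() > 1) queued.pop_front();
        return f;
    }
};

// Classic triple buffering (and Fast Sync): the engine never waits for the display.
// A newly finished frame overwrites the pending one; superseded frames are simply discarded.
struct TripleBuffer {
    Frame front{0}, pending{0};
    bool engineMayRender() const { return true; }               // never blocks
    void frameFinished(Frame f)  { pending = f; }               // older pending frame is dropped
    Frame onVBlank() { front = pending; return front; }         // newest completed frame wins
};
[/code]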
 

riflen

Member
Does FAST Sync even matter for 60Hz monitors?

It matters most for 60 Hz monitors imo. With a refresh ceiling of only 60 Hz, you're quite likely to be rendering at a higher rate than your panel can display, especially if you're playing trivial to render games like CS:GO or Overwatch.
 

Kaldaien

Neo Member
It is probably worth mentioning that this is the polar opposite of Adaptive VSYNC. That alleviates some of the VSYNC performance penalty without letting the GPU run at unbounded framerates (helps with both power and thermal vs. no VSYNC at all).

With this, you'll run at your GPU's ceiling always and be at thermal/power limit constantly. If you're used to running games at a 60 FPS cap on your 60 Hz monitor, be prepared for a GPU that runs hotter and uses more electricity.
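
If the heat and power draw bother you, you can still cap the frame rate yourself (RTSS, an in-engine limiter, or a few lines of code) and leave Fast Sync to handle whatever spills past the refresh rate. The usual sleep-until-deadline limiter, just as a sketch:

[code]
// Sketch of a simple frame limiter: render, then sleep until the next 1/cap deadline
// so the GPU isn't left running flat out. RTSS or an in-engine limiter does the same
// job with better precision than a plain sleep.
#include <chrono>
#include <thread>

void runCapped(double capFps)
{
    using clock = std::chrono::steady_clock;
    const auto frameBudget = std::chrono::duration_cast<clock::duration>(
        std::chrono::duration<double>(1.0 / capFps));

    auto nextDeadline = clock::now() + frameBudget;
    for (;;) {
        // ... simulate + render + present the frame here ...

        std::this_thread::sleep_until(nextDeadline);  // give the GPU/CPU a breather
        nextDeadline += frameBudget;
    }
}
[/code]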
 

Durante

Member
What would it take for that to happen?
What would it take for implementing triple buffering to happen or what would it take for enforcing triple buffering from the user side to happen?
The former was answered by Kaldaien above, the latter was answered by Nvidia (though they could have named it "triple buffering" :p).

By the way, do we know if this works for DX9.0 games? Because it's really impossible to get triple buffering with those in fullscreen.
 

ACH1LL3US

Member
What would it take for implementing triple buffering to happen or what would it take for enforcing triple buffering from the user side to happen?
The former was answered by Kaldaien above, the latter was answered by Nvidia (though they could have named it "triple buffering" :p).

By the way, do we know if this works for DX9.0 games? Because it's really impossible to get triple buffering with those in fullscreen.


I hope you do an extensive article with testing on Fast Sync; I'd love to hear your in-depth testing and thoughts on it!!
 

dogen

Member
What would it take for implementing triple buffering to happen or what would it take for enforcing triple buffering from the user side to happen?
The former was answered by Kaldaien above, the latter was answered by Nvidia (though they could have named it "triple buffering" :p).

By the way, do we know if this works for DX9.0 games? Because it's really impossible to get triple buffering with those in fullscreen.

I tried it in Half-Life 2; it seems like it works fine. Less latency than regular v-sync, and MSI Afterburner is showing framerates above 60 while the Steam counter is showing a flat 60.
 

BruceCLea

Banned
Hey guys, I need help with this. I have a 980ti and I run most games at 1080p on my TV (60hz) so this tech sounds perfect for me. Getting rid of V-Sync would be a dream. When is this coming out? Do I enable it on the NVidia control panel? And then it transfers into the game I'm playing?

Sorry for not knowing anything and thanks for all the help!
 

dogen

Member
Hey guys, I need help with this. I have a 980ti and I run most games at 1080p on my TV (60hz) so this tech sounds perfect for me. Getting rid of V-Sync would be a dream. When is this coming out? Do I enable it on the NVidia control panel? And then it transfers into the game I'm playing?

Sorry for not knowing anything and thanks for all the help!

You have to use Nvidia Profile Inspector. The option is under the V-Sync dropdown list and it's called 0x18888888.

It'll probably be in the control panel for the next driver release.

Though you might be just as well off forcing v-sync and capping at 60.
 

BruceCLea

Banned
You have to use Nvidia Profile Inspector. The option is under the V-Sync dropdown list and it's called 0x18888888.

It'll probably be in the control panel for the next driver release.

Though you'd probably be just as well off forcing v-sync and capping at 60.

Thank you!!
 

Mabufu

Banned
"What PC gamers needed for decades" = "What a few elitists that need the best of the best of the best to be able to sleep at night wants"

But looks like a cool feature tho.
 

Buburibon

Member
It is probably worth mentioning that this is the polar opposite of Adaptive VSYNC. That alleviates some of the VSYNC performance penalty without letting the GPU run at unbounded framerates (helps with both power and thermal vs. no VSYNC at all).

With this, you'll run at your GPU's ceiling always and be at thermal/power limit constantly. If you're used to running games at a 60 FPS cap on your 60 Hz monitor, be prepared for a GPU that runs hotter and uses more electricity.

Interesting, so there's no way to set an arbitrary ceiling yourself at, say, 120fps, or 80% GPU load, or what have you? I'd rather keep GPU load at or below 90% outside of benchmarks.

I'll try setting the power limit to 90% via MSI Afterburner and see what happens. :p
 