
BlurBusters 240Hz G-SYNC Input Lag Test (excellent article / tons of real-world data)

Izuna

Banned
So does having RTSS set to 141 globally and a game like Overwatch at 141 using its own frame rate limiter override the option?

I have my RTSS set to 141 on global just so I don't have to configure each game separately but wouldn't mind doing so if I needed to.

A convenient solution is to add profiles in RTSS for the games that have in-game fps limiters and make sure the global cap doesn't affect them. You'll need to do this for Nier, for example.

I just load up RTSS when I wanna play a game. Most games don't sit on my system for very long.
 

jorimt

Member
This is not the case, and that part of the article isn't very clear.
It's right in that there are two entirely different things commonly called "triple buffering": one of them adds lag, one of them reduces it, and only one of them is what I would call triple buffering.

And in fact, that -- as in, real triple buffering -- is what NV has now branded FastSync! So with that you can actually use it in most games, and they do in fact test it in the article :p

Yeah, I didn't want to get into the weeds on the whole "triple buffering" subject, as it wasn't pertinent to G-SYNC, and I tested Fast Sync, which (as you say) is basically true old-style triple buffering.

So any other available "triple buffer" method is effectively superseded if you have a non-G-SYNC display and Nvidia hardware. And obviously, with G-SYNC, all forms of triple buffering are useless when paired with it anyway.

Wait, am I reading this right??

Even if the FPS are below the refresh rate, Fast Sync (bottom right) is still far superior to normal V-Sync (top left)?
Wasn't it advertised as "use this if you have ~3x the FPS of your display refresh rate"?

Simply put, Fast Sync prevents the over-queuing of frames in the buffers when the framerate exceeds the refresh rate; V-SYNC does not. That's the difference you're seeing. Both methods are relatively similar when capped below the refresh rate, at least input latency-wise.
 

TheExodu5

Banned
This is not the case, and that part of the article isn't very clear.
It's right in that there are two entirely different things commonly called "triple buffering": one of them adds lag, one of them reduces it, and only one of them is what I would call triple buffering.

And in fact, that -- as in, real triple buffering -- is what NV has now branded FastSync! So with that you can actually use it in most games, and they do in fact test it in the article :p

Makes sense.

D3DOverrider definitely didn't implement proper triple buffering.
 

jorimt

Member
Durante is right; while passable, my triple buffer paragraph wasn't 100% clear in the differentiation between the two methods.

Both methods are called "triple buffering" because they have a third buffer, and both methods avoid locking to half refresh rate when the sustained framerate falls below the refresh rate, but that is where their similarity ends.

Each uses the third buffer differently:

"Fake" triple buffer V-SYNC simply adds a third buffer to the two existing buffers of double buffer V-SYNC, and serves the sole purpose of holding a rendered backup frame that can be swapped to when the framerate falls below the refresh rate and a fixed delivery window is missed. Unlike double buffer V-SYNC, this prevent the lock to half refresh rate, but when the refresh rate is exceeded, the third buffer is simply another buffer that can be over-filled, and (as I noted in my article) causes up to 1 added frame of delay over double buffer V-SYNC in the same instance.

This "fake" method is what is featured in the majority of modern games, and is really only useful if you must have V-SYNC, but your framerate can only sustain framerates inside the refresh rate.

"True" triple buffer V-SYNC is rarely featured in most games that use "triple buffering," but it is effectively the method that the DWM (Desktop Window Manager, or "Aero") implements for borderless and windowed mode. Fast Sync, as Durante mentioned, uses a form of this "true" method as well.

Like "fake" triple buffer V-SYNC, the "true" method has a third buffer, and it behaves pretty similarly to the fake method below the refresh rate. The real difference is what it does above the refresh rate...

Above the refresh rate, the third buffer uses the excess frames to display the most recently completed frame at each refresh cycle, drops the rest, and repeats. This prevents tearing and does not allow over-queuing of the buffers (unlike the fake method), and thus has very little added latency (1/2 to 1 frame over V-SYNC OFF on average). But since it drops frames, it can also introduce uneven frame pacing, a.k.a. microstutter.
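
If it helps to see the above-the-refresh-rate difference numerically, here's a minimal sketch (my own simplification, not code from the article or any actual driver) comparing roughly how long the displayed frame has been waiting around in each case:

```python
REFRESH_HZ = 60
REFRESH_DT = 1.0 / REFRESH_HZ            # one refresh interval in seconds

def fake_triple_buffer_wait_ms(num_refreshes=6, queued_frames=2):
    """'Fake' method: frames queue in series behind the back buffers and nothing
    is dropped, so once the queue is full each displayed frame finished rendering
    roughly `queued_frames` refresh intervals earlier."""
    return [queued_frames * REFRESH_DT * 1000.0] * num_refreshes

def true_triple_buffer_wait_ms(num_refreshes=6, render_fps=173):
    """'True' method / Fast Sync: keep rendering, show the newest completed frame
    at each refresh, drop the rest. The displayed frame finished at most one
    render interval before the refresh it lands on. (173 fps is just an arbitrary
    uncapped framerate above 60.)"""
    render_dt = 1.0 / render_fps
    waits = []
    for r in range(1, num_refreshes + 1):
        t_refresh = r * REFRESH_DT
        t_newest_done = (t_refresh // render_dt) * render_dt  # newest finished frame
        waits.append((t_refresh - t_newest_done) * 1000.0)
    return waits

print("fake :", [round(w, 1) for w in fake_triple_buffer_wait_ms()])  # ~33ms each at 60Hz
print("true :", [round(w, 1) for w in true_triple_buffer_wait_ms()])  # a few ms each
```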

The DWM composition, however, adds 1 frame of delay on top of whatever latency the triple buffer method adds over V-SYNC OFF, so borderless/windowed mode under the DWM is not as low latency as Fast Sync is in exclusive fullscreen (which bypasses the DWM).

All that said, I'm not 100% certain of the complete list of differences between "true" triple buffer V-SYNC, DWM triple buffer, and Fast Sync, and honestly, I'm a little weak on the triple buffer subject.

Also, after describing all that, I realized now why I didn't elaborate in my article; it wasn't worth the effort in light of the primary subject at hand.

However, I would be happy for Durante to share any further clarification or corrections of the above, and I could also attempt to clarify that paragraph (in a much shorter, to the point form of the above) in my article to more accurately reflect the differing nature of the two methods.
 

TheExodu5

Banned
Ok, for my own clarification and sanity, since it's so hard to find any literature on the subject, I'd like to clarify.

As I understand it, fake triple buffering draws the frame in the buffer and then doesn't replace it until it has been displayed. Proper triple buffering continues to draw frames and will replace the buffer with the latest frame regardless of whether it's been displayed or not. Is this right?

What happens when the framerate goes higher than the refresh rate with fake triple buffering? I assume it just stops drawing until the monitor polls (which I guess would be v-sync)?
 

Tomeru

Member
Fucking confusing. Should I enable vsync on my 144 gsync monitor for every game, while limiting frames to 141?
 
Great article. I understand now that I should enable vsync and gsync in the nVidia control panel. But the article is a bit above my pay grade; can someone explain in simple terms why I should have vsync on at all? I thought vsync introduced input lag, and is a large part of the reason why gsync exists at all.
 

Durante

Member
jorimt, that was a great description, I don't have anything to add.

Ok, for my own clarification and sanity, since it's so hard to find any literature on the subject, I'd like to clarify.

As I understand it, fake triple buffering draws the frame in the buffer and then doesn't replace it until it has been displayed. Proper triple buffering continues to draw frames and will replace the buffer with the latest frame regardless of whether it's been displayed or not. Is this right?
It will replace the oldest buffer with the latest frame, yes.

What happens when the framerate goes higher than the refresh rate with fake triple buffering? I assume it just stops drawing until the monitor polls (which I guess would be v-sync)?
Yes, it stops, just like double-buffered V-sync does, but with one more frame in the queue.
 

Mechazawa

Member
Yeah, I don't actually understand why people are saying to turn on vsync and cap your refresh rate below the gsync threshold. That seems like contradicting advice.
 

Arkanius

Member
Fucking confusing. Should I enable vsync on my 144 gsync monitor for every game, while limiting frames to 141?

Gsync on + Vsync on Nvidia Control Panel + Vsync OFF INGAME + 141 FPS limit ingame or with RTSS

Yeah, I don't actually understand why people are saying to turn on vsync and cap your refresh rate below the gsync threshold. That seems like contradicting advice.

Because the difference is beyond small and Vsync on in the NVCP has smoothness benefits
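
And for anyone wondering where 141 comes from: it's simply the 144Hz refresh rate minus a small margin so the framerate never hits the G-SYNC ceiling. A trivial sketch (my own, using the "refresh minus ~3 FPS" convention in this thread) for other refresh rates:

```python
def gsync_fps_cap(refresh_hz: float, margin_fps: float = 3.0) -> float:
    """FPS cap a small margin below the refresh rate so G-SYNC stays in range
    (e.g. 144Hz -> 141 FPS, as recommended above)."""
    return refresh_hz - margin_fps

for hz in (60, 100, 120, 144, 165, 240):
    print(f"{hz:>3} Hz -> cap at {gsync_fps_cap(hz):.0f} FPS")
```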
 

Mechazawa

Member
Because the difference is beyond small and Vsync on in the NVCP has smoothness benefits

But what smoothness? If you're already instituting an in-game cap, vsync basically never kicks in since it's only supposed to do that once you blow past your gsync threshold.
 

Paragon

Member
Yeah, I don't actually understand why people are saying to turn on vsync and cap your refresh rate below the gsync threshold. That seems like contradicting advice.
If framerate limiters were perfect there would be no need, but you will occasionally get frame-time spikes that exceed your framerate limit.
The limiter itself usually does not pick these up - RTSS may show a perfect frametime graph - but the refresh rate counter built into the display will show them.

If you have V-Sync disabled, it will tear badly and stutter for multiple frames.
If you have V-Sync enabled, there is apparently frame-time compensation being applied to prevent those spikes causing problems, rather than simply switching over to V-Sync behavior.

If I have V-Sync completely disabled when G-Sync is active, I have to set the framerate limiter to around 15 FPS below the refresh rate to prevent many games from ever tearing.

Ok, for my own clarification and sanity, since it's so hard to find any literature on the subject, I'd like to clarify.
As I understand it, fake triple buffering draws the frame in the buffer and then doesn't replace it until it has been displayed. Proper triple buffering continues to draw frames and will replace the buffer with the latest frame regardless of whether it's been displayed or not. Is this right?
What happens when the framerate goes higher than the refresh rate with fake triple buffering? I assume it just stops drawing until the monitor polls (which I guess would be v-sync)?
'Fake' triple buffering has two back-buffers queuing up frames in series, waiting once they are full. So the framerate is capped at your refresh rate.
'Real' triple buffering has two back-buffers in parallel, and will flip between them, rendering as many frames as possible. Then when it's time to refresh the display, the most recent complete frame is moved to the front-buffer.
 

jorimt

Member
Yeah, I don't actually understand why people are saying to turn on vsync and cap your refresh rate below the gsync threshold. That seems like contradicting advice.

But what smoothness? If you're already instituting an in-game cap, vsync basically never kicks in since it's only supposed to do that once you blow past your gsync threshold.

Your comment more than likely means you did not read the article in full (or at all), where I give the answer in detail.

As this has been a common question in reaction to my article (or more likely, to other comments about my article) thus far, I can't answer it every time, but I will attempt to answer it here once.

With G-SYNC enabled, "V-SYNC" is not "V-SYNC." It is instead (as far as can be figured) effectively a flag at the driver level that allows G-SYNC to compensate for sudden frametime variances. These variances occur whenever the frametime of an already delivered frame was within the G-SYNC range/refresh rate, and then the frametime of the next frame either exceeds the refresh rate, or drops far below the previous frame's frametime.

G-SYNC can compensate for gradual changes to frametime, even with V-SYNC "Off," but without the frametime compensation mechanic V-SYNC "On" allows, it will tear in these instances of sudden fluctuation.

If you want "Adaptive G-SYNC," by all means, go ahead and use G-SYNC + V-SYNC "Off," just know it will tear at one point or another, and that it actually disables core G-SYNC functionality to allow tearing in the first place. If you want no tearing at any point, G-SYNC + V-SYNC "On" is the only option.

As for why Nvidia chose the labeling and placement they did when they introduced the option a couple of years ago, I understand their reasons (a logistical issue), but I don't agree with them; it has caused needless confusion, paranoia, and misinformation since. A few simple placement and label changes to the G-SYNC section and V-SYNC dropdown in the control panel would resolve this confusion once and for all.

tldr; read my article in full if you're interested, it has the answers.
 

TheExodu5

Banned
jorimt, that was a great description, I don't have anything to add.

It will replace the oldest buffer with the latest frame, yes.

Yes, it stops, just like double-buffered V-sync does, but with one more frame in the queue.

Alright great. All makes sense now.

So, for simplicity's sake, let's call "fake" triple buffering simply Triple Buffering and proper triple buffering Fast Sync.

Are there any examples of games/engines that implemented FastSync in the past? I recall Sanctum 2 having remarkably low input lag back when I wasn't using GSync and so I assume it's using FastSync.

The Source engine, at the time of Portal 2's release, was curious. It had a double/triple buffering toggle within the game settings. At release, choosing Double Buffering resulted in noticeably lower input lag. However, some years later, this no longer seemed to be the case; both options seemed to offer similarly high input lag. I wondered if this was an engine issue or if Nvidia defaulted to triple buffering at the driver level at some point.

I can't remember the last time on PC where I played a game that was truly double buffered, which seemed to go against the general recommendation of using d3doverrider in the past.

As for the recommendation from Blur Busters to frame cap just below the monitor refresh rate, I take it this is essentially to force just-in-time rendering. It seems as though GSync is still not completely GPU driven. The monitor is still polling the GPU, I'm assuming at each refresh interval. If GSync truly can't be made to be GPU driven, I could foresee some improvements in the polling implementation in the future.

Say, the polling could be made asynchronous to the refresh intervals. The GSync module could poll at 1000Hz and could interrupt the next refresh interval until the following frame is truly ready. This could eliminate the ceiling issue entirely. Don't know how feasible it would be, but I certainly hope some improvements can be made here so that we don't need to resort to frame limiting on a game-per-game basis.
 

jorimt

Member
As for the recommendation from Blur Busters to frame cap just below the monitor refresh rate, I take it this is essentially to force just-in-time rendering. It seems as though GSync is still not completely GPU driven. The monitor is still polling the GPU, I'm assuming at each refresh interval. If GSync truly can't be made to be GPU driven, I could foresee some improvements in the polling implementation in the future.

G-SYNC must be able to adjust the refresh rate to the framerate, or it reverts to fixed refresh rate V-SYNC behavior. So even if the polling rate were reduced or eliminated, that simple fact would stand. Though, yes, there may be a clever way to keep G-SYNC within the refresh rate without need of an external FPS limiter, but if Nvidia had figured that out, you'd think they would have done it by now.

Say, the polling could be made asynchronous to the refresh intervals. The GSync module could poll at 1000Hz and could interrupt the next refresh interval until the following frame is truly ready. This could eliminate the ceiling issue entirely. Don't know how feasible it would be, but I certainly hope some improvements can be made here so that we don't need to resort to frame limiting on a game-per-game basis.

As far as is known, G-SYNC already has a 1000Hz (1ms) polling rate. However, even if it was instantaneous, due to multiple factors that create various delivery offsets, it would still have to suspend the frame periodically between refresh cycles before it could align with the top of the next scanout because of scanout speed.

As stated in my article, G-SYNC can control how many times the scanout is repeated per second, and this also controls the duration of the vertical blanking interval from frame to frame. However, G-SYNC can do nothing about the scanout speed (completion time) of the given refresh rate, and it never will be able to. This is why you see large differences between V-SYNC OFF and G-SYNC at 60Hz (16.6ms scanout speed), and little to no differences at 240Hz (4.2ms scanout speed) in my tests.

That, and even if it could, there would still be frametime variances output by the system which would have to be accounted for. This means the frametime compensation mechanic already implemented in G-SYNC is actually necessary for its own polling rate (whether that still exists or not), frametime variances, and scanout speed.

In other words, G-SYNC is limited by current display technology.

This will become less and less of an issue as maximum refresh rates, and thus scanout speeds, increase, however.
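
To spell out the scanout numbers referenced above (nothing more than 1000 divided by the refresh rate):

```python
for hz in (60, 120, 144, 240):
    scanout_ms = 1000.0 / hz   # time for one full top-to-bottom scanout
    print(f"{hz:>3} Hz: ~{scanout_ms:.1f} ms per frame")
# 60Hz ~16.7ms, 144Hz ~6.9ms, 240Hz ~4.2ms -- the figures quoted above
```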
 

Knurek

Member
jorimt, just a question, could you maybe test Kaldaien's SpecialK framelimiter? How does it stack up against RTSS wrt input lag?
(Might be relevant given that RTSS is closed source and can pretty much disappear overnight on Unwinder's whim)
 

LCGeek

formerly sane
G-SYNC must be able to adjust the refresh rate to the framerate, or it reverts to fixed refresh rate V-SYNC behavior. So even if the polling rate were reduced or eliminated, that simple fact would stand. Though, yes, there may be a clever way to keep G-SYNC within the refresh rate without need of an external FPS limiter, but if Nvidia had figured that out, you'd think they would have done it by now.

We only have G-Sync because AMD and Nvidia aren't allowed to modify Windows to better handle this issue. Nvidia has plenty figured out in this area; sadly, they lack the means to make the changes, which basically now requires users to buy hardware to do what software could do.

We could have G-Sync and strobing mixed, but due to the way the most popular gaming OS works, we can't have plenty of good things, be it vsync, CPU scheduling, or a useful network kernel.
 

jorimt

Member
jorimt, just a question, could you maybe test Kaldaien's SpecialK framelimiter? How does it stack up against RTSS wrt input lag?
(Might be relevant given that RTSS is closed source and can pretty much disappear overnight on Unwinder's whim)

I've never used it, so I'd have to take a look at the documentation and experiment a bit. The difference could depend on several factors, and I'm not sure how he is intercepting the frames; if it's anything like Nvidia Inspector (doubtful), it may clash with G-SYNC functionality.

I have only taken a brief glance, but am I correct in assuming it can only be used in certain games currently, and RTSS must be used along with it? The instructions also seem all over the place, and I can't quite pin how it is used without some effort.

If I get the time (busy with other projects), I may revisit. Also, doing a framerate limiter-only article in the future (no ETA) isn't out of the question.
 

Knurek

Member
I have only taken a brief glance, but am I correct in assuming it can only be used in certain games currently, and RTSS must be used along with it? The instructions also seem all over the place, and I can't quite pin how it is used without some effort.

It should work with anything using D3D8 to D3D11, plus OpenGL. There are two modes of operation: local injection (put the SpecialK32 or SpecialK64 dll files in the game folder, rename them to match the API the game uses, and run the game), or global injection (which works out of the box for everything run from the SteamApps folder and can be made to work with other games as well).
No need for RTSS; having it running while using SpecialK can cause issues.
There's a decent guide on Steam forums.
 
I remember using fast sync once and it fucked up the game I was playing, so I disabled it. I hardly ever reach 144 fps anyway so maybe I don't need it.
 

Darktalon

Member
Jorimt, what effect does SLI have in regards to latency while using Gsync? I've heard that it adds 1-2 frames of latency but I haven't seen any concrete data.

For example, I have a 144hz monitor+ gsync, and let's say that with only one gpu I can produce 60 fps, but with 2 gpus I can produce 100 fps.

My assumption is that the faster fps offsets somewhat an extra 1-2 frames of delay, but I'm not sure.

Also I would assume that if I can already reach 141fps with just one gpu, like league of legends or csgo, I'm better off running the game with sli disabled, unless I want to supersample or something.
 

jorimt

Member
@aj_hix36

I'm afraid I have no practical experience with SLI, and all tests were conducted on a single GPU. It's possible SLI introduces latency due to the way it functions, but the severity likely varies by the game's support for SLI and/or the system it's being run on.

One thing I can tell you is, yes, if you can sustain acceptable framerates within the G-SYNC range on a single GPU in a given game, disable SLI for the best possible consistency in frame delivery.
 

jorimt

Member
Thanks to Durante's and TheExodu5's input, I have amended the "triple buffer" paragraph in the "G-SYNC Ceiling vs. V-SYNC: Identical or Fraternal?" section of my "G-SYNC 101: Input Lag & Optimal Settings" article to more clearly differentiate the two distinct triple buffer methods.

It is by no means an extensive or complete explanation (nor is it meant to be; this is primarily a G-SYNC article), but it should now be clearer to users upon their first or revisited read through of the article.
 

SapientWolf

Trucker Sexologist
Is there any major input lag difference between 144hz and 240hz at a 60fps cap? I don't think there was a data point for 60fps at 240hz.
 

jorimt

Member
Is there any major input lag difference between 144hz and 240hz at a 60fps cap? I don't think there was a data point for 60fps at 240hz.

Theoretically? Well, the difference between the scanout speed at 60Hz (16.6ms) and 144Hz (6.9ms) is 9.7ms.

In my 60Hz vs. 144Hz test, with first on-screen reactions measured, I was seeing roughly a 10ms reduction. If you count middle screen (crosshair-level) reactions only (the traditional way), you'd be seeing about half that: 5ms of reduction.

240Hz has a 4.2ms scanout speed, which means it is 12.4ms faster than 60Hz at delivering a single frame. In light of all of that, theoretically, as the reduction seems to be directly linked to the increase in scanout speed, 240Hz 60 FPS would be about 13ms or so of reduction over 60Hz 60 FPS, from first reaction, and 6-7ms of reduction from middle screen level. So 240Hz would be up to 3ms faster than 144Hz from first reaction, and up to 1-2ms faster from middle screen.

That's just an educated (and oversimplified) guess though. I'd have to test it to know exactly, but that's an approximate projection based on the existing results.
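
If anyone wants to follow that arithmetic, here it is spelled out (same assumptions as my estimate above: the reduction tracks the scanout-speed difference, and middle-screen reactions see roughly half of it):

```python
def scanout_ms(hz: float) -> float:
    """One refresh interval, i.e. the time to scan out a single frame."""
    return 1000.0 / hz

base = scanout_ms(60)                           # ~16.7 ms at 60Hz
for hz in (144, 240):
    first_reaction = base - scanout_ms(hz)      # full scanout-speed difference
    middle_screen = first_reaction / 2          # crosshair-level sees roughly half
    print(f"60Hz vs {hz}Hz @ 60 FPS: ~{first_reaction:.1f} ms (first reaction), "
          f"~{middle_screen:.1f} ms (middle screen)")
# ~9.7/4.9 ms at 144Hz and ~12.5/6.2 ms at 240Hz -> 240Hz is only ~2-3 ms ahead of 144Hz
```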
 

grendelrt

Member
Theoretically? Well, the difference between the scanout speed at 60Hz (16.6ms) and 144Hz (6.9ms) is 9.7ms.

In my 60Hz vs. 144Hz test, with first on-screen reactions measured, I was seeing roughly a 10ms reduction. If you count middle screen (crosshair-level) reactions only (the traditional way), you'd be seeing about half that: 5ms of reduction.

240Hz has a 4.2ms scanout speed, which means it is 12.4ms faster than 60Hz at delivering a single frame. In light of all of that, theoretically, as the reduction seems to be directly linked to the increase in scanout speed, 240Hz 60 FPS would be about 13ms or so of reduction over 60Hz 60 FPS, from first reaction, and 6-7ms of reduction from middle screen level. So 240Hz would be up to 3ms faster than 144Hz from first reaction, and up to 1-2ms faster from middle screen.

That's just an educated (and oversimplified) guess though. I'd have to test it to know exactly, but that's an approximate projection based on the existing results.

Any advice for people running a non-G-SYNC 60Hz screen? I have both G-SYNC and non-G-SYNC monitors, so I'm already using this advice on my G-SYNC one.
 

jorimt

Member
@grendelrt

My article isn't just limited to G-SYNC, and compares it directly to V-SYNC OFF, double buffer V-SYNC, and Fast Sync across six refresh rates.

Read my "G-SYNC vs. V-SYNC w/FPS Limit: So Close, Yet So Far Apart" section and "G-SYNC vs. Fast Sync: The Limits of Single Frame Delivery" for more information about standalone V-SYNC and Fast Sync, and optimal settings for each, if you haven't already.

On a standard non-G-SYNC 60Hz display, if you want to eliminate tearing but avoid V-SYNC input lag, there are a couple of options:

1. Double Buffer V-SYNC:
If you can sustain framerates above your refresh rate at all times (if not, it will lock to half refresh each time the sustained framerate drops below the refresh rate), enable double buffer V-SYNC (Nvidia Control Panel) and limit the framerate to 59 FPS. If you limit in-game, there will be no additional latency from the framerate limiter itself, but for the reasons explained in my article, V-SYNC will accumulate delay and cause occasional stutter due to repeated frames. You can reduce this effect if you use RTSS to limit a few decimal points below the refresh rate (see the article for a link to a guide in that section), but RTSS, unlike an in-game limiter, adds up to 1 frame of delay.

2. Fast Sync:
If you can sustain framerates far above the refresh rate, use Fast Sync with no FPS limit. This can cause slightly more frequent/severe microstutter than the above double buffer V-SYNC settings, due to dropped frames, but it won't lock to half refresh if your framerate drops below the refresh rate at any point, and if you can sustain framerates 2x or more above your refresh rate, Fast Sync will have less lag than V-SYNC in scenario "1."

Complete details and 60Hz input latency tests are in the article. Take a second (or first) look over it. The fourth part of the article also features a basic external framerate limiter setup guide.
 

grendelrt

Member
@grendelrt

My article isn't just limited to G-SYNC, and compares it directly to V-SYNC OFF, double buffer V-SYNC, and Fast Sync across six refresh rates.

Read my "G-SYNC vs. V-SYNC w/FPS Limit: So Close, Yet So Far Apart" section and "G-SYNC vs. Fast Sync: The Limits of Single Frame Delivery" for more information about standalone V-SYNC and Fast Sync, and optimal settings for each, if you haven't already.

On a standard non-G-SYNC 60Hz display, if you want to eliminate tearing but avoid V-SYNC input lag, there are a couple of options:

1. Double Buffer V-SYNC:
If you can sustain framerates above your refresh rate at all times (if not, it will lock to half refresh each time the sustained framerate drops below the refresh rate), enable double buffer V-SYNC (Nvidia Control Panel) and limit the framerate to 59 FPS. If you limit in-game, there will be no additional latency from the framerate limiter itself, but for the reasons explained in my article, V-SYNC will accumulate delay and cause occasional stutter due to repeated frames. You can reduce this effect if you use RTSS to limit a few decimal points below the refresh rate (see the article for a link to a guide in that section), but RTSS, unlike an in-game limiter, adds up to 1 frame of delay.

2. Fast Sync:
If you can sustain framerates far above the refresh rate, use Fast Sync with no FPS limit. This can cause slightly more frequent/severe microstutter than the above double buffer V-SYNC settings, due to dropped frames, but it won't lock to half refresh if your framerate drops below the refresh rate at any point, and if you can sustain framerates 2x or more above your refresh rate, Fast Sync will have less lag than V-SYNC in scenario "1."

Complete details and 60Hz input latency tests are in the article. Take a second (or first) look over it. The fourth part of the article also features a basic external framerate limiter setup guide.
Thanks! I'm currently using Fast Sync on the 60Hz screen.

So RTSS adds up to 1 frame of delay. For games that don't have a limiter option, like Titanfall 2 for example, would you say G-Sync + Nvidia V-Sync + RTSS is still the best solution if you can hit the refresh cap on your monitor?
 

jorimt

Member
@grendelrt

It's that or let G-SYNC hit its ceiling, where it will revert to fixed refresh rate V-SYNC behavior, and you will get anywhere from 2-6 frames of delay, depending on the game.

RTSS does add up to 1 frame of delay, but it has very little frametime drift (e.g. it's accurate), and allows G-SYNC to remain in its range, retaining all other benefits of the technology; smooth variable framerates with no tearing or sync-induced stutter.

So, yes, as I lay out in my article, if there is no in-game limiter available, and the framerate can exceed the refresh rate the majority of the time, RTSS is the next best thing to an in-game limiter...for now.

Hopefully one day in-game limiters will be a given feature in any game, and/or an external predictive, latency-free FPS limiter will be achieved (not counting it out, but not holding my breath on the latter either).
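
For a sense of scale, converting those frame counts into milliseconds at a given refresh rate (simple arithmetic, using only the frame counts quoted above):

```python
def frames_to_ms(frames: float, refresh_hz: float) -> float:
    """Convert a delay expressed in refresh cycles to milliseconds."""
    return frames * 1000.0 / refresh_hz

hz = 144
print(f"RTSS limiter (up to 1 frame): ~{frames_to_ms(1, hz):.1f} ms at {hz}Hz")
print(f"G-SYNC ceiling (2-6 frames):   {frames_to_ms(2, hz):.1f}-{frames_to_ms(6, hz):.1f} ms at {hz}Hz")
```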
 

grendelrt

Member
@grendelrt

It's that or let G-SYNC hit its ceiling, where it will revert to fixed refresh rate V-SYNC behavior, and you will get anywhere from 2-6 frames of delay, depending on the game.

RTSS does add up to 1 frame of delay, but it has very little frametime drift (e.g. it's accurate), and allows G-SYNC to remain in its range, retaining all other benefits of the technology; smooth variable framerates with no tearing or sync-induced stutter.

So, yes, as I lay out in my article, if there is no in-game limiter available, and the framerate can exceed the refresh rate the majority of the time, RTSS is the next best thing to an in-game limiter...for now.

Hopefully one day in-game limiters will be a given feature in any game, and/or an external predictive, latency-free FPS limiter will be achieved (not counting it out, but not holding my breath on the latter either).

That helps a lot, thanks for the replies.
 

Profanity

Member
Don't forget about Battle(non)sense. He has similar findings.

FreeSync vs. G-Sync Delay Analysis:
https://www.youtube.com/watch?v=mVNRNOcLUuA

Free/G Sync add no input lag
Must cap at -2fps from refresh rate
In-game capping is best
RTSS is lowest latency cap if you cannot do in-game

Also tested fast sync:
https://www.youtube.com/watch?v=L07t_mY2LEU
Adds input lag, but less than V Sync
Stutter

He has many other videos worth checking out too.

Yeah this guy does great netcode analysis for most multiplayer games.
 

mdrejhon

Member
Continuing tests with a different monitor, which I have now received:

A 480 Hz monitor prototype!



We think this is the first time 480 Hz is being tested.

Mark Rejhon
Chief Blur Buster
 

Paragon

Member
Such overkill, there had to be a limit on motion clarity at some point, right?
Yes, but 480Hz (2ms) is not nearing the limits if it's a sample-and-hold display. (or an LCD panel)
CRTs had ≤1ms persistence and have noticeably better motion clarity than anything else; but were still not perfect, and higher resolution displays need even lower persistence. Ideal would be something below 0.5ms, perhaps 0.1ms.
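
To put rough numbers on that, a back-of-envelope sketch (my own, using the common rule of thumb that eye-tracked motion blur in pixels is roughly tracking speed times persistence):

```python
def persistence_ms(refresh_hz: float) -> float:
    """Sample-and-hold persistence: each frame is held for one full refresh."""
    return 1000.0 / refresh_hz

def blur_px(tracking_speed_px_per_s: float, persistence_s: float) -> float:
    """Rule of thumb: perceived blur ~= eye tracking speed * frame persistence."""
    return tracking_speed_px_per_s * persistence_s

speed = 1000.0  # px/s, a fairly fast pan
for hz in (60, 144, 240, 480, 1000):
    p_ms = persistence_ms(hz)
    print(f"{hz:>4} Hz sample-and-hold: {p_ms:.2f} ms persistence "
          f"-> ~{blur_px(speed, p_ms / 1000):.1f} px of blur at {speed:.0f} px/s")
```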
 

mdrejhon

Member
Such overkill, there had to be a limit on motion clarity at some point, right?
Yeah, diminishing returns, to be sure.

However, the benefit doesn't disappear until the quadruple digits (>1,000Hz) or the quintuple digits (>10,000Hz), depending on use case. Also, 480Hz is still 2ms of full persistence, which is more motion blur than "1ms MPRT" (not GtG) strobe backlights produce at shorter ULMB pulse widths.

Also, Michael Abrash, chief VR scientist at Oculus (in this blog post), said:

Michael Abrash said:
...the sweet spot for 1080p at 90 degrees FOV is probably somewhere between 300 and 1000 Hz, although higher frame rates would be required to hit the sweet spot at higher resolutions.

Whether for VR or desktop, simultaneous high resolution and high FOV starts to demand high Hz to eliminate motion imperfections.

Game engine / GPU / mouse microstutters are more the limiting factor. But remove those (make things super smooth, like a TestUFO animation), and the benefits still continue for a while even as returns diminish. Old game engines such as CS:GO still benefit even at 480 Hz, though.

Also, higher Hz reduces the stroboscopic effect (e.g. the mouse stepping effect), which still requires thousands of Hz to eliminate if you don't want to add artificial GPU motion blur to the frames themselves.
 

mdrejhon

Member
Yes, but 480Hz (2ms) is not nearing the limits if it's a sample-and-hold display. (or an LCD panel)
CRTs had ≤1ms persistence and have noticeably better motion clarity than anything else; but were still not perfect, and higher resolution displays need even lower persistence. Ideal would be something below 0.5ms, perhaps 0.1ms.
This is correct.

Another consideration is the stroboscopic effect (also a problem on CRT).



This requires thousands of Hz to solve, unless one adds artificial motion blur (which would muddy panning images, much like artificially increased persistence).

Solving both motion blur & stroboscopic effects at the same time, perfectly (to human vision limits at an average human's maximum eye tracking speed), requires thousands of Hz -- in some extreme cases >10,000Hz -- before motion looks analog enough to pass any theoretical "Holodeck Turing Test" (aka things look like perfect reality).

You can't control whether a user decides to do a fixed-gaze (stroboscopic effect problem) or decides to do eye tracking of moving objects (motion blur problem).

The higher the resolution and the bigger the FOV, the longer your eyes can track across more pixels, and thus the higher the Hz ceiling before you hit the point of diminishing returns. Once you reach retina 180 degree Holodeck-like VR displays, the diminishing returns apparently don't fully end until quintuple digit refresh rates.

NVIDIA tested a 16,000Hz AR display and Viewpixx has a laboratory 1440Hz DLP projector.

Using static images to represent analog motion for analog eyes -- a human invention -- still creates one-or-the-other artifacts (especially in VR) until you go well into the thousands of Hz. Unless we can invent framerateless displays...

Diminishing returns? Yes. Do the benefits fully end? No.
 

mdrejhon

Member
Don't source games cap out at 300 or more?

Yes, they do.

GeForce 1080s/Titans and Radeon RX Vegas can run at >1000fps in CS:GO, a game that's still popular. Testing on an EVGA GTX 1080 FTW card:

At 60Hz:

[Chart: G-SYNC 101 input lag results, V-SYNC OFF w/FPS limits, 60Hz]


At 240Hz:

[Chart: G-SYNC 101 input lag results, V-SYNC OFF w/FPS limits, 240Hz]


Measured with a 1000fps high speed video camera, button-to-pixels: from the LED of a modified gaming mouse (light illuminates instantly on button press) to the first on-screen reaction. This captures the lag of the whole chain: mouse/USB/computer/display processing/GtG/etc. Min/Max covers the 'lag jitter' range of lag inconsistency.

Findings:
-- During VSYNC OFF, things continue to improve even at escalating frame rates far above your refresh rate.
-- Lag consistency improves at higher frame rates (tighter spread for min/max/avg)
-- This remained true regardless of refresh rate (60Hz, 120Hz, 240Hz)
-- Lag consistency also improved further at higher refresh rates. (tighter spread for min/max/avg)
-- Best of both worlds is high Hz and high fps = best lag consistency (tightest min/max/avg)
-- Sometimes lag predictability/consistency is as important as average input lag.
-- Lag consistency can be quite important in eSports too. Inconsistencies lead to mis-aiming.

GSYNC at 240 Hz is really, really, really good -- but for the top league of CS:GO players, once you gain 1000fps VSYNC OFF (CS:GO), lag is further reduced and more consistently, predictably low.

We'll have a similar chart for 480Hz tests.

I love variable refresh rate technologies -- they make motion beautiful -- but the graphs partially explain why many eSports players prefer VSYNC OFF over GSYNC/FreeSync to gain a few extra milliseconds, especially in older game engines.

In simultaneous-draw situations (turn a corner, see each other, shoot simultaneously), similar reaction times mean it's like crossing an Olympic finish line: reacting a millisecond earlier can matter when a big prize pot is on the line. Players may not always feel the millisecond, just as Olympic racers may not know they beat their competitor until they see the scoreboard, but in a simultaneous draw it decides whether your bullet or theirs lands first. Even with tick granularity (128 ticks), that millisecond might round you into the previous tick cycle, gaining you the frag.

In this rarefied stratosphere of competition and well-matched human reaction times, millisecond-level differences in equipment lag can sometimes make or break a close match, even if you can't feel them. And with eSports being a billion dollar industry nowadays, there's certainly a market for even higher Hz displays (even if not all of us "need" it). True, using VSYNC OFF instead of VRR mainly matters when you can get ultra-high frame rates, especially far above the monitor refresh rate.
 