
Nvidia GeForce GTX 1080 reviews and benchmarks

Looking at this makes me think that I'm better off upgrading my CPU (including RAM and mainboard) for now and then going for a 1080 Ti later.

[image: XCOM 2 CPU benchmark, stock i5-2500K vs. OC i7-6700K]

That's XCOM 2 though, where there doesn't seem to be any reason or logic when it comes to performance.

The other games they tested on there (Anno 2205, Tomb Raider, Battlefront) show little-to-no difference between CPUs. Some games will be CPU bound on a 2500K, but in most cases, it's not going to hold you back too much, especially when it's overclocked.
 

nubbe

Member
My 780 is dead

It is almost a 100% bump in performance, so it is worth it.
That's the reason I will upgrade from my 780 SLI to 1080 SLI.

But... it is kinda tempting to wait for the 1080 Ti, since I suspect it's "only" 6 months away... depending on what AMD does.
 

wachie

Member
Just got into this before, and I agree with the other poster that the comparison is kind of pointless... however, speculating that a GPU that is 300% more powerful than a PS4 Neo, with a "Ti" version that will likely be 350% more powerful, might just match a PS5 isn't entirely implausible.
It's not pointless. It gives us a pretty good gauge of how Nvidia sets their internal targets.
 

Cels

Member
Looking at this makes me think that I'm better off upgrading my CPU (including RAM and mainboard) for now and then going for a 1080 Ti later.

[image: XCOM 2 CPU benchmark, stock i5-2500K vs. OC i7-6700K]

What's the point of this benchmark?

Stock 2500K vs. OC 6700K? Obviously even an OC'd 2500K will lose, but it would certainly be closer.
 

OmegaX06

Member
Looking at this makes me think that I'm better off upgrading my CPU (including RAM and mainboard) for now and then going for a 1080 Ti later.

[image: XCOM 2 CPU benchmark, stock i5-2500K vs. OC i7-6700K]

That's actually an outlier though. Three other games were included in that same benchmark set and they saw very minimal FPS boosts (including the Frostbite engine title). XCOM must be extremely CPU intensive, and that isn't the case for a lot of games.
 

Kaako

Felium Defensor
Just read/saw most of the links in the OP. The OC potential should be pretty nice with custom PCBs/coolers for sure. This card so far seems to be as fast as advertised, which is good. But we need faster; much, much faster.
 

riflen

Member
Hmm, interesting.
So what you claim is that it's triple buffering with the option of not actually rendering frames. You let the engine generate CPU-side frames as quickly as possible, and then pick the most recent one as soon as you are ready to render something new.

That would be neat if it is really what is happening.

Check out this video.

They describe it as sampling a rendering pipeline and flipping the frame that's most appropriate. Things in the video I thought were important:

- Designed for times when your render rate (frame rate) is high
- Adds around 8ms latency compared to Vsync off.
- Selectable from NVCP. With Fast Sync enabled, the game renders as if Vsync is off.
- Can be used together with G-Sync to keep frames in sync above the refresh rate, with lower latency than currently possible with Vsync.
- Will not be Pascal only, should be supported by a "wide range" of Nvidia GPUs.

Yes it seems like driver level triple buffering to me. Good stuff Nvidia.
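If I had to sketch the flip logic they describe (purely my reading of the video, not Nvidia's actual code, and all the names here are made up), it would be something like this: the game renders unthrottled into a small pool of buffers, and at each refresh the driver flips whichever completed frame is newest:

```cpp
#include <array>
#include <cstdio>

struct Buffer {
    long frame_id = -1;     // which engine frame this buffer holds
    bool complete = false;  // has the GPU finished rendering into it?
};

// Called once per display refresh: pick the freshest completed frame.
// Returns -1 if nothing new is ready (keep showing the current frame).
int select_buffer_to_flip(const std::array<Buffer, 3>& pool) {
    int best = -1;
    for (int i = 0; i < (int)pool.size(); ++i) {
        if (pool[i].complete &&
            (best < 0 || pool[i].frame_id > pool[best].frame_id)) {
            best = i;
        }
    }
    return best;
}

int main() {
    // Frames 7 and 9 are done; frame 8 is still being rendered.
    std::array<Buffer, 3> pool{{{7, true}, {9, true}, {8, false}}};
    std::printf("flip buffer %d\n", select_buffer_to_flip(pool));  // buffer 1 (frame 9)
}
```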
 

K' Dash

Member
Checking eBay, and all I see is everybody selling their 980 Tis lol. Tempted to grab one, but they'd have to go for $300 or less for me to consider it.
 

Durante

Member
Check out this video.

They describe it as sampling a rendering pipeline and flipping the frame that's most appropriate. Things in the video I thought were important:

- Designed for times when your render rate (frame rate) is high
- Adds around 8ms latency compared to Vsync off.
- Selectable from NVCP. With Fast Sync enabled, the game renders as if Vsync is off.
- Can be used together with G-Sync to keep frames in sync above the refresh rate, with lower latency than currently possible with Vsync.
- Will not be Pascal only, should be supported by a "wide range" of Nvidia GPUs.

Yes it seems like driver level triple buffering to me. Good stuff Nvidia.
That's what I assumed too, but from Goldfishking's explanation it actually sounds better than triple buffering. Triple buffering still needs to render every frame, while "Fastsync" could completely drop superfluous ones without even rendering them.

Good stuff either way, and good on them for not artificially limiting it to the new GPUs.
 
Lol at people offloading their 980tis. I'll nab one when the price dips low enough. I'd like to SLI mine for shits and giggles.

OT, the 1080 absolutely sips power and performs like a beast. It makes me that much more excited to get the 1080ti when that drops.
 
That's XCOM 2 though, where there doesn't seem to be any reason or logic when it comes to performance.

The other games they tested on there (Anno 2205, Tomb Raider, Battlefront) show little-to-no difference between CPUs. Some games will be CPU bound on a 2500K, but in most cases, it's not going to hold you back too much, especially when it's overclocked.

Agreed. And just so that one image isn't going to get repeated over and over without the other ones being shown at least once:

[image: Anno 2205 CPU benchmark]

[image: Rise of the Tomb Raider CPU benchmark]

[image: Star Wars Battlefront CPU benchmark]
 

riflen

Member
That's what I assumed too, but from Goldfishking's explanation it actually sounds better than triple buffering. Triple buffering still needs to render every frame, while "Fastsync" could completely drop superfluous ones without even rendering them.

Good stuff either way, and good on them for not artificially limiting it to the new GPUs.

Yes. I was under the impression that it was perfectly possible for an implementation of triple buffering to drop frames. I guess I was wrong there.
In the Q&A at the end, someone asks "how is this different from triple buffering?" and the answer is that triple buffering cannot relieve back pressure, so latency goes up when the frame rate is way above the refresh rate. Fast sync is all about the lowest possible latency at frame rates above the refresh rate, whilst maintaining sync.
 

dr_rus

Member
Check out this video.

They describe it as sampling a rendering pipeline and flipping the frame that's most appropriate. Things in the video I thought were important:

- Designed for times when your render rate (frame rate) is high
- Adds around 8ms latency compared to Vsync off.
- Selectable from NVCP. With Fast Sync enabled, the game renders as if Vsync is off.
- Can be used together with G-Sync to keep frames in sync above the refresh rate, with lower latency than currently possible with Vsync.
- Will not be Pascal only, should be supported by a "wide range" of Nvidia GPUs.

Yes it seems like driver level triple buffering to me. Good stuff Nvidia.
That's not triple buffering; in TB you have three back buffers which are presented sequentially. With fast sync you can have basically an unlimited number of back buffers, from which the driver selects the one to show next.

I wonder if this, in combination with really high framerates, will lead to stuttering though. It also means that whatever framerate the engine is outputting will no longer be the framerate you see on the display.

Another side effect of that approach is that the GPU will always run at peak load, even though you may not see something like half of the frames it outputs.
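To put rough numbers on that mismatch (a toy model of my own, not a measurement): with the engine at 240 fps and a 60 Hz display flipping only the newest completed frame each vblank, three out of every four rendered frames are never shown, even though the GPU did the work for all of them.

```cpp
#include <cstdio>

int main() {
    const double render_hz = 240.0;   // what the engine outputs
    const double refresh_hz = 60.0;   // what the display shows

    long rendered = 0, displayed = 0;

    // Simulate one second of vblanks.
    for (int vblank = 1; vblank <= (int)refresh_hz; ++vblank) {
        const double t_vblank = vblank / refresh_hz;
        // The engine keeps producing frames right up to this vblank
        // (engine frame k completes at time k / render_hz)...
        while ((rendered + 1) / render_hz <= t_vblank)
            ++rendered;
        // ...but only the newest completed frame gets flipped.
        ++displayed;
    }
    std::printf("rendered %ld, displayed %ld, discarded %ld\n",
                rendered, displayed, rendered - displayed);
    // rendered 240, displayed 60, discarded 180
}
```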
 

Durante

Member
Yes. I was under the impression that it was perfectly possible for an implementation of triple buffering to drop frames. I guess I was wrong there.
In the Q&A at the end, someone asks "how is this different from triple buffering?" and the answer is that triple buffering cannot relieve back pressure, so latency goes up when the frame rate is way above the refresh rate. Fast sync is all about the lowest possible latency at frame rates above the refresh rate, whilst maintaining sync.
I think that's just bullshitting.

Driver-level triple buffering should be good enough news without such shenanigans.

That's not triple buffering, in TB you have three back buffers which are presented sequentially.
No, that's not triple buffering, that's a rendering queue with 3 buffers.

What the NV guy describes is what we've been calling triple buffering since the 90s.
 

Kaleinc

Banned
Vsync is like AA: so many options, but it all ends up being super/downsampling anyway. But show me your Fast Sync, NV.
 

3pic

Neo Member

I posted on the last page. No one seems interested.

Definitely more than one eight pin going to it, and it got posted yesterday.

Well, it seems to be an "improved" design, so it might be the 1080. I don't recall seeing that look on any other Gigabyte card.

8-pin + 6-pin. They also posted a pic yesterday which shows the same power connectors:

[image: Gigabyte card with the same 8-pin + 6-pin power connectors]
 

finalflame

Member
As a 980 Ti owner, I am really conflicted on whether to pick up the FE or not. Seems like this has very little overclocking headroom, and the upgrade is substantial but not insane. Waiting for the 1080 Ti would mean a monumental increase in performance, but the 1080 is just so attractive.
 

dr_rus

Member
No, that's not triple buffering, that's a rendering queue with 3 buffers.

What the NV guy describes is what we've been calling triple buffering since the 90s.

Wait, triple buffering is three buffers: one being rendered into, one completed, one being presented right now. They are sequential; you can't skip any of them straight to presentation, hence TB always adds lag even compared to DB.

Fast sync adds the ability to skip to any completed buffer and present it, instead of waiting until all of them have been shown. That's how I see it, at least.
 

riflen

Member
That's not triple buffering; in TB you have three back buffers which are presented sequentially. With fast sync you can have basically an unlimited number of back buffers, from which the driver selects the one to show next.

I wonder if this, in combination with really high framerates, will lead to stuttering though. It also means that whatever framerate the engine is outputting will no longer be the framerate you see on the display.

Another side effect of that approach is that the GPU will always run at peak load, even though you may not see something like half of the frames it outputs.

Yup, seems this is true. The presenter confirms that you can potentially get frame pacing weirdness. His response is that at high frame rates the delta between frames is very slight, suggesting people won't notice. =)
This is meant for MLG peeps, so I doubt they care if GPU usage is maxed out all the time. They want the most frames possible, whether those frames make it to the display or not, I guess.
Anyway, more options are what we want.
 

bj00rn_

Banned

I remember there was some kind of controversy regarding Nvidia not having the architecture needed to do "proper" async compute, and that theory was used somewhat vigorously to prove that AMD was the only future ahead. I understand the basic principles, but I'm not knowledgeable enough to tell whether that chart, beyond first glance, accurately tells another story or not. Anyone?
 

Durante

Member
Wait, triple buffering is three buffers: one being rendered into, one completed, one being presented right now. They are sequential; you can't skip any of them straight to presentation, hence TB always adds lag even compared to DB.
No, that's what some people started calling "triple buffering" since it's the default behavior of DirectX with 3 buffers.

Fast sync adds the ability to skip to any completed buffer and present it, instead of waiting until all of them have been shown. That's how I see it, at least.
As I said, that's what we used to call triple buffering in the 90s.
3 buffers, always render into the oldest as quickly as possible, always pick the newest as the new framebuffer when it's time to present a frame.

Don't get me wrong, it's a very good thing for real triple buffering to return as an easily accessible option, but presenting it as some utterly novel innovation...
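To make the naming argument concrete, here's a toy contrast of the two policies people mean (my own illustration, not anyone's actual driver code): a 3-deep present queue shows frames oldest-first and backs up under load, while 90s-style triple buffering shows the newest completed frame and simply discards the stale ones.

```cpp
#include <cstdio>
#include <deque>

// DirectX-style queue with 3 buffers: frames leave in render order, so
// when the queue is full, the frame on screen is up to two frames old.
int present_from_queue(std::deque<int>& completed) {
    int frame = completed.front();  // oldest completed frame
    completed.pop_front();
    return frame;
}

// Triple buffering as described above: always flip the newest completed
// frame; stale frames are never shown at all.
int present_latest(std::deque<int>& completed) {
    int frame = completed.back();   // newest completed frame
    completed.clear();              // drop the stale ones
    return frame;
}

int main() {
    std::deque<int> a{1, 2, 3}, b{1, 2, 3};  // completed frame ids
    std::printf("queue flips frame %d\n", present_from_queue(a));    // frame 1
    std::printf("latest-wins flips frame %d\n", present_latest(b));  // frame 3
}
```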

I remember there was some kind of controversy regarding Nvidia not having the architecture needed to do "proper" async compute, and that theory was used somewhat vigorously to prove that AMD was the only future ahead. I understand the basic principles, but I'm not knowledgeable enough to tell whether that chart, beyond first glance, accurately tells another story or not. Anyone?
This whole discussion is extremely annoying since it would first need a formal description of what "proper" async compute entails, and such is basically never provided -- or if it is, it ends up "exactly like GCN does it". Ultimately, "async compute" is simply a different way of presenting work to the GPU, and whether it works or not is just a function of how well it allows the GPU to keep its execution units fed.
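For what it's worth, here's roughly what "presenting work differently" looks like at the API level in D3D12 (a minimal sketch; error handling omitted, and whether the overlap actually helps is entirely up to the hardware):

```cpp
// Link against d3d12.lib.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main() {
    // Create a device on the default adapter.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // The usual graphics queue...
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // ...plus an independent compute queue. Work submitted here may run
    // concurrently with graphics work; how much that gains you depends
    // on how well the GPU keeps its execution units fed from both.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));
}
```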
 

cripterion

Member
This is a big fat no for me. The card still doesn't handle 4K properly. The gains over the 980 Ti are not too bad, but the price is definitely shitty.

G-sync + 980 Ti @1440p till 1080Ti hits!
 

SRG01

Member
From TechPowerUp: going from 44 FPS on the GTX 970 in Tomb Raider @ 1080p to 120 FPS is fucking unreal. Holy shit. It almost doubles the frames in The Witcher 3, almost doubles the FPS in GTA V, and roughly doubles the FPS in Crysis 3 too. This thing is so damn crazy.

One thing that really stands out in the Tomb Raider bench is that the 970 and 970 SLI have nearly identical performance. Anyhow, the 970 is more of a mid-range card, whereas the 1080 is a high-end card meant to replace the 980/980 Ti.

Having said all that, the 1080 is about 40% faster than the 980 Ti @ 1080p, which is totally on par with what most people were expecting.

https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1080/21.html
 

riflen

Member
Wait, triple buffering is three buffers: one being rendered into, one completed, one being presented right now. They are sequential; you can't skip any of them straight to presentation, hence TB always adds lag even compared to DB.

Fast sync adds the ability to skip to any completed buffer and present it, instead of waiting until all of them have been shown. That's how I see it, at least.

Nah, this is naming semantics. That is a queue, as Durante says. What was first named triple buffering, way back in the day, features back buffers in parallel, and the logic chooses which one to flip to the front.
In fact, now that I type this out, the presenter's response to the triple buffering question was bullshit, because what he referenced in his reply was a 3-buffer queue.
 
As a 980 Ti owner, I am really conflicted on whether to pick up the FE or not. Seems like this has very little overclocking headroom, and the upgrade is substantial but not insane. Waiting for the 1080 Ti would mean a monumental increase in performance, but the 1080 is just so attractive.

You're throwing away your money going for the Founders Edition. If you absolutely must get a 1080 (don't), wait for the third party cards. They'll be cheaper and very likely run faster.

I also own a 980ti and I'm happy to wait for the 1080ti.
 
As a 980 Ti owner, I am really conflicted on whether to pick up the FE or not. Seems like this has very little overclocking headroom, and the upgrade is substantial but not insane. Waiting for the 1080 Ti would mean a monumental increase in performance, but the 1080 is just so attractive.

Good point. I was surprised that the 1080 had some benchmarks slower than a 980 Ti or Titan X (and relatively small gains otherwise, as a Titan X owner), but thinking in those terms, the HUGE boost would come with a Ti if it gets a similar jump.

And they can just do the same thing where they release the super-expensive new Titan X first, get the huge bucks from early adopters, then release the affordable 1080 Ti months later. Of course, this depends on what AMD releases in terms of competitive cards.
 

x3sphere

Member
The Founders Edition is not much of an upgrade over a heavily OC'd 980 Ti (1450-1500 MHz).

I couldn't find any direct comparisons, but it looks like with typical OC performance on the FE, you'd only be gaining ~15% at best.

So definitely wait for AIB custom cards if you've got a 980 Ti.
 