
Nvidia GeForce GTX 1080 reviews and benchmarks

Me too. Why didn't you show all the results?

Maybe because it shows a 20% advantage for the 1080 while the others show only 12-16%?

Compared to the 40%-plus advantage in other games on the same site, this is significant. Don't you think?

I was replying to a benchmark for Hitman running in DX11 at 1440p, so I quoted the same test from a site that is very trustworthy. You can see a lot of the cards match up except the Fury X, which in the one I quoted was at 61 fps vs. TPU's result of 51.7 fps. That's a 10 fps disparity, only on the AMD side, which I felt was unusual.
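Just to show how much that one disparity moves the headline percentages, here's a quick sketch (the two Fury X numbers are the quoted results; the 1080 figure is a made-up placeholder, not taken from either review):

```python
# Hypothetical illustration: how the reported "advantage" of one card over
# another depends on which site's FPS figure you use. The Fury X figures are
# the two quoted results (61 vs. 51.7 fps); the 1080 figure is a placeholder.

def advantage_pct(card_a_fps, card_b_fps):
    """Percentage by which card A leads card B."""
    return (card_a_fps / card_b_fps - 1) * 100

gtx_1080_fps = 70.0  # placeholder value, purely for the arithmetic

for site, fury_x_fps in [("quoted site", 61.0), ("TechPowerUp", 51.7)]:
    print(f"{site}: 1080 leads Fury X by {advantage_pct(gtx_1080_fps, fury_x_fps):.0f}%")
```

With that same hypothetical 1080 number, the gap reads as roughly 15% against one site's Fury X result and roughly 35% against the other's, which is the whole disagreement here.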
 

K' Dash

Member
So what you're saying is, Nvidia will lower 980 Ti performance after the new Pascal drivers arrive?

Last page has a few posts on this. Like I said in one of them, I'm not one to believe in planned obsolescence, but in this case, Nvidia's case, I found damning evidence of them fucking up Kepler for the benefit of Maxwell cards. Just a few Google searches and you can find benchmarks and numbers to support the fact that Nvidia fucked up their old architecture in favor of the new one.
 
I was replying to a benchmark for Hitman running in DX11 at 1440p, so I quoted the same test from a site that is very trustworthy. You can see a lot of the cards match up except the Fury X, which in the one I quoted was at 61 fps vs. TPU's result of 51.7 fps. That's a 10 fps disparity, only on the AMD side, which I felt was unusual.

That's fine, but you can't ignore the 1080 leading the Fury by 40% in other benches and only 15% in Hitman. This points towards a Fury advantage in async.
 

Freiya

Member
Unless you are a competitive gamer and need insanely low input lag, there are plenty of high-quality 4K monitors or 4:4:4 4K 40" TVs that are wider than any ultrawide, for less money.


As I browse NeoGAF on my 65-inch curved 4K TV. Witcher 3 at 4K/60 is bonkers.
 

J-Rzez

Member
Question about Vega: I could have sworn AMD got rights to early batches of HBM2 since they used HBM1 already, or am I wrong? That's the only way I can see them getting Vega out in October, unless they changed to GDDR5X.

All I know is if those rumors are true, they better announce soon before people like me buy up 1080s.
 
Question about Vega: I could have sworn AMD got rights to early batches of HBM2 since they used HBM1 already, or am I wrong? That's the only way I can see them getting Vega out in October, unless they changed to GDDR5X.

All I know is if those rumors are true, they better announce soon before people like me buy up 1080s.

They helped finance the development of HBM1 and had first priority, which is why Fury was out way before Nvidia could get an HBM-equipped card. I think Nvidia was also scared by the 8GB limit on HBM1 (with 4GB being the only cost-effective solution) so they opted to just wait it out and go with GDDR5X for the 1080 to keep cards profitable.
 

Filth

Member
Speaking of sabotage: I've been using my 770 for all this time and have had no problems with it at all. I was playing Battleborn on medium-high settings with zero issues. I updated the video drivers just the other day, and while playing Battleborn the card overheated. I honestly think they are forcing my hand here.
 
Last page has a few posts on this. Like I said in one of them, I'm not one to believe in planned obsolescence, but in this case, Nvidia's case, I found damning evidence of them fucking up Kepler for the benefit of Maxwell cards. Just a few Google searches and you can find benchmarks and numbers to support the fact that Nvidia fucked up their old architecture in favor of the new one.

You're misinterpreting the situation. Nvidia isn't purposely gimping Kepler; Kepler is just a bad architecture for the types of workloads developers are creating.
 
I did point out a page back that Hitman favored AMD heavily, another reason why I think it's a poor benchmark for new hardware.

So a benchmark that uses async and favors AMD is a poor benchmark? Why would you think that? AMD developed their hardware towards that particular feature. A feature that will be utilized heavily in DX12, but is somehow a poor benchmark? Come on now.

Do you disregard Doom benchmarks that favor Nvidia?
 

holygeesus

Banned
So a benchmark that uses async and favors AMD is a poor benchmark? Why would you think that? AMD developed their hardware towards that particular feature. A feature that will be utilized heavily in DX12, but is somehow a poor benchmark? Come on now.

Do you disregard Doom benchmarks that favor Nvidia?

I thought the Metro: Last Light benchmarks were more interesting (odd), in that they have a 980 Ti (OC) and Titan X outperforming the new 1080 at all resolutions tested, using DX11 obviously. Some weird results among all the tests, for sure.
 

slapnuts

Junior Member
Unfortunately for AMD, sheer brute force easily overcomes whatever difference there is in the implementation of async compute. The 1080 easily crushes the Fury X in Ashes of the Singularity and even hands the Fury X its ass in Hitman, a game for which AMD actually wrote the async compute code themselves.

Maybe you should wait for AMD to release their cards before thumping your chest, geesh... you make it sound like these benchmarks are tit for tat when the reality is only Nvidia has shown their cards. Sometimes it's impossible to hide inner fanboyism. No offense, but some of your posts reek of it.
 
I thought the Metro: Last Light benchmarks were more interesting (odd), in that they have a 980 Ti (OC) and Titan X outperforming the new 1080 at all resolutions tested, using DX11 obviously. Some weird results among all the tests, for sure.

When you have a discrepancy like that, it's most likely a driver issue. Similar to how a 380X was outperforming 390 cards in Doom.
 

Sciz

Member
Last page has a few posts on this. Like I said in one of them, I'm not one to believe in planned obsolescence, but in this case, Nvidia's case, I found damning evidence of them fucking up Kepler for the benefit of Maxwell cards. Just a few Google searches and you can find benchmarks and numbers to support the fact that Nvidia fucked up their old architecture in favor of the new one.

The Crysis 2 water/tessellation thing is bullshit.
 

Renekton

Member
Maybe you should wait for AMD to release their cards before thumping your chest, geesh... you make it sound like these benchmarks are tit for tat when the reality is only Nvidia has shown their cards. Sometimes it's impossible to hide inner fanboyism. No offense, but some of your posts reek of it.
To be fair, last year he was leaning team red for his 4K setup, then AMD just HDMI 1.4'ed him.
 

Hawk269

Member
Have any of the review sites done a deep dive into SLI with the 1080? I am curious how two in SLI with the new SLI bridge compare to other SLI setups. What I am more curious about is what impact the new SLI bridge tech brings and how much more performance you may gain from it. I've read many of the reviews and not found one with SLI details.

I just wonder what two of these suckers running at 2100 MHz each would equate to, as far as benchmarks go.
 

slapnuts

Junior Member
This is a big fat no for me. The card still doesn't handle 4K correctly. Although the gains over the 980 Ti are not too bad, the price is definitely shitty.

G-sync + 980 Ti @1440p till 1080Ti hits!

In my opinion, all of this feels like a new sub-generation from the 28nm years. The 1080's improvements feel to me like the jumps in performance we saw during the 28nm generation, not so much like a brand new generation on a new, smaller node, to be frank. It feels like we are starting to hit that same brick wall that CPUs hit years ago and that continues to this day. AMD has yet to show its cards, so there is hope that they will come with something nice that will push Nvidia to put out something much more compelling.

I don't see any need for anyone at the high-end/enthusiast level to upgrade to a 1080, for the simple fact that the 1080 is still not able to hit that golden mark of 4K/60fps. Maybe the 1080 Ti will... I would hope so. Personally, I opted for a 390 8GB only because I had an inclination this would happen, and that is why I felt the only worthy upgrade last year was getting a card with 8GB, since I game at 1080p only at the moment. People say you don't need more than 4GB for 1080p, but make no mistake about it... YOU WILL need more than 4GB for 1080p gaming. Heck, I remember when people were swearing up and down that you don't need more than 1GB of VRAM for 1080p... and we know how all that turned out.

I think AMD will really push DirectX 12 efficiency with their Polaris cards, which will catch a lot of folks off guard, but in a good way, of course.
 

Chiggs

Member
Saw a few interesting comments at Tom's Hardware:

I'm quite disappointed at this: after halving the process node, improving the architecture, adding many improvements along the way, this card comes at a higher TDP than the previous one and performs a bit better.

Chris, please do a review with GTX 980, 980 Ti and 1080 with the last one clocked exactly like a 980. I'd really like to see how much of this 15-FPS-avg gain is thanks to that 40%-ish increase in clock speed.
Lower power consumption...well...the 980 stood around the 300W mark under load and the 1080 appears to spend most of its time around the 250W mark. It's probably even better if we negate that clock speed advantage. But what about its performance in that case?

What are the actual architectural gains? Of course a 3.7GHz Pentium 4 is gonna be faster than a 2.6GHz one. We all know that... Combining clock speed improvements with architectural gains masks the actual gains, but I believe a clock-for-clock comparison would be very useful.

and

I see the 1080 as a disappointment in performance. Clock for clock it's not much faster than the 980, and part of the performance increase actually comes from the new image compression technique (according to Nvidia's slides, image compression alone increases performance by like 30%). The good things are better frame times and much lower power consumption.

http://www.tomshardware.com/reviews/nvidia-geforce-gtx-1080-pascal,4572-10.html
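The clock-for-clock comparison the first comment asks for is just arithmetic: divide the overall performance gain by the clock-speed gain, and whatever remains is roughly the per-clock improvement (ignoring memory bandwidth and other differences). A minimal sketch, using the reference boost clocks and purely illustrative FPS numbers, not figures from any review:

```python
# Rough clock-for-clock sketch (illustrative FPS numbers, not from any review):
# separate how much of a gain comes from clock speed alone vs. everything else.

def clock_for_clock(old_fps, old_clock_mhz, new_fps, new_clock_mhz):
    """Return (total gain, clock-only gain, residual per-clock gain)."""
    total = new_fps / old_fps              # e.g. 1.35 = 35% faster overall
    clock = new_clock_mhz / old_clock_mhz  # e.g. 1.43 = 43% higher clock
    per_clock = total / clock              # >1.0 means gains beyond clock speed
    return total, clock, per_clock

# Reference boost clocks: GTX 980 ~1216 MHz, GTX 1080 ~1733 MHz.
# The FPS values below are placeholders for the sake of the arithmetic.
total, clock, per_clock = clock_for_clock(60.0, 1216, 81.0, 1733)
print(f"total: {total:.2f}x, clock: {clock:.2f}x, per-clock: {per_clock:.2f}x")
```

With those placeholder numbers the per-clock figure comes out slightly below 1.0x, which is the point the second comment is making about clock-for-clock gains.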
 

takriel

Member
Last page has a few posts on this. Like I said in one of them, I'm not one to believe in planned obsolescence, but in this case, Nvidia's case, I found damning evidence of them fucking up Kepler for the benefit of Maxwell cards. Just a few Google searches and you can find benchmarks and numbers to support the fact that Nvidia fucked up their old architecture in favor of the new one.

That's so evil, holy shit. They need to be called out on this more.
 

Vex_

Banned
Last page has a few posts on this. Like I said in one of them, I'm not one to believe in planned obsolescence, but in this case, Nvidia's case, I found damning evidence of them fucking up Kepler for the benefit of Maxwell cards. Just a few Google searches and you can find benchmarks and numbers to support the fact that Nvidia fucked up their old architecture in favor of the new one.

Yooooooooo. What?
 

K' Dash

Member
That's so evil, holy shit. They need to be called out on this more.

That's what everyone said in the thread we had about that: "OMG it's true, now nothing will happen and everything will stay the same"

After reading articles, benchmarks, etc., I'll just wait and get the 1070. Even if it is not happening on purpose ($$$LOL$$$), I'll not risk it again.
 
So a benchmark that uses async and favors AMD is a poor benchmark? Why would you think that? AMD developed their hardware towards that particular feature. A feature that will be utilized heavily in DX12, but is somehow a poor benchmark? Come on now.

Do you disregard Doom benchmarks that favor Nvidia?

For the same reason I don't criticize AMD cards for doing poorly with Hairworks. Async is a feature, but it remains to be seen whether it will be "heavily utilized" by game developers. Doom is an interesting scenario because it scales really well on Nvidia cards and really badly on AMD cards. I think that just ties into the fact that AMD has really poor support for OpenGL, and always has. It's definitely something to consider, but I wouldn't consider it a great benchmark. I'm also big on being an informed consumer, so in my mind you buy the card that fits the games you're going to play (if you really want to play Doom, you should factor that into your purchase) or the one that generally wins out within your price range.

I don't think AMD squeezing out a bit more in Hitman is that huge of a deal, but I also don't plan on playing that game so I have no problem disregarding it. You have to look at the entirety of benchmarks when making a PC Hardware purchase, and generally the 1080 edges out the 980 Ti by ~30%, and the Fury X by similar numbers. There will always be some variance.
 

takriel

Member
That's what everyone said in the thread we had about that: "OMG it's true, now nothing will happen and everything will stay the same"

After reading articles, benchmarks, etc., I'll just wait and get the 1070. Even if it is not happening on purpose ($$$LOL$$$), I'll not risk it again.

In this day and age, things like that can easily backfire. But I guess it just wasn't that big a deal, unlike the 3.5 GB 970 debacle.
 

Sciz

Member
In this day and age, things like that can easily backfire. But I guess it just wasn't that big a deal, unlike the 3.5 GB 970 debacle.

It's not a big deal because, unlike the 970 VRAM issue, there's no substance behind the claim.

http://www.neogaf.com/forum/showpost.php?p=193814937&postcount=221

http://www.hardwarecanucks.com/foru...iews/70125-gtx-780-ti-vs-r9-290x-rematch.html

http://www.bytemedev.com/the-gtx-780-ti-sli-end-of-life-driver-performance-analysis/
 
Debating whether I should get a 1440p or 4K monitor with a 1080. It seems a single GPU still can't get 4K/60 in most modern games.

So I'm wondering, how does dropping a game's resolution to 1440p look on a 4K monitor?
 

ISee

Member
https://youtu.be/xtely2GDxhU?t=7610

Tom Petersen from Nvidia discussed the Founders Edition a bit in this video (around 2:09:00); he says they intended it to sit in the middle of the product stack, price/quality-wise.

At work atm, can't watch the video, but does this mean that overclocked third-party 1080 models with good cooling (like the Gigabyte G1 Gaming series) will be priced above 800€? I already read some rumors that third-party prices could go up to 900€.
 

Hawk269

Member
Debating whether I should get a 1440p or 4K monitor with a 1080. It seems a single GPU still can't get 4K/60 in most modern games.

So I'm wondering, how does dropping a game's resolution to 1440p look on a 4K monitor?

That is a big decision to make. Is the 1440p monitor you are looking at limited to 60fps, or are you going 1440p at 120-144? 4K/60 is very demanding as well, and the 1080 comes closest so far to hitting it, but depending on the game it does not do it. Based on the 1080's specs and benchmarks, I would assume that a 1080 Ti or Titan-level Pascal could finally hit 4K/60fps with a single card. Unlike others, I have not had many issues with SLI, so I don't mind using that tech, but for now I am waiting to see what happens over the next 3 months or so before I decide if I want to move to a new card.

It would be amazing to have a single card that could do 4K/60fps, but that goal is always moving, and with newer games it can still be a challenge. If a single card could do 4K/60fps in Witcher 3 with Hairworks fully enabled, the highest Hairworks AA settings, and every other setting maxed out, that would be magical. Hairworks in Witcher 3 is very demanding.

I am really interested in what an EVGA Classified Hydro Copper 1080 would be like, or one from the other board partners that make high-end cards. I am sure we will see 1080s with an additional 8-pin power connector and some sick clocks in the next 2-3 months.
 

x3sphere

Member
At work atm, can't watch the video, but does this mean that overclocked third-party 1080 models with good cooling (like the Gigabyte G1 Gaming series) will be priced above 800€? I already read some rumors that third-party prices could go up to 900€.

It suggests that they could be. Hopefully not. Wouldn't be surprised to see some designs, like EVGA's hybrid cooler, go beyond $700.
 

Bigrx1

Banned
Hmmmm, after looking at quite a few benchmarks, an OC'd 980 Ti is pretty close to the 1080, at least as far as I'm concerned and by my definition of "close". Will wait another card generation or two, methinks.
 

ISee

Member
Hmmmm, after looking at quite a few benchmarks, an OC'd 980 Ti is pretty close to the 1080, at least as far as I'm concerned and by my definition of "close". Will wait another card generation or two, methinks.

People right now are comparing a reference 1080 model with overclocked 980 Ti custom designs. The thing is, third-party 1080 models will probably be able to reach 2 GHz. The 30% gap will most likely stay.

IMO:

You don't care, you want the best: wait for the 1080 Ti, or at least for third-party 1080 models.
4K: wait for 1080 Ti and Vega benchmarks.
1440p (owning a 980 Ti or Fury X): no need to upgrade atm.
1080p (owning a 980 or 390X): no need to upgrade atm, if you're willing to set shadows to very high instead of ultra.
1080p (owning a 970 or 390): are you unhappy with overall high/very high quality settings at 1080p@60fps? Wait for Polaris 10 and 1070 benchmarks.
Still gaming on a 2500K/3570K (OC) and want a 1080? Get a new CPU first.

Everybody else: wait for Polaris 10 and 1070 benchmarks, or go for the 1080 if you want to go above 1080p.

Just my opinion; people have different reasons for upgrading. Some really feel the urge to be able to max out everything, others want to feel VR-ready, others want to own the very best, others want to 'future-proof'. It's all viable.
 
About "planned obsolescence"... It's not that Nvidia is evil and deliberately gimps older models. The situation is a bit more subtle.

You have a certain amount of dev-hours to allocate to optimization. This applies both at the software house and to the Nvidia engineers doing driver work. A driver you can download and install has taken months to develop and test; it's not a last-minute thing.

When a brand new model launches, of course most of the engineers are moved to focus on it. Same for the software house: when they see there's a model the majority of customers use, it makes sense to spend hours optimizing and testing for that model. It's simply a business evaluation.

So, these days, as long as a game runs without crashing or bugs on a 7xx model, Nvidia is happy, while they focus their optimization on the newer models. With time, those dev-hours will be moved from the 9xx models to the 10xx models, and naturally the gap between those video cards will increase, as will the hardware requirements of the games.

If two years ago a 770 was the standard for high settings at 1080p/60fps, now its relative performance has halved and that slot is owned by a 970. Wait another two years and you'll need a 1070 to aim at a similar target of high settings at 1080p/60fps (unless VR and the new consoles make it even worse).

There are usually a number of different factors at play, but that's the end result you see.
 