
Nvidia GeForce GTX 1080 reviews and benchmarks

That's what I'm waiting for. Reference to reference it's ~+30% at my resolution. If the gap between my 980Ti G1 (which is basically a 980Ti +15%) and a factory-OC'd 1080 turns out bigger than that, I might cave; if it's smaller, I'll probably just wait for the GP102 cards.



That "driver obsolescence" thingie doesn't exist in the real world. And beyond that Pascal is Maxwell in pretty much every metric sans one DX12 feature where Pascal is one tier above Maxwell. So nothing should happen to Maxwell which won't affect Pascal as well.



980Ti has 6GB of VRAM.

I mis-snarked
 

Drazgul

Member
How cheap do you guys think new and used 970s will get after the 1080/1070 launches?

I have a 1920x1200 monitor, so I don't really need the Pascal power. But a 970 would be a tasty upgrade from my 760 card.

So do I, but I still want that 1070 - I need more VRAM than the 3.5GB the 970 has. And DSR, mmm, can't wait to run games with that.
 
That's just the number of PCIe lanes (16).

You only need 4 so it's sufficient.
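
For rough context on what those lane counts mean in bandwidth terms (a back-of-envelope using standard PCIe 3.0 rates, not anything specific to this board):

$$ 8\ \text{GT/s} \times \tfrac{128}{130} \approx 0.985\ \text{GB/s per lane} \;\Rightarrow\; x4 \approx 3.9\ \text{GB/s}, \quad x16 \approx 15.8\ \text{GB/s (each direction)} $$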
Really?! That would be exciting if I only need to upgrade the GPU and nothing else! It's been a few years since I last built a PC, so I've forgotten all of my self-taught crash course about it.

Here is my GPU just in case:
[image: my current GPU]


Talk about an upgrade, huh? I wonder if I'll notice the difference?
 

LordOfChaos

Member
Maxwell could switch between graphics and compute processing through context switches at predefined, or at least very coarse-grained, points in the draw request queues, and Pascal is being billed as being able to do context switches at any arbitrary instruction/rasterized pixel block.

However, this is confusing and disingenuous marketing, since the context switches still take close to 0.1 ms and nobody should really care whether an interrupt can occur on a cycle boundary if the switch still needs many tens of microseconds to complete.
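
To put that 0.1 ms in frame-budget terms (simple arithmetic, assuming a 60 fps target):

$$ \frac{0.1\ \text{ms}}{16.7\ \text{ms/frame}} \approx 0.6\%\ \text{per switch} \quad\Rightarrow\quad 10\ \text{switches} \approx 6\%\ \text{of the frame} $$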

GCN is able to smoothly interleave ALU instructions from both graphics and compute instruction queues on a cycle-by-cycle basis per compute unit. Though, like other simultaneous multithreading (SMT) implementations such as Intel's Hyper-Threading, this is no doubt tricky to get right without causing undue register/cache pressure, it's what people mostly consider the "proper" implementation of async compute shaders.

Now, the 1080's raw power still lets it brute-force past GCN, but the real test will be against Polaris, which will hopefully combine the added execution hardware 14/16nm can provide at a given die size with AMD's superior switching.
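
Not an exact analogue of the graphics pipeline, but here's a minimal CUDA sketch of the queueing idea being described: two independent streams of work submitted to one GPU, where the hardware scheduler, not the code, decides whether they truly interleave on the shader cores (GCN-style) or get time-sliced via context switches (Maxwell/Pascal-style). Kernel names and workloads are invented for illustration.

```cuda
#include <cuda_runtime.h>

// Stand-in for per-pixel shading work (the "graphics" queue).
__global__ void shadeKernel(float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = out[i] * 0.5f + 0.5f;
}

// Stand-in for async compute work (physics, lighting, etc.).
__global__ void physicsKernel(float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = out[i] + 9.81f * 0.016f;
}

int main() {
    const int n = 1 << 20;
    float *a, *b;
    cudaMalloc(&a, n * sizeof(float));
    cudaMalloc(&b, n * sizeof(float));
    cudaMemset(a, 0, n * sizeof(float));
    cudaMemset(b, 0, n * sizeof(float));

    // Two streams = two independent work queues feeding the same GPU,
    // loosely analogous to a graphics queue plus an async compute queue.
    cudaStream_t gfx, comp;
    cudaStreamCreate(&gfx);
    cudaStreamCreate(&comp);

    const int block = 256, grid = (n + block - 1) / block;
    shadeKernel<<<grid, block, 0, gfx>>>(a, n);
    physicsKernel<<<grid, block, 0, comp>>>(b, n);

    // Whether these overlap cycle-by-cycle on the execution units or get
    // time-sliced with full context switches is entirely up to the hardware.
    cudaDeviceSynchronize();

    cudaStreamDestroy(gfx);
    cudaStreamDestroy(comp);
    cudaFree(a);
    cudaFree(b);
    return 0;
}
```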
 
It will probably be down at $499-$549 in a year's time. Odds are Nvidia releases a 1080 Ti in the Fall/Winter at $649-$699 and knocks the 1070 and 1080 down a bit. They have done this with the 700 and 900 series.

I imagine the 1070 will be just above the 980 Ti in terms of performance. Given the 1080 clears the 980 Ti by 25-30%, it would make sense that the 1070 at least meets the 980 Ti in performance.
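
As a rough sanity check on that reasoning (the ~0.8 ratio of 1070 to 1080 is my assumption, not a published figure):

$$ 1080 \approx 1.25\text{–}1.30 \times 980\,\text{Ti}, \quad 1070 \approx 0.8 \times 1080 \;\Rightarrow\; 1070 \approx 1.00\text{–}1.04 \times 980\,\text{Ti} $$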

Thanks for the response. If I'm planning on gaming on a 1080p tv, would a 980Ti or 1070 be the most cost effective choice? VR would be a nice option, but I'm probably going to hold out a bit before jumping in.
 
Maxwell could switch between graphics and compute processing through context switches at predefined, or at least very coarse-grained, points in the draw request queues, and Pascal is being billed as being able to do context switches at any arbitrary instruction/rasterized pixel block.

However, this is confusing and disingenuous marketing, since the context switches still take close to 0.1 ms and nobody should really care whether an interrupt can occur on a cycle boundary if the switch still needs many tens of microseconds to complete.

GCN is able to smoothly interleave ALU instructions from both graphics and compute instruction queues on a cycle-by-cycle basis per compute unit. Though, like other simultaneous multithreading (SMT) implementations such as Intel's Hyper-Threading, this is no doubt tricky to get right without causing undue register/cache pressure, it's what people mostly consider the "proper" implementation of async compute shaders.

Unfortunately for AMD, sheer brute force easily overcomes whatever difference there is in the implementation of async compute. The 1080 easily crushes the Fury X in Ashes of the Singularity and even hands the Fury X its ass in Hitman, a game AMD actually wrote the async compute code for themselves.
 

Freiya

Member
As in, I don't believe it. I can believe it being on par with a 980 Ti, but the 1070 seems like it would be very close to the 1080 in performance if it were also better. Obviously I could be wrong, but I will definitely be surprised if so.
 
Trying to find info on Nvidia dropping Kepler support but can't find anything concrete. Can you elaborate, please?

https://www.youtube.com/watch?v=Imhpe5KNIW0

780 Ti outperformed by a GTX 970.

This is just one thing I found real quick. There are several examples of older cards being disproportionately outperformed in newer games by newer-generation but weaker cards. I'm trying to find it, but there was a major publication that did a big write-up about Nvidia kneecapping older GPUs in games that utilize Nvidia-specific features. The specific example I remember from the article had to do with the way water was rendered in Crysis 2. I really need to find it again... if anyone knows what I'm talking about, please provide a link.
 

K' Dash

Member
Unfortunately for AMD, sheer brute force easily overcomes whatever difference there is in the implementation of async compute. The 1080 easily crushes the Fury X in Ashes of the Singularity and even hands the Fury X its ass in Hitman, a game AMD actually wrote the async compute code for themselves.

Umm, really? I just saw DF's benchmarks and the Fury X was very close to the 1080 in those two games, IIRC.


[image: Hitman benchmark chart]


Is this getting its ass handed to it? A whole 3 fps?

Yup, DF was right.
 

tuxfool

Banned
Unfortunately for AMD, sheer brute force easily overcomes whatever difference there is in the implementation of async compute. The 1080 easily crushes the Fury X in Ashes of the Singularity and even hands the Fury X its ass in Hitman, a game AMD actually wrote the async compute code for themselves.

[image: Hitman benchmark chart]


Is this getting its ass handed to it? A whole 3 fps?
 
As in, I don't believe it. I can believe it being on par with a 980 Ti, but the 1070 seems like it would be very close to the 1080 in performance if it were also better. Obviously I could be wrong, but I will definitely be surprised if so.

It could be better in the same way the 970 was better than a 780 Ti in later games, as opposed to at launch. Anyway, it doesn't matter at all whether it's 5% slower, even, or 5% faster: it would have the same overall performance as a 980 Ti, and you wouldn't tell the difference unless you benchmarked it. Let's wait for the reviews though, as always.
 
Umm, really? I just saw DF's benchmarks and the Fury X was very close to the 1080 in those two games, IIRC.

http://www.pcper.com/reviews/Graphi...ition-Review-GP104-Brings-Pascal-Gamers/Hitma

[image: Hitman 3840x2160 FRAPS FPS chart]


32% faster average framerate than the Fury X in Hitman at 4K~

Is this getting its ass handed to it? A whole 3 fps?

LMAO, you just showed a graph of a DX11 benchmark running at 1440p. I'll let you think about that for a moment. This is an example of being CPU-bottlenecked.

DX12 is as I just showed, when the GPU is the limiting factor. I don't even know if DF knows what they are testing.
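
A rough way to see why a CPU bottleneck compresses these gaps (conceptual, not taken from any of the linked reviews):

$$ t_{\text{frame}} \approx \max\!\left(t_{\text{CPU}},\, t_{\text{GPU}}\right) $$

At 1440p in DX11, both cards' GPU time can drop below the CPU time, so they post nearly identical framerates; at 4K the GPU term dominates and the real difference shows.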
 
Maxwell could switch between graphics and compute processing through context switches at predefined, or at least very coarse-grained, points in the draw request queues, and Pascal is being billed as being able to do context switches at any arbitrary instruction/rasterized pixel block.

However, this is confusing and disingenuous marketing, since the context switches still take close to 0.1 ms and nobody should really care whether an interrupt can occur on a cycle boundary if the switch still needs many tens of microseconds to complete.

GCN is able to smoothly interleave ALU instructions from both graphics and compute instruction queues on a cycle-by-cycle basis per compute unit. Though, like other simultaneous multithreading (SMT) implementations such as Intel's Hyper-Threading, this is no doubt tricky to get right without causing undue register/cache pressure, it's what people mostly consider the "proper" implementation of async compute shaders.

Yeah, Pascal still can't run graphics and compute simultaneously on an SM.
 
http://www.pcper.com/reviews/Graphi...ition-Review-GP104-Brings-Pascal-Gamers/Hitma

[image: Hitman 3840x2160 FRAPS FPS chart]


32% faster average framerate than the Fury X in Hitman at 4K~



LMAO, you just showed a graph of a DX11 benchmark running at 1440p. I'll let you think about that for a moment. This is an example of being CPU-bottlenecked.

DX12 is as I just showed, when the GPU is the limiting factor. I don't even know if DF knows what they are testing.

VRAM limits, what are they?

https://www.youtube.com/watch?v=RqK4xGimR7A#t=12m18s

15% faster in Hitman is hardly crushing.
 

LordOfChaos

Member
Unfortunately for AMD, sheer brute force easily overcomes whatever difference there is in the implementation of async compute. The 1080 easily crushes the Fury X in Ashes of the Singularity and even hands the Fury X its ass in Hitman, a game AMD actually wrote the async compute code for themselves.

This is true enough, against AMD's 28nm cards. I'm interested to see AMD's superior method combined with the new Polaris architecture, though. Pascal can brute-force past older GCN, but can it brute-force past another new uArch, with the extra transistors 14nm will provide to throw at the problem?


Still impressive, though, that the 28nm Fury X isn't too far off in AoS DX12. The 1080 has a lead there, but a shrunken one compared to how much it has dominated everything else.
 

tuxfool

Banned
LMAO, you just showed a graph of a DX11 benchmark running at 1440p. I'll let you think about that for a moment. This is an example of being CPU-bottlenecked.

DX12 is as I just showed, when the GPU is the limiting factor. I don't even know if DF knows what they are testing.

[image: Hitman 2560x1440 FRAPS FPS chart]


Fine. Here it is from the same source. Now, the card *is* more powerful, so one should expect it to power through, but handing it its ass still isn't the case. +17%
 

K' Dash

Member
Having trouble deciding what to do. I'm looking for a 4k60fps solution (not doing SLI again).
Seems like the 1080 may not cut it, so I'm considering getting a 980 Ti for cheap(er) and holding out for a 1080 Ti. Or the same thing but with a 1080, then selling it for a 1080 Ti, but I feel like I'll lose more money that way.

Suggestions?
 
"Having trouble deciding what to do. I'm looking for a 4k60fps solution (not doing sli again).
Seems as the 1080 may not cut it, I'm considering getting a 980ti for cheap(er) and holding out for a 1080ti. Or same thing but 1080 and then sell for 1080ti but I feel like I'll lose more money this way.

Suggestions?"


Don't spend money at all until there are actually 4k60fps cards, if that's what you really want?
 

tuxfool

Banned
Well, this shit made me sick. I'd say I don't believe in conspiracy theories of planned obsolescence, but the evidence is damning.

I think those videos overstate it, but it cannot be denied that Nvidia abandons its old architectures. AMD also benefits from the fact that their architectures don't change much between iterations, allowing any optimization to apply across several generations of cards. One could also speculate that AMD's GPUs are intrinsically more powerful but leave a lot of performance on the table due to a lack of driver optimization.
 
Having trouble deciding what to do. I'm looking for a 4k60fps solution (not doing SLI again).
Seems like the 1080 may not cut it, so I'm considering getting a 980 Ti for cheap(er) and holding out for a 1080 Ti. Or the same thing but with a 1080, then selling it for a 1080 Ti, but I feel like I'll lose more money that way.

Suggestions?

What card do you currently have? Also, what CPU?
 

J-Rzez

Member
Well, hopefully AMD brings it, and soon, because the 1080 looks good to me, especially as someone who has a 144Hz monitor. It's also nice that it should be able to run eye-candy 4K games on my TV, where I don't care about high fps.

The only thing that really pisses me off is this FE crap. I'm sure Nvidia saying nothing is stopping AIBs from releasing non-reference cards on day one is just saving face; they know that changing their stance now doesn't give the AIBs enough time to get their stuff out.

If AMD is bringing Vega with HBM out in October, they had better announce it damned soon.
 

ZOONAMI

Junior Member
"Having trouble deciding what to do. I'm looking for a 4k60fps solution (not doing sli again).
Seems as the 1080 may not cut it, I'm considering getting a 980ti for cheap(er) and holding out for a 1080ti. Or same thing but 1080 and then sell for 1080ti but I feel like I'll lose more money this way.

Suggestions?"


Don't spend money at all until there are actually 4k60fps cards, if that's what you really want?

An OC'd 980 Ti will do 4k60 at med/high settings, a 1080 will do high. You're gonna have to wait to max stuff at 4k60. I'm fine with tweaking things with my 980 Ti, but if you aren't, then wait.

My 980 Ti does Doom at 4k/60 on the high preset, for example.
 
A 390X beating a Titan X in Hitman at 4K DX12.

Vega with 8GB+ of HBM should wipe the floor with the 1080 in async titles, and likely in general. Hopefully AMD gets it out ASAP.

http://www.tweaktown.com/guides/7634/hitman-pc-performance-analysis-directx-12-finest/index2.html

Vega's competitor isn't the Titan X or even the 1080. It's the 1080 Ti or Titan P, assuming Nvidia bothers with another Titan; they might just price the 1080 Ti at $1000 and call it a day. So I guess we'll see who wins that fight.
 

ZOONAMI

Junior Member
Vega's competitor isn't the Titan X or even the 1080. It's the 1080 Ti or Titan P, assuming Nvidia bothers with another Titan; they might just price the 1080 Ti at $1000 and call it a day. So I guess we'll see who wins that fight.

Not gonna see a 1080 Ti by October, so hopefully AMD really does get Vega out by then, as rumored. If the next Titan is out by then, yeah, they are probably fucked by sheer horsepower once again. But if the next Titan releases soon, at least they can undercut it on price with Vega.
 
An OC'd 980 Ti will do 4k60 at med/high settings, a 1080 will do high. You're gonna have to wait to max stuff at 4k60. I'm fine with tweaking things with my 980 Ti, but if you aren't, then wait.

My 980 Ti does Doom at 4k/60 on the high preset, for example.

I am fine with that, also good to know! I'm planning on playing Doom on PC once I figure out what I'm doing.

May get a 980 Ti for now, as there are some local sellers offering good prices.
 

ZOONAMI

Junior Member
If only Polaris wasn't cock-blocking lol.
Vega is gonna be quite the wait I feel, sadly.

It's rumored that AMD has moved up Vega's release schedule to October, so hopefully that happens.

Hoping that AMD has some sort of larger-die Polaris up their sleeve that will launch this summer, but that's doubtful.
 

tuxfool

Banned
Not gonna see a 1080 Ti by October, so hopefully AMD really does get Vega out by then, as rumored. If the next Titan is out by then, yeah, they are probably fucked by sheer horsepower once again. But if the next Titan releases soon, at least they can undercut it on price with Vega.

Nvidia has to abide by the same limitations as AMD does with Vega, namely the availability and cost of implementing HBM2.

They are offering a GP100 GPU with HBM2 simply because in that market (HPC) the margins enable them to offer a low-volume product that is likely to have low yields.
 

ZOONAMI

Junior Member
I am fine with that, also good to know! I'm planning on playing Doom on PC once I figure out what I'm doing.

May get a 980 Ti for now, as there are some local sellers offering good prices.

Keep in mind my 980 Ti is clocked at 1500/8000, so you are going to want a good clocker; the Gigabyte Xtreme, EVGA Classified, and MSI Gaming/Golden are typically good clockers.
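
For reference on what the 8000 half of that means (simple arithmetic, assuming it's the effective memory rate in MT/s on the 980 Ti's 384-bit bus):

$$ 8\ \text{Gb/s/pin} \times 384\ \text{pins} \div 8\ \text{bits/byte} = 384\ \text{GB/s}, \quad \text{vs. } 7 \times 384 \div 8 = 336\ \text{GB/s at stock} $$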
 

Renekton

Member
Not gonna see a 1080 Ti by October, so hopefully AMD really does get Vega out by then, as rumored. If the next Titan is out by then, yeah, they are probably fucked by sheer horsepower once again. But if the next Titan releases soon, at least they can undercut it on price with Vega.
Don't hold your breath, as both are struggling with bigger dies, moreso AMD, since Vega is reportedly fabbed by TSMC, so they have to design for two different foundries. Yeesh.
 

Leatherface

Member
I paused playing The Witcher about halfway through on my GTX 780 so I could enjoy it at max settings at 1440p/60fps. I wonder if this card will handle the game maxed at 1440/60 with HairWorks enabled? Also, 1070, or just go for it with the 1080?
 
Keep in mind my 980 Ti is clocked at 1500/8000, so you are going to want a good clocker; the Gigabyte Xtreme, EVGA Classified, and MSI Gaming/Golden are typically good clockers.

One of the sellers was selling an Asus Strix; some quick research seems to show it's decent. Any knowledge about it?
 
Not gonna see a 1080 Ti by October, so hopefully AMD really does get Vega out by then, as rumored. If the next Titan is out by then, yeah, they are probably fucked by sheer horsepower once again. But if the next Titan releases soon, at least they can undercut it on price with Vega.

Unless AMD has invented a time machine and brought HBM2 back to October from 2017, I think it's highly unlikely we'll see Vega by then. I mean, Nvidia already announced the Tesla P100, which has HBM2, and they are saying early 2017 for its general availability.
 