That's just the total number of lanes: 16.
You only need x4, so it's sufficient.
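For anyone wondering why x4 is enough, here's a quick back-of-the-envelope in Python. The per-lane rate is the standard PCIe 3.0 figure; the few-percent claim at the end is the usual rule of thumb, not a measurement:

```python
# Rough PCIe 3.0 bandwidth math: 8 GT/s per lane with 128b/130b encoding
# works out to roughly 0.985 GB/s of usable bandwidth per lane.
PCIE3_PER_LANE_GBS = 0.985

for lanes in (4, 8, 16):
    print(f"x{lanes}: ~{lanes * PCIE3_PER_LANE_GBS:.1f} GB/s")

# x4:  ~3.9 GB/s
# x8:  ~7.9 GB/s
# x16: ~15.8 GB/s
# Games rarely saturate even x8, so a 1070 on x4 mostly loses a few
# percent rather than falling off a cliff.
```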
You only need x4 for 1070?
Am I missing something here?
It'll have less performance, but it'll run.
980Ti has 6GB of VRAM.
That's what I'm waiting for. Reference vs. reference it's ~+30% at my resolution. If the gap between my 980 Ti G1 (which is basically a reference 980 Ti +15%) and a factory-OC 1080 turns out bigger than that, I might cave; if it's smaller, I'll probably just wait for the GP102 cards.
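To put rough numbers on that, using the same ~+30% and ~+15% estimates from above (nothing measured here):

```python
# Both +x% figures are rough estimates from the post above, not benchmarks.
ref_1080_vs_ref_980ti = 1.30  # reference 1080 vs reference 980 Ti, ~+30%
g1_vs_ref_980ti = 1.15        # 980 Ti G1 vs reference 980 Ti, ~+15%

effective_gap = ref_1080_vs_ref_980ti / g1_vs_ref_980ti - 1
print(f"Reference 1080 vs 980 Ti G1: ~{effective_gap:.0%}")  # ~13%
# A factory-OC 1080 has to claw back that ~15% before the full ~30%
# gap reappears.
```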
That "driver obsolescence" thingie doesn't exist in the real world. And beyond that Pascal is Maxwell in pretty much every metric sans one DX12 feature where Pascal is one tier above Maxwell. So nothing should happen to Maxwell which won't affect Pascal as well.
How cheap do you guys think new and used 970s will get after the 1080/1070 launches?
I have a 1920x1200 monitor, so I don't really need the Pascal power. But a 970 would be a tasty upgrade for my 760 card.
Really?! That would be exciting if I only need to upgrade the GPU and nothing else! It's been a few years since I last built a PC, so I've forgotten all of my self-taught crash course.
It will probably be down to $499-$549 in a year's time. Odds are Nvidia releases a 1080 Ti in the fall/winter at $649-$699 and knocks the 1070 and 1080 down a bit. They have done this with the 700 and 900 series.
I imagine the 1070 will land just above the 980 Ti in performance; given that the 1080 clears the 980 Ti by 25-30%, it would make sense for the 1070 to at least match the 980 Ti.
Maxwell could only switch between graphics and compute processing through context switches at predefined, or at least very coarse-grained, points in the draw request queues; Pascal is being billed as able to context-switch at any arbitrary instruction/rasterized pixel block.
However, this is confusing and disingenuous marketing, since the context switches still take close to 0.1 ms, and nobody should really care whether an interrupt can occur on a cycle boundary if the switch itself still needs many tens of microseconds to complete.
GCN, by contrast, is able to smoothly interleave ALU instructions from both the graphics and compute instruction queues on a cycle-by-cycle basis per compute unit. Like other simultaneous multithreading implementations such as Hyper-Threading, this is no doubt tricky to get right without causing undue register/cache pressure, but it's what people mostly consider the "proper" implementation of async compute shaders.
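To see why the switch latency matters more than where the switch can occur, here's a toy overhead model. The ~0.1 ms figure is the one quoted above; the switches-per-frame counts are made up for illustration:

```python
# Toy model: fraction of a 60 fps frame lost to graphics<->compute
# context switches.
FRAME_MS = 1000 / 60   # ~16.7 ms per frame at 60 fps
SWITCH_MS = 0.1        # ~0.1 ms per context switch (figure from above)

for switches_per_frame in (2, 10, 50):  # hypothetical workloads
    overhead = switches_per_frame * SWITCH_MS / FRAME_MS
    print(f"{switches_per_frame:2d} switches/frame -> {overhead:.1%} of the frame lost")

#  2 switches/frame ->  1.2% of the frame lost
# 10 switches/frame ->  6.0% of the frame lost
# 50 switches/frame -> 30.0% of the frame lost
# Per-cycle interleaving (the GCN approach) pays essentially zero switch
# cost, which is why fine-grained async workloads favor it no matter
# where Pascal is allowed to preempt.
```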
Trying to find info on Nvidia dropping Kepler support, but I can't find anything concrete; can you elaborate, please?
Unfortunately for AMD, sheer brute force easily overcomes whatever difference there is in the implementation of async compute. The 1080 easily crushes the Fury X in Ashes of the Singularity and even hands the Fury X its ass in Hitman, a game AMD actually wrote the async compute code for themselves.
Is this getting its ass handed to it? A whole 3 fps?
How do you mean?
As in, I don't believe it. I can believe it being on par with a 980 Ti, but the 1080 seems too close in performance for the 1070 to also beat the 980 Ti. Obviously I could be wrong, but I will definitely be surprised if so.
Umm, really? I just saw the DF benchmarks, and the Fury X was very close to the 1080 in those two games, IIRC.
http://www.pcper.com/reviews/Graphi...ition-Review-GP104-Brings-Pascal-Gamers/Hitma
32% faster average framerate than Fury X in Hitman in 4K~
LMAO, you just showed a graph of a DX11 benchmark running at 1440p. I'll let you think about that for a moment. That is an example of being CPU-bottlenecked.
The DX12 results are as I just showed when the GPU is the limiting factor. I don't even know if DF knows what they are testing.
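The bottleneck point is easy to model: frame time is roughly max(CPU time, GPU time), and only the GPU side scales with resolution. A crude sketch, with both millisecond figures invented for illustration:

```python
# Crude bottleneck model: frame time ~ max(cpu_ms, gpu_ms).
# CPU cost is roughly resolution-independent; GPU cost scales with
# pixel count. Both millisecond figures below are invented.
CPU_MS = 10.0        # hypothetical per-frame CPU cost
GPU_MS_AT_4K = 18.0  # hypothetical per-frame GPU cost at 4K

for name, pixels in (("1440p", 2560 * 1440), ("4K", 3840 * 2160)):
    gpu_ms = GPU_MS_AT_4K * pixels / (3840 * 2160)
    frame_ms = max(CPU_MS, gpu_ms)
    limiter = "GPU" if gpu_ms > CPU_MS else "CPU"
    print(f"{name}: {1000 / frame_ms:.0f} fps, {limiter}-bound")

# 1440p: 100 fps, CPU-bound <- the cards bunch up; says little about the GPU
# 4K:     56 fps, GPU-bound <- this is where the GPUs actually separate
```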
VRAM limits, what are they?
This is comparing an 8GB card to a 4GB card...???
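And that's exactly why the VRAM jab matters; here's a crude sketch of what spilling past VRAM does to frame time (every number here is invented, not measured from any real game):

```python
# Toy VRAM model: once the working set exceeds VRAM, the overflow has to
# stream over PCIe every frame and frame time balloons.
PCIE_GBS = 15.8       # ~x16 PCIe 3.0 bandwidth, GB/s
BASE_FRAME_MS = 16.7  # hypothetical frame time when everything fits

def frame_ms(working_set_gb, vram_gb):
    spill_gb = max(0.0, working_set_gb - vram_gb)
    # worst case: the spilled data is re-fetched over the bus each frame
    return BASE_FRAME_MS + spill_gb / PCIE_GBS * 1000

for vram in (4, 8):
    ms = frame_ms(working_set_gb=5.0, vram_gb=vram)  # say 5 GB used at 4K
    print(f"{vram} GB card: {ms:.1f} ms/frame (~{1000 / ms:.0f} fps)")

# 4 GB card: 80.0 ms/frame (~13 fps)
# 8 GB card: 16.7 ms/frame (~60 fps)
```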
Nvidia kneecaps their older cards. I'm convinced.
Here it is! It was a youtube video!
https://www.youtube.com/watch?v=O7fA_JC_R5s
And the NeoGAF thread that came with it.
http://www.neogaf.com/forum/showthread.php?t=1177496
http://techreport.com/review/21404/crysis-2-tessellation-too-much-of-a-good-thing
The Fury X is beating the 980 Ti, so you tell me?
The 6 GB 980 Ti IS RIGHT THERE LMAO
Well, this shit made me sick. I'd say I don't believe in conspiracy theories about planned obsolescence, but the evidence is damning.
Having trouble deciding what to do. I'm looking for a 4K60fps solution (not doing SLI again).
Seems like the 1080 may not cut it, so I'm considering getting a 980 Ti for cheap(er) and holding out for a 1080 Ti. Or the same thing but with a 1080, then selling it for a 1080 Ti, but I feel like I'll lose more money that way.
Suggestions?
What card do you currently have? Also, what CPU?
"Having trouble deciding what to do. I'm looking for a 4k60fps solution (not doing sli again).
Seems as the 1080 may not cut it, I'm considering getting a 980ti for cheap(er) and holding out for a 1080ti. Or same thing but 1080 and then sell for 1080ti but I feel like I'll lose more money this way.
Suggestions?"
Don't spend money at all until there are actually 4K60fps cards, if that's what you really want?
390X beating a Titan X in Hitman at 4K DX12.
Vega with 8GB+ of HBM should wipe the floor with the 1080 in async titles, and likely in general; hopefully AMD gets it out ASAP.
http://www.tweaktown.com/guides/7634/hitman-pc-performance-analysis-directx-12-finest/index2.html
Vega's competitor isn't the Titan X or even the 1080; it's the 1080 Ti or Titan P. That's assuming Nvidia bothers with another Titan at all; they might just price the 1080 Ti at $1000 and call it a day. So I guess we'll see who wins that fight.
An OC'd 980 Ti will do 4K60 at med/high; a 1080 will do high. You're gonna have to wait to max stuff at 4K60. I'm fine with tweaking things on my 980 Ti, but if you aren't, then wait.
My 980 Ti does Doom at 4K/60 on the high preset, for example.
If only Polaris wasn't cock-blocking lol.
Vega is gonna be quite the wait I feel, sadly.
Not gonna see a 1080 Ti by October, so hopefully AMD really does get Vega out by then, as rumored. If the next Titan is out by then, yeah, they are probably fucked by sheer horsepower once again. But if the new Titan releases soon, at least they can undercut it on price with Vega.
I am fine with that, also good to know! I'm planning on playing Doom on PC once I figure out what I'm doing.
May get a 980 ti for now as there are some local sellers offering good prices.
Don't hold your breath, as both are struggling with bigger dies, more so AMD, since Vega is reportedly done by TSMC, so they gotta design for two different foundries, yeesh.
Keep in mind my 980 Ti is clocked at 1500/8000, so you are going to want a good clocker; a Gigabyte Xtreme, EVGA Classified, or MSI Gaming/Golden is typically a good clocker.