
R9 390 or GTX 970? Which one?

The 390 is faster and would have been the card I went with *if* it fit in my case and my power supply could run it.

I'm running a Silverstone Sugo SG05 (with the front intake fan reversed to exhaust) and a 450W SFX PSU (flipped so it only draws cool air from the top of the case), and the 390 physically wouldn't fit. So I went with an EVGA GTX 970 SSC, which barely fits; it has a bigger PCB than most other 970s and isn't a blower, which is why I reversed the front case fan on the SG05. I'm very happy with the performance. If I were running a bigger case and a meatier PSU I would almost certainly have gone for the 390; it's better value for money and generally faster in most benchmarks I've seen.

Either is a very quick card though, certainly at 1080p (which is all I game at on a 51" Samsung plasma in my living room). The 390 would likely be the clear choice above 1080p.
Can a normal PSU fit in that case? SFX PSUs are a pain to track down sometimes, in Australia at least.
 
That's exactly what I'm worried about with Nvidia: driver optimisation for older cards.

They showed this practice this year, when Maxwell hit.
Kepler performance was seriously crippled by new driver updates, to the point where a 970 was faster than a 780 Ti.
In Project Cars even a 960 surpassed a 780!
Similar problems with The Witcher 3 at launch.
First they denied the problem, and only when more and more sites reported on the issue and the pressure kept growing did they slowly react with better optimizations.

This is bad business practice, forcing your customers to buy the newest product.
What will happen with Maxwell driver support, once Pascal hits next year?

This isn't true either.
I was reading a new test about this only yesterday; it's in Russian but the graphs are self-explanatory:

[benchmark graphs: 03.png, 04.png, 11.png, 12.png, 19.png, 20.png]


Kepler chips are aging a bit badly simply because their hardware isn't well suited to the newer workloads that came with the new consoles, and because NV is obviously concentrating its driver optimization efforts on Maxwell. Nothing was "crippled with drivers".
 
Probably nothing, because according to Nvidia's roadmaps Pascal is just Maxwell on top of HBM2, so the uArch is mostly the same. Hence why it's so important to know right now whether Maxwell is crippled in DX12 due to the lack (or software emulation) of async compute.

According to NV roadmaps Pascal is 10x faster than Maxwell. Entirely possible in specific compute scenarios but doubtful in gaming.

NV roadmaps don't really reveal all that much about Pascal, so we have no idea what the architecture will be like.

I do agree with dr_rus though regarding Kepler: it was not crippled through drivers, it is just that focus shifted to Maxwell, as you would expect. The concern with the 970's memory arrangement is how much optimisation is needed to support it on a game-by-game basis, and how long NV will give it top-tier support. It is one of those things we don't know, and it is worth considering when there is a viable alternative.
 
Can a normal PSU fit in that case? SFX PSUs are a pain to track down sometimes, in Australia at least.

Sadly no. IIRC there is a slightly bigger case (Silverstone as well, I think) that can take a normal ATX PSU and is only marginally larger, but I can't for the life of me remember where I saw it, sorry.
 
Personally I swing the way of the GTX 970, to be honest. I do not entertain the thought of an extra 60-121 watts of heat. If I get slightly worse performance, so be it.
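
To put that wattage gap in perspective, here is a quick back-of-the-envelope in Python. All three inputs are assumptions (a ~120 W load delta, 20 hours of gaming a week, $0.12/kWh), not measurements:

extra_watts = 120        # rough 390-vs-970 power gap under load (assumed)
hours_per_week = 20      # assumed gaming time
price_per_kwh = 0.12     # assumed electricity rate, USD

kwh_per_year = extra_watts / 1000 * hours_per_week * 52
print(f"{kwh_per_year:.0f} kWh/year, ~${kwh_per_year * price_per_kwh:.2f}/year")
# -> 125 kWh/year, ~$14.98/year

So the running cost is trivial either way; the real question is whether you want that extra heat dumped into your case.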

Besides, I do play slightly older games that happen to excel in NVIDIA's strengths, too...

Also, there's too much value-add to consider.

(Well, I might be a bit biased, since I got a free copy of MGSV with my card.)
 
I do agree with dr_rus though regarding Kepler: it was not crippled through drivers, it is just that focus shifted to Maxwell, as you would expect.
You can spin it how you want; that doesn't change the situation.
Fact is, better cards from only a year ago now perform worse than weaker current-gen cards.
That is bad business practice, milking your (loyal) customers into buying the newest product.
To me NV looks more and more like Apple, and the same goes for their defenders.
 
This isn't true either.
I was reading a new test about this only yesterday; it's in Russian but the graphs are self-explanatory:

[benchmark graphs: 03.png, 04.png, 11.png, 12.png, 19.png, 20.png]


Kepler chips are aging a bit badly simply because their hardware isn't well suited to the newer workloads that came with the new consoles, and because NV is obviously concentrating its driver optimization efforts on Maxwell. Nothing was "crippled with drivers".

Yeah, there was no driver crippling; it was poor architectural decisions by Nvidia catching up to them, or the increased focus on GCN because of the consoles, or some combo of both. The same thing appears to be happening to Maxwell.

According to NV roadmaps Pascal is 10x faster than Maxwell. Entirely possible in specific compute scenarios but doubtful in gaming.

NV roadmaps don't really reveal all that much about Pascal, so we have no idea what the architecture will be like.

I do agree with dr_rus though regarding Kepler: it was not crippled through drivers, it is just that focus shifted to Maxwell, as you would expect. The concern with the 970's memory arrangement is how much optimisation is needed to support it on a game-by-game basis, and how long NV will give it top-tier support. It is one of those things we don't know, and it is worth considering when there is a viable alternative.

That was Jen-Hsun making a joke.
 
You can spin it how you want; that doesn't change the situation.
Fact is, better cards from only a year ago now perform worse than weaker current-gen cards.
That is bad business practice, milking your (loyal) customers into buying the newest product.
To me NV looks more and more like Apple, and the same goes for their defenders.

Spinning is what you're doing at the moment, friend.
There is nothing strange in "weaker" current-gen cards performing better than "better" cards from a year ago. No matter how you spin it.

Yeah, there was no driver crippling; it was poor architectural decisions by Nvidia catching up to them, or the increased focus on GCN because of the consoles, or some combo of both. The same thing appears to be happening to Maxwell.

This "poor architectural decision" gave them ~20% of the market. And AMD's "good architectural decision" in GCN back in 2012 had to wait for four years for DX12 to be accessible to be used in gaming. It's all a matter of perspective.

Nothing seems to be happening to Maxwell.

Btw, I'm still waiting for you to name these 3-4 DX12 benchmarks we have available right now.
 
Spinning is what you're doing at the moment, friend.
There is nothing strange in "weaker" current-gen cards performing better than "better" cards from a year ago. No matter how you spin it.



This "poor architectural decision" gave them ~20% of the market. And AMD's "good architectural decision" in GCN back in 2012 had to wait for four years for DX12 to be accessible to be used in gaming. It's all a matter of perspective.

Nothing seems to be happening to Maxwell.

Btw, I'm still waiting for you to name these 3-4 DX12 benchmarks we have available right now.

Market share doesn't necessarily correlate with how good or bad an architecture/product is. I mean, just look at how many people in this thread are telling this guy to buy an inferior product. Not sure what you're on about with having to wait 4 years.

It is happening to Maxwell. Look at how the Hawaii chips compare to the 980 and 970 now versus how they compared a year ago when Maxwell launched. A combination of better drivers from AMD and newer games has had quite an impact on the relative performance.

The DX12 benches would be Ashes, Fable, and the 3DMark test. I thought there might have been a fourth but I wasn't 100% on it.
 
I just picked up a 970 as an in-between card, till I get an idea of what VR is going to actually cost me. I assume "a lot".

For the past few years I've been running a 7870, then bought a second 7870 to CrossFire after finding an open-box clearance item stupid cheap at Microcenter. That experience has been far more downs than ups, though: driver support was typically awful, and the drivers that did get released weren't all that impressive. I did get some impressive benchmarks, at least.
 
I would go for the 980 Ti.

Or, if you want to pay less, the Sapphire Fury (non-X version); it is the most silent card ever and packs a punch.

Ahan. What kind of PSU would I need to buy for said card? Right now I have a Cooler Master Xtreme Power Plus, 500W.
 
Ahan. What kind of PSU would I need to buy for said card? Right now I have a Cooler Master Xtreme Power Plus, 500W.

You need something new if you don't want to grill your hard drives. I am not a PSU expert, especially when it comes to the stuff you mostly use in America.
You can't go wrong with a be quiet! Dark Power or Straight Power. 600-700 watts should be more than enough, depending on how much you OC.
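
If you want to sanity-check that figure, here's a minimal Python sketch; the numbers are round approximations (250 W is the 980 Ti's rated board power, the rest are rough allowances I'm assuming):

gpu_watts = 250     # GTX 980 Ti rated board power (approx.)
cpu_watts = 100     # typical quad-core under load (assumed)
rest_watts = 75     # board, RAM, drives, fans (rough allowance)
headroom = 1.25     # ~25% margin for overclocking and PSU efficiency

print(f"recommended PSU: ~{(gpu_watts + cpu_watts + rest_watts) * headroom:.0f} W")
# -> recommended PSU: ~531 W

Which is why a quality 600-700 W unit leaves a comfortable margin even with an overclock.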
 
Don't want to derail this thread too much, but does Nvidia do multiple display profile shortcuts? I switched from an ATI 4870 to a GTX 580, and that's one feature I miss dearly. I have a 24" PlayStation display and a 46" Samsung, and I like being able to switch between them with a keyboard shortcut like Ctrl+Shift+J or K, etc. Google pointed to "no" profiles.
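
As far as I can tell there's no built-in Nvidia hotkey for this, but here's a workaround sketch. It assumes the third-party NirCmd utility (nircmd.exe on your PATH) and the Python 'keyboard' package, and the monitor indices 1/2 are guesses that would need checking against your setup:

import subprocess
import keyboard  # third-party: pip install keyboard

def set_primary(monitor_index: int) -> None:
    # NirCmd's setprimarydisplay switches which monitor Windows treats as primary.
    subprocess.run(["nircmd.exe", "setprimarydisplay", str(monitor_index)], check=True)

keyboard.add_hotkey("ctrl+shift+j", set_primary, args=(1,))  # e.g. the 24" display
keyboard.add_hotkey("ctrl+shift+k", set_primary, args=(2,))  # e.g. the 46" display
keyboard.wait()  # keep the script running to listen for the hotkeys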
 
THIS IS PEQUOD, R9 390 ARRIVING SOON! (If you can call 5 days soon.)
I ordered an Asus Strix, though.

Nice, an Asus. They make fine cards. I have their 970, you see, and it runs pretty well. Not the best overclocker, though: you can usually only take it to around 980 level, nothing more (so no 980-beating performance). That's why I wish I had an MSI instead... a little more power feeding it, but much nicer cooling and overclocking potential.

Either way, since you went with an open-air non-reference cooler, enjoy the improved cooling!

How's the case fan situation, by the way?
 
980 Ti beats Fury X in most cases. Fury X is competitive at 4K resolution in some games. Once both cards are overclocked it's no contest; the 980 Ti pretty much wins.

I find Fury X performance really strange. Its scaling is crap considering the extra shaders, added bandwidth and the colour compression tech, yet it sees large gains from memory overclocking. Really, memory overclocking should have little impact given the amount of bandwidth it has, and latency is already lower than GDDR5's, so that should not be an issue either.

It seems to point to drivers that are not getting the best out of the memory system, so I could imagine some gains down the road, but who really knows? If you must buy a top-tier card now, the 980 Ti is the way to go, but don't be surprised if in a year or two the Fury X catches up or even beats it.
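
For reference on the bandwidth point, the peak numbers fall straight out of bus width times per-pin data rate; a quick sketch using the public spec-sheet figures:

# Peak memory bandwidth in GB/s = bus width (bits) / 8 * data rate (Gbps per pin).
def bandwidth_gb_s(bus_width_bits: int, data_rate_gbps: float) -> float:
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(4096, 1.0))  # Fury X, HBM1   -> 512.0 GB/s
print(bandwidth_gb_s(384, 7.0))   # 980 Ti, GDDR5  -> 336.0 GB/s

With a ~50% raw bandwidth advantage on paper, big gains from HBM overclocking do look odd, which is what makes the driver explanation plausible.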
 
Market share doesn't necessarily correlate with how good or bad an architecture/product is. I mean, just look at how many people in this thread are telling this guy to buy an inferior product. Not sure what you're on about with having to wait 4 years.
Oh, you know exactly what I'm on about with having to wait 4 years, as you're the one who keeps telling everyone how badly Kepler and even Maxwell are doing compared to GCN right now. Market share and profits are the only metrics that matter when talking about an architecture. They could've built a Maxwell core in the GF3 era. It would've run like shit compared to the GF3 and wouldn't have sold at all, and NV would probably have gone bust as a result, but it would've been advanced as fuck. And of course you would need to wait till DX12 to get access to that advanced functionality.

It is happening to Maxwell. Look at how the Hawaii chips compare to the 980 and 970 now versus how they compared a year ago when Maxwell launched. A combination of better drivers from AMD and newer games has had quite an impact on the relative performance.
Ok, let's look at how Hawaii chips compare to 980 now and a year ago.

Now (or as close as there is; that's from their Fury X review, and they upgraded to a 390 after that):
[graph: Hh8b.png]

290X is about 20% slower than 980

Edit: I've found a newer one, from their Nano review:
[graph: Lh8b.png]

Same results.

A year ago:
[graph: Ih8b.png]

290X is about 15% slower than 980.
Hey, wait... That means that, contrary to what you're saying, Hawaii actually went down compared to Maxwell... Could it be that, as I've been saying, NOTHING of the sort is happening to Maxwell?

Well, there's another possibility that came to mind: you're actually comparing the 970 and 980 to the 390 and 390X now, instead of to the 290 and 290X, and somehow arriving at the conclusion that these are the same cards. They aren't. And they do compare to Maxwell more favorably, as they are overclocked and tweaked to provide better performance.

the dx12 benches would be ashes, fable, and 3dmark test. i thought there might have been a 4th but i wasnt 100 on it.
Ok, so that's (1) an unreleased alpha version of a game benchmark which isn't indicative of anything really, (2) an unreleased beta version of a game benchmark running on an alpha version of UE4's D3D12 renderer, where NV actually seems to be doing better than AMD, and (3) a synthetic draw-call throughput benchmark which basically measures how fast your CPU is.
Great set of data for such claims you have there.
 
I find Fury X performance really strange. Its scaling is crap considering the extra shaders, added bandwidth and the colour compression tech, yet it sees large gains from memory overclocking. Really, memory overclocking should have little impact given the amount of bandwidth it has, and latency is already lower than GDDR5's, so that should not be an issue either.

It seems to point to drivers that are not getting the best out of the memory system, so I could imagine some gains down the road, but who really knows? If you must buy a top-tier card now, the 980 Ti is the way to go, but don't be surprised if in a year or two the Fury X catches up or even beats it.

I think the fact that it's competitive at 4K in games means I don't want to rule it out completely; plus it comes from a line of GPUs that are somewhat more future-proofed. It will be interesting to revisit in another 6-12 months.
 
Nice, an Asus. They make fine cards. I have their 970, you see, and it runs pretty well. Not the best overclocker, though: you can usually only take it to around 980 level, nothing more (so no 980-beating performance). That's why I wish I had an MSI instead... a little more power feeding it, but much nicer cooling and overclocking potential.

Either way, since you went with an open-air non-reference cooler, enjoy the improved cooling!

How's the case fan situation, by the way?

The case has 3 fans. When the card arrives I'll check the positions and all that; more importantly, I'll check how it behaves. It's a mid-tower, btw (this one).
 
Oh, you know exactly what I'm on about with having to wait 4 years, as you're the one who keeps telling everyone how badly Kepler and even Maxwell are doing compared to GCN right now. Market share and profits are the only metrics that matter when talking about an architecture. They could've built a Maxwell core in the GF3 era. It would've run like shit compared to the GF3 and wouldn't have sold at all, and NV would probably have gone bust as a result, but it would've been advanced as fuck. And of course you would need to wait till DX12 to get access to that advanced functionality.


Ok, let's look at how Hawaii chips compare to 980 now and a year ago.

Now (or as close as there is; that's from their Fury X review, and they upgraded to a 390 after that):
[graph: Hh8b.png]

290X is about 20% slower than 980

Edit: I've found a newer one, from their Nano review:
[graph: Lh8b.png]

Same results.

A year ago:
[graph: Ih8b.png]

290X is about 15% slower than 980.
Hey, wait... That means that, contrary to what you're saying, Hawaii actually went down compared to Maxwell... Could it be that, as I've been saying, NOTHING of the sort is happening to Maxwell?

Well, there's another possibility that came to mind: you're actually comparing the 970 and 980 to the 390 and 390X now, instead of to the 290 and 290X, and somehow arriving at the conclusion that these are the same cards. They aren't. And they do compare to Maxwell more favorably, as they are overclocked and tweaked to provide better performance.


Ok, so that's (1) an unreleased alpha version of a game benchmark which isn't indicative of anything really, (2) an unreleased beta version of a game benchmark running on an alpha version of UE4's D3D12 renderer, where NV actually seems to be doing better than AMD, and (3) a synthetic draw-call throughput benchmark which basically measures how fast your CPU is.
Great set of data for such claims you have there.

Those ComputerBase numbers are a pretty extreme outlier, and no, I was not comparing the 390X to the 980.

980 launch:
[relative performance chart: perfrel_2560.gif]

Nano review:
[relative performance chart: perfrel_2560.gif]


That still includes lots of older games too. When you start looking at individual benchmarks for the vast majority of newer games, the 290X is roughly even.

http://tpucdn.com/reviews/AMD/R9_Nano/images/ryse_2560_1440.gif
http://tpucdn.com/reviews/AMD/R9_Nano/images/farcry4_2560_1440.gif
http://tpucdn.com/reviews/AMD/R9_Nano/images/acu_2560_1440.gif
http://tpucdn.com/reviews/AMD/R9_Nano/images/som_2560_1440.gif
http://www.guru3d.com/index.php?ct=articles&action=file&id=19035
http://gamegpu.ru/images/remote/htt...efront_Beta-test-starwarsbattlefront_2560.jpg
http://gamegpu.ru/images/remote/htt...inbow_Six_Siege_Beta-test-RainbowSix_2560.jpg
http://gamegpu.ru/images/remote/htt...Test_GPU-Action-Mad_Max_-test-MadMax_2560.jpg
http://gamegpu.ru/images/remote/htt..._Black_Ops_III_Beta-test-BlackOps3_2560_2.jpg
http://gamegpu.ru/images/remote/htt...s-Test_GPU-Action-Evolve-test-Evolve_2560.jpg

The 3DMark draw-call test doesn't measure CPU speed; it measures the front-end abilities of the GPU. Nvidia only does better in Fable when looking at the 980 Ti and Titan X; AMD wins the rest. And why wouldn't the benchmarks matter? These things have typically remained pretty constant from beta to final throughout history.
 
Those ComputerBase numbers are a pretty extreme outlier,
Really? I don't think so; they look rather close to other benchmarks out there.
The difference in the TPU test you're referring to is likely due to changes in their benchmarking suite and has nothing to do with Maxwell doing worse on average.
The difference in the CB.de tests where GCN looks slower in newer benches is likely attributable to the same thing, btw.
You're trying to present one case as the only one out there because that case suits your needs. But there are more cases which clearly disprove what you're saying.

The 3DMark draw-call test doesn't measure CPU speed; it measures the front-end abilities of the GPU. Nvidia only does better in Fable when looking at the 980 Ti and Titan X; AMD wins the rest. And why wouldn't the benchmarks matter? These things have typically remained pretty constant from beta to final throughout history.

Sure.

AMD Radeon R9 290X (1x) + AMD FX-9370
DX11 multi-threaded draw calls per second: 685,565
DX11 single-threaded draw calls per second: 692,905
DX12 draw calls per second: 15,113,327

AMD Radeon R9 290X (1x) + Intel Core i7-4790K
DX11 multi-threaded draw calls per second: 1,298,473
DX11 single-threaded draw calls per second: 1,324,209
DX12 draw calls per second: 17,774,164

NVIDIA GeForce GTX 980 (1x) + Intel Core i7-4790K
DX11 multi-threaded draw calls per second: 2,500,965
DX11 single-threaded draw calls per second: 1,501,694
DX12 draw calls per second: 15,815,383

Clearly not a CPU test /sarcasm
The difference is smaller in DX12, true, but it's not like GCN has so big a lead over Maxwell here that we can talk about some fundamental architectural efficiency. I'm actually 90% sure this difference comes from the drivers and not the h/w; AMD had a couple more years to work on this in the Mantle driver.
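
To make that concrete, here are the ratios computed directly from the numbers quoted above (no new data, just arithmetic):

r9_dx11_st, r9_dx12 = 1_324_209, 17_774_164    # 290X + i7-4790K
gtx_dx11_st, gtx_dx12 = 1_501_694, 15_815_383  # 980 + i7-4790K

print(f"290X DX12 vs DX11 ST uplift: {r9_dx12 / r9_dx11_st:.1f}x")   # ~13.4x
print(f"980 DX12 vs DX11 ST uplift: {gtx_dx12 / gtx_dx11_st:.1f}x")  # ~10.5x
print(f"290X lead over 980 in DX12: {r9_dx12 / gtx_dx12 - 1:.0%}")   # ~12%

A ~12% DX12 gap is real, but it's hardly the kind of margin that proves a fundamental architectural deficit.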

As for things staying pretty constant from alpha to final, there is more than enough evidence to the contrary, so I won't even bother countering that with graphs and links.
 
Really? I don't think so; they look rather close to other benchmarks out there.
The difference in the TPU test you're referring to is likely due to changes in their benchmarking suite and has nothing to do with Maxwell doing worse on average.
The difference in the CB.de tests where GCN looks slower in newer benches is likely attributable to the same thing, btw.
You're trying to present one case as the only one out there because that case suits your needs. But there are more cases which clearly disprove what you're saying.



Sure.

AMD Radeon R9 290X (1x) + AMD FX-9370
DX11 multi-threaded draw calls per second: 685,565
DX11 single-threaded draw calls per second: 692,905
DX12 draw calls per second: 15,113,327

AMD Radeon R9 290X (1x) + Intel Core i7-4790K
DX11 multi-threaded draw calls per second: 1,298,473
DX11 single-threaded draw calls per second: 1,324,209
DX12 draw calls per second: 17,774,164

NVIDIA GeForce GTX 980 (1x) + Intel Core i7-4790K
DX11 multi-threaded draw calls per second: 2,500,965
DX11 single-threaded draw calls per second: 1,501,694
DX12 draw calls per second: 15,815,383

Clearly not a CPU test /sarcasm
The difference is smaller in DX12, true, but it's not like GCN has so big a lead over Maxwell here that we can talk about some fundamental architectural efficiency. I'm actually 90% sure this difference comes from the drivers and not the h/w; AMD had a couple more years to work on this in the Mantle driver.

The DX12 results are GPU-limited on the Intel chip; it's not a CPU benchmark. The 290X is still beating a 980. From reading around, it seems likely the front end in Maxwell is mostly unchanged since Tesla.

Did you look at the individual benchmarks I posted? They cover almost every recent game other than pCARS and The Witcher 3. The results can't be argued.
 
The DX12 results are GPU-limited on the Intel chip; it's not a CPU benchmark. The 290X is still beating a 980. From reading around, it seems likely the front end in Maxwell is mostly unchanged since Tesla.
The 290X is a more complex chip, so there's nothing strange in it beating a 980. What is strange is that the 980 beats it quite often.
A simple thought on your part, instead of "reading around", should've told you that the front end of a DX10 Tesla couldn't be used without changes in the DX11+ APIs. But you will surely trust what you want to trust, that much is clear.
As for the API overhead, it doesn't really matter, as these numbers are peak theoreticals which will never be reached on these chips in real-world applications. Still, there is no clear advantage for the 290X over the 980 in this test either.

Did you look at the individual benchmarks I posted? They cover almost every recent game other than pCARS and The Witcher 3. The results can't be argued.
I've looked at all these benchmarks and many more which you probably didn't see, since you say yourself that you've already removed some that don't fit the picture you're painting. There are many, many more like the ones you removed; you simply tried to avoid them because they go against what you're saying:
http://www.techspot.com/articles-info/1061/bench/TWA.png
http://www.techspot.com/articles-info/1061/bench/Metro.png
http://techreport.com/r.x/radeon-r9-fury-x/gtav-fps.gif
http://techreport.com/r.x/titan-x/bf4-fps.gif
http://gamegpu.ru/images/remote/htt...tor-The_Crew_Wild_Run_Beta-test-crew_2560.jpg
http://gamegpu.ru/images/remote/htt...st_GPU-strategy-Heroes_VII-test-MMH7_2560.jpg
http://gamegpu.ru/images/remote/htt..._Ethan_Carter_Redux-test-EthanCarter_2560.jpg
http://gamegpu.ru/images/remote/htt...est_GPU-MMO-Dota_2_Reborn-test-dota2_3840.jpg
http://gamegpu.ru/images/remote/htt...tman_Arkham_Knight__GPU_v_2.0-test-2560_h.jpg
http://gamegpu.ru/images/remote/htt..._Gear_Solid_V_The_Phantom_Pain-test-m2560.png
http://gamegpu.ru/images/remote/htt...-strategy-Total_War_Arena-test-arena_2560.jpg
http://gamegpu.ru/images/remote/htt...ction-ARK_Survival_Evolved-test-arc_1920h.jpg

Even for the games you've posted there are other benchmarks which show a bit of a different picture, for example:
http://techreport.com/r.x/radeon-r9-fury-x/fc4-99th.gif
http://i.picpar.com/ek8b.png
http://www.techspot.com/articles-info/1011/bench/AC.png
http://images.anandtech.com/graphs/graph9059/72494.png
etc.

There is nothing new in a game preferring one architecture over another, and there is no indication of GCN somehow doing better than Maxwell in newer games. What I see on average tells me that nothing has changed since Maxwell's launch.
 
The 290X is a more complex chip, so there's nothing strange in it beating a 980. What is strange is that the 980 beats it quite often.
A simple thought on your part, instead of "reading around", should've told you that the front end of a DX10 Tesla couldn't be used without changes in the DX11+ APIs. But you will surely trust what you want to trust, that much is clear.
As for the API overhead, it doesn't really matter, as these numbers are peak theoreticals which will never be reached on these chips in real-world applications. Still, there is no clear advantage for the 290X over the 980 in this test either.


I've looked at all these benchmarks and many more which you probably didn't see, since you say yourself that you've already removed some that don't fit the picture you're painting. There are many, many more like the ones you removed; you simply tried to avoid them because they go against what you're saying:
http://www.techspot.com/articles-info/1061/bench/TWA.png
http://www.techspot.com/articles-info/1061/bench/Metro.png
http://techreport.com/r.x/radeon-r9-fury-x/gtav-fps.gif
http://techreport.com/r.x/titan-x/bf4-fps.gif
http://gamegpu.ru/images/remote/htt...tor-The_Crew_Wild_Run_Beta-test-crew_2560.jpg
http://gamegpu.ru/images/remote/htt...st_GPU-strategy-Heroes_VII-test-MMH7_2560.jpg
http://gamegpu.ru/images/remote/htt..._Ethan_Carter_Redux-test-EthanCarter_2560.jpg
http://gamegpu.ru/images/remote/htt...est_GPU-MMO-Dota_2_Reborn-test-dota2_3840.jpg
http://gamegpu.ru/images/remote/htt...tman_Arkham_Knight__GPU_v_2.0-test-2560_h.jpg
http://gamegpu.ru/images/remote/htt..._Gear_Solid_V_The_Phantom_Pain-test-m2560.png
http://gamegpu.ru/images/remote/htt...-strategy-Total_War_Arena-test-arena_2560.jpg
http://gamegpu.ru/images/remote/htt...ction-ARK_Survival_Evolved-test-arc_1920h.jpg

Even for the games you've posted there are other benchmarks which show a bit of a different picture, for example:
http://techreport.com/r.x/radeon-r9-fury-x/fc4-99th.gif
http://i.picpar.com/ek8b.png
http://www.techspot.com/articles-info/1011/bench/AC.png
http://images.anandtech.com/graphs/graph9059/72494.png
etc.

There is nothing new in a game preferring one architecture over another, and there is no indication of GCN somehow doing better than Maxwell in newer games. What I see on average tells me that nothing has changed since Maxwell's launch.

Why are you linking a bunch of older games? I originally said newer titles designed around the gen-8 consoles as a baseline. Yes, Maxwell is faster in older titles, most of which are coded for PS360 as a baseline. I guess it matters what you think is more relevant: performance in modern games that are coming now and in the future, or performance in older games and engines that will never be used again.
 
Why are you linking a bunch of older games? I originally said newer titles designed around the gen-8 consoles as a baseline. Yes, Maxwell is faster in older titles, most of which are coded for PS360 as a baseline. I guess it matters what you think is more relevant: performance in modern games that are coming now and in the future, or performance in older games and engines that will never be used again.

All these games launched in 2015. Your personal definition of "older game" doesn't interest me in the slightest, as there is an objective metric: time of release.
And sure, UE4 and Source 2 will never be used again, just like RAGE won't be used again and the FOX Engine won't be used again (hey, you may be right on that one!).
 
So, I can't seem to find the other thread, so I'll just ask here. I've got an AMD 8350, OC'ed, and it's paired with a 280X. I'm staying at 1080p for the foreseeable future, so what makes more sense for best performance at 1080p: the 970 or the 390? Just looking for a low-to-moderate performance bump.
 
All these games launched in 2015. Your personal definition of "older game" doesn't interest me in the slightest, as there is an objective metric: time of release.
And sure, UE4 and Source 2 will never be used again, just like RAGE won't be used again and the FOX Engine won't be used again (hey, you may be right on that one!).

The next time you see the RAGE engine, it will be vastly different. Source 2 will have no relevance outside crappy F2P games that a toaster could run. UE4 is the only engine likely to favor Nvidia going forward. You can argue semantics all you want, but the trend is very clear: looking at newer titles designed from the ground up for the gen-8 consoles paints a very different performance picture when comparing these chips.
 
So, I can't seem to find the other thread, so I'll just ask here. I've got an AMD 8350, OC'ed, and it's paired with a 280X. I'm staying at 1080p for the foreseeable future, so what makes more sense for best performance at 1080p: the 970 or the 390? Just looking for a low-to-moderate performance bump.

With that CPU, a 970 without a doubt; Nvidia's DX11 driver spreads its overhead across threads far better (see the draw-call numbers above), which matters a lot on an FX-8350.
 
Probably been said a bunch already, but wait for Pascal if you're not buying a 980 Ti... even then, best to wait and see.
 