
[HUB] NVIDIA Has a Driver Overhead Problem, GeForce vs Radeon on Low-End CPUs

That still doesn't exactly validate driver overhead. The way he had to set that up, the RAM has less bandwidth than the console (68GB/sec vs 14GB/sec), and the GPU is short on bandwidth as well because he had to run it through the NVMe port with an adapter; much like a bad PCIe riser cable, that could have caused issues for the RTX GPU.

You didn't really watch the video...



People, THE "PROBLEM" IS REAL.
It's a fact.
People already knew years ago that the Nvidia driver was doing more work than the competition on the CPU, and that was not a problem; most of the time it worked as an advantage. But now that the CPU has to prepare work for DOUBLE the ALUs on Ampere, a downside has manifested itself: with so much work being done on the CPU, weaker CPUs may not be able to keep up with the amount of work needed to feed the GPU, hurting performance.

Not everyone will be affected by this, but it's there.
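To make the "weaker CPUs can't keep up" point concrete, here's a tiny back-of-the-envelope sketch (my own made-up numbers, not measurements from the video): a frame can't finish faster than the slower of the CPU side (game work + driver work) and the GPU side, so the same driver cost that is invisible on a fast CPU eats real fps on a slow one.

```python
# Illustrative only: all timings are invented to show the shape of the problem.

def fps(game_cpu_ms, driver_cpu_ms, gpu_ms):
    # A frame is gated by whichever side is slower: CPU (game + driver) or GPU.
    frame_ms = max(game_cpu_ms + driver_cpu_ms, gpu_ms)
    return 1000.0 / frame_ms

# Fast CPU: 1.5 ms of extra driver work changes nothing, the GPU is still the limit.
print(fps(6.0, 2.0, 10.0), fps(6.0, 0.5, 10.0))    # ~100 fps either way

# Slow CPU: the same 1.5 ms of extra driver work now costs ~7 fps.
print(fps(14.0, 2.0, 10.0), fps(14.0, 0.5, 10.0))  # ~62.5 vs ~69 fps
```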
 

PhoenixTank

Member
And most modern games are developed on AMD hardware first, because that makes writing the Xbox/PS code later on easier, or vice versa: it's easier to port from Xbox/PS to AMD hardware first.

Also, AMD provides PC developers with tools for hardware tracking and a tool for analysing memory usage. Optimisation and fixing are done on AMD hardware, and later it's checked whether it works well on GeForce. If it works but with poorer performance, the developers can always choose to optimise further for GeForce, but this is not something that every dev will do.
I was on board until about here.
Citation needed regarding development hardware choices. Absolute newest stuff, maybe?

I swear a few/several months ago VFXVeteran was telling us all that the industry by & large sticks with Nvidia as the default. In one of the Ampere threads maybe? It has been a while.
 

Bluntman

Member
I was on board until about here.
Citation needed regarding development hardware choices. Absolute newest stuff, maybe?

I swear a few/several months ago VFXVeteran was telling us all that the industry by & large sticks with Nvidia as the default. In one of the Ampere threads maybe? It has been a while.

It's easier to start the porting process with AMD because of the built-in hardware tracking profiler and the full memory analysing tool. The reason PC AMD GPUs have a tracker built into the hardware is that Sony and Microsoft requested it, and it trickled down to the PC GPUs as well.

Nvidia doesn't have that, and they don't release their hardware's documentation either. It's easier to start with AMD hardware so you can map and root out issues in the code.
 
Yeah, let's ignore the clear advantage NVIDIA GPUs have over the rest and find a silly scenario in which they perform worse.
Silly scenario? What are you on? This doesn't just affect the top tier cards, you know. I'm still rocking Devil's Canyon... and I'm not ready to upgrade just yet. Right now I have a 1070; if I upgrade to an RTX 3070, or even a 3060, which is not an unreasonable upgrade at all, this issue would immediately punch me in the dick in DX12 titles (and possibly Vulkan too). People on mid-range hardware often keep their CPUs through multiple GPU upgrades.
 

Rbk_3

Member
Silly scenario? What are you on? This doesn't just affect the top tier cards, you know. I'm still rocking Devil's Canyon... and I'm not ready to upgrade just yet. Right now I have a 1070; if I upgrade to an RTX 3070, or even a 3060, which is not an unreasonable upgrade at all, this issue would immediately punch me in the dick in DX12 titles (and possibly Vulkan too). People on mid-range hardware often keep their CPUs through multiple GPU upgrades.

This dude really running a 7 year old, 4 core CPU and complaining it will bottleneck a current mid range GPU? :messenger_tears_of_joy:
 
Last edited:
This dude really running a 7 year old, 4 core CPU and complaining it will bottleneck a current mid range GPU? :messenger_tears_of_joy:
Wouldn't be a problem with a Radeon card...
Also friendly reminder that until 2017 4C8T i7's were the norm and IPC improvements were basically non-existent. Hell, it was still competitive with Zen 2 in single core performance. I might as well have said 7700K for all the difference it'd make.
 

VFXVeteran

Banned
It's easier to start the porting process with AMD because of the built-in hardware tracking profiler and the full memory analysing tool. The reason PC AMD GPUs have a tracker built into the hardware is that Sony and Microsoft requested it, and it trickled down to the PC GPUs as well.

Nvidia doesn't have that, and they don't release their hardware's documentation either. It's easier to start with AMD hardware so you can map and root out issues in the code.
I've never heard of the industry using AMD hardware to make PC games. Where did you get that from? Nearly every company uses Nvidia boards for their development *IF* it's developed for a PC first and then ported to consoles. Film uses nothing but Nvidia, and even my company at Lockheed Martin uses Nvidia. Yes, I did use AMD at some point before because I asked for it personally, due to our software crashing with their old OpenGL pipeline when trying to debug using the Nvidia debugger, but since that was fixed in a patch they released to us, we use Nvidia. We love that their SDK is incorporated into Visual Studio. We also use CUDA for general computing.
 
Last edited:

Marlenus

Member
This dude really running a 7 year old, 4 core CPU and complaining it will bottleneck a current mid range GPU? :messenger_tears_of_joy:

It is a question of degree. It is obvious that an older CPU will bottleneck a newer GPU, but knowing by how much is important information for deciding how much to spend to get the best bang for your buck. Even with a CPU bottleneck, a GPU upgrade is usually far more cost effective than a CPU upgrade, especially if the latter means a new motherboard and RAM as well as the CPU.

Are some people being willfully obtuse? Because this stuff is not that unfathomable.
 

vanguardian1

poor, homeless and tasteless
I've never heard of the industry using AMD hardware to make PC games. Where did you get that from? Nearly every company uses Nvidia boards for their development *IF* it's developed for a PC first and then ported to consoles. Film uses nothing but Nvidia, and even my company at Lockheed Martin uses Nvidia. Yes, I did use AMD at some point before because I asked for it personally, due to our software crashing with their old OpenGL pipeline when trying to debug using the Nvidia debugger, but since that was fixed in a patch they released to us, we use Nvidia. We love that their SDK is incorporated into Visual Studio. We also use CUDA for general computing.
Just to clarify: where the PC version of a game is the priority (especially when it's exclusive), an Nvidia focus is the norm. Where console is the priority, using AMD is normal (though not as dominant as Nvidia is for PC-only games), and has been for several years now, since the AMD-based PS4/X1 became the focus, and that continues with the PS5/X1X.
 
Last edited:

VFXVeteran

Banned
Just to clarify: where the PC version of a game is the priority (especially when it's exclusive), an Nvidia focus is the norm. Where console is the priority, using AMD is normal (though not as dominant as Nvidia is for PC-only games), and has been for several years now, since the AMD-based PS4/X1 became the focus, and that continues with the PS5/X1X.
The response to the other poster was correcting his statement as if it were fact that all game companies use AMD GPUs for their development of the PC iteration of a game. That's simply not true. And basically you just concurred with my statement as I prefaced "if it were developed for a PC first".
 
Last edited:

vanguardian1

poor, homeless and tasteless
The response to the other poster was correcting his statement as if it were fact that all game companies use AMD GPUs for their development of the PC iteration of a game. That's simply not true. And basically you just concurred with my statement as I prefaced "if it were developed for a PC first".
Then I apologize for misreading your statement. I do that on occasion, sorry. :)
 
Forza 4 seems to be an extreme example:

fzLingG.jpg
 

sertopico

Member
Yeah, at 1080p Medium settings... aka the settings not a single 3070 owner runs their games at.

In real world use at 2560x1440, an RTX 3070 owner lowers a setting or two and voila, 60 fps locked, while a 5700 XT owner is already dreaming about a new GPU upgrade.

mCoxfwV.jpg
You forgot to add DLSS to the equation, which actually renders internally at a lower resolution than the one chosen. This means that the CPU will have to work more to keep pace with the GPU. That is why in games which use RT and DLSS you also need a powerful CPU to take full advantage of those features.
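To put some rough, purely hypothetical numbers on that (DLSS Quality at 1440p renders internally at about 1707x960, so roughly 44% of the native pixel count; the frame times below are invented):

```python
# Hypothetical numbers, only to show why DLSS shifts the limit onto the CPU.
cpu_ms = 14.0                      # CPU-side frame cost, roughly resolution independent
native_gpu_ms = 16.0               # GPU-bound at ~62 fps when rendering native 1440p
dlss_gpu_ms = native_gpu_ms * 0.55 # assume imperfect scaling with the lower pixel count

print(1000 / max(cpu_ms, native_gpu_ms))  # ~62 fps: the GPU is the limit
print(1000 / max(cpu_ms, dlss_gpu_ms))    # ~71 fps: now the CPU (and driver) is the limit
```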

Anyway, isn't Resizable BAR also a thing that will reduce this gap? Problem is, the AMD solution is implemented in hardware, while Nvidia's/Intel's, on the other hand, is handled in software (GPU firmware + drivers). If you understand some German, there is a video from Igor's Lab that talks about it.
 
Last edited:

Bluntman

Member
The response to the other poster was correcting his statement as if it were fact that all game companies use AMD GPUs for their development of the PC iteration of a game. That's simply not true. And basically you just concurred with my statement as I prefaced "if it were developed for a PC first".

Okay, I'll be more clear then. Anyone who cares about getting the most performance out of their own engine will use AMD hardware because of the toolset they provide. Without the hardware tracking profiler and full memory analysing tool you won't ever see the performance problems that are lurking there. After fixing those and knowing where the problems lie exactly, you can take that and do the optimisation on GeForce as well if you care enough.

But I'll cut this short.

Please someone do these kinds of tests with the Strange Brigade as it's one of the few games where I know the devs actually did this. That game is properly profiled, analysed and optimised with AMD and then based on that properly optimised for GeForce as well.

This will prove that this driver overhead theory is nonsense. If it doesn't prove it, I buy you all beers.
 
Last edited:

Bluntman

Member
(I obviously cheat here with the beers, because I know full well that explicit APIs like DX12 don't have kernel drivers; they are basically just a runtime environment with a shader compiler, so there physically can't be more than a very, very minimal overhead on the driver's side)
 

VFXVeteran

Banned
Okay, I'll be more clear then. Anyone who cares about getting the most performance out of their own engine will use AMD hardware because of the toolset they provide. Without the hardware tracking profiler and full memory analysing tool you won't ever see the performance problems that are lurking there. After fixing those and knowing where the problems lie exactly, you can take that and do the optimisation on GeForce as well if you care enough.

But I'll cut this short.

Please someone do these kinds of tests with the Strange Brigade as it's one of the few games where I know the devs actually did this. That game is properly profiled, analysed and optimised with AMD and then based on that properly optimised for GeForce as well.

This will prove that this driver overhead theory is nonsense. If it doesn't prove it, I buy you all beers.
How is AMD's hardware toolset superior to Nvidia's? And you are still making sweeping statements. I will assume you are talking about having a console as the top priority for the game and then releasing a port to the PC. However, most games begin with a PC hardware spec due to its agnostic nature, the fact that it has much more powerful hardware where artists can realize their vision (i.e. larger texture sizes, more accurate AO, etc.) and well fleshed-out drivers, and then they scale down to consoles. But it is true that some games target the console hardware first, and in that situation I can agree with you that having AMD hardware on PC is paramount, then porting to PC later on.
 
Interesting. Even 3700X is affected by 18%. Where is this data from?


The promise of these next generation APIs was better performance on lower end CPUs, remember?
 
Last edited:
Oh, I do! Looks like it was true only for systems with AMD GPUs. :messenger_tears_of_joy:

The gains from the change to DX12 and Vulkan were higher on AMD GPUs because AMD's DX11 driver had worse performance. But we are talking about Nvidia's GPUs here. Forget that AMD (and Intel) exist, look only at the GeForce results in those benchmarks. Now answer: why are the top tier GPUs regressing in performance in some scenarios?


2OEuVdP.png
 

Marlenus

Member
Okay, I'll be more clear then. Anyone who cares about getting the most performance out of their own engine will use AMD hardware because of the toolset they provide. Without the hardware tracking profiler and full memory analysing tool you won't ever see the performance problems that are lurking there. After fixing those and knowing where the problems lie exactly, you can take that and do the optimisation on GeForce as well if you care enough.

But I'll cut this short.

Please someone do these kinds of tests with the Strange Brigade as it's one of the few games where I know the devs actually did this. That game is properly profiled, analysed and optimised with AMD and then based on that properly optimised for GeForce as well.

This will prove that this driver overhead theory is nonsense. If it doesn't prove it, I buy you all beers.

Proof: select the FX 4300 as the CPU, then select the 1080Ti, 1080 and Vega 64 GPUs.

In DX12 Ultra at 1080p the 1080Ti and 1080 are GPU bound on NV hardware at 128 FPS but Vega 64 hits 135 FPS. Up it to 1440p and the 1080Ti gets 122 FPS and Vega 64 drops to 100.

In Vulkan Ultra the 1080Ti is fastest at each resolution.
 
That site ran the CPU test with 8xMSAA and was GPU capped on the 6900 XT from the 3600X and up, so we can't even see the real difference between Nvidia and AMD here. [the gap would be wider]

nvm: was looking at the Horizon 4 benches on that site when I quoted you :messenger_grinning_sweat:
 
Last edited:

Md Ray

Member
Curiously, if we take RDR2 we'll see the AMD GPU performing better in Vulkan, but Nvidia better in DX12.
I went from a GTX 970 to an RTX 3070. For me, both of these GPUs are faster in Vulkan in RDR2. Gamers Nexus' tests also show that Ampere is faster in Vulkan than DX12 in that game.

And there's a slight perf regression on RDNA 2 with Vulkan.
 
Last edited:

yamaci17

Member
This is a problem they really should fix... I see people disregarding it as "don't match a weak CPU with muh strong GPU".

Oh, so you people thought a 30% increase over Zen 2 makes Zen 3 a whole different level?

Nope:



a 5600x with a 3070. High end CPU, huh?

Can't lock to 60 FPS. Bottleneck occurs. Below 60 FPS. Overhead. Got it?

There are and will be tons of CPU bound games like this. No CPU is safe and secure from this issue, unless you're able to shell out extra money for a new CPU every 2 years.

Due to this overhead, a potential RX 6700XT will perform better and probably lock to 60 fps with a 5600x.
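And on that "30% increase" point, some quick made-up arithmetic: a 30% faster CPU only buys you about 30% more fps in a CPU-bound scene, so if you start well below 60 it still might not get you over the line.

```python
# Invented frame times, just to show the scale of a generational CPU uplift.
zen2_cpu_ms = 23.0               # hypothetical CPU-limited frame time (~43 fps)
zen3_cpu_ms = zen2_cpu_ms / 1.3  # ~17.7 ms (~56 fps): much better, still not a locked 60
print(1000 / zen2_cpu_ms, 1000 / zen3_cpu_ms)
```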
 

yamaci17

Member
What? In Watch Dogs Legion with RT the game hits the CPU hard as fuck, drops to ~40 something on a 3600 while driving. Or try Crysis Remastered with the detail setting above High (draw distance) and with RT, even a 5600X drops below 60 fps and that's entirely on the CPU. MANY games are CPU limited - fucking Far Cry New Dawn drops below 60 fps on a 5600X and the GPU is bored...
Finally, a sensible, actual experience-driven post.

The majority of games have become horribly CPU bound in recent years. This is a serious issue they really need to fix.

I bet the 6700 XT, even with RT enabled, will have much, much better CPU bound performance when paired with a 5600X.

It was already weird seeing all CPUs struggle in Legion and Cyberpunk when RT is enabled (ray tracing is extra taxing on the CPU, as a reminder). I just thought it was mostly due to horrible optimization (it still is, to an extent). But the bigger offender here seems to be Nvidia's drivers. 20% worse performance is horrible.

Either that, or you will be happy to play at 45 fps at 1440p with 99% usage on an RTX 3070 and forget about 60 FPS.
 
Last edited:

Bluntman

Member
Proof: select the FX 4300 as the CPU, then select the 1080Ti, 1080 and Vega 64 GPUs.

In DX12 Ultra at 1080p the 1080Ti and 1080 are GPU bound on NV hardware at 128 FPS but Vega 64 hits 135 FPS. Up it to 1440p and the 1080Ti gets 122 FPS and Vega 64 drops to 100.

In Vulkan Ultra the 1080Ti is fastest at each resolution.

Thanks!

Also these benchmarks were done when the game came out I presume? It would be even more of a proof to see the actual, fully updated version as even more Nvidia optimisation came post-launch.
 
Last edited:
Thanks!

Also these benchmarks were done when the game came out I presume? It would be even more of a proof to see the actual, fully updated version as even more Nvidia optimisation came post-launch.

That website is not reliable. Word is they approximate most of the results. Which is plausible: the amount of work required to test all those CPUs and GPUs spanning multiple generations would take a month. All in all, don't use that website for anything. And also don't use sketchy YouTube videos that just have some random footage or slides and are apparently comparing the most unusual components. Most of those channels are fake - low effort videos with invented or stolen results for a quick buck on YouTube.
 

Bluntman

Member
How is AMD's hardware toolset superior to Nvidia's? And you are still making sweeping statements. I will assume you are talking about having a console as the top priority for the game and then releasing a port to the PC. However, most games begin with a PC hardware spec due to its agnostic nature, the fact that it has much more powerful hardware where artists can realize their vision (i.e. larger texture sizes, more accurate AO, etc.) and well fleshed-out drivers, and then they scale down to consoles. But it is true that some games target the console hardware first, and in that situation I can agree with you that having AMD hardware on PC is paramount, then porting to PC later on.

It's not that AMD's toolset is superior, it's that Nvidia doesn't have a profiler that uses a built-in hardware tracker, and doesn't have a full memory analysing tool either. So those tools are not worse on the Nvidia side, they are non-existent.
 

Bluntman

Member
That website is not reliable. Word is they approximate most of the results. Which is plausible: the amount of work required to test all those CPUs and GPUs spanning multiple generations would take a month. All in all, don't use that website for anything. And also don't use sketchy YouTube videos that just have some random footage or slides and are apparently comparing the most unusual components. Most of those channels are fake - low effort videos with invented or stolen results for a quick buck on YouTube.

I see. Then I'm still waiting for someone to do (pretty please) a benchmark with the latest version of Strange Brigade using an older CPU and newer GPUs to put this driver overhead nonsense behind us.

I don't know if we are talking about the same thing, but I agree that people should stop watching Hardware Unboxed, Linus, TechJesus and whatever hardware YouTube celebs are out there, because they don't have a clue what the fuck they are talking about.
 
Last edited:

yamaci17

Member
I see. Then I'm still waiting for someone to do (pretty please) a benchmark with the latest version of Strange Brigade using an older CPU and newer GPUs to put this driver overhead nonsense behind us.

I don't know if we are talking about the same thing, but I agree that people should stop watching Hardware Unboxed, Linus, TechJesus and whatever hardware YouTube celebs are out there, because they don't have a clue what the fuck they are talking about.
With what settings? I have a 2700X and a 3070 and would be happy to assist (the 2700X is a low end CPU too: everyone is happy to call the 2600X low end, the 2700X has the exact same clocks and IPC as the 2600X, and the extra cores don't matter for games yet)

jk jk with heavily oc'ed ram, i'm having a good time, surprised actually!!



 

Bluntman

Member
With what settings? I have a 2700X and a 3070 and would be happy to assist (the 2700X is a low end CPU too: everyone is happy to call the 2600X low end, the 2700X has the exact same clocks and IPC as the 2600X, and the extra cores don't matter for games yet)

jk jk with heavily oc'ed ram, i'm having a good time, surprised actually!!





Well, the problem is, you would also need an AMD card to compare :messenger_grinning_sweat:
 

yamaci17

Member
Well, the problem is, you would also need an AMD card to compare :messenger_grinning_sweat:
Haha, if I can grab a 6700 XT when it comes into stock, I will experiment with this.

If it really gets "more" fps than the 3070 with my setup, I will sell my 3070 for a profit and carry on with the AMD card.

I'm already content with the performance I'm getting, and I enjoy RTX, but CPU bound situations are very frequent, especially with low IPC CPUs such as Zen/Zen+ (Zen 2 to an extent; only Zen 3 is an actual improvement, for me)
 

Marlenus

Member
Thanks!

Also these benchmarks were done when the game came out I presume? It would be even more of a proof to see the actual, fully updated version as even more Nvidia optimisation came post-launch.

Igors Lab have tested with Horizon Zero Dawn. They did it a bit differently by sticking with a 5800X and disabling cores.

They think it may be async compute related but say more testing is needed.

As far as Strange Brigade goes, seeing how it scales with cores is hard if you don't trust gamegpu results.
 

Bluntman

Member
Igors Lab have tested with Horizon Zero Dawn. They did it a bit differently by sticking with a 5800X and disabling cores.

They think it may be async compute related but say more testing is needed.

As far as Strange Brigade goes, seeing how it scales with cores is hard if you don't trust gamegpu results.

Thanks for the link!

He is basically saying what I've been saying. It's not driver overhead but a software issue.

His line of thinking seems correct. Development and optimisation are done on AMD hardware (again, because Nvidia has no tools to properly do this), and then they didn't bother to tune it specifically for how Nvidia hardware works.
 

yamaci17

Member
Exposing this is good, but I have very low hopes that Nvidia will actually fix it. Maybe they "can" remedy it, but I bet they "won't". They will slap a new hardware scheduler or something on their next-gen cards and call it a day... Smh
 

Bluntman

Member
So they still have no fucking clue what they're talking about.

For the 100th time then. On an explicit API (DX12, Vulkan) there is no fucking kernel driver. Nvidia can't do anything about this via driver updates because this is not a driver problem; the driver doesn't have a clue what the hardware is doing on an explicit API. The goddamned software (the game) manages the hardware, including the thread scheduling.

The only ones who can fix this are the game devs themselves, because this is a game engine issue.
 
Last edited:
So they still have no fucking clue what they're talking about.

For the 100th time then. On an explicit API (DX12, Vulkan) there is no fucking kernel driver. Nvidia can't do anything about this via driver updates because this is not a driver problem; the driver doesn't have a clue what the hardware is doing on an explicit API. The goddamned software (the game) manages the hardware, including the thread scheduling.

The only ones who can fix this are the game devs themselves, because this is a game engine issue.
The irony here is palpable. You're completely and utterly wrong. For starters DX12 is not bare-metal, it's closer to it than DX11, but it isn't and it can't be. Secondly WDDM handles GPU scheduling, not the game...unless hardware accelerated GPU scheduling is enabled, in which case hardware on the graphics card itself does it.
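If anyone wants to check which scheduling path their own box is on: as far as I know, the HAGS toggle that Windows exposes under Graphics settings is stored in the registry as the HwSchMode value (2 = enabled, 1 = disabled). Treat the key name and values as my assumption, this is just a quick sketch, not official documentation.

```python
# Quick Windows-only check of the Hardware-Accelerated GPU Scheduling toggle.
# Assumption: the setting is stored as HwSchMode (2 = on, 1 = off) under this key.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\GraphicsDrivers"
try:
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY) as key:
        value, _ = winreg.QueryValueEx(key, "HwSchMode")
        print("HAGS enabled" if value == 2 else "HAGS disabled / OS-managed scheduling")
except FileNotFoundError:
    print("HwSchMode not present (older OS/driver, so WDDM software scheduling)")
```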
 

Bluntman

Member
The irony here is palpable. You're completely and utterly wrong. For starters DX12 is not bare-metal, it's closer to it than DX11, but it isn't and it can't be. Secondly WDDM handles GPU scheduling, not the game...unless hardware accelerated GPU scheduling is enabled, in which case hardware on the graphics card itself does it.
You're right on this, I stand corrected.

But it doesn't really matter in this case, because this still isn't a driver issue.
 

Armorian

Banned
Finally, a sensible, actual experience-driven post.

The majority of games have become horribly CPU bound in recent years. This is a serious issue they really need to fix.

I bet the 6700 XT, even with RT enabled, will have much, much better CPU bound performance when paired with a 5600X.

It was already weird seeing all CPUs struggle in Legion and Cyberpunk when RT is enabled (ray tracing is extra taxing on the CPU, as a reminder). I just thought it was mostly due to horrible optimization (it still is, to an extent). But the bigger offender here seems to be Nvidia's drivers. 20% worse performance is horrible.

Either that, or you will be happy to play at 45 fps at 1440p with 99% usage on an RTX 3070 and forget about 60 FPS.

I jumped ship from an R9 290 to a 970 because my poor 2600K was wasted by AMD's DX11 drivers in CPU-heavy moments in games (plus the card was HOT and LOUD). For everyone - check out the first village in FC4, it still kills CPUs after all these years :messenger_grinning_smiling: Now the roles are reversed in DX12 titles; my 5600X keeps me above this, for now...
 

yamaci17

Member
I jumped ship from an R9 290 to a 970 because my poor 2600K was wasted by AMD's DX11 drivers in CPU-heavy moments in games (plus the card was HOT and LOUD). For everyone - check out the first village in FC4, it still kills CPUs after all these years :messenger_grinning_smiling: Now the roles are reversed in DX12 titles; my 5600X keeps me above this, for now...
You may give Cyberpunk 2077 a shot: with RT enabled and the high crowd density setting, it will be very hard to keep a locked 60 in busy streets.





Drops to... 40-45 range...

Funny thing is, we can't compare it with AMD because AMD cards' RT performance seems very bad, so they will run into a GPU bound at those FPS values anyway...

I tried to find some WD: Legion videos, but couldn't, sadly; it is usually compared with the 3080.

Someone should compare RTX 3070 + 5600X vs RX 6800 XT + 5600X in Watch Dogs Legion in CPU bound places, so that we can see a bigger picture.

Resolutions must be 720p for both cards so that neither card runs into a GPU bottleneck at 45-50 fps (the 3070 can already achieve this with DLSS Quality).
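To spell out the rule of thumb I'm using when reading these comparisons (my own framing, not from any of the videos): if fps is under the target while GPU usage sits well below ~95%, the GPU is being starved by the CPU/driver; if the GPU is pinned near 99%, the result tells you nothing about driver overhead.

```python
# Crude heuristic for interpreting CPU-vs-GPU bound results in these tests.
def likely_bottleneck(fps, target_fps, gpu_usage_pct):
    if fps < target_fps and gpu_usage_pct < 95:
        return "CPU/driver bound"
    if gpu_usage_pct >= 97:
        return "GPU bound"
    return "mixed / inconclusive"

print(likely_bottleneck(fps=45, target_fps=60, gpu_usage_pct=70))  # CPU/driver bound
```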
 

ethomaz

Banned
So finally people started to understand this is not a driver issue? The joke is taking what that biased YouTuber says as right... he has no idea what he is testing.
 

yamaci17

Member
Some people will always say it's not nVidia's fault... The lengths people go to to defend nVidia is crazy to me. If they had to take a bullet for either Jensen Huang or their mother, it looks like their mother would die.
Even though I like Nvidia's software more and find it more useful, and even being an actual fanboy of the Nvidia brand, even I cannot comprehend some people justifying Nvidia on this.

This is not a low end CPU issue. It can happen with any CPU, with any configuration.

Not everyone has to be content with GPU bound 45 fps in Cyberpunk and Legion. Someone will push their DLSS setting one tick further to get that 60 fps, but with the current state of Nvidia's situation, that does not seem plausible at all. RTX games are running into huge CPU bottlenecks, and funnily, it's Nvidia itself that promoted this ray tracing tech and tried to market it.

Honestly, I will give it 3-6 months, and if it's not fixed, I will sell my Nvidia GPU and change to an AMD card. That's how serious this implication is for me. They will lose a lot of customers if reviewers keep exposing it more, especially in the case of even higher end CPUs (I hope they do). A 5600X + 3070 at 1440p DLSS Quality mode would do the trick for heavy CPU bottlenecking and would prove a big point.
 

onunnuno

Neo Member
This is not a low end CPU issue. It can happen with any CPU, with any configuration.

This is the crazy part about this thread to me. Either people say "this is normal" or say "why would you match a $400 GPU with a $300 CPU? The only logical thing is to match a $700 CPU with any Nvidia GPU". Maybe some people don't swim in money? And having a better GPU should lead you to better performance, not the other way around. Or maybe Nvidia just wants to sell CPUs 🤷‍♂️
 

Rikkori

Member
Personally I keep my CPU for 3-4 GPU generations easily, so this is very important. I bought mine (an i7 6800K) just before Polaris launched and I'm now on RDNA 2, with no CPU upgrade plans until 2023 most likely (AM5 + DDR5).
 