
[HUB] NVIDIA Has a Driver Overhead Problem, GeForce vs Radeon on Low-End CPUs

Ascend

Member
Hardware Unboxed at it again...

And then when Nvidia next tells them to fuck off due to their disingenuous takes they will start crying and plead innocence.
Exactly! Because nVidia should always be free of any criticism whatsoever!

What is it with all the Nvidia stans in here? This is clearly a Nvidia driver issue, which they should fix.
nVidia is always the best and should never be challenged or questioned. AMD is TRASH! TRASH, I TELL YOU.

kyle mooney trash GIF by Saturday Night Live


Seems stupid on its face to point this performance oddity out.

Also seems relevant because the 3600 is a popular CPU (as were the previous 6-core Ryzens).

For the haters; the worst that can happen is NVIDIA looks into it and identifies / fixes an overhead or other driver quirk and gamers benefit from a better product in the future.
nVidia has nothing to fix! Everything they do and release is already perfect!

If you watch the video you'll see he uses more than one game. He also demonstrates the increased Nvidia driver overhead when using an Intel CPU towards the end of the video.
It's not allowed to agree with Hardware Unboxed! You shall take the side of nVidia, ALWAYS!



Not news at all. nVidia dropped their hardware scheduler a long time back, since Maxwell if I'm not mistaken. This is what allowed them the superior power consumption over AMD, but that came at a cost. All the scheduling needs to be done on the CPU. With DX12, the situation basically flipped between AMD and nVidia.

In DX11, AMD had high overhead because their driver basically limited everything to one thread, meaning CPU bottlenecks were a lot more prevalent on AMD cards, since nVidia could use more threads with command lists.

DX12 is closer to the hardware, and everything not done by hardware is done in software. This means that AMD's hardware scheduler that was previously a disadvantage is now an advantage for DX12.

AMD wasn't pushing low level APIs for nothing. Everyone praises nVidia but has no idea what is happening behind the scenes.
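
To make the overhead argument concrete, here's a toy frame-time model (just a sketch in Python, with made-up per-frame costs, not measurements): the frame rate is set by whichever of the CPU side (game work plus driver work) or the GPU takes longer, so extra driver work on the CPU only shows up once the CPU is the limiter.

```python
# Toy model: frame time is set by the slower of the CPU side
# (game work + driver work) and the GPU. All costs are hypothetical.

def fps(game_cpu_ms, driver_cpu_ms, gpu_ms):
    frame_ms = max(game_cpu_ms + driver_cpu_ms, gpu_ms)
    return 1000.0 / frame_ms

scenarios = [
    # (label, game CPU ms, driver CPU ms, GPU ms)
    ("slow CPU, thin driver",  12.0, 1.0, 8.0),
    ("slow CPU, heavy driver", 12.0, 4.0, 8.0),
    ("fast CPU, thin driver",   4.0, 1.0, 8.0),
    ("fast CPU, heavy driver",  4.0, 4.0, 8.0),
]

for name, game, driver, gpu in scenarios:
    print(f"{name:24s} -> {fps(game, driver, gpu):6.1f} fps")

# The same 3 ms of extra driver work costs ~14 fps on the slow CPU
# and nothing on the fast one, where the GPU still sets the pace.
```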
 

Techies

Member
Please, don't tell me that 2600X + 5700 XT beating 2600X + 3070 is from "I'd expect that" territory.

The weak part here is that only one game has been demonstrated to have that issue.



What a lame excuse...


Let's pretend the 2600X is a bad CPU, shall we?

The 2600X will bottleneck a 3070.
The 2600X has a lower single-core rating than the Intel 9th gen. I'm running a 9700K with a 1080 Ti and I pass its bottleneck by just like 1%.
A 2600X is a CPU for a 1070 Ti/1080 or equivalent GPU.
 
Last edited:

Ascend

Member
But yeah, for Vulkan and DX12 I noticed higher driver overhead for Nvidia a few years ago in benchmarks, but so far no one was talking about this much...
If it was AMD, it would have been a daily discussion. But we have yet another point that proves nVidia gets a pass for things that AMD doesn't. We have the same drones still defending nVidia here, while it's the user that will ultimately get degraded performance in certain scenarios... 🤷‍♂️

Hardware unboxed still searching for that radeon performance.
CPU limited. I play at 4K so not gonna happen... And 1080p low settings? Yeah, that's why I spent 2k USD. I'd better have performance where it matters rather than in some stupid try-hard scenario.
white family football GIF


That should be the signature .gif for this thread.
 
Last edited:

Marlenus

Member
The 2600X will bottleneck a 3070.
The 2600X has a lower single-core rating than the Intel 9th gen. I'm running a 9700K with a 1080 Ti and I pass its bottleneck by just like 1%.
A 2600X is a CPU for a 1070 Ti/1080 or equivalent GPU.

People will often buy a CPU, mobo and RAM platform they expect to last for several years with just GPU upgrades, so it is common to end up with a mismatched CPU/GPU pair by the time you upgrade the entire PC. Knowing that in CPU-bound scenes in DX12 titles the Radeon card will get higher fps is really useful.

The 6700 XT is coming out, and for someone who targets 1080p 120Hz this data might swing them towards it rather than the 3070 (availability notwithstanding) if they are in that mid-platform-life GPU upgrade phase.
 

Shai-Tan

Banned
People will often buy a CPU, mobo and RAM platform they expect to last for several years with just GPU upgrades, so it is common to end up with a mismatched CPU/GPU pair by the time you upgrade the entire PC. Knowing that in CPU-bound scenes in DX12 titles the Radeon card will get higher fps is really useful.

The 6700 XT is coming out, and for someone who targets 1080p 120Hz this data might swing them towards it rather than the 3070 (availability notwithstanding) if they are in that mid-platform-life GPU upgrade phase.
Yeah, I know my current AMD CPU is not ideal, but I didn't want to upgrade into the end of the line that's current on AMD and Intel. Many people would need a new mobo just to do that if they have the wrong B450 or are on the 300 series... then the next new CPU needs a new mobo and memory on top of that.
 
The 2600X will bottleneck a 3070.
The 2600X has a lower single-core rating than the Intel 9th gen. I'm running a 9700K with a 1080 Ti and I pass its bottleneck by just like 1%.
A 2600X is a CPU for a 1070 Ti/1080 or equivalent GPU.

You're just confirming the findings.
Nvidia GPUs demand stronger CPUs to perform as they should, while a weaker GPU from the competition will perform better with the same CPU.
 
Last edited:

Xdrive05

Member
I have a Ryzen 2600 that I run at 4.2Ghz. I'm wanting to get a 3060 for 1080P gaming. From youtube benchies, it looks like they're not a bad match, but you will get CPU bottlenecking in a few games especially. GPU intensive games (like Horizon: ZD, RDR, etc.) will typically NOT be bottlenecked by a 2600 at 1080P, even at a mere 3.7Ghz. But several games will have the 3060 hovering around 80% utilization, and a couple extremely CPU heavy hitters will have it around only 60%-70% utilization.

Presumably if Nvidia had less driver overhead, it would not be bottlenecked at all, or not nearly as much. Hope they can do something about that.

Edit:



Here's an example. Graphically intensive games have the 3060 being fed adequately by a 2600, while more e-sportsy titles are being CPU bound to some degree. Interesting that the OC to 4.1Ghz here is squeezing a proportional amount of utilization out of the GPU! (between 5% and 10% more from that slight OC) Far Cry: New Dawn gives a whopping 10%-15% GPU utilization boost just from that slightly faster CPU clock!

People on older CPUs really ought to overclock if you can do it safely and you have a modern Nvidia card that will benefit from it.
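
For what it's worth, those utilization numbers line up with a simple back-of-the-envelope model (sketch below, with hypothetical frame costs): when the CPU is the limiter, reported GPU utilization is roughly the GPU's share of the frame time, so shaving CPU time with an overclock pushes it up.

```python
# Sketch: GPU utilisation when CPU-bound is roughly gpu_ms / frame_ms.
# Frame costs below are hypothetical, not measured.

def gpu_utilisation(cpu_ms, gpu_ms):
    frame_ms = max(cpu_ms, gpu_ms)
    return min(1.0, gpu_ms / frame_ms)

base_clock, oc_clock = 3.7, 4.2     # GHz, the clocks mentioned above
cpu_ms_stock = 11.0                 # hypothetical CPU-side cost per frame
gpu_ms = 8.5                        # hypothetical GPU cost per frame

# Assume CPU frame time scales inversely with clock (optimistic).
cpu_ms_oc = cpu_ms_stock * base_clock / oc_clock

for label, cpu_ms in [("stock 3.7GHz", cpu_ms_stock), ("OC 4.2GHz", cpu_ms_oc)]:
    print(f"{label}: frame {max(cpu_ms, gpu_ms):.1f} ms, "
          f"GPU util ~{gpu_utilisation(cpu_ms, gpu_ms):.0%}")

# Prints roughly 77% at stock vs 88% overclocked, i.e. the sort of
# 5-10% utilisation bump the benchmarks above show from a small OC.
```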
 
Last edited:
Also, note that 3600X + 3070 is slower than 3600X + 5700 XT which, IMO, should not be happening.

Yeah at 1080p Medium settings.. aka the settings not a single 3070 owner runs their games at.

In real-world use at 2560x1440, an RTX 3070 owner lowers a setting or two and voila, 60 fps locked, while a 5700 XT owner is already dreaming about a new GPU upgrade.

 

Md Ray

Member
omg are you for real now? you think a 3090 + a 1600X at 1080p is a sane test?
i think you should get the fuck out
with your BS fake narrative
Yep, it's a sane test to highlight NVIDIA's driver overhead issue. The issue gets highlighted even with mid-range components like 3600X and 3070 where the weaker 5700 XT is outperforming a stronger GPU when it shouldn't.
 

Md Ray

Member
Yeah at 1080p Medium settings.. aka the settings not a single 3070 owner runs their games at.

In real-world use at 2560x1440, an RTX 3070 owner lowers a setting or two and voila, 60 fps locked, while a 5700 XT owner is already dreaming about a new GPU upgrade.

This is potentially problematic for future CPU-bound titles even at 1440p. Cyberpunk 2077 already dips down to 40fps at 1440p in the Tom's Diner section on my 3070 + 3700X rig, whether on the Medium, High, or Ultra preset, due to this driver overhead issue.
 
This is potentially problematic for future CPU-bound titles even at 1440p. Cyberpunk 2077 already dips down to 40fps at 1440p in the Tom's Diner section on my 3070 + 3700X rig, whether on the Medium, High, or Ultra preset, due to this driver overhead issue.
If you mean the tiny ms dip or momentary stutter that happens while you're traversing fast on a vehicle, then that happens on every single CPU + GPU combo out there in CP77; it's easily replicable and happens in the same places. Maybe they will rectify this with a future update.
 

Kenpachii

Member
I just said it, and I am the authoritative source on where people should click, so you better click on their video and watch it, to the end!

And I tell you that clown is grasping at straws wherever he can, bending reality whatever way he wants in order to stir up shit to stay relevant or to keep his donkey farm of subscribers he calls fans happy.

If anybody wants to prove those GPUs have overhead in whatever APIs:

You need to bench a metric ton of games, at all kinds of different settings, with lots of GPUs across different GPU architectures, different CPU architectures with different core counts, different drivers for every card, different patches of the games you are benching, and different hardware setups like memory and motherboards, and base your conclusions on that.
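
Just to put a rough number on what that kind of matrix implies (every count below is a made-up placeholder, not a real test plan):

```python
# Rough count of how many benchmark runs the full matrix above implies.
# Every count here is a hypothetical placeholder.
from itertools import product

games        = 30   # "a metric ton of games"
settings     = 4    # e.g. 1080p/1440p at low/ultra
gpus         = 10   # several GPU architectures
cpus         = 8    # different vendors and core counts
drivers      = 3    # driver revisions per card
game_patches = 2    # at least two game versions
platforms    = 3    # memory / motherboard combos

runs = list(product(range(games), range(settings), range(gpus),
                    range(cpus), range(drivers), range(game_patches),
                    range(platforms)))

print(len(runs))             # 172800 runs
print(len(runs) * 3 / 60)    # 8640.0 hours at ~3 minutes per run
```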

He won't do that, because you'd actually have to use your brain at this point and actually work for a living, instead of just slamming a few cards onto the same brand of CPUs on the same boards, most likely on the same BIOS, etc. etc., you get it by now.

All I see in the OP is some shitter AMD CPUs that never worked for shit, like the 1600/2600 CPUs that always camped at shit-tier performance output and stuttering garbage, which no company ever optimized for (let alone Nvidia of all companies). The 3600 however was finally a half-decent CPU but still falls flat on its ass in game support versus Intel for pretty much 90% of its library, because no dev actually cared until recently about supporting AMD even half decently. But it made some changes in newer titles, and with the 5000 series and piss-poor Intel revisions the sky is finally starting to turn a bit brighter for AMD.

I would also like to see him and those other tech clowns like Linus actually bench some games that require CPU performance to see how that holds up, instead of all those GPU-pushing titles.

Sadly the clown probably just realized today what driver overhead actually is, even though for a decade he was happily advertising budget cards on insanely expensive CPU setups all those years ("to remove the bottleneck, guys", rofl, because all those brainlets universally agreed with each other that was the right thing to do), but now suddenly he saw the light after years of feeding into that bullshit.

Dude is just a train wreck.
He said that and I second that.

You agree with every single thing on the planet that favors AMD. Look, I like to read your posts from time to time, as it's refreshing to read some different stuff and I always appreciate different takes on things, so don't get it the wrong way.
 
Last edited:

Marlenus

Member
So if I'm using a 2700X am I gonna still be fucked with an Nvidia GPU over Radeon?

2700X is not so bad because it has extra cores. If games start utilising them though you might run into issues.

HUB only tested 6c/12t Ryzen parts and a 4c/8t Intel part, so extrapolating to 8c/16t parts is not straightforward, because the NV driver is multi-threaded so you have additional headroom the 6c and lower parts don't have.
 
You need to bench a metric ton of games, at all kinds of different settings, with lots of GPUs across different GPU architectures, different CPU architectures with different core counts, different drivers for every card, different patches of the games you are benching, and different hardware setups like memory and motherboards, and base your conclusions on that.
I wonder how you could ever know anything about any piece of hardware, because honestly you could never reach a conclusion on anything (not just computer hardware) if you held anyone to this kind of "strict" standard.

I mean, sure, you can spend years investigating a very specific subject like that, and not only do you hold others to strict "standards" of testing every single MB/RAM/CPU/GPU/HDD configuration -- which would be completely irrelevant in this case -- to show something you can observe with a relatively small sample size (6 GPUs + 6 CPUs), but you can't even bother to watch the video before you criticize their work. The observation is there: there is what seems to be a driver overhead that makes nVidia's GPUs useless on 6c/12t or lower CPUs (and, as you seem to ignore, they also tested a 4c/8t Intel CPU, which was Intel's high-end consumer offering until very recently).

How is this relevant?
Well, a lot of people did not feel the need to upgrade for almost 10 years because Intel stagnated so much it was irrelevant for the longest time, so pairing a CPU that would still be perfectly fine with a GPU that requires more horsepower to shine would be a bad decision, cutting nVidia's 3000 series GPUs out as a relevant choice for owners of these CPUs.

All I see in the OP is some shitter AMD CPUs that never worked for shit, like the 1600/2600 CPUs that always camped at shit-tier performance output and stuttering garbage, which no company ever optimized for (let alone Nvidia of all companies). The 3600 however was finally a half-decent CPU but still falls flat on its ass in game support versus Intel for pretty much 90% of its library, because no dev actually cared until recently about supporting AMD even half decently. But it made some changes in newer titles, and with the 5000 series and piss-poor Intel revisions the sky is finally starting to turn a bit brighter for AMD.
What? Where did you even get your tech "news"? Vice? I can get that you don't like AMD for whatever reason, and that you have an axe to grind, but I have not seen the stuttery mess you are talking about.

I don't think you can make a good faith argument, you are way too emotional about these things.
 

Bluntman

Member
Ohhhh dear God, such stupidity.

So let's get this straight. The few games that were shown by HWU were all running in DirectX 12. This is an explicit API whose whole purpose is to take jobs away from the driver and give them to the software (the game) itself.

It's not like there is zero driver overhead using DirectX 12, but there is minimal overhead in the classic sense so this whole argument is bullshit.

This means that a game's performance is mostly dependent on the game code itself, i.e. how the game code manages the hardware and the memory, etc...

This is all well and good, because it helps us beat many limitations that were present before. But it also has some drawbacks, because performance is now almost wholly dependent on the game code, so the hardware that the game was first developed on will always have an advantage. This is the hardware that the developers have the most experience with.

And most modern games are developed on AMD hardware first, because that makes writing the Xbox/PS code later on easier, or vice versa, it's easier to port from Xbox/PS to AMD hardware first.

Also, AMD provides PC developers with tools for hardware tracking and for analysing memory usage. Optimisation and fixing is done on AMD hardware and later checked to see if it works well on GeForce. If it works but with poorer performance, the developers can always choose to optimise further for GeForce, but this is not something that every dev will do.

So what we see here in the HWU video is NOT a driver issue, it's a software issue. It could be fixed with a patch to the game, but it makes no sense really for the devs to work on this, because this is not a realistic real-world scenario anyway.

This whole thing is STUPID.
 
Last edited:

Marlenus

Member
Ohhhh dear God, such stupidity.

So let's get this straight. The few games that were shown by HWU were all running in DirectX 12. This is an explicit API whose whole purpose is to take jobs away from the driver and give them to the software (the game) itself.

It's not like there is zero driver overhead using DirectX 12, but there is minimal overhead in the classic sense so this whole argument is bullshit.

This means that a game's performance is mostly dependent on the game code itself, i.e. how the game code manages the hardware and the memory, etc...

This is all well and good, because it helps us beat many limitations that were present before. But it also has some drawbacks, because performance is now almost wholly dependent on the game code, so the hardware that the game was first developed on will always have an advantage. This is the hardware that the developers have the most experience with.

And most modern games are developed on AMD hardware first, because that makes writing the Xbox/PS code later on easier, or vice versa, it's easier to port from Xbox/PS to AMD hardware first.

Also, AMD provides PC developers with tools for hardware tracking and for analysing memory usage. Optimisation and fixing is done on AMD hardware and later checked to see if it works well on GeForce. If it works but with poorer performance, the developers can always choose to optimise further for GeForce, but this is not something that every dev will do.

So what we see here in the HWU video is NOT a driver issue, it's a software issue. It could be fixed with a patch to the game, but it makes no sense really for the devs to work on this, because this is not a realistic real-world scenario anyway.

This whole thing is STUPID.

That is an awful lot of words to say "I don't understand anything".
 

FireFly

Member
Ohhhh dear God, such stupidity.

So let's get this straight. The few games that were shown by HWU were all running in DirectX 12. This is an explicit API whose whole purpose is to take jobs away from the driver and give them to the software (the game) itself.

Surely the stack is API ----> driver ----> hardware? Something must translate the API calls into hardware instructions. And if what you are saying was true, we wouldn't see big boosts in performance in specific titles with new driver releases, which is commonplace even with DirectX 12.
 
Last edited:

Bluntman

Member
Surely the stack is API ----> driver ----> hardware? Something must translate the API calls into hardware instructions. And if what you are saying was true, we wouldn't see big boosts in performance in specific titles with new driver releases, which is commonplace even with DirectX 12.

The driver still matters, yes, just a lot less than before.

I'm saying this specific issue (it's a nonissue really) is most likely not driver related, and most certainly not driver overhead related.
 
Last edited:

FireFly

Member
I'm saying this specific issue is most likely not driver related, and most certainly not driver overhead related.
How do we know this? (If the CPU overhead of the driver is at least partly independent of the game workload, I don't see why it couldn't "steal" enough CPU cycles to impact performance).
 
Last edited:

Rbk_3

Member
It's interesting, but not really relevant when it comes to practical use. 99% of the time you won't be running into a CPU limit.

If you play Warzone you will be CPU bound a very high % of the time with a 3090 if you pair it with anything less than a 10900K or 5950X with B-die memory.
 
Last edited:

Woo-Fu

Banned
I think a more interesting question would be what is Nvidia using that overhead for? Sure, it's easy to simply jump to "driver problem" but it could presumably be intentional and/or a feature.


DISCLAIMER: My pc has an intel CPU and an AMD GPU and in general I'm disgusted by Nvidia's pricing.
 
Last edited:

Marlenus

Member
I think a more interesting question would be what is Nvidia using that overhead for? Sure, it's easy to simply jump to "driver problem" but it could presumably be intentional and/or a feature.


DISCLAIMER: My pc has an intel CPU and an AMD GPU and in general I'm disgusted by Nvidia's pricing.

In DX11 titles that overhead is used to reduce load on the main CPU thread by splitting the workload across more cores, meaning more draw calls can be issued. This results in more FPS in DX11 titles when CPU-bound on that main thread. This can be seen in Far Cry 5, AC:O and other titles at 1080p and even 1440p.

In DX12 titles this is not necessary.
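
Roughly speaking, that DX11 benefit looks like this (toy sketch below, with hypothetical per-draw-call costs and a made-up scaling factor, since the real driver internals aren't public): if command submission costs the CPU a fixed amount per draw call, spreading that cost over extra worker threads raises how many draw calls fit in a frame.

```python
# Toy model of the DX11 point: draw calls that fit in one frame's CPU
# budget, single-threaded vs spread across worker threads. The costs
# and the 0.7 scaling factor are hypothetical placeholders.

def max_draw_calls(frame_budget_ms, cost_per_call_us, worker_threads,
                   scaling=0.7):
    # scaling < 1 models imperfect speed-up (sync, the final submit
    # still being serial, etc.).
    effective_threads = 1 + (worker_threads - 1) * scaling
    budget_us = frame_budget_ms * 1000 * effective_threads
    return int(budget_us / cost_per_call_us)

budget_ms = 16.7     # CPU time per frame at 60 fps
cost_us = 20.0       # hypothetical CPU cost per draw call

for threads in (1, 2, 4, 6):
    calls = max_draw_calls(budget_ms, cost_us, threads)
    print(f"{threads} thread(s): ~{calls:,} draw calls per frame")
```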
 
I'm saying this specific issue (it's a nonissue really) is most likely not driver related, and most certainly not driver overhead related.
That is a lot of programmers causing the same kind of CPU scaling problems on the nvidia side.

It might be something with their whole architecture then.
 

Shai-Tan

Banned
I think a more interesting question would be what is Nvidia using that overhead for? Sure, it's easy to simply jump to "driver problem" but it could presumably be intentional and/or a feature.


DISCLAIMER: My pc has an intel CPU and an AMD GPU and in general I'm disgusted by Nvidia's pricing.
That's a good idea for an article, although I doubt Nvidia would comment if it didn't make them look good. The enthusiast press doesn't cultivate enough industry connections to be able to quickly get to the bottom of more technical issues. I would have thought Beyond3D would have discussion that could shed light on it, but it seems the more technical people there aren't posting anymore.
 

llien

Member
(Since DirectX 12 something something EXPLICIT API something something...)

So what we see here in the HWU video is NOT a driver issue, it's a software issue.
Amazing stuff, dear god. Are you guys on crack or something?
 
Last edited:

Armorian

Banned
The absolute state of Nvidia haters.

You have very clear results from HU, and from DF when they tested the X1 APU. In some scenarios this driver overhead issue is present, much less so (or non-existent) on high-end CPUs. I think this should be in the news and maybe NV will do something about it (if possible).
 
Yeah, let's ignore the clear advantage NVIDIA GPU's have over the rest and let's find a silly scenario in which they perform worse.

Ignorance is bliss.
Small nitpicking vs the obvious big advantages. You can easily spot the ones who celebrate the smallest wins without seeing the bigger picture. And I'm not giving Nvidia a pass on this either, but at the same time, do people really play at extremely low resolutions and lowered settings on a GPU like that? Kinda makes you wonder if there's an agenda of some sort? Oh wait, it's HUB...
 

Marlenus

Member
Yeah, let's ignore the clear advantage NVIDIA GPU's have over the rest and let's find a silly scenario in which they perform worse.

Ignorance is bliss.

Why is it always zero sum with some people?

Knowing that in DX12 titles you need a more powerful CPU to get the most out of an NV GPU vs an AMD GPU is useful data for people doing a mid gen refresh on an older CPU.

If a person wants NV features then go NV but it might mean they go with a lower tier NV card because they know the CPU will be a bottleneck so it saves them some money.
 

hlm666

Member



I imagine that Nvidia's own APIs used on the Switch don't have this problem?


That still doesn't exactly validate driver overhead. The way he had to set that up, the RAM has less bandwidth than the console (the console's 68GB/s vs 14GB/s here), and the GPU is short on bandwidth as well because he had to run it through the NVMe port with an adapter; maybe that caused issues for the RTX GPU, like a bad PCIe riser cable would.
 