
[HUB] NVIDIA Has a Driver Overhead Problem, GeForce vs Radeon on Low-End CPUs

Armorian

Banned
It doesn't invalidate the findings. Dx12 is here to stay, so it must be addressed.

But it seems that it has more to do with specific games than drivers, I think. In CP2077 the differences are very minimal (aside from the Ryzen 1400), same for Death Stranding (no difference in this case), while in some games they are quite big.

Look at CPU usage here, almost the same for most games:

 

yamaci17

Member
But it seems that it has more to do with specific games than drivers, I think. In CP2077 the differences are very minimal (aside from the Ryzen 1400), same for Death Stranding (no difference in this case), while in some games they are quite big.

Look at CPU usage here, almost the same for most games:


I don't think this has anything to do with actual CPU usage
 

yamaci17

Member
TChCuNw.png


EqR1nnn.png


XkvMRsy.png



R6 built-in benchmark

As you can see, there's no GPU-bound limitation; it's a completely CPU-bound situation. CPU usage is practically the same (70% vs 71%, which is within margin of error).

At the exact same CPU usage, Vulkan renders 21% more frames, which means this game has roughly 21% more CPU overhead in DX11 mode for my Nvidia GPU.

Overhead is something else; it may not show up directly in raw CPU/GPU usage figures.
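
To make the arithmetic explicit, here's a trivial sketch of the comparison. The FPS numbers are placeholders in the spirit of the screenshots, not the exact figures:

```cpp
#include <cstdio>

// At the same CPU usage, the API that delivers more frames is spending fewer
// CPU cycles per frame. Numbers below are hypothetical, not measured.
int main() {
    const double fps_dx11   = 100.0; // hypothetical DX11 result, CPU bound
    const double fps_vulkan = 121.0; // hypothetical Vulkan result, same CPU usage

    const double vulkan_gain = (fps_vulkan / fps_dx11 - 1.0) * 100.0; // ~21% more frames

    // Equivalently, the CPU-side cost per frame in milliseconds.
    const double ms_dx11   = 1000.0 / fps_dx11;   // 10.00 ms
    const double ms_vulkan = 1000.0 / fps_vulkan; // ~8.26 ms

    std::printf("Vulkan renders %.0f%% more frames at the same CPU usage\n", vulkan_gain);
    std::printf("CPU cost per frame: DX11 %.2f ms vs Vulkan %.2f ms\n", ms_dx11, ms_vulkan);
    return 0;
}
```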
 

Ascend

Member
TChCuNw.png


EqR1nnn.png


XkvMRsy.png



R6 built-in benchmark

As you can see, there's no GPU-bound limitation; it's a completely CPU-bound situation. CPU usage is practically the same (70% vs 71%, which is within margin of error).

At the exact same CPU usage, Vulkan renders 21% more frames, which means this game has roughly 21% more CPU overhead in DX11 mode for my Nvidia GPU.

Overhead is something else; it may not show up directly in raw CPU/GPU usage figures.
Just to elaborate on this... It's quite simple...
If you're completely CPU bound in both cases, you will see the same usage in both cases but lower performance in the less optimized setup.
If you're not CPU bound in either case, you will see a lower CPU usage on the setup that has the lower overhead, but get the same performance.
There is also the scenario where you are CPU bound in one and not the other, which means you will see lower CPU usage and better performance in one, and higher CPU usage and worse performance in the other.

I should also say (although I suspect the majority here already know this) that obviously being CPU bound does not mean 100% CPU usage. 70% is not the max of your CPU, but it is the max that the software was programmed to use for your CPU. If you have a 2700X, it's either 6 or 12 threads for these results, depending on whether SMT was turned on or not.
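
If it helps, the three cases can be written out as a small sketch; the API labels, the usage/FPS numbers and the cap value are all made up for illustration:

```cpp
#include <cstdio>

// Illustrative sketch of the three cases above. "cap" is the highest CPU
// usage the game can actually reach on this CPU (e.g. ~70% here), not 100%.
struct Run { const char* api; double cpu_usage; double fps; };

void diagnose(Run a, Run b, double cap) {
    bool aBound = a.cpu_usage >= cap;
    bool bBound = b.cpu_usage >= cap;

    if (aBound && bBound)         // both at the ceiling: FPS exposes the overhead
        std::printf("Both CPU bound: %s has more overhead (fewer frames from the same CPU budget)\n",
                    a.fps < b.fps ? a.api : b.api);
    else if (!aBound && !bBound)  // neither at the ceiling: CPU usage exposes the overhead
        std::printf("Neither CPU bound: %s has more overhead (more CPU used for the same frames)\n",
                    a.cpu_usage > b.cpu_usage ? a.api : b.api);
    else                          // mixed: the CPU-bound setup is the one being held back
        std::printf("%s is CPU bound and losing performance to overhead\n",
                    aBound ? a.api : b.api);
}

int main() {
    // Hypothetical numbers in the spirit of the R6 screenshots above.
    diagnose({"DX11", 70.0, 100.0}, {"Vulkan", 71.0, 121.0}, 70.0);
    return 0;
}
```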
 

spyshagg

Should not be allowed to breed

Hoddi

Member
But it seems that it has more to do with specific games than drivers, I think. In CP2077 the differences are very minimal (aside from the Ryzen 1400), same for Death Stranding (no difference in this case), while in some games they are quite big.

Look at CPU usage here, almost the same for most games:

snip
For what it's worth, CPU utilization numbers are a rather poor metric to the point of almost being useless for things like driver performance. It's fine for a broad overview (if your CPU hits 100% then it's time to upgrade) but when you add in things like SMT/HT then even 50% can easily mean that your CPU has fully exhausted all of its execution units. If anything, the bigger tell is that all GPUs report roughly the same CPU wattage which is probably a better metric in this case.
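
As a toy example of how misleading those counters get with SMT (an assumed 8-core/16-thread part, nothing measured):

```cpp
#include <cstdio>

// Utilization is reported per logical thread, so a game that fully loads one
// heavy thread per physical core still reads as ~50% on an SMT CPU even
// though every core's execution units are busy.
int main() {
    const int physical_cores  = 8;                  // assumed 8c/16t CPU
    const int logical_threads = physical_cores * 2; // SMT/HT enabled
    const int busy_threads    = physical_cores;     // one heavy thread per core

    double reported = 100.0 * busy_threads / logical_threads;
    std::printf("Reported usage: %.0f%%, but all %d cores are saturated\n",
                reported, physical_cores);
    return 0;
}
```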

I don't really have anything else to add but I want to say that it's shameful how bad this supposed tech channel is at their job. There is such a thing as a performance profiler which makes it absolutely trivial to check for things like driver overhead and threading. They're even quoting that idiotic NerdTechGasm video which was debunked like 5 minutes after it came out.
 

spyshagg

Should not be allowed to breed
If anything, the bigger tell is that all GPUs report roughly the same CPU wattage which is probably a better metric in this case.

The best metric is to measure actual game performance. If the CPU draws the same wattage with both cards and the Nvidia cards are performing 20% worse, it implies the CPU is giving all it can with both GPUs, but that on Nvidia more CPU cycles are required than are available.

What's troubling is that the issue is not just core-count related. The HUB video tested the 1600X with 6 cores / 12 threads and the Nvidia GPUs still choked. Whatever Nvidia "needs" for their GPUs to perform optimally in DX12, it also requires a fast CPU core, ideally one not shared with any other processes (this is where core count comes into play).

The implication for the future is this: games will increase core utilization regardless of what Nvidia is doing in their driver. If the situation remains unresolved, people who today have a 3080/3090 paired with a 5600X (!!!) will see their games using all their cores and leaving Nvidia GPUs hungry for resources, as is already happening with the 4-core/8-thread CPUs today.

This will happen under one condition: only if you play games that target high refresh rates, i.e. 1080p at 144 Hz/240 Hz/360 Hz.
 

Vae_Victis

Banned
It's there. But I think anyone in their right mind buying RDNA2 or Ampere will be at least at Zen 2 level of core performance:

1h9C30c.png
rXmFm0j.png
I don't think there is much question of this being a serious issue for high-end PCs right now (it isn't); the question is what happens in the next 3-5 years, and what CPU you need to be properly future-proofed if you are going Nvidia.

If games start being much more CPU-heavy as a general rule (not unlikely given the specs of the new consoles - last gen the CPU was the big bottleneck, this time around it definitely won't be), we could start seeing some mid-to-good CPUs of today getting maxed out here and there, and then this Nvidia problem with DirectX 12 all of a sudden becomes a MUCH bigger deal.
 

Ascend

Member
So the most recent DX11 game:

oezA2jF.png


lpOW6q1.png



I wonder if AMD will fix this in their drivers...

Reminds me of Heavy Rain

z3zWTqb.png
Let's all cherry-pick a game where the framerate at 1080p is exactly the same as at 4K, just so that we can make nVidia look better than AMD.

How desperate must you be to keep hammering on something that has been known for years? Showing AMD's DX11 overhead isn't somehow magically gonna fix nVidia's DX12 overhead.
 

Armorian

Banned
Let's all cherry-pick a game where the framerate at 1080p is exactly the same as at 4K, just so that we can make nVidia look better than AMD.

How desperate must you be to keep hammering on something that has been known for years? Showing AMD's DX11 overhead isn't somehow magically gonna fix nVidia's DX12 overhead.

LOL, "cherrypick"

This is the newest game released... And it was hyped by people too, after being stuck on the PS3 gen for so long. And DX11 games will keep being released for years.
 

Marlenus

Member
LOL, "cherrypick"

This is the newest game released... And it was hyped by people too, after being stuck on the PS3 gen for so long. And DX11 games will keep being released for years.

Anything that is on Series X/S will be DX12 so DX11 issues, while entirely genuine and worth considering if they severely impact a game you are going to play, are going to be generally less interesting and impactful over the next 5 years than DX12 issues.

Also a new DX11 game that does not make proper use of command lists is just crap.
 

yamaci17

Member
So the most recent DX11 game:

oezA2jF.png


lpOW6q1.png



I wonder if AMD will fix this in their drivers...

Reminds me of Heavy Rain

z3zWTqb.png
These tests are weird tbh; a 2700X will never perform equal to a 10900K, considering the 3090 must be CPU bound at 1080p.

These seem highly artificial.
 

Armorian

Banned
Anything that is on Series X/S will be DX12 so DX11 issues, while entirely genuine and worth considering if they severely impact a game you are going to play, are going to be generally less interesting and impactful over the next 5 years than DX12 issues.

Also a new DX11 game that does not make proper use of command lists is just crap.

I'm not arguing that this is a good port or anything; I had to work with it myself to get it to support 21:9. But the CPU wall is real here with current drivers on AMD. GPU performance in general is actually quite normal below this wall (actually slightly better than normal; the 5700 XT is weaker than the 2070S in most games).

j0cvd0V.png



These tests are weird tbh; a 2700X will never perform equal to a 10900K, considering the 3090 must be CPU bound at 1080p.

These seem highly artificial.

There were a few rare games where Z2 was on par with Intel; this is one of them.

Why are games still using DX11? It is a 10-year-old API.

DX11 is not the big problem; it has multicore support if devs want to use it (Crysis 3 and W3 still scale with more cores/threads). In UE4 games the performance of the DX11 and DX12 renderers is pretty much the same.

CPU wall in DX11 Crash 4

GIQOKLD.png


Why are there such differences in some games? :pie_thinking:
 

Marlenus

Member
I'm not arguing that this is a good port or anything; I had to work with it myself to get it to support 21:9. But the CPU wall is real here with current drivers on AMD. GPU performance in general is actually quite normal below this wall (actually slightly better than normal; the 5700 XT is weaker than the 2070S in most games).

j0cvd0V.png





There were a few rare games where Z2 was on par with Intel; this is one of them.



DX11 is not the big problem; it has multicore support if devs want to use it (Crysis 3 and W3 still scale with more cores/threads). In UE4 games the performance of the DX11 and DX12 renderers is pretty much the same.

CPU wall in DX11 Crash 4

GIQOKLD.png


Why are there such differences in some games? :pie_thinking:
Command lists. It's a DX11 feature that allows rendering work to be split across threads. NV implemented their driver such that for almost any DX11 game it will effectively use them regardless of whether the devs bothered to or not. AMD does what the devs say, and if that overloads the main thread, then that is what happens.
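
For reference, this is roughly what the mechanism looks like in code. It's only a bare-bones sketch (no error handling, no real draw calls), but the D3D11_FEATURE_THREADING query is how an application can check whether the driver supports command lists natively or leaves them to runtime emulation:

```cpp
// Bare-bones sketch of DX11 deferred contexts / command lists; no error
// handling or real draw calls, just the structure of the mechanism.
#include <d3d11.h>
#include <cstdio>
#pragma comment(lib, "d3d11.lib")

int main() {
    ID3D11Device*        device    = nullptr;
    ID3D11DeviceContext* immediate = nullptr;
    if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                 nullptr, 0, D3D11_SDK_VERSION,
                                 &device, nullptr, &immediate)))
        return 1;

    // Does the driver support command lists natively, or does the D3D11
    // runtime have to emulate them on the submitting thread?
    D3D11_FEATURE_DATA_THREADING caps = {};
    device->CheckFeatureSupport(D3D11_FEATURE_THREADING, &caps, sizeof(caps));
    std::printf("Driver command lists: %s\n",
                caps.DriverCommandLists ? "native" : "emulated by the runtime");

    // A deferred context records rendering work, typically on a worker thread.
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);
    // ... deferred->IASetVertexBuffers(...), deferred->Draw(...) etc. would go here ...

    // The recorded work is baked into a command list...
    ID3D11CommandList* cmdList = nullptr;
    deferred->FinishCommandList(FALSE, &cmdList);

    // ...and replayed on the immediate context from the main thread.
    immediate->ExecuteCommandList(cmdList, FALSE);

    if (cmdList) cmdList->Release();
    deferred->Release();
    immediate->Release();
    device->Release();
    return 0;
}
```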

AMD will not update their drivers for this now because it is not worth it anymore. Back when it would have been worth doing, AMD were in dire straits and did not have the opportunity to do so.

The bottom line really rests with game devs. NV did a lot of work to improve the experience for their customers when devs did not do the work themselves. This is 100% a positive thing, and if you play a lot of DX11 games, go for NV without a doubt.

Going forward, DX12 is going to be more important, and with the CPU power the new consoles have, you might see the NV overhead issue impact faster CPUs. This is an evolving issue though, so it is worth keeping an eye on when you go to buy a GPU/CPU.
 