
[HUB] NVIDIA Has a Driver Overhead Problem, GeForce vs Radeon on Low-End CPUs

Bluntman

Member
so nvidia either has no issues or has a hardware issue?
No issues, this is a software issue.

Some people will always say it's not nVidia's fault... The lengths people go to just to defend nVidia are crazy to me. If they had to take a bullet for either Jensen Huang or their mother, it looks like their mother would die.
We should call out every big manufacturer, but I don't see the point in blaming them for something that's not their fault. If it feels good venting on Nvidia then by all means, but I'm more interested in the truth.
 

Ascend

Member
We should call out every big manufacturer, but I don't see the point in blaming them for something that's not their fault. If it feels good venting on Nvidia then by all means, but I'm more interested in the truth.
I never heard anyone saying that it was the API's fault rather than AMD's fault for the overhead issues lowering the performance of their cards in DX11. It's always AMD that supposedly had crappy drivers.
So why should we suddenly excuse nVidia's issues? It's their hardware. It's their responsibility to make it work properly.

The double standard once again comes to light.
 

VFXVeteran

Banned
It's not that AMD's toolset is superior, it's that Nvidia doesn't have a profiler that uses built-in hardware tracking, and doesn't have a full memory analysis tool either. So the tools aren't worse on the Nvidia side, they're non-existent.
Some of the larger studios (i.e. ND) develop their own hardware profiler.
 

Bluntman

Member
I never heard anyone saying that it was the API's fault rather than AMD's fault for the overhead issues lowering the performance of their cards in DX11. It's always AMD that supposedly had crappy drivers.
So why should we suddenly excuse nVidia's issues? It's their hardware. It's their responsibility to make it work properly.

The double standard once again comes to light.

Well, it's very tiring to repeat in every second reply that on an explicit API there is no kernel driver, the driver doesn't know what the hardware is doing, the game manages the hardware, so if it performs worse then it's the game devs' fault, not Nvidia's.

Nvidia is at fault for not giving proper profiling and analysing tools for the devs, so devs work on AMD hardware first and then they do or don't bother properly optimising for Nvidia hardware. Some do, some don't.
 
Last edited:

Bluntman

Member
Some of the larger studios (i.e. ND) develop their own hardware profiler.

Yes, they can do that on console (AMD) hardware because in the previous generation MS and Sony asked AMD to build tracking/profiling hardware into the GPU, so they know exactly what's happening and why. And then AMD kept that for their PC GPUs later on and provided a tool for devs to use it.
 
Last edited:

Ascend

Member
Well, it's very tiring to repeat in every second reply that on an explicit API there is no kernel driver, the driver doesn't know what the hardware is doing, the game manages the hardware, so if it performs worse then it's the game devs' fault, not Nvidia's.

Nvidia is at fault for not giving proper profiling and analysing tools for the devs, so devs work on AMD hardware first and then they do or don't bother optimising for Nvidia hardware. Some do, some don't.
If there is no hardware for something, like scheduling, it must be implemented through software, either by the developer or by nVidia through a driver. And if this is the limit, it's 100% in nVidia's hands.

Developers primarily use nVidia on PC for games.
 
The part around the 27 minute mark of the new video was interesting. They showed a chap who had uploaded a video about his upgrade in BFV running DX11: he went from an R9 390 to a GTX 1660 Ti on an overclocked i7 3770K. It should have been a decent upgrade really, but instead he was getting around 30fps less, as CPU utilisation went from 70-80% to 90-100% and left the GTX 1660 Ti choking at half utilisation. The side by side he does is really interesting, as watching those R9 390 numbers you'd think there'd be enough in the CPU's tank to upgrade when getting 120+fps on a 5 year old GPU, but there's obviously something going on with the CPU utilisation when it leaves your shiny new Nvidia GPU unable to keep up even with your old AMD one.

Gaming at 4K using a 2080ti means this all doesn't affect me, but it still makes for an interesting discussion. Maybe I should upgrade my CPU to be on the safe side :messenger_grinning_sweat:
 

Neo_game

Member
Nvidia pretty much owns the laptop market. This is where it may matter, as I think most laptop users are into 1080p and esports games. Unless we have proper benchmarks for VR and esports titles, which can benefit from 60+fps, this is pretty insignificant IMO.
 
The part around the 27 minute mark of the new video was interesting. They showed a chap who had uploaded a video about his upgrade in BFV running DX11: he went from an R9 390 to a GTX 1660 Ti on an overclocked i7 3770K. It should have been a decent upgrade really, but instead he was getting around 30fps less, as CPU utilisation went from 70-80% to 90-100% and left the GTX 1660 Ti choking at half utilisation. The side by side he does is really interesting, as watching those R9 390 numbers you'd think there'd be enough in the CPU's tank to upgrade when getting 120+fps on a 5 year old GPU, but there's obviously something going on with the CPU utilisation when it leaves your shiny new Nvidia GPU unable to keep up even with your old AMD one.

Gaming at 4K using a 2080ti means this all doesn't affect me, but it still makes for an interesting discussion. Maybe I should upgrade my CPU to be on the safe side :messenger_grinning_sweat:

 
Igor tested this "badly".
He limited the cores but he still used a CPU with strong cores.
He just made guesses without much technical backing. Interestingly, while testing he noticed that the system with the GeForce GPU suffered more from pop-in and texture issues than the system with the Radeon GPU.
 

Bluntman

Member
If it were a driver overhead issue it would show up on the synthetic benchmarks designed to expose driver overhead. But for a game to perform this much worse on Nvidia hardware we should see at least double the overhead in synthetics, not just the few % that are there.

Meanwhile, the problems on the software side can be numerous, starting with the fact that in many cases Nvidia recommends the exact opposite of what Microsoft recommends for DX12 operations.

Microsoft designed the API around a typical resource management model: basically, heaps go into root signatures, and buffers and constants go into the memory heaps.

Nvidia says the opposite: for them, the constants and the constant buffers should go directly into the root signature.

* Constants that sit directly in root can speed up pixel shaders significantly on NVIDIA hardware – specifically consider shader constants that toggle parts of uber-shaders
* CBVs that sit in the root signature can also speed up pixel shaders significantly on NVIDIA hardware


This works, yeah, but it can disrupt the resource management the API was designed around and limits how the buffers can be managed, and barriers can also run into false dependencies more often if the buffer sits directly inside the root signature.

It makes sense for Nvidia to recommend these if it leads to better performance on their hardware. But it has to be managed on the software side (the game), and although moving the CBVs and constants into the root signature isn't a difficult task, optimising the engine around this kind of operation, which is only ideal for GeForce GPUs, is a bigger one. That's the part some devs just don't have the time or resources for; they just let the CPU and GPU power through it and call it a day.
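For anyone curious what this looks like in code, here is a minimal sketch (my own illustration, not from the video or any particular engine) of the two binding styles in plain D3D12, assuming the usual d3d12.h / d3d12.lib setup: (a) the "generic" layout where the CBV lives in a descriptor heap and is reached through a descriptor table, and (b) a layout along the lines of NVIDIA's public Do's and Don'ts, with a root CBV plus inline root constants. Error handling omitted.

// Sketch only: two ways of binding a per-draw constant buffer in D3D12.
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

// (a) "Generic" layout: the CBV lives in a shader-visible descriptor heap and is
//     reached through a descriptor table in the root signature.
ComPtr<ID3DBlob> SerializeTableLayout()
{
    D3D12_DESCRIPTOR_RANGE range = {};
    range.RangeType          = D3D12_DESCRIPTOR_RANGE_TYPE_CBV;
    range.NumDescriptors     = 1;
    range.BaseShaderRegister = 0;                                    // b0

    D3D12_ROOT_PARAMETER param = {};
    param.ParameterType                       = D3D12_ROOT_PARAMETER_TYPE_DESCRIPTOR_TABLE;
    param.DescriptorTable.NumDescriptorRanges = 1;
    param.DescriptorTable.pDescriptorRanges   = &range;

    D3D12_ROOT_SIGNATURE_DESC desc = {};
    desc.NumParameters = 1;
    desc.pParameters   = &param;

    ComPtr<ID3DBlob> blob, error;
    D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1, &blob, &error);
    return blob;
}

// (b) Along the lines of NVIDIA's "Do's and Don'ts": the frequently changing data
//     sits directly in the root signature as a root CBV plus inline root constants
//     (e.g. uber-shader toggle flags).
ComPtr<ID3DBlob> SerializeRootConstantLayout()
{
    D3D12_ROOT_PARAMETER params[2] = {};

    params[0].ParameterType             = D3D12_ROOT_PARAMETER_TYPE_CBV;             // root CBV at b0
    params[0].Descriptor.ShaderRegister = 0;

    params[1].ParameterType             = D3D12_ROOT_PARAMETER_TYPE_32BIT_CONSTANTS; // constants at b1
    params[1].Constants.ShaderRegister  = 1;
    params[1].Constants.Num32BitValues  = 4;
    params[1].ShaderVisibility          = D3D12_SHADER_VISIBILITY_PIXEL;

    D3D12_ROOT_SIGNATURE_DESC desc = {};
    desc.NumParameters = 2;
    desc.pParameters   = params;

    ComPtr<ID3DBlob> blob, error;
    D3D12SerializeRootSignature(&desc, D3D_ROOT_SIGNATURE_VERSION_1, &blob, &error);
    return blob;
}

The trade-off described above is exactly this: (b) is what the quoted NVIDIA advice points at, while (a) keeps everything in heaps the way the generic DX12 guidance assumes, and an engine built around one layout doesn't automatically suit the other.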

The way Nvidia's hardware works is in a lot of ways not properly aligned with what Microsoft recommends and with how AMD hardware works. And because Nvidia doesn't provide documentation on their hardware, nor deep profiling and full memory analysis tools, it just worsens the problem.

It can be worked around, it's just more work for the developers. Meanwhile AMD cards function very much like DX12 expects them to, and RDNA1/2 were also designed to basically eat everything, even badly written code, pretty well and sort it out in hardware.

Async compute shouldn't be a problem in most cases. Nvidia hardware is still not completely stateless and is somewhat inferior to AMD in this regard, but unless an engine does some truly exotic stuff with async compute on AMD, it should pretty much work on Nvidia with just a few % worse performance here and there.
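To make the async compute point concrete, here's a minimal sketch of the mechanism (my own illustration, not anyone's engine code): a second, compute-only queue plus a fence used to synchronise it against the graphics queue. How well the two queues actually overlap is then down to the GPU and its scheduling, which is exactly where the architectural differences show up.

// Minimal async-compute skeleton in D3D12: graphics queue + compute queue + fence.
// Headless device creation; error handling omitted.
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // One direct (graphics) queue and one compute-only queue.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = { D3D12_COMMAND_LIST_TYPE_DIRECT };
    D3D12_COMMAND_QUEUE_DESC cmpDesc = { D3D12_COMMAND_LIST_TYPE_COMPUTE };
    ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));
    device->CreateCommandQueue(&cmpDesc, IID_PPV_ARGS(&computeQueue));

    // Fence so the compute queue waits for the graphics work it depends on
    // (e.g. a compute pass that reads the G-buffer).
    ComPtr<ID3D12Fence> fence;
    device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    const UINT64 gbufferDone = 1;
    gfxQueue->Signal(fence.Get(), gbufferDone);     // graphics: "G-buffer is ready"
    computeQueue->Wait(fence.Get(), gbufferDone);   // compute: stall until that signal
    // ...ExecuteCommandLists() on computeQueue would go here; the rest is up to
    // how the hardware schedules the two queues side by side.
    return 0;
}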
 
Last edited:
This is a problem they really should fix... I see people disregarding it as "don't match weak cpu with muh strong gpu".

Oh, so you people thought a 30% increase over Zen 2 makes Zen 3 a whole different level?

Nope:



a 5600x with a 3070. High end CPU, huh?

Can't lock to 60 FPS. Bottleneck occurs. Below 60 FPS. Overhead. Got it?

There are and will be tons of CPU-bound games like this. No CPU is safe from this issue, unless you're willing to shell out extra money for a new CPU every 2 years.

Due to this overhead, a potential RX 6700XT will perform better and probably lock to 60 fps with a 5600x.


Wow, new AMD CPUs still choking in CP77? :messenger_astonished: How long have they been out already? I would have thought the fix would be out by now.
 

vanguardian1

poor, homeless and tasteless
That's because Cyberpunk uses old Bulldozer code in it.
I wish I could say I'm surprised by that statement, but.... sheesh!

The only possible positive interpretation is that they were just trying to make it so Bulldozer processors could play the game and felt Ryzen processors were fast enough without optimization. The only other two possibilities I can think of are negligence or sabotage... unless... Someone correct me if I'm wrong, but aren't the PS4 and X1 "Jaguar" APUs both enhanced versions of the Bulldozer architecture?
 

Bluntman

Member
Someone correct me if I'm wrong, but aren't the PS4 and X1 "Jaguar" APUs both enhanced versions of the Bulldozer architecture?
Nope, they're completely different.

Bulldozer went with a very strange cluster based multithreading architecture, which sounded good on paper, but never delivered in practice. It could never reach the clocks it needed to have and consumed lots of power. So it wasn't ideal at all for a console.

The problem is, the only other architecture at the time available from AMD was Jaguar, which is a completely traditional architecture, made specifically for low-power use (ie. low-end laptops). That wasn't ideal for a console either because of the slow performance, but a better fit still than Bulldozer.

Usually, the GPU is a lot more important in a console because of the very low overhead on the CPU side, but heck, even the Xbox 360's CPU has higher theoretical performance than the Jaguar we got with the PS4/Xbox One.
 
Last edited:

vanguardian1

poor, homeless and tasteless
Nope, they're completely different.

Bulldozer went with a very strange cluster based multithreading architecture, which sounded good on paper, but never delivered in practice. It could never reach the clocks it needed to have and consumed lots of power. So it wasn't ideal at all for a console.

The problem is, the only other architecture at the time available from AMD was Jaguar, which is a completely traditional architecture, made specifically for low-power use (ie. low-end laptops). That wasn't ideal for a console either because of the slow performance, but a better fit still than Bulldozer.

Usually, the GPU is a lot more important in a console because of the very low overhead on the CPU side, but heck, even the Xbox 360's CPU has higher theoretical performance than the Jaguar we got with the PS4/Xbox One.
I see, thanks for the correction, Bluntman. Knowing that, however, essentially removes any positive view of the Bulldozer code in 2077... Makes me really wonder what the programmers were up to. :-/
 

PhoenixTank

Member
That's because Cyberpunk uses old Bulldozer code in it.
The max thread count? That got patched a while ago. At least for 6 cores & lower.

[AMD SMT] Optimized default core/thread utilization for 4-core and 6-core AMD Ryzen(tm) processors. 8-core, 12-core and 16-core processors remain unchanged and behaving as intended. This change was implemented in cooperation with AMD and based on tests on both sides indicating that performance improvement occurs only on CPUs with 6 cores and less.
We're derailing a bit here, though.
 
Last edited:

spyshagg

Should not be allowed to breed
You can't take one game and declare that the entire driver set for Nvidia graphics boards is at fault. You would have to give multiple examples of games exhibiting the same result. Perhaps it's the game.
You didn't even watch the video. He clearly said it was visible in multiple games.

your posts age like milk.
 

Armorian

Banned
Hitman 3 CPU wall for nvidia:

5900x

tNEb0ZF.png


10900k

6ifw0rA.png


3700x

TKymETP.png
 

Bluntman

Member
Hitman 3 is a very nice example of what I said above about DirectX 12.

This is a title heavily funded by Intel to be optimised for their GPUs. As I said above, Microsoft has a recommendation guideline on how to use DX12, to which the manufacturers react as follows:
  • AMD recommends that devs follow Microsoft's guidelines.
  • Nvidia recommends that devs deviate from Microsoft's guidelines in many places.
  • Intel mandates that devs follow Microsoft's guidelines (when they pay for the title).

This is a very difficult situation for a developer, because the three manufacturers have different recommendations, but while with Nvidia and AMD these are just recommendations, Intel mandates it and only signs a contract if the developer strictly follows Microsoft's guidelines, because that's what's best for Intel's own GPUs.

Obviously, following or deviating from these guidelines strongly influences not just shader performance, but also how often the game runs into different resource limits on different manufacturers' GPUs.

In this particular case, Intel got its way because they paid a lot, but this is a very good situation for AMD as well, because GCN/RDNA are fully memory-based architectures; they don't give a fuck about, for example, where the CBVs and buffers are: in the memory heap, in the root signature, or in the pub.

And because Intel funded the title, they also don't care whether getting their way causes limits on GeForce.
 
Last edited:

Marlenus

Member
Hitman 3 is a very nice example of what I said above about DirectX 12.

This is a title heavily funded by Intel to be optimised for their GPUs. As I said above, Microsoft has a recommendation guideline on how to use DX12, to which the manufacturers react as follows:
  • AMD recommends that devs follow Microsoft's guidelines.
  • Nvidia recommends that devs deviate from Microsoft's guidelines in many places.
  • Intel mandates that devs follow Microsoft's guidelines (when they pay for the title).

This is a very difficult situation for a developer, because the three manufacturers have different recommendations, but while with Nvidia and AMD these are just recommendations, Intel mandates it and only signs a contract if the developer strictly follows Microsoft's guidelines, because that's what's best for Intel's own GPUs.

Obviously, following or deviating from these guidelines strongly influences not just shader performance, but also how often the game runs into different resource limits on different manufacturers' GPUs.

In this particular case, Intel got its way because they paid a lot, but this is a very good situation for AMD as well, because GCN/RDNA are fully memory-based architectures; they don't give a fuck about, for example, where the CBVs and buffers are: in the memory heap, in the root signature, or in the pub.

And because Intel funded the title, they also don't care whether getting their way causes limits on GeForce.

With Series X/S being DX12 based I would expect more games and engines to follow the MS guidelines.
 

Bluntman

Member
With Series X/S being DX12 based I would expect more games and engines to follow the MS guidelines.

Most of them already are because the consoles use AMD hardware and AMD has the PC dev tools Nvidia doesn't.

That's why we are seeing Nvidia GPUs running into limits in these games.

It's entirely possible to modify and optimise the renderer for Nvidia, but that's some extra work many devs just won't do.

There are some who do, though. That's why I recommended running a test with Strange Brigade, because that's a title where I know the devs did do this work (obviously there should be others as well, it's just that this is one I'm sure of).
 
Last edited:

martino

Member
Hitman 3 is a very nice example of what I said above about DirectX 12.

This is a title heavily funded by Intel to be optimised for their GPUs. As I said above, Microsoft has a recommendation guideline on how to use DX12, to which the manufacturers react as follows:
  • AMD recommends that devs follow Microsoft's guidelines.
  • Nvidia recommends that devs deviate from Microsoft's guidelines in many places.
  • Intel mandates that devs follow Microsoft's guidelines (when they pay for the title).

This is a very difficult situation for a developer, because the three manufacturers have different recommendations, but while with Nvidia and AMD these are just recommendations, Intel mandates it and only signs a contract if the developer strictly follows Microsoft's guidelines, because that's what's best for Intel's own GPUs.

Obviously, following or deviating from these guidelines strongly influences not just shader performance, but also how often the game runs into different resource limits on different manufacturers' GPUs.

In this particular case, Intel got its way because they paid a lot, but this is a very good situation for AMD as well, because GCN/RDNA are fully memory-based architectures; they don't give a fuck about, for example, where the CBVs and buffers are: in the memory heap, in the root signature, or in the pub.

And because Intel funded the title, they also don't care whether getting their way causes limits on GeForce.
Really interesting, so the situation is more subtle than it seems even if it doesn't change the end result and possible effects of what is currently happening.
 
Last edited:

GHG

Gold Member
Igor's Lab came to the same conclusion with a different technique (disabling cores) and different games:


Ah yes, I too disable cores prior to gaming.

What a pointless discussion, never seen anything more ridiculous. Can someone point towards an example where this manifests itself without having to jump through hoops and disable cores or using clearly unbalanced hardware configurations that cause inevitable bottlenecks?

Nvidia should ban these fools again so they can scream into the void.
 
Hardware Unboxed at it again...

And then when Nvidia next tells them to fuck off due to their disingenuous takes they will start crying and plead innocence.

Nvidia should take valid criticisms and not shut them down. It will only make them look bad.
 

spyshagg

Should not be allowed to breed
I agree.

And this isn't one of them.

Why?

A 3090 should never be slower than a 5600XT in any scenario or any computer build. Is this even debatable?

The argument that it is "unlikely" to happen in real life is wrong, because it IS happening in real life. Most PC gamers don't give a damn about 4K. It's all about e-sports settings (1080p 240Hz/360Hz), and a 3700X was one of the best CPUs you could buy last year to play and stream.

Memes dictate you must own Intel + Nvidia to have the "best" online competitive performance. It's bro-science, the same as having a bucket racing seat at your desk, or a professional microphone hanging in your face.

The reality for 2019 and 2020 seems to be: buy the fastest Intel CPU money can buy if you have Nvidia (quite the heavy price to pay!), or buy any good CPU if you have Radeon and get better performance.
 
Last edited:

yamaci17

Member
Why?

A 3090 should never be slower than a 5600XT in any scenario or any computer build. Is this even debatable?

The argument that it is "unlikely" to happen in real life is wrong, because it IS happening in real life. Most PC gamers don't give a damn about 4K. It's all about e-sports settings (1080p 240Hz/360Hz), and a 3700X was one of the best CPUs you could buy last year to play and stream.

Memes dictate you must own Intel + Nvidia to have the "best" online competitive performance. It's bro-science, the same as having a bucket racing seat at your desk, or a professional microphone hanging in your face.

The reality for 2019 and 2020 seems to be: buy the fastest Intel CPU money can buy if you have Nvidia (quite the heavy price to pay!), or buy any good CPU if you have Radeon and get better performance.


Isn't a 3700X paired with an RTX 3070 at 1440p ultra settings a balanced match?

Yet we see some serious underutilization in this video. Game is badly optimized too, but I wonder how much this "issue" plays a part in this particular footage? Would the CPU render more FPS with an equivalent AMD GPU, I wonder?
 

yamaci17

Member
You're absolutely right, and it's completely fair to criticise whoever is responsible for this: the game developers, not Nvidia.
You really seem to have a point. https://www.pcgameshardware.de/ did a comprehensive test with a 3090 and 6900xt combined with a ryzen 3 3100


It really seems like it differs greatly, game by game.
Nvidia vs AMD:
Valhalla: +20%
Legion: +12%
BF5: +12%
Hitman 3: +11%
Anno 1800: +9%
Cyberpunk: +7%
Horizon Zero Dawn: +5%
Doom Eternal: +4%
RDR 2: -2% (Nvidia favored)

It seems like some developers can minimize this overhead and some of them do not care about it. Valhalla seems to be the worst offender (it really favors the RDNA architecture).

In DX11, Nvidia has the advantage, but that is irrelevant now, since DX11 is probably abandoned at this point.
 
Last edited:

Ascend

Member
You're absolutely right, and it's completely fair to criticise whoever is responsible for this: the game developers, not Nvidia.
If games are created on DX12, and Intel and AMD recommend complying with the guidelines of Microsoft (the creator of DX12), how is it not nVidia's problem? They are the only ones insisting on doing things differently for games that use DX12... It's their hardware that needs to comply with DX12. Why design your hardware differently in the first place...?

Let me take a wild guess... You (nVidia) have more money to throw around to make developers create the games differently so they perform better on your own hardware at the cost of everyone else...
 

yamaci17

Member
If games are created on DX12, and Intel and AMD recommend complying with the guidelines of Microsoft (the creator of DX12), how is it not nVidia's problem? They are the only ones insisting on doing things differently for games that use DX12... It's their hardware that needs to comply with DX12. Why design your hardware differently in the first place...?

Let me take a wild guess... You (nVidia) have more money to throw around to make developers create the games differently so they perform better on your own hardware at the cost of everyone else...

Nvidia is especially adamant about this; they will gimp VRAM, they will gimp technologies, they will gimp whatever they can.

At best, Nvidia will provide special optimizations for 2-3 years for their "newest" cards, then abandon those optimizations, and the card will age worse. It was clear that Pascal, being different from GCN, had special optimizations or code paths fed by Nvidia to keep its "relative" performance up to date. Once those optimizations are out of the picture, the RTX 2070 suddenly becomes superior to the 1080 Ti in Cyberpunk (proof: https://www.guru3d.com/articles_pages/cyberpunk_2077_pc_graphics_perf_benchmark_review,5.html )

The same will happen to Turing and Ampere when the time comes.

This is my point of view, btw; it may not be accurate.
 
Last edited:

TriSuit666

Banned
It's their hardware that needs to comply with DX12. Why design your hardware differently in the first place...?
Surely if it says 'DX12 Ultimate' on the box (checks my 3070 box, yep says it right there on the front), then it will have passed the certification to carry the logo?

But Bluntman, how much could this be a money thing? If AMD has paid to have its logos on the front of a game, then it makes commercial sense that they don't want to be seen giving the competition a leg up in optimisations...

(I realise that's slightly conspiratorial)
 
Last edited:

Bluntman

Member
You really seem to have a point. https://www.pcgameshardware.de/ did a comprehensive test with a 3090 and 6900xt combined with a ryzen 3 3100


It really seems like it differs greatly, game by game.
Nvidia vs AMD:
Valhalla: +20%
Legion: +12%
BF5: +12%
Hitman 3: +11%
Anno 1800: +9%
Cyberpunk: +7%
Horizon Zero Dawn: +5%
Doom Eternal: +4%
RDR 2: -2% (Nvidia favored)

It seems like some developers can minimize this overhead and some of them do not care about it. Valhalla seems to be the worst offender (it really favors the RDNA architecture).

In DX11, Nvidia has the advantage, but that is irrelevant now, since DX11 is probably abandoned at this point.

Thank you for the article. So overall, based on all the tests by the site, we're looking at this:



Which proves my point further and also brings us to another interesting topic:

Why does the 3090 love Vulkan more, you might ask?

So if we start with my comments from before (DX12 and the recommendations), we have to say it's important because DX12 is a pure bindless API, with which Microsoft jumped beyond every vendor's current hardware capabilities, except for AMD's (which is the only pure bindless and stateless architecture currently on the market). This has to be accounted for, although it doesn't really have much use right now. It will when Shader Model 6.6 hits, because it'll allow devs to easily implement dynamic resource binding. Microsoft actually invested in the future years ago when creating DX12, and that is slowly coming to full fruition.

But managing this from the software (the game) is extra work on architectures which are not fully bindless and stateless, like Nvidia's and, in a different way, Intel's -- they're still somewhat oldschool "slot-in" architectures. This is the extra work that (as mentioned before) some devs won't do. Most just say it works well enough under most conditions (i.e. at higher resolutions and with beefier CPUs) and leave it at that.

Meanwhile Vulkan is not yet a fully bindless API. Its resource binding model is less modern, and this means it's more of a "one size fits all" API. It fits Nvidia more because it's a little more oldschool, and it fits AMD as well, because as I said before, GCN/RDNA will just eat up whatever comes their way. This of course also means there is a little more CPU overhead on Vulkan, but it's really not much.
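For reference, here's a hedged sketch of what that "dynamic resource binding" looks like on the API side once Shader Model 6.6 is available (assuming a recent Windows/Agility SDK; exact requirements like resource binding tier and root signature version may differ from what I show here). The root signature is created with the HEAP_DIRECTLY_INDEXED flags, and shaders then index the descriptor heap directly instead of going through bound descriptor tables, which is about as "bindless" as the API currently gets.

// Sketch: root signature for SM 6.6 "dynamic resources" (descriptor-heap indexing).
// Assumes a recent SDK that defines these flags; error handling omitted.
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")
using Microsoft::WRL::ComPtr;

ComPtr<ID3DBlob> SerializeBindlessRootSignature()
{
    D3D12_VERSIONED_ROOT_SIGNATURE_DESC desc = {};
    desc.Version = D3D_ROOT_SIGNATURE_VERSION_1_1;
    // No descriptor tables at all: shaders index the heaps directly.
    desc.Desc_1_1.Flags = D3D12_ROOT_SIGNATURE_FLAG_CBV_SRV_UAV_HEAP_DIRECTLY_INDEXED |
                          D3D12_ROOT_SIGNATURE_FLAG_SAMPLER_HEAP_DIRECTLY_INDEXED;

    ComPtr<ID3DBlob> blob, error;
    D3D12SerializeVersionedRootSignature(&desc, &blob, &error);
    return blob;
}

// HLSL side (Shader Model 6.6), for reference:
//   Texture2D    tex = ResourceDescriptorHeap[materialIndex];
//   SamplerState smp = SamplerDescriptorHeap[samplerIndex];
//   float4 colour = tex.Sample(smp, uv);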
 
Last edited:
The CPU may be one of the weakest in its line, but it's still a Zen 2 CPU; they could do worse.
Anyway, by this point it's becoming clear that Nvidia's GPUs are underperforming under DX12, right?

uktKydd.png




Makes me think...
RDNA2 is FRICKING AMAZING! Really, AMD did a very nice job returning to the high end, but now I'm suspicious that the 6900 XT is topping the GPU performance charts because reviewers started to use AMD CPUs more.
 
Last edited:

Bluntman

Member
Surely if it says 'DX12 Ultimate' on the box (checks my 3070 box, yep says it right there on the front), then it will have passed the certification to carry the logo?

But Bluntman, how much could this be a money thing? If AMD has paid to have its logos on the front of a game, then it makes commercial sense that they don't want to be seen giving the competition a leg up in optimisations...

(I realise that's slightly conspiratorial)

Yes, it passed certification, as it's fully compatible with every DX12 Ultimate feature. It's just that if a dev doesn't code the renderer in a specific way, Nvidia hardware will run into resource limits more often in certain, some would say "unrealistic", scenarios - like at low resolutions with a slower CPU.

It still functions properly, it can still do its job perfectly fine, just with slower performance in some scenarios.

As for the latter: AMD doesn't really need to pay for optimisation, as the devs will do that because they kinda need to anyway... because AMD has the tools for proper deep analysis, and because of the consoles. The results can then be used for optimisation on Nvidia as well, if a dev bothers.

And if Nvidia pays the dev to build the renderer in a way GeForce GPUs like, then that's fine for AMD as well, because as I said, RDNA will just eat up everything and doesn't give many fucks in the process.

But the bottom line is that Microsoft's recommendations are fine for AMD and Intel, and less fine for Nvidia.
 

Bluntman

Member
If games are created on DX12, and Intel and AMD recommend complying with the guidelines of Microsoft (the creator of DX12), how is it not nVidia's problem? They are the only ones insisting on doing things differently for games that use DX12... It's their hardware that needs to comply with DX12. Why design your hardware differently in the first place...?

Let me take a wild guess... You (nVidia) have more money to throw around to make developers create the games differently so they perform better on your own hardware at the cost of everyone else...

I see where you're coming from, and it's a matter of point of view, really. From a certain viewpoint you are right: Nvidia gimps out on the parts of the hardware that would make it fully bindless, stateless and memory-based, and yes, that can be compensated for in software. They can count on the devs doing this compensation as Nvidia is the market leader anyway, and obviously they throw money around as well.

But from a different point of view it's not Nvidia's fault, nor really a hardware issue, because the hardware works, it's fully compatible with DX12 Ultimate, and in most scenarios this whole thing wouldn't show up anyway.
 
Last edited:

spyshagg

Should not be allowed to breed
AMD has also been making truly DX12-compliant cards since 2013, while Nvidia was busy designing and paying developers to use software (GameWorks) that gimped all cards, but especially AMD's.


AMD was also creating the predecessor of Vulkan (Mantle) in 2013 to address industry-wide DX11 limitations, while Nvidia fought DX12 and Vulkan every step of the way until they finally had a more capable GPU on the market, because Nvidia's gimped DX12 performance was the result of moving hardware features out of its GPUs into the software layer (driver), thus running on the CPU, because what the hell, no reviewer tests high-end GPUs with regular CPUs, no one will ever notice.


Fast forward to 2021 and they are still dependent on the software layer (driver) to do DX12 properly.


Meanwhile, between the introduction of Maxwell (900 series) and today, Nvidia's efforts have gone into pointing a gun at every AIB on the market and saying "hey scum, gimme all your GPU brandings or no GPUs for you", in a blatant attempt to kill the AMD GPU market, whilst also pointing that same gun at the maker of an extremely popular 3D benchmark (Futuremark) and saying "hey scum, here's some bags if you gimp your DX12 benchmark's async compute down to an extremely basic capability, thus removing the competition's advantage in DX12".
That gun sure was versatile, as it was pointed once again at reviewers who did not change their editorial process to ejaculate all over ray tracing and DLSS: no more GPUs for you.


None of these companies are your friends, neither AMD nor NVIDIA. They all try to protect their investors at your expense through lies and market manipulation, and even AMD's push to rush DX12 and Vulkan onto the market happened because they knew their DX12 hardware was better. But between these two corporations, Nvidia is the one who would kill your puppy for a half-percent rise in their stock.


FYI: I had a 1080 Ti and a 3080 before my current 6900 XT. I also have an i9 9900K, because I'm not a cult follower; I want the best when I make an investment.
 
Last edited:

Bluntman

Member
Fast forward to 2021 and they are still dependent on the software layer (driver) to do DX12 properly.

Well, I hoped we'd established in the last few pages (or at least on this page) that this has nothing to do with the driver, but I guess all that typing was for nothing :(
 
Last edited:

spyshagg

Should not be allowed to breed
Well I hoped that we've established in the last few pages (or at least on this page) that this has nothing to do with the driver, but I guess all that typing was for nothing :(

Pardon me, but if you have to expect a developer to design their stack around the official DX12 guidelines in order to suit your GPU's DX12 approach, how is the responsibility on the software maker's side? Could they have made a special code path so the issue was less pronounced if they developed for it? Sure. How about making sure your DX12 stack is industry-compliant to begin with? Where does the fault lie here...

It also seems very, very evident that Nvidia's DX12 approach relies on the CPU to do certain tasks. If you do not want to call it a "driver" issue, sure. Let's call it "software-to-offload-resources-from-GPU" then.
 

Ascend

Member
I see where you're coming from, and it's a matter of point of view, really. From a certain viewpoint you are right: Nvidia gimps out on the parts of the hardware that would make it fully bindless, stateless and memory-based, and yes, that can be compensated for in software. They can count on the devs doing this compensation as Nvidia is the market leader anyway, and obviously they throw money around as well.

But from a different point of view it's not Nvidia's fault, nor really a hardware issue, because the hardware works, it's fully compatible with DX12 Ultimate, and in most scenarios this whole thing wouldn't show up anyway.
I understand your perspective. It's still their hardware though. And even though it works, if it requires additional effort from developers to extract everything from the hardware, even if it wasn't directly driver related, that is still a real disadvantage of the final product that they have the power to address.

Saying it's the developers not doing their job is a cop-out IMO, because that additional work is not only unnecessary on other hardware, it coincidentally could also hinder others... Not to mention that the devs could spend their time doing something else instead, like making the games themselves better rather than catering to nVidia. So it makes sense that some of them don't make the effort. nVidia (nor any other hardware vendor, for that matter) doesn't really deserve special attention for free if they deviate from standards and guidelines.

But with attitudes that nVidia are never at fault for anything (hyperbolically speaking), they have no incentive to do anything about their faults.
 

Bluntman

Member
Pardon me, but if you have to expect a developer to design their stack around the official DX12 guidelines in order to suit your GPU's DX12 approach, how is the responsibility on the software maker's side? Could they have made a special code path so the issue was less pronounced if they developed for it? Sure. How about making sure your DX12 stack is industry-compliant to begin with? Where does the fault lie here...

It also seems very, very evident that Nvidia's DX12 approach relies on the CPU to do certain tasks. If you do not want to call it a "driver" issue, sure. Let's call it "software-to-offload-resources-from-GPU" then.
Nope, it's neither of those.

It's not the driver, because under DX12 there isn't a kernel driver, and the driver doesn't manage the hardware; it doesn't even have a clue what the hardware is doing -- the game engine manages the hardware. And it isn't relying on the CPU to do magical tasks, nor does it offload any extra work to the CPU.

As I said from the beginning, as we established a few comments earlier, and as the PCGamesHardware.de tests show, it's a matter of how a game engine works (chiefly, how it manages, packages and binds resources) -- the standard way, or in a way that's more ideal for GeForce. That's why in some games the 3090 is ahead of the 6900 XT and in others it isn't.
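Just to illustrate that last point with something concrete: under D3D12 it's the engine itself that records things like resource state transitions, which the D3D11 driver used to infer behind the application's back. A trivial sketch (my own example, placeholder names):

// The application, not the driver, decides when a render target becomes a texture.
#include <d3d12.h>

void TransitionForSampling(ID3D12GraphicsCommandList* cmdList, ID3D12Resource* texture)
{
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type                   = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource   = texture;
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    cmdList->ResourceBarrier(1, &barrier);  // get this wrong and it's the engine's bug, not the driver's
}

Whether that transition, and the packaging and binding of everything around it, is done the generic way or the GeForce-friendly way is a per-engine decision, which is why the results vary game by game.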
 
Last edited:
RDNA2 is FRICKING AMAZING! Really, AMD did a very nice job returning to the high end, but now I'm suspicious that the 6900 XT is topping the GPU performance charts because reviewers started to use AMD CPUs more.

I wouldn't go that far when a $1000 6900 XT = a $650 RTX 3080.

Horrendous RT performance, and 6 months later still nothing like DLSS from AMD.
 