
[HUB] NVIDIA Has a Driver Overhead Problem, GeForce vs Radeon on Low-End CPUs

johntown

Banned
Why would someone who can afford a 3090 put it on a low end system? The whole basis for this thread is ridiculous.
 

Vae_Victis

Banned
To all the people saying "this makes no sense as a test". The point of the analysis is not that gamers in the wild will be realistically pairing a 3090 with a 1600X. The point is to show that when Vulkan and DirectX 12 games are CPU-bound, they seem to take a bigger performance hit in configurations with a high-end Nvidia GPU compared to ones with a high-end AMD GPU.

As of now there is no game on the market that can single-handedly bring a new, high-end CPU to its knees; so if you need to sample CPU-bound scenarios, you have to go with a mediocre CPU, or the conditions for the test are simply impossible to create. The takeaway is "CPU-bound game", not "crap CPU". Games will naturally become more taxing once PS4 and Xbox One are dropped, and this phenomenon could start to affect more titles and better CPUs in the years to come.
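To make the CPU-bound vs GPU-bound point concrete, here's a toy model (my own illustration with made-up millisecond numbers, not HUB's methodology): the frame rate is capped by whichever of the CPU or the GPU takes longer per frame, so shrinking the GPU's share by dropping resolution is exactly what exposes per-frame CPU and driver cost.

```python
# Toy model: frame rate is capped by the slower of CPU work (game logic +
# driver submission) and GPU work (rendering). All numbers are hypothetical.

def fps(cpu_ms: float, gpu_ms: float) -> float:
    """Frames per second when CPU and GPU work overlap each frame."""
    return 1000.0 / max(cpu_ms, gpu_ms)

GAME_MS = 6.0        # per-frame game logic on this hypothetical CPU
LOW_OVERHEAD = 2.0   # hypothetical per-frame driver cost, vendor A
HIGH_OVERHEAD = 5.0  # hypothetical per-frame driver cost, vendor B

for label, gpu_ms in (("4K-ish load", 25.0), ("1440p-ish load", 12.0), ("1080p low", 6.0)):
    a = fps(GAME_MS + LOW_OVERHEAD, gpu_ms)
    b = fps(GAME_MS + HIGH_OVERHEAD, gpu_ms)
    print(f"{label:14s}: low-overhead driver {a:6.1f} fps | high-overhead driver {b:6.1f} fps")
```

Only on the last row, where the GPU stops being the bottleneck, do the two hypothetical drivers separate, which is why the test drops resolution and settings in the first place.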
 
Last edited:

sn0man

Member
As I see it, the downsides are somewhat niche, without any big impact for most use cases.
I agree with that assessment. Core counts have ballooned while IPC and frequencies have stagnated. As long as there are excess cores and/or you're playing at a GPU-bound resolution, I see no real issue. It is a difference, to be sure, but it doesn't affect many people.
 

sn0man

Member
Not sure I understand the NV cope in here. Pretty embarrassing. They just make GPUs; they don't need to be, and ought not to be, worshipped.
It's strange. People are like that, though. I admit I get into a brand and stan for them sometimes, e.g. Noctua.

In fact, I'm a big Nvidia fan and almost always pick them. I'm happy when truthful negative information about them comes out. It should motivate them to improve. Blind loyalty just makes the product worse over the long term.
 

sn0man

Member
When will this be? When next-gen games start using 8 cores?
The current-gen consoles appear to have reasonable 8-core/16-thread CPUs, so I would imagine the transition will occur quite soon. I'm waiting to upgrade my Ryzen 3600 right now. I was considering a 5600X, but we are mostly GPU constrained. I figured I'll wait, focus on obtaining a 3080 FE (I know, fat chance), and see what happens on the CPU side with AM5.

It's definitely something to watch for, but again, we are in general GPU constrained at all but 1080p. Will RTX 4000 or Radeon 7000 change that at 1440p? Dunno.
 

Md Ray

Member
That is an issue with his channel... he has lost the trust needed to be watched.

Indeed, he looks like a 🤡

BTW the test is weird no matter how you try to defend it.
Weird? How?

Defend what? You think it's OK for a 5700 XT to beat a 3070 when paired with a 3600X?
 

Ev1L AuRoN

Member
The tables have turned, then, because I remember that in the RX 480 vs GTX 1060 days AMD was the one with driver overhead issues that only DX12 could solve.
 

Md Ray

Member
Why would someone who can afford a 3090 put it on a low end system? The whole basis for this thread is ridiculous.
Why does the 5700 XT outperform the 3070 by nearly 20% with a decent CPU like the 3600X? If CPU-bound, both configs should, at worst, produce near-identical results, no?

I've asked this many times now but nobody wants to answer this question.
 
Last edited:

Md Ray

Member
To all the people saying "this makes no sense as a test". The point of the analysis is not that gamers in the wild will be realistically pairing a 3090 with a 1600X. The point is to show that when Vulkan and DirectX 12 games are CPU-bound, they seem to take a bigger performance hit in configurations with a high-end Nvidia GPU compared to ones with a high-end AMD GPU.

As of now there is no game on the market that can single-handedly bring a new, high-end CPU to its knees; so if you need to sample CPU-bound scenarios, you have to go with a mediocre CPU, or the conditions for the test are simply impossible to create. The takeaway is "CPU-bound game", not "crap CPU". Games will naturally become more taxing once PS4 and Xbox One are dropped, and this phenomenon could start to affect more titles and better CPUs in the years to come.
[The Office reaction GIF]


Getting real tired of BS posts like these 👇🏼
Why would someone who can afford a 3090 put it on a low end system? The whole basis for this thread is ridiculous.
 
Last edited:

Shai-Tan

Banned
There are plenty of people who pair, say, a 3600 with a high-end card. Looking at their results, it could matter if you do high-frame-rate gaming or VR. I typically game in 4K, which makes it irrelevant for me, but I also use VR (it matters less when you turn up supersampling, but still).
 

GHG

Gold Member
C'mon now. What do you have to say about the data they've gathered? It's all fake and HUB is conspiring against NVIDIA? Please.

Yeah, because someone who can afford any of the GPUs he tested with is still going to be rocking some shitty old CPU that, incidentally, is on a socket that still has an upgrade path.

These guys aren't subtle about what they do and Nvidia should have stood their ground.
 

onunnuno

Neo Member
Why does 5700 XT outperform 3070 by nearly 20% with a decent CPU like 3600X? If CPU bound, at worst, both configs should produce near identical results, no?

I've asked this many times now but nobody wants to answer this question.
You're asking the people who would buy a 1050 Ti over an RX 570 because "a 1080 Ti is better than a Vega!!"
 
Last edited:

sn0man

Member
There are plenty of people who pair, say, a 3600 with a high-end card. Looking at their results, it could matter if you do high-frame-rate gaming or VR. I typically game in 4K, which makes it irrelevant for me, but I also use VR (it matters less when you turn up supersampling, but still).
The VR thing is a good point. I have a higher-than-1080p display, so while I appreciate the article and the thread discussion, it felt pretty irrelevant to me. I assume, though, that you can supersample before VR output to try to help things.
 

Md Ray

Member
It doesn't. You're wrong.
[High school "no" reaction GIF]
Lol, what a weird post. The data is right there, in the OP. Click the spoiler tag to reveal one of the bar graphs. Look at 3600X + 3070 fps vs 3600X + 5700 XT.

Which one has higher fps?

There's a lot more data in the video. SMH.🤦🏻‍♂️
 
Last edited:

Md Ray

Member
The last-gen consoles already had 8 cores. Why would developers start using them now if they didn't use them last gen?
Another weird post. Last-gen consoles had 8 cores/8 threads. Developers did use them, though not all 8 cores/threads were available for gaming, since the OS and background apps needed some of the CPU resources; games had access to roughly 7 to 7.5 cores. On PC, games like WD2, WD Legion and Cyberpunk 2077 already use more than 8 cores/threads...

PS5/XSX/S have 8 cores w/ SMT (16 threads), double the thread count of last-gen consoles. And according to MS, games have access to 14 threads, leaving 1 core/2 threads for the OS and non-gaming apps.

So yeah, they doubled the thread count for a reason: games WILL actually use more than 8 threads going forward, as is already the case on PC these days.
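For a rough idea of how an engine spreads work across that many threads, here's a minimal sketch (my own illustration, not from any console SDK; the "reserve a couple of threads" policy simply mirrors the OS split described above, and the workload is made up):

```python
# Minimal job-pool sketch: size the worker pool to the hardware thread count,
# minus a couple of threads reserved for the OS / render-submission work.
# The reservation policy and the per-entity workload are hypothetical.
import os
from concurrent.futures import ThreadPoolExecutor

hw_threads = os.cpu_count() or 8           # e.g. 16 on an 8C/16T CPU
workers = max(1, hw_threads - 2)           # leave headroom, like the console OS split

def simulate_entity(entity_id: int) -> int:
    # Stand-in for per-entity work: AI, animation, physics, etc.
    return (entity_id * entity_id) % 97

with ThreadPoolExecutor(max_workers=workers) as pool:
    results = list(pool.map(simulate_entity, range(10_000)))

print(f"{hw_threads} hardware threads -> {workers} job workers, {len(results)} entities updated")
```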
 

rofif

Banned
Hardware Unboxed still searching for that Radeon performance.
CPU limited. I play at 4K, so not gonna happen... And 1080p low settings? Yeah, that's why I spent $2k. I'd better have performance where it matters rather than in some stupid try-hard scenario.
 
Last edited:

Neo_game

Member
The last-gen consoles already had 8 cores. Why would developers start using them now if they didn't use them last gen?

The 5600X is a 6-core/12-thread CPU and a lot faster than the CPU in these consoles. In fact, even a quad core with 8 threads is more than enough as long as it has good single-thread performance. This thread and video are pretty useless IMO. Maybe it's for a newbie who is into esports and thinks high-end graphics cards will not get bottlenecked by the CPU at 1080p 🤦‍♂️
 

zeomax

Member
Another weird post. Last-gen consoles had 8 cores/8 threads. Developers did use them, though not all 8 cores/threads were available for gaming, since the OS and background apps needed some of the CPU resources; games had access to roughly 7 to 7.5 cores. On PC, games like WD2, WD Legion and Cyberpunk 2077 already use more than 8 cores/threads...

PS5/XSX/S have 8 cores w/ SMT (16 threads), double the thread count of last-gen consoles. And according to MS, games have access to 14 threads, leaving 1 core/2 threads for the OS and non-gaming apps.

So yeah, they doubled the thread count for a reason: games WILL actually use more than 8 threads going forward, as is already the case on PC these days.
And again the question: why would they? Why spend a lot of time and money on complicated multithreading optimization of the code if you can simply cut corners and use the extra per-core raw power now available to achieve the same result using only two or three cores?
 

Md Ray

Member
And again the question: why would they? Why spend a lot of time and money on complicated multithreading optimization of the code if you can simply cut corners and use the extra per-core raw power now available to achieve the same result using only two or three cores?
Because as game complexity increases, they have to? And because it's already being done? Don't games like WD Legion already use more than 8 threads on PC? I don't understand your argument. Are you suggesting that next-gen open-world games from Ubisoft and Rockstar Games will use just 2-3 cores on next-gen consoles?

Do you know that even RT can eat up a lot of CPU cycles?
 
Last edited:

rofif

Banned
The 5600X is a 6-core/12-thread CPU and a lot faster than the CPU in these consoles. In fact, even a quad core with 8 threads is more than enough as long as it has good single-thread performance. This thread and video are pretty useless IMO. Maybe it's for a newbie who is into esports and thinks high-end graphics cards will not get bottlenecked by the CPU at 1080p 🤦‍♂️
Consoles have an undervolted 3700X, no? 8C/16T.
 
Last edited:

Md Ray

Member
Consoles have an undervolted 3700X, no? 8C/16T.
Not a 3700X per se; that's a desktop Zen 2 part. The consoles' Zen 2 closely matches mobile Zen 2, i.e. the 4800H: same 8C/16T, but with a significant reduction in L3$ (cut down from 32MB to 8MB).
 
Last edited:

zeomax

Member
Because as game complexity increases, they have to? And because it's already being done? Don't games like WD Legion already use more than 8 threads on PC? I don't understand your argument. Are you suggesting that next-gen open-world games from Ubisoft and Rockstar Games will use just 2-3 cores on next-gen consoles?

Do you know that even RT can eat up a lot of CPU cycles?
You are trying to blow this whole Nvidia thing out of proportion. It was a very bad CPU + GPU combination in the first place. It's like putting tractor tires on your Porsche and then trying to show that the Porsche has a problem because it gets outperformed by a Golf.
 
Last edited:

Md Ray

Member
You are trying to blow this whole Nvidia thing out of proportion. It was a very bad CPU + GPU combination in the first place. It's like putting tractor tires on your Porsche and then trying to show that the Porsche has a problem because it gets outperformed by a Golf.
3600X + 3070 isn't a bad combination, though. Both are mid-range CPU + GPU. Yet the 5700 XT, a weaker/cheaper GPU than the 3070, is nearly 20% faster than the 3070 when CPU-bound. That's significant and should not be happening. Both combinations should be producing identical results at worst. Why is this so hard for you people to comprehend?

Tell me: why shouldn't we, as consumers, ask for better from NVIDIA?
 
Last edited:

Md Ray

Member
Because you are adding an artificial bottleneck to the system and expecting the GPU to somehow magically bypass this bottleneck.
Good job ignoring the rest of the comment and sidestepping. Almost every NVIDIA defender in this thread has been doing the same.

The 5700 XT, a weaker/cheaper GPU than the 3070, is faster than the 3070 with a reasonably strong CPU. Are you OK with that?
 
Last edited:

rofif

Banned
3600X + 3070 isn't a bad combination, though. Both are mid-range CPU + GPU. Yet the 5700 XT, a weaker/cheaper GPU than the 3070, is nearly 20% faster than the 3070 when CPU-bound. That's significant and should not be happening. Both combinations should be producing identical results at worst. Why is this so hard for you people to comprehend?

Tell me: why shouldn't we, as consumers, ask for better from NVIDIA?
At the same time, you really have to stretch your imagination to esports levels to be bound by the CPU.
Low/medium graphics, 720p/1080p, etc.
 

Md Ray

Member
At the same time, you really have to stretch your imagination to esports levels to be bound by the CPU.
Low/medium graphics, 720p/1080p, etc.
Today, CPU-bound = medium settings, 1080p.
Tomorrow it will be 1440p (to some extent it already is with the bigger GPUs) that becomes CPU-bound, as scene complexity increases with next-gen, open-world games and the 16-thread CPUs in the PS5/XSX are pushed harder.
 

rofif

Banned
Today, CPU-bound = medium settings, 1080p.
Tomorrow it will be 1440p (to some extent it already is with the bigger GPUs) that becomes CPU-bound, as scene complexity increases with next-gen, open-world games and the 16-thread CPUs in the PS5/XSX are pushed harder.
I highly doubt it. We will always be GPU-bound, aside from esports.
 

Armorian

Banned
I highly doubt it. We will always be GPU-bound, aside from esports.

What? In Watch Dogs Legion with RT, the game hits the CPU hard as fuck; it drops to ~40-something on a 3600 while driving. Or try the Crysis remaster with the detail setting above High (draw distance) and with RT: even a 5600X drops below 60 fps, and that's entirely on the CPU. MANY games are CPU limited - fucking Far Cry New Dawn drops below 60 fps on a 5600X while the GPU is bored...
 

Marlenus

Member
At the same time, you really have to stretch your imagination to esports levels to be bound by the CPU.
Low/medium graphics, 720p/1080p, etc.

1080p at 120Hz is a pretty reasonable target for a mid-range PC instead of 1440p.

Also, at 1080p + RT in Watch Dogs Legion, the 6900 XT comes out faster than the 3090 if your CPU is a few gens old. I'm on mobile, but find the gamegpu.ru test of it and you can see how different CPUs impact performance across GPUs.

Link. If you have an older 4-core i5 or a 2200G or similar, then the CPU bottleneck on the 3090 at 1080p + RT sits lower than the GPU bottleneck on the 6900 XT.
 
Last edited:

llien

Member
There is no problem, except for bad CPU/GPU pairing.

Please, don't tell me that a 2600X + 5700 XT beating a 2600X + 3070 is from "I'd expect that" territory.

The weak part here is that only one game has been demonstrated to have that issue.


Nvidia has a greater feature set
What a lame excuse...

It was a very bad CPU + GPU combination in the first place.
Let's pretend the 2600X is a bad CPU, shall we?
 
Last edited:

llien

Member
monstrous amounts of CUDA cores they have in Ampere
The claimed number of cores is two times higher than the actual number.
It's just that Huang figured that since, on top of FP+INT, the cores can now do FP+FP, he could get away with claiming double the number.
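The arithmetic behind that claim, using RTX 3090 figures as an example (this sketch only lays out the two counting conventions; whether "CUDA core" should mean every FP32 lane or only the dedicated ones is exactly what's being argued):

```python
# Ampere SM layout: 64 dedicated FP32 lanes plus 64 lanes that can do FP32 OR
# INT32 per SM. NVIDIA counts all 128 as "CUDA cores"; a Turing-style count
# would include only the 64 dedicated FP32 lanes.
sms = 82                        # SM count on the RTX 3090 (GA102)
advertised = sms * 128          # 10496 CUDA cores, as marketed
dedicated_fp32_only = sms * 64  # 5248 under the stricter counting

print(f"advertised: {advertised}, dedicated-FP32-only: {dedicated_fp32_only}")
```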
 

FireFly

Member
I highly doubt it. We will be gpu bound always aside from esports
It depends completely on the CPU performance level developers are targeting. Stick in a netbook-class CPU and suddenly you are CPU-bound on everything. Well, since developers are going to be targeting Zen 2-class performance for 60 FPS, if you have a Zen 2 processor and you want to do 120 FPS, you are going to want some low-overhead GPU drivers!
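Some back-of-the-envelope numbers on why the headroom matters more at 120 FPS (the driver-cost figures here are made up for illustration, not measured values):

```python
# The per-frame CPU budget halves going from 60 fps to 120 fps, so a fixed
# per-frame driver cost eats a much larger share of it. Costs are hypothetical.
for target_fps in (60, 120):
    budget_ms = 1000.0 / target_fps
    for driver_ms in (1.5, 3.0):
        share = driver_ms / budget_ms * 100
        print(f"{target_fps:3d} fps ({budget_ms:5.2f} ms budget): "
              f"{driver_ms} ms of driver work = {share:4.1f}% of the frame")
```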
 
Last edited:

zeomax

Member
Again sidestepping and ignoring the fact that the 5700 XT is producing better results than a more expensive GPU, when they should be identical in a CPU-bound scenario. What do you have to say about that?
Why should one GPU produce better results than another if the limiting factor is the CPU?
 
In case anyone still has doubts that there's a CPU problem with Nvidia's driver, the problem can be seen even in extremely GPU-bound situations like Ultra + RTX + DLSS:

[Image: Watch Dogs Legion, 9900K + RTX - i.ibb.co/Wcr4sHK/Watch-Dogs-Legion-9900k-RTX.png]

[Image: Watch Dogs Legion, R5 1400 + RTX - i.ibb.co/KjZFH7g/Watch-Dogs-Legion-R5-1400-RTX.png]

 