
Nvidia's GameWorks nerfing AMD cards and its own previous-gen GPUs?

DonMigs85

Member
You guys probably missed the post from months ago showing Nvidia cards producing lower image quality than an AMD card at the exact same settings. I forget which Nvidia cards they compared.

I may be imagining it, but I think shadows in some games like Sonic Racing Transformed look slightly worse/fuzzier on Maxwell cards versus Fermi and Kepler. Maybe due to the compression technology?

I think there were also some cases where newer drivers actually decreased performance a bit on Kepler cards compared to the previous driver.
 
I purchased a 970 last year. I forget exactly what I paid but it was more than $300. I don't like the idea of that card being made even slightly obsolete for 1080p gaming within the next three years.

Slightly?

It's gonna be HUGE. If DX12 actually becomes real, the hardware difference between the new generation and the 9XX series is going to be way, WAY bigger than the current gap between 7XX and 9XX.

Within the next three years, hardware requirements will climb higher than they have in the last 10 years, and it's going to affect more than just video cards. There's going to be a major push for new hardware.
 

cheezcake

Member
That Witcher 3 hair example is seemingly damning, or at least illustrative of some pointless effects.

Someone correct me if I'm wrong, but I believe he's interpreted that completely incorrectly. The degree of tessellation in hair rendering only affects the number of apparent vertices in a strand of hair (i.e. points of motion). It has nothing to do with strand count; the difference is in the quality of simulation and motion, because you effectively have more points in each hair strand moving independently. You can't see that effect in a still image.
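
To make that distinction concrete, here's a toy sketch in Python. The strand and point counts are invented, not HairWorks' actual parameters: raising the tessellation factor multiplies the moving points per strand, while the number of strands a screenshot shows never changes.

```python
# Toy model: tessellation factor vs. strand count in hair rendering.
# All numbers are invented for illustration; they are not HairWorks' real values.

STRAND_COUNT = 20_000        # how many hairs exist -- unaffected by tessellation
BASE_POINTS_PER_STRAND = 4   # control points authored per guide strand

def simulated_points(tess_factor: int) -> int:
    """Total independently moving points at a given tessellation factor."""
    return STRAND_COUNT * BASE_POINTS_PER_STRAND * tess_factor

for factor in (8, 16, 32, 64):
    print(f"tess x{factor}: {STRAND_COUNT:,} strands, "
          f"{simulated_points(factor):,} simulated points")
# A still screenshot shows the same 20,000 strands at every factor;
# only the per-strand point count (i.e. motion quality) changes.
```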
 

Spinifex

Member
I'm not much for these conspiracy theories. GameWorks is shit for reasons other than conspiracy nonsense. Look at what collaboration produced with TressFX / PureHair versus the GPU killer that is HairWorks.
 

DonMigs85

Member
Slightly?

It's gonna be HUGE. If DX12 actually becomes real, the hardware difference between the new generation and the 9XX series is going to be way, WAY bigger than the current gap between 7XX and 9XX.

Within the next three years, hardware requirements will climb higher than they have in the last 10 years, and it's going to affect more than just video cards. There's going to be a major push for new hardware.

I think the GTX 760/950 and up will still at least be capable of surpassing the consoles for the rest of their lifespans, although having just 2GB of VRAM may become an issue.
 
I think the GTX 760/950 and up will still at least be capable of surpassing the consoles for the rest of their lifespans, although having just 2GB of VRAM may become an issue.

The 760 already can't match the Xbox One in the last Tomb Raider. And we're supposedly far from the end of the lifespan you mentioned.
 
Still rockin' my MSI 7970 3GB Twin Frozr (2013).
I play Witcher 3 at ~60fps, 1080p, with a mix of low and high settings. Looks great, runs great, and it's been the best value card.
Next card will also be AMD.
 

Locuza

Member
Slightly?

It's gonna be HUGE. If DX12 actually becomes real, the hardware difference between the new generation and the 9XX series is going to be way, WAY bigger than the current gap between 7XX and 9XX.

Within the next three years, hardware requirements will climb higher than they have in the last 10 years, and it's going to affect more than just video cards. There's going to be a major push for new hardware.
If you could be more precise, I would at least understand what you are talking about.
Why, and at which point, should the gap be way bigger in the future?
And why should hardware requirements climb higher?

The consoles, and with that the common denominator, are fixed for the upcoming years.
DX12 helps with performance.
You need extra effects on PC for it to be more taxing.
 

DonMigs85

Member
The 760 already can't match Xbox on the last Tomb Raider. And we are supposedly far from that lifespan you mentioned.
Hmm, you're right. In most older games the 760 is just a bit behind the 960 and easily exceeds the PS4 as well. There may indeed be some shenanigans or a simple lack of driver optimization after all.
 
Of course, this isn't necessarily an unexamined externality. It could be just as much by design as from neglect.

Certainly works well in favour of Nvidia's marketing machine anyway.

There is a way to tell, which is to do a suite of benchmarking of games on Kepler, Maxwell and AMD cards with various historical drivers. This will instantly highlight whether Kepler performance actually "got worse" or whether GCN and Pascal merely overtook them. I've been calling for such a comprehensive benchmark for half a year now and I hope that Techreport or someone is working on one. Doubt it though. It's a lot of effort.
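
A minimal sketch of what that test matrix could look like, assuming hypothetical card/driver/game labels and a placeholder `run_benchmark` hook standing in for a real measurement tool:

```python
# Hypothetical harness for the historical-driver benchmark described above.
# Card, driver, and game labels are placeholders, not a real test plan.
from itertools import product

CARDS   = ["GTX 780 (Kepler)", "GTX 980 (Maxwell)", "R9 290 (GCN)"]
DRIVERS = ["launch", "mid-2014", "mid-2015", "latest"]  # per-vendor release points
GAMES   = ["older title", "GameWorks title", "neutral 2015 title"]

def run_benchmark(card: str, driver: str, game: str) -> float:
    # Stub: replace with a real measurement (e.g. parsing a built-in
    # benchmark's log for average fps).
    return 0.0

results = {combo: run_benchmark(*combo) for combo in product(CARDS, DRIVERS, GAMES)}

# "Got worse" vs. "got overtaken" then falls out directly: for each
# (card, game) pair, plot fps across DRIVERS. Kepler sloping downward in
# older titles supports the nerfing claim; flat Kepler lines with rising
# GCN/Maxwell lines mean the others merely pulled ahead.
```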

Someone correct me if I'm wrong, but I believe he's interpreted that completely incorrectly. The degree of tessellation in hair rendering only affects the number of apparent vertices in a strand of hair (i.e. points of motion). It has nothing to do with strand count; the difference is in the quality of simulation and motion, because you effectively have more points in each hair strand moving independently. You can't see that effect in a still image.

It's either ignorant or dishonest to compare the quality of the simulation in still screenshots. I suppose it could just be lazy, since it seems he's getting a lot of material from forum posts and such. He's certainly got other highly questionable information in the video, my favorite being where he blames GameWorks for Batman: Arkham Knight being broken on PC.
 
There is a way to tell, which is to do a suite of benchmarking of games on Kepler, Maxwell and AMD cards with various historical drivers. This will instantly highlight whether Kepler performance actually "got worse" or whether GCN and Pascal merely overtook them. I've been calling for such a comprehensive benchmark for half a year now and I hope that Techreport or someone is working on one. Doubt it though. It's a lot of effort.



It's either ignorant or dishonest to compare the quality of the simulation in still screenshots. I suppose it could just be lazy, since it seems he's getting a lot of material from forum posts and such. He's certainly got other highly questionable information in the video, my favorite being where he blames GameWorks for Batman: Arkham Knight being broken on PC.

It's the latter. Kepler performance is still fine in older games on newer drivers.
 
My experience so far with graphics cards has taught me that you should buy AMD if you mostly play games months after their original release, and Nvidia if you mostly buy games day one.
 
It's the latter. Kepler performance is still fine in older games on newer drivers.

Which is not remotely surprising. The AMD range outperforming Kepler like it didn't at launch is also something that, to my recollection, only started happening when AMD made their big Omega driver push. AMD has been tweaking the GCN architecture since 2011, while Nvidia squeezed Maxwell in between.
 

Henrar

Member
Newer games will be optimized for newer architectures. That's normal. The question is what happens with newer driver releases to performance in older games that were optimized for older GPU architectures. If it drops, then it's planned obsolescence. If not, then it's not, IMO.
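
That criterion is easy to state as a check. A minimal sketch, assuming the fps numbers come from real runs of one older game on one older card across driver releases:

```python
def looks_like_planned_obsolescence(fps_by_driver: dict[str, float],
                                    tolerance: float = 0.03) -> bool:
    """True if an old game's fps on the old card drops across driver
    releases by more than the tolerance (3% here, an arbitrary cutoff)."""
    fps = list(fps_by_driver.values())   # insertion order = release order
    return min(fps[1:]) < fps[0] * (1 - tolerance)

# Hypothetical Kepler numbers in an older title:
print(looks_like_planned_obsolescence(
    {"344.11": 62.0, "355.82": 61.5, "361.43": 61.8}))  # False -> merely overtaken
```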
 

dex3108

Member
I stopped watching when he started talking about tessellated water in Crysis 2. I will probably watch the complete video later. I may be wrong, but as far as I know, as soon as you start a new project in CryEngine you start with nothing but water. That means everything you build, you build above water. And many engines do similar things: whenever you fall through the map in a game, you can see tons of things below. That doesn't mean those things are being rendered all the time.

If you want to make a video about the GameWorks issue, you need concrete evidence.
 
I used to be annoyed by the desperate Benghazi-esque conspiracy theories that now substitute for actual competition from AMD, but these days I realize that if even a few people buy AMD cards out of misguided outrage over the latest AMD-astroturfed conspiracy theory about what is happening at Nvidia, that might still save AMD from bankruptcy, which would prevent any government anti-trust action against Intel and Nvidia.

So I'm fine with it. I hope enough other people buy AMD cards to keep them afloat so I can continue to peacefully enjoy my Nvidia and Intel products.
 
As someone who's bought AMD since it was ATI, it's a great time to switch. I don't think the price gap has ever been larger, and they're making aggressive strides on the software side too. Crimson was buggy, but I also got a 10-15 fps increase in some games. That's ludicrous for a software update.

And also the pointed opposite of Nvidia's strategy, it would seem.
 
If you could be more precise, I would at least understand what you are talking about.
Why, and at which point, should the gap be way bigger in the future?
And why should hardware requirements climb higher?

Because these days a new video driver comes out alongside a new game.

That's because Nvidia is optimizing code for THAT game specifically. They do this mainly for the latest architecture they are selling: 9XX right now.

The moment new video cards come out, those engineers will focus on squeezing the most out of the new hardware, while the old hardware will keep running in a sort of compatibility mode that works, but isn't optimized down to assembly code and fast workarounds.

The consoles, and with that the common denominator, are fixed for the upcoming years.
DX12 helps with performance.
You need extra effects on PC for it to be more taxing.

Nothing is "fixed", because this is stuff that people code. That's why the longer the life of a console, the better the game engines can squeeze out of it. You learn how to do things better and efficiently.

On a console the hardware is always the same. On PC every new hardware tiers has its own issues. Optimization is way more bound to architecture.

DX12 is moving the tiny optimization phase AWAY from Nvidia engineers, and this will be DISASTROUS for actual game performance. Game companies have never done the type of optimization Nvidia does in-house at the compiler level. That source code is completely closed, and NO software house in existence has access to it, NDA or not. Only Nvidia engineers have access to detailed hardware specs, and only they have experience with that kind of code.

When new hardware, specifically built for DX12, comes out, it will be very hard even to make the same features *be compatible* with older hardware. So it will take a huge effort simply to keep games compatible, and it's a whole new order of problem to have old hardware actually perform decently.

That's why it's 2016 and no DX12 game is currently out. It's a trainwreck that is only going to be slowly fixed by new hardware.
 

Zaptruder

Banned
There is a way to tell

By design or neglect - I meant more neglect by design, rather than neglect by neglect... That is, they may have the time and resources to ensure that performance doesn't drop too much... but they purposefully choose not to update, because in addition to providing little value to the company, it makes their newer cards look better.

Without inside information, it is impossible to discern one way or another what their policy is... indeed, even without an explicit policy, the overall point still stands - Nvidia focuses on their newest stuff at the expense of their older stuff, and it hurts us as consumers - because we buy into the idea that they provide superior performance... without realizing this is only true for as long as Nvidia is focusing on that generation of cards.

And to be fair... this is just how the industry works... but the relative performance drop doesn't seem to be as severe with AMD - meaning that as gamers, we should factor in how long we want to hold onto these cards and how much performance we want to retain when considering our next purchases.
 

Locuza

Member
There is a way to tell, which is to do a suite of benchmarking of games on Kepler, Maxwell and AMD cards with various historical drivers. This will instantly highlight whether Kepler performance actually "got worse" or whether GCN and Pascal merely overtook them. I've been calling for such a comprehensive benchmark for half a year now and I hope that Techreport or someone is working on one. Doubt it though. It's a lot of effort.
A reddit gem:
[chart: GTX 670 fps across drivers from 2012 to 2015]


https://www.reddit.com/r/hardware/comments/3ikn7c/all_gtx_670_drivers_compared_2012_2015/

The games are fairly old, and sometimes the results differ noticeably from driver to driver, but nothing special overall.
 

cheezcake

Member
A reddit gem:
[chart: GTX 670 fps across drivers from 2012 to 2015]


https://www.reddit.com/r/hardware/comments/3ikn7c/all_gtx_670_drivers_compared_2012_2015/

The games are fairly old, and sometimes the results differ noticeably from driver to driver, but nothing special overall.

Damn, that would have taken a while to do. But yeah, I don't think there's any malicious intent here by NVIDIA. The obvious reality is that newer bits of hardware can do certain things better; NVIDIA builds software based on the ability of said hardware to do certain things better, and older architectures can't match up because they do those certain things much worse.

There's more to GPU performance than just flops.
 
Also, another technical thing people usually don't know or consider: fairly often an instruction performs really fast on certain hardware and very slowly on different hardware.

So it's quite possible, and probable, that an engine that performs well by relying on certain instructions that are very fast on, say, 9XX hardware then performs very poorly on 7XX hardware, because 7XX hardware would be much more efficient with a different set of instructions.

Do you understand? This isn't malicious behavior. It's just that there's low-level optimization that makes a certain card go faster and another go slower. If you don't have the time or money to finely tune for every tier of hardware out there, you're going to have games that perform "fine" only on the most common tier being sold at that moment.

Understood?
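
A toy model of that point, with made-up per-architecture costs rather than real instruction timings: pick the path that's fastest on the card you target, then look at what every other card pays for that single choice.

```python
# Made-up relative costs for the same effect implemented two ways.
COST = {
    ("path_a", "Kepler"):  1.0,
    ("path_a", "Maxwell"): 2.2,   # path A leans on ops Maxwell handles poorly
    ("path_b", "Kepler"):  2.5,   # path B leans on ops Kepler handles poorly
    ("path_b", "Maxwell"): 0.9,
}

def cost_if_tuned_only_for(target: str) -> dict[str, float]:
    """Choose the path that's fastest on `target`, then report what
    every architecture pays for that one choice."""
    best = min(("path_a", "path_b"), key=lambda p: COST[(p, target)])
    return {arch: COST[(best, arch)] for arch in ("Kepler", "Maxwell")}

print(cost_if_tuned_only_for("Maxwell"))
# {'Kepler': 2.5, 'Maxwell': 0.9} -- tuning for the new card alone leaves the
# old card 2.5x slower than it could be, with no malice involved.
```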
 
NVIDIA builds software based on the ability of said hardware to do certain things better, and older architectures can't match up because they do those certain things much worse.

NOPE.

Older hardware can still technically outperform newer hardware with lower specs. The problem is that code is being written whose flow is only optimized for newer hardware.

The older hardware CAN match that, but it requires code specifically written for it. Nvidia stops bothering to write that code, hence old hardware starts to perform worse (in new games).
 

Cerity

Member
So it's pretty much just Maxwell being that much better at tessellation than Kepler. AMD gets around this by optimising (lowering) tessellation levels to compete.

The expectation that cards like the Titan should be able to compete with the relevant AMD cards (which they probably do, given the same parameters) is more than reasonable, though, given their pricing. Nvidia should absolutely allow users to control tessellation without the help of third-party software, given the impact on performance that HairWorks and other tessellation effects have.
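
Conceptually, the override being asked for is just a clamp on the tessellation factor before it reaches the hardware. A sketch of the idea, not AMD's or Nvidia's actual driver code:

```python
def clamp_tess_factor(requested: float, user_cap: float | None) -> float:
    """What a control-panel tessellation override conceptually does:
    the game asks for `requested`; a lower user cap wins."""
    if user_cap is None:              # no override: pass the request through
        return requested
    return min(requested, user_cap)

# HairWorks-era example: the game requests 64x, the user caps it at 16x.
print(clamp_tess_factor(64, 16))      # 16 -- most of the perf cost gone,
                                      # often with little visible difference
```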
 

Renekton

Member
I used to be annoyed by the desperate Benghazi-esque conspiracy theories that now substitute for actual competition from AMD, but these days I realize that if even a few people buy AMD cards out of misguided outrage over the latest AMD-astroturfed conspiracy theory about what is happening at Nvidia, that might still save AMD from bankruptcy, which would prevent any government anti-trust action against Intel and Nvidia.

So I'm fine with it. I hope enough other people buy AMD cards to keep them afloat so I can continue to peacefully enjoy my Nvidia and Intel products.
Not that anti-trust would help; the cases against Microsoft and Intel did nothing (except a tiny fine for the latter).
 
I had such a terrible time with ATI and AMD that I'd never come back. What Nvidia is doing isn't right, but there's no way I'll buy an AMD card now.
 

wazoo

Member
So it's pretty much just Maxwell being that much better at tessellation than Kepler. AMD gets around this by optimising (lowering) tessellation levels to compete.

Yes, for years the AMD "tessellation optimized" setting in the control panel just lowered the game's tessellation as much as possible. "Optimized", indeed.
 
There's a difference between code that runs faster on newer architectures and code that is intentionally made to run slower on older architectures.

I don't know the facts of this particular story, but it's almost always the case that newer games show a much wider performance gap between architectures than older games.

When I optimize something, I assume that future architectures will have more in common with current hardware than with last-gen hardware.
If I tune for Maxwell, then I assume the optimizations will be somewhat helpful for the next few architectures as well.
If I tune for Kepler, then my optimizations may not be optimal for the next architectures, and when I reuse my code for the next game I'm at a disadvantage compared to if I had optimized for Maxwell.

If there's a new memory access method which speeds some operation up by 200% on current and future hardware, I'm using it.
If I didn't use that method, the old hardware wouldn't necessarily get faster.

If someone else is using an umbrella, that doesn't make it rain more on you.
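
The umbrella point in code form, as a minimal sketch with an invented feature flag: the fast path is gated on hardware support, and the fallback is exactly the code the old hardware always ran.

```python
# The flag name and workload are invented for illustration.
def copy_buffer(data: bytes, supports_fast_dma: bool) -> bytes:
    if supports_fast_dma:
        # New access method: imagine this being 2-3x faster on new hardware.
        return bytes(data)
    # Fallback: the code path old hardware ran before the fast path
    # existed -- it got neither faster nor slower when the flag appeared.
    return b"".join(data[i:i + 1] for i in range(len(data)))

assert copy_buffer(b"frame", True) == copy_buffer(b"frame", False) == b"frame"
```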
 

Locuza

Member
DX12 is moving the tiny optimization phase AWAY from Nvidia engineers, and this will be DISASTROUS for actual game performance. Game companies have never done the type of optimization Nvidia does in-house at the compiler level. That source code is completely closed, and NO software house in existence has access to it, NDA or not. Only Nvidia engineers have access to detailed hardware specs, and only they have experience with that kind of code.

When new hardware, specifically built for DX12, comes out, it will be very hard even to make the same features *be compatible* with older hardware. So it will take a huge effort simply to keep games compatible, and it's a whole new order of problem to have old hardware actually perform decently.

That's why it's 2016 and no DX12 game is currently out. It's a trainwreck that is only going to be slowly fixed by new hardware.
Fable Legends and Ashes of the Singularity didn't look disastrous, because of course it's not true.

And what should somebody expect from hardware "built for DX12"?
Intel's Gen9 supports the whole specification.
Maxwell v2 supports the highest feature level, 12.1. Maybe some features aren't fast on every vendor, but that's the usual stuff.

DX12 still abstracts many of the details and is a standard; it shouldn't be common to have broken games on older or newer hardware.

The DX12 situation is not a trainwreck, and the reason for the lack of DX12 titles is the huge change needed across the whole PC ecosystem.
And why should new hardware fix anything related to it?
 
Not that anti-trust would help; the cases against Microsoft and Intel did nothing (except a tiny fine for the latter).

Huh? Anti-trust dramatically nerfed MS. The post-anti-trust MS has had a lot of trouble competing with the likes of Apple and Google, for instance. Now they are being pushed around by Sony. When there is meaningful potential competition out there, anti-trust action can absolutely do something against a monopolist.

The reason anti-trust didn't do anything against Intel was that ultimately Intel didn't have meaningful competitors in x86. The government can't make AMD competitive in x86, after all. However, the rise of mobile running on the ARM architecture has created competition in another form, and Intel is actually in a bad position right now, because the x86 desktop/laptop PC world they control is declining and they have not been able to break into the new mobile world in any significant way.
 
If someone else is using an umbrella, that doesn't make it rain more on you.

But this IS the case.

Why is it so hard to grasp?

Methods that are fast on a 7XX might not be fast or ideal on a 9XX. That means that if you target the 9XX, you'll use code techniques that WILL hamper 7XX performance.

The same happens the other way around. You could write code that's fast on 7XX but inefficient on 9XX.
 

DonMigs85

Member
But this IS the case.

Why is it so hard to grasp?

Methods that are fast on a 7XX might not be fast or ideal on a 9XX. That means that if you target the 9XX, you'll use code techniques that WILL hamper 7XX performance.

The same happens the other way around. You could write code that's fast on 7XX but inefficient on 9XX.
Also, I believe there wasn't a tremendous decrease going from Fermi to Kepler, since those architectures were much closer to each other than Kepler is to Maxwell.
 

Joni

Member
I remember the topic where the EU was suing Intel for similar shady practices. People were saying it didn't matter because they are the best. As it turns out, it's much cheaper to sabotage the rest.
 
Which is not remotely surprising. The AMD range outperforming Kepler like it didn't at launch is also something that, to my recollection, only started happening when AMD made their big Omega driver push. AMD has been tweaking the GCN architecture since 2011, while Nvidia squeezed Maxwell in between.

It's a combination of improved drivers and GCN being in the consoles. Every game is designed, at the most basic level, for AMD's architecture.
 

cheezcake

Member
NOPE.

Older hardware can still technically outperform newer hardware with lower specs. The problem is that code is being written whose flow is only optimized for newer hardware.

The older hardware CAN match that, but it requires code specifically written for it. Nvidia stops bothering to write that code, hence old hardware starts to perform worse (in new games).

This isn't true at all; you have completely ignored hardware architecture. Of course older hardware can outperform newer hardware in some circumstances, but there are other circumstances in which certain hardware features are implemented in newer GPUs, which means a GEN 2 GPU can do feature 'x' much faster than a GEN 1 GPU given the same clock speeds/memory etc. It's a hardware difference. No magical optimisation can fix that. You also have to understand that it's an issue of resource allocation: it's only logical that more of NVIDIA's resources go into creating software which works best with the newest hardware.
 

cheezcake

Member
It's a combination of improved drivers and GCN being in the consoles. Every game is designed, at the most basic level, for AMD's architecture.

GPU architecture is something that's going to be completely abstracted from the vast, vast majority of developers by the middleware they use.
 

Renekton

Member
Huh? Anti-trust dramatically nerfed MS. The post-anti-trust MS has had a lot of trouble competing with the likes of Apple and Google, for instance. Now they are being pushed around by Sony. When there is meaningful potential competition out there, anti-trust action can absolutely do something against a monopolist.
The browser case went nowhere, and Microsoft established IE as the de facto browser at the time. The ruling was described as a slap on the wrist.

Mobile (Apple) and search (Google) were lost due to their own blind spots; cue Ballmer's famous laughing dismissal of the iPhone :)

I don't expect the DOJ to get anything from Intel or Nvidia outside of a minor settlement.
 
there are other circumstances in which certain hardware features are implemented in newer GPUs, which means a GEN 2 GPU can do feature 'x' much faster than a GEN 1 GPU given the same clock speeds/memory etc. It's a hardware difference.

Well, of course, but this is the norm, not something that needs to be specified again.

We know for example the 970 is about 30% faster than a 770. The problem of inefficient code begins when that 30% is expanded to MUCH bigger values.

We are talking specifically about those cases where the gap widens way past the hardware disparity.
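
To put numbers on that (fps figures invented for illustration): divide the observed fps ratio by the on-paper hardware gap, and anything well above 1.0 points at the software rather than the silicon.

```python
HARDWARE_GAP = 1.30   # the 970 is ~30% faster than a 770 on paper, per the post

def gap_vs_expected(fps_970: float, fps_770: float) -> float:
    """How much wider the observed gap is than the hardware gap alone."""
    return (fps_970 / fps_770) / HARDWARE_GAP

print(gap_vs_expected(60.0, 45.0))   # ~1.03 -> matches the hardware delta
print(gap_vs_expected(60.0, 28.0))   # ~1.65 -> software leaving the 770 behind
```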
 
GPU architecture is something that's going to be completely abstracted from the vast, vast majority of developers by the middleware they use.

Even that middleware will be primarily optimized for AMD's architectures, given that it's in both consoles. But this gen more and more developers are using their own engines, so it's kind of a moot point anyway. The days of half of all big games being on Unreal Engine are over.
 

Marlenus

Member
There are two ways to stop this, neither of which will happen on a wide enough scale to make any difference.

1) Stop buying NV hardware and buy AMD hardware (GPUs; their CPUs suck ass at the moment).

2) Stop buying GameWorks games at launch and wait a few months until patches and driver updates have stopped/fixed the shenanigans.

We know most of this stuff runs perfectly fine on AMD hardware because it runs fairly well on consoles. When Fallout 4 lists a more powerful AMD GPU than what's in the PS4 or Xbox One as the minimum spec, you know something fishy is going on.
 
This would be an interesting theory if AMD cards weren't also working like crap in neutral games like GTA V.
They have problems with draw calls under DX11 (I've got no idea if this is a driver issue, an architecture issue, or a combination of both), and their cards suffer in any kind of game with bigger scenes.
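
A back-of-the-envelope model of why per-draw-call overhead bites hardest in big scenes, and why batching/instancing is the usual mitigation. Every number here is invented for illustration:

```python
FRAME_BUDGET_MS = 16.7   # 60 fps

def cpu_submit_ms(draw_calls: int, cost_per_call_us: float) -> float:
    """CPU time spent just submitting draw calls for one frame."""
    return draw_calls * cost_per_call_us / 1000.0

big_scene = 8_000   # draw calls in a hypothetical dense scene
print(cpu_submit_ms(big_scene, cost_per_call_us=3.0))        # 24.0 ms: over budget
print(cpu_submit_ms(big_scene, cost_per_call_us=1.5))        # 12.0 ms: fits
# Same scene after instancing 10 copies of each mesh into one call:
print(cpu_submit_ms(big_scene // 10, cost_per_call_us=3.0))  #  2.4 ms
```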
 
And again: AMD does exactly the same.

If the end results are different, it may be that the different hardware tiers AMD uses are more "compatible" with each other, and so require less specifically tuned code and fewer exceptions.

Add to this the fact that AMD engineers are LESS talented when it comes to finding driver solutions to optimize code, so the absence of that fine-grained optimization has less of an impact on making newer cards run better.

But that's not exactly something to praise. Since AMD drivers are usually less optimized, older hardware simply suffers less from advances in driver optimization.
 
This would be an interesting theory if AMD cards weren't also working like crap in neutral games like GTA V.
They have problems with draw calls under DX11

...Yes, but that plays a big part in why Nvidia hardware is doing really poorly on DX12.

Nvidia removed a lot of stuff from the chip in order to make it efficient, doing that work at the software level instead, in the drivers. That means that in order to keep that SPEED, they basically need to write custom driver code for EVERY game.

The moment new hardware comes out is the moment Nvidia stops writing that game-specific code for 9XX, and that's when 9XX will begin to perform really poorly compared to current AMD video cards.

The 9XX's dependence on driver optimization for good performance is much higher than the 7XX's. When Nvidia engineers stop optimizing for 9XX (aka when new hardware comes out and they want to sell it), performance is going to be very bad.

It's long-term ugly.
 
The browser case went nowhere, and Microsoft established IE as the de facto browser at the time. The ruling was described as a slap on the wrist.

Mobile (Apple) and search (Google) were lost due to their own blind spots; cue Ballmer's famous laughing dismissal of the iPhone :)

I don't expect the DOJ to get anything from Intel or Nvidia outside of a minor settlement.

I wouldn't expect the DoJ to pursue Intel or Nvidia for anything at this time. There's nothing to pursue; neither Intel nor Nvidia has done anything wrong.

The anti-trust ruling against MS dramatically changed the way they acted regarding Windows specifically. There is no longer wide-scale integration of unrelated software functions into Windows the way they specifically integrated IE in order to take control of browsers. IE was already the de facto browser before the anti-trust action started; it merely maintained momentum until meaningful competitors like Firefox and Chrome showed up, and it's been declining ever since.
 