
Nvidia's Gameworks nerfing AMD cards and its own previous-gen GPUs?

These threads hurt my head. The personification of companies, the cherry-picking of results that ignores both context and the results directly contradicting the prevailing agenda... it's all a bit much.

This garbage has been debunked but always comes back around when we get close to new GPUs launching...
Garbage always comes back, and at some point the debunkers get tired of debunking.

Emotionally laden conspiracy theories are exciting and self-replicating (and earn money for our YouTube channel), while sober analysis is neither. Just as an example, there have been plenty of measurements which attribute a lower relative cost to HBAO+ on AMD cards than on NV ones.

You can't mention astroturfing here with a straight face. Astroturfing on nVidia's behalf is legendary. Read the comments on any GPU article at TechReport, AnandTech, etc. Pure fanboy dreck.
I often do read those comments, and while lots of them betray a lack of understanding (just like in technical threads on GAF and everywhere on the internet really) I'd be hard-pressed to identify them as slanted in any particular direction overall.
 
I think I remember reading that it wasn't a simple sudden switch, but a progressive one.

Maxwell is WAY more efficient compared to Kepler, so it makes sense that they stripped it out even more.

It's Maxwell specifically that gained so much power efficiency compared to AMD. Not Kepler.

I'm just speculating with logic, though. I'm not 100% sure that Kepler works differently.

A huge chunk of Maxwell's power efficiency came from the new SMM.
 
Back when Arkham Asylum came out I could play with Nvidia PhysX on and hit 30fps on a GTX 260; now on a GTX 950 it can't even hold 30, but I don't know whether to blame the old Nvidia PhysX code or my CPU, even though it's supposed to be a GPU-dedicated task.
 
Making things like HairWorks, which are just very inefficient effects meant to hobble their old cards and AMD cards more than their new cards, isn't something I support. They could support something like PureHair, which does the same thing far more efficiently.
What data are you basing this claim on? Do you have experience using both APIs/libraries in similar situations?

Guess why we aren't seeing a HairWorks option in ROTTR?
Because implementing a completely different hair simulation module in your game when you already have a working one which does a good job is a ton of unnecessary work? That would be my first guess. Not very exciting, I know.

One new thing to come to my attention is the lack of DX12 titles so far. What I've heard about DX12 patches is, "The performance is not there yet; DX11 still performs better." This is completely against what was touted when DX12 was released.
That completely depends on what you read about DX12. Did you read marketing statements, exaggerations posted by people who had something to gain, or impartial analysis? Because going by the latter, e.g. games which are GPU-limited in DX11 were never going to see much performance improvement from simply switching to 12. Similarly, merely changing the rendering backend of an existing engine without changing the entire way it manages data might not even avail you much of a CPU benefit.
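
To put a toy number on that GPU-limited point: a frame can't finish faster than its slowest stage, so shaving CPU submission time only matters when the CPU is the slow stage. A minimal sketch, with timings invented purely for illustration (not measurements from any game or API):

```cpp
#include <algorithm>
#include <cstdio>

// Toy model: the frame takes as long as the slower of the two stages.
// All timings below are invented for illustration, not measured.
double frame_time_ms(double cpu_submit_ms, double gpu_render_ms) {
    return std::max(cpu_submit_ms, gpu_render_ms);
}

int main() {
    // GPU-limited: the GPU needs 16 ms no matter how cheap submission gets.
    std::printf("GPU-bound: high-overhead API %.1f ms, low-overhead API %.1f ms\n",
                frame_time_ms(6.0, 16.0), frame_time_ms(2.0, 16.0));
    // CPU-limited: lots of draw calls on one slow thread.
    std::printf("CPU-bound: high-overhead API %.1f ms, low-overhead API %.1f ms\n",
                frame_time_ms(22.0, 10.0), frame_time_ms(8.0, 10.0));
}
```

Real frames overlap CPU and GPU work, so this oversimplifies, but the point stands: which stage is the bottleneck decides whether a lower-overhead API buys you anything.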

We've seen AMD has a ton to gain using DX12, but what we've also seen is one other company struggling to make DX12 work right.
Can you show anything to support this pretty wild claim that one GPU vendor is "struggling to make DX12 work"? Because in all the benchmarks I've seen, it doesn't happen. Yes, a vendor who has very good DX11 drivers has less to gain than one who doesn't, but that hardly equates to "struggling to make it work".
 
are people even seeing the video?



Jump to the Fallout 4 part and make sure you understand what happened. Nvidia got caught red-handed.


AMD cards gained 30% in performance when the 1.3 patch was released with an alternate renderer (which Nvidia didn't bother to interfere with at studio level). Puts a lot of shame on Bethesda as well, for allowing 20% of the market to suffer.

I don't understand this. Patch 1.3 was to add Nvidia's Gameworks and it runs -a lot- better on AMD cards? I thought Gameworks was tanking AMD performance? Why is this new renderer killing Nvidia's performance when they changed it specifically to add Nvidia's features? I don't get it.
 
Interesting; I've not really noticed any major problems with my MSI GTX 760. It's still going pretty strong, well, as strong as a 760 can be at this point anyway. I can still run games like The Witcher 3 on High settings and get between 40-60fps, but I mostly lock it to 30fps for less judder, although Mad Max still runs at a steady 60fps on it.

I've had more than my money's worth from this GPU now, but I'm waiting for Pascal before I upgrade at this point.
 
What data are you basing this claim on? Do you have experience using both APIs/libraries in similar situations?

Any game with HairWorks works as an example; that's enough for me. You can hide behind the lack of conclusive tests since those can't happen atm, but that doesn't directly refute the claim either way.

Because implementing a completely different hair simulation module in your game when you already have a working one which does a good job is a ton of unnecessary work? That would be my first guess. Not very exciting, I know.

So if HairWorks were faster, there would be no reason to add it? If a game already has a good AO option, say the improved AO in a game like ROTTR, there would be no reason to have HBAO+ there, because that would just add to the work like you said.

Can you show anything to support this pretty wild claim that one GPU vendor is "struggling to make DX12 work"? Because in all the benchmarks I've seen, it doesn't happen. Yes, a vendor who has very good DX11 drivers has less to gain than one who doesn't, but that hardly equates to "struggling to make it work".

As I said, just food for thought and we'll see how things pan out. If you or a dev with actual DX12 experience has better knowledge on the matter I'm certainly eager to hear it. Unfortunately these aren't things we're likely to get straight answers for. Devs will only release their DX12 titles/patches once they can show improvements for both sides.
 
I don't understand this. Patch 1.3 was to add Nvidia's Gameworks and it runs -a lot- better on AMD cards? I thought Gameworks was tanking AMD performance? Why is this new renderer killing Nvidia's performance when they changed it specifically to add Nvidia's features? I don't get it.

The implication was that it was a patch designed to tank performance on Kepler and Maxwell cards to make the upcoming Pascal more attractive.

I guess they stopped trying to hurt the competition though, which was kind-hearted of them. Unless of course they were comparing Gameworks features on with Nvidia to Gameworks features off on AMD, in which case the comparison is ludicrous.
 
What data are you basing this claim on? Do you have experience using both APIs/libraries in similar situations?

Because implementing a completely different hair simulation module in your game when you already have a working one which does a good job is a ton of unnecessary work? That would be my first guess. Not very exciting, I know.

That completely depends on what you read about DX12. Did you read marketing statements, exaggerations posted by people who had something to gain, or impartial analysis? Because going by the latter, e.g. games which are GPU-limited in DX11 were never going to see much performance improvement from simply switching to 12. Similarly, merely changing the rendering backend of an existing engine without changing the entire way it manages data might not even avail you much of a CPU benefit.

Can you show anything to support this pretty wild claim that one GPU vendor is "struggling to make DX12 work"? Because in all the benchmarks I've seen, it doesn't happen. Yes, a vendor who has very good DX11 drivers has less to gain than one who doesn't, but that hardly equates to "struggling to make it work".

I'll agree with him that the HairWorks implementations we've gotten so far seem subpar compared to PureHair.
 
Edit: on mobile and can't watch the video...

Didn't they do the same thing with Batman's cape in Arkham Asylum or City with regard to unnecessarily complex tessellation? I seem to recall this. It's not a new practice by them at all. It just always seems to get pushed under the rug as soon as the new card hype engine starts turning.
 
These conspiracy theory threads really hurt my game developer heart.
Mine too. The majority of this is already-debunked, stupid conspiracy theorizing from people who lack a very basic understanding of game rendering. Like seriously, it's absolutely ridiculous. That's not to say there aren't bad practices within the GPU industry or that Gameworks can't have bad effects on consumers, but these theories are so far from reality it's not even funny.
 
Witcher 3 runs great on AMD, just turn off the shitty nvidia stuff.

Technically no, DX:MD will use it and is coming out this year. But it's an extension and improvement of TressFX, which was used in... um, Tomb Raider 1 I think. Honestly I'm not sure if it got used anywhere else at all.

Yeah, pretty sure it was just Tomb Raider.

It's a shame these vendors can't just work together to produce a massive game effects library; it would benefit the industry loads.
 
Nvidia trying to force PC gamers' hands. With this console generation especially, there is very little need for people to upgrade their graphics cards very often.
 
Have there been games other than ROTR which use PureHair?

PureHair is SE's own customization of the TressFX used in the first game, so it in itself probably won't be accessible to other devs, but the newer versions of TressFX should be now that GPUOpen is a thing. What we've seen so far is TressFX (or a customized version of it) used in several titles and on console, while we've only seen HairWorks tank performance in a few PC titles, and I don't believe I've seen a single console title use it.

My issue with HairWorks is its use of tessellation to achieve the effect. Who can say with a straight face that what Nvidia has implemented in HairWorks is the best and most consumer-friendly way of providing us with more value in our games? My opinion on the matter is that it's simply a cumbersome feature Nvidia puts extra effort into to push sales of their new cards, and devs/publishers eat it up because it's just a free extra for them.
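
Since "compute based simulation" comes up a lot in this comparison: the idea is that each guide strand is a chain of points integrated every frame (on the GPU in the real libraries), and the renderer then expands those points into drawable geometry, by tessellation or otherwise. Here's a rough CPU-side sketch of the simulation half, assuming plain Verlet integration plus a length constraint; this is not code from TressFX, PureHair, or HairWorks, just the general shape of the idea:

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

// One hair strand as a chain of simulated points. The real libraries do the
// equivalent work per strand in a compute shader; this is a conceptual sketch.
struct Strand {
    std::vector<Vec3> pos, prev;
    float segment_len;
};

void step(Strand& s, Vec3 force, float dt) {
    // Verlet integration: advance every point except the fixed root (index 0).
    for (size_t i = 1; i < s.pos.size(); ++i) {
        Vec3 p = s.pos[i];
        s.pos[i].x += (p.x - s.prev[i].x) + force.x * dt * dt;
        s.pos[i].y += (p.y - s.prev[i].y) + force.y * dt * dt;
        s.pos[i].z += (p.z - s.prev[i].z) + force.z * dt * dt;
        s.prev[i] = p;
    }
    // Length constraint: pull each point back to segment_len from its parent.
    for (size_t i = 1; i < s.pos.size(); ++i) {
        float dx = s.pos[i].x - s.pos[i - 1].x;
        float dy = s.pos[i].y - s.pos[i - 1].y;
        float dz = s.pos[i].z - s.pos[i - 1].z;
        float len = std::sqrt(dx * dx + dy * dy + dz * dz);
        if (len > 1e-6f) {
            float corr = (len - s.segment_len) / len;
            s.pos[i].x -= dx * corr;
            s.pos[i].y -= dy * corr;
            s.pos[i].z -= dz * corr;
        }
    }
}

int main() {
    Strand s;
    s.segment_len = 0.05f;                          // 5 cm segments, arbitrary
    for (int i = 0; i < 8; ++i)
        s.pos.push_back({0.0f, -0.05f * i, 0.0f});  // strand hanging straight down
    s.prev = s.pos;

    for (int frame = 0; frame < 60; ++frame)        // one simulated second at 60 Hz
        step(s, {3.0f, -9.8f, 0.0f}, 1.0f / 60.0f); // gravity plus a sideways gust

    std::printf("tip after 1s: (%.3f, %.3f, %.3f)\n",
                s.pos.back().x, s.pos.back().y, s.pos.back().z);
}
```

The tessellation argument is about the other half: once you have those simulated points, how much extra geometry you generate from them at draw time.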
 

I wouldn't call the PureHair implementation in ROTR better than the Hairworks implementation in Witcher 3.

Only Lara has dynamic movement in the hair whereas the Witcher applies dynamic movement to Geralt as well as the Monsters, which likely have a much, much larger strand count than any of the human characters.

Also, technical talk aside, I just think it looks much better.
 
Let's be real here: all of you saying "I'm going AMD next generation" are going to see the benchmarks and price/performance charts favoring NVidia, and you'll just go "Welp, I wish AMD was more competitive, I guess I'm going NVidia this generation".

There is a gap both in hardware performance and software support (as in developers getting additional support straight from NVidia) that AMD isn't filling, and it's only getting worse.
 
Also, technical talk aside, I just think it looks much better.

It depends, I think. Pure Hair on Lara has much better shading in general, but if you have seen that mod which thins the strands on Geralt's head, then you can see how a number of the complaints about Hairworks' aesthetics are about the art direction and less about shading. The problem with that mod, of course, is that it makes the strands so thin that they are VERY VERY sub-pixel at normal gameplay distance and even in some closer sequences... making for tons of aliasing and disappearing hair even with 8x MSAA.
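
Rough numbers on the sub-pixel point, using standard pinhole projection; the strand width, camera distance, and FOV below are guesses purely to show the order of magnitude, not values taken from either game:

```cpp
#include <cmath>
#include <cstdio>

// Projected on-screen width (in pixels) of a thin strand, pinhole camera model.
// width_m: strand width in metres, dist_m: distance from the camera in metres,
// vfov_deg: vertical field of view in degrees, screen_h: vertical resolution.
double projected_pixels(double width_m, double dist_m,
                        double vfov_deg, double screen_h) {
    const double pi = 3.14159265358979323846;
    double focal_px = (screen_h * 0.5) / std::tan(vfov_deg * 0.5 * pi / 180.0);
    return width_m * focal_px / dist_m;
}

int main() {
    // Guessed inputs: ~0.3 mm strands, 60 degree vertical FOV, 1080p.
    std::printf("camera at 3.0 m: %.2f px\n", projected_pixels(0.0003, 3.0, 60.0, 1080.0));
    std::printf("camera at 0.5 m: %.2f px\n", projected_pixels(0.0003, 0.5, 60.0, 1080.0));
}
```

With guesses like those you end up well below a pixel even fairly close to the camera, which is why MSAA alone can't save strands that thin.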

Then again, Pure Hair is only on one character in RoTR, while in TW3 it was mapped to dozens of characters and had to deal with the performance considerations of dozens of these models appearing on screen at once (HELLO WOLF PACK!).

----

This thread makes my brain hurt.
 
I wouldn't call the PureHair implementation in ROTR better than the Hairworks implementation in Witcher 3.

Only Lara has dynamic movement in the hair whereas the Witcher applies dynamic movement to Geralt as well as the Monsters, which likely have a much, much larger strand count than any of the human characters.

Also, technical talk aside, I just think it looks much better.

I'm just comparing Geralt-only HairWorks vs. Lara. PureHair looks and moves so much better, is free of aliasing without having to run MSAA on all those tessellated polygons, and performance is better.

Let's be real here: all of you saying "I'm going AMD next generation" are going to see the benchmarks and price/performance charts favoring NVidia, and you'll just go "Welp, I wish AMD was more competitive, I guess I'm going NVidia this generation".

There is a gap both in hardware performance and software support (as in developers getting additional support straight from NVidia) that AMD isn't filling, and it's only getting worse.

If Nvidia has the faster GPU I absolutely will go Nvidia again. I'll only care about power efficiency if all other factors are equal.
 
Are we sure PureHair isn't just a slightly tweaked and rebranded TressFX named so due to the NVIDIA promotion?

Hmm, that's an interesting thought. However, it's more likely that the tech (or at least performance) has improved enough that they thought a new name would help with marketing their technology. Weird though, since I think TressFX is a great name for the technology. Much better than the _____ Works nomenclature that Nvidia is going with for every little tech they develop.

Anyway, my reasoning is that I vaguely remember seeing slides straight from AMD (or was it Eidos?) about the new Deus Ex, and they referred to it as PureHair.

It's lovely though. I'm constantly impressed playing Rise of the Tomb Raider just how far the tech has come since the last game.
 
I wouldn't call the PureHair implementation in ROTR better than the Hairworks implementation in Witcher 3.

Only Lara has dynamic movement in the hair whereas the Witcher applies dynamic movement to Geralt as well as the Monsters, which likely have a much, much larger strand count than any of the human characters.

Also, technical talk aside, I just think it looks much better.

PureHair is going to be used for other (all?) characters in Deus Ex (even on console, I think, but I'm not certain there).
 
I'm just comparing Geralt-only HairWorks vs. Lara. PureHair looks and moves so much better, is free of aliasing without having to run MSAA on all those tessellated polygons, and performance is better.

I'm looking at videos and screenshots of it again to try and be objective. I do have to admit now that I think PureHair looks much better in stills and has more realistic shading, but I prefer the motion in Geralt's HairWorks implementation. Also, the tessellation isn't adding polygons? The hair (I think) is rendered using line primitives; the tessellation adds vertices to the lines, which introduces more points for the hair to bend at.
 
I'm looking at videos and screenshots of it again to try and be objective. I do have to admit now that I think PureHair looks much better in stills and has more realistic shading, but I prefer the motion in Geralt's HairWorks implementation. Also, the tessellation isn't adding polygons? The hair (I think) is rendered using line primitives; the tessellation adds vertices to the lines, which introduces more points for the hair to bend at.

NVIDIA Hairworks
Compute based simulation.
Use isoline tessellation with tessellation factors up to 64 (maximum possible value) to generate curvature and additional strands.
Use Geometry Shader for extruding segments into front-facing polygons.
Renders hair strands onto MSAA 8x render target to obtain smooth edges.
No OIT solution, non-edge hair strand pixels are fully opaque.
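
To put a rough number on what "tessellation factors up to 64" can mean for geometry load, here's a back-of-envelope sketch. The guide-strand count is invented and real implementations scale the factors down with distance/LOD, so treat this as the worst case the maximum allows, not what any shipping game actually does:

```cpp
#include <cstdio>

// Very rough amplification estimate for the pipeline described above:
// guide patches -> isoline tessellation (roughly density x detail line
// segments per patch) -> geometry shader extrudes each segment into a
// front-facing quad (2 triangles). Counts are invented for illustration.
long long hair_triangles(long long guide_patches, int density, int detail) {
    long long segments = guide_patches * (long long)density * detail;
    return segments * 2;  // one 2-triangle quad per line segment
}

int main() {
    const long long guides = 10000;  // assumed number of guide-hair patches
    std::printf("factors  8 x  8: %lld triangles\n", hair_triangles(guides, 8, 8));
    std::printf("factors 64 x 64: %lld triangles\n", hair_triangles(guides, 64, 64));
}
```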
 
heh that was also a thought.
Not saying he is wrong, but I was always under the impression that Nvidia only looked at their latest flagship card, did some worthless* optimisation based on those specs, and the rest be damned ^^

*Well not like they get the source code and/or have time to work on it ^^
 
I mean if your GPU can only do x amount of polygons it can only do x amount of polygons. Adding tessellation is adding a ton more polygons to each scene so the performance will suffer, especially on a GPU where that amount of extra polys creates a bottleneck.

I'm not sure how this is a conspiracy.
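
One way to make the "x amount of polygons" point concrete: whatever triangle rate a card can sustain becomes a fixed per-frame budget at a given frame rate, and every extra tessellated strand spends part of it. The throughput figure below is an assumption purely for illustration, not a spec for any particular GPU:

```cpp
#include <cstdio>

// Per-frame triangle budget = sustained triangle throughput / target frame rate.
// The throughput figure is an assumption, not a spec for any particular card.
int main() {
    const double tris_per_second = 2.0e9;  // assumed sustained setup rate
    const int frame_rates[] = {30, 60, 144};
    for (int fps : frame_rates) {
        std::printf("%3d fps -> ~%.1f million triangles per frame\n",
                    fps, tris_per_second / fps / 1e6);
    }
}
```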
 
This is so depressing. I feel like they are holding the PC market hostage. I love gaming more than any other hobby but I can't just keep upgrading all the time. I'm close to quitting the PC market.
 
This is so depressing. I feel like they are holding the PC market hostage. I love gaming more than any other hobby but I can't just keep upgrading all the time. I'm close to quitting the PC market.

Took me 3 years to finally upgrade my system, and after buying Pascal later this year I probably won't be upgrading for another 3.
 
These threads hurt my head. The personification of companies, the cherry-picking of results that ignores both context and the results directly contradicting the prevailing agenda... it's all a bit much.

Garbage always comes back, and at some point the debunkers get tired of debunking.

Emotionally laden conspiracy theories are exciting and self-replicating (and earn money for our YouTube channel), while sober analysis is neither. Just as an example, there have been plenty of measurements which attribute a lower relative cost to HBAO+ on AMD cards than on NV ones.

I often do read those comments, and while lots of them betray a lack of understanding (just like in technical threads on GAF and everywhere on the internet really) I'd be hard-pressed to identify them as slanted in any particular direction overall.

I'll take Durante's word for it. You know your shit. The Gameworks crap always killed my performance on a single 970. Now on an SLI rig it runs much better. Nvidia's fault IMO was promoting Gameworks with their 970 and then basically releasing specs later on saying you'd better turn most of the features off if you don't own a 980 or above.

Edit: I didn't go SLI for gameworks lol, I did it for VR mostly and I was unimpressed with the single card performance in many games.
 
Anyone who has followed Nvidia over the last 10-15 years knows exactly what they're capable of doing. I currently own an Nvidia card, but I wouldn't put it past them to cripple old cards.
 
The Fallout 4 thing might be the renderer optimizations breaking, just like the PS dev said.

However, there's no doubt that Nvidia actively manipulates this stuff to try to get more sales. I also have no doubt Nvidia pays developers directly (via dev support or actual $) to actively optimize their code for Nvidia over AMD.

It's kind of a given. It's why I didn't even seriously consider an AMD card for my new build. They're already an effective monopoly (for the PC gaming market) as far as I'm concerned.
 
Interesting; I've not really noticed any major problems with my MSI GTX 760. It's still going pretty strong, well, as strong as a 760 can be at this point anyway. I can still run games like The Witcher 3 on High settings and get between 40-60fps, but I mostly lock it to 30fps for less judder, although Mad Max still runs at a steady 60fps on it.

I've had more than my money's worth from this GPU now, but I'm waiting for Pascal before I upgrade at this point.

This has been my exact experience, but in Witcher 3 I completely turn off all the hair stuff and probably have a mix of max, high, and medium settings with grass turned down to low I think. Fallout 4 I think is mostly medium or high settings and I just let it bounce between 40 and 60fps.

I'm just praying DX12 and Bohemia's updates get me better CPU optimization in ArmA 3.
 