
Oxide: Nvidia GPUs do not support DX12 Asynchronous Compute/Shaders.

frontieruk

Member
[image: eikI2ko.png]


This pic should be on OP, imo.


Certainly I could see how one might see that we are working closer with one hardware vendor than the other, but the numbers don't really bear that out. Since we've started, I think we've had about 3 site visits from NVidia, 3 from AMD, and 2 from Intel (and 0 from Microsoft, but they never come visit anyone ;( ). Nvidia was actually a far more active collaborator over the summer than AMD was. If you judged from email traffic and code check-ins, you'd draw the conclusion we were working closer with Nvidia rather than AMD ;) As you've pointed out, there does exist a marketing agreement between Stardock (our publisher) and AMD for Ashes.

So they worked with Nvidia, having as many site visits from them as from AMD, and if you can trust his word they've been communicating more with Nvidia, which wouldn't surprise me, seeing as they were surprised at the performance drop under DX12 and would want to work out what they were breaking, or give Nvidia a chance to fix whatever was going wrong.

The publisher has a deal, not the dev...
 

tokkun

Member
So you can just program async, with a penalty on nvidia hardware?
Maybe the performance gains can nullify the performance penalty.

Yes. The way asynchronous programming works is that you should be able to take any asynchronous call and execute it synchronously and the program will work exactly the same from a correctness standpoint.

As for how much the performance difference is, it is pretty hard to know. Personally I do not trust the Ashes of Singularity benchmark to be representative of what we will see in the majority of games. Nor do I think you can directly extrapolate from the performance increase on a PS4 title to what you would see in a high-end PC; the PC will be using much more powerful hardware and will likely be running at a higher resolution, so it may not face the same bottlenecks.
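To make the correctness point concrete, here is a minimal C++ sketch, using plain std::async rather than the DX12 API (the job function and data are made up purely for illustration): the same work can be dispatched asynchronously or executed inline, and the result is identical either way; only the scheduling, and therefore the performance, differs.

```cpp
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

// Arbitrary stand-in for a chunk of compute work.
int sum_of_squares(const std::vector<int>& v) {
    int total = 0;
    for (int x : v) total += x * x;
    return total;
}

int main() {
    std::vector<int> data(1000);
    std::iota(data.begin(), data.end(), 1);

    // Asynchronous dispatch: the work may run on another thread.
    std::future<int> pending =
        std::async(std::launch::async, sum_of_squares, std::cref(data));
    int async_result = pending.get();

    // Synchronous execution of the exact same call.
    int sync_result = sum_of_squares(data);

    // Correctness is identical; only when/where the work ran differs.
    std::cout << (async_result == sync_result ? "same result\n" : "mismatch\n");
}
```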
 
they don't seem to be nearly as beneficial as async compute. also, consoles not supporting them pretty much makes them useless. you will see the occasional 2 titles a year use them to do almost nothing to enhance the visuals. they will probably become the new tessellation.

It has yet to be proven that async compute is drastically important for something like the Maxwell 2 architecture, or even whether Maxwell 2 does not support it. Even the supposed "dev" does not know. We KNOW async compute helps GCN in a significant fashion, but who is to say async compute would help Maxwell 2 the same amount (assuming it does not have it). This thread already has too much chicken-with-its-head-cut-off syndrome going on, let's lessen it a bit.

And I do not think one can scoff at the DX12 hardware features NV has... just so sweepingly. Just like one cannot scoff at how much better NV's tessellation is.
 
What does this prove, exactly?

That we are talking about some biased developer trying too hard to favor one brand above the other. They are openly admitting that they are using one resource on AMD that they have completely shut down on Nvidia, because the latter doesn't even have the driver support unlocked for them. And even then, they can't surpass GTX performance.

And then, we are talking about how Nvidia lacks Asynchronous Compute™ when that's the commercial name AMD gives to that feature. For your info, Nvidia calls it Dynamic Parallelism, and it has been with us for several generations with CUDA.

FUD sprinkler.
 

TSM

Member
“People wondering why Nvidia is doing a bit better in DX11 than DX12: that's because Nvidia optimized their DX11 path in their drivers for Ashes of the Singularity. With DX12 there are no tangible driver optimizations, because the game engine speaks almost directly to the graphics hardware, so none were made. Nvidia is at the mercy of the programmers' talents as well as their own Maxwell architecture's thread parallelism performance under DX12. The developers programmed for thread parallelism in Ashes of the Singularity in order to better draw all those objects on the screen. Therefore what we're seeing with the Nvidia numbers is the Nvidia draw call bottleneck showing up under DX12. Nvidia works around this with its own optimizations in DX11 by prioritizing workloads and replacing shaders. Yes, the Nvidia driver contains a compiler which re-compiles and replaces shaders which are not fine-tuned to their architecture, on a per-game basis. Nvidia's driver is also multi-threaded, making use of idling CPU cores in order to recompile/replace shaders. The work Nvidia does in software, under DX11, is the work AMD does in hardware, under DX12, with their Asynchronous Compute Engines.”

I like this quote: where Nvidia's drivers gave them big boosts before, they're now losing in DX12 because drivers aren't nearly as important.

Pretty much what AMD has been on about for the past 3 years. While Nvidia focused on the market as-is, AMD was getting in early on DX12 and it's cost them dearly in their already low market share. Now that DX12 is the current focus, I'm sure Pascal won't have these problems and it honestly can't if they want to compete. When Pascal comes around, we'll probably start seeing the first DX12 games hit the market and if their top-tier card is competing or even winning, that's all anyone is going to be talking about, not Maxwell performance.

This actually sounds like a terrible situation for AMD. With DX12, it seems like the company that is willing to spend the most money and manpower on dev support will have far and away the best performance on their cards. We've seen how this has played out with DX10/11 and it should be obvious how it will play out with DX12. Nvidia will create middleware tuned for their cards and spend a lot of time assisting devs, so their cards will have by far the best-optimized versions of these games, and AMD will not be able to correct the situation through driver updates. AMD just isn't going to spend the money and resources to provide the dev support Nvidia does routinely. Project Cars is a recent example of this going completely wrong for AMD.

This is before we even get into claims that one or the other company is "sabotaging" its rival through "help" provided to a dev.
 

Kezen

Banned
Apparently Maxwell 2 does support this feature, but they have asked Oxide about it because on those cards it made the bench run slower for some reason.
 
It has yet to be proven that async compute is drastically important for something like the Maxwell 2 architecture, or even whether Maxwell 2 does not support it. Even the supposed "dev" does not know. We KNOW async compute helps GCN in a significant fashion, but who is to say async compute would help Maxwell 2 the same amount (assuming it does not have it). This thread already has too much chicken-with-its-head-cut-off syndrome going on, let's lessen it a bit.

And I do not think one can scoff at the DX12 hardware features NV has... just so sweepingly. Just like one cannot scoff at how much better NV's tessellation is.

i think history allows ample scoffing. tessellation has been completely useless. and what vendor-specific features that don't work on consoles have ever not been useless?

reading around b3d, the dx12 features only nvidia supports require deep changes to the engine to properly utilize. that is never going to happen. every engine will be designed around the consoles' feature set and strengths.
 

Flai

Member
Pardon my ignorance but what main feature are you talking about? I thought G-Sync monitors just eliminated screen-tearing by syncing your GPU to the monitor's refresh rate?

OP can't use G-Sync yet since he still has an AMD GPU. But it seems like it's a pretty good monitor anyway :)
 

tuxfool

Banned
And then, we are talking about how Nvidia lacks Asynchronous Compute™ when that's the commercial name AMD gives to that feature. For your info, Nvidia calls it Dynamic Parallelism, and it has been with us for several generations with CUDA.

For many generations it was just CUDA or just graphics. Kepler and earlier cannot execute CUDA/compute operations in parallel with the graphics pipe; it has to switch between them.
 
This actually sounds like a terrible situation for AMD. With DX12, it seems like the company that is willing to spend the most money and manpower on dev support will have far and away the best performance on their cards. We've seen how this has played out with DX10/11 and it should be obvious how it will play out with DX12. Nvidia will create middleware tuned for their cards and spend a lot of time assisting devs, so their cards will have by far the best-optimized versions of these games, and AMD will not be able to correct the situation through driver updates. AMD just isn't going to spend the money and resources to provide the dev support Nvidia does routinely. Project Cars is a recent example of this going completely wrong for AMD.

This is before we even get into claims that one or the other company is "sabotaging" its rival through "help" provided to a dev.

I see it the other way around. Now optimization relies on devs instead of drivers. AMD doesn't have the resources, or the will, to do those optimizations through drivers. So now we have a chance of getting properly optimized games running on AMD cards, something we never had on DX10/11.

Let me be optimistic.
 
That we are talking about some biased developer trying too hard to favor one brand above the other. They are openly admitting that they are using one resource on AMD that they have completely shut down on Nvidia, because the latter doesn't even have the driver support unlocked for them. And even then, they can't surpass GTX performance.


And then, we are talking about how Nvidia lacks Asynchronous Compute™ when that's the commercial name AMD gives to that feature. For your info, Nvidia calls it Dynamic Parallelism, and it has been with us for several generations with CUDA.

FUD sprinkler.

Yeah, no. A lot of completely unfounded accusations there.

Certainly I could see how one might see that we are working closer with one hardware vendor than the other, but the numbers don't really bear that out. Since we've started, I think we've had about 3 site visits from NVidia, 3 from AMD, and 2 from Intel (and 0 from Microsoft, but they never come visit anyone ;( ). Nvidia was actually a far more active collaborator over the summer than AMD was. If you judged from email traffic and code check-ins, you'd draw the conclusion we were working closer with Nvidia rather than AMD ;) As you've pointed out, there does exist a marketing agreement between Stardock (our publisher) and AMD for Ashes.

So they worked with Nvidia, having as many site visits from them as from AMD, and if you can trust his word they've been communicating more with Nvidia, which wouldn't surprise me, seeing as they were surprised at the performance drop under DX12 and would want to work out what they were breaking, or give Nvidia a chance to fix whatever was going wrong.

The publisher has a deal, not the dev...

Interesting. Thanks for the info.
 
i think history allows ample scoffing. tessellation has been completely useless. and what vendor specific features that dont work on consoles have ever not been useless?

Tessellation has never been useless; numerous games use and have used it. Every time it is used (even for simple terrain generation), it means less perf degradation on NV and basically comes "free".

Likewise, consoles never supported a number of DX10 and DX11 features and yet even PC ports grabbed and used these features, for years. Some even had hugely different PC versions (Battlefield 3, the Crysis games, the Metro games). Even less-popular ports took advantage of non-console features (the Batman games, Red Faction: Guerrilla).

You are overreacting IMO and downplaying a lot of things.
 

Renekton

Member
Apparently Maxwell 2 does support this feature, but they have asked Oxide about it because on those cards it made the bench run slower for some reason.
From what I read on B3D, all DX12 cards will "support" it by running the same executable and producing the same result.

The question is whether there is actual native hardware implementation.
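For what it's worth, the D3D12 API will hand out a dedicated compute queue on any conforming device; whether commands submitted to it actually overlap with graphics work is up to the driver and hardware, which is exactly the open question. A minimal sketch of the queue creation follows (real D3D12 calls, but no error handling or actual workload; Windows-only, link with d3d12.lib):

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // Graphics ("direct") queue: accepts draw, compute, and copy commands.
    D3D12_COMMAND_QUEUE_DESC directDesc = {};
    directDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> directQueue;
    device->CreateCommandQueue(&directDesc, IID_PPV_ARGS(&directQueue));

    // Separate compute queue: the API always exposes this, but whether work
    // submitted here truly runs concurrently with the direct queue is a
    // driver/hardware question, not something the API guarantees.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    return 0;
}
```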
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
Wait, so Maxwell is fully DX12 compliant but does not have async compute like AMD cards have? Does this mean that PS4 is almost DX13 levels then due to having this feature as well as hUMA and a supercharged PC architecture which DX12 does not have? If so I can easily see PS4 competing with the next gen Xbox which will assumedly be based on DX13 further delaying the need for Sony to launch a successor. Woah. If this is true I can easily see PS4 lasting a full ten years. Highly interesting development, I can't wait to see what Naughty Dog and co do with this new found power.

WTF.
 

tuxfool

Banned
Tessellation has never been useless; numerous games use and have used it. Every time it is used (even for simple terrain generation), it means less perf degradation on NV and basically comes "free".

Things like terrain generation typically use low tessellation factors. These are essentially free on AMD hardware too. It is only when you get up to 32x and beyond that you start to see clear differences.

Though I do agree it has its uses. The relative value of good performance on high tessellation factors is somewhat minor at this point.
 

Steel

Banned
What I get from this article is to not buy any Nvidia GPUs for a while. I mean, there was that DX12 benchmark article before that said about the same thing.

That we are talking about some biased developer trying too hard to favor one brand above the other. They are openly admitting that they are using one resource on AMD that they have completely shut down on Nvidia, because the latter doesn't even have the driver support unlocked for them. And even then, they can't surpass GTX performance.

And then, we are talking about how Nvidia lacks Asynchronous Compute™ when that's the commercial name AMD gives to that feature. For your info, Nvidia calls it Dynamic Parallelism, and it has been with us for several generations with CUDA.

FUD sprinkler.

Look at this post:

Certainly I could see how one might see that we are working closer with one hardware vendor than the other, but the numbers don't really bear that out. Since we've started, I think we've had about 3 site visits from NVidia, 3 from AMD, and 2 from Intel (and 0 from Microsoft, but they never come visit anyone ;( ). Nvidia was actually a far more active collaborator over the summer than AMD was. If you judged from email traffic and code check-ins, you'd draw the conclusion we were working closer with Nvidia rather than AMD ;) As you've pointed out, there does exist a marketing agreement between Stardock (our publisher) and AMD for Ashes.

So they worked with Nvidia, having as many site visits from them as from AMD, and if you can trust his word they've been communicating more with Nvidia, which wouldn't surprise me, seeing as they were surprised at the performance drop under DX12 and would want to work out what they were breaking, or give Nvidia a chance to fix whatever was going wrong.

The publisher has a deal, not the dev...
 

Kezen

Banned
From what I read on B3D, all DX12 cards will "support" it by running the same executable and producing the same result.

The question is whether there is actual native hardware implementation.

The same "results"? I was under the impression AMD had a commanding lead in terms of asynchronous compute/shading due to their architectural choices. I would not expect Kepler or perhaps even Maxwell to be quite on par with that.

That does not mean Maxwell can't use that feature but chances are it's not as elegant/efficient. I expect Kepler to suffer in D3D12 games which use that feature, Maxwell to a much lesser extent.

What is true is that if you already have a capable GCN card you're not as enticed to upgrade, while those with Kepler will have to upgrade at some point.
 
Tessellation has never been useless; numerous games use and have used it. Every time it is used (even for simple terrain generation), it means less perf degradation on NV and basically comes "free".

Likewise, consoles never supported a number of DX10 and DX11 features and yet even PC ports grabbed and used these features, for years. Some even had hugely different PC versions (Battlefield 3, the Crysis games, the Metro games). Even less-popular ports took advantage of non-console features (the Batman games, Red Faction: Guerrilla).

You are overreacting IMO and downplaying a lot of things.

mind listing some particular examples where you think tessellation made a worthwhile improvement to the visuals?
 

Damerman

Member
Wait, so Maxwell is fully DX12 compliant but does not have async compute like AMD cards have? Does this mean that PS4 is almost DX13 levels then due to having this feature as well as hUMA and a supercharged PC architecture which DX12 does not have? If so I can easily see PS4 competing with the next gen Xbox which will assumedly be based on DX13 further delaying the need for Sony to launch a successor. Woah. If this is true I can easily see PS4 lasting a full ten years. Highly interesting development, I can't wait to see what Naughty Dog and co do with this new found power.
Look at you, trying to create a meme.
 
mind listing some particular examples where you think tessellation made a worthwhile improvement to the visuals?

The Metro games, Crysis 2 and 3, the Batman games, COD: Ghosts, Dragon Age: Inquisition, Lost Planet 2, Ryse, the HAWX games, Total War: Shogun 2, Watch Dogs (water).

And those are just the ones I can think of off the top of my head. It is used in tons of rendering pipelines, and in the ones above, significantly.
 

gaming_noob

Member
This is all so interesting to read. I know current gen consoles are barely 2 years old but when do we typically start speculating/start hearing rumors of the power of next gen consoles?
 
The Metro games, Crysis 2 and 3, the Batman games, COD: Ghosts, Dragon Age: Inquisition, Lost Planet 2, Ryse, the HAWX games, Total War: Shogun 2, Watch Dogs (water).

And those are just the ones I can think of off the top of my head. It is used in tons of rendering pipelines, and in the ones above, significantly.

maybe we have different ideas of what constitutes a worthwhile visual increase. where is tessellation used in Ryse and DA, out of curiosity? i've personally gone around trying to find tessellation in C3, and in the very rare cases where you see it used, it might as well not even be there. the difference is minuscule.
 
The Metro games, Crysis 2 and 3, the Batman games, COD: Ghosts, Dragon Age: Inquisition, Lost Planet 2, Ryse, the HAWX games, Total War: Shogun 2, Watch Dogs (water).

And those are just the ones I can think of off the top of my head. It is used in tons of rendering pipelines, and in the ones above, significantly.

[gif: superman-dad-death-reaction-Kevin-Costner]
 

DieH@rd

Banned
Nice to see my 7850 still has some life left in it; it will be interesting to see how this turns out.

The 7850 is older; it has a reduced number of ACEs compared to newer GCN [and PS4] cards.

This is all so interesting to read. I know current gen consoles are barely 2 years old but when do we typically start speculating/start hearing rumors of the power of next gen consoles?

First rumours come about 2 years before consoles are released, with way more rumours at the ~1 year mark [when devs start getting their devkits]. It's still too early to know exactly what will be in the PS5's generation, but it will most probably again be made by AMD.
 
What I get from this article is to not buy any Nvidia GPUs for a while. I mean, there was that DX12 benchmark article before that said about the same thing.

That DX12 benchmark was the benchmark from those same guys.

It went from this:

[image: 71450.png]


to this:

[image: ashes-6700k.png]


Please, keep advising people not to buy some brand with a higher DX12 feature level because of some alpha-state benchmark.

Colors and that.
 
That DX12 benchmark was the benchmark from those same guys.

It went from this
to this:

Please, keep advising people not to buy some brand with a higher DX12 feature level because of some alpha-state benchmark.

Colors and that.
Do you have shares in NV or something?

The benchmark and statements from the dev are quite straightforward and mesh with what is known already: NV has the best driver optimisations and AMD has spent more die area on async. Yet you feel the need to allege that the same publisher co-marketing deals that NV employs somehow mean that figures from a dev with an AMD deal are not to be trusted? So are the UE benchmarks untrustworthy, is Metro trash? Give me a break.

This seems straightforward enough, and frankly I would expect the NV team to get their drivers into better shape and close some of the gap soon. Talk about shooting the messenger.
 

Crisium

Member
It may be a bit too early to believe everything about DX12, but it is important for all consumers to take note of this moving forward. Nvidia will be on Pascal by the time DX12 games really start taking over. If we see another case of AMD aging more gracefully than an Nvidia architecture it would almost be funny, if not for the fact that Nvidia outsells them 5:1. We shall see.
 

Damerman

Member
“People wondering why Nvidia is doing a bit better in DX11 than DX12: that's because Nvidia optimized their DX11 path in their drivers for Ashes of the Singularity. With DX12 there are no tangible driver optimizations, because the game engine speaks almost directly to the graphics hardware, so none were made. Nvidia is at the mercy of the programmers' talents as well as their own Maxwell architecture's thread parallelism performance under DX12. The developers programmed for thread parallelism in Ashes of the Singularity in order to better draw all those objects on the screen. Therefore what we're seeing with the Nvidia numbers is the Nvidia draw call bottleneck showing up under DX12. Nvidia works around this with its own optimizations in DX11 by prioritizing workloads and replacing shaders. Yes, the Nvidia driver contains a compiler which re-compiles and replaces shaders which are not fine-tuned to their architecture, on a per-game basis. Nvidia's driver is also multi-threaded, making use of idling CPU cores in order to recompile/replace shaders. The work Nvidia does in software, under DX11, is the work AMD does in hardware, under DX12, with their Asynchronous Compute Engines.”

I like this quote: where Nvidia's drivers gave them big boosts before, they're now losing in DX12 because drivers aren't nearly as important.

Pretty much what AMD has been on about for the past 3 years. While Nvidia focused on the market as-is, AMD was getting in early on DX12 and it's cost them dearly in their already low market share. Now that DX12 is the current focus, I'm sure Pascal won't have these problems and it honestly can't if they want to compete. When Pascal comes around, we'll probably start seeing the first DX12 games hit the market and if their top-tier card is competing or even winning, that's all anyone is going to be talking about, not Maxwell performance.
You're ignoring the fact that costs to Nvidia are going to be higher. AMD can keep putting out the same card... As more people switch to DX12, AMD's costs don't rise as much as Nvidia's. And as far as we know, Pascal is the same thing as Maxwell save for the fabrication process and VRAM.
 

tokkun

Member
Certainly I could see how one might see that we are working closer with one hardware vendor than the other, but the numbers don't really bear that out. Since we've started, I think we've had about 3 site visits from NVidia, 3 from AMD, and 2 from Intel (and 0 from Microsoft, but they never come visit anyone ;( ). Nvidia was actually a far more active collaborator over the summer than AMD was. If you judged from email traffic and code check-ins, you'd draw the conclusion we were working closer with Nvidia rather than AMD ;) As you've pointed out, there does exist a marketing agreement between Stardock (our publisher) and AMD for Ashes.

So they worked with Nvidia, having as many site visits from them as from AMD, and if you can trust his word they've been communicating more with Nvidia, which wouldn't surprise me, seeing as they were surprised at the performance drop under DX12 and would want to work out what they were breaking, or give Nvidia a chance to fix whatever was going wrong.

The publisher has a deal, not the dev...

I think the main concern is not that the developer is blocking access to Nvidia, but rather that they set out to design their engine in such a way that it plays to AMD's strengths. We know the Nitrous engine was originally designed to show off the advantages of Mantle before DX12 came about. They make this clear in their own marketing materials:
https://www.youtube.com/watch?t=40&v=6PKxP30WxYM

Now, there is nothing wrong with that on its own, IF what we are seeing is representative of what other games/engines will do in the future. On the other hand, if it is not representative, then it only serves to act as misleading propaganda. So is it representative? We know that Star Swarm / Ashes of the Singularity is different from other games in that it focuses on having a large number of independent objects on screen at once and issues a very large number of draw calls, and that this is the reason it sees such a big performance increase from Mantle / DX12.

Did Oxide set out to build the best engine possible, or did they set out to build an engine that would be advantageous to AMD's architecture? Will other engines go down the same path? And if not, will they still see the same improvements? That is what remains to be seen, and I don't think there's anything wrong with being skeptical about it. Oxide has no track record that we can use to gauge their credibility.
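To illustrate the draw-call-heavy pattern described above, here is a hedged sketch (the Unit struct and root-signature layout are invented for illustration; the command-list calls themselves are real D3D12 API) of the per-object draw loop a Star Swarm-style scene ends up issuing tens of thousands of times per frame. A DX11 driver pays a heavy per-call cost for this; under DX12 the calls are recorded into command lists cheaply, potentially from several threads.

```cpp
#include <windows.h>
#include <d3d12.h>
#include <vector>

// Hypothetical per-object data; a real engine's layout will differ.
struct Unit {
    UINT indexCount;     // indices for this unit's mesh
    UINT constantOffset; // where this unit's per-object data lives
};

// One draw call per on-screen unit. With tens of thousands of units this is
// exactly the workload that benefits from DX12's cheap command recording.
void RecordUnitDraws(ID3D12GraphicsCommandList* cmdList,
                     const std::vector<Unit>& units) {
    for (const Unit& u : units) {
        // Assumes root parameter 0 is a 32-bit constant in the (invented)
        // root signature; purely illustrative.
        cmdList->SetGraphicsRoot32BitConstant(0, u.constantOffset, 0);
        cmdList->DrawIndexedInstanced(u.indexCount, 1, 0, 0, 0);
    }
}
```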
 
maybe we have different ideas of what constitutes a worthwhile visual increase. where is tessellation used n ryse and DA out of curiosity? ive personally gone around trying to find tessellation in c3, and the very rare cases that you see it used, it might as well not even be there. the difference is minuscule.

Hundreds if not thousands of surfaces in Crysis 3 use tessellation, and tessellation enables the vegetation to interact and look the way it does, the particles to look the way they do, the shadows to look the way they do, and the water to look the way it does. Any and every rock in the game, considering entire levels are made of rocks (the last level is basically all rocks and tessellation-rounded alien geometry, plus the second-to-last level and tons of other parts of other levels). I would say it is a big deal.

[Screenshot comparisons: four pairs with tessellation off vs. on]


Not sure what you mean?
 

Steel

Banned
Please, keep advising people not to buy some brand with a higher DX12 feature level because of some alpha-state benchmark.

Colors and that.

What? I own an Nvidia Shield, so I'm solidly locked into Nvidia's ecosystem. I was just saying it'd probably be better to wait for a new generation of video cards from Nvidia than not. But whatever.
 

Marlenus

Member
So, is this situation similar to the whole GeForce FX debacle? Anyone remember that one?

Nothing like it.

This is more that NV tends to produce cards for the now, but AMD produces more future-proof products. By the time DX12 is more mainstream NV will have Pascal out, and possibly the next architecture along. The main reason NV can do this is much higher budgets; they can afford to produce a new architecture every 2 years but AMD cannot, so AMD needs to think longer term to stay relevant.

All it really means is that those who have slower upgrade cycles are likely to be able to hang on to their AMD cards longer than those with NV equivalents.
 

Crisium

Member
Well, it may be sort of like GeForce FX. The GeForce 6800 series was out by the time Half-Life 2 launched, but many users still on the year-old FX series were left with crap DX9 performance in that game, so they had to use DX8. We shall see whether DX12 plays out the same way, but probably not this badly, as that was the worst GPU architecture Nvidia ever launched while Maxwell is still nice overall.
 
Hundreds if not thousands of surfaces in Crysis 3 use tessellation, and tessellation enables the vegetation to interact and look the way it does, the particles to look the way they do, the shadows to look the way they do, and the water to look the way it does. Any and every rock in the game, considering entire levels are made of rocks (the last level is basically all rocks and tessellation-rounded alien geometry, plus the second-to-last level and tons of other parts of other levels). I would say it is a big deal.

[Screenshot comparisons: four pairs with tessellation off vs. on]

Not sure what you mean?
Not seeing the difference in these shots, to be quite honest; the curves don't seem massively impacted by tessellation. Are there specific points in any of those images that look substantially worse? I mean the usual candidates (perfect circles) like the weapon sights or the curves on that big rock seem identical. Crysis 3 is a bit of a pain in the arse for showing this anyway, as screen-space effects like CA are completely washing out details.

All it really means is that those who have slower upgrade cycles are likely to be able to hang on to their AMD cards longer than those with NV equivalents.
Seems like the best summation thus far.
 
Not seeing the difference in these shots, to be quite honest; the curves don't seem massively impacted by tessellation. Are there specific points in any of those images that look substantially worse? I mean the usual candidates (perfect circles) like the weapon sights or the curves on that big rock seem identical. Crysis 3 is a bit of a pain in the arse for showing this anyway, as screen-space effects like CA are completely washing out details.

the first and last comparisons show the biggest difference, but those pictures do more to hurt his argument than mine imo
 
This is more that NV tends to produce cards for the now, but AMD produces more future-proof products.

But then the first Nvidia series that allows DX12 compatibility (Fermi) is more than a year older than the first AMD card that does the same (Southern Islands).

And the current top end Nvidia card is several feature levels above the current top end AMD card.

Replying to every untenable opinion I read here makes me look like an Nvidia fanboy. There are just too many.
 