
Oxide: Nvidia GPUs do not support DX12 Asynchronous Compute/Shaders.

Arkanius

Member
Wow, there are lots of posts here, so I'll only respond to the last one. The interest in this subject is higher than we thought. The primary evolution of the benchmark is for our own internal testing, so it's pretty important that it be representative of the gameplay. To keep things clean, I'm not going to make very many comments on the concept of bias and fairness, as it can completely go down a rat hole.

Certainly I could see how one might think that we are working closer with one hardware vendor than the other, but the numbers don't really bear that out. Since we've started, I think we've had about 3 site visits from NVidia, 3 from AMD, and 2 from Intel (and 0 from Microsoft, but they never come visit anyone ;(). Nvidia was actually a far more active collaborator over the summer than AMD was; if you judged from email traffic and code check-ins, you'd draw the conclusion we were working closer with Nvidia rather than AMD ;) As you've pointed out, there does exist a marketing agreement between Stardock (our publisher) and AMD for Ashes. But this is typical of almost every major PC game I've ever worked on (Civ 5 had a marketing agreement with NVidia, for example). Without getting into the specifics, I believe the primary goal of AMD is to promote D3D12 titles, as they have also lined up a few other D3D12 games.

If you use this metric, however, given Nvidia's promotions with Unreal (and integration with GameWorks), you'd have to say that every Unreal game is biased, not to mention virtually every game that's commonly used as a benchmark, since most of them have a promotion agreement with someone. Certainly, one might argue that Unreal being an engine with many titles should give it particular weight, and I wouldn't disagree. However, Ashes is not the only game being developed with Nitrous. It is also being used in several additional titles right now, the only announced one being the Star Control reboot. (Which I am super excited about! But that's a completely different topic ;))

Personally, I think one could just as easily make the claim that we were biased toward Nvidia, as the only 'vendor' specific code is for Nvidia, where we had to shut down async compute. By vendor specific, I mean a case where we look at the Vendor ID and make changes to our rendering path. Curiously, their driver reported this feature was functional, but attempting to use it was an unmitigated disaster in terms of performance and conformance, so we shut it down on their hardware. As far as I know, Maxwell doesn't really have Async Compute, so I don't know why their driver was trying to expose that. The only other thing that is different between them is that Nvidia falls into Tier 2 class binding hardware instead of Tier 3 like AMD, which requires a little bit more CPU overhead in D3D12, but I don't think it ended up being very significant. This isn't a vendor specific path, as it's responding to capabilities the driver reports.
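Roughly, a vendor-ID fallback like the one described might look like this. This is only an illustrative sketch, not Oxide's actual code: the DXGI vendor-ID query is real, but the EngineConfig struct and flag are made up.

[code]
// Hypothetical sketch of a vendor-ID driven fallback. The DXGI query is
// real; everything around it is illustrative only.
#include <dxgi.h>

struct EngineConfig { bool useAsyncCompute = true; }; // made-up config

void ApplyVendorWorkarounds(IDXGIAdapter* adapter, EngineConfig& cfg)
{
    DXGI_ADAPTER_DESC desc = {};
    if (SUCCEEDED(adapter->GetDesc(&desc)))
    {
        const UINT kNvidiaVendorId = 0x10DE; // PCI vendor ID for NVIDIA
        if (desc.VendorId == kNvidiaVendorId)
        {
            // Driver reports async compute as functional, but using it
            // regressed performance/conformance, so force it off here.
            cfg.useAsyncCompute = false;
        }
    }
}
[/code]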

From our perspective, one of the surprising things about the results is just how good Nvidia's DX11 perf is. But that's a very recent development, with huge CPU perf improvements over the last month. Still, DX12 CPU overhead is far, far better on Nvidia, and we haven't even tuned it as much as DX11. The other surprise is the min frame times, with the 290X beating out the 980 Ti (as reported on Ars Technica). Unlike DX11, minimum frame times are mostly an application controlled feature, so I was expecting them to be close to identical. This would appear to be GPU side variance, rather than software variance. We'll have to dig into this one.

I suspect that one thing helping AMD on GPU performance is that D3D12 exposes Async Compute, which D3D11 did not. Ashes uses a modest amount of it, which gave us a noticeable perf improvement. It was mostly opportunistic: we just took a few compute tasks we were already doing and made them asynchronous. Ashes really isn't a poster child for advanced GCN features.
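For the curious, the "take existing compute work and make it asynchronous" pattern mostly boils down to submitting on a second queue. A minimal D3D12 sketch (the API calls are real; the names are illustrative, resource/PSO setup is omitted, and this is not Oxide's actual code):

[code]
// Sketch: run already-existing compute work on a dedicated compute queue
// so it can overlap the graphics queue.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> CreateComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE; // separate from DIRECT
    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}

// Submit an already-recorded compute command list, and make the graphics
// queue wait on the results only where they are consumed.
void SubmitAsync(ID3D12CommandQueue* computeQueue,
                 ID3D12CommandQueue* graphicsQueue,
                 ID3D12CommandList* computeWork,
                 ID3D12Fence* fence, UINT64& fenceValue)
{
    computeQueue->ExecuteCommandLists(1, &computeWork);
    computeQueue->Signal(fence, ++fenceValue);   // GPU-side signal
    graphicsQueue->Wait(fence, fenceValue);      // GPU-side wait, not CPU
}
[/code]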

Our use of Async Compute, however, pales in comparison to some of the things the console guys are starting to do. Most of those haven't made their way to the PC yet, but I've heard of developers getting 30% GPU performance by using Async Compute. Too early to tell, of course, but it could end up being pretty disruptive in a year or so as these GCN-built and -optimized engines start coming to the PC. I don't think Unreal titles will show this very much, though, so likely we'll have to wait and see. Has anyone profiled Ark yet?

In the end, I think everyone has to give AMD a lot of credit for not objecting to our collaborative effort with Nvidia even though the game had a marketing deal with them. They never once complained about it, and it certainly would have been within their rights to do so. (Complain, anyway; we would have still done it ;))

--
P.S. There is no war of words between us and Nvidia. Nvidia made some incorrect statements, and at this point they will not dispute our position if you ask their PR. That is, they are not disputing anything in our blog. I believe the initial confusion arose because Nvidia PR was putting pressure on us to disable certain settings in the benchmark; when we refused, I think they took it a little too personally.

AFAIK, Maxwell doesn't support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it than not to.

Whether or not Async Compute is better is subjective, but it definitely does buy some performance on AMD's hardware. Whether it is the right architectural decision for Maxwell, or is even relevant to its scheduler, is hard to say.

http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1200#post_24356995


Parallel me if old.
My G-Sync monitor upgrade is starting to bite me in the ass (I still have an AMD GPU, was waiting for Pascal)
 

Ushay

Member
Someone needs to break this down for me. What does this mean for DX12 games going forward? Are we going to see a trend of AMD cards performing more efficiently?
 

Flai

Member
Doesn't async timewarp use this in Oculus Rift...? Although nVidia GameWorks VR has async timewarp, so I guess there is some other way to do this.

My G-Sync monitor upgrade is starting to bite me in the ass (I still have an AMD GPU, was waiting for Pascal)

Why did you decide to buy a G-Sync monitor 9+ months before you can actually use the main feature?
 

Alej

Banned
Our use of Async Compute, however, pales in comparison to some of the things the console guys are starting to do. Most of those haven't made their way to the PC yet, but I've heard of developers getting 30% GPU performance by using Async Compute. Too early to tell, of course, but it could end up being pretty disruptive in a year or so as these GCN-built and -optimized engines start coming to the PC.

But everyone here said asynchronous compute is irrelevant and a marketing gimmick. 30% more perf? What?
I don't understand anymore.
 

Arkanius

Member
Someone needs to break this down for me. What does this mean for DX12 games going forward? Are we going to see a trend of AMD cards performing more efficiently?

In a very simple way, which of course I'll get corrected on further down the thread:

AMD has been betting on parallel computing and asynchronous shading since the 7XXX series with GCN. This architecture is the one used in consoles nowadays as well (Xbox One and PS4). Most console games benefit from low-overhead programming and are coded close to the spec of their hardware.

Enter Mantle, the low-overhead API that AMD wanted to bring to the PC side. This lit a fire under Microsoft, which decided to make DX12 close to what Mantle and AMD were offering. AMD also gave Mantle away for free, and it is now Vulkan (the future of OpenGL).

So now we have two future APIs (DX12 and Vulkan) which benefit a lot from how AMD does things in its architectures.

If games are programmed to use the full DX12 API, for example, they will use async shaders, which are AMD-only as of right now. Hence why, in the Oxide benchmarks, you have a 290X trading blows with a 980 Ti.

TL;DR

Nvidia has wonderful DX11 performance due to their serial architecture
AMD has wonderful DX12 performance due to their parallel architecture
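To make "async shaders" a bit more concrete: here's a minimal Vulkan sketch of how an engine would look for a dedicated compute queue family, which is what the feature rides on. The API calls are real Vulkan 1.0; the helper function itself is illustrative only.

[code]
// Look for a compute-capable queue family that is not graphics-capable,
// i.e. one that can run alongside the graphics queue (GCN exposes these).
#include <vulkan/vulkan.h>
#include <vector>

int FindDedicatedComputeQueueFamily(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    for (uint32_t i = 0; i < count; ++i)
    {
        const VkQueueFlags flags = families[i].queueFlags;
        if ((flags & VK_QUEUE_COMPUTE_BIT) && !(flags & VK_QUEUE_GRAPHICS_BIT))
            return static_cast<int>(i); // dedicated compute family
    }
    return -1; // none found
}
[/code]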

Why did you decide to buy a G-Sync monitor 9+ months before you can actually use the main feature?

I bought the XB27HU; it's the best IPS monitor right now. I'm using the 144Hz and the IPS right now (came from an old TN monitor).
The G-Sync upgrade later on would just be a bonus. I couldn't splurge on a high-end GPU and a monitor at the same time, so I decided to do it as a phased roll-out.
 

Arkanius

Member
How useful is DX12's async-compute for gaming?

If useful, Maxwell will get Kepler-ed.

From the OP :)

Our use of Async Compute, however, pales in comparison to some of the things the console guys are starting to do. Most of those haven't made their way to the PC yet, but I've heard of developers getting 30% GPU performance by using Async Compute. Too early to tell, of course, but it could end up being pretty disruptive in a year or so as these GCN-built and -optimized engines start coming to the PC. I don't think Unreal titles will show this very much, though, so likely we'll have to wait and see. Has anyone profiled Ark yet?
 

sinnergy

Member
But everyone here said asynchronous compute is irrelevant and a marketing gimmick. 30% more perf? What?
I don't understand anymore.

Everyone? Async is pretty much a thing... it's what saved the PS3, as it was the only way to leverage the SPUs/SPEs.

And when they properly use it on next-gen consoles and PCs, it will deliver more performance.
 

OBias

Member
What games currently support DX12 besides Ashes of the Singularity? What future releases this year are going to support it?
 
I feel interested, but it's hard for an ignorant like me to decode a wall of text without more context...
The Nvidia driver says it can do feature X. Devs who want to use it find out it destroys performance. Nvidia suggests devs ignore what the driver says when it's their own.
 
Is this why those first DX12 scores were so favourable to AMD? So it's architecture-based and not something Nvidia will ever be able to fix until they move to a new architecture?
 

Alej

Banned
Everyone? Async is pretty much a thing... it's what saved the PS3, as it was the only way to leverage the SPUs/SPEs.

And when they properly use it on next-gen consoles and PCs, it will deliver more performance.

Some guys here on GAF mocked people, multiple times, for bringing up asynchronous compute in arguments about PS4 performance. If true, it deserves a pretty big wall of shame. Would be hilarious.

But wait. I don't want to be involved in any of this. Leave me alone.
 
AFAIK, Maxwell doesn't support Async Compute, at least not natively. We disabled it at the request of Nvidia, as it was much slower to try to use it than not to.

Whether or not Async Compute is better is subjective, but it definitely does buy some performance on AMD's hardware. Whether it is the right architectural decision for Maxwell, or is even relevant to its scheduler, is hard to say.

The question is whether Maxwell even needs it... Also, it does have it as far as I know...
It reads a bit like the engine was designed completely around a feature that may or may not even be necessary or relevant on anything other than GCN. And for some reason, even the engine dev does not know.
Some guys here on GAF mocked people, multiple times, for bringing up asynchronous compute in arguments about PS4 performance. If true, it deserves a pretty big wall of shame. Would be hilarious.

But wait. I don't want to be involved in any of this. Leave me alone.

No. Please do. Continue please.
 
The question is whether Maxwell even needs it... Also, it does have it as far as I know...

There are always times during your frame when the GPU is under-utilised, like during shadow map rendering or potentially g-buffer filling. Async compute allows you to fill those gaps nicely in a lot of cases.
Even if you have a super fast GPU, it's always nice to use it efficiently.
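To make the gap-filling concrete, here's a hypothetical frame skeleton: shadow rendering is mostly rasterizer/ROP-bound, so an independent compute job can use the idle shader cores at the same time. The queues, fence, and command lists are assumed to be created elsewhere; only the Signal/Wait calls are real D3D12.

[code]
// Illustrative frame skeleton for overlapping compute with a shadow pass.
#include <d3d12.h>

extern ID3D12CommandQueue *gfxQueue, *computeQueue; // created elsewhere
extern ID3D12Fence* computeFence;                   // created elsewhere

void RenderFrame(ID3D12CommandList* shadowPass,
                 ID3D12CommandList* lightCulling,
                 ID3D12CommandList* mainPass,
                 UINT64& fenceValue)
{
    // Kick the shadow pass on the graphics queue...
    gfxQueue->ExecuteCommandLists(1, &shadowPass);

    // ...and an unrelated compute job on the compute queue, concurrently.
    computeQueue->ExecuteCommandLists(1, &lightCulling);
    computeQueue->Signal(computeFence, ++fenceValue);

    // Only the main pass consumes the compute results, so only it waits.
    gfxQueue->Wait(computeFence, fenceValue);
    gfxQueue->ExecuteCommandLists(1, &mainPass);
}
[/code]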

Btw, I also thought Maxwell had support for this, but I don't really work with DX12 yet; stuck on DX11 for a while.
 

wbEMX

Member
What games currently support DX12 besides Ashes of the Singularity? What future releases this year are going to support it?

Fable Legends, which is supposed to come out later this year. Also upcoming is DX12 support for ARK: Survival Evolved (it got delayed to next week), and early next year there will be Deus Ex: Mankind Divided. I also heard rumors about Battlefield 4 getting DX12 after the fact.
 

Alej

Banned
No. Please do. Continue please.

No no, I'm not pursuing any kind of personal vendetta. I just said this in order to explain my surprise. But you do know there was indeed some amount of sarcasm about people bringing up that feature while talking about the PS4. You were there.
 

Renekton

Member
https://docs.unrealengine.com/lates...ing/ShaderDevelopment/AsyncCompute/index.html

A quick Google showed UE4 doesn't seem to do AsyncCompute except on Xbox One? Someone correct me :(

There are always times during your frame when the GPU is under-utilised, like during shadow map rendering or potentially g-buffer filling. Async compute allows you to fill those gaps nicely in a lot of cases.
Even if you have a super fast GPU, it's always nice to use it efficiently.

Btw, I also thought Maxwell had support for this, but I don't really work with DX12 yet; stuck on DX11 for a while.
Nice to know.

Even a 10% improvement is welcome.
 

Majukun

Member
A classic... I have an AMD card, Nvidia is miles better... I switch to Nvidia, now AMD will have an edge with DX12...
 
Wait, so Maxwell is fully DX12 compliant but does not have async compute like AMD cards have? Does this mean that PS4 is almost DX13 levels then due to having this feature as well as hUMA and a supercharged PC architecture which DX12 does not have? If so I can easily see PS4 competing with the next gen Xbox which will assumedly be based on DX13 further delaying the need for Sony to launch a successor. Woah. If this is true I can easily see PS4 lasting a full ten years. Highly interesting development, I can't wait to see what Naughty Dog and co do with this new found power.
 
There are always times during your frame where the GPU is under utilised, like during shadow map rendering or potentially gbuffer filling. Async compute allows you to fills those gaps nicely in a lot of cases.
Even if you have a super fast GPU, it's always nice to use it efficiently.

Btw, I also thought maxwell had suport for this, but I don't really work with DX12 yet, stuck on DX11 for a while.
The most confusing thing is trying to find information about how germane it is to Maxwell 2:
[image: confusion reigns]


No no, I'm not pursuing any kind of personal vendetta. I just said this in order to explain my surprise. But you do know there was indeed some amount of sarcasm about people bringing that feature while talking about PS4. You were there.
I gotchya.
A classic... I have an AMD card, Nvidia is miles better... I switch to Nvidia, now AMD will have an edge with DX12...

It is not exactly clear-cut yet how necessary certain tasks are at all on Maxwell 2 hardware. Regarding Maxwell 1 or Kepler, then yeah, GCN is superior.
 

bj00rn_

Banned
The Nvidia driver says it can do feature X. Devs who want to use it find out it destroys performance. Nvidia suggests devs ignore what the driver says when it's their own.

I finally managed to research the context, and my conclusion ended up being slightly different and hopefully more nuanced.

- AMD's current GPUs are built for Async Compute, which is efficient, but DX11 doesn't use it, so..
- Nvidia's current GPUs are more efficient in DX11 for various reasons, despite not having Async Compute, one reason being driver optimizations (and of course because DX11 doesn't support AC..)
- Async Compute doesn't really matter right now, but it will matter in up to a couple of years' time when DX12 ramps up; AMD is already using it, and Nvidia will most likely have it in their next gen (?)
- Current-architecture AMD GPUs will age relatively well; current-architecture Nvidia GPUs perhaps will not

Does that look correct?

Edit: No, after having read 11 pages of this thread it's obvious that some of what I wrote is not correct..
 

mrklaw

MrArseFace
Nothing magical will happen. If things stay as they are, AMD GPUs will have a better bang for buck compared to their DX11 performance, and Nvidia GPUs a little worse.
 

Renekton

Member
Wait, so Maxwell is fully DX12 compliant but does not have async compute like AMD cards have? Does this mean that PS4 is almost DX13 levels then due to having this feature as well as hUMA and a supercharged PC architecture which DX12 does not have? If so I can easily see PS4 competing with the next gen Xbox which will assumedly be based on DX13 further delaying the need for Sony to launch a successor. Woah. If this is true I can easily see PS4 lasting a full ten years. Highly interesting development, I can't wait to see what Naughty Dog and co do with this new found power.
Look man, I have a below-average IQ (took the test).

I don't need to lose more braincells T__T
 

ps3ud0

Member
Ah, so it's what we derived from a previous thread. Really surprised this wasn't discovered much earlier, around when DX12 was announced.

AMD's activities in consoles and Mantle seem to be ultimately useful to PC gamers.

Things like this make a FreeSync/G-Sync monitor purchase so difficult

ps3ud0 8)
 

Tripolygon

Banned
Wait, so Maxwell is fully DX12 compliant but does not have async compute like AMD cards have? Does this mean that PS4 is almost DX13 levels then due to having this feature as well as hUMA and a supercharged PC architecture which DX12 does not have? If so I can easily see PS4 competing with the next gen Xbox which will assumedly be based on DX13 further delaying the need for Sony to launch a successor. Woah. If this is true I can easily see PS4 lasting a full ten years. Highly interesting development, I can't wait to see what Naughty Dog and co do with this new found power.
This has to be a joke.
 

Sini

Member
Wait, so Maxwell is fully DX12 compliant but does not have async compute like AMD cards have? Does this mean that PS4 is almost DX13 levels then due to having this feature as well as hUMA and a supercharged PC architecture which DX12 does not have? If so I can easily see PS4 competing with the next gen Xbox which will assumedly be based on DX13 further delaying the need for Sony to launch a successor. Woah. If this is true I can easily see PS4 lasting a full ten years. Highly interesting development, I can't wait to see what Naughty Dog and co do with this new found power.

Nice
 

Renekton

Member
Ah, so it's what we derived from a previous thread. Really surprised this wasn't discovered much earlier, around when DX12 was announced.
Because Nvidia announced super early that their GPUs were fully DX12 compliant.

See this older GAF topic: you can see that some people were convinced that Nvidia was fully compliant natively, including Async Compute.

(sorry for the triple edits, I'm on coffee!)
 
Wait, so Maxwell is fully DX12 compliant but does not have async compute like AMD cards have? Does this mean that PS4 is almost DX13 levels then due to having this feature as well as hUMA and a supercharged PC architecture which DX12 does not have? If so I can easily see PS4 competing with the next gen Xbox which will assumedly be based on DX13 further delaying the need for Sony to launch a successor. Woah. If this is true I can easily see PS4 lasting a full ten years. Highly interesting development, I can't wait to see what Naughty Dog and co do with this new found power.

Kinda this...


:'(
 

tuxfool

Banned
The most confusing thing is trying to find information about how germane it is to Maxwell 2:
[image: confusion reigns]



I gotchya.


It is not exactly clear-cut yet how necessary certain tasks are at all on Maxwell 2 hardware. Regarding Maxwell 1 or Kepler, then yeah, GCN is superior.

The thing here is that pre-Maxwell it is going to be a bloodbath. Another thing to consider: that article talks about queues, and it should be noted that in the GCN context the ACEs are much more capable than simple queues (Maxwell could have similar structures, but this isn't documented anywhere).

Also, given Nvidia's lack of strategic focus on Async Compute, it could be that the drivers simply do not expose that functionality in the appropriate manner for game development (I do believe CUDA and OpenCL performance is a lot better on Maxwell).
 

wildfire

Banned
Ah, so it's what we derived from a previous thread. Really surprised this wasn't discovered much earlier, around when DX12 was announced.

AMD's activities in consoles and Mantle seem to be ultimately useful to PC gamers.

Things like this make a FreeSync/G-Sync monitor purchase so difficult

ps3ud0 8)

If you were exposed to AMD's advertising efforts, you would know about their support for async shaders since February. The problem is that DX12 came out 5 months later, and the previous Mantle tests never gave any indication AMD GPUs would benefit from improved multithreading like their CPUs did.

Regardless, it's good to know that with these architectural changes we can see GPUs from both companies offering more performance jumps in the next 2 years than they offered in the last 2.
 

Arkanius

Member
Wait, so Maxwell is fully DX12 compliant but does not have async compute like AMD cards have? Does this mean that PS4 is almost DX13 levels then due to having this feature as well as hUMA and a supercharged PC architecture which DX12 does not have? If so I can easily see PS4 competing with the next gen Xbox which will assumedly be based on DX13 further delaying the need for Sony to launch a successor. Woah. If this is true I can easily see PS4 lasting a full ten years. Highly interesting development, I can't wait to see what Naughty Dog and co do with this new found power.

That's not...
what

I don't even
 
TL;DR

Nvidia has wonderful DX11 performance due to their serial architecture
AMD has wonderful DX12 performance due to their parallel architecture

Nope, AMD had crappy DX11 performance because of really poor DX11 drivers with no multithreading support at all. Nvidia already took care of maxing out DX11 capabilities in their drivers, including multithreading support, so most of the gains DX12 shows on AMD cards are just from having proper drivers to start with, coming from poop.
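For reference, driver-side multithreading is an actual queryable capability in D3D11. A small sketch using the real D3D11 feature query (the helper name is made up):

[code]
// Check whether the D3D11 driver natively executes deferred command
// lists, i.e. real multithreaded driver support. The query is real D3D11.
#include <d3d11.h>

bool DriverSupportsCommandLists(ID3D11Device* device)
{
    D3D11_FEATURE_DATA_THREADING threading = {};
    if (FAILED(device->CheckFeatureSupport(D3D11_FEATURE_THREADING,
                                           &threading, sizeof(threading))))
        return false;
    // TRUE only when the driver itself runs deferred command lists;
    // otherwise the runtime emulates them on the immediate context.
    return threading.DriverCommandLists == TRUE;
}
[/code]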

That wall of text only lacks the AMD logo on it. Except it has one on the original link.
 

Renekton

Member
Nope, AMD had crappy DX11 performance because of really poor DX11 drivers with no multithreading support at all. Nvidia already took care of maxing out DX11 capabilities in their drivers, including multithreading support, so most of the gains DX12 shows on AMD cards are just from having proper drivers to start with, coming from poop.

That wall of text only lacks the AMD logo on it. Except it has one on the original link.
That's the Oxide dev. Is the company aligned with AMD?
 
But everyone here said asynchronous compute is irrelevant and a marketing gimmick. 30% more perf? What?
I don't understand anymore.

Well, that entirely depends on whether you trust the developers and industry vets who are actually making these games, or largely console warriors who simply know their way around an AnandTech article and possess enough knowledge (even pretend knowledge) of a particular subject to unintentionally, or intentionally, mislead people into accepting their position on a topic.
 

Renekton

Member
Why would Nvidia do this? Why are they being dumb?
Either they are a couple driver updates away from Async Compute support, or they expect this feature to be only relevant when Pascal is out.

Regardless, this has not hurt them (or current Nvidia users) any.
 

ps3ud0

Member
Sums it up
But the future is Parallel

Current Nvidia GPUs will not age well.
Pascal is an utter uncertainty as to what architecture it will employ.
Yeah, that's exactly what I'm excited to find out - I naively consider Async Compute to be the future with DX12, especially with the consoles' APUs supporting it - nVidia not considering it that important would be a blow (over-generalisation!).

Ultimately, until nVidia get their DX12 plans sorted, AMD are more competitive as a result - that's a good thing, but it may not matter that much once we've gone past that transitional period of DX11 to DX12 games...

ps3ud0 8)
 