
Oxide: Nvidia GPUs do not support DX12 Asynchronous Compute/Shaders.

Lololol, butthurts everywhere.

From the very start of the thread:


Some of you really have to learn how to read and write on a discussion board, beyond your heathen beliefs, and understand that brand loyalty isn't as strong in the PC sphere as it is on the console or even phone market.

Then, the CPU thing is important to note because the people who buy a 980ti/Fury are usually sitting on top of a high-end i7 with threads to spare. Telling said people that their current cards are obsolete for DX12 because they don't support an AMD feature that isn't part of DX12 is disingenuous and tendentious.

I understand that some of you have some sort of agenda, being GPU brand or console rooted doesn't matter, but some of us don't care about it. You keep saying that Maxwell is doomed for DX12 even when, async, serial or broken, Maxwell keeps being faster in a worst-case scenario than current AMD offerings.

Most people who have a clue will tell you the same, this is pretty good for PS4, not that good for XOne, and meaningless for PC gaming.

I enjoy techie stuff and try to learn how things work, don't care so much about people's feelings about brands tbh. I would enjoy you teaching me where I was wrong in this thread, so I could learn something useful. But remember, I don't care at all about your religion substitutes.

There is a reason for Horse Armour being that funny in this forum

async compute is not "an amd feature not even part of dx 12"
 

bj00rn_

Banned
It's not that complex. If you are on/moving to Windows 10 and plan on playing new games, then from next year AMD will probably offer you the best bang for your buck. They may well "win" in many benchmarks for the next year or two until NVidia gets Async worked out.

It's not complex..? That's a completely different consensus than what I've extracted from the information and atmosphere in this and other forums when it comes to probable and relatively certain practical scenarios. F.ex. there is also a strategy and market driven factor inside the industry itself that will have to find its own least resistance way here. So what do you know to draw these conclusions that nobody else knows.
 

frontieruk

Member
It's not complex..? That's a completely different consensus than what I've extracted from the information and atmosphere in this and other forums when it comes to probable and relatively certain practical scenarios. F.ex. there is also a strategy and market driven factor that will have to find its own least resistance way here. So what do you know to draw these conclusions that nobody else knows.

That for the next couple of years the status quo will hold. If you are at the top end of CPUs, the benefits of GPU compute are lower, which is what played out with Mantle: the biggest benefits for AMD cards were seen with lower-end CPUs.

Games using DX12 don't have to use async compute, and early games that do use it on PC will use it for post-process effects.
 
async compute is not "an amd feature not even part of dx 12"

From this very same post:

[image: DX12 feature support chart]
 
From this very same post:

[image: DX12 feature support chart]

So because it's not on that chart it's not a part of DX12? And classifying something as broad as async compute as an AMD feature is pretty silly. Async compute is going to be much more useful than CR and ROV because the consoles have it. CR and ROV also don't actually enable things we can't already do, from what several devs on B3D have said, nor do we even know if they are performant. If history is a lesson, they aren't.
 
Yeah, the R200 series was pretty good for your money (I have one), but the GTX 970 is really coming down in price now, and it's hard to recommend the older, less efficient 290/290X over that now.

You can get an 8GB 390 for the same or less than a 970.
 
So because it's not on that chart it's not a part of DX12? And classifying something as broad as async compute as an AMD feature is pretty silly. Async compute is going to be much more useful than CR and ROV because the consoles have it. CR and ROV also don't actually enable things we can't already do, from what several devs on B3D have said, nor do we even know if they are performant. If history is a lesson, they aren't.

Not even part of DX12?
I wonder how the devs use it then...

Oh I know, through Multiengine, which is a part of DX12:
https://msdn.microsoft.com/en-us/library/dn899217%28v=vs.85%29.aspx

Can you understand that a certain API could expose a certain HW feature without that feature being an obligatory (in this case not even optional) part of said API?

So far we know DX11 was unable to use this HW feature, or at least it was difficult enough that AMD didn't bother to implement it.

All DX12 cards are capable of running both compute and rendering tasks at the same time. Currently, it is believed that only GCN cards can run compute tasks without slowing down rendering tasks. That's the point and no other.
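For reference, this is roughly what "using Multiengine" looks like on the code side; a minimal D3D12 sketch only (error handling omitted, and MakeComputeQueue is just a name I made up), not anyone's actual engine code:

// Minimal D3D12 sketch: creating a dedicated compute queue ("Multiengine").
// The API exposes DIRECT (3D), COMPUTE and COPY queue types; whether the GPU
// actually executes them concurrently is a hardware/driver question.
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> MakeComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type     = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    desc.Priority = D3D12_COMMAND_QUEUE_PRIORITY_NORMAL;

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue; // command lists submitted here may overlap with the 3D queue
}

Whether work submitted on that queue actually overlaps with the 3D queue is exactly the hardware question being argued about in this thread.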
 

bj00rn_

Banned
That for the next couple of years the status quo will hold. If you are at the top end of CPUs, the benefits of GPU compute are lower, which is what played out with Mantle: the biggest benefits for AMD cards were seen with lower-end CPUs.

Games using DX12 don't have to use async compute, and early games that do use it on PC will use it for post-process effects.

Yeah, this scenario is closer to the consensus from what I've read, and shows factors that make the practical world of consumers a little bit more complex than just "does this GPU have async compute or not".
 

KKRT00

Member
So because it's not on that chart it's not a part of DX12? And classifying something as broad as async compute as an AMD feature is pretty silly. Async compute is going to be much more useful than CR and ROV because the consoles have it. CR and ROV also don't actually enable things we can't already do, from what several devs on B3D have said, nor do we even know if they are performant. If history is a lesson, they aren't.

DICE very much complained about the lack of ROV in GCN in their SIGGRAPH presentation. Transparent multilayered geometry is a pain in the ass to do without them.
 

Vinland

Banned
If such a paradigm shift is occurring on consoles, why not on PC too via Vulkan and DX12? Game engines are game engines. The less reworking required the better.

Oh I agree. If the mind share among developers and tool makers can get it down to an easy-to-manage process then everyone wins, and PC may even start widening the gap.

The comment you quoted, though, was in response to dr. Apoc. He is being far too negative and dismissing products others enjoy (I have a 290X 8GB and love it) to defend the other fantastic products made by his vendor of choice.
 
Can you understand that a certain API could expose a certain HW feature without that feature being an obligatory (in this case not even optional) part of said API?

So far we know DX11 was unable to use this HW feature, or at least it was difficult enough that AMD didn't bother to implement it.

All DX12 cards are capable of running both compute and rendering tasks at the same time. Currently, it is believed that only GCN cards can run compute tasks without slowing down rendering tasks. That's the point and no other.

GTFO, you did not say async compute was not an OBLIGATORY part of DX12, you said it was NOT a part of DX12. For shame:

Then, the CPU thing is important to note because the people who buy a 980ti/Fury are usually sitting on top of a high-end i7 with threads to spare. Telling said people that their current cards are obsolete for DX12 because they don't support an AMD feature that isn't part of DX12 is disingenuous and tendentious.

The amount of bullshit that comes out of you is astounding.
 

FrunkQ

Neo Member
It's not complex..? That's a completely different consensus than what I've extracted from the information and atmosphere in this and other forums when it comes to probable and relatively certain practical scenarios. F.ex. there is also a strategy and market driven factor inside the industry itself that will have to find its own least resistance way here. So what do you know to draw these conclusions that nobody else knows.

Nothing!

From the perspective of "I am not sure what is going on here, but should I get a new graphics card?" - which was the query I was answering, it is not complex. The market price for low/mid range hardware keeps things more or less even... you get x bang for y buck with either colour... and it will largely not matter as the rest of the system throws in a lot of unknowns.

But...

From a technical perspective it is a fascinatingly complex interplay of technical knowledge, educated guesswork, marketing bullshit, gfx card fanboys providing bias and a healthy dose of crystal ball gazing.

The truth is we will not know until there are more tests on more varied hardware (with a different CPU/GPU balance), more DX12 games/benchmarks appear and a few driver updates are offered by the vendors. And even then I imagine the answer will probably not be black and white (or red and green :) ) as prices will change and other factors come into play that uncover YOUR particular system bottleneck.

This is why pretty much every discussion on the matter is kinda pointless... but fascinating nevertheless. AMD potentially turning the tables on NVidia in the DX12 world is an amazing coup... and when coupled with a DX12 O/S being "free", meaning a fast, wide adoption rate of DX12, things get VERY serious. Given that green has generally beaten red in the high-end graphics stakes (and the kudos & sales this brings), this is going to be a fascinating story to follow.

Where I *do* see a lot of value in this discussion is the whole benefits of async GPU/GPGPU which console developers have been working with for a while. With these new games actively utilising GPGPU we see Cerny's predictions writ large and coders seriously taking advantage of the PS4's potential horsepower... despite people's assertions of it having a "crappy CPU". :/ Again the truth is not that simple, whatever PC-zealots like to believe; hitting lower spec hardware at a lower level does indeed overcome a HUGE number of potential performance problems which normally manifest in more abstracted PC hardware. Here we are seeing a little of what those performance benefits may be.
 

Locuza

Member
Can you understand that a certain API could expose a certain HW feature without that feature being an obligatory (in this case not even optional) part of said API?

So far we know DX11 was unable to use this HW feature, or at least it was difficult enough that AMD didn't bother to implement it.

All DX12 cards are capable of running both compute and rendering tasks at the same time. Currently, it is believed that only GCN cards can run compute tasks without slowing down rendering tasks. That's the point and no other.
I can understand it; I can't understand calling it an AMD feature which is not part of DX12.

Not all DX12 cards are capable of running both tasks at the same time.
Intel definitely can't:
Slide 18 said:
Exposes GPU Parallelism
Multi-Engine and Multi-GPU

3D and Compute are not simultaneous
Queues may pre-empt each other

Copy might be simultaneous with 3D and Compute
Driver will make decision based on GPU performance and capabilities
http://portfolio.punkuser.net/3d_optimization_for_intel_graphics_gen9_idf2015.pptx

And from the look of it, Nvidia doesn't effectively support it either.
 

KampferZeon

Neo Member
Nvidia has a serious problem IMHO. Because it's not just console vs gaming PC. Any low power CPU+GPU platform will benefit. Meaning mobile, laptop, console, even low end PC vs high end PC.

The trend towards an ever greater degree of parallelism has finally caught up with CPU + GPU platforms.

Also, I would suggest using Cerny's term "async fine-grained compute" in this thread; without fine grain there is no performance benefit to going async.
 

GRaider81

Member
I just traded my 970 for a 980ti.

Worried I've wasted money.

What I'd like to know, in realistic terms, is when we will see this make a difference in games. I'm happy to upgrade again next year should I need to, whether it's AMD or Nvidia I don't care.

Is it worth jumping ship just yet, or does Nvidia's advantage in DX11 negate any AMD gains for now?
 
I just traded my 970 for a 980ti.

Worried I've wasted money.

What I'd like to know, in realistic terms, is when we will see this make a difference in games. I'm happy to upgrade again next year should I need to, whether it's AMD or Nvidia I don't care.

Is it worth jumping ship just yet, or does Nvidia's advantage in DX11 negate any AMD gains for now?

It will not make your framerates worse, but it means you will not enjoy the framerate gains you would have enjoyed if you went for a Fury X instead.
 

GRaider81

Member
It will not make your framerates worse, but it means you will not enjoy the framerate gains you would have enjoyed if you went for a Fury X instead.


And how does a Fury X handle DX11 games now?

I'm maybe way off base, but what I'm reading is:
For current games and DX11 it's best to have Nvidia, but once we see DX12, AMD is the way to go?

Feels like I got into PC gaming at the wrong time! Though I guess it's always like this to an extent.
 

Arkanius

Member
And how does a Fury X handle DX11 games now?

I'm maybe way off base, but what I'm reading is:
For current games and DX11 it's best to have Nvidia, but once we see DX12, AMD is the way to go?

Feels like I got into PC gaming at the wrong time! Though I guess it's always like this to an extent.

DX11?
Equal to, or a bit behind (4-5 frames maybe), the 980 Ti.
 

wachie

Member
The fact that Nvidia has been very silent about this whole episode is quite concerning. Usually they are quick to respond to these things, even if it is with arrogance.
 
GTFO, you did not say async compute was not an OBLIGATORY part of DX12, you said it was NOT a part of DX12. For shame:



The amount of bullshit that comes out of you is astounding.

Chill out, buddy. You should work on your comprehension of what an API is. That statement remains true to this date.

AMD taking advantage of a certain DX12 feature for their own benefit doesn't make that stuff part of DX12. Every vendor has to fulfill certain requisites to match each DX version's guidelines; that leaves them plenty of room to do it their own way, including supporting it in a way that is harmful to performance. Async compute is an AMD HW feature that no other vendor enjoys at the moment, not a DX12 feature or requisite.

The Skylake iGPU is currently the most complete DX12-compliant GPU money can buy; that doesn't mean it has a brighter future than Titan or Fury. Is that hard to understand? At the end of the day it's about who does it faster.

Then, about people criticizing me for being negative, all I have to say is that I'm a PC gamer with a still-beefy-enough 6c/12t Intel CPU. So it's natural for me not to get that excited about something that won't benefit me as much as WDDM 2.0. Of course, those with a 280X and up can have some hopes for this. But every PC gamer who's been around long enough will advise you to hold your horses. And PS4 users have what they already knew they had from day one.

Early DX12 literature told us it would mostly benefit FX CPUs, but then early tests showed how i3s widened their advantage even more.

Don't buy, don't sell and enjoy the show.
 

ZOONAMI

Junior Member
And how does a fury x handle dx11 games now?

Im maybe way off base but what im reading is,
For current games and dx11 its best to have nvidia but once we see dx12, AMD is the way to go?

Feels like I got into pc gaming at the wrong time! Though I guess its always like this to an extent

For the most part a fury x keeps up with a stock 980 ti in dx11. Fury x will be better than a 980 ti in dx12, or at least that's what it's looking like. Of course the 980 ti OCs like a monster, so will probably close the gap on a fury x even in dx12 if you are willing to oc heavily. My 980ti is running at 1500/8000. So a 15% OC. Fury X does not OC very well, with a lot of people lucky to hit 75-100mhz on the core. If AMD unlocks the voltage and memory overclocking, it might do much better.

If I could do it again I would have gotten a fury x instead of a 980 ti.
 

KKRT00

Member
What I'd like to know, in realistic terms, is when we will see this make a difference in games.

Nobody knows that. It could be up to 25% or only 5% when comparing AMD vs Nvidia cards in the same performance bracket, no one knows.
People definitely should not jump the gun to replace their GPUs; that's just irrational.
 
And how does a Fury X handle DX11 games now?

I'm maybe way off base, but what I'm reading is:
For current games and DX11 it's best to have Nvidia, but once we see DX12, AMD is the way to go?

Feels like I got into PC gaming at the wrong time! Though I guess it's always like this to an extent.

It's way too early to make a call on this imo. I would just keep the 980ti for now.

All this hand-wringing about async seems odd, at least on the high end. There might be some PC exclusives down the road, or maybe they can scale some things up, but I doubt any decent PC will have issues replicating features designed to work on the consoles, with or without async. Maybe I'm not thinking big enough though.
 

VariantX

Member
lol come on guys, enough with this stuff. 980ti's are ballin' cards for the top end. I wish I had one. Y'all need to chill you got some killer cards.

Yep. I'm not going to pretend I'm super educated about this stuff, but I know enough to say there's not enough info available right now, particularly with actual games built with DX12 in mind, to be worried about it at this point.
 

Schnozberry

Member
Some Gameworks title will be released in the next few months that has some code workload to exploit an AMD driver flaw or design choice and we'll all be doing this bloodletting again. AMD has a rare PR victory here. Well done.
 

dogen

Member
My last 3 GPUs are all AMD. Even if AMD can do async and Nvidia can't is it expected for AMD to see large gains across the board in dx12? My assumption is no, doesn't ashes of singularity rely massively on async which is why we see these huge gains for AMD?

Another assumption is that most new games won't be using async as heavily as Ashes of Singularity. Can a dev or an enthusiast chime in on this? How crippling is this blow to game development for Nvidia not supporting async?

Ashes only uses a modest amount according to oxide themselves. I'd say you can expect people like dice, 4a games, and whoever makes graphically advanced games to use a lot more.
 
lol come on guys, enough with this stuff. 980ti's are ballin' cards for the top end. I wish I had one. Y'all need to chill you got some killer cards.

They're also extremely expensive cards. I wouldn't be happy if the Ashes benchmark extrapolates to a big chunk of DX12 games next year.
 
Ashes only uses a modest amount according to oxide themselves. I'd say you can expect people like dice, 4a games, and whoever makes graphically advanced games to use a lot more.
Modest is certainly one way of putting it. That one post by oxide mentioned 30% (I believe) of their rendering is done through Async or something. When they referenced people who apparently are doing "a lot more" they referenced a PS4 exclusive.

I am not sure if one should expect PC ports or even PC exclusives to extensively use such apparently "hardware-specific" engine design: especially given Nvidia's control of the market.
They're also extremely expensive cards. I wouldn't be happy if the Ashes benchmark extrapolates to a big chunk of DX12 games next year.

Be serious with yourself, how likely would that be? How often do devs design entire game engines around features that most people cannot take advantage of / 70% of hardware cannot run without terrible performance?

Expect two paths in the worst-case scenario: DX12 games flag NV hardware and render things differently, while AMD users would basically see the "console" path.

This hardware-specific flagging has always existed. The same thing would happen if a game used ROV, just with AMD being the one "flagged" as the exception.
 

dogen

Member
Modest is certainly one way of putting it. That one post by oxide mentioned 30% (I believe) of their rendering is done through Async or something. When they referenced people who apparently are doing "a lot more" they referenced a PS4 exclusive.

I am not sure if one should expect PC ports or even PC exclusives to extensively use such apparently "hardware-specific" engine design: especially given Nvidia's control of the market.

Nah, they said only around 20% of their rendering is done with compute shaders, and then they made part of that 20% asynchronous.

He also said they're projecting the next version of their engine to be 50% compute, which seems more in line with what other engines are doing (moving as much as possible to compute).
 

KKRT00

Member
Be serious with yourself, how likely would that be? How often do devs design entire game engines around features that most people cannot take advantage of / 70% of hardware cannot run without terrible performance?

Expect two paths in the worst-case scenario: DX12 games flag NV hardware and render things differently, while AMD users would basically see the "console" path.

This hardware-specific flagging has always existed. The same thing would happen if a game used ROV, just with AMD being the one "flagged" as the exception.

I'm pretty sure we will just get Async ON/OFF in options. Putting post-processing, for example, behind two branches (async/non-async) is probably quite easy to do, especially when they are already writing it on consoles, so I think many games will use it like that, as a bonus. I bet that most compute tasks are designed normally first and then rewritten to async, so a non-async path is always available.
No one will focus their whole compute budget on it though, and it's not like compute is the majority of the rendering budget. It is probably up to 30-40%, and probably only up to 50% of that is 'asyncable', so generally only up to 20% gains at most.
Still a nice boost, but nothing really groundbreaking; decreasing the precision of some effect or designing a better algorithm can yield a similar boost on high-end settings.
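To illustrate the ON/OFF idea, here's a rough sketch I put together (hypothetical names throughout, not from any real engine): the same post-process work either gets submitted on a dedicated compute queue, where capable hardware can overlap it with graphics, or it's recorded inline on the direct queue and simply runs serially.

// Hypothetical sketch of an "async compute" toggle in a D3D12 renderer.
// With the toggle on, a compute command list is submitted on its own queue
// (hardware that supports concurrent execution can overlap it with graphics);
// with it off, the same dispatches are assumed to have been recorded on the
// graphics list instead, so they just run serially on the direct queue.
void SubmitPostProcess(bool asyncCompute,
                       ID3D12CommandQueue* directQueue,
                       ID3D12CommandQueue* computeQueue,
                       ID3D12GraphicsCommandList* graphicsList,
                       ID3D12GraphicsCommandList* computeList)
{
    if (asyncCompute)
    {
        computeList->Close();
        ID3D12CommandList* lists[] = { computeList };
        computeQueue->ExecuteCommandLists(1, lists);
        // A fence is still required before the frame consumes these results.
    }
    else
    {
        graphicsList->Close();
        ID3D12CommandList* lists[] = { graphicsList };
        directQueue->ExecuteCommandLists(1, lists);
    }
}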
 

dogen

Member
I'm pretty sure we will just get Async ON/OFF in options. Putting post-processing, for example, behind two branches (async/non-async) is probably quite easy to do, especially when they are already writing it on consoles, so I think many games will use it like that, as a bonus. I bet that most compute tasks are designed normally first and then rewritten to async, so a non-async path is always available.
No one will focus their whole compute budget on it though, and it's not like compute is the majority of the rendering budget. It is probably up to 30-40%, and probably only up to 50% of that is 'asyncable', so generally only up to 20% gains at most.
Still a nice boost, but nothing really groundbreaking; decreasing the precision of some effect or designing a better algorithm can yield a similar boost on high-end settings.

I believe Redlynx is doing all compute in their engine asynchronously.
"...we are doing lighting, post processing, VT page generation and particle rendering using async compute in the background. Only culling + g-buffer draws + shadow draws are in the render pipeline. Everything else is async compute."

So if the next Trials game has DX12 (or Vulkan) support, maybe we'll get to see that on PC.
 
Nah, they said only around 20% of their rendering is done with compute shaders, and then they made part of that 20% asynchronous.

He also said they're projecting the next version of their engine to be 50% compute, which seems more in line with what other engines are doing(moving as much as possible to compute).
Perhaps I am conservative in thinking that 1/5 of all rendering being done in compute shaders asynchronously is already not too bad.
I believe Redlynx is doing all compute in their engine asynchronously.
"...we are doing lighting, post processing, VT page generation and particle rendering using async compute in the background. Only culling + g-buffer draws + shadow draws are in the render pipeline. Everything else is async compute."

So if the next Trials game has DX12 (or Vulkan) support, maybe we'll get to see that on PC.

Redlynx is one (small) studio.

I still have yet to be convinced that we will see large-scale, mandatory, and extensive engine integration of a DX12 (optional) feature that cripples performance on 70% of PC hardware.

An optional path for games? Of course.
 

dogen

Member
Perhaps I am conservative in thinking that 1/5 of all rendering being done in compute shaders asynchronously is already not too bad.


Redlynx is one (small) studio.

I still have yet to be convinced that we will see large-scale, mandatory, and extensive engine integration of a DX12 (optional) feature that cripples performance on 70% of PC hardware.

An optional path for games? Of course.

No.. 20% of rendering is compute. Some percentage of that is async. Not all 20%.

And nvidia can likely handle async code as if it wasn't async. Intel doesn't support async graphics+compute at all, so there's likely some way to serialize it in the driver.

So, if a game is doing a lot of things in compute, and most compute things asynchronously, it shouldn't penalize nvidia and intel, they just won't overlap compute and graphics. So it's a positive for certain architectures, and a neutral for the rest, not necessarily a negative.
 
No.. 20% of rendering is compute. Some percentage of that is async. Not all 20%.

And nvidia can likely handle async code as if it wasn't async. Intel doesn't support async graphics+compute at all, so there's very likely some way to serialize it in the driver.
Oh. I thought their entire PP was done in compute AND async. My bad.

Given your second comment, what on earth does anyone have to worry about at all? Slightly different relative performance between AMD and NV? Seems like a similar non-issue to the difference in tessellation performance between AMD and NV.
 
There's nothing generous in saying that the different behaviour may be because of pre-emption granularity difference.
I'm saying that referring to a non-granular system as merely having a "granularity difference" is generous and misleading. You're implying that NV's approach is somewhat granular, but it really isn't.

How do you know that it's "broken"? Are all i5s "broken" compared to i7s since they don't have HT enabled? What about if we look at gaming workloads specifically?
I refer to it as "broken" because NV refer to it as "fully compliant." Yes, it doesn't crash in response to the command, but the operations intended to improve performance instead degrade it. So I assume it's actually intended to deliver the claimed functionality, and generously refer to it as broken, yes. But you may be right too; maybe it was never intended to work correctly, and they were just misleading us when they said it would.

A lot of people around here are claiming that and AMD seems to be claiming that as well.
Then I imagine you won't have any trouble providing us with some links.

If there are such resources, which is totally dependent on the GPU's architecture.
Completely untrue. There are always unused resources, because not every processor is needed in every phase of the rendering pipeline. Try to keep up.

"Noticeable" can mean anything. They aren't giving an exact figure which makes me doubt that it's anything to brag about.
So you claim they admitted to not getting a lot of performance out of the feature, despite his actual statement being that he got a noticeable improvement with only a modest amount of effort. When I call you out on completely misrepresenting what he said, your defense is, "No, he's the liar!!" ><

It is a useful technique on some architectures, it is not on the others, and there is no direct indication of not being able to cope just fine without it in DX12.
It's a useful technique on any architecture that implements it correctly.

Right, and the results of Kepler cards which are showing total time as even less than compute+graphics are what exactly? You also seem to miss the part where Maxwell cards are several times faster than GCN ones even doing compute+graphics serially.
This benchmark isn't designed to test actual performance; the GCN cards are dispatching jobs half-filled. This benchmark merely tests for the presence of fine-grained compute. The AMD cards pass that test, while the NV cards fail. We can't compare fine-grained performance because the current NV cards aren't capable of doing it at all.

There is no clear understanding of what's going on in this benchmark right now, so don't try to oversimplify what we're looking at.
Did that clear things up for you?


I wouldn't interpret beyond3d's benchmark yet, it needs a lot of tweaking.

Why? My GTX 970 outperforms Fury X in compute by a factor of 5, and by a factor of 3 when it's using async.
Again, this isn't a performance test. It merely tests for the presence of fine-grained compute.
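For anyone who wants the gist of what that test measures: time the graphics workload alone, the compute workload alone, and then both submitted together, and compare. A rough sketch of that comparison (my own made-up helper, not the B3D code; the cut-off values are arbitrary):

#include <algorithm>
#include <string>

// Given three measured GPU times (graphics alone, compute alone, and both
// submitted at the same time on separate queues), guess whether the work
// actually overlapped. Purely illustrative; the thresholds are arbitrary.
std::string ClassifyAsyncBehaviour(double tGraphics, double tCompute, double tBoth)
{
    const double overlapped = std::max(tGraphics, tCompute); // ideal concurrency
    const double serialized = tGraphics + tCompute;          // no concurrency

    if (tBoth <= 0.5 * (overlapped + serialized))
        return "concurrent: compute is running alongside graphics";
    if (tBoth <= 1.05 * serialized)
        return "serialized: the queues are accepted but not overlapped";
    return "worse than serial: likely paying context-switch overhead";
}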


Lololol, butthurts everywhere.
Your Pants != Everywhere

I enjoy techie stuff and try to learn how things work
Then you should mock less and listen more.

don't care so much about people's feelings about brands tbh.
Yes, clearly. :rolleyes:

There is a reason for Horse Armour being that funny in this forum
I didn't think he was that funny. I thought the poor guy was as confused as you are. I was gonna type a long post to try to explain this stuff to him.


I understood very little of the technical stuff. I just have one question: If even without async nVidia cards perform better, why should I care? I want to upgrade my rig and I am completely and utterly confused by all this.
The short version is, if you need a new card today, you're probably better off getting a decent AMD card and as always, newer is better. If you don't need a card today, wait and see if Pascal supports fine-grained compute, and make your decision at that point.


Also, I would suggest using Cerny's term "async fine-grained compute" in this thread; without fine grain there is no performance benefit to going async.
Not entirely true, since merely being asynchronous can free up the CPU to perform other tasks. That said, yes, fine-grained is where the "free compute" comes from; without it, you're forced to halt rendering to run compute jobs.


AMD taking advantage of a certain DX12 feature for their own benefit doesn't make that stuff part of DX12.
Dude, you're being fucking ridiculous. You refer to it as a "DX12 feature" yourself. That means it's "part of DX12." You can't even keep your goalposts in the same place for an entire sentence, FFS. ><


Ashes only uses a modest amount according to oxide themselves. I'd say you can expect people like dice, 4a games, and whoever makes graphically advanced games to use a lot more.
Yup.


Modest is certainly one way of putting it. That one post by oxide mentioned 30% (I believe) of their rendering is done through Async or something.
Well, modest is a relative term, and if it's the post I'm thinking of, he also said they could actually move the entire thing to 100% compute, and not do any conventional rendering at all. As I recall, that was actually Kutaragi's original plan for the PS3; pure compute with a second Cell instead of the RSX.


I'm pretty sure we will just get Async ON/OFF in options. Putting post-processing, for example, behind two branches (async/non-async) is probably quite easy to do, especially when they are already writing it on consoles, so I think many games will use it like that, as a bonus. I bet that most compute tasks are designed normally first and then rewritten to async, so a non-async path is always available.
No one will focus their whole compute budget on it though, and it's not like compute is the majority of the rendering budget. It is probably up to 30-40%, and probably only up to 50% of that is 'asyncable', so generally only up to 20% gains at most.
Still a nice boost, but nothing really groundbreaking; decreasing the precision of some effect or designing a better algorithm can yield a similar boost on high-end settings.
You still don't seem to get it. Yes, there are plenty of tricks you can employ to achieve similar performance gains, but you forget/ignore those same tricks are still available to the fine-grained systems.


So it's a positive for certain architectures, and a neutral for the rest, not necessarily a negative.
Well, more negative than neutral, I'd say, because the context switching can actually penalize them. They actually need to segregate the job types as much as possible to avoid context switching.
 