
Oxide: Nvidia GPUs do not support DX12 Asynchronous Compute/Shaders.

Kinthalis

Banned
The only DX12 benchmark we have, for a game which uses a 'moderate' amount of async compute, performs as well on Nvidia's flagship GPU, which has a busted async implementation, as it does on AMD's flagship GPU, which is awesome at async compute.

Does not compute.

Let me help you

-> Does
-> Not
-> compute

There, async does not compute ;) Good point though. We'll definitely need to see games that perform better on AMD hardware before people start paying attention.
 

Kezen

Banned
I doubt we'll be seeing many games implementing much async compute on PC for a while, unless AMD is willing to spend money and help developers implement it in games on their PC releases.

Odds are it will continue to be Nvidia who does the legwork and invests in their own tools.

Time will tell I guess.

Deus Ex will use async compute on PC (DX12); other games use it on consoles, but nothing is confirmed for PC. If they target DX11 exclusively (Mirror's Edge, Tomb Raider), then no.
 

KKRT00

Member
I doubt we'll be seeing many games implementing much async compute on PC for a while, unless AMD is willing to spend money and help developers implement it in games on their PC releases.

As I said before, compute tasks that have an async mode also have non-async versions, so at worst, in Nvidia's case, we will see an option in the graphics settings for async compute ON/OFF, or something like OFF/LOW/HIGH.
 
The conclusion is that we don't have finalized drivers, so async currently isn't working as intended on Nvidia GPUs for developers.

Whether Nvidia GPUs will be penalized by async compared to similarly specced AMD GPUs in the future remains to be seen, but for now we don't have any info except the fact that Nvidia said the final drivers that fully enable async are not ready.

On Maxwell/Maxwell 2, the problem isn't one of drivers; it's Nvidia's GPU architecture, which can't match AMD's GCN in the area we are discussing here.

And the common consensus is that this may continue with Pascal, as iirc NV's GPU roadmap showed that Pascal will essentially be Maxwell + HBM. Volta after Pascal may be a different case, but that is late 2017.
 

Kezen

Banned
Sold my new 980 Ti and bought a Radeon 9700 Pro to replace it whilst I wait for the new AMD stuff.

What a beast of a card that was. The first DX9 GPU on the market at the time.

And the common consensus is that this may continue with Pascal, as iirc NV's GPU roadmap showed that Pascal will essentially be Maxwell + HBM.
There is nothing to suggest that.
Pascal may have been in active development way before Maxwell shipped. I don't think Nvidia are naive enough not to take a cue from their rival regarding async compute; this is a feature which will be used commonly on PC in the next few years.
 

bj00rn_

Banned
Holy crap, one could almost get nauseous from all the agenda-driven conjecture in here. Interesting in many ways though...
 

bj00rn_

Banned
but isn't that part of the fun? I mean, I think it's fun to read fanboy drivel at least. Generally.

Yes that's why I wrote it's interesting. I was lol-ing from amusement.

I feel sorry for those who are trying to decode some of it for objective facts though :)
 
This thread is the perfect fucking example of why we can't have any real discussion about console capabilities here on GAF. Anything positive for consoles is nuked to death by a very vocal minority who hurl personal attacks at anyone talking positively about console architecture.

I mean, seeing DerExperte spreading FUD about the users talking here, even though those claims could be turned back on him just the same, what the fuck am I reading, really? It's GameFAQs-level insanity and it's really wearing me down.

I came across NeoGAF in 2007, coming from B3D. Back in the day I was mostly interested in console architecture and games in general. I was frankly a PC gamer at the time and my PS2 was collecting dust. I remember the big flamewars on this board about the 360 and PS3; they were very cringeworthy to read. I made an account but failed to see a reason to participate.

This thread proves I was right about this. It would have been a good time to talk about async compute, because back then we didn't really know why the hell the PS4 was designed so heavily around that feature. And the OP talks about the performance advantages of using it on consoles. I never implied that async compute would save the consoles or anything like that, or that it would make the PS4 a high-end PC or something. I never said that, and I never will. I just can't have a serious discussion about this facet of the OP; some guys are totally allergic to it.

And yet, like that pink panther avatar guy, in every discussion I join I find myself flagged as a console warrior who shouldn't be trusted or something. I mean, seriously? Maybe I'm naive or too enthusiastic about my PS4, I don't care. But why attack me? Why am I something that needs to be defended against? Seriously? There is no tech discussion to be had here about consoles. I won't regret leaving this shit bath.

I'll play some Civ games instead. (*wink* at the mod who banned me last year with a "go back to playing your fantastic PS4 instead" while I was explaining to a bunch of angry guys why I like playing on PS4, in a thread whining about why people like consoles.)

Wow, I'm still amazed at how hypocritical some people can be. You have been one of the most toxic posters in this thread, attacking everyone who tried to explain things that weren't convenient for your agenda.

Personal attacks then? Personal attacks.
*popcorn*

You weren't taken seriously from the get-go, BTW.

Then you were fully supportive of the LSD cat in his delirium, the same guy you are bashing now in a pitiful attempt to distance yourself from him after his dismissal.

Thanks for this, as usual.
That explains why I was blown away by The Tomorrow Children beta. Or why some upcoming games like UC4 or Horizon look that good...

After not sharing a single insightful comment about the subject, and after attacking and defaming people whose views ran contrary to your interests, now that you've been proven totally wrong you play the victim.

Shameful behavior.



Back to the topic: earlier in this thread I explained how Nvidia was able to bypass DX10/DX11 limits through CUDA, and this turned out to be the key part of this whole Asyncgate.

What the B3D guys discovered is that Nvidia's scheduler is sending all tasks to the rendering queue, ignoring the compute one. But they then showed that titles using CUDA use all the queues correctly, so what seems to be happening is that the current software scheduler is unable to recognize compute tasks coming from DX12.

Just remember that Nvidia uses a software scheduler whereas GCN uses a hardware one, the latter being able to recognize workloads on its own.

Many people already theorized that the current Nvidia DX12 driver is just a placeholder until a truly working one ships, so they might be proven right after all.
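For anyone who wants to see what "separate queues" actually means at the API level, here's a minimal sketch using the stock D3D12 calls (nothing Oxide-specific; the device, the two pre-recorded command lists, and the function name are my own assumptions, and error handling is skipped):

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: a D3D12 app talks to the GPU through explicit command queues.
// The direct queue accepts graphics + compute work, the compute queue accepts
// compute/copy work only. Whether the two actually overlap on the GPU is up to
// the hardware/driver scheduler - which is exactly what this thread is about.
void SubmitToBothQueues(ID3D12Device* device,
                        ID3D12GraphicsCommandList* gfxList,      // assumed pre-recorded
                        ID3D12GraphicsCommandList* computeList)  // assumed pre-recorded
{
    ComPtr<ID3D12CommandQueue> gfxQueue, computeQueue;

    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;   // graphics + compute + copy
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&gfxQueue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // compute + copy only
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&computeQueue));

    // Two independent submissions; the API itself makes no promise about concurrency.
    ID3D12CommandList* g[] = { gfxList };
    ID3D12CommandList* c[] = { computeList };
    gfxQueue->ExecuteCommandLists(1, g);
    computeQueue->ExecuteCommandLists(1, c);
}
```

The whole argument above is about what happens after those two ExecuteCommandLists calls: GCN's hardware scheduler can interleave the work on its own, while on current Nvidia drivers the compute submissions apparently end up behind the rendering queue.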
 

KingSnake

The Birthday Skeleton
If this turns out to be just a driver issue for Nvidia (at least for Maxwell) then this thread will take a very hilarious turn.

I think the new statement from the Oxide developer should be in the OP.
 

W!CK!D

Banned
If this turns out to be just a driver issue for Nvidia (at least for Maxwell) then this thread will take a very hilarious turn.

I think the new statement from the Oxide developer should be in the OP.

You have to understand that Async Compute is a feature that is intended to increase performance. The problem is that Nvidia GPUs scale negatively with Async Compute: using this feature reduces the performance of Nvidia GPUs. A 2013 AMD GPU being on par with Nvidia's 2015 top-tier GPU is a complete disaster. What Nvidia wants to do now is optimize their drivers in order to prevent their GPUs from scaling negatively with Async Compute. It's not about getting more performance out of Async Compute; it's all about preventing negative scaling. This is not just a driver issue. It is an issue that is due to different architectural approaches. All Nvidia can do now is damage control. If they can't get rid of the negative scaling, Nvidia GPUs will be in deep trouble for the rest of this generation, since GCN is designed for Async Compute.
 

Arkanius

Member
You have to understand that Async Compute is a feature that is intended to increase performance. The problem is that Nvidia GPUs scale negatively with Async Compute: using this feature reduces the performance of Nvidia GPUs. A 2013 AMD GPU being on par with Nvidia's 2015 top-tier GPU is a complete disaster. What Nvidia wants to do now is optimize their drivers in order to prevent their GPUs from scaling negatively with Async Compute. It's not about getting more performance out of Async Compute; it's all about preventing negative scaling. This is not just a driver issue. It is an issue that is due to different architectural approaches. All Nvidia can do now is damage control. If they can't get rid of the negative scaling, Nvidia GPUs will be in deep trouble for the rest of this generation, since GCN is designed for Async Compute.

This

Nvidia are right now trying to mitigate the performance loss from async compute with a driver.
I doubt they will get better performance with it on.
 
This is not just a driver issue. It is an issue that is due to different architectural approaches. All Nvidia can do now is damage control. If they can't get rid of the negative scaling, Nvidia GPUs will be in deep trouble for the rest of this generation, since GCN is designed for Async Compute.
What are you basing this assumption on?

Does every game, every engine, every bit of code need to be done via async compute?

Also, we STILL do not know how necessary async compute, in the form GCN uses it, is for Maxwell 2. GCN apparently has lots of idle transistors without it; Maxwell probably has decidedly less performance locked behind low-level API access. If AMD needs low-level access just to compete in a game (as Ashes shows), then how exactly is Maxwell 2 "troubled" for the rest of the generation?

AMD finally becoming competitive in one benchmark, based upon your speculation, scarcely means that NV is in trouble for the rest of the generation.
This

Nvidia are right now trying to mitigate the performance loss from async compute with a driver.
I doubt they will get better performance with it on.

This seems like a misunderstanding. Why exactly does NV need to gain performance with it, or even without it? As it stands, barring VR, it seems to mostly be THE way to make sure GCN is not underperforming.
 
I think NV's complete silence on this matter isn't a good sign. Both AMD and Nvidia are usually quick to respond to these types of controversies when they aren't factual.
 

KingSnake

The Birthday Skeleton
You have to understand that Async Compute is a feature that is intended to increase performance. The problem is that Nvidia GPUs scale negatively with Async Compute: using this feature reduces the performance of Nvidia GPUs. A 2013 AMD GPU being on par with Nvidia's 2015 top-tier GPU is a complete disaster. What Nvidia wants to do now is optimize their drivers in order to prevent their GPUs from scaling negatively with Async Compute. It's not about getting more performance out of Async Compute; it's all about preventing negative scaling. This is not just a driver issue. It is an issue that is due to different architectural approaches. All Nvidia can do now is damage control. If they can't get rid of the negative scaling, Nvidia GPUs will be in deep trouble for the rest of this generation, since GCN is designed for Async Compute.

Sorry, I'm not discussing this with marketing robots that keep throwing out statement after statement without any data to back them up.
 

Kezen

Banned
I don't see anything preventing Nvidia from dedicating silicon to async compute like AMD does. Nvidia are anything but doomed, actually; they have bounced back before. Remember unified shaders? :)
 

Vinland

Banned
Sorry, I'm not discussing this with marketing robots that keep throwing out statement after statement without any data to back them up.

His statements are no less terrible than dr. apoc. needlessly hammering in his interpretation that async compute isn't actually part of DX12, when one of the features of DX12's API is command queues, which is marketed as enabling async compute.

The driver and runtime no longer have to sync up to execute tasks. It can be done closer to the software creating the tasks. I am an enterprise software engineer, not a graphics engineer, but https://msdn.microsoft.com/en-us/library/windows/desktop/dn899124(v=vs.85).aspx reads like this set of APIs may not be optional, but they don't have to be used strictly for async. Possibly UE4 would wrap that piece and iterate on exposing async later.

But regardless of async being optional or not, DX12 has specific features in it that explicitly enable async, and the intentions are clearly stated as such. I think W!CK!D has raised a valid possibility, but outright stating it as fact is an argument from authority.
 

Freiya

Member
You can get an 8GB 390 for the same or less than a 970.
Wake me when AMD can do CrossFire + vsync + more than one monitor without horribly bugged, crackling audio. I'd trade my 290X CF for SLI 970s in a heartbeat. This is the single most retarded bug I've ever seen in video card drivers and it's been a problem for years now.
 

KingSnake

The Birthday Skeleton
His statements are no less terrible than dr. apoc. needlessly hammering in his interpretation that async compute isn't actually part of DX12, when one of the features of DX12's API is command queues, which is marketed as enabling async compute.

The driver and runtime no longer have to sync up to execute tasks. It can be done closer to the software creating the tasks. I am an enterprise software engineer, not a graphics engineer, but https://msdn.microsoft.com/en-us/library/windows/desktop/dn899124(v=vs.85).aspx reads like this set of APIs may not be optional, but they don't have to be used strictly for async. Possibly UE4 would wrap that piece and iterate on exposing async later.

But regardless of async being optional or not, DX12 has specific features in it that explicitly enable async, and the intentions are clearly stated as such. I think W!CK!D has raised a valid possibility, but outright stating it as fact is an argument from authority.

I don't think I applauded dr. apoc. either.

We have two statements from Oxide now, the latest one stating that it might not be such a definitive issue after all. Why shouldn't both statements be treated with the same importance?

Instead of commenting on this, he chose to quote me and just write the same thing he has already written several times in this thread. Practically speculation formulated as ultimate truth. It could happen, of course, but until there is reasonable data to back it up, it remains just what it is: speculation.
 

W!CK!D

Banned
I don't see anything preventing Nvidia from dedicating silicon to async compute like AMD does.

Designing a GPU takes multiple years. It's not something you do overnight. If Pascal performs better with Async Compute, then you can be assured that Nvidia anticipated this problem years ago.

Nvidia are anything but doomed, actually; they have bounced back before. Remember unified shaders? :)

Who said Nvidia is doomed? Fact is, you can't change the architecture of current Nvidia GPUs.
 
His statements are no less terrible than dr. apoc. needlessly hammering in his interpretation that async compute isn't actually part of DX12, when one of the features of DX12's API is command queues, which is marketed as enabling async compute.

It's not my interpretation, it's common agreement (a few NeoGAF posters aside).

Once again:

Kollock said:
I don't believe there is any specific requirement that Async Compute be required for D3D12, but perhaps I misread the spec.
http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1400#post_24360916


But regardless of async being optional or not dx12 has specific features in it that explicitly enable async and intentions are clearly stated as such. I think tW!CK!D has a valid possibility but outright stating it as fact is an argument from authority.

So, back to the starting point. Async shaders take DX12's parallelism and make it better by using idling resources. It is not a whole new feature; it takes a DX12 feature and enhances it to squeeze more performance out of a given piece of hardware. Claiming async shaders are an optional or even mandatory DX12 feature is like saying Intel® Multithreading™ should be a mandatory part of Windows-compatible CPUs.

And then, no matter the new findings, you get some guys repeating the same things over and over for several days in a row. It's like they're pretending those new screencaps from B3D never happened:

MDolenc's benchmark: [image]

Batman Arkham Origins: [image]


· It has already been proved that Nvidia cards can do async compute. This is a fact.
· It has already been proved that the Nvidia driver can't manage the DX12 compute queue correctly [yet?]. This is a fact.

· It remains to be seen how well Nvidia cards will manage DX12 compute tasks once their software scheduler is fixed to handle them, and how that will compare to GCN.

The Oxide guys have already walked back the initial statements that started this shitstorm, but apparently some people refuse to acknowledge that either:

Kollock said:
Wow, lots more posts here, there is just too many things to respond to so I'll try to answer what I can.

/inconvenient things I'm required to ask or they won't let me post anymore
Regarding screenshots and other info from our game, we appreciate your support but please refrain from disclosing these until after we hit early access. It won't be long now.
/end

Regarding batches, we use the term batches just because we are counting both draw calls and dispatch calls. Dispatch calls are compute shaders, draw calls are normal graphics shaders. Though sometimes everyone calls dispatchs draw calls, they are different so we thought we'd avoid the confusion by calling everything a draw call.

Regarding CPU load balancing on D3D12, that's entirely the applications responsibility. So if you see a case where it’s not load balancing, it’s probably the application not the driver/API. We’ve done some additional tunes to the engine even in the last month and can clearly see usage cases where we can load 8 cores at maybe 90-95% load. Getting to 90% on an 8 core machine makes us really happy. Keeping our application tuned to scale like this definitely on ongoing effort.

Additionally, hitches and stalls are largely the applications responsibility under D3D12. In D3D12, essentially everything that could cause a stall has been removed from the API. For example, the pipeline objects are designed such that the dreaded shader recompiles won’t ever have to happen. We also have precise control over how long a graphics command is queued up. This is pretty important for VR applications.

Also keep in mind that the memory model for D3d12 is completely different the D3D11, at an OS level. I’m not sure if you can honestly compare things like memory load against each other. In D3D12 we have more control over residency and we may, for example, intentionally keep something unused resident so that there is no chance of a micro-stutter if that resource is needed. There is no reliable way to do this in D3D11. Thus, comparing memory residency between the two APIS may not be meaningful, at least not until everyone's had a chance to really tune things for the new paradigm.

Regarding SLI and cross fire situations, yes support is coming. However, those options in the ini file probablly do not do what you think they do, just FYI. Some posters here have been remarkably perceptive on different multi-GPU modes that are coming, and let me just say that we are looking beyond just the standard Crossfire and SLI configurations of today. We think that Multi-GPU situations are an area where D3D12 will really shine. (once we get all the kinks ironed out, of course). I can't promise when this support will be unvieled, but we are commited to doing it right.

Regarding Async compute, a couple of points on this. FIrst, though we are the first D3D12 title, I wouldn't hold us up as the prime example of this feature. There are probably better demonstrations of it. This is a pretty complex topic and to fully understand it will require significant understanding of the particular GPU in question that only an IHV can provide. I certainly wouldn't hold Ashes up as the premier example of this feature.

We actually just chatted with Nvidia about Async Compute, indeed the driver hasn't fully implemented it yet, but it appeared like it was. We are working closely with them as they fully implement Async Compute. We'll keep everyone posted as we learn more.


Also, we are pleased that D3D12 support on Ashes should be functional on Intel hardware relatively soon, (actually, it's functional now it's just a matter of getting the right driver out to the public).

Thanks!
http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/2130#post_24379702

Even though they benefited from all of this multiplying the exposure of their game, they come across as pretty transparent and honest about the subject.
 

W!CK!D

Banned
And a 2013 AMD GPU being on par with their 2015 top-tier GPU is what, exactly?

I already said:

Fury X is different: devs have been working with GDDR5 for years and their code is optimized accordingly. HBM is a completely new approach to memory that will most likely need different memory access patterns to unlock its full potential.

Fury X has an insane 4096 shader cores. You're not going to be able to use them fully with graphics rendering alone. This thing, in theory, is predestined for Async Compute. Cerny explained the importance of Async Compute for the PS4 multiple times. Fury X has almost four times the shader cores of the PS4 (1152).
 

GRaider81

Member
For 980 Ti owners here, what are you planning to do? See what happens? Trade up now?

I just bought one and I love it. Best gaming product I've ever owned, but I'm so confused right now.
 

Kuro

Member
For 980 Ti owners here, what are you planning to do? See what happens? Trade up now?

I just bought one and I love it. Best gaming product I've ever owned, but I'm so confused right now.

There is absolutely no reason to trade up considering there isn't one dx12 game on the market and maybe only 2 confirmed for next year. That 980ti can be sold 2 years from now when Pascal is out and I don't doubt that Nvidia will have a solution for async compute.
 

Kezen

Banned
Designing a GPU takes multiple years. It's not something you do overnight. If Pascal performs better with Async Compute, then you can be assured that Nvidia anticipated this problem years ago.
They should have seen this coming; surely they were cognizant that the next round of consoles would be powered by GCN. I don't buy that they've only just started working on Pascal; it seems more likely to me that it has been in development for a very long time, long enough, I believe, to be much more competitive with whatever AMD have in the pipeline for 2016.

It does not change the fact that GCN cards from 2011 onwards will have an easier time with async compute though.
 

Lonely1

Unconfirmed Member
As I see it, even if Maxwell can't do async for shit, it's not that terrible. It's hardware built with other priorities. Having previously idle silicon now active surely won't help already loud and hot GCN cards (and the PS4 :p ). Just like turbo clocks on Intel CPUs, running fewer parallel processes allows for higher clocks on the ones that are actually running. Overclocking, anyone? Now, I'm not saying that the technology is worthless; it will certainly yield a net benefit over non-async processing, everything else staying equal, but the difference won't be as big as synthetic benchmarks show. Now, I'm not a hardware guy, so please correct me if I'm wrong.

For 980 Ti owners here, what are you planning to do? See what happens? Trade up now?

I just bought one and I love it. Best gaming product I've ever owned, but I'm so confused right now.

Keep it; I bought it since I wanted the best available now, but I'm glad I didn't go all the way for the Titan X.

What is 'up' from a 980Ti?

A 290x, apparently. :)
 

XiaNaphryz

LATIN, MATRIPEDICABUS, DO YOU SPEAK IT
Oh, I remember. I encourage everyone to take his offer, look up his posts under W!CKED, you can still find the quotes, and see for yourself. This fabrication of persecution and unreasonable users who couldn't stand up to his arguments is hysterical.

I thought that guy seemed familiar.
 
For 980 Ti owners here, what are you planning to do? See what happens? Trade up now?

I just bought one and I love it. Best gaming product I've ever owned, but I'm so confused right now.

Show me a single game released right now that would make you want to trade this video card.

We know for a fact that nVidia doesn't have finalized DX12 drivers, and when they do their async compute performance could be more than good enough. I just don't understand the point of trying to decide anything right now when we have no finished products to benchmark and don't even have a finalized driver to test performance with.

P.S. Proud 980 Ti owner, and I'm not even a bit worried.
 

Kayant

Member
There is absolutely no reason to trade up considering there isn't one dx12 game on the market and maybe only 2 confirmed for next year. That 980ti can be sold 2 years from now when Pascal is out and I don't doubt that Nvidia will have a solution for async compute.

Fable Legends and Gears:UE (if it comes out this year) will have DX12 support. I think there may be some others too, but yes, it's best to wait until games that actually use the new platform come out, so you have better and more informed data and can make an informed decision on what you want to do.

@Raider1874
I would advise you to wait, and as you said, the 980 Ti is still a fantastic card whether or not this turns out to be true.
 

Vinland

Banned
It's not my interpretation, it's common agreement (a few NeoGAF posters aside).

Once again:

http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/1400#post_24360916




So, back to the starting point. Async shaders take DX12's parallelism and make it better by using idling resources. It is not a whole new feature; it takes a DX12 feature and enhances it to squeeze more performance out of a given piece of hardware. Claiming async shaders are an optional or even mandatory DX12 feature is like saying Intel® Multithreading™ should be a mandatory part of Windows-compatible CPUs.

And then, no matter the new findings, you get some guys repeating the same things over and over for several days in a row. It's like they're pretending those new screencaps from B3D never happened:

MDolenc's benchmark: [image]

Batman Arkham Origins: [image]


· It has already been proved that Nvidia cards can do async compute. This is a fact.
· It has already been proved that the Nvidia driver can't manage the DX12 compute queue correctly [yet?]. This is a fact.

· It remains to be seen how well Nvidia cards will manage DX12 compute tasks once their software scheduler is fixed to handle them, and how that will compare to GCN.

The Oxide guys have already walked back the initial statements that started this shitstorm, but apparently some people refuse to acknowledge that either:

http://www.overclock.net/t/1569897/...ingularity-dx12-benchmarks/2130#post_24379702
dr. apocalipsis I fail to see your narrative's point.

By all accounts, async compute is directly enabled by the design philosophy of work execution in Direct3D 12. You have command queues that can be submitted in parallel, and each command queue has the potential to be executed asynchronously. Or not. However, if you support command queues properly, along with all the directives for memory management, async is completely available at any time, since it's just another way to execute code and get results. Instead of in -> out, it's in -> promised-out.

You say it's not even a part of the API, and I say that is disingenuous. It is directly enabled by the API. Instead of synchronously returning your result, you can now, optionally, expect a promissory object that will have its value returned at a later date, which is completely determined by how the application, instead of the driver, qualifies the execution and the lifespan of the wait. BTW, that is what Kollock was getting at.
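To make that "promissory object" idea concrete, here's a rough sketch using plain D3D12 fences (nothing engine-specific; the device, queue, recorded work, and the function names are my own assumptions, and error handling is skipped). The fence value is effectively the promise, and the application decides when, or whether, to cash it in:

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Sketch: submit work, get a fence value back ("the promise"). The GPU sets the
// fence to that value when the work is done; until then the CPU is free to do
// anything else and only blocks when it actually needs the result.
UINT64 SubmitAndPromise(ID3D12Device* device, ID3D12CommandQueue* queue,
                        ID3D12CommandList* work,
                        ComPtr<ID3D12Fence>& fence, UINT64& lastValue)
{
    if (!fence)
        device->CreateFence(0, D3D12_FENCE_FLAG_NONE, IID_PPV_ARGS(&fence));

    queue->ExecuteCommandLists(1, &work);
    const UINT64 promised = ++lastValue;
    queue->Signal(fence.Get(), promised);  // GPU writes this value when it's finished
    return promised;                       // the caller holds the "promise"
}

void CashPromise(ID3D12Fence* fence, UINT64 promised)
{
    if (fence->GetCompletedValue() < promised)       // not fulfilled yet?
    {
        HANDLE evt = CreateEvent(nullptr, FALSE, FALSE, nullptr);
        fence->SetEventOnCompletion(promised, evt);  // wake us when it is
        WaitForSingleObject(evt, INFINITE);
        CloseHandle(evt);
    }
}
```

The point being that the API hands scheduling and lifetime decisions to the application rather than the driver, which is exactly the shift from D3D11 being discussed here.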

Anything past this is non-productive. Nvidia's Windows 10 drivers for D3D12 are broken. They use too much context switching, a "me too" effort to meet Microsoft's aggressive timeline. Until they fix them and we see new results, the rest is speculation. Unfortunately there is too much defense-brigading happening here on both sides.
 
Man, you guys that are already talking about trading in your 980ti's are crazy.

Just wait till the benchmarks from actual games come out..

The 980ti is STILL the fastest card... even in DX12.
 

x3sphere

Member
For 980 Ti owners here, what are you planning to do? See what happens? Trade up now?

I just bought one and I love it. Best gaming product I've ever owned, but I'm so confused right now.

Trade up to what?

Fury is markedly slower at 1440p and I have no plans of going 4K anytime soon. For gaming, I find high refresh rates much more beneficial, and have been eyeing the new Predator X34.

Perhaps the 980 Ti won't have great longevity due to this but that doesn't concern me, it gives the performance I need now and I'll likely be upgrading when 16nm GPUs hit.
 

KingSnake

The Birthday Skeleton
For 980 Ti owners here, what are you planning to do? See what happens? Trade up now?

I just bought one and I love it. Best gaming product I've ever owned, but I'm so confused right now.

When you are in this "race", you anyhow expect to buy a new card in around 2 years. Or do a SLI. In both cases, it's a very safe bet that there is no "trade up".
 

Qassim

Member
For 980 Ti owners here, what are you planning to do? See what happens? Trade up now?

I just bought one and I love it. Best gaming product I've ever owned, but I'm so confused right now.

I'll buy (and keep, I have a 980 Ti) what is best now, not based on what could be better in the future. For now and for most games, the 980Ti is the best option vs the AMD competition. I think it'd be a pretty dumb move to get rid of your 980Ti based on this:

1) Still better for DirectX11 games
2) DX12 games may trickle out
3) DX12 games using Async compute in any significant capacity may trickle out at an even slower rate.

I dunno why I'd trade for a worse card based on speculative future gains. If next year, or the year after, AMD's cards are getting significantly better performance in a reasonable number of games, then I'll jump over - but I can't see the justification for changing now before the situation has even developed.
 

Herne

Member
And a 2013 AMD GPU being on par with their 2015 top-tier GPU is what, exactly?

You're saying the 290X can compare to a Fury X? Really?

Edit - Also, as someone who prefers AMD cards, people talking about getting rid of their 970's are out of their minds.
 

n0razi

Member
Buying for the future is one of the worst things you can do financially in the home PC market. Wait till it comes out, since tech becomes obsolete so quickly.
 

fred

Member
A classic: "I have an AMD card, Nvidia are miles better... I switch to Nvidia, now AMD will have an edge with DX12..."

I was thinking of ditching my awesome 2GB Radeon HD 4870, which has been a real trooper since I built my PC in 2008, for an Nvidia card when I start building my next machine... but now I've changed my mind lol.

Should be getting my new GPU in November I think.
 
Man, you guys that are already talking about trading in your 980ti's are crazy.

Just wait till the benchmarks from actual games come out..

The 980ti is STILL the fastest card... even in DX12.

This is the part which makes the AMD fanboy posts here hilarious. The 980 Ti still wins the DX12 tests; it just fumbles a little bit because of non-functional drivers. Meanwhile AMD gets amazing gains, which dramatically show off how goddamned awful their drivers have been in DX9/DX11 before DX12 because they haven't put any effort into optimization the way Nvidia has, which is typical for AMD, and they still can't actually beat the 980 Ti. I don't see what the problem is here. Who gives an actual fuck what is or isn't supported if the card is still the fastest?
 
dr. apocalipsis I fail to see your narrative's point.

It might be a bit difficult to get, but parallel and asynchronous are not the same thing.

What DX12 asks for is the ability to run several threads, based on queues and a priority system. This could lead to major delays in execution/copy, since until some high-priority task is concluded, low-priority stuff will have to wait, given that you have a limit on the number of threads available.

Async, just like parallel, breaks things into chunks, but this time it is able to use an active thread to avoid halts, in both execution and copy, running the task in the background and calling back when done. If some task depends on another, you will want to go async: throw the request and let the scheduler handle it as soon as possible.

Anyway, it is not like async > parallel. Each type has its own uses and contexts. With parallel compute you know when chunk #1 and chunk #2 will be done; with async you don't.

The async compute capability goes beyond the required D3D12 spec, just as many other HW features did with earlier DX versions. That's why I have a hard time understanding why some people react so furiously to me pointing out an apparent advantage of GCN over other vendors. But what you can't do either is hype up a feature that improves performance only in select contexts. This won't boost your FPS across the board, but it can prevent tanking.
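A quick sketch of what that "throw the request and let the scheduler handle it" pattern looks like in D3D12 (generic API usage, not from any shipping engine; the queues, command lists, fence, and the pass names are my own assumptions, and whether the GPU truly overlaps the two queues is entirely down to the hardware/driver):

```cpp
#include <d3d12.h>

// Sketch: kick compute work off on its own queue, keep feeding the direct queue,
// and only make the direct queue wait at the point where it consumes the result.
// The wait is on the GPU timeline (ID3D12CommandQueue::Wait), not on the CPU.
void FrameWithAsyncCompute(ID3D12CommandQueue* directQueue,
                           ID3D12CommandQueue* computeQueue,
                           ID3D12CommandList* computeWork,  // e.g. particle/physics dispatches
                           ID3D12CommandList* gbufferPass,  // doesn't need the compute result
                           ID3D12CommandList* lightingPass, // does need it
                           ID3D12Fence* fence, UINT64 fenceValue)
{
    // 1. Fire and forget: the compute queue gets the work and signals when done.
    computeQueue->ExecuteCommandLists(1, &computeWork);
    computeQueue->Signal(fence, fenceValue);

    // 2. Graphics work that doesn't depend on the result is submitted regardless.
    directQueue->ExecuteCommandLists(1, &gbufferPass);

    // 3. Only here does the direct queue stall, and only if the compute isn't done yet.
    directQueue->Wait(fence, fenceValue);
    directQueue->ExecuteCommandLists(1, &lightingPass);
}
```

If the hardware can genuinely overlap the two queues, the compute work fills otherwise idle units and the frame gets shorter; if it can't, the wait at step 3 is where the time goes, which is the "prevent tanking vs. free FPS" distinction above.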
 
This is the part which makes the AMD fanboy posts here hilarious. The 980 Ti still wins the DX12 tests; it just fumbles a little bit because of non-functional drivers. Meanwhile AMD gets amazing gains, which dramatically show off how goddamned awful their drivers have been in DX9/DX11 before DX12 because they haven't put any effort into optimization the way Nvidia has, which is typical for AMD, and they still can't actually beat the 980 Ti. I don't see what the problem is here. Who gives an actual fuck what is or isn't supported if the card is still the fastest?

Nvidia fanboy logic. Facts don't matter.

[benchmark image]


GTX980 BARELY beating a previous generation AMD card.

[benchmark image]


Also, I own cards from both teams. I just tell it like it is. For now, it definitely looks like my next card will be an AMD. Of course, I don't know what the future will hold.
 

dr_rus

Member
Nvidia fanboy logic. Facts don't matter.

[benchmark image]


GTX980 BARELY beating a previous generation AMD card.

[benchmark image]


Also, I own cards from both teams. I just tell it like it is. For now, it definitely looks like my next card will be an AMD. Of course, I don't know what the future will hold.

The 980 is some 5% faster than the 390X in DX11 on average, so here it's some 5% slower; that's a difference I'd call non-existent.

You should also remember that you're posting benchmarks of a game made to promote Mantle, one that has an AMD logo on its website.

Edit: Oh, yeah, there's one more thing to consider while posting AoS benches in this thread:
Regarding Async compute, a couple of points on this. FIrst, though we are the first D3D12 title, I wouldn't hold us up as the prime example of this feature. There are probably better demonstrations of it. This is a pretty complex topic and to fully understand it will require significant understanding of the particular GPU in question that only an IHV can provide. I certainly wouldn't hold Ashes up as the premier example of this feature.
This means that whatever is holding back NV cards' performance compared to AMD in Ashes benchmarks is likely NOT the topic of this thread. (I personally think it's the benchmark code itself, or NV's driver in general, as there are zero reasons why the DX12 path would be slower than the DX11 one on NV h/w.)
 
GTX980 BARELY beating a previous generation AMD card.

It's funny, because the current-generation AMD card doesn't either. Oh, and the 980 still beats both of them playing hopscotch.

Also, I own cards from both teams. I just tell it like it is. For now, I definitely looks like my next card will be an AMD. Of course I don't know what the future would hold.

Excusing oneself by pointing out that you own both brands usually leads into some fanboy claim. Not talking about you specifically.
 