
How AMD is Going to Screw Nvidia

What? No.

DX12 will have no SLI or Crossfire. mGPU solutions will have to come from the game developers. It certainly wouldn't be on-chip; that really isn't a very effective use of die space.
According to the end of the video, that seems to be the direction AMD wants to go: multiple GPUs embedded in a chip, yet acting as one with a shared memory pool.
 
WTF.

1. If you're manufacturing small dies then you're not doing multiple GPUs on the same die.

2. Crossfire is dead (as is SLI). Even then this has nothing to do with hardware based mGPU. It is all software based and supported by a generic API.

3. See above. It isn't about supporting Crossfire, it is about supporting something like DX12 mGPU (a rough sketch of what that looks like follows below).
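For a concrete sense of what "software based and supported by a generic API" means here, below is a minimal C++ sketch of the DX12 explicit multi-adapter starting point: the application itself enumerates adapters and creates a device on each, then has to decide how work is split between them. Error handling, the feature level, and the helper name CreateDevicePerAdapter are simplifications for illustration, not anyone's production code.

#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

// Hypothetical sketch: one D3D12 device per hardware adapter. With explicit
// multi-adapter the application, not the driver, decides how rendering work
// is split between these devices (AFR, split-frame, a dedicated post-processing GPU, etc.).
std::vector<ComPtr<ID3D12Device>> CreateDevicePerAdapter()
{
    ComPtr<IDXGIFactory4> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC1 desc;
        adapter->GetDesc1(&desc);
        if (desc.Flags & DXGI_ADAPTER_FLAG_SOFTWARE)
            continue;                       // skip WARP / software adapters

        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);      // each device drives one physical GPU
    }
    return devices;
}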

Aren't Dual-GPUs gonna be a lot more popular due to VR?
 
The last section of the video is very explicitly titled "multi GPU consoles." His argument is:

1. Yields are far more cost effective when manufacturing lots of little dies on a wafer rather than fewer big dies (a rough yield sketch follows this list).

2. Mainstream GPUs in Crossfire have far better performance per dollar than single enthusiast GPUs. Around the GTX 980 launch, I bought two R9 290s for less and they absolutely blew it away in Crossfire-enabled games.

3. Developers won't have a choice but to support Crossfire when 95% of their audience are using AMD consoles.

This allows console manufacturers to sell very powerful and cost effective consoles.
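To make the yield argument in point 1 concrete, here is a rough back-of-the-envelope C++ sketch using the simple Poisson yield model (good-die fraction = exp(-defect density x die area)) and a common dies-per-wafer approximation. The defect density and die areas are made-up illustrative numbers, not foundry data.

#include <cmath>
#include <cstdio>

int main()
{
    const double kPi            = 3.14159265358979;
    const double wafer_diameter = 300.0;    // mm, standard wafer size
    const double defect_density = 0.001;    // defects per mm^2 - assumed, illustrative only

    const double die_areas[] = {200.0, 600.0};   // a "small" die vs a "big" die, in mm^2

    for (double area : die_areas)
    {
        double r          = wafer_diameter / 2.0;
        double candidates = kPi * r * r / area                            // gross dies
                          - kPi * wafer_diameter / std::sqrt(2.0 * area); // edge loss
        double yield      = std::exp(-defect_density * area);             // Poisson yield model
        std::printf("%.0f mm^2 die: ~%.0f candidates, %.0f%% yield, ~%.0f good dies per wafer\n",
                    area, candidates, 100.0 * yield, candidates * yield);
    }
    return 0;
}

With these invented numbers the small die comes out well ahead in good silicon per wafer, which is the shape of the argument the video makes - whether real numbers support it is exactly what the rebuttal below disputes.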

1. This is conjecture, which means absolutely jack shit without the specifics of what he's talking about. Processes, die sizes, architectures, price ranges - literally dozens of factors - have such a big influence on this that stating only that is just a plain lie.

2. I find this very unlikely to be true on average. Take a look at AMD's own performance figures for the newly released dual-Fiji card - it's barely 30% faster than a single-GPU 980 Ti, or about 50% faster than a single-GPU Fury X - and that's at a $1500 price tag. In some ideal world of 100% mGPU scaling in every upcoming game that might be true, but then you have to factor in AFR frame latency, which prevents you from comparing single- and multi-GPU solutions by average fps, since the two aren't equivalent (see the frame-time sketch at the end of this post).

3. An mGPU solution for a console will never be more cost effective than a single APU of half the size. Consoles aren't in an arms race and there is no need for them to push for SLI/CF levels of performance. Basically - won't happen (at least as long as there are still denser nodes left on the roadmaps).

This whole rant is based on a completely false idea.
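On the AFR point in 2. above, a tiny sketch of why comparing by average fps alone can mislead: the frame times below are invented purely for illustration, but they show how an AFR-style trace can match a single GPU's average fps while having much worse frame pacing.

#include <algorithm>
#include <cstdio>
#include <vector>

// Report average fps and 99th-percentile frame time for a frame-time trace (in ms).
static void report(const char* name, std::vector<double> frame_ms)
{
    double total = 0.0;
    for (double t : frame_ms) total += t;
    double avg_fps = 1000.0 * frame_ms.size() / total;

    std::sort(frame_ms.begin(), frame_ms.end());
    double p99 = frame_ms[static_cast<size_t>(0.99 * (frame_ms.size() - 1))];
    std::printf("%-8s avg %.1f fps, 99th percentile frame time %.1f ms\n",
                name, avg_fps, p99);
}

int main()
{
    std::vector<double> single_gpu(100, 16.7);   // steady ~60 fps
    std::vector<double> afr_gpu;                 // alternating fast/slow frames, same average
    for (int i = 0; i < 50; ++i) { afr_gpu.push_back(8.3); afr_gpu.push_back(25.1); }

    report("single:", single_gpu);
    report("AFR:", afr_gpu);
    return 0;
}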

It's possible because the positives of the video far outweigh the few negatives? That's usually how any aspect of life works.

Yeah, well, the problem is that it doesn't. The video is full of retarded points like the one you've mentioned; you just happened to notice only one of them.

According to the end of the video, that seems to be the direction AMD wants to go: multiple GPUs embedded in a chip, yet acting as one with a shared memory pool.

A. AMD doesn't decide what will go into what console. Their plans are irrelevant here.
B. Why would console manufacturers make their h/w worse (which this idea actually would) just so AMD can beat NV in the PC space? Why would anyone outside of AMD care about this at all?
C. I struggle to figure out how this will screw NV, considering that it's NV and not AMD who has NVLink for high-speed chip-to-chip interconnects at the moment, and all this would lead to is better SLI efficiency for NV's h/w - which, last time I checked, isn't limited to 600mm^2 chips either.
 
It won't be on chip. At best it would be on the same package.

There've been a lot of discussions about using multiple smaller chips on the same package (probably on an interposer). On one hand it makes sense with nodes becoming more expensive and taking longer to ramp up, but on the other you're also going to have to deal with making the chips work together, which is going to require extra logic. Considering how many headaches multi-GPU setups bring with them, and that putting all these pieces together requires extra manufacturing steps, there might just come a point where you say fuck it and go for a monolithic die anyway.

I don't know enough to make these kinds of predictions, but I don't think it's going to happen anytime soon.
 
There've been a lot of discussions about using multiple smaller chips on the same package (probably on an interposer). On one hand it makes sense with nodes becoming more expensive and taking longer to ramp up, but on the other you're also going to have to deal with making the chips work together, which is going to require extra logic. Considering how many headaches multi-GPU setups bring with them, and that putting all these pieces together requires extra manufacturing steps, there might just come a point where you say fuck it and go for a monolithic die anyway.

I don't know enough to make these kinds of predictions, but I don't think it's going to happen anytime soon.

Yup. Hence why I said at best, and why earlier I stated it is far more likely for a console manufacturer to just add more CUs to their APU.
 
Interesting to watch.

I can see the thinking behind AMD getting devs to lead with their hardware rather than Nvidia's, since it's in the consoles, where those publishers make most of their money. Only time will tell if that actually bears out when we see the games over the coming months and years. Quantum Break is only one example, and it had enough technical issues holding it back that it's not an iron-clad showing of "better on AMD".

I'm going to be upgrading in the next batch of cards and had always assumed I'd be buying Nvidia but their recent drivers have left a lot to be desired while AMD has been improving in leaps and bounds. They are somewhat on top in that regard at the moment IMO. If they can get the power draw on Polaris down to a reasonable level and sell it at a good price then I'll be taking a long look at it.
 
1. This is conjecture, which means absolutely jack shit without the specifics of what he's talking about. Processes, die sizes, architectures, price ranges - literally dozens of factors - have such a big influence on this that stating only that is just a plain lie.

2. I find this very unlikely to be true on average. Take a look at AMD's own performance figures for the newly released dual-Fiji card - it's barely 30% faster than a single-GPU 980 Ti, or about 50% faster than a single-GPU Fury X - and that's at a $1500 price tag. In some ideal world of 100% mGPU scaling in every upcoming game that might be true, but then you have to factor in AFR frame latency, which prevents you from comparing single- and multi-GPU solutions by average fps, since the two aren't equivalent.

3. An mGPU solution for a console will never be more cost effective than a single APU of half the size. Consoles aren't in an arms race and there is no need for them to push for SLI/CF levels of performance. Basically - won't happen (at least as long as there are still denser nodes left on the roadmaps).

This whole rant is based on a completely false idea.



Yeah, well, the problem is that it doesn't. The video is full of retarded points like the one you've mentioned; you just happened to notice only one of them.

Rus, you're a great poster with a lot of knowledge, but whenever AMD gets brought up you become very combative and start obviously twisting the truth. I'm sure you're very well aware that the Pro Duo is getting that performance at almost the same TDP as a single Fury. Just like you also know DX12 and Vulkan are using different solutions for mGPU than AFR. And I'm sure you've seen the benchmarks to know that in perfect scaling scenarios, two high-end GPUs typically outperform the enthusiast tier at around the same or lower price.

http://www.pcper.com/news/Graphics-...past-CrossFire-smaller-GPU-dies-HBM2-and-more

^ Raja Koduri very explicitly said that scaling in the future will come from multiple small GPUs in Crossfire. You may not like it, you may think there are a lot of contingencies, but you can't keep disingenuously pretending this is all fairy tales.
 
So the video is very good but parts of it are retarded? How is this even possible?

The video is fantasy-land rambling with a great deal of personal agenda. There is more than one example of a technical or logical fallacy in it, and the key idea he's trying to communicate is very simple - it would've probably fit into one minute instead of twenty-six or so - but if he just said it out loud, it would be stupidly obvious how baseless that idea is in the first place.

mGPUs aren't going to save AMD or help AMD in any way.

You are obviously smart and passionate, but your ego makes discussion hard. Pronouncing judgement without elucidation is just as drive-by as "lol, guy must be a comedian". Why not challenge yourself and break down his arguments with your own?

Like the video explained, nvidia has 75% of the pc hardware, but pc gaming represents only 10% of those three publishers' revenue.

90% of the revenue was obtained on console, which runs 100% on amd hardware.

The numbers are factual, nothing we can dispute there. What we are disputing is the leap of faith the video is trying to propose, which is that these numbers will correlate to a dominant amd pc performance in the near future (in games coming from EA Ubisoft Blizzard).

However, pc gaming is not only EA, Ubisoft, and Blizzard. Far from it.

This is totally true, but much of that market runs just fine on 4-year-old integrated graphics, and those games don't show up in tech analyses - or at least not that I've seen. Do testers include Hearthstone benchmarks in reviews?
 
The results probably have too much rng.
 
All this talk about Kepler performance falling behind.. man, even the GTX970 is starting to stink it up in a lot of those benchmarks posted throughout the thread. I'll never get brand loyalty in PC hardware. If the competitor offers something substantially better, I'll easily switch back.
 
All this talk about Kepler performance falling behind.. man, even the GTX970 is starting to stink it up in a lot of those benchmarks posted throughout the thread. I'll never get brand loyalty in PC hardware. If the competitor offers something substantially better, I'll easily switch back.

What's sad is how the apologists say it's not Nvidia's fault the 970 has fallen behind in performance and blame it all on the consoles using AMD.

And then they come into threads like this to insist the console deals haven't helped Radeon performance because Nvidia will always be the market leader.
 
Or they just stopped caring once it reaches 60fps. Anything beyond that is useless in a game that is designed to be played with a 60fps lock (fighting games). The only thing that matters is that you stay at 60fps without any frame pacing issues. KI is a very bad example as it runs at 60fps on a toaster, pretty much. Even the game running at over 60fps was a bug that they've already patched.

The game has minimum requirements of an NVIDIA GeForce GTX 480 / AMD Radeon HD 5850 and I've seen it run on both in real life on my friends' computers.

I'm not denying that, of course, but still, if we want to be objective, AMD cards do better in this game; evidently Nvidia hardware optimization has never been much of a priority.
I wonder how Nvidia will adapt to an increasingly AMD-dominated console landscape.

With 5 GCN consoles in the near future it is going to be interesting to see what effect this will have on Nvidia cards.
I don't think I'll go red though; I've always been very happy with Nvidia hardware and I love the work they have put into programs like Gameworks.
 
Rus, you're a great poster with a lot of knowledge, but whenever AMD gets brought up you become very combative and start obviously twisting the truth. I'm sure you're very well aware that the Pro Duo is getting that performance at almost the same TDP as a single Fury.

No, whenever someone starts saying or posting bullshit is when I become combative.

Pro Duo's TDP is 350W while Fury X's TDP is 275W - that's not the same, that's a 27% difference. Considering that AMD tends to understate the typical power draw of their cards in specs, and that the +50% average performance is AMD's own estimate - which is almost always a bit off compared to independent benchmarks - it doesn't look any different from any independently done single-GPU card comparison out there. I don't see anything a dual GPU brings here which a single GPU can't bring without any of the issues of mGPU rendering.

Just like you also know DX12 and Vulkan are using different solutions for mGPU than AFR. And I'm sure you've seen the benchmarks to know that in perfect scaling scenarios, two high-end GPUs typically outperform the enthusiast tier at around the same or lower price.

DX12 and VK _allow_ for different solutions than AFR - that doesn't mean most devs won't still just be using good old AFR. I actually expect them to, as it's the easiest and most straightforward mGPU option to implement. It's also what is used in AotS DX12, which the video mentions all the time.

As for two smaller GPUs outperforming one big GPU at the same price - I hope you understand that you're talking about product prices here, which have only a loose relation to the cost of actually making the chips these products are built on. If such a system constantly beat a one-chip product, they'd just lower the one-chip product's price. Die cost isn't that big a part of the price of your typical video card.

Technically there is a window of die size where doubling the chips in an mGPU fashion will give you performance you won't be able to achieve on the same process with a bigger chip - somewhere between 350 and 500mm^2. In practice this never really works, though, because of all the issues and overhead an mGPU system brings with it.

Thinking that if you dump the problem of supporting that system onto developers they'll suddenly do it better than the IHV - the maker of the system - could is naive. And as for consoles being the reason developers would provide such support - I very much doubt any console vendor wants to end up with a $500-600 bill for the mGPU part of the console alone, and anything less than that would be more efficient as a single die / APU.

http://www.pcper.com/news/Graphics-...past-CrossFire-smaller-GPU-dies-HBM2-and-more

^ Raja Koduri very explicitly said that scaling in the future will come from multiple small GPUs in Crossfire. You may not like it, you may think there are a lot of contingencies, but you can't keep disingenuously pretending this is all fairy tales.

This is just their usual talk at the start of each new process node. We got it with 55nm and the RV770 X2, then with 40nm / Cypress and the HD 5970, then when they moved to 28nm with Tahiti and the 7990. AMD starts this talk every time it's too expensive for them to produce a big die for a consumer Radeon card. Each time it plays out the same: NV does something similar, and both move to one big single-GPU card on the same process within 1-2 years, which is usually better than whatever dual-GPU card they had previously.

Watch them introduce a 450mm^2+ Vega a year from now and talk about how that card is on the same performance level as their previously released P10x2 solution with less power draw, and so on. I'd have thought this cycle would be pretty clear to anyone by now.

You are obviously smart and passionate, but your ego makes discussion hard. Pronouncing judgement without elucidation is just as drive-by as "lol, guy must be a comedian". Why not challenge yourself and break down his arguments with your own?

Because that would really just be a repeat of stuff I've said a hundred times already.
 
The guy lost me at the idea that the next-gen consoles would feature multiple GPUs. Even with DX12 (and similar), I just don't see that happening.

Edit: Also, the idea that AMD is gunning for Intel's CPU market? First of all, this is almost certainly never going to happen—AMD is too far behind and the author doesn't present any strategy that AMD could use (other than a vague "because video games"). But even if it did, I don't think AMD is desperate for Intel's share of the desktop CPU market. I mean, I'm sure that AMD would be happy to have Intel's business, if they could, but the real future is in mobile.
 
I'm not denying that, of course, but still, if we want to be objective, AMD cards do better in this game; evidently Nvidia hardware optimization has never been much of a priority.
I wonder how Nvidia will adapt to an increasingly AMD-dominated console landscape.

With 5 GCN consoles in the near future it is going to be interesting to see what effect this will have on Nvidia cards.
I don't think I'll go red though; I've always been very happy with Nvidia hardware and I love the work they have put into programs like Gameworks.

The same as AMD has been doing for the past few years, when pretty much all the games have been optimized for Nvidia and include Nvidia proprietary tech: make it up in the drivers and/or send fanboys to the internet forums.

edit: Well, both have been doing this forever. Hopefully DX12/Vulkan will help a bit and move the ball more into the game devs' court instead of driver black magic.
 
The same as AMD has been doing for the past few years, when pretty much all the games have been optimized for Nvidia and include Nvidia proprietary tech: make it up in the drivers and/or send fanboys to the internet forums.

edit: Well, both have been doing this forever. Hopefully DX12/Vulkan will help a bit and move the ball more into the game devs' court instead of driver black magic.

Nvidia have been fighting fairly, haven't they? They work very hard on drivers and work with devs to implement high-quality effects. It's not like word of mouth alone is going to do anything for them.

They deserve their marketshare.

I don't see how putting more power in the hands of devs is better than what AMD/Nvidia are doing with their gaming programs. All I want is great tech in my PC games, and both IHVs have made that possible in a wide number of titles.
I'm sure AMD will keep pushing their ridiculous tripe about Gameworks being evil and bla bla bla.
 
His final argument boils down to speculation on the meaning of "scalability" in one of AMD's slides for 2018. I think he's extrapolating way too much out of it.

I like the way he constructed his arguments, and he seems to know what he's talking about, but far too many of his points are based on unconfirmed details.
 
5 years ago things were different.

Look at the current market shares of AMD and Nvidia.

Do you understand how monumental a switch that would be? For the foreseeable future, assuming Nvidia continues to advance its own technology and market it well, AMD even gaining 50% of the PC market is verging on the impossible.

Spinning the console market as giving AMD a way toward gaining a stranglehold on the PC market is absolutely fanfiction.

Do you know how many monumental changes in the tech industry have happened within a few years of seeming impossible? Are you implying AMD isn't advancing its own technology?

Like I said, it's quite unlikely in the near future, but the fact that you spin it as an impossibility while making it seem like 5 years was an eternity ago is interesting, to say the least.

My point has always been that it's highly unlikely, but not impossible.
 
What's sad is how the apologists say it's not Nvidia's fault the 970 has fallen behind in performance and blame it all on the consoles using AMD.

The 970 isn't falling behind for the reasons people claim it is. The memory issue is NOT the problem with Maxwell, and consoles using AMD hardware is also NOT the problem. The issue is that after Fermi, Nvidia realized that compute functionality was a waste of die space because the software just wasn't there. They decided stripping away all the compute hardware would help them maintain great performance in DX10/11 titles while cutting down substantially on manufacturing costs and increasing yield. So they went from GF100 being their high-end part with GF104 as the mid-range, to GK104 being the high-end and GK106 the start of their mid-range - then they held off a full year to release GK110 (skipping GK100 entirely and opting to release only a revised version). GK110 added some of that compute functionality back, but the software still wasn't there.

With Maxwell they revised the design a bit but overall kept the same approach - GM204 being the high-end initially (970 and 980), with GM200 offered later as the "new" high-end (980 Ti and Titan X). This means the 970 and 980 are still lacking a lot of the compute functionality that the 980 Ti / Titan X and even the 780/Ti feature. Nvidia chose to try to handle the lack of a hardware solution with a software one, but it's nowhere near as efficient. AMD realized this problem would occur and has been pushing devs to shove async compute into every DX12 title they are producing, which makes Nvidia GPUs look substantially worse when in reality they function fine in the majority of games.

The 970 is a great card for most scenarios, but it struggles in certain titles that make heavy use of asynchronous compute. It's not AMD's fault for pushing it either; Nvidia did the same thing in the Fermi era with tessellation (remember Crysis 2 rendering an entire tessellated ocean under the map?), so it's just the nature of the market.
 
The 970 isn't falling behind for the reasons people claim it is. The memory issue is NOT the problem with Maxwell, and consoles using AMD hardware is also NOT the problem. The issue is that after Fermi, Nvidia realized that compute functionality was a waste of die space because the software just wasn't there. They decided stripping away all the compute hardware would help them maintain great performance in DX10/11 titles while cutting down substantially on manufacturing costs and increasing yield. So they went from GF100 being their high-end part with GF104 as the mid-range, to GK104 being the high-end and GK106 the start of their mid-range - then they held off a full year to release GK110 (skipping GK100 entirely and opting to release only a revised version). GK110 added some of that compute functionality back, but the software still wasn't there.

With Maxwell they revised the design a bit but overall kept the same approach - GM204 being the high-end initially (970 and 980), with GM200 offered later as the "new" high-end (980 Ti and Titan X). This means the 970 and 980 are still lacking a lot of the compute functionality that the 980 Ti / Titan X and even the 780/Ti feature. Nvidia chose to try to handle the lack of a hardware solution with a software one, but it's nowhere near as efficient. AMD realized this problem would occur and has been pushing devs to shove async compute into every DX12 title they are producing, which makes Nvidia GPUs look substantially worse when in reality they function fine in the majority of games.

The 970 is a great card for most scenarios, but it struggles in certain titles that make heavy use of asynchronous compute. It's not AMD's fault for pushing it either; Nvidia did the same thing in the Fermi era with tessellation (remember Crysis 2 rendering an entire tessellated ocean under the map?), so it's just the nature of the market.

AFAIK it was Sony that pushed AMD to add more async compute queues to their GCN 1.1 GPUs. There is only one game on PC that is known to use async compute properly, and that is Ashes of the Singularity. Any game running under DX11 won't be using it (unless it's using some IHV API like LiquidVR).

However, what is indeed happening is that developers on consoles are tuning their shaders for GCN in order to get the best performance out of the console hardware. This does sometimes pay dividends on the PC versions of said games.
 
I would love nothing more than for AMD to come back in a big way, competition is good for us. I've been team green for a very long time, with the only red card being the 5850 which was fantastic price/performance wise.
 
The 970 isn't falling behind for the reasons people claim it is. The memory issue is NOT the problem with Maxwell, and consoles using AMD hardware is also NOT the problem. The issue is that after Fermi, Nvidia realized that compute functionality was a waste of die space because the software just wasn't there. They decided stripping away all the compute hardware would help them maintain great performance in DX10/11 titles while cutting down substantially on manufacturing costs and increasing yield. So they went from GF100 being their high-end part with GF104 as the mid-range, to GK104 being the high-end and GK106 the start of their mid-range - then they held off a full year to release GK110 (skipping GK100 entirely and opting to release only a revised version). GK110 added some of that compute functionality back, but the software still wasn't there.

With Maxwell they revised the design a bit but overall kept the same approach - GM204 being the high-end initially (970 and 980), with GM200 offered later as the "new" high-end (980 Ti and Titan X). This means the 970 and 980 are still lacking a lot of the compute functionality that the 980 Ti / Titan X and even the 780/Ti feature. Nvidia chose to try to handle the lack of a hardware solution with a software one, but it's nowhere near as efficient. AMD realized this problem would occur and has been pushing devs to shove async compute into every DX12 title they are producing, which makes Nvidia GPUs look substantially worse when in reality they function fine in the majority of games.

The 970 is a great card for most scenarios, but it struggles in certain titles that make heavy use of asynchronous compute. It's not AMD's fault for pushing it either; Nvidia did the same thing in the Fermi era with tessellation (remember Crysis 2 rendering an entire tessellated ocean under the map?), so it's just the nature of the market.

GM200 and GM204 have the exact same compute capabilities; GM200 just has more SMMs. I don't think GK110 has any compute features missing from GM204 either. AMD being in both consoles absolutely is a contributing factor as well IMO, much more so than async compute. As tuxtool said, it's not supported in DX11, where AMD is steadily pulling ahead in the majority of new titles.

I agree about the 970's memory issue being overblown. You can count on one hand the scenarios where it's mattered since launch.
 
GM200 and GM204 have the exact same compute capabilities; GM200 just has more SMMs. I don't think GK110 has any compute features missing from GM204 either. AMD being in both consoles absolutely is a contributing factor as well IMO, much more so than async compute. As tuxtool said, it's not supported in DX11, where AMD is steadily pulling ahead in the majority of new titles.

I agree about the 970's memory issue being overblown. You can count on one hand the scenarios where it's mattered since launch.

Yeah, the thing that GM204 lacks is double precision, which is what's used for HPC. Games don't use DP at all. Even GM200 has its DP stripped and is basically a pure gaming/FP32 beast like GM204, but scaled up. That's the reason why nVidia refreshed GK110 for their HPC Tesla cards with GK210. This was even rumored right before GM200's launch. When you look at compute benchmarks nVidia still wins across the board, although I do not know which of those workloads resemble compute in game design.

I think one of the main reasons why Hawaii has been creeping up on, and in some cases surpassing, GM204 is how long these cards have been around. The HD 7970 launched over four years ago; that's a lot of time to get the hang of your architecture and chips. AMD's launch drivers usually haven't been the greatest compared to nVidia's (HD 7970 vs GTX 680), so who knows how much performance was still left on the table for GM204. AMD were forced to keep improving them with Hawaii's rebrand because the minor hardware improvements weren't going to cut it. Hawaii is just GK110's competitor that was forced to fight GM204, as they had nothing else.

I can't speak for the whole console and Async business and how much of an influence that has become.
 
I would love nothing more than for AMD to come back in a big way, competition is good for us. I've been team green for a very long time, with the only red card being the 5850 which was fantastic price/performance wise.

Back then what swayed me was GPU Physx in Batman Arkham Asylum, Mirror's Edge or Mafia II. That is why I chose a 470 instead of the cheaper 5850.

I did not regret it, fantastic card and Physx did not disappoint me.
 
The 970 isn't falling behind for the reasons people claim it is. The memory issue is NOT the problem with Maxwell, and consoles using AMD hardware is also NOT the problem. The issue is that after Fermi, Nvidia realized that compute functionality was a waste of die space because the software just wasn't there. They decided stripping away all the compute hardware would help them maintain great performance in DX10/11 titles while cutting down substantially on manufacturing costs and increasing yield. So they went from GF100 being their high-end part with GF104 as the mid-range, to GK104 being the high-end and GK106 the start of their mid-range - then they held off a full year to release GK110 (skipping GK100 entirely and opting to release only a revised version). GK110 added some of that compute functionality back, but the software still wasn't there.

With Maxwell they revised the design a bit but overall kept the same approach - GM204 being the high-end initially (970 and 980), with GM200 offered later as the "new" high-end (980 Ti and Titan X). This means the 970 and 980 are still lacking a lot of the compute functionality that the 980 Ti / Titan X and even the 780/Ti feature. Nvidia chose to try to handle the lack of a hardware solution with a software one, but it's nowhere near as efficient. AMD realized this problem would occur and has been pushing devs to shove async compute into every DX12 title they are producing, which makes Nvidia GPUs look substantially worse when in reality they function fine in the majority of games.

The 970 is a great card for most scenarios, but it struggles in certain titles that make heavy use of asynchronous compute. It's not AMD's fault for pushing it either; Nvidia did the same thing in the Fermi era with tessellation (remember Crysis 2 rendering an entire tessellated ocean under the map?), so it's just the nature of the market.

A. The compute functionality of Maxwell / the 900 series is more or less on par with that of GCN3/Fiji. Kepler (even GK110) is below them both in some important gaming-related aspects, that's true. It's important to note that DirectX's compute interface hasn't been updated since the introduction of DX11 (that's Fermi/Cypress days), and thus whatever compute advantages we're talking about here are actually performance advantages in simpler features - none of the advanced features of new h/w are even accessible via DX at the moment.

B. GM204/970-980 and GM200/980Ti-TitanX have the exact same compute capability and provide the exact same compute functionality. See above about DX being the main limiting factor here at the moment as well.

C. Async compute is FUD. It's a great PR/marketing device for AMD, but in practice it doesn't bring any sizeable performance gains even for their recent h/w, and NV's h/w simply performs at the same level as without it. It doesn't relate in any way to which compute features which GPUs have, because this async compute is nothing more than a way of running the old DXCS5 compute shaders (Fermi/Cypress compute feature level, basically) asynchronously. And the ability to run compute concurrently with graphics (which is what AMD is doing and where these gains come from) has nothing to do with how good or bad a GPU is at compute in general, as those things are completely unrelated (a minimal queue-setup sketch follows this list).

D. The main reason GCN cards are doing great in the latest games is console code optimizations reaching their peak this year - it's been two years since the new consoles launched, the usual timeframe for reaching peak performance.
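For reference on what "running compute concurrently with graphics" means at the API level, here is a minimal C++ sketch of creating a separate D3D12 compute queue alongside the graphics (direct) queue; whether the GPU actually overlaps the two workloads is then up to the hardware and driver, which is exactly the point being argued above. Error handling is omitted and the function name is just for illustration.

#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// One graphics (direct) queue and one compute queue on the same device.
// Command lists submitted to the compute queue may run concurrently with
// graphics work - if, and only if, the GPU and driver support that overlap.
void CreateQueues(ID3D12Device* device,
                  ComPtr<ID3D12CommandQueue>& graphics_queue,
                  ComPtr<ID3D12CommandQueue>& compute_queue)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;    // graphics + compute + copy
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&graphics_queue));

    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute + copy only
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&compute_queue));
}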
 
My friend had a dog once, a very smart one. When it looked at me I always got this feeling that the dog wanted to tell me something, because it was really smart. But it never did. Because dogs can't speak.
 
My friend had a dog once, a very smart one. When it looked at me I always got this feeling that the dog wanted to tell me something, because it was really smart. But it never did. Because dogs can't speak.

So, conjecture.

yolo man. chill :D
 
They'll continue to not care about markets with tiny margins.

Of course nVidia would say that. The margins AMD enjoys are pretty small, but that's between AMD and the three manufacturers. AMD got the job with all three by default, not only because each company wanted a product nVidia did not have to offer (SoCs, x86-compatible or otherwise), but because of the relationship each company had with both AMD and nVidia.

nVidia had worked with Microsoft to supply the graphics chip for the first Xbox, and then the two fell out over pricing in an argument that went to court; Microsoft has been working with AMD ever since. nVidia supplied the graphics chip for the PS3, but when it came to the PS4, Sony probably had the same discussions with nVidia that Microsoft had, where nVidia likely offered a proposal for a high-performance graphics part with a plan for long-term production. Then Sony looked at what AMD had to offer: a low-cost SoC with a combined multi-core processor and graphics core, able to be produced for less, an all-in-one solution that took care of the CPU as well. Producing far less heat and taking up less space on the motherboard - of course they'd go with AMD's offering.

It's extremely likely they'll continue to go with AMD from now on, unless they fall out over something in the future or nVidia has something spectacular to offer that AMD can't match. But given that nVidia has fallen out with at least one manufacturer and AMD continues to foster great relationships in that market - who wants to bet that Nintendo didn't even look in nVidia's direction for the Wii U and NX - that's not likely to happen.

Of course nVidia dismisses the console market; they have nothing to offer it. I don't know if the same GCN and eventually Polaris/Vega/Navi etc. chips running in both consoles and graphics cards will do AMD the favours the video suggests - it didn't last generation - but I feel that with both consoles and PCs using the same APIs in DX12 and Vulkan, it surely won't hurt. It will probably be a hell of a lot easier to port games back and forth, and AMD cards will only benefit from that, surely.
 
Of course nVidia would say that.


It is true that Nvidia would embrace console contracts if possible because more is always better.

It is also true that AMD has not earned much money despite its console contracts. Console manufacturers know AMD is desperate and they took everything from AMD at very little cost.
 
The guy seems to believe that console dominance ("total market share") will transform into PC dominance. The fact that they're a minority in the dGPU space isn't in dispute. A better chart to post would be AMD vs Nvidia profitability, lol. Their "master plan" had better materialise fast or they'll be the ones who are screwed.

Betting on consoles is a bet I wouldn't make.

But since I've been #teamred since last year, I hope he's right.
 
Aren't Dual-GPUs gonna be a lot more popular due to VR?

No. The VR SLI APIs the manufacturers have are targeted at DX11. Most developers aren't going to spend the time necessary to support SLI in their games; they already consider it a bother with DX11. DX12 is worse because developers now have to do everything they did before plus everything the drivers used to do for SLI.
 
Nvidia have been fighting fairly, haven't they? They work very hard on drivers and work with devs to implement high-quality effects. It's not like word of mouth alone is going to do anything for them.

They deserve their marketshare.

I don't see how putting more power in the hands of devs is better than what AMD/Nvidia are doing with their gaming programs. All I want is great tech in my PC games, and both IHVs have made that possible in a wide number of titles.
I'm sure AMD will keep pushing their ridiculous tripe about Gameworks being evil and bla bla bla.
Gameworks IS evil:
https://www.youtube.com/watch?v=O7fA_JC_R5s
 
They've still got a hell of a hill to climb, in financial terms!

Advanced Micro Devices (2015)
Revenue $3.991 billion
Operating income -$481 million
Net income -$660 million
Total assets $3.109 billion
Total equity -$412 million

Nvidia (2014)
Revenue $4.13 billion
Operating income $496.227 million
Net income $439.99 million
Total assets $7.250894 billion
Total equity $4.456398 billion

 
DX12 Explicit Multiadapter incoming? AdoredTV hit the nail on the head on AMD's small dual-GPU strategy.

 
Meanwhile, in reality, we are in a situation where multi-GPU is less reliable and less consistent - in both applicability and performance across a variety of games - than ever before.
 
Top Bottom