
Nvidia CEO laments low Turing sales, says last quarter was a ‘punch in the gut’

Panajev2001a

GAF's Pleasant Genius
I have to disagree here...

People talk about nVidia pushing the industry forward with RTX, but what they were really doing, at least up to and including RTX, was to limit the adoption of DX12/Vulkan and increase their card prices for more profit. And there is literally ZERO reason why ray tracing could not be implemented through compute, which AMD is also extremely good at. If things were done openly, gamers would benefit, but that's not what happens.
Now that they have RTX they are seen as innovative, even though it's simply the next stage of what used to be TWIMTBP, PhysX, GameWorks in general, and G-Sync: closing off certain technologies for themselves and singling out the competition, because they can.

But you only have to look back a year, to when Microsoft announced ray tracing support for DX12, and Vulkan followed later that same year. AMD had its Radeon Rays 2.0 ready in March of last year, which is real-time ray tracing in Vulkan using async compute. That's not even counting the first version of Radeon Rays.
DLSS, same thing. Microsoft released DirectML early in 2018. Nothing DLSS does is special, in the sense that it all can be done using DirectML. And that's ignoring the fact that dropping resolution and upscaling normally often gives superior quality to DLSS for the same performance. But no. nVidia has to play this game of closing off everything in order to pretend that only they have the technology to do what they're doing, while that couldn't be further from the truth.

Vulkan in particular (although DX12 is equally capable) offers an efficiency and flexibility that is unprecedented, where many technologies could be implemented without the need for nVidia's closed-off approach. All the while, Vulkan, and probably even DX12, wouldn't exist in their current form if it weren't for AMD, but they don't get props for innovation. nVidia gets praised for their anti-consumer closed-off approach though, and to me, that's disgusting.
Large groups of gamers bashed DX12 because nVidia performed badly in it (or at least worse than in DX11), while AMD had either parity or an improvement with it in most cases. So what really is the problem then? DX12, or nVidia? Vulkan is bashed to a lesser degree because there the benefits have been more obvious; the only reason it's bashed at all is because it's barely used compared to DirectX. But nVidia's mind share has dominated gaming for too long, and the consumer would be better off if things went differently, as in, more open.

The issue is, AMD didn't market RT even though they had it, and why should they, when the hardware in reality is not ready for it? nVidia on the other hand are masters of marketing and, most importantly, deception. Everyone thinks that only they can do RT now, and that's a big joke. But this time they have gone too far and it didn't work out like they expected.
And I'm extremely glad to see that at last this approach is backfiring on them, because honestly, I'm tired of it.

Yes, the creation of Mantle by AMD was an important step for both DX12 and Vulkan, where the latter incorporated tech from Mantle.
 

dark10x

Digital Foundry pixel pusher
I have to disagree here...

People talk about nVidia pushing the industry forward with RTX, but what they were really doing, at least up to and including RTX, was to limit the adoption of DX12/Vulkan and increase their card prices for more profit. And there is literally ZERO reason why ray tracing could not be implemented through compute, which AMD is also extremely good at. If things were done openly, gamers would benefit, but that's not what happens.
Now that they have RTX they are seen as innovative, even though it's simply the next stage of what used to be TWIMTBP, PhysX, GameWorks in general, and G-Sync: closing off certain technologies for themselves and singling out the competition, because they can.

But you only have to look back a year, to when Microsoft announced ray tracing support for DX12, and Vulkan followed later that same year. AMD had its Radeon Rays 2.0 ready in March of last year, which is real-time ray tracing in Vulkan using async compute. That's not even counting the first version of Radeon Rays.
I certainly understand this perspective but I don't really care about Nvidia vs AMD all that much. I own and have owned many cards from both companies. It's not about that for me but you seem to greatly dislike Nvidia. I don't dislike AMD at all and would love to see them succeed again.

Here's the thing, while RT is possible using compute, it isn't fast enough right now. Period. The RT core is basically just a black box which greatly increases the speed of ray-triangle intersections. I don't believe either company has the ability to produce a GPU capable of even coming close to this using compute. We're just not there yet.
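Just to put a number on what that black box is replacing: below is a minimal ray/triangle intersection test (the standard Möller-Trumbore routine) written out as plain C++. It's a sketch for scale only, not anyone's shipping code; a compute-based tracer has to run something like this for enormous numbers of ray/triangle pairs every frame, on top of BVH traversal, and that is exactly the arithmetic the RT core bakes into fixed-function hardware.

```cpp
// Minimal Möller-Trumbore ray/triangle test. Purely illustrative:
// this is the kind of per-pair arithmetic an RT core accelerates
// in fixed function, and a compute shader has to spend ALU on.
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                             a.z * b.x - a.x * b.z,
                                             a.x * b.y - a.y * b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true and writes the hit distance t if the ray (orig, dir)
// hits the triangle (v0, v1, v2).
bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t)
{
    const float kEps = 1e-7f;
    Vec3  e1  = sub(v1, v0);
    Vec3  e2  = sub(v2, v0);
    Vec3  p   = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < kEps) return false;      // ray parallel to triangle
    float invDet = 1.0f / det;
    Vec3  s = sub(orig, v0);
    float u = dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;       // outside first barycentric bound
    Vec3  q = cross(s, e1);
    float v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;   // outside second barycentric bound
    t = dot(e2, q) * invDet;
    return t > kEps;                              // hit must be in front of the origin
}
```

Each call is a few dozen ALU operations; multiply that by the millions of rays a frame needs and it's clear why a dedicated unit pulls so far ahead of doing the same thing in compute.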

This isn't just my opinion either - it's something shared by developers I've spoken with working on these technologies. The DXR features being tested and implemented would be impossible if they relied solely on compute. It would be like 4-5 times slower on a very fast GPU.

Would you be opposed to a similar DXR compatible core from AMD? I'd love to see it myself.

Again, though, it's not about card wars - I just want to see real-time ray tracing start to take off.

Now DLSS? Yeah, I'm not hugely into that and I feel the die space could have been used differently there. No arguments there.

Yes, the creation of Mantle by AMD was an important step for both DX12 and Vulkan, where the latter incorporated tech from Mantle.
Exactly. Taking these steps is important.
 

longdi

Banned
DLSS? That's the tensor cores, right? Nvidia is just being lazy, leaving their data-center crap in our gaming GPUs.

In fact, I fear Nvidia's stock price will fall further. You don't need a big general-purpose GPU for tasks like AI.

Nvidia really must rethink their GPU designs; gone are the days of the all-in-one GPU. Kinda ironic, since the GPU started as a specialised compute toy as opposed to general-purpose stuff like Intel's x86 line.
 

KINDERFELD

Banned
1080ti owner here, and the bump in performance that an RTX 2080ti gives you is nice; in a lot of games it means the difference between holding a stable 4K/60fps or not.
However, the cost is ludicrous, especially when compared to the price and performance of a 1080ti.
 

tr1p1ex

Member
NO reason to get new cards really. But you can say that about many generations of cards.

The main reason for their dip is the crash of bitcoin mining.
 

thelastword

Banned
What are the chances of a price drop of the 2070 & 2080 due to the low sales report?
Maybe none yet; they will probably keep it there and try to see how many of these cards people will still buy at a high price, and they will announce the GTX cards at cheaper prices whilst trying to justify RTX features at the higher price........Eventually they will phase out RTX cards since it's not ready for primetime and relaunch when they are finally on 7nm.......In the meantime, they will try to help devs put some more RTX games out to push the remaining RTX stock. Clearly, devs are having trouble, as it does not "just work" and lots of work is involved with this hybrid solution; performance and resolution are just crucified too much and devs don't like this.....RTX in Shadow of the Tomb Raider, MIA; DLSS in a bevy of games, MIA; RTX will also not be available at launch in many upcoming games, some advertised as supporting it day 1....So they have quite a few problems here...Soon they will push GTX hard at lower prices than RTX......GTX 1670, 1680, 1680ti...........


But the 2060 should help things along.
No it won't. Vega 56 is outpacing the 2060 in many titles, not DX11 or NV titles...DX12 and Vulkan titles see Vega 56 excel over the 2060 and 1070ti....DX12 and Vulkan are the future, plus Vega has 8GB of VRAM; there's no way you should buy a 1070ti-level card with 6GB of VRAM in 2019.....

As for Raytracing?

[image]


Then we have some gems like this...

[image]

[image]


DX11 is going the way of the dodo, no matter how much Nvidia wants to suppress DX12 and Vulkan....You realize how in some titles you got great DX12 support and the sequels have worse DX12 support????? Guess what....

This just has to end, it's holding gaming and performance back....

I have to disagree here...
Vulkan in particular (although DX12 is equally capable) offers an efficiency and flexibility that is unprecedented, where many technologies could be implemented without the need for nVidia's closed-off approach. All the while, Vulkan, and probably even DX12, wouldn't exist in their current form if it weren't for AMD, but they don't get props for innovation. nVidia gets praised for their anti-consumer closed-off approach though, and to me, that's disgusting.

Large groups of gamers bashed DX12 because nVidia performed badly in it (or at least worse than in DX11), while AMD had either parity or an improvement with it in most cases. So what really is the problem then? DX12, or nVidia? Vulkan is bashed to a lesser degree because there the benefits have been more obvious; the only reason it's bashed at all is because it's barely used compared to DirectX. But nVidia's mind share has dominated gaming for too long, and the consumer would be better off if things went differently, as in, more open.
Great points made........Even YouTubers are on the DX12-bashing train because NV does worse in it.......NV is all about proprietary standards that bog down performance and the industry...If it was up to NV we would be on DX11 forever......GameWorks features that crucify performance would be more prevalent, so people could justify spending $800, $1,200, $3,000 to get more performance with these resource hogs enabled.....Which brings us to NV's RTX, which even in hybrid form crucifies performance, and they ask an arm and a leg for cheap tech and low VRAM counts..........

Yet, how can people not see who's really moving this industry forward? Are they blind? This baffles me, because surely people can't be so disingenuous.......Remember how so many people laughed at compute cards when AMD offered them to gamers; so when NV lost heavily in certain compute-heavy titles and in Vulkan, guess what, NV is now onboard with compute on their gaming cards in Turing. Followers, really......NV had their proprietary and of course expensive G-Sync tech, but AMD went with the open standard FreeSync, which benefits the gamer...Do you know how much NV and its fans bashed FreeSync? Now guess who's on the FreeSync train? And you would think they would come and concede that AMD had the better approach and the better vision for gamers, but no, they came in arrogant on day 1 of their FreeSync support announcement and bashed AMD, saying they have better FreeSync support etc.......The audacity......It just never ends with Nvidia, but perhaps they're going to learn the lesson they need soon; it has already begun......
 

Ascend

Member
I certainly understand this perspective but I don't really care about Nvidia vs AMD all that much. I own and have owned many cards from both companies. It's not about that for me but you seem to greatly dislike Nvidia. I don't dislike AMD at all and would love to see them succeed again.
It's not about nVidia vs AMD for me either. At least, not directly. If AMD were the one with a bunch of money, I'm quite sure the situation could easily flip. Has AMD messed up? Sure. Multiple times. The silent downgrade of the RX 560 is practically worse than the GTX 1060 3GB naming scheme.
But it's up to the consumers to keep companies 'honest', and for a very long time barely anyone has been really critical of nVidia's practices. In fact, they are even praised for offering closed technology, while AMD is shunned for doing so. Prime example... G-Sync vs FreeSync.
When AMD brought out FreeSync, they were 'just copying nVidia' with an inferior version of G-Sync.
When nVidia started supporting FreeSync, nVidia was great for doing so.

The double standard is appalling.

Here's the thing, while RT is possible using compute, it isn't fast enough right now. Period. The RT core is basically just a black box which greatly increases the speed of ray-triangle intersections. I don't believe either company has the ability to produce a GPU capable of even coming close to this using compute. We're just not there yet.

This isn't just my opinion either - it's something shared by developers I've spoken with working on these technologies. The DXR features being tested and implemented would be impossible if they relied solely on compute. It would be like 4-5 times slower on a very fast GPU.

Would you be opposed to a similar DXR compatible core from AMD? I'd love to see it myself.
But it can be done like nVidia is doing it: a hybrid between rasterization and ray tracing. AMD talked about it at GDC last year for their Instinct products. Basically, everything nVidia is doing regarding RT was talked about. Look at page 62 of their GDC presentation... It states this;


PRORENDER REAL-TIME RAY TRACING
Rasterization for primary visibility and lighting
‒ No noise in primary
‒ Fast feedback

Asynchronous ray tracing for secondary and complex effects
‒ Based on RadeonRays

You choose
‒ Ambient occlusion
‒ Glossy reflections
‒ Diffuse global illumination
‒ Area lighting

Effects can be turned on/off based on HW capabilities

MC-based effects are denoised using wavelet filter


Source

Does that sound like what nVidia is doing? Seems exactly the same to me, except AMD does it through Async compute, something that AMD cards had since 2011. And if you really look at tensor cores, in reality they're compute cores.
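Just to make that slide concrete, here's a rough sketch of how such a hybrid frame could be laid out. Every name below is a hypothetical placeholder rather than an actual AMD or nVidia API; it only captures the ordering the slide describes: rasterize primary visibility, trace selected secondary effects on an async compute queue, then denoise the sparse ray results before compositing.

```cpp
// Hypothetical hybrid-rendering frame, matching the ProRender slide above.
// All functions are stand-in stubs, not a real graphics API.
enum class Effect { AmbientOcclusion, GlossyReflections, DiffuseGI, AreaLighting };

struct Frame { /* G-buffer, per-effect ray buffers, final image ... */ };

void rasterizeGBuffer(Frame&) {}          // graphics queue: primary visibility + lighting, no noise
void traceRaysAsync(Frame&, Effect) {}    // RadeonRays-style pass on the async compute queue
void waveletDenoise(Frame&) {}            // filter the Monte Carlo ray results
void composite(Frame&) {}                 // merge rasterized and ray-traced layers

void renderHybridFrame(Frame& f)
{
    rasterizeGBuffer(f);                              // fast feedback, no noise in primary
    traceRaysAsync(f, Effect::AmbientOcclusion);      // each effect can be turned on/off
    traceRaysAsync(f, Effect::GlossyReflections);     // to match the hardware's budget
    waveletDenoise(f);
    composite(f);
}
```

The structural point is the same one nVidia makes for RTX: only the secondary effects are traced, everything else stays rasterized.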

Again, though, it's not about card wars - I just want to see real-time ray tracing start to take-off.
Me too. But I don't want it as any sort of exclusivity from either company. The ideal thing is if each makes their own implementation through Vulkan/DX12 or another existing API.

Sadly, I don't have the capability to do it myself, but I'd love to see someone try to get RT to work through async compute on a Radeon VII for Quake 2 or something, just to confirm that it can work without dedicated cores. AMD knows they can do it, but won't be shoehorning it in for their premium cards...;

For the time being, AMD will definitely respond to Direct Raytracing, for the moment we will focus on promoting the speed-up of offline CG production environments centered on AMD’s Radeon ProRender, which is offered free of charge ….. utilization of ray tracing games will not proceed unless we can offer ray tracing in all product ranges from low end to high end, – David Wang

https://www.tomshardware.com/news/david-wang-amd-ray-tracing,38056.html
 

dark10x

Digital Foundry pixel pusher
Does that sound like what nVidia is doing? Seems exactly the same to me, except AMD does it through Async compute, something that AMD cards had since 2011. And if you really look at tensor cores, in reality they're compute cores.
My point is that it’s not fast enough using this approach. It can absolutely work but at a lower level of performance. The RT core is faster but even that is only just barely reasonable enough to produce smooth frame-rates in games. The method you describe is similar but at a much lower level of performance so it doesn’t make sense to do it.

Me too. But I don't want it as any sort of exclusivity from either company.
DXR isn't exclusive, though, right? I don't see how it wouldn't be exclusive to one or the other for a period of time. The two aren't going to synchronize new features. AMD could support DXR right now if they were so inclined.
 
Bitcoin mining inflated the prices - most PC gamers do not want to pay more than £350 for a graphics card.

I have a 970 and I will not upgrade until the latest mid range card is in that region.
 

Ascend

Member
My point is that it’s not fast enough using this approach. It can absolutely work but at a lower level of performance. The RT core is faster but even that is only just barely reasonable enough to produce smooth frame-rates in games. The method you describe is similar but at a much lower level of performance so it doesn’t make sense to do it.
Yeah I don't buy it... No, it won't be as fast as the RTX 2080 Ti, but it's not as if nVidia's solution is somehow that much superior compared to AMD's. VEGA cards could at least match the "low" Ray Tracing setting in BFV.

The Vega 56/64 cards are capable of around ~4-5 GRays/s (reference here). nVidia claims up to 10 GRays/s for its RTX 2080 Ti, 8 GRays/s for the RTX 2080, and 6 GRays/s for the RTX 2070. Guess where the RTX 2060 lands.... Yeah... The same capability as Vega 56/64. That's roughly in line with their relative traditional performance. So I'm not seeing where nVidia's solution is so much faster. And going by this, there is no reason that the RTX 2080 would be superior at this compared to the Radeon VII.

But indeed it doesn't make sense to do RT at this point for AMD, because putting resources in it is not an efficient use of their currently limited resources.

DXR isn't exclusive, though, right? I don't see how it wouldn't be exclusive to one or the other for a period of time. The two aren't going to synchronize new features. AMD could support DXR right now if they were so inclined.
I'm quite sure its current implementation is catering to the nVidia hardware, meaning, even if AMD had implemented a similar capability in its drivers, or in hardware, it either wouldn't work or would perform dreadfully by the way it has been programmed. Think HairWorks vs TressFX.
 

Redneckerz

Those long posts don't cover that red neck boy
Absolutely which is why competition is so important. I really hope AMD manages to come up with something but the Radeon VII isn't it.

That said, due to lack of competition, it was a good time to introduce the RT core.
Navi might be a thing?

I do hope you are right, since I will wait a long time before upgrading my CPU as well.
Oh, you shouldn't worry. Developers' impressions of system specs fluctuate wildly compared to real-world specs.

Yeah I don't buy it... No, it won't be as fast as the RTX 2080 Ti, but it's not as if nVidia's solution is somehow that much superior compared to AMD's. VEGA cards could at least match the "low" Ray Tracing setting in BFV.

The Vega 56/64 cards are capable of around ~4-5 GRays/s (reference here). nVidia claims up to 10 GRays/s for its RTX 2080 Ti, 8 GRays/s for the RTX 2080, and 6 GRays/s for the RTX 2070. Guess where the RTX 2060 lands.... Yeah... The same capability as Vega 56/64. That's roughly in line with their relative traditional performance. So I'm not seeing where nVidia's solution is so much faster. And going by this, there is no reason that the RTX 2080 would be superior at this compared to the Radeon VII.

But indeed it doesn't make sense to do RT at this point for AMD, because putting resources in it is not an efficient use of their currently limited resources.


I'm quite sure its current implementation is catering to the nVidia hardware, meaning, even if AMD had implemented a similar capability in its drivers, or in hardware, it either wouldn't work or would perform dreadfully by the way it has been programmed. Think HairWorks vs TressFX.
Regular GTX 10 cards can do multiple GRays per second if we take a look at scene demos, where a 1080 Ti manages 11 GRays/s:



So what's important is what is being traced in a scene to get to the numbers Nvidia has been giving.
 

Dontero

Banned
And there is literally ZERO reason why ray tracing could not be implemented through compute, which AMD is also extremely good at. If things were done openly, gamers would benefit, but that's not what happens.

So where is AMD's real-time in-game ray tracing? Nowhere, and they don't seem to be interested in it right now.
Also, Radeon Rays etc. is not for games, mate, it is for rendering stuff for companies, and it is nowhere near what RTX does in real time.

While nvidia are fuckers who often play shitty games, RTX is not one of them.
The real problem with RTX is not the technology but the price. They jacked the prices of their GPUs up to a level that is basically impossible for most people.

RTX is also an important lesson: ray tracing does not have the WOW factor it used to have a decade ago, because our faking of effects has gotten so good that you can switch on ray tracing and a layman will not see the difference between two monitors.

I mean, the biggest reason FOR ray tracing was surface stuff like getting good roughness, scattering, etc. All of that is now almost completely done in the rasterization pipeline at a fraction of the ray tracing cost, and the results are really fucking good.

So ray tracing still has benefits, but those benefits melt away year after year, layer after layer. A few years ago it was surface stuff; recently global illumination techniques have been making huge leaps.

Nvidia should be given a thumbs up for innovating, but that innovation will go nowhere if they ask a premium for it.
Just like PhysX.
 

Ascend

Member
So where is AMD's real-time in-game ray tracing? Nowhere, and they don't seem to be interested in it right now.
And with good reason. Even nVidia, which normally makes mountains of money with their cards, is disappointed with the RTX sales. You really think AMD, which normally gets the short end of the stick compared to nVidia, is going to put their already limited resources into something that basically failed nVidia? Yeah... Don't think so.

Also, Radeon Rays etc. is not for games, mate, it is for rendering stuff for companies, and it is nowhere near what RTX does in real time.
Nowhere near RTX based on what? It is the exact same technology. More importantly, the same architecture is being used for Radeon Rays hardware as for the gaming hardware, so...

While nvidia are fuckers who often play shitty games, RTX is not one of them.
The real problem with RTX is not the technology but the price. They jacked the prices of their GPUs up to a level that is basically impossible for most people.
Not true. Judge for yourself what the 1080 Ti is or isn't capable of... Watch this, from 2:15 to 3:50, and listen carefully.



Based on that... I'm quite sure nVidia could do everything it's doing with RTX on a 1080Ti if they really wanted to. But obviously they don't want to, because RTX would look even more ludicrous.

The real problem with RTX is not the technology but the price. They jacked the prices of their GPUs up to a level that is basically impossible for most people.

RTX is also an important lesson: ray tracing does not have the WOW factor it used to have a decade ago, because our faking of effects has gotten so good that you can switch on ray tracing and a layman will not see the difference between two monitors.

I mean, the biggest reason FOR ray tracing was surface stuff like getting good roughness, scattering, etc. All of that is now almost completely done in the rasterization pipeline at a fraction of the ray tracing cost, and the results are really fucking good.

So ray tracing still has benefits, but those benefits melt away year after year, layer after layer. A few years ago it was surface stuff; recently global illumination techniques have been making huge leaps.
All true.

Nvidia should be given a thumbs up for innovating, but that innovation will go nowhere if they ask a premium for it.
I'm not seeing the innovation here. Other than it arguably being possible for AMD to do the same thing with GCN hardware, as I explained here, nothing the Tensor cores are doing is really innovative.

Let's just quote two things...;

"Tensor cores: A tensor core is a unit that multiplies two 4×4 FP16 matrices, and then adds a third FP16 or FP32 matrix to the result by using fused multiply–add operations, and obtains an FP32 result that could be optionally demoted to an FP16 result.[7] Tensor cores are intended to speed up the training of neural networks."
https://en.wikipedia.org/wiki/Volta_(microarchitecture)

And...;
Whilst the compiler emitted the expected v_pk_mul_f16 operations, it didn’t emit the code sequence you might expect to load a min16float4 from memory. It loaded FP32 values and packed them down to an FP16 vector manually. If you were to access a larger type, such as a min16float4x4 matrix, the code sequence would be very sub-optimal. There is an easy solution. If we change the source code to:
*code*
The driver recognises this code sequence, and issues a much more optimal sequence of instructions:
*instructions*

https://gpuopen.com/first-steps-implementing-fp16/

Now what does that mean? It means Vega can handle 4x4 FP16 matrices, but the coding needs to be different to be efficient.
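To put that in plain code: per the Wikipedia description quoted above, the operation a tensor core performs is just D = A×B + C over 4×4 matrices, built entirely from fused multiply-adds. Here's a trivial CPU-side sketch of that arithmetic, for illustration only (FP32 throughout, whereas the hardware takes A and B in FP16); it's not vendor code:

```cpp
// One tensor-core-style step written out on the CPU: D = A * B + C
// over 4x4 matrices, using nothing but fused multiply-adds.
#include <cmath>

void tensorStep(const float A[4][4], const float B[4][4],
                const float C[4][4], float D[4][4])
{
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j) {
            float acc = C[i][j];                        // start from the accumulator matrix
            for (int k = 0; k < 4; ++k)
                acc = std::fma(A[i][k], B[k][j], acc);  // one FMA per product term
            D[i][j] = acc;
        }
}
```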
Fused multiply-add (FMA) operations are also nothing new... Search for FMA in this 2011 GCN presentation;
http://developer.amd.com/wordpress/media/2013/06/2620_final.pdf

And look at this list of all the hardware that supports it... It includes AMD GPUs from 2009, and Bulldozer... And even funnier, the first processor to have this technology was a 1990 processor by IBM.
https://en.wikipedia.org/wiki/Multiply–accumulate_operation#Fused_multiply–add

If we can believe a random person on a forum, it also seems to be that whatever Tensor is doing, AMD's VLIW architecture was capable of doing...;
"AMDs old VLIW architectures support a "horizontal" FP32 DP4 (i.e. dependent operations) in a single VLIW instruction (using 4 slots) meaning the old VLIW-SIMDs (consisting of 16 VLIW units) could do almost the same as a tensor core (they had the capacity for 16 DP4 per clock, which is what you need to do for a 4x4 matrix multiplication, and that even in FP32)"
https://forum.beyond3d.com/posts/1984638/

So tell me again, what is so special about Tensor cores and RTX?

Just like PhysX.
PhysX was not nVidia technology originally. They bought the company that developed it and made it exclusive to their own hardware. That's why it became a gimmick, rather than a mainstream feature, because it could never be implemented by everyone. I would hardly call that innovation. It's stagnation due to exclusivity. It's the same reason GameWorks in general was a gimmick.
 
The 'RTX' and 'DLSS' are promising features (if they ever come to fruition in a meaningful way), but I bought a 2060 for the rasterization (it outperforms the 1070ti and competes with the 1080 in some games).
 

Mr Nash

square pies = communism
How much of the purchasing of silly expensive video cards was fueled by all those people piling in on cryptocurrencies when that was full of speculators? That must have played a part in video cards selling so well and prices going up for them as well. In any case, people aren't forking out that kind of money for these cards any more so companies are going to have to get prices down if they want units to sell going forward.
 

Stuart360

Member
Apart from the people that simply have to have 120fps, 144fps etc., you don't really need to buy expensive GPUs constantly, as long as your GPU has a decent edge over consoles, since 95+% of games released are based on the console specs.
I'm still rocking a GTX 980ti here and it's more than good enough for 1080p/60, even 1440p/60 in some games; in fact I can even do 4K/30 in a lot of new games, and as a former console user, I have no problem with 30fps.
 