
AMD ray-tracing hardware strategy overview.

Leonidas

Member
[Image: AMD slide on its ray-tracing strategy]

AMD also briefly touched on its vision for real-time ray-tracing. To begin with, we can confirm that the "Navi 10" silicon has no fixed-function hardware for ray-tracing, such as the RT cores or tensor cores found in NVIDIA "Turing" RTX GPUs. For now, AMD's implementation of DXR (DirectX Ray-tracing) relies entirely on programmable shaders. At launch, the RX 5700 series won't be advertised to support DXR; AMD will instead release support through driver updates. The RDNA 2 architecture scheduled for 2020-21 will pack some fixed-function hardware for certain real-time ray-tracing effects. AMD sees a future in which real-time ray-tracing is handled on the cloud. The next frontier for cloud computing is cloud-assist, where your machine can offload processing workloads to the cloud.

Source
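For a rough sense of what "DXR on programmable shaders" means in practice: the core operation of any ray tracer is the ray-triangle intersection test, which is what NVIDIA's RT cores run in fixed-function hardware and what a shader-based path has to do in software. A minimal plain-Python sketch of that test (Moller-Trumbore), illustrative only, not AMD's actual driver path:

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Return the hit distance t along the ray, or None on a miss."""
    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, edge2)
    a = dot(edge1, h)
    if abs(a) < eps:             # ray is parallel to the triangle's plane
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)            # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, edge1)
    v = f * dot(direction, q)    # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(edge2, q)        # distance along the ray to the hit point
    return t if t > eps else None

# A ray shot down +Z hits a triangle lying in the z=1 plane at t=1.0:
print(ray_hits_triangle((0, 0, 0), (0, 0, 1),
                        (-1, -1, 1), (1, -1, 1), (0, 1, 1)))

A GPU runs millions of these tests per frame (plus the BVH traversal that feeds them), so a shader-based DXR fallback competes with ordinary shading work for the same ALUs; dedicated RT cores exist precisely to take that load off the shaders.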


  • "Next-Gen" RDNA coming 2020 or later will have ray-tracing hardware, presumably this is what Xbox Scarlett (and next-gen PlayStation) is using.
  • The just announced 7nm RDNA 2019 (RX 5000 series) does not have this hardware capability.
  • The wording "select lighting effects" makes Next Gen RDNA ray-tracing appear limited.
  • Cloud will support full scene ray-tracing some time in the future...
Looks like we'll have to wait till 2020 or later for RTX vs "Next Gen" ray-tracing comparisons :lollipop_confounded:
 

Aintitcool

Banned
Watch the newest Halo trailer for an example of ray tracing in next gen.

But it's a hybrid implementation, and the devs must be having a hard time, because the lighting was very inconsistent. Probably because the game is also on current gen.
 
This is what next gen graphics will look like with a bit of luck, no need for ray tracing for another 10 years lol:



Thing is: without RT, or at least some decent realtime GI, there will be a big graphical gap between realtime cinematics and gameplay. I don't want this to be the case.
 

lukilladog

Member
Thing is: without RT, or at least some decent realtime GI, there will be a big graphical gap between realtime cinematics and gameplay. I don't want this to be the case.

If the hardware can't handle probe-style GI from cinematics during normal gameplay, it certainly can't handle RT GI.
 
Seems like I'm gonna be buying Nvidia GPUs for years to come.

I say that as someone who's buying a 3900X Zen 2 CPU on June 7th and would prefer to support AMD over Nvidia.

But they are still SO far behind in the PC GPU space.

They are still noticeably behind in how powerful their brand-new GPUs are, they still use more electricity, and, damningly, they do so WITH THE ADVANTAGE of being on 7nm ahead of Nvidia!

What's gonna happen when Nvidia move to 7nm?

Hell, what's gonna happen (possibly today) when Nvidia releases its refreshed "SUPER" RTX lineup, which may be little more than the same old RTX cards with slightly faster memory and possibly even a price cut?

AMD is farther behind Nvidia than some people choose to believe.

The only way I can imagine AMD pulling ahead of Nvidia at this point is if they manage to make chiplet-based GPUs connected by Infinity Fabric and do it in a way that is invisible to developers (none of the drawbacks of SLI or CrossFire) and just works. They can have their chiplets compete against Nvidia's monolithic dies... it's worked so well against Intel.

I'd be surprised if that's not exactly what they're working towards but it seems like if that's coming it's still years away.
 

llien

Member
But they are still SO far behind in the PC GPU space.
For someone who's buying stuff from AMD, you are too much into green FUD.
The only aspect in which AMD is notably behind is power consumption.

The 40CU 5700 XT beats the more expensive 2070.
The 5700 (non-XT) looks like this vs the 2060:

[Image: RX 5700 vs. RTX 2060 benchmark chart]


Power consumption figures are on par, and yes, AMD has a process node advantage.

Hell, what's gonna happen (possibly today) when Nvidia releases its refreshed "SUPER" RTX lineup, which may be little more than the same old RTX cards with slightly faster memory and possibly even a price cut?
Nothing.

The only way I can imagine AMD pulling ahead of Nvidia
AMD doesn't need a $1300+ GPU to pull ahead of nVidia.
 
I really don't think it's "green FUD" at all.

If all that AMD can do is simply almost match Nvidia (but not quite in power efficiency, not quite in features, and then not competing in the high end) years later... why should I bother?

I just don't understand how any of these Navi GPUs make any sense. If I wanted a 2060 or 2070 I could have already bought one. I could have already had those levels of performance for years on the Nvidia side.

And if these NAVI GPUs do have any advantage (either in price or performance) Nvidia will shortly erase it with either price cuts, a refresh, or both.

The only AMD GPU that makes any sense is the 570 (the 570, not the 5700)... nothing else.

I'm baffled why AMD is only releasing these mid range NAVI cards this year. RIGHT NOW they have SOME advantage by being first at 7nm.

Is it not possible for them to release an 80CU Navi GPU that can compete with or even beat a 2080 Ti? Even if it consumes an ABSURD amount of electricity and has no ray tracing, just DO IT, AMD. Hold that performance crown again even if it's only for a short time.

So what? "Wait for Big Navi next year"? By then Nvidia will be on 7nm too and take another giant leap forward and the gap between them will increase again. Opportunity lost.

AMD is not close to Nvidia, and if you think they are, you haven't really thought about it.
 

llien

Member
I really don't think it's "green FUD" at all.
If all that AMD can do is simply almost match Nvidia
Because something is very wrong with your glasses:

[Image: RX 5700 vs. RTX 2060 benchmark chart]



If I wanted a 2060 or 2070 I could have already bought one.
Right.
In fact, if you wanted 2060 or 2070 performance, you could have bought a 1080 Ti or 1070/1070 Ti already.


And if these NAVI GPUs do have any advantage ( either in price or performance ) Nvidia will...
The 570 wipes the floor with nvidia's products in the same price segment, and nvidia didn't drop prices on those.


Is it not possible for them to release an 80CU Navi GPU that can compete with or even beat a 2080 Ti?
Of course it is possible, but why do it? Are you into buying $1300 cards?


So what? "Wait for Big Navi next year"?
The majority of gamers on Steam have 1060/1050Ti/1050-level cards (or below).
Now, if you are a $1000+ GPU customer, why wait for anything? Just keep buying the fastest card available at any given point.

But "I will buy 2060 over 5700, because AMD doesn't have a product in $1300 range" is nonsensical.
 
I feel like you are underestimating just how EASY it will be for Nvidia to counter Navi.

They can simply reduce their prices slightly. Or they can release a 2060ti or 2070ti which will bring them up to parity or above. They will automatically be the better options because they have RTX and consume less power.

And I 100% expect them to make an announcement any time now about doing exactly that.
 

llien

Member
I feel like you are underestimating just how EASY it will be for Nvidia to counter Navi.

They can simply reduce their prices slightly. Or they can release a 2060ti or 2070ti which will bring them up to parity or above. They will automatically be the better options because they have RTX and consume less power.

And I 100% expect them to make an announcement any time now about doing exactly that.

You are comparing what we have with some theoretical scenario in which nVidia harms its own profit margins.
I expect "Super" to simply not cost more than non-super.
So an on-par or faster 2070 for $500, while the 5700 XT is $449.
The 2060 Super will still be quite a bit slower than the 5700.
 

LordOfChaos

Member
The more we learn, the more I think this first-gen RDNA was supposed to be called GCN all along, but they were wary of all the negative connotations that would bring with it. So, it being a substantial efficiency gain, they moved the RDNA branding down one and will call the true "Next Gen" that has been on their roadmaps for years RDNA second gen.

Navi has a substantial change to the SIMD groupings, so it is a fair step away from base GCN, but I think the "Next Gen" turned "Next Gen RDNA" is the whole banana.


[Image: AMD GPU architecture roadmap]
 

Leonidas

Member
So an on-par or faster 2070 for $500, while the 5700 XT is $449.
The 2060 Super will still be quite a bit slower than the 5700.

The 5700 series doesn't have ray-tracing hardware, though. When next-gen RDNA comes to PC and the next-gen consoles next year, the RX 5700 will struggle since it lacks dedicated ray-tracing hardware; it will be behind consoles within a year...
 

Ascend

Member
I really don't think it's "green FUD" at all.

If all that AMD can do is simply almost match Nvidia (but not quite in power efficiency, not quite in features, and then not competing in the high end) years later... why should I bother?
It's always a lose-lose for AMD. If they add features that nVidia has, they're copying, and if they don't, they lack features. They added FidelityFX, which is basically a superior DLSS, but apparently the only point that matters is ray tracing, even though no one has seen its relevance since launch. Suddenly ray tracing is the biggest, most important feature ever. Lower input lag across all games? Nah, we don't need that! We need ray tracing!

I just don't understand how any of these Navi GPUs make any sense. If I wanted a 2060 or 2070 I could have already bought one. I could have already had those levels of performance for years on the Nvidia side.

And if these NAVI GPUs do have any advantage (either in price or performance) Nvidia will shortly erase it with either price cuts, a refresh, or both.

The only AMD GPU that makes any sense is the 570 (the 570, not the 5700)... nothing else.
And yet everyone buys the more expensive 1050 Ti and GTX 1650 over it. The lower-price, better-performance approach has not worked for them on multiple occasions. Why would they lower prices this time, then, if people will still flock to nVidia anyway? They might as well keep their prices up and make money from the small market that does see value in their products.

I'm baffled why AMD is only releasing these mid range NAVI cards this year. RIGHT NOW they have SOME advantage by being first at 7nm.

Is it not possible for them to release an 80CU Navi GPU that can compete with or even beat a 2080 Ti? Even if it consumes an ABSURD amount of electricity and has no ray tracing, just DO IT, AMD. Hold that performance crown again even if it's only for a short time.
Yeah... How did that turn out with the 290X? It gained them nothing. They had the flagship product for over 6 months, and yet they still didn't succeed.

So what? "Wait for Big Navi next year"? By then Nvidia will be on 7nm too and take another giant leap forward and the gap between them will increase again. Opportunity lost.
You mean the lost opportunity to lower nVidia's prices so everyone goes out to buy the competition? Yeah. Great opportunity lost...

AMD is not close to Nvidia, and if you think they are, you haven't really thought about it.
Oh, they are close. The whole node talk is yet another smokescreen to put AMD in a negative light. If these Navi cards were nVidia's, everyone would be praising them for beating AMD while charging less. Power consumption wouldn't be an issue at all. It's only an issue when it happens to AMD.

Additionally, the lack of ray tracing would be irrelevant too if the roles were reversed. How do I know? I saw no one skipping nVidia for the lack of async compute. I saw no one skipping nVidia for not having FreeSync at the time. I see no one skipping nVidia for having no Radeon Chill equivalent. And right now, the focus is only on exactly what nVidia has and AMD doesn't, not the opposite. Even though the features AMD has are arguably way more useful than the ones nVidia does have...
 

Dontero

Banned
But they are still SO far behind in the PC GPU space.

They are not. They are above. Right now at least.

I really don't think it's "green FUD" at all.

Yes, it is. But that's your lack of knowledge showing.

Here is why:
2070: die size 445 mm²
5700 XT: die size 225 mm²

The 5700 XT is faster than the 2070, is almost half the size, and consumes a similar amount of power (180 vs 220 W).
And it costs $100 less. They could lower performance a bit to just match the 2070 and would hit 180 W easily, because power scales superlinearly with clocks. AMD does what Nvidia does in roughly half the die area.
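For the curious: "power scales superlinearly with clocks" comes from the classic CMOS dynamic power approximation P = C * V^2 * f, where voltage also has to rise with frequency near the top of the curve, so power grows roughly with the cube of the clock there. A toy Python sketch; the 10% figures below are illustrative, not measured Navi numbers:

def dynamic_power(c, v, f):
    """Classic CMOS dynamic power approximation: P = C * V^2 * f."""
    return c * v**2 * f

base = dynamic_power(c=1.0, v=1.0, f=1.0)         # normalized baseline
downclocked = dynamic_power(c=1.0, v=0.9, f=0.9)  # -10% clock with -10% voltage

print(f"relative power: {downclocked / base:.2f}")  # ~0.73, i.e. ~27% saved

That is why a modest downclock can plausibly pull a 220 W card toward 180 W while giving up only a little performance.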

And yes, the process node is as integral a part of the technology as the architecture itself.

Now, you do have the right to say that their architecture might be worse overall, but you don't know that until Nvidia actually releases something on 7nm. Maybe their stuff that worked well on 14nm will have to be redesigned after switching to 7nm? Who knows. Right now, though, there are no rumors about Nvidia switching to 7nm.
 

Ballthyrm

Member
Graphics will always be limited by workload and nothing else.
The question should always be: how much work does it take, and how fast can you make assets like these?

The limiting factor is the number of qualified artists you can afford to make all that stuff.

Every year we throw more and more people at the problem.
No, graphics don't "improve" automatically; there's a ton of pipeline stuff under the hood that has to go right for all of it to work.

You may have the horsepower, but it is useless if you can't even make the assets affordably in the first place.
All I see with this stuff is needing more people to make something slightly prettier.

More people = more money = more expensive games

That money has to come from somewhere, and that somewhere is US, the consumers.
Sure, it's going to make all the environments pretty and stuff, but everything that moves (humans, animals, etc.) will need more work.

What's next, 3000-person teams?...
 

ethomaz

Banned
I thought RDNA could be a new life for Radeon.
But the sad truth is that their product still lacks a lot compared with the competition.

The power draw is still unacceptable.
They still can't match performance even with a better process node.
Hardware features are still waiting to be implemented.

The RX 5700 needing 2 TFLOPS of additional compute just to match the RTX 2070 shows the actual situation of AMD's GPUs.
 

Leonidas

Member
They are not. They are above. Right now at least.

The 2060 is ahead of the 5700 XT in terms of ray-tracing capability.
Imagine buying a $500 GPU in 2019 and falling behind a $500 console in graphics capability in a year...

2070: die size 445 mm²
5700 XT: die size 225 mm²

The 5700 XT is faster than the 2070, is almost half the size, and consumes a similar amount of power (180 vs 220 W).

Weak argument. That shows that the Turing architecture is better than RDNA, and Turing already has ray-tracing.

Also, the 5700 series is 251 mm², not 225...
 
So basically, ray-tracing will be used more for global illumination and less for reflective surfaces. Chances are, screen-space reflections will stay.
 

llien

Member
The power draw is still unacceptable.
It's as unacceptable as the 2070/2060's power consumption is, chuckle.
One goes after either lower power consumption or higher perf per die area.
AMD went after the latter, and the math looks pretty good.

I also suspect they can drop the price quite a bit and still make good money. Maybe after Vega 56/64 are sold out, they will.

They still can't match performance even with a better process node.
They have shown much smaller chips beating much bigger nvidia chips.

AMD is not going after 750mm^2 chips any time soon.

Hardware features are still waiting to be implemented.
Well, I honestly tried to "see it" in Battlefield.
I could not.
The performance hit, however, is very significant.

On the other hand, the extra 2 GB of RAM and that clearly overlooked lower input response (at the same framerate) could make a big difference.
 

CrustyBritches

Gold Member
It was with CrustyBritches
But the bet was PS5 will use big navi
It was 2 part:
1. GTX 1080/RTX 2060 can beat PS5
2. PS5 is Navi 10 (i.e. 44-48 CU, though to clarify, we now know Navi 10 is 36-40 CU), not the bigger Navi 20.

All my guesses and bets are based around performance and power consumption. Features had nothing to do with it.
 

Dontero

Banned
2060 is ahead of 5700XT in terms of ray-tracing capability.
Imagine buying a $500 GPU in 2019 and falling behind a $500 console in graphics capability in a year...

Imagine having fixed-function hardware that doesn't work correctly and slows your game to a crawl.
The current implementation of RTX is atrocious. Not only is it basically a few effects instead of full-on raytracing, it also murders your performance worse than PhysX did at the time.

Weak argument. That shows that Turing architecture is better than RDNA, and Turing already has ray-tracing.
How can it be better if it does the exact same thing and needs 100% more die space to accomplish it?
So what if it has ray tracing? AMD has TrueAudio built in and a total of 4 games use it lol.
 

ethomaz

Banned
How can it be better if it does the exact same thing and needs 100% more die space to accomplish it?
So what if it has ray tracing? AMD has TrueAudio built in and a total of 4 games use it lol.
It doesn't need 100% more die space... that difference in space is just because one is 7nm and the other isn't.
You can't compare die sizes across different nodes... the lower one will be smaller lol
 

llien

Member
The architecture is better because it has similar performance/watt on an older process.
But it is a chip nearly twice as big.
Node shrinks give you either a perf gain per die area or lower power, but not both.

The best comparison is perhaps the VII, which is also on 7nm but loses to the 2080 on both perf and power consumption.
Navi seems to do better, despite giving up expensive HBM2.
 

ethomaz

Banned
But it is a chip nearly twice as big.
Node shrinks give you either a perf gain per die area or lower power, but not both.

The best comparison is perhaps the VII, which is also on 7nm but loses to the 2080 on both perf and power consumption.
Navi seems to do better, despite giving up expensive HBM2.
TSMC claims 7nm offers 0.52x the die size of 16nm.
445 * 0.52 = 231.4 mm²

Ohhh the magic.

Even if the shrink from the buzzword 12nm is smaller, it will be near 50% (that is the doubling you claim)... that is why people found it weird that AMD left a lot of unused die space in 7nm Vega, which made the shrink much smaller than the process offers.

Plus, Turing has RT units in hardware that increase the die size.

How will your face look if a 7nm RTX 2070 ends up smaller than the RX 5700???
 

sertopico

Member
I believe it is up to developers to choose where to use it, no?
They choose hybrid approaches because RT is really that demanding.
Of course, it will be up to them. I'm curious to see how they'll manage it, always keeping in mind that what we are seeing right now on PCs won't be doable on future consoles.
 

Ascend

Member
Games support ray-tracing today and many more will in the future.

RX 5700 series will be obsolete next year.
Did you say the same when GCN had async compute and Maxwell didn't?

Yeah didn't think so.

The green bias is strong in here.
 

ethomaz

Banned
Did you say the same when GCN had async compute and Maxwell didn't?

Yeah didn't think so.

The green bias is strong in here.
About that...


BTW, Maxwell was the biggest evolution of nVidia's architecture, and they are profiting from it even today... Navi is nowhere close to what Maxwell was to the GPU industry.

It is basically the opposite of Maxwell.
 

gspat

Member
You are comparing what we have with some theoretical scenario in which nVidia harms its own profit margins.
I expect "Super" to simply not cost more than non-super.
So an on-par or faster 2070 for $500, while the 5700 XT is $449.
The 2060 Super will still be quite a bit slower than the 5700.
I expect nvidia to simply add an "S" at the end of the name, charge 50 bucks more, and call it a day.
 

gspat

Member
TSMC claims 7nm offers 0.52x the die size of 16nm.
445 * 0.52 = 231.4 mm²

Ohhh the magic.

Even if the shrink from the buzzword 12nm is smaller, it will be near 50% (that is the doubling you claim)... that is why people found it weird that AMD left a lot of unused die space in 7nm Vega, which made the shrink much smaller than the process offers.

Plus, Turing has RT units in hardware that increase the die size.

How will your face look if a 7nm RTX 2070 ends up smaller than the RX 5700???
Are you sure the I/O portion will scale for nvidia? That's a big chunk of any die. It's a big reason AMD ended up with chiplets.
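To put rough numbers on that point, here is a back-of-the-envelope Python sketch. It assumes the logic portion shrinks by TSMC's quoted 0.52x factor while the I/O portion barely shrinks at all; the 30% I/O share is a guess for illustration, not a measured figure for the 2070's die:

TURING_2070_MM2 = 445    # RTX 2070 die size cited above
LOGIC_SCALE_7NM = 0.52   # TSMC's quoted 16nm -> 7nm area scaling
IO_SHARE = 0.30          # assumed fraction of the die that does not shrink

naive = TURING_2070_MM2 * LOGIC_SCALE_7NM
with_fixed_io = (TURING_2070_MM2 * (1 - IO_SHARE) * LOGIC_SCALE_7NM
                 + TURING_2070_MM2 * IO_SHARE)

print(f"naive full shrink: {naive:.0f} mm^2")          # ~231 mm^2
print(f"with fixed I/O:    {with_fixed_io:.0f} mm^2")  # ~295 mm^2

If I/O really doesn't scale, the hypothetical 7nm 2070 lands closer to 295 mm² than to the 231 mm² the naive math suggests.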
 

CyberPanda

Banned
Even Dictator from Digital Foundry is worried about AMD's solution. AMD is LTTP in regards to this tech, and Nvidia will pull further ahead in terms of raytracing technology. GPU wars are gonna get interesting.
 

Ascend

Member
I expect nvidia to simply add an "S" at the end of the name, charge 50 bucks more, and call it a day.
Considering the RX 570 vs 1050Ti & GTX 1650 situation, this is exactly what I expect also, if not even more... But then, will all these greenhorns be complaining to nVidia that the price is too high?
 

Dontero

Banned
The architecture is better because it has similar performance/watt on an older process.

Why do you decouple architecture from process node? You do realize it is an important part of it, no?
Fact is that AMD has a node advantage right now and can build dies half the size of Nvidia's with greater performance.
I don't understand why you ignore someone's obvious technological advantage.

You mean like AMD doesn't play by the rules? What rules, exactly? There are no rules; those who make better hardware win.

If Nvidia can kill AMD right now by moving to a smaller node, then they should do it.
Maybe there is a good reason why AMD went ahead first instead of focusing on architecture.
Maybe they decided that they shouldn't waste time making changes to the arch and should just go for the smaller node now.


It doesn't need 100% more die space... that difference in space is just because one is 7nm and the other isn't.
You can't compare die sizes across different nodes... the lower one will be smaller lol

And why can't I? You act like the poster above, believing in some sort of unwritten rules of business or something.
Fact is, if you are buying a GPU you will not care whether something is 7nm or 14nm; you will care only how fast it is and at what price.
If nvidia can release 7nm GPUs they should, and then we could have a different talk.
 

Dontero

Banned
Even Dictator from Digital Foundry is worried about AMD's solution. AMD is LTTP in regards to this tech, and Nvidia will pull further ahead in terms of raytracing technology. GPU wars are gonna get interesting.

DF folks, while they are fine at analysing framerates, are not really that good with tech predictions.
There is a simple reason why AMD doesn't need to care what Nvidia is doing with RTX.

AMD, unlike Nvidia, has all the best developers on board with what it will be doing, by virtue of AMD hardware being in the next-gen consoles, and what they cook up for RT will set the industry standard, not nvidia. It could even be that AMD's solution will be incompatible with the RTX stuff, which would mean nvidia's hardware is completely useless for all new games with RT features, and they would have to redesign it to fit AMD's design (which is pretty standard in the GPU race).

Though I am not so sure AMD will be giving RT that much stuff to play with. They already mentioned that it will support select effects in hardware, not the whole thing. Which imho is a much more realistic solution, but nonetheless not what RTX is trying to do.
 

gspat

Member
Considering the RX 570 vs 1050Ti & GTX 1650 situation, this is exactly what I expect also, if not even more... But then, will all these greenhorns be complaining to nVidia that the price is too high?
Nope, they'll still complain that AMD is too expensive.
 