
Report: AMD expects “Big Navi” to compete with 3070, not 3080

llien

Member
Isn't the 3080 like 50-60% faster than the 2080 Ti in that 4K Doom video?
TPU (with orgasmic title) said so:

Throughout the video, NVIDIA compared the RTX 3080 to the previous-gen flagship, the RTX 2080 Ti, with 20-30% performance gains shown for Ampere.
TPU
 

JimboJones

Member
However, I agree that if AMD doesn't have a proper answer to ray tracing and especially DLSS, it might be a generation where the feature set easily decides in favor of Nvidia.

Yeah, before I would have been all for a cheaper (maybe slightly weaker) AMD alternative, but if it's missing, or has worse alternatives to, what are looking more and more like must-have features, I can't really get behind them.
 

GymWolf

Gold Member
So, with that Doom demo, we have 3080 at 2080Ti + 20-30%.
As expected.

The only unexpected bit is the modest pricing: no hikes, no Fools Edition bullshit.

But only up to the 3080.
Hm, what could possibly have caused that.

The <resolution> DLSS is a nonsensical concept that is highly misleading.
There is also something wrong with hyping technology that in your personal experience actually sucks.
As if you were a victim of a FUD campaign or something.

Peace and Harmony.


Between the 3070 and 3080, closer to the 3080 (so that a 3070 Ti is still beaten), is what was leaked by that dude who has been spot on at times.
My personal experience was with DLSS 1.0, though.
 

Ellery

Member
TPU (with orgasmic title) said so:

Throughout the video, NVIDIA compared the RTX 3080 to the previous-gen flagship, the RTX 2080 Ti, with 20-30% performance gains shown for Ampere.
TPU

Thanks for the link. I see. Looking at some scenes (sadly they don't present a lot of direct comparisons), it looks to be higher, though.
Maybe TPU has done a more thorough analysis on this or has knowledge that I do not have, but based on the video it seems to be more than 20-30% performance gains.
The other explanation I have is that TPU confused gain (increase) with difference/decrease when talking about percentages.

Also at 1:19 where they switch from 2080 Ti to 3080 it goes from around 100 to 150 fps.

Things can differ based on the scene, but 20-30% sounds a bit low to me looking at that video. Maybe I don't have the full picture and TPU knows more or has done a much more thorough analysis of it. I would love to know what they actually did to get to 20-30% gains.
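If it helps, here's a quick sanity check on the gain vs. decrease distinction in Python (the fps numbers are just what I eyeballed around the 1:19 mark, so treat them as approximate):

```python
# Approximate fps readings from around the 1:19 mark of the video.
fps_2080ti = 100
fps_3080 = 150

# Gain: the delta relative to the 2080 Ti (the right baseline for "X% faster").
gain = (fps_3080 - fps_2080ti) / fps_2080ti * 100    # 50.0

# Decrease: the same delta relative to the 3080 instead.
decrease = (fps_3080 - fps_2080ti) / fps_3080 * 100  # ~33.3

print(f"3080 is {gain:.0f}% faster than the 2080 Ti")
print(f"2080 Ti is {decrease:.0f}% slower than the 3080")
```

Divide by the wrong baseline and a 50% gain shrinks to about 33%, which would land you much closer to TPU's 20-30% figure.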

Edit:
More pictures (screenshots from the video).
Yeah, before I would have been all for a cheaper (maybe slightly weaker) AMD alternative, but if it's missing, or has worse alternatives to, what are looking more and more like must-have features, I can't really get behind them.

Yup, those are killer features. I currently have an RTX card, and depending on what future game support looks like, I don't want to miss RTX and DLSS (more so DLSS than RTX, but the combination is what makes it sweet/useful).

I do have to admit that I am not sure how AMD is going to tackle RayTracing and what FidelityFX is actually about (in comparison to DLSS).

But knowing this industry, they are both copying each other, like Nvidia Reflex, which is a copy of AMD's Radeon Anti-Lag (or something like that), so I bet AMD will find an A.I. upscaling answer eventually.
 
Last edited:
General Lee

I would expect a doubled-up 5700 XT with RDNA2 upgrades to pass a 3070 with relative ease if any of the claims about RDNA2 are true. Considering what we've seen from the consoles, RDNA2 seems decent enough, but of course that doesn't necessarily translate into Big Navi being a success in itself. They could have all sorts of issues, from power consumption to drivers, but currently one of the biggest problems they seem to have is the lack of an answer to DLSS 2. We also don't really know how their ray-tracing solution performs vs. RTX. There's a ton of question marks around Big Navi, and I don't have a lot of confidence they're up to the task of catching up on all fronts. Even in the very best case they're only on par, in theory, with one or two cards, in mainstream games that don't have DLSS 2/RTX in them. That's a lot of caveats.

While I think Nvidia certainly made a much better offer this time than with the 20 series, from a tech standpoint this wasn't really a big surprise. They doubled up on a full node jump, which was to be expected. The reason they didn't push hard on pricing at the low end was likely two-fold: to pre-emptively counter Big Navi, which they likely estimated to slot between the 3070/80, and the fact that the 20 series didn't move that well and people won't actually pay any price they ask. What's curious about Nvidia's lineup is that the 3080 and 3090 use the same chip while having a massive price difference; usually the xx80 has been a 04 chip. I wonder how big of a chip GA102 actually is. It might end up being costly for Nvidia to have the 3080 be a cut GA102 if Big Navi ends up in that performance class with a leaner die. Of course, Nvidia has plenty of room to play with the Super/Ti cards they'll likely introduce after AMD shows its cards. A lot of this also depends on how TSMC 7nm vs. Samsung 8nm plays out. We'll probably never really know how much AMD or Nvidia pay for their chips from either fab, but at least density and power should be easy to compare.
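On that last point, density at least can be ballparked. A quick sketch (the transistor counts and die sizes below are the commonly reported public figures, so treat them as approximate):

```python
# Commonly reported transistor counts and die sizes (approximate).
chips = {
    "Navi 10 (TSMC 7nm)": (10.3e9, 251),   # transistors, die area in mm^2
    "GA102 (Samsung 8nm)": (28.3e9, 628),
}

for name, (transistors, area_mm2) in chips.items():
    density_mtr = transistors / area_mm2 / 1e6  # million transistors per mm^2
    print(f"{name}: ~{density_mtr:.0f} MTr/mm^2")
```

If those figures hold, the two nodes land in the same density ballpark (~41 vs. ~45 MTr/mm²), which makes the cost question even harder to call from the outside.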

On another note, people need to seriously ignore these speculative hardware channels. They're so embarrassingly wrong most of the time it's painful to hear them talk about things they clearly don't understand much at all. GPU fanboyism is one of the most ridiculously dumb things people could be wasting their time with.
 
Last edited:

Ellery

Member
Amazing post by General Lee; people need to read it, especially the part about clickbait YouTube channels. They just think about their clicks, with baiting headlines, and spit out wrong information to get more clicks and money.
 

RoboFu

One of the green rats
Anyone expecting AMD to create a high-end card any time soon hasn't been paying attention. A 2080 Ti/3070 mid-range is right where they have been rumored to be all along.
 

Ellery

Member
Oh I didn't know that about FidelityFX, but it makes sense given how open AMD has been with their stuff ever since TressFX (?)
 

Kenpachii

Member
I would expect a doubled-up 5700 XT with RDNA2 upgrades to pass a 3070 with relative ease if any of the claims about RDNA2 are true. Considering what we've seen from the consoles, RDNA2 seems decent enough, but of course that doesn't necessarily translate into Big Navi being a success in itself. They could have all sorts of issues, from power consumption to drivers, but currently one of the biggest problems they seem to have is the lack of an answer to DLSS 2. We also don't really know how their ray-tracing solution performs vs. RTX. There's a ton of question marks around Big Navi, and I don't have a lot of confidence they're up to the task of catching up on all fronts. Even in the very best case they're only on par, in theory, with one or two cards, in mainstream games that don't have DLSS 2/RTX in them. That's a lot of caveats.

While I think Nvidia certainly made a much better offer this time than with the 20 series, from a tech standpoint this wasn't really a big surprise. They doubled up on a full node jump, which was to be expected. The reason they didn't push hard on pricing at the low end was likely two-fold: to pre-emptively counter Big Navi, which they likely estimated to slot between the 3070/80, and the fact that the 20 series didn't move that well and people won't actually pay any price they ask. What's curious about Nvidia's lineup is that the 3080 and 3090 use the same chip while having a massive price difference; usually the xx80 has been a 04 chip. I wonder how big of a chip GA102 actually is. It might end up being costly for Nvidia to have the 3080 be a cut GA102 if Big Navi ends up in that performance class with a leaner die. Of course, Nvidia has plenty of room to play with the Super/Ti cards they'll likely introduce after AMD shows its cards. A lot of this also depends on how TSMC 7nm vs. Samsung 8nm plays out. We'll probably never really know how much AMD or Nvidia pay for their chips from either fab, but at least density and power should be easy to compare.

On another note, people need to seriously ignore these speculative hardware channels. They're so embarrassingly wrong most of the time it's painful to hear them talk about things they clearly don't understand much at all. GPU fanboyism is one of the most ridiculously dumb things people could be wasting their time with.

There is no way in hell it's going to sit at 3070 performance; that's 2080 Ti output. That's straight up only a 30% increase over their 5700 XT. It will sit around the 3080, and that's probably what Nvidia expects to happen.

I expect their Big Navi to sit at the 3080 or a bit above it; Nvidia probably expects the same.

Also, it makes sense that they use the same chip, because it allows them to easily overclock the 3080 and add extra RAM for a Ti version to compete with AMD if they outperform the 3080, staying on top with minimal effort, since the 3070 probably couldn't reach that performance. Otherwise you would get more of a 980 vs. 970 situation, which wouldn't be interesting to them.
 
Last edited:

sinnergy

Member
There is no way in hell it's going to sit at 3070 performance; that's 2080 Ti output. That's straight up only a 30-40% increase over their 5700 XT.
I think it will be, and that's short of the 3080 and 3090, but that's expected. If they price it smart, they have a winner. Next round they could be in a better position.

But Nvidia had a really strong showing.
 

nochance

Banned
Holy wishful thinking, Batman!

We went from extremely optimistic "AMD will have a 2080ti competitor at a lower price" to "AMD will triple their performance" in a matter of days. The combined strength of AMD fanboyism and console fanboyism is creating some entertaining content.

AMD is already on the 7nm process, their last super-expensive card was competing with a 1080 Ti, they have not shown any additional hardware to handle ray tracing, and they have not shown additional hardware that would allow for technology akin to DLSS.
 

Kenpachii

Member
I think it will be, and that's short of the 3080 and 3090, but that's expected. If they price it smart, they have a winner. Next round they could be in a better position.

But Nvidia had a really strong showing.

Nvidia will just counter it with a price drop or a new series of budget cards. They already did this, this generation.

Holy wishful thinking, Batman!

We went from extremely optimistic "AMD will have a 2080ti competitor at a lower price" to "AMD will triple their performance" in a matter of days. The combined strength of AMD fanboyism and console fanboyism is creating some entertaining content.

AMD is already on the 7nm process, their last super-expensive card was competing with a 1080 Ti, they have not shown any additional hardware to handle ray tracing, and they have not shown additional hardware that would allow for technology akin to DLSS.


5700 XT die size: 251 mm²

2080 Ti die size: 754 mm²

And you think it's going to sit at only a 30% gain for their next series of actual high-end cards?

Zero chance.
 

llien

Member
I expect their Big Navi to sit at the 3080 or a bit above it; Nvidia probably expects the same.
The leaker who has been spot on on a number of occasions said "GA104 cannot compete with Big Navi", "not even the Ti version".
The 3080 is GA102, though.

Gains are not linear, apparently; it is also expected to wield only GDDR6, not GDDR6X, although TSMC 7nm should be superior.

But still, we don't even know if a 485 mm² or 505 mm² TSMC 7nm chip costs less than what NV pays Samsung for the 627 mm² "8nm" chip.
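For what it's worth, candidate dies per wafer can at least be ballparked with the usual approximation; wafer pricing is the real unknown, so this says nothing about actual cost:

```python
import math

# Standard gross-dies-per-wafer approximation for a 300 mm wafer
# (ignores yield and scribe lines; purely illustrative).
def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    usable = math.pi * (wafer_diameter_mm / 2) ** 2 / die_area_mm2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(usable - edge_loss)

for name, area in [("Big Navi (rumored 505 mm^2)", 505), ("GA102 (627 mm^2)", 627)]:
    print(f"{name}: ~{dies_per_wafer(area)} gross dies per wafer")
```

That's roughly 110 vs. 86 gross dies per wafer; whether TSMC's per-wafer price eats that advantage is exactly what we don't know.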

We went from extremely optimistic "AMD will have a 2080ti competitor at a lower price" to "AMD will triple their performance"
The 2080 Ti is about 30-45% faster than the 250 mm² RDNA1 5700 XT.
Your "triple performance" reference is bazinga.

they have not shown any additional hardware to handle ray tracing

akin to DLSS.
Dude, you do not need to invent reasons to stick with filthy green.
 
Last edited:
I typically root for the underdog and am a huge proponent of AMD APUs/CPUs. But I've been underwhelmed and bored by their GPU output for some time. The last AMD GPU I purchased was the RX 480, which despite having some tunable advantages over my 1060 just felt like a less efficient, less polished product. I think that's been an unfortunately true factor for quite a while. I always start out each gen wanting AMD to bring the big guns and motivate me to pick up their product, but there's always some reason why they just feel lackluster by comparison and I usually end up going with nVidia. The build quality of the Founder's Edition GPUs alone is something AMD hasn't matched. And their driver suite is always buggy by comparison.

And, for one of my specific PC use cases - a slim line low profile HTPC in my entertainment center - AMD never offers anything competitive. I've purchased Low Profile 750 Ti, 1050 Ti and (2) 1650s to put in my HTPC rigs simply because AMD doesn't ever offer anything that can compete with a LP form factor.

I want AMD to give me a reason to go team red with at least one of my rigs but I'm guessing I'll be picking up a 3070 or 3080 to upgrade my Ryzen/2060 system. [sigh]
 

nochance

Banned
Nvidia will just counter it with a price drop or a new series of budget cards. They already did this, this generation.




5700 XT die size: 251 mm²

2080 Ti die size: 754 mm²

And you think it's going to sit at only a 30% gain for their next series of actual high-end cards?

Zero chance.
And you think that, given the failure rates on their recent "high end" entries, they can just scale the die and call it a day?

Even if they had the 50% increase over the 5700 XT, how does additional acceleration for ray tracing and AI play into that?

This is the same type of wishful thinking that AMD fanboys fall into with every hardware cycle.
 

nochance

Banned
The 2080 Ti is about 30-45% faster than the 250 mm² RDNA1 5700 XT.
Your "triple performance" reference is bazinga.
We need to look at the solution as a whole and take ray tracing into account.


We've all seen that demo, and it hardly raises any hope. Also, it does not point to additional cores specifically used for that purpose; the argument was that AMD can just produce a bigger die and fill it with shader units to somehow compete with the RTX 3080.

Dude, you do not need to invent reasons to stick with filthy green.

Precisely, and you don't need to set yourself up for disappointment to enjoy games. It's not like AMD has been competitive on the GPU front in the last 4 years.


PS: apologies for the multipost - is it possible to delete posts?
 
Last edited:

onunnuno

Neo Member
Some people are really blind to everything. I don't know what's coming (I hope it's good), but honestly, you believe it will not compete with the 3070?? Are you that blind?

A chip twice as big as the 5700 XT, a better node, upgraded "cores", and you expect it to be way below a 50% improvement over the 5700 XT? What the hell are you thinking?

Do you really believe an upgraded architecture will do worse than the current one while being twice as big?
 

JohnnyFootball

GerAlt-Right. Ciriously.
Things look good for Nvidia, but AMD can still be competitive on price. People need to realize that the 3070 is still priced at $500. That isn't cheap. Prior to the 2070, the 70-series card was closer to $350, which is where the 60 series ended up for the 20 series.

If AMD can offer 2080 Ti level performance (i.e. 3070, but until we see benchmarks I won't believe it) for under $400, then they are still very much in the game.

The 5700 and 5700 XT were the best cards at their price point in terms of raw performance. The 5700 XT handily beat the $400 2060 Super and came very close to the 2070 Super. Not to mention AMD cards go on sale many times more often than Nvidia cards do. I've seen 5700 XTs for as low as $325. Nothing beats that deal. Despite being good cards, they were not needle movers. If they had been priced $100 less, they would have been game changers.

I've been an Nvidia owner for 10 years and I want AMD to really shake things up in the GPU market, the way they have in the CPU market. Unfortunately for AMD, Nvidia has not been as incompetent as Intel, and that battle is much tougher.
 
Last edited:

Kenpachii

Member
And you think that, given the failure rates on their recent "high end" entries, they can just scale the die and call it a day?

Even if they had the 50% increase over the 5700 XT, how does additional acceleration for ray tracing and AI play into that?

This is the same type of wishful thinking that AMD fanboys fall into with every hardware cycle.

Failure rates mean nothing. They could have had bad memory modules from a supplier's shipments that made most of their cards duds (the 290 cards had this), which can be solved with minor adjustments, or your architecture could be a complete dud.

If it leaned more toward the latter, we would not be seeing cards on the market now at the prices they are asking, or even available at all. So it really doesn't seem to be much of an issue.

About ray tracing: you increase the SoC by adding ray tracing blocks into it, exactly what Nvidia does. That's how. The Xbox is proof of this.

There is no wishful thinking; it's basically how the world works.

I estimate that, ray tracing wise, the RDNA2 flagship will sit around 2080 Ti performance, with raster performance around the 3080 or upwards if they push it as a halo card, or a tiny bit lower if they skimp and just want something more affordable.
 
Last edited:
TPU (with orgasmic title) said so:

Throughout the video, NVIDIA compared the RTX 3080 to the previous-gen flagship, the RTX 2080 Ti, with 20-30% performance gains shown for Ampere.
TPU
The guy who wrote the article can't count. The performance gains are between 40-50%. There isn't a single frame with a 20% advantage for the 3080.

I like Coreteks for his speculative stuff as much as the next guy here, but I wouldn't trust him on this one.

The most reliable leaker we had for Ampere had this to say about the 3070 vs RDNA2:

(embedded post from the leaker)
Anybody believing Big Navi can't beat a 3070 is an idiot. The 2080 Ti is only about 25% faster than the Series X's GPU with 52 CUs at 1825 MHz. Big Navi will destroy the 2080 Ti/3070 but might not be up to par with the 3080. Seriously, people expect AMD to only match what NVIDIA did over two years ago? Come on, man.
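For reference, here's the arithmetic behind that Series X GPU figure, using the standard RDNA compute formula (64 shaders per CU, 2 FLOPs per shader per clock); a quick Python sketch:

```python
# Series X GPU: 52 active CUs at 1825 MHz.
cus = 52
shaders_per_cu = 64       # RDNA stream processors per CU
flops_per_clock = 2       # a fused multiply-add counts as 2 FLOPs
clock_ghz = 1.825

tflops = cus * shaders_per_cu * flops_per_clock * clock_ghz / 1000
print(f"~{tflops:.1f} TFLOPS")  # ~12.1 TFLOPS
```

Scale that to a rumored 80-CU part at similar or higher clocks and it's easy to see why people expect it to clear the 2080 Ti comfortably.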
 
Last edited:
Jesus Christ, why do you keep lying?

(TPU relative performance chart, 3840×2160)
49% according to TPU's actual benchmarks and not just an idiot writing a false headline that is easily disprovable by watching 5 seconds of the video.
I don't understand why llien vouches for AMD so hard. Maybe he works for AMD? I just don't get how someone could white knight for them after poor performance in consoles and desktops, several years in a row. Even VFXVeteran shut his claims down, hella hard, the other day. But here we are again....
 
Jesus Christ, why do you keep lying?

(TPU relative performance chart, 3840×2160)
49% according to TPU's actual benchmarks and not just an idiot writing a false headline that is easily disprovable by watching 5 seconds of the video.
At 4K, mind.
45% at 1440p
35% at 1080p

Overall about 40% faster.

Also bearing in mind that TechPowerUp is one of many benchmarking sites.

That doesn't seem too unreasonable an ask for Radeon to beat with literally double the CUs in their largest chip, does it?

Doubling the 5700 XT's performance would put a GPU at 33% faster than a 2080 Ti at worst.
AMD don't need to triple their performance at all.
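Putting rough numbers on that (using TPU's ~50% 4K lead for the 2080 Ti as the baseline; the doubled 5700 XT is a hypothetical, not a leak):

```python
# Normalized 4K performance, 5700 XT = 1.0.
perf_5700xt = 1.00
perf_2080ti = 1.50                   # ~50% faster at 4K per TPU
perf_big_navi = 2.0 * perf_5700xt    # hypothetical doubled 5700 XT

lead = (perf_big_navi / perf_2080ti - 1) * 100
print(f"A doubled 5700 XT would be ~{lead:.0f}% faster than a 2080 Ti")  # ~33%
```

Even against the least favorable (4K) baseline, the doubled part clears the 2080 Ti by a third.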
 
At 4K, mind.
45% at 1440p
35% at 1080p

Overall about 40% faster.

Also bearing in mind that TechPowerUp is one of many benchmarking sites.

That doesn't seem too unreasonable an ask for Radeon to beat with literally double the CUs in their largest chip, does it?

Doubling the 5700 XT's performance would put a GPU at 33% faster than a 2080 Ti at worst.
AMD don't need to triple their performance at all.
Because there are CPU bottlenecks at 1080p, and nobody cares about the 1080p performance of a 2080 Ti.

I also agree Big Navi will crush the 3070. Just pointing out that llien is being dishonest.
 

llien

Member
Jesus Christ, why do you keep lying?

(TPU relative performance chart, 3840×2160)
49% according to TPU's actual benchmarks and not just an idiot writing a false headline that is easily disprovable by watching 5 seconds of the video.
btarrung is the guy doing the benchmarks, dumbo. (Have you noticed that the older card runs on very old driver versions? Why would that be, hmm...)
You've cherry-picked 4K results, where the 5700 XT is severely bandwidth starved, something that won't be the case with Big Navi.
 
Last edited:
btarrung is the guy doing the benchmarks, dumbo. (Have you noticed that the older card runs on very old driver versions? Why would that be, hmm...)
You've cherry-picked 4K results, where the 5700 XT is severely bandwidth starved, something that won't be the case with Big Navi.
The idiocy.

1080p doesn't take advantage of what the 2080 Ti can do. In a 4K scenario, it can stretch its legs to the fullest. Why pick 1080p, where the 2080 Ti doesn't care and is held back? With its full power unleashed, the 5700 XT gets murdered by 45%+.
 

llien

Member
A 250 mm², 10-billion-transistor chip with laughable memory bandwidth getting "murdered" by a 754 mm², 18.6-billion-transistor chip is a notable event.

1080p doesn’t take advantage of what the 2080 Ti can do.
CPU benchmarks show barely any difference even at 1080p with a 2080 Ti; certainly nothing that explains the 15% difference.
4K doesn't take advantage of what the 5700 XT can do.
 
Last edited:
A 250 mm², 10-billion-transistor chip with laughable memory bandwidth getting "murdered" by a 754 mm², 18.6-billion-transistor chip is a notable event.
It isn't, but why do you have to outright lie about the numbers? No one gives a damn about the 2080 Ti at 1080p, and within the context we are discussing, saying "35-45%" is straight-up dishonest. The most common resolution used for the 2080 Ti has it beating the 5700 XT by 49%.
 

nochance

Banned
Failure rates mean nothing. They could have had bad memory modules from a supplier's shipments that made most of their cards duds (the 290 cards had this), which can be solved with minor adjustments, or your architecture could be a complete dud.

If it leaned more toward the latter, we would not be seeing cards on the market now at the prices they are asking, or even available at all. So it really doesn't seem to be much of an issue.

About ray tracing: you increase the SoC by adding ray tracing blocks into it, exactly what Nvidia does. That's how. The Xbox is proof of this.

There is no wishful thinking; it's basically how the world works.

I estimate that, ray tracing wise, the RDNA2 flagship will sit around 2080 Ti performance, with raster performance around the 3080 or upwards if they push it as a halo card, or a tiny bit lower if they skimp and just want something more affordable.
A bigger die means a higher failure rate.

And if they add ray tracing cores to the silicon, then they will not have room to magically double the number of streaming processors, which logically means they will not be able to increase rasterisation performance in line with the wishful thinking here. And that still doesn't take into account separate cores dedicated to ML, which reportedly they will not have.

Let's not forget that in software-based ray tracing (the Neon Noir tech demo from Crytek), the 5700 XT fares substantially worse than the 1080 Ti, and the 2080 Ti is around twice as fast.

So tell me again how AMD will magically multiply their performance on every front by using a bigger die (which will still be considerably smaller than the one Nvidia is using).
 

MiguelItUp

Gold Member
Yeah, it's cool to have options. But with everything NVIDIA has coming with the 3000 series, I really don't know why you'd bother with anything else. Because it's not just the hardware, but all the software as well. There's just quite a bit of cool stuff there.
 

FireFly

Member
A bigger die means a higher failure rate.

And if they add ray tracing cores to the silicon, then they will not have room to magically double the number of streaming processors, which logically means they will not be able to increase rasterisation performance in line with the wishful thinking here. And that still doesn't take into account separate cores dedicated to ML, which reportedly they will not have.

Let's not forget that in software-based ray tracing (the Neon Noir tech demo from Crytek), the 5700 XT fares substantially worse than the 1080 Ti, and the 2080 Ti is around twice as fast.

So tell me again how AMD will magically multiply their performance on every front by using a bigger die (which will still be considerably smaller than the one Nvidia is using).
Based on the XSX chip, which has hardware BVH acceleration, we know that 56 CUs take up around 300 mm² of space. Going from 56 to 80 is an increase of 43%, so crudely scaling everything by 1.43 gives 429 mm², which is already below the estimated 505 mm². Based on the Minecraft demo, it seems the XSX GPU is about on par with a 2070 for ray tracing. Add another 54% (52 active vs. 80 CUs) and you are at 2080 Ti levels, not accounting for clock speed improvements. So it makes sense to think that Navi 21 could be between the 3070 and 3080 for ray tracing, especially in "hybrid" titles that rely on conventional rasterisation.

Edit: Just to add, I think you are confusing silicon failures, which result in chips that have to be cut down or thrown away, with product failures caused by faulty components. A chip of 500 mm² would be perfectly in line with past AMD chips. And if 500 mm² is big, then what is the ~627 mm² die Nvidia is using for the 3080?
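A quick sketch of that scaling, with all inputs taken from the estimates above (so rough by construction):

```python
# Crude area and RT-throughput scaling from the XSX estimates above.
xsx_cus = 56             # physical CUs on the XSX SoC's GPU
xsx_active_cus = 52      # CUs enabled in the shipping console
xsx_gpu_area_mm2 = 300   # estimated area of the 56-CU GPU portion
navi21_cus = 80          # rumored Navi 21 CU count

area_scale = navi21_cus / xsx_cus               # ~1.43x
est_area = xsx_gpu_area_mm2 * area_scale        # ~429 mm^2

rt_uplift = navi21_cus / xsx_active_cus - 1     # ~54% more RT throughput
print(f"Estimated GPU area: ~{est_area:.0f} mm^2")
print(f"RT uplift over XSX: ~{rt_uplift:.0%} (before any clock speed gains)")
```

None of this accounts for non-CU logic that doesn't scale linearly, so treat it as a rough bound rather than a prediction.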
 
Last edited: