
Navi21 XT/6800XT(?) allegedly scores more than 10,000 in Fire Strike Ultra

regawdless

Banned

"I have a unique perspective on the recent nVidia launch, as someone who spends 40 hours a week immersed in Artificial Intelligence research with accelerated computing, and what I’m seeing is that it hasn’t yet dawned on technology reporters just how much the situation is fundamentally changing with the introduction of the GeForce 30 series cards (and their AMD counterparts debuting in next-gen consoles).

Reviewers are taking a business-as-usual approach with their benchmarks and analyses, but the truth is, there is nothing they can currently include in their test suites which will demonstrate the power of the RTX 3080 relative to previous-gen GPU’s.

nVidia has been accused of over-stating the performance improvements, by cherry picking results with RTX or DLSS turned on, but these metrics are the most representative of what’s going to happen with next-gen games. In fact, I would say nVidia is understating the potential performance delta.

Don’t get me wrong, most of the benchmarks being reported are valid data, but these cards were not designed with the current generation of game engines foremost in mind. nVidia understands what is coming with next-gen game engines, and they’ve taken a very forward-thinking approach with the Ampere architecture. If I saw a competing card released tomorrow which heavily outperformed the GeForce 3080 in current-gen games, it would actually set my alarm bells ringing, because it would mean that the competing GPU’s silicon has been over-allocated to serving yesterday’s needs. I have a feeling that if nVidia wasn’t so concerned with prizing 1080 Ti’s out of our cold, dead hands, then they would have bothered even less with competing head-to-head with older cards in rasterization performance. "

Interesting read. Curious what games will use "next gen engines". Does Cyberpunk already count?
 

Rikkori

Member
Interesting read. Curious what games will use "next gen engines". Does Cyberpunk already count?

It's called bullshit. RT performance is still gens away from parity with present game frameworks. Hell, what's more next-gen than UE5? That's still going all-in on rasterised techniques (Nanite + Lumen) with ray tracing as an optional luxury alternative (mostly for super high-end PCs).
 

Papacheeks

Banned
Curious why AMD is so incredibly fast on Fire Strike, even beating the 3090, while being a little bit behind the 3080 on TimeSpy.

Clocks, and efficient use of memory bandwidth. Fire Strike likes cores, IPC, and high clocks, which these chips have. They have 70-80 CUs, all clocked higher when needed than any Nvidia card.

On top of that, Infinity Cache gives them higher effective bandwidth on average. Add in 10-15% IPC gains and that explains the score, which is based almost entirely on pure rasterization.
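To put rough numbers on the bandwidth point, here's a minimal back-of-envelope sketch; the hit rate and bandwidth figures are assumed placeholders, not confirmed Navi 21 specs:

```python
# Effective bandwidth with a large on-die cache, very roughly.
# All numbers are placeholder assumptions, not confirmed Navi 21 specs.

gddr6_bandwidth_gbs = 512.0    # assumed: 256-bit bus at 16 Gbps GDDR6
cache_bandwidth_gbs = 1600.0   # assumed: on-die Infinity Cache bandwidth
cache_hit_rate = 0.6           # assumed: share of memory traffic served by the cache

effective_bandwidth = (cache_hit_rate * cache_bandwidth_gbs
                       + (1.0 - cache_hit_rate) * gddr6_bandwidth_gbs)

print(f"~{effective_bandwidth:.0f} GB/s effective vs {gddr6_bandwidth_gbs:.0f} GB/s raw GDDR6")
# With these assumptions: ~1165 GB/s effective vs 512 GB/s raw.
```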
 

thelastword

Banned
 
It's called bullshit. RT performance is still gens away from parity with present game frameworks. Hell, what's more next-gen than UE5? That's still going all-in on rasterised techniques (Nanite + Lumen) with ray tracing as an optional luxury alternative (mostly for super high-end PCs).


We shall soon see, my young padawan. Watch Dogs 3 is 5 days away. Cyberpunk 4 weeks away. Call of Duty 3 weeks away. Godfall the same. DLSS allows the imposing 3080 to render the full suite of ray tracing effects at playable, high framerates while keeping crystal clear image quality. AMD's cards seem to be very fast in... Witcher 3? And other games of its ilk.
 

MadYarpen

Member

"I have a unique perspective on the recent nVidia launch, as someone who spends 40 hours a week immersed in Artificial Intelligence research with accelerated computing, and what I’m seeing is that it hasn’t yet dawned on technology reporters just how much the situation is fundamentally changing with the introduction of the GeForce 30 series cards (and their AMD counterparts debuting in next-gen consoles).

Reviewers are taking a business-as-usual approach with their benchmarks and analyses, but the truth is, there is nothing they can currently include in their test suites which will demonstrate the power of the RTX 3080 relative to previous-gen GPU’s.

nVidia has been accused of over-stating the performance improvements, by cherry picking results with RTX or DLSS turned on, but these metrics are the most representative of what’s going to happen with next-gen games. In fact, I would say nVidia is understating the potential performance delta.

Don’t get me wrong, most of the benchmarks being reported are valid data, but these cards were not designed with the current generation of game engines foremost in mind. nVidia understands what is coming with next-gen game engines, and they’ve taken a very forward-thinking approach with the Ampere architecture. If I saw a competing card released tomorrow which heavily outperformed the GeForce 3080 in current-gen games, it would actually set my alarm bells ringing, because it would mean that the competing GPU’s silicon has been over-allocated to serving yesterday’s needs. I have a feeling that if nVidia wasn’t so concerned with prizing 1080 Ti’s out of our cold, dead hands, then they would have bothered even less with competing head-to-head with older cards in rasterization performance. "
Aren't XSX and PS5 GPUs based on the same architecture? Which, when you think about the next gen, could mean that the AMD card is what is designed with next gen in mind?
 
With just 10GB vram for their 4k card it is going to be having issues before RDNA2 cards do.

Its not "just". 10 gigs is a lot of vram. Series X has the same amount reserved for games. And PS5 has bandwidth thats twice as slow than the 3080. We'll have to wait and see when bottlenecks will occur. They will at some point, but will it be many years into the future or faster ? Thankfully, dlss takes care of that as well. Lowers the amount of vram used
 

Ascend

Member
"I have a unique perspective on the recent nVidia launch, as someone who spends 40 hours a week immersed in Artificial Intelligence research with accelerated computing, and what I’m seeing is that it hasn’t yet dawned on technology reporters just how much the situation is fundamentally changing with the introduction of the GeForce 30 series cards (and their AMD counterparts debuting in next-gen consoles).

Reviewers are taking a business-as-usual approach with their benchmarks and analyses, but the truth is, there is nothing they can currently include in their test suites which will demonstrate the power of the RTX 3080 relative to previous-gen GPU’s.

nVidia has been accused of over-stating the performance improvements, by cherry picking results with RTX or DLSS turned on, but these metrics are the most representative of what’s going to happen with next-gen games. In fact, I would say nVidia is understating the potential performance delta.

Don’t get me wrong, most of the benchmarks being reported are valid data, but these cards were not designed with the current generation of game engines foremost in mind. nVidia understands what is coming with next-gen game engines, and they’ve taken a very forward-thinking approach with the Ampere architecture. If I saw a competing card released tomorrow which heavily outperformed the GeForce 3080 in current-gen games, it would actually set my alarm bells ringing, because it would mean that the competing GPU’s silicon has been over-allocated to serving yesterday’s needs. I have a feeling that if nVidia wasn’t so concerned with prizing 1080 Ti’s out of our cold, dead hands, then they would have bothered even less with competing head-to-head with older cards in rasterization performance. "
That's exactly what GCN was. And so was Bulldozer. Look how that turned out.

AMD has a worse raytracing implementation that takes away from rasterisation performance.
Does that matter though? If RT on an RTX card makes the game run at 50 fps instead of 100, that basically means about half the rasterization performance is going unused and all those SMs are sitting idle, since RT is the bottleneck.

On AMD's implementation, RT and rasterization can be balanced and adapted as games evolve. It's basically the same situation as back in the day, when the 'standard' was separate pixel shaders and vertex shaders. AMD implemented unified shaders and nVidia followed, because one type would always be taxed too much while the other stood idle.
AMD's implementation in RDNA 2 is closer to a 'unified' RT & rasterization approach, while nVidia keeps them separate, with dedicated cores for rasterization, RT and AI.
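As a toy illustration of that idle-hardware argument (every number below is invented purely for illustration, not measured from any real GPU):

```python
# If RT and raster can't share hardware, the slower stage sets the frame time
# and the raster units sit idle for the rest of it. Numbers are made up.

raster_ms = 10.0   # assumed time the shader cores need for raster work
rt_ms = 20.0       # assumed time the RT path needs for the same frame

frame_ms = max(raster_ms, rt_ms)
idle_fraction = 1.0 - raster_ms / frame_ms

print(f"{1000 / frame_ms:.0f} fps, raster units idle ~{idle_fraction:.0%} of the frame")
# -> 50 fps, raster units idle ~50% of the frame
```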
 
Last edited:

Rikkori

Member
We shall soon see, my young padawan. Watch Dogs 3 is 5 days away. Cyberpunk 4 weeks away. Call of Duty 3 weeks away. Godfall the same. DLSS allows the imposant 3080 to render the full suite of raytracing effects with playable, high framerates while keeping crystal clear image quality. AMD's cards seem to be very fast in ... Witcher 3 ? And other games of its ilk.

The funny thing is you just proved my point and don't even get it.
 
The funny thing is you just proved my point and don't even get it.


And what point would that be, sir? That AMD made cards for 7 years ago while today's and tomorrow's games use Nvidia's approach? Both consoles and both GPU vendors are all in on ray tracing and you think rasterisation will still be king? Who cares what approach UE5 takes. People here are obsessing over that nothing tech demo like it's the future. Every major publisher uses its own engine. UE has fallen by the wayside for a long time and that'll be the case going forward. We're not in the UE3 days anymore.
 

CrustyBritches

Gold Member
For those unfamiliar with 3DMark benchmarking, Time Spy is a much better indicator of general game performance than Fire Strike. Taking a peek at Guru3D's 3080 review with Time Spy GPU results and cross-referencing it with TPU's GPU hierarchy charts pretty much confirms this. Whereas Fire Strike Ultra has Vega 64 above a card like the 2060 Super.

We'll have to wait for the 28th for more info. I'm sure AMD will have a slide with a 10-game, or so, benchmark lineup comparing it to Navi 10. For some reason I missed that chart (not from Igor) showing the Navi 21 80CU ES numbers. Those are still good, and maybe with luck they'll straddle the 3080 in performance: XTX slightly above the 3080 and XT slightly below.
 

duhmetree

Member
Interesting read. Curious what games will use "next gen engines". Does Cyberpunk already count?
Halo Infinite was built on a next-gen engine. :messenger_squinting_tongue:

There's no doubt in my mind we will see something close to a UE5 demo in 2-3 years. Hoping for Uncharted 5 in 2022 as a PS5 exclusive.
 
So just finished watching a new video from RedGamingTech, it seems he has been shown performance benchmarks for the 6800XT for 10 games.

AMD will apparently show 10 games on stage in both 4K and 1440p.

At native 4K (no RT, no DLSS), 6800XT allegedly beats out the 3080 in 5 of the games, draws roughly in 2 games and loses to the 3080 in 3 games.

At 1440p (no RT, no DLSS), the 6800XT beats the 3080 in 8 of the titles and loses in 2.

Ray tracing matches roughly what we know from the synthetic benchmarks: a little better than the 2080 Ti in RT but still behind the 3080, so a clear RT win for Nvidia.

Regarding a DLSS competitor, apparently AMD have one but it may not be actually available to users until a software update in December. Supposedly he has heard whispers that it is not quite as good quality as DLSS but is supposedly much faster (whatever that means.)
 

Kenpachii

Member
No way will it be $500.

I think the 6800 might launch around that price but the 6900XT will be at least $700.

Actually, when I think about it, they could ask whatever they want for those cards if they ship with 16GB of memory. And especially if Nvidia isn't going to make versions with bigger VRAM modules. Nothing Nvidia has will compete with those cards.
 

Marlenus

Member
Actually, when I think about it, they could ask whatever they want for those cards if they ship with 16GB of memory. And especially if Nvidia isn't going to make versions with bigger VRAM modules. Nothing Nvidia has will compete with those cards.

12GB 3090 for $899?
 

McHuj

Member
Regarding a DLSS competitor, apparently AMD have one but it may not be actually available to users until a software update in December. Supposedly he has heard whispers that it is not quite as good quality as DLSS but is supposedly much faster (whatever that means.)

The thing with DLSS is that it's a neural network running on the tensor cores. The good thing with that is that it's a software solution in a very hot and active research area. NVIDIA aren't the only ones working on AI upscaling; many companies are, so I think the networks will get better and further refined. In theory, you could probably run the DLSS network on any GPU or CPU. The big question in the short term is how good AMD's HW acceleration for AI inferencing will be in RDNA2. Can it render and run an AI inference within one frame time?
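For a rough sense of what "within one frame time" means, here's a back-of-envelope sketch; the network cost, throughput, and efficiency figures are pure assumptions:

```python
# Can an upscaling network run inside one frame? Every figure here is an
# assumption for illustration, not a real spec for DLSS or RDNA2.

target_fps = 60
frame_budget_ms = 1000 / target_fps              # ~16.7 ms per frame

network_gops_per_frame = 100    # assumed network cost, in giga-operations per frame
inference_tops = 50             # assumed INT8 throughput of the accelerator, in TOPS
efficiency = 0.5                # assumed fraction of peak throughput actually achieved

inference_ms = network_gops_per_frame / (inference_tops * 1000 * efficiency) * 1000

print(f"frame budget {frame_budget_ms:.1f} ms, estimated inference {inference_ms:.1f} ms "
      f"({inference_ms / frame_budget_ms:.0%} of the frame)")
# With these assumptions the network costs ~4 ms, about a quarter of a 60 fps frame.
```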
 

Mhmmm 2077

Member
What I want:
-good performance (as good as or better than 3070)
-good price (not more than 3080, depends on the performance though)
-some cool surprise tech (like freesync is better than gsync and free - AMD could make something cool again)
-good drivers (so it's not 5700XT launch situation again, hopefully they got it under control again)
-please be released before Cyberpunk 2077 comes out (even though CP77 RT for AMD won't be at launch - that's fine)
 

Kenpachii

Member
384 bit bus though, 16GB would be like series X and have 12 GB fast, 4GB slow.

Sucks for Nvidia then, they got fucked by their greed; can't feel sorry for them, they should have designed those cards around 16GB. They can dumpster that 3090 for all I care, because when a 3080 20GB model arrives that whole 3090 is going to be worth jack shit. Especially when AMD drops $600 6800 XTs with 16GB of memory. And if they don't release a 20GB model, good luck with that.
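For reference, the fast/slow split mentioned in the quote falls straight out of the chip arithmetic; the sketch below assumes 32-bit GDDR6 chips in 1GB and 2GB capacities, a purely hypothetical layout:

```python
# How a 384-bit bus with 16GB splits into fast and slow pools, Series X style.
# Assumes 32-bit GDDR6 chips in 1GB and 2GB capacities (hypothetical layout).

bus_width_bits = 384
bits_per_chip = 32
chips = bus_width_bits // bits_per_chip     # 12 chips on the bus

chip_sizes_gb = [2, 2, 2, 2] + [1] * 8      # 4x 2GB + 8x 1GB = 16GB total

# The first GB of every chip can be striped across the full bus -> "fast" pool.
fast_pool_gb = chips * min(chip_sizes_gb)
# The remainder lives only on the larger chips -> narrower, "slow" pool.
slow_pool_gb = sum(chip_sizes_gb) - fast_pool_gb
slow_pool_bits = sum(1 for s in chip_sizes_gb if s > min(chip_sizes_gb)) * bits_per_chip

print(f"{fast_pool_gb}GB fast @ {bus_width_bits}-bit, {slow_pool_gb}GB slow @ {slow_pool_bits}-bit")
# -> 12GB fast @ 384-bit, 4GB slow @ 128-bit
```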
 
Last edited:
If these can get to 3080 levels of performance I'll be SUPER impressed. Like, really really impressed. I never expected those NAVI cards to perform that well. Can't wait for the reveal and eventual launch to see what they're like.
 

Ascend

Member
What I want:
-good performance (as good as or better than 3070) Check
-good price (not more than 3080, depends on the performance though) Highly likely to be slightly cheaper
-some cool surprise tech (like freesync is better than gsync and free - AMD could make something cool again) Don't count on it
-good drivers (so it's not 5700XT launch situation again, hopefully they got it under control again) Likely to be fine.
-please be released before Cyberpunk 2077 comes out (even though CP77 RT for AMD won't be at launch - that's fine) Even AMD doesn't know yet. They're deciding if they'll delay for increased availability at launch
See replies in bold.
 

llien

Member
The thing with DLSS is that it's a neural network running on the tensor cores.
Who has ever said NN inference runs on tensor cores? Why on earth would it?

-good price (not more than 3080, depends on the performance though)
3080 is not a product you can actually buy. It would be silly for AMD to price anything against it.
 
Last edited:

Kenpachii

Member
What AMD needs to do to fuck with Nvidia is simple: get extra ultra options into games that push VRAM usage to 16GB. It will trigger elitists to no end that they can't max stuff out.

Also, AMD needs to showcase that they are in communication with the Cyberpunk team and other high-profile games to get the best out of them, with AMD tech implemented and supported in those games. They need to up their effort.
 

notseqi

Member
Yep, but let's be honest, Ampere only had a small ray tracing bump over Turing, so that is hardly a big loss if RDNA2 matches Turing.
Something that drops you to <60 fps is not something I will activate. When I saw those benchmarks I disregarded RT for this gfx-card generation unless something significant happens with developers.
I am fine with 10% less performance than the 3080 should that come about, if the price is right and because of having an AMD boner.
 

notseqi

Member
lol I love the confidence that Radeon fanboys exude every single time they're about to launch a product. It's always so funny when the reality sets in 😂
What reality? That we're getting an okay product at very good prices?
You sound stoopid.
 
What reality? That we're getting an okay product at very good prices?
You sound stoopid.

'Very good' being $50 less (*at best, according to Red Gaming Tech)?

That's quite a pill to swallow if you care about the upcoming AAA releases that will utilize both DLSS and Ray Tracing to a great degree. Given that it's an AMD pick on the 4k titles, I'd say 5 wins for team red with 2 ties and 3 losses is probably optimistic. I play on 43 inch 4k120hz monitor, so 4k, DLSS and to a lesser degree, ray tracing performance are all very important to me.

As a 1440p card, it might have a compelling argument to make. Wait for reviews. A lot of people on here might have a serious decision to make.

But for my use case, there's no way I choose what is likely going to be a wash in 4k rasterization performance for MAYBE $50 when I can have a tech that preserves image quality to almost a native 4k image while granting 30% performance gains.
 

McHuj

Member
Who has ever said NN inference runs on tensor cores? Why on earth would it?

Because that is what they're meant for.


Powered by dedicated AI processors on GeForce RTX GPUs called Tensor Cores, DLSS 2.0 is a new and improved deep learning neural network that boosts frame rates while generating beautiful, crisp game images. It gives gamers the performance headroom to maximize ray tracing settings and increase output resolutions.

Tensor cores performing int8 inference on pixel data are a lot more efficient than wasting FP32/FP16 processing on the shader cores. Floating point is mainly used in the training/learning portion.
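As a minimal sketch of what int8 inference on pixel-like data involves, here's symmetric int8 quantisation in a few lines; the values and scheme are illustrative only, not NVIDIA's or AMD's actual implementation:

```python
import numpy as np

# Symmetric int8 quantisation: map float activations to int8 with one scale
# factor, do the heavy math in int8, convert back. Illustrative only.

def quantize_int8(x: np.ndarray):
    scale = np.max(np.abs(x)) / 127.0        # one scale for the whole tensor
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

activations = np.random.uniform(-1.0, 1.0, size=(4, 4)).astype(np.float32)
q, scale = quantize_int8(activations)
error = np.abs(dequantize(q, scale) - activations).max()
print(f"max round-trip error: {error:.4f}")   # tiny relative to the [-1, 1] range
```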
 
Last edited:

McHuj

Member
Yep, but let's be honest, Ampere only had a small ray tracing bump over Turing, so that is hardly a big loss if RDNA2 matches Turing.

I kind of agree. I had hopes that Ampere would really increase RT performance over Turing by a large factor, like 4x. I think that's the kind of gain you need for RT to become a commonplace method. Put it this way: I think a ~$200 GPU needs to be able to deliver a playable RT experience before developers build an engine/game with RT support as a requirement. We're not there yet.

Although RDNA2 matching Turing levels is really good. I expected them to still be behind. I think that's great news for consoles. We may see some good targeted use of RT with that kind of performance level.
 
Saying AMD cards won't do ray tracing is kind of weird, since the consoles are ray tracing capable and they are using APUs with a heavily scaled-back version of this tech. Arguably, since PC and consoles share much of their AAA library, I wouldn't be surprised if a lot of games were optimized for AMD's solution even though Nvidia's may be inherently more capable.
 

BluRayHiDef

Banned
I kind of agree. I had hopes that Ampere would really increase RT performance over Turing by a large factor, like 4x. I think that's the kind of gain you need for RT to become a commonplace method. Put it this way: I think a ~$200 GPU needs to be able to deliver a playable RT experience before developers build an engine/game with RT support as a requirement. We're not there yet.

Although RDNA2 matching Turing levels is really good. I expected them to still be behind. I think that's great news for consoles. We may see some good targeted use of RT with that kind of performance level.

Based on my experience playing Control, I think that performance with ray tracing enabled is actually quite good thanks to DLSS 2.0. With an R9 3950X and an RTX 3090, I can play Control at 4K with all settings maxed out - including ray tracing - via DLSS Quality mode (1440p) and experience anywhere from 60 to 80 frames per second. Keep in mind that DLSS Quality mode is indistinguishable from native 4K. I don't know how a more affordable card, such as the RTX 3080, performs under these settings with the 3950X because I haven't bothered connecting it ever since I got the 3950X; I'll try it out sometime this week.
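The pixel arithmetic behind that headroom is simple; DLSS Quality mode renders a 4K output internally at 1440p, as noted above:

```python
# Pixel counts: DLSS Quality renders internally at 1440p for a 4K output.
native_4k = 3840 * 2160              # 8,294,400 pixels
dlss_quality_internal = 2560 * 1440  # 3,686,400 pixels

ratio = dlss_quality_internal / native_4k
print(f"DLSS Quality shades only ~{ratio:.0%} of the pixels of native 4K")
# -> ~44%; the network reconstructs the rest of the detail.
```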
 

notseqi

Member
'Very good' being $50 less (*at best, according to Red Gaming Tech)?

That's quite a pill to swallow if you care about the upcoming AAA releases that will utilize both DLSS and Ray Tracing to a great degree. Given that it's an AMD pick on the 4k titles, I'd say 5 wins for team red with 2 ties and 3 losses is probably optimistic. I play on 43 inch 4k120hz monitor, so 4k, DLSS and to a lesser degree, ray tracing performance are all very important to me.

As a 1440p card, it might have a compelling argument to make. Wait for reviews. A lot of people on here might have a serious decision to make.

But for my use case, there's no way I choose what is likely going to be a wash in 4k rasterization performance for MAYBE $50 when I can have a tech that preserves image quality to almost a native 4k image while granting 30% performance gains.
Your use case. I prefer frames to a 4K+ experience, and every choice in that regard is fine.
I don't like to be boned by industry leaders. If you ask me for a price for my best work you will get that price in only slight increments for the next 10 years, unless something significant happens. You're welcome to renegotiate if you aren't happy.
Check back on what Nvidia did when they finally had the upper hand.

Now, you buy a 3090. Performance with RT isn't what you expected, and as it turns out, not what you wanted. Spent a fuckton of money, nice. Not faulting you for it.

Should AMD turn out to be a shit company when 'taking over' I will not support them, but for now, I will support competition.
 
Your use case. I prefer frames to a 4K+ experience, and every choice in that regard is fine.
I don't like to be boned by industry leaders. If you ask me for a price for my best work you will get that price in only slight increments for the next 10 years, unless something significant happens. You're welcome to renegotiate if you aren't happy.
Check back on what Nvidia did when they finally had the upper hand.

Now, you buy a 3090. Performance with RT isn't what you expected, and as it turns out, not what you wanted. Spent a fuckton of money, nice. Not faulting you for it.

Should AMD turn out to be a shit company when 'taking over' I will not support them, but for now, I will support competition.

Of course I support competition, and I would never buy a 3090, as I have not, because all I do is game. It's not a price-competitive product for that use case. And with just a few settings turned down, the 3080 gets enough frames in 4k (120+ in 4k in a lot of last gen games that I still play almost daily like Destiny 2, COD, etc...) that I'm still more than competitive. I like frames too. That's the promise of these high end next gen cards.

I'm just warning everyone who's worshipping at the altar of Lisa Su that they don't care (much) more about you than Nvidia does. They're still a company working for their stockholders just like any other tech giant. Right now, they're making a very cold, calculated decision between profit and regaining marketshare. Don't think for a second that if they think they can still make strides on the second, they won't capitalize as much as possible on the first. Just like their CPU offerings, when they know they have something, they will INCREASE prices, not decrease. After this latest leak by Red Gaming Tech and the info that AMD's ATTITUDE toward pricing changed once they realized they would be more competitive than their initial projections, any hope of them seriously undercutting Nvidia IMO went out the window.
 
Last edited: