
Navi21 XT/6800XT(?) allegedly scores more than 10,000 in Fire Strike Ultra

rofif

Can’t Git Gud
Fire Strike Ultra is irrelevant. Time Spy Extreme and Port Royal matter more.
 
Last edited:

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not


Manage stock.
Price it at or around RTX 3070.
And it's gonna be a really, really bad day for Nvidia.
They will almost certainly need a big price drop and/or full refresh.

Am I a bad person for really wanting AMD to do really well this cycle just so Nvidia drop their prices because I need dem CUDA cores?

Fucking lol at the drag time argument
I don't know about any GPUs, just here for the cars.

Had I known jesting like that was going to cause what it did, I wouldn't have even bothered.

I drive a Stinger!! :)

Should have gotten a Mustang GT.
It has Drag Mode so you can get down the 1/4 mile in who gives a shit seconds.
 
Am I a bad person for really wanting AMD to do really well this cycle just so Nvidia drop their prices

Yes. :messenger_tears_of_joy:

But speaking seriously for a moment: if CUDA is important to you then fair enough, but this attitude only hurts competition in the market. AMD need to actually make a profit to invest in R&D to stay competitive, and they also need to expand market share to put pressure on Nvidia.

If a company releases a good product then they should be rewarded with success; otherwise, why would they continue to bother? You are also sending Nvidia the message that you will buy their products no matter what, which means stagnation and high prices.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Yes. :messenger_tears_of_joy:

But speaking seriously for a moment: if CUDA is important to you then fair enough, but this attitude only hurts competition in the market. AMD need to actually make a profit to invest in R&D to stay competitive, and they also need to expand market share to put pressure on Nvidia.

If a company releases a good product then they should be rewarded with success; otherwise, why would they continue to bother? You are also sending Nvidia the message that you will buy their products no matter what, which means stagnation and high prices.

I want AMD to do well for the price drop all round.
When AMD gets supported, and/or finds a way to enable AMD acceleration or whatever they'll call it, in Octane, Redshift and Substance, I would fully switch over to AMD.
But for productivity, almost everything I use has some sort of reliance on Nvidia GPUs.
Indigo is OpenCL so AMD works, but a lot of other apps rely on CUDA.
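To make the lock-in concrete, here's a toy Python sketch of the kind of device-dispatch logic GPU renderers end up with; every name in it is a hypothetical stand-in, not the actual API of Octane, Redshift or Substance:

```python
# Toy illustration of CUDA lock-in in GPU renderers. All names here are
# hypothetical stand-ins, not a real renderer's API.
def select_render_device(available_apis: set) -> str:
    if "cuda" in available_apis:
        return "gpu-cuda"    # often the only accelerated path an app ships
    if "opencl" in available_apis:
        return "gpu-opencl"  # works for Indigo, as noted above
    return "cpu"             # apps without an OpenCL path fall back here

# An AMD-only machine exposes OpenCL but not CUDA:
print(select_render_device({"opencl"}))  # -> "gpu-opencl"
print(select_render_device(set()))       # -> "cpu"
```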
 
Last edited:
AMD seems to be really bringing it this time in raw performance.

Now we need to see what they're bringing for raytracing and a DLSS competitor.

Do it AMD, bring the competition!

Who can actually release enough GPUs (no paper launches) in time for Cyberpunk?
 


Bo_Hazem

Banned
"Rice burners" are good cars, reliable and good economy vs performance, unlike most American cars, which have lot of faults, are expensive to maintenance, are laughably big + have big motors that are weak + drink way too much. (When you need huge V8 to get same amount of HP as tuned 2.0 engine, they are weak)

At least that is how it is seen outside of US.

Maybe AMD cards do The same now, less tflops but similar performance.

Technically those V8 engines are quite small (OHV), a 7.0L V8 Corvette Z06 engine is much smaller than Nissan 350z V6 engine, but extremely fragile and unreliable and have plenty of valve issues. But Ford has went with the modern world with DOHC lately (pretty big engines, but have 2 cam reads instead of static, outdated hot rod, and much more reliable by default).

Real world results you'll see the Kia Stinger wiping the floor with most of those V8 muscle cars, that's why a new stupid lame race has been invented (roll-race at 40-80km/h "mph") to overcome the lack of engineering intelligence.

Although cams lack the reliability and variety (2 reads depending on driving behavior) of modern DOHC engines, nothing beats the roar of those old fashion, big displacement V8, even though if at the end of the day you get smoked by a KIA V6, especially on normal roads instead of the unrealistically sticky drag strips. :messenger_sunglasses:

Now the new Nvidias are showing too much "displacement" advantage for much less improvement over their previous cards in terms of gain-per-teraflop. So I can see a card competing/exceeding 3080 and getting in between closer to 3090.
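To put the gain-per-teraflop point in numbers, a toy calculation; the figures are illustrative placeholders, not benchmarks:

```python
# Illustrative only: a big TFLOP ("displacement") advantage can translate
# into a much smaller fps advantage. These numbers are made up.
def fps_per_tflop(fps: float, tflops: float) -> float:
    return fps / tflops

print(fps_per_tflop(100.0, 20.0))  # 5.0 fps/TFLOP: the "small engine"
print(fps_per_tflop(120.0, 30.0))  # 4.0 fps/TFLOP: +50% TFLOPs, only +20% fps
```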
 
Last edited:

01011001

Banned
Oh I'm sure there are games, but I haven't seen shit that says "rt is the best way to play the game"

If the pricing is similar, the Nvidia cards will be the winner IMO.
Why would you buy a card that can run these games slightly faster with no RT when there's another card that's similarly priced but can play them way better with RT? It can also play them way better using DLSS 2.0, which is a great way to boost performance AND is slowly but surely becoming the best AA solution on PC, even if you're not looking for a resolution/FPS boost.

If AMD sells their highest SKU at the price of a 3070 with 3080 rasterization performance, I could see some value there. But 3080 rasterization performance at 3080 pricing? Why would you buy that?
 

adamosmaki

Member
If they can price it at 3070 levels then they've got me interested.

Otherwise, 100+ dollars for a bit more performance, but losing out on DLSS and dedicated RT hardware, doesn't sound like a good deal.
Because once I reach 600 dollars I could stretch my budget to 700 and get a 3080.

They've got to do what Ryzen 3000 did in terms of pricing: basically offer much better bang for buck.
Because beating the RTX 3080 in bang for buck is going to be hard with a card that's marginally faster than a 3070 but 100+ dollars more expensive.
It's still a good deal considering the 3080 is just a paper launch for the foreseeable future, and considering it will have more VRAM. Of course that might change if the 3070 is available at 500, but even then the extra VRAM and potential extra performance will still make the 6800 a good deal.
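As a rough way to frame bang for buck, a toy fps-per-dollar comparison; all prices and framerates below are hypothetical:

```python
# Hypothetical bang-for-buck math; none of these prices or fps figures are
# real measurements.
def fps_per_dollar(fps: float, price: float) -> float:
    return fps / price

cards = {
    "3070-class": (90.0, 500.0),
    "6800-class": (100.0, 600.0),
    "3080-class": (110.0, 700.0),
}
for name, (fps, price) in cards.items():
    print(name, round(fps_per_dollar(fps, price), 3))  # fps per dollar
```

With these made-up numbers the middle card loses on fps-per-dollar to the cheaper one, which is exactly why the pricing has to undercut rather than slot neatly in between.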
 
I'd still wait on 30XX cards if this thing can't compete, honestly. If it's better in rasterization than even a 3080 but drops down to potential RTX 3060 levels because of no answer to DLSS 2.1 and worse raytracing performance, it wouldn't be worth it IMO.
 
I'd still wait on 30XX cards if this thing can't compete, honestly. If it's better in rasterization than even a 3080 but drops down to potential RTX 3060 levels because of no answer to DLSS 2.1 and worse raytracing performance, it wouldn't be worth it IMO.

Yup. A few extra fps wouldn't tear me away from better raytracing + DLSS 2.1.
 

Rickyiez

Member
You guys need to start using more of your own neurons when watching DF videos.
What did the Death Stranding comparison reveal when done not by paid shills?



This is with DLSS Quality. It looks just as good as native 1440p. Believe it or not, DLSS is the future, and it delivers what is essentially free performance.
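For context on where the "free" performance comes from: DLSS Quality mode renders internally at roughly two-thirds of the output resolution per axis, then reconstructs. A quick back-of-envelope:

```python
# Back-of-envelope for DLSS Quality at 1440p output: ~2/3 scale per axis
# means well under half the pixels are shaded each frame.
def internal_res(out_w: int, out_h: int, scale: float = 2 / 3):
    return round(out_w * scale), round(out_h * scale)

w, h = internal_res(2560, 1440)
print(w, h)                               # ~1707 x 960 internal render
print(round((w * h) / (2560 * 1440), 2))  # ~0.44x the pixel count of native
```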
 
You guys need to start using more of your own neurons when watching DF videos.
What did the Death Stranding comparison reveal when done not by paid shills?

What are you talking about?

AMD has a sharpening filter. That's it. They do NOT currently have any kind of answer to DLSS.
 
Last edited:

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
I'd still wait on 30XX cards if this thing can't compete, honestly. If it's better in rasterization than even a 3080 but drops down to potential RTX 3060 levels because of no answer to DLSS 2.1 and worse raytracing performance, it wouldn't be worth it IMO.

For me it's mostly about CUDA.
But yeah, raytracing performance and continued improvements to DLSS are really going to be a factor here.
Games are even starting to list DLSS in the recommended specs (Watch Dogs: Legion), and I'm sure more will follow.
If it can out-rasterize the RTX 3080 and most reviews ignore DLSS and RT features, the new king of the hill will be AMD in CPUs and GPUs??? What a time to be alive.
AMD's slides during their presentation are gonna look brutal, assuming they don't pull a Ryzen 5000 and price these cards equal to or higher than Nvidia.

P.S. Yes, I know the RTX 3080 can be OC'd into the 12-13,000 range while we don't know if RDNA2 has any overclocking headroom.
Nvidia will still likely be leading the Fire Strike leaderboards.
 

ZywyPL

Banned
Honestly, the RTX cards will most likely still be a way better value.

Most likely better RT performance and DLSS 2.0 are just two factors in favour of Nvidia right now.

Have fun trying to play any modern game at reasonable framerates with raytracing on RDNA2 cards.


I'd still wait on 30XX cards if this thing can't compete, honestly. If it's better in rasterization than even a 3080 but drops down to potential RTX 3060 levels because of no answer to DLSS 2.1 and worse raytracing performance, it wouldn't be worth it IMO.


Yeah, CP2077 is said to run below 1080p on a 2080 Ti, so DLSS is the only way to get the game running at a reasonable resolution. If that's the case for the retail copy as well, and AMD doesn't have an equivalent tech or just better raw RT performance, then the reviews will eat the cards alive.
 

FireFly

Member
You guys need to start using more of your own neurons when watching DF videos.
What did Death Stranding comparison reveal, when done not by paid shills?


Baseless trashing of DF really does get old.

(Can't wait until AMD gets its own version of DLSS, so it becomes just another expected feature like AA, VRR etc. and not another piece of ammo for PC warring)
 

llien

Member
DLSS Quality

Oh god.
Let's stop using our brains, shall we?
And, let me guess, there are no reviews comparing TAA DLSS vs FidelityFX and concluding the latter is better.
Right?

Baseless trashing of DF really does get old.
Yeah. That legit "exclusive preview" thing was totally not "problematic"

any kind of answer
It does not matter what you call things.
They are upscalers.

Raindrops being wiped out by NV upscaling was noted 15 minutes into the thread by some random poster here.

When someone tells you "but from a distance, you won't see any difference" your bullshitmeter should start ringing.

The only answer AMD needs is to overbook more paid shills.
 

FireFly

Member
Yeah. That legit "exclusive preview" thing was totally not "problematic"
I think it's reasonable to criticise preview content such as that, which comes without the proper context. But I don't see any issues with DF's 3080 review itself (which correctly highlighted that some of the 3080's advantage in Doom Eternal was due to VRAM at max settings).
 

ZywyPL

Banned
Raindrops being wiped out by NV upscaling was noted 15 minutes into the thread by some random poster here.

When someone tells you "but from a distance, you won't see any difference" your bullshitmeter should start ringing.

The only answer AMD needs is to overbook more paid shills.

That's true. There are also some smudging effects in some segments of the game. Although that's just a one-game case, so the question is whether it's the tech or that one particular game's fault. Not that DS is a very demanding title that needs DLSS to begin with, IMO.

Anyway, the tech is quite young; it wasn't even good until it reached the 2.0 iteration, so it can only get better over time. As many have pointed out, AI upscaling is the inevitable future; RT effects are just too damn demanding, like in CP2077 which I mentioned. No one will buy $500-700 Big Navi cards to play at 1080p30; it'll be DOA unless AMD is holding something up their sleeve.
 
Last edited:

llien

Member
Anyway, the tech is quite young; it wasn't even good until it reached the 2.0 iteration

Uh, give me a break on "iterations".
DLSS 2.0 has about as much in common with 1.0 as it does with FidelityFX.
1.0 was actual AI upscaling, with NN training at a higher resolution.
2.0 is TAA with some static quirks.
And if what I just stated is shocking to you, perhaps you need to learn a bit more about the tech in question; AnandTech has excellent articles, among others.

So let me ask again: of the handful of Death Stranding upscaling reviews comparing DLSS 2.0 vs FidelityFX, are there none that prefer the latter?

...correctly highlighted that some of the 3080's advantage in Doom Eternal was due to VRAM at max settings)...
So very kind of them, totally not fishy. Did they mention how much of "some" it was?
 
Last edited:

FireFly

Member
So very kind of them, totally not fishy. Did they mention how much of "some" it was?
Yes!

"You may be wondering why we have included ultra textures benchmark variants for the GTX 1080 and RTX 2080 at 4K resolution. The truth is that the huge boosts RTX 3080 delivers here come from two sources: the chip itself and the fact it has more VRAM capable of handling the so-called 'ultra nightmare' textures. Here, by including both ultra nightmare and ultra nightmare with ultra textures benches, you can see how much of the perf boost comes from the architecture and how much comes from the new chip, plus the extra RAM. "

 
Technically those V8 engines are quite compact (OHV); a 7.0L V8 Corvette Z06 engine is physically much smaller than a Nissan 350Z V6 engine, but extremely fragile and unreliable, with plenty of valve issues. Ford, though, has gone modern with DOHC lately (pretty big engines, but with two overhead cams instead of the static, outdated hot-rod setup, and much more reliable by default).

Such bullshit. Current GM small blocks are very reliable, light, small and powerful. Ford also introduced a new 7.3-liter OHV pushrod V8 for 2020 that made a big splash:


And the world is full of videos where a GM small block has been installed in all kinds of vehicles. Great engines, and very fuel efficient too.
 
Uh, give me a break on "iterations".
DLSS 2.0 has about as much in common with 1.0 as it does with FidelityFX.
1.0 was actual AI upscaling, with NN training at a higher resolution.
2.0 is TAA with some static quirks.
And if what I just stated is shocking to you, perhaps you need to learn a bit more about the tech in question; AnandTech has excellent articles, among others.

So let me ask again: of the handful of Death Stranding upscaling reviews comparing DLSS 2.0 vs FidelityFX, are there none that prefer the latter?


So very kind of them, totally not fishy. Did they mention how much of "some" it was?
Let's keep this civil. Why not inform us about what we're not seeing? Do you get the same or better performance with FidelityFX than with DLSS 2.0 on equivalent GPUs? Will the upcoming GPUs have better RT than Turing, or even Ampere? We don't know, but is it not safe to go by the past 7 or so years? It's fine that you prefer AMD, and I'd even go so far as to say that you absolutely hate Nvidia (for God knows what reason; maybe you're an AMD employee sent out to damage-control forums?).

I just can't understand preferring their current lineup over Nvidia's, unless you don't care about performance or raytracing.
 
Last edited:

Pedro Motta

Member
I want AMD to do well for the price drop all round.
When AMD gets supported, and/or finds a way to enable AMD acceleration or whatever they'll call it, in Octane, Redshift and Substance, I would fully switch over to AMD.
But for productivity, almost everything I use has some sort of reliance on Nvidia GPUs.
Indigo is OpenCL so AMD works, but a lot of other apps rely on CUDA.
I have exactly the same issue as you. I love AMD and wish I could buy their cards, but I'm stuck because of offline GPU rendering. Also need them CUDA cores.
 

jigglet

Banned
Do not care, considering it took them 6 months to fix their last set of drivers. As someone who grew up during the 90's era of PC gaming, the thought of spending hours troubleshooting driver issues and glitches has me terrified. I left that shit in the 90's where it belongs.
 

llien

Member
That's cool, although I think I was talking about the DF video, not some site.

Do you get the same or better performance with FidelityFX than with DLSS 2.0 on equivalent GPUs?
In general, you get better performance by lowering resolution; what kind of question is this?

DLSS 2.0 is more aggressive upscaling, wiping out fine detail (in general, there is an issue with small, fast-moving objects when using TAA).
FidelityFX is cross-platform.
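For reference, the core of a sharpening filter like FidelityFX CAS can be sketched as an unsharp mask. This is a crude single-channel approximation of the idea, not AMD's actual contrast-adaptive shader:

```python
import numpy as np

# Crude unsharp-mask sharpening on a single-channel float image in [0, 1].
# CAS proper adapts the strength to local contrast; the detail-boosting idea
# is the same.
def sharpen(img: np.ndarray, amount: float = 0.5) -> np.ndarray:
    p = np.pad(img, 1, mode="edge")
    # 3x3 box blur built from nine shifted views of the padded image
    blur = sum(p[y:y + img.shape[0], x:x + img.shape[1]]
               for y in range(3) for x in range(3)) / 9.0
    # add back a scaled copy of the detail (original minus blur), then clamp
    return np.clip(img + amount * (img - blur), 0.0, 1.0)
```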

Will the upcoming GPUs have better RT than Turing, or even Ampere?
I don't know and, frankly, don't care.
After Epic rolled out the UE5 demo on a platform that supports hardware RT, but without using it, at this point and for the foreseeable future hardware RT is just a gimmick that goes nowhere. That's just my impression.

Try to imagine effects much cooler than in this video that could realistically be done with hardware RT on current-gen cards. I can't.


I just can't understand preferring their current lineup over Nvidia's, unless you don't care about performance or raytracing.
I'm more into voting with my wallet than the average user, but on top of that, hardware RT is not widely used for anything but basic gimmicks, and that's not going to change.
Upscaling is upscaling, no matter which fancy word you call it; I don't want to pretend I'm gaming at a higher resolution than I actually am, or lose detail in the process.
DLSS 1.0 was a formidable attempt at "really really AI" upscaling, but it failed miserably.

The 5700 series had hardware QA issues (which is also shitty of AMD) that get mistaken for software ones.
 
Last edited:

Rikkori

Member
People just don't understand DLSS at all. All the AI magic is nonsense, because if it did anything they'd still do per-game training, and the result wouldn't look like smeary shit that isn't even on par with a lowered render scale. How can anyone believe all this is so reliant on AI when the per-game model is worse than the general one? Use your brain, ffs; this is never the case in any ML instance. What DLSS 2.0 does well is be a very good form of TAA, that's it. Most games have shit TAA (including UE4 and UE5), which is why the comparisons look good, but compare it to what competent devs do, like The Division 2, and it all falls apart, and DLSS 2.0 gets revealed as more marketing than AI magic.

The only advantage is that Nvidia pays full-time engineers and even pays the devs themselves for integration. But if they wanted to, the devs themselves could make a non-shit TAA; most just don't bother.
That's it. That's all the "advantage" DLSS 2.0 has. Tensor cores need not apply.

You can see console devs are already switching to various forms of temporal injection/accumulation, and the end result will be 99% as good as DLSS 2.0. And if you want to test this out, go boot up The Division 2 and play with the render scale with AA on high. It's easy. I'll put their 4K @ 50% render scale up against any DLSS Performance-mode comparison and it'll still come out on top.
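For what it's worth, the temporal accumulation shared by TAA, DLSS 2.0's input side, and the console techniques above reduces to a few lines; real pipelines add motion-vector reprojection and history clamping to fight exactly the ghosting and smearing argued over in this thread:

```python
import numpy as np

# Toy temporal accumulation: blend each new jittered frame into a running
# history with an exponential moving average. A smaller alpha smooths more
# but ghosts more on movement, which is what reprojection and clamping fix
# in real TAA-style pipelines.
def accumulate(history: np.ndarray, frame: np.ndarray,
               alpha: float = 0.1) -> np.ndarray:
    return (1.0 - alpha) * history + alpha * frame
```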
 

Marlenus

Member
How can anyone believe all this is so reliant on AI when the per-game model is worse than the general one? Use your brain, ffs; this is never the case in any ML instance.

Have you read the MuZero paper? It exceeds DeepMind's less general algorithms in less training time in many games, like chess, Go, shogi, and the Atari suite.
 
Am I a bad person for really wanting AMD to do really well this cycle just so Nvidia drop their prices because I need dem CUDA cores?

Same here. I'm not giving up on DLSS and other features just because AMD is finally closing in on Nvidia.

Still, I'd like them to get their heads out of their asses with this 3000-series fiasco.

So go team red.
 
The only advantage is that Nvidia pays full-time engineers and even pays the devs themselves for integration. But if they wanted to, the devs themselves could make a non-shit TAA; most just don't bother.
That's it. That's all the "advantage" DLSS 2.0 has. Tensor cores need not apply.

Not sure if trolling. Not only does the Quality mode of DLSS 2.0 look better than TAA, it also delivers around 30% more performance. That's the entire point.
 

ZywyPL

Banned
I'm more into voting with my wallet than the average user [...]

But what does that mean, though? For me, voting with my wallet means: here is my xxx$, show me what you've got, and the better offer wins. It's showing the other offer(s) that they need to step up their game if they want my money. Just look at how AMD has advanced in their CPU lineup; that wouldn't have happened if Intel didn't have 90% market share year after year, and if we want AMD to excel on the GPU side as well, we need to show them they need to do better, not support mediocrity.

By betting on the weaker horse you only hurt yourself, nobody else, and the frustration has been clear in all your posts about NV ever since the Turing GPUs launched: this is a gimmick, this is fake, and whatnot. But this is where the industry is heading, and there's nothing a single angry consumer like you can do about it; by being so stubborn you just purposely leave yourself in the dust while everyone around you keeps enjoying the games. You just have to accept the fact that you bought an inferior product, instead of building up a philosophy for why the better product actually isn't better; suck it up and deal with it. And if you are really that much against RT and AI, luckily for you both AMD and NV have a solution: it's called the 5700 XT and the 1660 Ti. But I guess with those cards you'd probably start a new agenda about how terribly unoptimized games are...


People just don't understand DLSS at all

Do the end consumers need to understand, though? Most people simply see that they get a sharper image and double the framerate, so no matter how you slice it, 99% will take that with open arms and even close their eyes to some minor disadvantages/artifacts. It's like with cars, for example: barely anyone really knows what is going on under the hood, what happens to the gas on its journey from the tank to the exhaust pipe, or how the different parts of the engine, transmission and suspension move and cooperate with each other. They don't know, nor do they want or need to. All people know is that when you step on the gas the car goes forward, and the more powerful the engine, the faster it goes; no one needs an engineering degree to know that basic principle, even little kids know it. And with GPUs, all people care about is how fast the games run, nothing more, nothing less. They see the FPS charts and that's all that matters; how it's achieved, nobody really cares.
 
Last edited:
I've seen a couple of posts mentioning 6800XT > 3080 in rasterisation. While it may end up being the case by a tiny %, I think more realistically the two cards will be roughly on par.

So + or - up to 5% depending on the title. In other words, they are likely going to trade blows and be roughly on par. Not trying to dilute the hype train exactly, just trying to keep expectations in check and somewhat realistic.

So for example, if you benchmarked title A, maybe the 3080 has 2-5 more FPS; then in title B, AMD has 2-5 more FPS, etc...

Then in some edge cases one card or the other might be ahead by 10+ FPS in title C, if the game's engine heavily favours one architecture or the other. We will likely see this kind of thing a lot in the benchmarks.

In a bunch of other titles they might be essentially the same, give or take 1 FPS or so.

Exciting times and it is great to (seemingly) have good competition in the high end again.

Really looking forward to the 28th of October.
 


regawdless

Banned
Possible raytracing benchmarks:


Looks like there is a chance the 6800 XT will be competitive with the 3070 in raytracing.

This is big if true. I mean, the non-RT performance is really impressive.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Possible raytracing benchmarks:


Looks like there is a chance the 6800 XT will be competitive with the 3070 in raytracing.


Ohh man.
Equal or better performance at 4K than the RTX 3080.
Equal or better RT performance than an RTX 3070.

Don't fuck this up AMD, don't fuck this up........ this should be a layup.

 