
RDNA2 Isn't As Impressive As It Seems

GymWolf

Gold Member
My advice: don't let anyone influence your decisions. Inform yourself as much as possible and make your decision based on that. And be careful of marketing, both direct and indirect.
I have time to see benchmarks for both GPUs in real-world applications; I'm probably gonna upgrade in 2-6 months. (Bullshit, I'm gonna upgrade when the hype kicks in, but still not sooner than 2 months tho.)
 
Last edited:

bohrdom

Banned
Nothing can "stack up" when one lives in a bubble, where TAA derivative upscaling gives results "on par or better than" 4k.

What are you even saying, dude? I'm looking at this from a technical and an end-result viewpoint. I really don't have any skin in the game.
 
Last edited:

iQuasarLV

Member
What is more funny? People used to 'care' about the cost of a mainstream card. It used to be the $200-$250 range. Then it was a firm $250. Last gen it was okay to consider a $400 card mainstream. Now? $500. My God, how do you people look at yourselves in the mirror when both, BOTH, companies have convinced you to smile at a shit sandwich, eat it, and ask for more? I am the poor one in my circle of friends and I had to swallow a bitter pill to buy a 5600 XT for $300 because there was nothing, NOTHING, cheaper this generation. Before that it was an RX 470 4GB for $180, and before that hand-me-down 780 Ti and 680 Ti video cards.

In reality, most people get what they can afford and set principles on what is an acceptable cost. For me? Anything at or exceeding my car's monthly payment can screw right off. I like my car. I love my house. To buy something and say 'eh, I'll cover the bill next month' is being an idiot pretending to keep up with the Joneses. That is what I feel ray tracing and DLSS are: fancy feature sets to justify buying a new generation of video card. No game I play currently has either of those features. And at 28 games by the end of 2021, do I really plan to play any of them? Not really.

As for Nvidia vs. AMD performance and cock-measuring about disposable income: I've got two hetero-lifemate friends who both work as engineers and are single. Been friends for over 28 years. They have disposable income for days, enough to buy stock and bitcoin on a whim. When it comes to flexing on what their PC specs / car / fashion are, I quote: "The fuck do I care what other people think about what I bought? I wanted it, and I am happy with it."

When you flex with explanations on what your awesome history is over others in a conversation, you lost.

When you flex pictures of what you got and others don't, you lost.

When you got to reach for the heavens and pluck the ambrosia of an explanation to justify your place in the world, you lost.


So, here I am questioning my sanity for bothering to read 6 pages of this stuff. The cognitive dissonance and logical fallacies that started this thread went off to the races and never looked back. The dog I have in this game? The time I spent reading all the replies. Phew. Gonna go earn a paycheck now. PEACE.
 

bohrdom

Banned
Are RT and DLSS more important than Smart Access Memory?

It really depends on the consumer. Only the market will tell.

If I had to bet, I think DLSS will yield bigger performance gains for the gamer than Smart Access Memory. Both require developer input, and I think DLSS is easier to integrate than SAM.
 
Last edited:
It really depends on the consumer. Only the market will tell.

If I had to bet, I think DLSS will yield bigger performance gains for the gamer than Smart Access Memory. Both require developer input, and I think DLSS is easier to integrate than SAM.

While DLSS will definitely grant more performance than SAM, SAM does not require developer intervention. If you have a Zen 3 CPU, a 500-series motherboard and an RX 6000 series GPU, then you can simply turn it on in the BIOS and it just works.

That said, the two are completely different technologies solving different problems, so they are not really directly comparable.
 
Last edited:

BluRayHiDef

Banned
Are RT and DLSS more important than Smart Access Memory?
When rasterization is good enough and/or pretty much equal between the competition, then yes, RT and DLSS are more important than SAM. Nvidia's cards trade blows with AMD's cards in rasterization and at least exceed the threshold of good rasterization performance in the instances in which they lose. Hence, RT and DLSS are what make them stand out relative to AMD's cards.
 

cucuchu

Member
When rasterization is good enough and/or pretty much equal between the competition, then yes, RT and DLSS are more important than SAM. Nvidia's cards trade blows with AMD's cards in rasterization and at least exceed the threshold of good rasterization performance in the instances in which they lose. Hence, RT and DLSS are what make them stand out relative to AMD's cards.

I tend to agree, hence why AMD is working on a DLSS alternative. If they didn't think they needed it, they wouldn't even bother. It'll be interesting to see what they have to show next year.

RDNA2 is pretty impressive, and I don't think anyone should try to take anything away from AMD, because it seems they are bringing the fight to NVIDIA, which is a good thing for everyone. It's just the hyperbolic statements you see (from both NVIDIA and AMD enthusiasts) that lead to people running with these narratives that a few-fps increase in one game/benchmark or the other equals an utter slaying of the competition. I've personally gone with a 3080 this go-around simply because I was able to get one, and there certainly is not a reason to trade a 3080 for a 6800 XT unless you specifically want a few more FPS in a couple of games without RT turned on... or vice versa if you get a 6800 (XT). You are going to get amazing performance from both for the next 3-5 years, and it all comes down to whether you appreciate RT, which to be fair can be hit or miss depending on the title.
 
Last edited:

bohrdom

Banned
While DLSS will definitely grant more performance than SAM, SAM does not require developer intervention. If you have a Zen 3 CPU, a 500-series motherboard and an RX 6000 series GPU, then you can simply turn it on in the BIOS and it just works.

Has this been verified anywhere?

That said, the two are completely different technologies solving different problems, so they are not really directly comparable.

Yes, but in terms of end effect for the consumer they're comparable: FPS and image quality.
 
Has this been verified anywhere?

Yes. All of the games benchmarked at AMD's show have already been released. None of them were coded with SAM in mind, because it wasn't even a thing at the time those games were released. SAM is something that happens in the hardware/OS, not something specific in the game code. Of course, it will likely soon be possible for game developers to optimize their games in some ways to utilize SAM for additional performance uplift on top of what is naturally there with SAM. AMD mentioned at their reveal that none of the games shown with SAM on were optimized for usage with SAM.

Granted, the performance uplift will vary depending on how the game itself is coded: some games will receive little to no increase, while an outlier like Forza receives a massive 11% increase in performance. Most of the games shown seemed to get a 5-7% performance uplift with SAM enabled.
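For anyone who wants to put those percentages in frame-rate terms, here's a minimal Python sketch (the 100 fps baseline is made up purely for illustration; only the 5-7% and 11% figures come from AMD's reveal):

Code:
# Rough sanity check of what a SAM-style percentage uplift means in fps.
# The baseline is hypothetical; only the uplift percentages echo AMD's slides.
def with_uplift(fps_sam_off, uplift_pct):
    """Frame rate after applying a given percentage uplift."""
    return fps_sam_off * (1.0 + uplift_pct / 100.0)

baseline = 100.0  # hypothetical SAM-off average fps
for pct in (5, 7, 11):  # typical 5-7% uplift, with Forza's 11% as the outlier
    print(f"{pct:>2}% uplift: {baseline:.0f} fps -> {with_uplift(baseline, pct):.1f} fps")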

Yes, but in terms of end effect for the consumer they're comparable: FPS and image quality.

To be honest, they are not really comparable at all. SAM is not a post-processing effect and does not affect image quality at all; it simply grants more performance by allowing the CPU to access the full memory of the GPU. DLSS is a post-processing, machine-learning-based upsampling and reconstruction technique. The reason you get higher FPS when using DLSS is that the game is literally rendering at a lower resolution and then upscaling, frame by frame, to a 4K approximation.

Granted, using either one will likely increase the FPS of the game you are playing, but they are fundamentally different and cannot be directly compared. It would be like comparing multi-threading in CPUs to anti-aliasing or something like that.
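To put "rendering at a lower resolution" into concrete numbers, here's a small Python sketch of the pixel math. The internal resolutions (Quality rendering at 1440p, Performance at 1080p, both output at 4K) match what's described later in this thread; treating shading cost as roughly proportional to pixel count is a simplification on my part:

Code:
# Back-of-the-envelope pixel math for DLSS-style upscaling to a 4K output.
# Shading cost proportional to pixel count is only a rough approximation.
MODES = {
    "Native 4K":           (3840, 2160),
    "Quality (1440p)":     (2560, 1440),
    "Performance (1080p)": (1920, 1080),
}

target_pixels = 3840 * 2160  # pixels in the final 4K frame

for name, (w, h) in MODES.items():
    rendered = w * h
    print(f"{name:<22} renders {rendered / 1e6:.1f} MP "
          f"({rendered / target_pixels:.0%} of the 4K pixel count)")

Quality mode shades roughly 44% of the pixels of a native 4K frame and Performance mode roughly 25%, which is where the extra frame rate comes from; the reconstruction step then fills in the rest.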
 
Last edited:

bohrdom

Banned
Yes. All of the games benchmarked at AMD's show have already been released. None of them were coded with SAM in mind, because it wasn't even a thing at the time those games were released. SAM is something that happens in the hardware/OS, not something specific in the game code. Of course, it will likely soon be possible for game developers to optimize their games in some ways to utilize SAM for additional performance uplift on top of what is naturally there with SAM. AMD mentioned at their reveal that none of the games shown with SAM on were optimized for usage with SAM.

Granted, the performance uplift will vary depending on how the game itself is coded: some games will receive little to no increase, while an outlier like Forza receives a massive 11% increase in performance. Most of the games shown seemed to get a 5-7% performance uplift with SAM enabled.

Gotcha. This is some pretty cool tech. It shows the gains you can have when you own the full stack on the machine.

To be honest, they are not really comparable at all. SAM is not a post-processing effect and does not affect image quality at all; it simply grants more performance by allowing the CPU to access the full memory of the GPU. DLSS is a post-processing, machine-learning-based upsampling and reconstruction technique. The reason you get higher FPS when using DLSS is that the game is literally rendering at a lower resolution and then upscaling, frame by frame, to a 4K approximation.

Granted, using either one will likely increase the FPS of the game you are playing, but they are fundamentally different and cannot be directly compared. It would be like comparing multi-threading in CPUs to anti-aliasing or something like that.

I'm saying they're comparable because of the end result for the gamer. At the end of the day the gamer cares about fps and image quality. They don't really care about how the sw/hw actually achieves the metrics.

If DLSS renders a higher-quality image compared to SAM and DLSS provides more FPS than SAM (all other things held constant), who cares about the technology except the technical folks? I understand that not all games support DLSS and the customer will probably have to factor that into their purchase decision.
 
Last edited:

Ascend

Member
One of the main differences between DirectML and DLSS is that the former does the AI calculations in real-time, and the latter is done offline and the result is implemented into the game code for final execution. DirectML has the advantage of potentially working across all games without requiring manual implementation. DLSS has the advantage that it can be optimized per game and therefore potentially offer better per game results.

Additionally, DirectML will likely work on everything, while DLSS is proprietary to nVidia. That means that DirectML will be adopted widely more naturally, specifically because it is a primary feature in the Xbox Series S/X. DLSS implementation will be dependent on how much money nVidia is willing to throw around.
 

A2una1

Member
One of the main differences between DirectML and DLSS is that the former does the AI calculations in real-time, and the latter is done offline and the result is implemented into the game code for final execution. DirectML has the advantage of potentially working across all games without requiring manual implementation. DLSS has the advantage that it can be optimized per game and therefore potentially offer better per game results.

Additionally, DirectML will likely work on everything, while DLSS is proprietary to nVidia. That means that DirectML will be adopted widely more naturally, specifically because it is a primary feature in the Xbox Series S/X. DLSS implementation will be dependent on how much money nVidia is willing to throw around.


Are independent benchmarks out yet? Reading through this thread, it seems like all is on the table... I mean, how do they perform in a variety of benchmarks with different settings (Intel and AMD CPUs below the 5000 series)?
OK, the AMD GPUs seem to be real competition now, but even if they outperform Nvidia by 10% (an arbitrary number I chose) and are slightly cheaper (I don't consider the 3090 a gaming card), the Nvidia cards aren't that catastrophic. I mean, so few people were able to get one... So what is the problem here?
 

Ascend

Member
Are independent benchmarks out yet? Reading through this thread, it seems like all is on the table... I mean, how do they perform in a variety of benchmarks with different settings (Intel and AMD CPUs below the 5000 series)?
OK, the AMD GPUs seem to be real competition now, but even if they outperform Nvidia by 10% (an arbitrary number I chose) and are slightly cheaper (I don't consider the 3090 a gaming card), the Nvidia cards aren't that catastrophic. I mean, so few people were able to get one... So what is the problem here?
I am not sure why you're asking what the problem is... Technically, there is no problem. At least, nothing that is unusual. People generally have opinions while lacking knowledge, and those people are often the loudest. We should indeed be waiting for independent benchmarks before jumping to conclusions...

For one, pretty much all ray tracing implementations up until now have been DXR 1.0. Supposedly, RDNA2 is better at DXR 1.1 than it is at DXR 1.0. That still doesn't mean they'll beat the RTX cards (they can do 1.1 also), but the disadvantage will likely be smaller. Remember that this is still a rumor, but it could be one of the main reasons why AMD hasn't shown any ray tracing performance yet with available DXR games. As far as I know, the only game that supports DXR 1.1 is Minecraft. The rest are still on DXR 1.0.
But instead we get talk about AMD not having any hardware ray tracing at all, or that they won't be competitive at all with ray tracing... or that their architecture isn't impressive, etc.
What happened to waiting for benchmarks? It's always doom and gloom when it concerns AMD. AMD went from not supporting DXR at all to full DX12 Ultimate support. So any argument that RDNA2 is not impressive, which is what a bunch of people in this thread are constantly claiming, is moot.

In any case, for those interested in the actual tech advancements (a small conceptual sketch of the inline form follows this excerpt):

DXR 1.1 is an incremental addition over the top of DXR 1.0, adding three major new capabilities:

- GPU Work Creation now allows Raytracing. This enables shaders on the GPU to invoke raytracing without an intervening round-trip back to the CPU. This ability is useful for adaptive raytracing scenarios like shader-based culling / sorting / classification / refinement. Basically, scenarios that prepare raytracing work on the GPU and then immediately spawn it.
- Streaming engines can more efficiently load new raytracing shaders as needed when the player moves around the world and new objects become visible.
- Inline raytracing is an alternative form of raytracing that gives developers the option to drive more of the raytracing process, as opposed to handing work scheduling entirely to the system (dynamic-shading). It is available in any shader stage, including compute shaders, pixel shaders etc. Both the dynamic-shading and inline forms of raytracing use the same opaque acceleration structures.

Inline raytracing can be useful for many reasons:
  • Perhaps the developer knows their scenario is simple enough that the overhead of dynamic shader scheduling is not worthwhile. For example, a well constrained way of calculating shadows.
  • It could be convenient/efficient to query an acceleration structure from a shader that doesn’t support dynamic-shader-based rays. Like a compute shader or pixel shader.
  • It might be helpful to combine dynamic-shader-based raytracing with the inline form. Some raytracing shader stages, like intersection shaders and any hit shaders, don’t even support tracing rays via dynamic-shader-based raytracing. But the inline form is available everywhere.
  • Another combination is to switch to the inline form for simple recursive rays. This enables the app to declare there is no recursion for the underlying raytracing pipeline, given inline raytracing is handling recursive rays. The simpler dynamic scheduling burden on the system can yield better efficiency.

Scenarios with many complex shaders will run better with dynamic-shader-based raytracing, as opposed to using massive inline raytracing uber-shaders. Meanwhile, scenarios that have a minimal shading complexity and/or very few shaders will run better with inline raytracing.

If the above all seems quite complicated, well, it is! The high-level takeaway is that both the new inline raytracing and the original dynamic-shader-based raytracing are valuable for different purposes. As of DXR 1.1, developers not only have the choice of either approach, but can even combine them both within a single renderer. Hybrid approaches are aided by the fact that both flavors of DXR raytracing share the same acceleration structure format, and are driven by the same underlying traversal state machine.

Best of all, gamers with DX12 Ultimate hardware can be assured that no matter what kind of Raytracing solution the developer chooses to use, they will have a great experience.
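The real thing is HLSL running against a GPU acceleration structure, but the "well constrained way of calculating shadows" mentioned above boils down to a simple occlusion query: trace one ray from the surface toward the light and stop at the first hit. Here's a toy CPU-side Python sketch of that idea (Möller-Trumbore intersection over a flat triangle list; the scene and numbers are made up, and this is not actual DXR code):

Code:
# Toy illustration of the kind of shadow query DXR 1.1 lets a shader issue inline:
# cast a single ray toward the light and report "occluded" on the first hit.
# A real implementation traverses a BVH acceleration structure on the GPU.

EPS = 1e-7

def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_hits_triangle(origin, direction, tri, t_max):
    """Moeller-Trumbore ray/triangle test; True if the ray hits within (EPS, t_max)."""
    v0, v1, v2 = tri
    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direction, edge2)
    det = dot(edge1, pvec)
    if abs(det) < EPS:          # ray parallel to the triangle plane
        return False
    inv_det = 1.0 / det
    tvec = sub(origin, v0)
    u = dot(tvec, pvec) * inv_det
    if u < 0.0 or u > 1.0:
        return False
    qvec = cross(tvec, edge1)
    v = dot(direction, qvec) * inv_det
    if v < 0.0 or u + v > 1.0:
        return False
    t = dot(edge2, qvec) * inv_det
    return EPS < t < t_max

def light_visible(point, light_pos, triangles):
    """Shadow ray: any hit between the shading point and the light means 'in shadow'."""
    to_light = sub(light_pos, point)
    dist = dot(to_light, to_light) ** 0.5
    direction = tuple(c / dist for c in to_light)
    for tri in triangles:
        if ray_hits_triangle(point, direction, tri, dist):
            return False        # accept first hit and end the search
    return True

# One triangle sitting between the shading point and the light -> in shadow.
blocker = ((-1.0, 1.0, -1.0), (1.0, 1.0, -1.0), (0.0, 1.0, 1.0))
print(light_visible((0.0, 0.0, 0.0), (0.0, 5.0, 0.0), [blocker]))  # False (occluded)
print(light_visible((3.0, 0.0, 0.0), (3.0, 5.0, 0.0), [blocker]))  # True  (lit)

The early-out in light_visible is the whole point: a shadow test doesn't need the closest hit or any shading at the hit point, so the fixed, self-contained style of an inline ray query (with an accept-first-hit flag) is a good fit, while scenes full of complex hit shaders are better served by the dynamic-shader-based path.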


 

Rickyiez

Member
I have a 2700x, 650w PSU and currently game at 1080p. I plan on getting a whole new system in about 2 years. You don't think a 6800xt or 3080 would be overkill right now?

The 6800 XT is only $70 more, so it's pretty much a no-brainer here. The 6800 is a very weirdly priced product.
 

BluRayHiDef

Banned
I don't know whether or not this has been posted yet, but supposed benchmarks of the RX 6800, the RTX 2080TI, and the RTX 3070 running Shadow of the Tomb Raider with ray tracing enabled have leaked and they indicate that the RX 6800 is faster than the other two cards when they're not using DLSS.

Considering that the RX 6800 is meant to compete with and correspond to the RTX 3070 (which is roughly equal to the RTX 2080Ti in performance), if these benchmarks are real, then they indicate that the RX 6800XT and RX 6900XT are proportionally faster than the RTX cards with which they compete and to which they correspond (i.e. the RTX 3080 and RTX 3090, respectively) in Shadow of the Tomb Raider under the aforementioned settings with ray tracing enabled when the RTX cards aren't using DLSS.

Do you guys think that this performance delta would be consistent across most or all games that include ray tracing, considering that Shadow of the Tomb Raider has only ray traced shadows?

Article said:
The performance benchmarks that were carried out are apparently of an AMD Radeon RX 6800 graphics card, which is the most cut-down variant of the three Big Navi GPUs based on the RDNA2 graphics architecture. The graphics card was tested with an AMD Ryzen 5 3500X 6-core desktop processor along with 16 GB of DDR4 memory. The game selected was Shadow of the Tomb Raider, which was one of the first titles to enable ray tracing. The game makes use of ray-traced shadows & these can be enabled with any graphics card that features support for DXR through the DX12 API.

The game was set to the Ultra quality preset with 8x anisotropic filtering, screen space lens and effects set to Normal quality, and ray-traced shadow quality set to High. DLSS was disabled since that feature is not supported by AMD RDNA 2 graphics cards such as the Radeon RX 6800, but it is reported that AMD is already working on a similar solution based on Microsoft's DirectML tech & it should come at a later date in select titles.

TmsKcNY.png


BFJgiol.png


Source:

 

llien

Member
I have a better version of the OP:

bYKXms9.png


I'm looking at this from a technical and an end-result viewpoint.
So AMD rolling out cards with slower memory and a 256-bit bus that beat pricey VRAM on a 384-bit bus is not impressive to you.
Anti-Lag and Radeon Chill are not impressive either.

There is nothing technical in "my butt feels this tech is better than that tech".
 

bohrdom

Banned
I have a better version of the OP:

bYKXms9.png



So AMD rolling out cards with slower memory and a 256-bit bus that beat pricey VRAM on a 384-bit bus is not impressive to you.
Anti-Lag and Radeon Chill are not impressive either.

There is nothing technical in "my butt feels this tech is better than that tech".

IDK how much clearer I need to be. Like I said, I'm talking about software tech. Rather than taking my last post out of context, put it in the context of the discussion in the previous replies.
 

llien

Member
I don't know whether or not this has been posted yet, but supposed benchmarks of the RX 6800, the RTX 2080 Ti, and the RTX 3070 running Shadow of the Tomb Raider with ray tracing enabled have leaked and they indicate that the RX 6800 is faster than the other two cards when they're not using DLSS, i.e. not running at a lower resolution.

FTFY
"But 1440p upscaled to 4k by NV's TAA derivative looks the same as native", no it doesn't and you had that embarrassing moment on page 4 in this very thread.
 

bohrdom

Banned
Of three items mentioned above, only one is hardware.

I won't even travel into "exactly what aspect of what do we need to cherry pick" for your true scotsman fallacy to stand.
In fact, I'm rather bored and done with it.

lol you're bored because you don't know what you're talking about.

Also, yes, like I said earlier: Anti-Lag and Radeon Chill are traditional software solutions. Nothing really exciting. DLSS and the ray tracing denoiser are pretty much black magic if you look at how they achieve those results.
 
Last edited:

BluRayHiDef

Banned
FTFY
"But 1440p upscaled to 4k by NV's TAA derivative looks the same as native", no it doesn't and you had that embarrassing moment on page 4 in this very thread.

Yes, it does. I would know because I've played through most of Control via DLSS Quality Mode in 4K (1440p to 4K). Sure, there may be small details that aren't as defined as they'd be in native 4K (e.g. small text, reflections, etc), but they're just as smooth as they'd be in native 4K and you'd have to stop the game and overanalyze what's on the screen while repeatedly switching DLSS on and off in order to notice. Furthermore, the major details, such as the quality of character models and environmental textures, look identical.

On the other hand, if I switch from native 4K to regular 1440p, I notice the difference immediately.

DLSS is so good that when it defaulted to Performance Mode (1080p to 4K) as a result of me swapping one of my graphics cards out of my computer for another, I didn't even notice. It was only after I checked the settings due to observing a significant boost in frame rate via MSI Afterburner (Rivatuner) that I realized that the game had been running in Performance Mode.

Don't underestimate DLSS; it's very effective.
 
One of the main differences between DirectML and DLSS is that the former does the AI calculations in real-time, and the latter is done offline and the result is implemented into the game code for final execution. DirectML has the advantage of potentially working across all games without requiring manual implementation. DLSS has the advantage that it can be optimized per game and therefore potentially offer better per game results.

DirectML does not do anything "realtime" beyond running on an existing model. All machine learning models are trained offline and run in real time. Training takes a hell of a long time. DLSS 2.0 now uses a generic model for all games, as long as they send the correct inputs into the model (multiple frames/buffers plus a motion vector buffer). Training for specific games is possible with both approaches too, but again, training could take hours to days.
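To make the offline/real-time split concrete, here's a toy Python sketch of the shape of the pipeline. Nothing in it is the real DLSS network or the DirectML API; the "model" is a stand-in (nearest-neighbour upscale plus a history blend along made-up motion vectors), purely to show what runs once offline versus what runs every frame:

Code:
# Toy illustration of "train offline, run per frame" for a DLSS-style upscaler.
# The network, weights and inputs here are stand-ins, not the real thing.
import numpy as np

def train_offline():
    """Stand-in for the expensive offline step: in reality a neural network is
    trained for hours/days and the resulting weights ship with the driver/game."""
    return {"history_blend": 0.8}  # one made-up "weight"

def naive_upscale(frame, factor=2):
    """Nearest-neighbour upscale as a placeholder for the learned reconstruction."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

def infer_per_frame(weights, low_res_frame, motion_vectors, history):
    """Runs every frame at play time: low-res colour + motion vectors + previous
    output go in, one higher-resolution frame comes out."""
    upscaled = naive_upscale(low_res_frame)
    dy, dx = motion_vectors
    # Crudely "reproject" last frame's output along the motion vectors.
    reprojected = np.roll(history, shift=(dy, dx), axis=(0, 1))
    a = weights["history_blend"]
    return a * reprojected + (1.0 - a) * upscaled

weights = train_offline()          # happens once, offline
history = np.zeros((8, 8))         # previous "high-res" output
for _ in range(3):                 # the real-time part: once per rendered frame
    low_res = np.random.rand(4, 4) # stand-in for the rendered low-res frame
    history = infer_per_frame(weights, low_res, motion_vectors=(0, 1), history=history)
print(history.shape)               # (8, 8): low-res in, higher-res out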
 
Last edited:
No, not "small details".
In Death Stranding the entire screen is blurry when the mouse moves fast. It's not "just quickly zoom in on pretty much anything to see blur"; the entire screen is.

FP8axdV.png
Mr. expert is at it again.

Please let us know how many games you've played with DLSS on and what screen you're using.

:messenger_grinning_sweat:
 

BluRayHiDef

Banned
No, not "small details".
In Death Stranding the entire screen is blurry when the mouse moves fast. It's not "just quickly zoom in on pretty much anything to see blur"; the entire screen is.

FP8axdV.png

1. I've never played that game on PC; my experience is based on Control.

2. Death Stranding is meant to be played with a controller.
 
2. Death Stranding is meant to be played with a controller.
I don't know about that. I have close to 100 hours on PC and I played with M+K. It's a far superior experience, especially thanks to how easy it is to manage cargo. Saves a lot of time.

Guess what, I also played with DLSS on, despite the fact that I could achieve 4K 60 FPS natively. No problems and no complaints. Image quality was amazing, fast movement or not.
 
I don't know whether or not this has been posted yet, but supposed benchmarks of the RX 6800, the RTX 2080TI, and the RTX 3070 running Shadow of the Tomb Raider with ray tracing enabled have leaked and they indicate that the RX 6800 is faster than the other two cards when they're not using DLSS.

Considering that the RX 6800 is meant to compete with and correspond to the RTX 3070 (which is roughly equal to the RTX 2080Ti in performance), if these benchmarks are real, then they indicate that the RX 6800XT and RX 6900XT are proportionally faster than the RTX cards with which they compete and to which they correspond (i.e. the RTX 3080 and RTX 3090, respectively) in Shadow of the Tomb Raider under the aforementioned settings with ray tracing enabled when the RTX cards aren't using DLSS.

Do you guys think that this performance delta would be consistent across most or all games that include ray tracing, considering that Shadow of the Tomb Raider has only ray traced shadows?



TmsKcNY.png


BFJgiol.png


Source:


Yeah, the performance seems pretty impressive here. Assuming these leaked benchmarks are real, and assuming the performance would be roughly the same across a suite of titles, it is looking pretty good for RDNA2 ray tracing. Having said that, this looks somewhat in line with earlier leaks and rumours that RDNA2 would have better ray tracing performance than Turing but a little less than Ampere.

In addition to this, the absolute RT performance of Ampere is likely higher than RDNA2's, but the largest gap will likely show up in fully path-traced games or rendering workloads that can saturate the RT hardware of the GPU. We would also likely see this in non-gaming workloads or synthetic benchmarks.

In actual games, which 99% of the time use hybrid rendering, the gap between Turing and Ampere shrinks massively. This also means that the gap between RDNA2 and Ampere will be smaller in real-world gaming scenarios. I still predict Ampere cards will take the overall lead in RT versus their AMD equivalents; for example, I expect the 3080 to have a few more frames than the 6800 XT in most games, but the gap looks like it might be smaller than a lot of people thought.

Of course, wait for real benchmarks and all that to see how they all perform in the end. One thing I'd like to add is that the 6800 is not exactly a direct competitor to the 3070; it sort of is and isn't. What I mean is that across the board the 6800 is more powerful than the 3070 by a good chunk; the 6800 seems positioned against whatever further cut-down GA102 die Nvidia will release next year in response.

It occupies a weird spot in the market, in that it both does and doesn't compete with the 3070 at the same time. I think the idea was to make the 3070 look redundant while competing with whatever higher-tier card Nvidia releases as a response.

Anyway, what I'm trying to say is that the performance delta here between the 3070 and the 6800 won't necessarily be repeated at higher tiers, for example with the 6800 XT and 3080, or the 6900 XT and the 3090. In those I expect Nvidia to be ahead, but it might only be by a tiny margin. Of course, this is without DLSS or Super Resolution active, just to be clear.
 
Death Stranding is not that bad with DLSS. Sure, it ain't quite the "better than native" that folk on GAF like to parrot, though; you lose a little fine detail for practically no aliasing as the tradeoff, but it doesn't blur like that in the image. I'm not even the biggest DLSS fan; I play Death Stranding native instead and only really use it for Metro Exodus to get 4K/60fps with RTX on. But you should probably ease off the hate a little.

This is a strange thread tho. AMD have gone from trading blows with a 2070S to taking on the 3090 in a single generational jump. It is as impressive as it seems.
 

llien

Member
1. I've never played that game on PC; my experience is based on Control.
That is fine.
Let me ignore "it adds blur and wipes fine detail, as you'd expect from TAA derivative" then.

is not that bad
Of course, as TAA would not exist if it didn't add benefits, and DLSS 2.0 is one of the better TAA derivatives.
As for blur, the "tell us which is 1440p upscaled with DLSS and which is 4K" challenge was quickly resolved by checking which one was blurred, on page 4 of this thread.

The argument at hand is "but it's 4K... it's even better 4K"; I don't dispute that it adds some value (although your mileage may vary).
The pic above was to demonstrate that the effect is not as "local" as people claim it to be.

UeykcQW.jpg
 
That is fine.
Let me ignore "it adds blur and wipes fine detail, as you'd expect from TAA derivative" then.


Of course, as TAA would not exist if it didn't add benefits, and DLSS 2.0 is one of the better TAA derivatives.
As for blur, the "tell us which is 1440p upscaled with DLSS and which is 4K" challenge was quickly resolved by checking which one was blurred, on page 4 of this thread.

The argument at hand is "but it's 4K... it's even better 4K"; I don't dispute that it adds some value (although your mileage may vary).
The pic above was to demonstrate that the effect is not as "local" as people claim it to be.

UeykcQW.jpg
Guys, do not engage in discussion with this individual. He/she has no clue. Potential troll. 😅
 
I think I should show mercy and leave, before the butthurt Ampere users' thread gets any more pathetic... :messenger_beaming:
The only pathetic, butthurt person in this thread is the one that's shitting on something that he/she has never, ever used on a proper 4K 65+ inch screen and is using a 1080p monitor to do his/her "expert" analysis.
 
Wait until they find out AMD's "super resolution" is just another sharpening filter.
I hope that won't be the case. I hope whatever AMD's solution is, it will be on par with or better than DLSS.

DLSS is seriously amazing. I want this technology to be available to everyone and to improve over time.
 
What is more funny? People used to 'care' about the cost of a mainstream card. It used to be the $200-$250 range. Then it was a firm $250. Last gen it was okay to consider a $400 card mainstream. Now? $500. My God, how do you people look at yourselves in the mirror when both, BOTH, companies have convinced you to smile at a shit sandwich, eat it, and ask for more? I am the poor one in my circle of friends and I had to swallow a bitter pill to buy a 5600 XT for $300 because there was nothing, NOTHING, cheaper this generation. Before that it was an RX 470 4GB for $180, and before that hand-me-down 780 Ti and 680 Ti video cards.

In reality, most people get what they can afford and set principles on what is an acceptable cost. For me? Anything at or exceeding my car's monthly payment can screw right off. I like my car. I love my house. To buy something and say 'eh, I'll cover the bill next month' is being an idiot pretending to keep up with the Joneses. That is what I feel ray tracing and DLSS are: fancy feature sets to justify buying a new generation of video card. No game I play currently has either of those features. And at 28 games by the end of 2021, do I really plan to play any of them? Not really.

As for Nvidia vs. AMD performance and cock-measuring about disposable income: I've got two hetero-lifemate friends who both work as engineers and are single. Been friends for over 28 years. They have disposable income for days, enough to buy stock and bitcoin on a whim. When it comes to flexing on what their PC specs / car / fashion are, I quote: "The fuck do I care what other people think about what I bought? I wanted it, and I am happy with it."

When you flex with explanations on what your awesome history is over others in a conversation, you lost.

When you flex pictures of what you got and others don't, you lost.

When you got to reach for the heavens and pluck the ambrosia of an explanation to justify your place in the world, you lost.


So, here I am questioning my sanity for bothering to read 6 pages of this stuff. The cognitive dissonance and logical fallacies that started this thread went off to the races and never looked back. The dog I have in this game? The time I spent reading all the replies. Phew. Gonna go earn a paycheck now. PEACE.

We're all grown ass men here, most of us with a decent income. Paying 300 or 500 bucks for a thing you buy every 3 years or so doesn't really make a difference in the grand scheme of things. I cared about prices when I was a teenager.
 

Mister Wolf

Member
I hope that won't be the case. I hope whatever AMD's solution is, it will be on par with or better than DLSS.

DLSS is seriously amazing. I want this technology to be available to everyone and to improve over time.

They planned ahead for DLSS to be amazing when they designed those cards, adding hardware to the GPU intended for the feature. I doubt any purely software-based solution can or will match it.
 