
RX 7900XTX and 7900XT review thread

GreatnessRD

Member
I don't think that's the case. Given their public statements as of late, I think Nvidia is trying to creep prices up significantly across their whole range and have their revenue take a much larger share of PC gamers' disposable income.
Jensen & Co. saw how PC gamers eventually bent the knee and bought GPUs at ridiculous prices during the COVID lockdowns, and he's trying to hold on to that inflated value perception to see if it sticks indefinitely. We're also living in an era where FOMO gets a hold of many young people with too much of daddy's money, and honestly that's probably a big factor in all those 4090s being sold.
AMD is just trying to ride the same winds, but given their currently abysmal dGPU marketshare, I wonder if this move isn't going to end with their GPU division exiting the PC discrete card business altogether. They won't survive without sales.
Oh, they're for sure trying to up the price since people lost their goddamn minds during the Covid era. But I think Jensen and friends are forgetting that the stimmy checks are gone and prices are rising, so most folks aren't just spending wildly like they did the last two years. I do agree with you on the FOMO era of young folks. But these cards are still targeted at the enthusiast, because the Steam survey still shows us that lower mid-range/entry cards reign supreme. And I don't expect that to change much even if AMD and Nvidia are playing the high-price game down the product stack.
 
On the other side, we have sites like DF, especially Alex, who is a hardcore Nvidia fanboy and has admitted to only ever having used Nvidia cards in his personal rigs.
The kind of fanboy who would praise DLSS 1 when every other site and gamer said it was crap. One who said that DLSS 2.0 was better than native in Control, despite it having many issues with ghosting and blurring in motion.
The kind of fanboy who would praise RT even in games where it made no difference at all, except for the huge drop in frame rate.

I used to think I was the only one noticing this.
 

//DEVIL//

Member
I don't think that's the case. Given their public statements as of late, I think Nvidia is trying to creep prices up significantly across their whole range and have their revenue take a much larger share of PC gamers' disposable income.
Jensen & Co. saw how PC gamers eventually bent the knee and bought GPUs at ridiculous prices during the COVID lockdowns, and he's trying to hold on to that inflated value perception to see if it sticks indefinitely. We're also living in an era where FOMO gets a hold of many young people with too much of daddy's money, and honestly that's probably a big factor in all those 4090s being sold.
AMD is just trying to ride the same winds, but given their currently abysmal dGPU marketshare, I wonder if this move isn't going to end with their GPU division exiting the PC discrete card business altogether. They won't survive without sales.


If they showed a chart with only RT games to show "AMD bEiNg dEstRoYeD", then they'd also need to show a rasterization-only chart with the 4080 bEiNg dEstRoYeD, and then the exact same people would complain about that second graph existing at all.

Truth is the 7900 XT/XTX are competent enough in almost all RT-enabled games and score just some 17-20% below the 4080.
Save for these last-minute patches and demos, like the Cyberpunk 2077 Uber mode and Portal's let's-just-bounce-all-the-rays-a-gazillion-times RTX, all of which released just in time for the RTX 4090 reviews (and were undoubtedly part of the review "guidelines"... the ones that will put reviewers on a blacklist if they're not followed).
What are you talking about? The rigged chart with two Call of Duty and two Borderlands 3 entries IS a raster chart. And STILL the 7900 XTX didn't beat the 4080.
 

winjer

Gold Member

[Image: AMD Radeon 7900 chart from Igor's Lab]
 

Crayon

Member
I checked that hardware unboxed review expecting... something after the comments here. I've watched the channel plenty before so I should have known the test and conclusion would be more sensible than sensational. There's clearly a beef here. So they didn't jump on the rtx train right off the bat but gradually warmed up to it in line with the rate that it gets more relevant. Is that stepping out of line or something? Plenty of people feel that way. At least on paper, it seems to be the natural tack to take with a slow-moving technology.

The conclusion was that the 7900's cheaper price is not as enticing as the 5700's was. Partially because RT has been coming into its own, and partially because all of this stuff is too expensive for what you get and you probably shouldn't buy it anyway. Maybe you think they had to put their thumb on the scale to get to these conclusions, but when they seem to not depart at all from common sense, what did you want the conclusion to be?
 

Topher

Gold Member
I checked that hardware unboxed review expecting... something after the comments here. I've watched the channel plenty before so I should have known the test and conclusion would be more sensible than sensational. There's clearly a beef here. So they didn't jump on the rtx train right off the bat but gradually warmed up to it in line with the rate that it gets more relevant. Is that stepping out of line or something? Plenty of people feel that way. At least on paper, it seems to be the natural tack to take with a slow-moving technology.

The conclusion was that the 7900's cheaper price is not as enticing as the 5700's was. Partially because RT has been coming into its own, and partially because all of this stuff is too expensive for what you get and you probably shouldn't buy it anyway. Maybe you think they had to put their thumb on the scale to get to these conclusions, but when they seem to not depart at all from common sense, what did you want the conclusion to be?

I can't say I disagree with them for not warming up to ray tracing as quickly as others. I've felt RT, up to this point, was far too much of a drain on resources. At the same time, someone can acknowledge that and also say this technology is a work in progress or "slow-moving" as you say, but some have said they were completely dismissive and that's not a good take either. Whether or not HU did that, personally I don't know. As I said before, I think their comment about RT results being sponsored was off-base unless they can prove it which I don't think they can. Just seemed petty and frankly, they are giving ammo to those who claim HU is Nvidia haters.
 

Crayon

Member
I can't say I disagree with them for not warming up to ray tracing as quickly as others. I've felt RT, up to this point, was far too much of a drain on resources. At the same time, someone can acknowledge that and also say this technology is a work in progress or "slow-moving" as you say, but some have said they were completely dismissive and that's not a good take either. Whether or not HU did that, personally I don't know. As I said before, I think their comment about RT results being sponsored was off-base unless they can prove it which I don't think they can. Just seemed petty and frankly, they are giving ammo to those who claim HU is Nvidia haters.

I'm not surprised to learn of this dynamic. They are big and are going to be perceived as influential. Some major outlet has to be perceived as being more favorable to one team or the other, no matter by what slim margin. The perception is what's important, whether it's accurate or not. I'm inclined to believe it is, and that is significant. The extent to which they favor amd though?

...I think we've all seen coverage that tries to make a horse race out of a clearly inferior product but these guys are popular for doing the opposite. I only started watching them in the last couple years and I think I've only seen them range from lukewarm to scathing on amd reviews. They take a "fuck this" attitude through a whole review if any product regardless of manufacturer is judged to be just okay. I've also seen them compare amd and nvidia and come up with "fuck ALL this" more than once. That was always the overwhelming impression I got.

Tbh, this is the first time I've heard them accused of favoring AMD, though I did notice they are consistently cold on RT and could have inferred the beef, since I know that's the proxy war (how important RT is). Not knowing, I saw them, together with GN, as the anti-hype duo. Their slant against BS is constantly emphasized and they are very popular for it. Their slant towards a brand, though? I never noticed it after hours and hours of viewing. Not that it can't be there, just that it would have to be very slight for me not to pick up on it at all.

Anyhow if they are seen as the amd-leaning side of the spectrum to any degree and they are big enough to be perceived as influencing a narrative then it doesn't matter if they give a lot of ammo or a little. Less ammo would just be that much more precious. That's how tech drama works.
 

Buggy Loop

Member
I can't say I disagree with them for not warming up to ray tracing as quickly as others. I've felt RT, up to this point, was far too much of a drain on resources. At the same time, someone can acknowledge that and also say this technology is a work in progress or "slow-moving" as you say, but some have said they were completely dismissive and that's not a good take either. Whether or not HU did that, personally I don't know. As I said before, I think their comment about RT results being sponsored was off-base unless they can prove it which I don't think they can. Just seemed petty and frankly, they are giving ammo to those who claim HU is Nvidia haters.

Back in the Ampere reviews, Gamers Nexus also had the same opinion of RT as Hardware Unboxed, that it's not really a show-stopper. Steve clearly said this, but then he went on to bench light, medium, and heavy path tracing games, presented the data, and shut up about it. That's it. You're investing big money in cards; reviewers should present a wide range of possibilities. I think that 4K on PC is useless, and the Steam hardware survey seems to side with me on this. But who cares about my opinion? Please do show 4K benchmarks, because they can and should. The problem with Hardware Unboxed is how they went on to make false claims about Ampere RT core performance vs Turing (and oh boy, nowadays we see how fucking wrong he was). Just fucking present the data. Don't theorize shit you don't know about on a brand new architecture.

In fact, Steve from Hardware unboxed knows the business all too well. There's more clicks for AMD. That's it. Everyone roots for the underdog; everyone wants to see evil Nvidia fall flat on their face. I get it. Steve also has Intel+Nvidia in his main PC, so he knows what's up. I don't know why, but the whole AMD rumour mill on YouTube gets clicks, it gets business. They have marginal market share but are the loudest motherfuckers in the tech industry. Just look at how /r/AMD would make you believe they're the majority with how loud they are, but it's... just an impression.

Anyway, enough about HWUB

I was really curious to average the BabelTech review since it has a nice list of 41 games (I removed Elden Ring as it seems it didn't get benched?), covering Vulkan, modern DX12, older DX12 and DX11, with lots of low-RT to medium-RT titles. No path tracing games, but OK.


I did the average for 4K only, I'm lazy. 7900 XTX as baseline (100%).

So, games with RT only ALL APIs - 20 games
4090 168.7%
4080 123.7%

Dirt 5, Far Cry 6 and Forza Horizon 5 are in the 91-93% range for the 4080, while it goes crazy with Crysis Remastered at 236.3%, which I don't get; I thought they had a software RT solution there?

Rasterization only ALL APIs - 21 games
4090 127.1%
4080 93.7%

Rasterization only Vulkan - 5 games
4090 138.5%
4080 98.2%

Rasterization only DX12 2021-22 - 3 games
4090 126.7%
4080 92.4%

Rasterization only DX12 2018-20 - 7 games
4090 127.2%
4080 95.5%

Rasterization only DX11 - 6 games
4090 135.2%
4080 101.7%

I really love that site as it's my go-to for VR benchmarks, but they really need some help with the data presentation. What happens when you bench a dozen games is that it's very easy to skew one way or the other. I could easily pick games that would favor AMD massively, just like I could for Nvidia. Here it's 41 games across all APIs, and if a game has RT, they enable it, light or heavy.
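To make the averaging method concrete, here's a minimal sketch of the calculation, using made-up FPS numbers rather than BabelTech's actual data; the geometric mean is included as well, since it's the usual way to aggregate performance ratios and is less sensitive to a single outlier like that Crysis Remastered result.

```python
# Sketch: average per-game 4K performance relative to a baseline card.
# The FPS numbers below are placeholders, not BabelTech's measurements.
from math import prod

results_4k = {
    # game:   (7900 XTX, 4080, 4090) hypothetical FPS
    "Game A": (100.0, 95.0, 130.0),
    "Game B": (60.0, 75.0, 100.0),
    "Game C": (45.0, 55.0, 80.0),
}

def relative_perf(card_index: int) -> list[float]:
    """Per-game performance of one card, with the 7900 XTX as the 100% baseline."""
    return [fps[card_index] / fps[0] * 100 for fps in results_4k.values()]

for name, idx in (("4080", 1), ("4090", 2)):
    ratios = relative_perf(idx)
    arith = sum(ratios) / len(ratios)
    geo = prod(ratios) ** (1 / len(ratios))
    print(f"{name}: arithmetic mean {arith:.1f}%, geometric mean {geo:.1f}%")
```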
 

ToTTenTranz

Banned
I checked that hardware unboxed review expecting... something after the comments here. I've watched the channel plenty before so I should have known the test and conclusion would be more sensible than sensational. There's clearly a beef here. So they didn't jump on the rtx train right off the bat but gradually warmed up to it in line with the rate that it gets more relevant. Is that stepping out of line or something? Plenty of people feel that way.

I think it's really just a handful of very prolific AMD-haters / Nvidia-lovers in this thread who are creating that perception.
Ever since HWU got blacklisted by Nvidia for not following their "RT is the most important thing" script back in December 2020 and got Nvidia massively shat on for doing so, many of Nvidia's fans (or astroturfers) made it their goal to shit on them at every opportunity.

HWU actually puts a lot more effort into testing RT than, for example, LTT or Jay2Cents. It looks like the more audience (and therefore editorial independence) a publication has, the less effort they put into following GPU makers' review guidelines, and for GPUs that just means they test RT in games a lot less. Linus specifically is quite vocal about RT in most games not being worth the performance deficit.

But, because HWU are rather small they're easy targets for forum hate and astroturfing.

And then there's this:






In fact, Steve from Hardware unboxed knows the business all too well. There's more clicks for AMD.
"There's more clicks for AMD" when they have 10% of the discrete GPU marketshare. Yes that makes total sense, people love being told they bought the GPU from the "wrong" IHV.

https://cdn.mos.cms.futurecdn.net/ga4ysdRtWufiCuUx3tFmHD-1200-80.png.webp
 

Tams

Gold Member
I'm no expert in this area, and this is only four cards out of 48 reported cases. So I don't know if this really is a serious issue, or if he's going above his station and exaggerating. Roman's usually pretty solid, though.

 

Buggy Loop

Member
"There's more clicks for AMD" when they have 10% of the discrete GPU marketshare. Yes that makes total sense, people love being told they bought the GPU from the "wrong" IHV.

https://cdn.mos.cms.futurecdn.net/ga4ysdRtWufiCuUx3tFmHD-1200-80.png.webp

Same reason that r/AMD has more members than r/Nvidia, you can look it up. Same reason we had a poll here on NeoGAF for next-gen cards and AMD was at >50% intention to buy over Nvidia, which, I mean, yeah, AMD wishes it could expect market share based on that...

It's not related to market share. Even with Nvidia at 85% of the market, a huge chunk of those buyers will never seek out an Nvidia sub or a forum to discuss their cards: it works, it gets the job done, shit, they might not even know they have an Nvidia card, what's a GPU? Can't expect too much from the mainstream. It's probably a lot of prebuilts/laptops skewing the market share rather than pure hardcore gamers. Meanwhile AMD's market share is so small, and has almost no presence in business, that it can basically fit entirely in a subreddit of hardcore AMD fans with tattoos of Lisa Su.

All this « welcome to team red » (come on, there are no teams, that's cringe as fuck, as cringe as a billion-dollar company telling their employees they're part of a family), the marketing team attacking Nvidia on Twitter while Nvidia doesn't even give them the time of day, the half-dozen YouTube channels that exist solely for AMD rumours (can you even find that for Nvidia? I can't think of one). Like I said, the AMD subreddit is loud and has even more subscribers than Nvidia's, which does not represent reality. Even some Nvidia owners buy the cards reluctantly because of CUDA or something but would prefer AMD to have the « Nvidia killer », that yearly hopium rumour mill that creates excitement and clicks. Just look at how this place had a huge boner at the thought that Nvidia fucked up prices and had melting cables, a fire risk for your home, while AMD had an Nvidia killer coming up, at spitting distance of the 4090 for $600 less. Evil Nvidia to be defeated! Rejoice! Finally competition!

There’s business around that, otherwise MLID and many others would not exist. And you know what? That rumour mill is actually hurting AMD.

Reminds me a lot of QAnon, to be honest. It's a tech cult. I have nothing against AMD the company; I have a lot of negative things to say about the community surrounding it, though. I was ATI/AMD from the mid '90s to like 2014. I tell you this, something changed, it didn't used to be like that. There were intelligent discussions on Rage3D and other forums about AMD architectures and how they could find a place against Nvidia, nothing like the tech YouTubers who throw a bunch of shit rumours at the wall to see what sticks.
 

ChorizoPicozo

Gold Member
I think normal people just want competition in this space, because it's fucking ridiculous.

So Intel failing to deliver something, AMD struggling to gain market share, plus Nvidia not messing around, kind of sucks.
 

Dr.D00p

Member
I think normal people just want competition in this space, because it's fucking ridiculous.

So Intel failing to deliver something, AMD struggling to gain market share, plus Nvidia not messing around, kind of sucks.

The $800 4070 Ti will at least force AMD into dropping the price of the 7900 XT pretty quickly, to $700-$750.
 

Irobot82

Member
Same reason that r/AMD has more members than r/Nvidia, you can look it up. Same reason we had a poll here on NeoGAF for next-gen cards and AMD was at >50% intention to buy over Nvidia, which, I mean, yeah, AMD wishes it could expect market share based on that...

It's not related to market share. Even with Nvidia at 85% of the market, a huge chunk of those buyers will never seek out an Nvidia sub or a forum to discuss their cards: it works, it gets the job done, shit, they might not even know they have an Nvidia card, what's a GPU? Can't expect too much from the mainstream. It's probably a lot of prebuilts/laptops skewing the market share rather than pure hardcore gamers. Meanwhile AMD's market share is so small, and has almost no presence in business, that it can basically fit entirely in a subreddit of hardcore AMD fans with tattoos of Lisa Su.

All this « welcome to team red » (come on, there are no teams, that's cringe as fuck, as cringe as a billion-dollar company telling their employees they're part of a family), the marketing team attacking Nvidia on Twitter while Nvidia doesn't even give them the time of day, the half-dozen YouTube channels that exist solely for AMD rumours (can you even find that for Nvidia? I can't think of one). Like I said, the AMD subreddit is loud and has even more subscribers than Nvidia's, which does not represent reality. Even some Nvidia owners buy the cards reluctantly because of CUDA or something but would prefer AMD to have the « Nvidia killer », that yearly hopium rumour mill that creates excitement and clicks. Just look at how this place had a huge boner at the thought that Nvidia fucked up prices and had melting cables, a fire risk for your home, while AMD had an Nvidia killer coming up, at spitting distance of the 4090 for $600 less. Evil Nvidia to be defeated! Rejoice! Finally competition!

There’s business around that, otherwise MLID and many others would not exist. And you know what? That rumour mill is actually hurting AMD.

Reminds me a lot of QAnon, to be honest. It's a tech cult. I have nothing against AMD the company; I have a lot of negative things to say about the community surrounding it, though. I was ATI/AMD from the mid '90s to like 2014. I tell you this, something changed, it didn't used to be like that. There were intelligent discussions on Rage3D and other forums about AMD architectures and how they could find a place against Nvidia, nothing like the tech YouTubers who throw a bunch of shit rumours at the wall to see what sticks.
Wouldn't r/AMD include both CPU fans and GPU fans? Seems a little misleading.
 

rnlval

Member
It isn't when you factor out Windows and DirectX and instead use Proton/Linux in many cases.

Just like AMD CPUs were second-class citizens to Intel on Windows over the years, and only now, with immense levels of compute and massive CPU caches, is it becoming harder for the Wintel MO to play out as normal. When Microsoft redeveloped DirectX for the original "Direct-X-box", Nvidia provided not only the GPU but also Nvidia Cg, which used HLSL as a unified shader language for both Cg and DirectX IIRC, and it has been that way ever since. Nvidia's hand in DirectX makes them a first-class citizen for the API - which is inferior as a hardware-agnostic API to OpenGL, Mantle and Vulkan - whereas AMD is effectively a second-class citizen, and the Windows vs Linux benchmark differences support that IMHO.

Nvidia have something like 80% of the Windows gaming market, which has 95% of the PC gaming market, so blaming AMD for the rigged game where they are always playing driver catch-up hardly seems fair IMO. I haven't bought an AMD card - or ever bought an AMD CPU for myself - since they were ATI, so I'm not an AMD fanboy saying this, but I do recognise that benchmarking on Windows with DirectX games isn't a reflection of the hardware, or even of the efforts AMD make with their drivers most of the time, and even Intel alluded to the additional performance their Arc can get by comparison using Vulkan-based benchmarks.

Look at how Valve are getting results way above AMD APUs on Windows with the Steam Deck APU, and look at benchmarks of modern games like Callisto Protocol - which was optimised for the PS5's Vulkan-style API - to see a better comparison of the hardware, even if it doesn't change the reality that it's a parity-or-likely-worse product situation compared to buying Nvidia to use with Windows for gaming.
Intel Alder Lake's Windows 11 multithreading scheduler also benefits AMD's 8-core CCD cluster when a Ryzen SKU has two CCDs, i.e. Alder Lake's thread-affinity bias towards the first 8 P-cores also works to keep a game's threads within Zen 4's first CCD cluster.

For Ryzen Zen 4 SKUs with two CCDs i.e. Ryzen 9 7900X and Ryzen 9 7950X, the 1st CCD cluster has superior silicon quality when compared to the 2nd CCD cluster.

DX10.0 was for the GeForce 8000 series. Radeon HD 2900X's MSAA functions are partly shader-emulated, hence this is the Radeon VLIW team's design fault.

DX11 was for Radeon HD 5000 series.

DX12 Feature Level 12_0 was for GCN Bonaire, Tonga, and Hawaii. The problem with AMD GCN prior to Vega is that the L2 cache is not connected to the RBEs (ROPs), hence AMD's push for the compute/TMU path (the Async Compute PR), which is connected to the L2 cache, while NVIDIA Maxwell's pixel shader/ROP and compute shader/TMU paths are both connected to the L2 cache, hence Maxwell GPUs have more consistent performance. ROPs not connected to the L2 cache is the Radeon team's design fault.


DX12 Feature Level 12_2 (aka DX12 Ultimate) was for NVIDIA RTX. RDNA 2's RT cores are missing traversal hardware acceleration, hence RDNA 2 accelerates only 2 of the 3 RT stages in hardware. Missing traversal hardware acceleration is the Radeon team's design fault.
 

PaintTinJr

Member
Intel Alder Lake's Windows 11 multithreading scheduler also benefits AMD's 8-core CCD cluster when a Ryzen SKU has two CCDs, i.e. Alder Lake's thread-affinity bias towards the first 8 P-cores also works to keep a game's threads within Zen 4's first CCD cluster.

For Ryzen Zen 4 SKUs with two CCDs i.e. Ryzen 9 7900X and Ryzen 9 7950X, the 1st CCD cluster has superior silicon quality when compared to the 2nd CCD cluster.

DX10.0 was for the GeForce 8000 series. Radeon HD 2900X's MSAA functions are partly shader-emulated, hence this is the Radeon VLIW team's design fault.

DX11 was for Radeon HD 5000 series.

DX12 Feature Level 12_0 was for GCN Bonaire, Tonga, and Hawaii. The problem with AMD GCN prior to Vega is that the L2 cache is not connected to the RBEs (ROPs), hence AMD's push for the compute/TMU path (the Async Compute PR), which is connected to the L2 cache, while NVIDIA Maxwell's pixel shader/ROP and compute shader/TMU paths are both connected to the L2 cache, hence Maxwell GPUs have more consistent performance. ROPs not connected to the L2 cache is the Radeon team's design fault.


DX12 Feature Level 12_2 (aka DX12 Ultimate) was for NVIDIA RTX. RDNA 2's RT cores are missing traversal hardware acceleration, hence RDNA 2 accelerates only 2 of the 3 RT stages in hardware. Missing traversal hardware acceleration is the Radeon team's design fault.
Despite the name of DirectX being born out of "direct" access to the hardware, the performance is not "direct", and worse, it goes non-transparently through a HAL on Windows.

You keep saying "design fault", as though the inferior graphics API does the job correctly - despite its design only serving MS's and Nvidia's needs when push comes to shove - and as though the people who have influenced and guided the leading OS-agnostic GPU APIs are wrong, and AMD's hardware was wrong to align with the larger body of stakeholders and the knowledge they bring.

Long term, AMD's design situation is superior to Nvidia's and Intel's, because on those two GPU makers' solutions the RT acceleration is locked to "effectively" fixed-path thinking in the RT problem domain, and when the RT feature isn't in use it can readily end up as unutilised silicon.
 

ToTTenTranz

Banned







At the end of the day, we don't think you should go out and purchase either of these GPUs at their $1,000 - $1,200 MSRPs, but it does seem like many gamers are interested and willing to part with the less expensive 7900 XTX in this particular price segment, which seems reasonable but we'd be a lot happier if we see a price drop in the near future.

My thoughts exactly.





Same reason that r/AMD has more members than r/Nvidia, you can look it up.
AMD makes CPUs, APUs, motherboard chipsets and discrete GPUs; its subreddit has 1.5M subscribers.
Nvidia makes discrete GPUs; its subreddit has 1.3M subscribers.

How do you conclude that AMD GPUs are more popular on Reddit? AMD's CPUs are a lot more popular than their GPUs, given they hold >60% of that marketshare. Most of those subscribers are there for the CPU and platform discussions.


Same reason we had a poll here on neogaf for next gen cards and AMD was >50% intention to buy over Nvidia
Was that poll made right after Nvidia announced their RTX40 prices?


Rejoice! Finally competition!
And this is supposed to be bad?!
 

winjer

Gold Member

 

DonkeyPunchJr

World’s Biggest Weeb

According to some people on r/AMD, AMD support is offering refunds or exchanges if you’re affected.

Sucks. I’ve heard of some cards that get much worse temps in upright configuration (e.g. video output facing up or down) but never one that had problems in standard horizontal orientation.
 

rnlval

Member
1. Despite the name of DirectX being born out of "direct" access to the hardware, the performance is not "direct", and worse, it goes non-transparently through a HAL on Windows.

2. You keep saying "design fault", as though the inferior graphics API does the job correctly - despite its design only serving MS's and Nvidia's needs when push comes to shove - and as though the people who have influenced and guided the leading OS-agnostic GPU APIs are wrong, and AMD's hardware was wrong to align with the larger body of stakeholders and the knowledge they bring.

3. Long term, AMD's design situation is superior to Nvidia's and Intel's, because on those two GPU makers' solutions the RT acceleration is locked to "effectively" fixed-path thinking in the RT problem domain, and when the RT feature isn't in use it can readily end up as unutilised silicon.
1. I lived through Amiga's "hit the metal" games and the difficulty of maintaining backward software compatibility through hardware updates.

You have missed AMD's Shader Intrinsic Functions, which bypass the abstracted API. This is applicable to DirectX 11, DirectX 12, and Vulkan.



NVIDIA has its Intrinsic Function access methods e.g. old NVAPI version 343 from October 2014 and earlier versions have support for intrinsic (https://developer.nvidia.com/unlocking-gpu-intrinsics-hlsl).

Both PC and Xbox are not limited to PS4's 18 CU/PS4 Pro's 36 CU(2X scale from PS4 18CU)/PS5(2X scale from PS4 18 CU) backward compatibility topology requirements.


2. Regardless of API, close-to-metal, and hit-the-metal methods, the fact remains that GCN designs before Vega have their RBEs (ROPs) not connected to the L2 cache.



The Radeon team was behind on micro-tiled cache render methods.

No amount of hit-the-metal programming will fix Radeon HD 2900X's missing full MSAA hardware acceleration! A shader workaround will reduce the available shader resource!

NVIDIA Maxwell has superior delta color compression (DCC) while AMD's Hawaii GCN is missing DCC and Tonga/Vega's DCC implementation was inferior.

For RDNA, AMD made a big PR spiel about "DCC everywhere".

[Image: AMD RDNA "DCC everywhere" slide]

The Radeon team is late again.

My second point is about external memory bandwidth conservation hardware design.


3. AMD's Navi 31 has the full RT core acceleration, including RT traversal acceleration. For RDNA 3, welcome to NVIDIA's Ampere generation of accelerated RT.

"RDNA 2" RT implementation has a blocking behavior against other concurrent shader programs.

It took a while for AMD's Radeon team to fix their pixel shader/ROPS path performance. RDNA 2 and RDNA 3 rasterization are competitive.
 

rnlval

Member
According to some people on r/AMD, AMD support is offering refunds or exchanges if you’re affected.

Sucks. I’ve heard of some cards that get much worse temps in upright configuration (e.g. video output facing up or down) but never one that had problems in standard horizontal orientation.
The reference 7900 XTX's cooling issue is not a problem for non-reference SKUs. On the positive side, AMD is not competing against their AIB partners' non-reference SKUs.

My last discrete Radeon SKU was MSI's Gaming X Trio OC R9 290X, which runs at nearly the reference R9 390X's clock speeds.
 

rnlval

Member

FYI, Wave64 is a legacy GCN backward-compatibility mode.

NVIDIA CUDA has 32-wide warps, which are similar to RDNA's Wave32, i.e. both hardware designs support Shader Model 6's wavefront size of 32. NVIDIA CUDA hardware doesn't support a wavefront size of 64.

RDNA 3 CUs' asymmetric behavior is also applicable to NVIDIA's Ampere and Ada SMs, i.e. about half of the stream processors within the SM unit can execute integer datatypes.

RDNA's Wave32 and CUDA's warp32 are suited to Shader Model 6.0's wavefront size of 32.

AMD needs to speed up the migration of the legacy Wave64 GPGPU software stack toward Wave32.
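As a rough illustration of what the wavefront width means in practice (a toy sketch, not vendor documentation): a dispatch is carved into wavefronts of 32 or 64 lanes, and a partially filled final wavefront leaves lanes idle.

```python
# Toy illustration: how many wavefronts a dispatch needs at wave32 vs wave64,
# and how many lanes sit idle in the last, partially filled wavefront.
from math import ceil

def wavefront_usage(work_items: int, wave_size: int) -> tuple[int, int]:
    waves = ceil(work_items / wave_size)
    idle_lanes = waves * wave_size - work_items
    return waves, idle_lanes

for n in (1000, 1024, 1500):
    for wave in (32, 64):
        waves, idle = wavefront_usage(n, wave)
        print(f"{n} items @ wave{wave}: {waves} wavefronts, {idle} idle lanes")
```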
 

Leonidas

Member

I might consider one if it drops below $800.

They just released the RGB tool for reference cards... which also updates the firmware of the graphics card itself.
It's a shame the XT misses out on the RGB, way too much was cut from the XT for only being $100 cheaper.
 

PaintTinJr

Member
1. I lived through Amiga's "hit the metal" games and the difficulty of maintaining backward software compatibility through hardware updates.

You have missed AMD's Shader Intrinsic Functions, which bypass the abstracted API. This is applicable to DirectX 11, DirectX 12, and Vulkan.



NVIDIA has its Intrinsic Function access methods e.g. old NVAPI version 343 from October 2014 and earlier versions have support for intrinsic (https://developer.nvidia.com/unlocking-gpu-intrinsics-hlsl).

Both PC and Xbox are not limited to PS4's 18 CU/PS4 Pro's 36 CU(2X scale from PS4 18CU)/PS5(2X scale from PS4 18 CU) backward compatibility topology requirements.


2. Regardless of API, close-to-metal, and hit-the-metal methods, the fact remains that GCN designs before Vega have their RBEs (ROPs) not connected to the L2 cache.



The Radeon team was behind on micro-tiled cache render methods.

No amount of hit-the-metal programming will fix Radeon HD 2900X's missing full MSAA hardware acceleration! A shader workaround will reduce the available shader resource!

NVIDIA Maxwell has superior delta color compression (DCC) while AMD's Hawaii GCN is missing DCC and Tonga/Vega's DCC implementation was inferior.

For RDNA, AMD made a big PR spiel about "DCC everywhere".

[Image: AMD RDNA "DCC everywhere" slide]

The Radeon team is late again.

My second point is about external memory bandwidth conservation hardware design.

Are you saying that Intrinsic functions are a security risk by allowing the graphics driver to access hardware directly without being at the mercy of an opaque HAL in the Windows kernel - which was the selling point of Vista and has been the solution since?

Why would anyone waste that amount of silicon on hardware MSAA since the advent of shaders?

Aliasing in 99.9999% of rendering situations is not even the second biggest signal-to-noise issue. IIRC, in the days of the X360 that silicon was intentionally misused as a technique to gain floating-point precision in some calculations beyond the default precision of Xenos. Suggesting AMD are doing it wrong by treating dedicated MSAA hardware as a lower priority than saving that silicon for generalised GPU compute for emerging techniques - ones that can operate across the image, but also through frames with motion vectors, and do inference like Sony's Cognitive TVs, which in all likelihood use AMD silicon - suggests your idea of the "best" choices for silicon use isn't well considered for graphics as they are today.

3. AMD's Navi 31 has the full RT core acceleration, including RT traversal acceleration. For RDNA 3, welcome to NVIDIA's Ampere generation of accelerated RT.

"RDNA 2" RT implementation has a blocking behavior against other concurrent shader programs.

It took a while for AMD's Radeon team to fix their pixel shader/ROPS path performance. RDNA 2 and RDNA 3 rasterization are competitive.
Nvidia's RT doesn't support more than async-lite, so it is limited going forward and that is its limiting factor for UE5 workloads. And you are making statements about RDNA 2 RT blocking behaviour that isn't shared by at least one of the RDNA 2 consoles (PS5), so your general point about where AMD were at with RT in RDNA 2 is factually untrue. The same can be said about the ROPs: the RDNA 2 PS5 is now 2-4 years old in R&D terms, yet the PS5 GPU has a pixel-rate/texture rate performance between RTX 3070-3080, and at launch the PS5 bested the then RTX 2080 in the complex Slug text rendering benchmark.

https://www.notebookcheck.net/PlayS...previous-gen-consoles.508462.0.html#7192658-2

Nvidia's design's inability to get a 2x increase in RPM (heavily used in UE5), and to do so with full-fat async (also heavily used in UE5), is a trade-off they've made to run old last-gen games with RT faster. IMHO the AMD design is more flexible, more future-proof, a better use of silicon area, and still close enough to the best Nvidia offers even when disadvantaged by MS/Nvidia's DirectX on PC through a HAL, while still being cheaper - even more so if you consider the silicon offering of the new consoles like the PS5, which is already sold at a profit for less than the price of an RTX 3060 Ti card alone.
 

rnlval

Member
1. Are you saying that Intrinsic functions are a security risk by allowing the graphics driver to access hardware directly without being at the mercy of an opaque HAL in the Windows kernel - which was the selling point of Vista and has been the solution since?

2. Why would anyone waste that amount of silicon on hardware MSAA since the advent of shaders?

Aliasing in 99.9999% of rendering situations is not even the second biggest signal-to-noise issue. IIRC, in the days of the X360 that silicon was intentionally misused as a technique to gain floating-point precision in some calculations beyond the default precision of Xenos. Suggesting AMD are doing it wrong by treating dedicated MSAA hardware as a lower priority than saving that silicon for generalised GPU compute for emerging techniques - ones that can operate across the image, but also through frames with motion vectors, and do inference like Sony's Cognitive TVs, which in all likelihood use AMD silicon - suggests your idea of the "best" choices for silicon use isn't well considered for graphics as they are today.


3. Nvidia's RT doesn't support more than async-lite, so it is limited going forward and that is its limiting factor for UE5 workloads,

4. you are making statements about RDNA 2 RT blocking behaviour that isn't shared by at least one of the RDNA 2 consoles (PS5), so your general point about where AMD were at with RT in RDNA 2 is factually untrue.

5. And the same can be said about the ROPs: the RDNA 2 PS5 is now 2-4 years old in R&D terms, yet the PS5 GPU has a pixel-rate/texture rate performance between RTX 3070-3080, and at launch the PS5 bested the then RTX 2080 in the complex Slug text rendering benchmark.

https://www.notebookcheck.net/PlayS...previous-gen-consoles.508462.0.html#7192658-2

Nvidia's design's inability to get a 2x increase in RPM (heavily used in UE5), and to do so with full-fat async (also heavily used in UE5), is a trade-off they've made to run old last-gen games with RT faster. IMHO the AMD design is more flexible, more future-proof, a better use of silicon area, and still close enough to the best Nvidia offers even when disadvantaged by MS/Nvidia's DirectX on PC through a HAL, while still being cheaper - even more so if you consider the silicon offering of the new consoles like the PS5, which is already sold at a profit for less than the price of an RTX 3060 Ti card alone.

1. AMD's Intrinsic functions allow native GCN userland stream instructions to be supplied by game developers. Both RDNA and GCN support Wave64.

Half of NAVI 31's stream processors support the Wave64 instruction set. RDNA supports both the Wave32 and Wave64 instruction sets.


2. AMD's GCN and RDNA RBE (ROPS) have MSAA hardware.

Using Shaders for MSAA will reduce the available shader resource for other workloads.


Radeon HD 38x0 SKUs have restored the full MSAA hardware.

Radeon HD VLIW5 and VLIW4-based GPU designs have been replaced by SIMD-based GCN and RDNA. NVIDIA's VLIW GPU design only existed for the GeForce FX family.

Xbox 360's Xenos has a SIMD-based GPU design, not Radeon HD 2900/30x0/48x0/5xx0/6xxx VLIW GPU designs.


3. https://docs.unrealengine.com/5.0/en-US/ray-tracing-performance-guide-in-unreal-engine/

Unreal Engine 5 has support for hardware-accelerated BVH-based raytracing.



The NVIDIA Branch of Unreal Engine 5 (NvRTX 5.0) is now available. This feature-rich branch is fully compatible with Unreal Engine 5 and has all of the latest developments in the world of ray tracing.


4. PS5's RDNA has RDNA 2's RT core implementation with RDNA 1 primitive shaders, and is missing RDNA 2's variable-rate shading hardware. PS5's kitbashed RDNA 2 is inferior to NVIDIA's Ampere RT core implementation.

The reason for the blocking behavior is that the RT traversal workload is done on the shader cores instead of being shifted onto separate traversal hardware; using shaders for RT traversal reduces the shader resources available for other workloads.

At a given generation, NVIDIA RTX Ampere (GA102 vs NAVI 21) and Ada Lovelace (AD102 vs NAVI 31) handle AMD's DXR Tier 1.1 defined spec with NVIDIA winning performance results. LOL


RDNA 3 has hardware-accelerated RT traversal.

The RTX 4090's AD102 has 512 TMUs (load-store units for compute shaders) and 128 RT cores; the full AD102 has 576 TMUs and 144 RT cores. NVIDIA has yet to release the full-AD102 RTX 4090 Ti and a slightly cut-down / lower-cost AD102 RTX 4080 Ti SKU.

NAVI 31 has 384 TMUs (load-store units for compute shaders) and 96 RT cores (NAVI 31 has 96 CUs). What happened to the 128 CU / 128 RT core version? And on a per-CU basis, why didn't AMD scale TMUs by 2X when shader cores were scaled by 2X?


5. " PS5 GPU has a pixel-rate/texture rate performance between RTX 3070-3080" is FALSE.

RTX 2080 has 64 ROPS with 1897 Mhz average and 448 GB/s https://www.techpowerup.com/review/nvidia-geforce-rtx-2080-founders-edition/37.html RTX Turing competed against RDNA 1 generation.

PS5's 64 ROPS with about 2230 Mhz and 448 GB/s memory bandwidth. RDNA 2 competed against the RTX Ampere generation.

RTX 3080 FE (cutdown GA102) has 96 ROPS with 1931 Mhz that is backed by 760.3 GB/s memory bandwidth. https://www.techpowerup.com/review/nvidia-geforce-rtx-3080-founders-edition/32.html
RTX 3070 Ti FE (full GA104) has 96 ROPS with 1861 Mhz that is backed by 608.3 GB/s memory bandwidth. https://www.techpowerup.com/review/nvidia-geforce-rtx-3070-ti-founders-edition/35.html
RTX 3070 FE (cutdown GA104) has 96 ROPS with 1882 Mhz that is backed by 448 GB/s memory bandwidth. https://www.techpowerup.com/review/nvidia-geforce-rtx-3070-founders-edition/32.html

RTX 3070 Ti (MSI Suprim X, full GA104 out-of-the-box OC) has 96 ROPS with 1929 Mhz that is backed by 608.3 GB/s memory bandwidth. https://www.techpowerup.com/review/msi-geforce-rtx-3070-ti-suprim-x/35.html

Prove PS5 has beaten RTX 3070 (GA104), RTX 3070 Ti (full GA104), and RTX 3080 (cutdown GA102).
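Taking the ROP counts and clocks quoted above at face value, a quick back-of-the-envelope comparison of theoretical peak pixel fill rate looks like this (a sketch only; it ignores real-world throughput, which is why the memory bandwidth column matters):

```python
# Theoretical peak pixel fill rate = ROPs x clock, using the figures quoted above.
gpus = {
    # name:        (ROPs, clock GHz, bandwidth GB/s)
    "RTX 2080":    (64, 1.897, 448.0),
    "PS5":         (64, 2.230, 448.0),
    "RTX 3070 FE": (96, 1.882, 448.0),
    "RTX 3070 Ti": (96, 1.861, 608.3),
    "RTX 3080 FE": (96, 1.931, 760.3),
}

for name, (rops, clock, bw) in gpus.items():
    gpix = rops * clock  # Gpixels/s
    print(f"{name:12s} {gpix:6.1f} Gpix/s, {bw:6.1f} GB/s")
```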



[Chart: Doom Eternal, 3840x2160 benchmark]


RX 6800 is superior when compared to PS5's GPU configuration.

https://www.eurogamer.net/digitalfoundry-2021-doom-eternal-next-gen-patch-tested

Next up, there's the 120Hz mode, which works best on HDMI 2.1 displays, allowing for the game's full resolution to successfully resolve at full frame-rate. This looks to offer something akin to the last-gen Doom Eternal experience at twice the performance level. Xbox Series X operates at a dynamic 1800p, while PlayStation 5 tops out at 1584p - and it is visibly blurrier.

AMD will need a 4 nm revision for RDNA 3.5 ASAP since Mobile Ryzen 7040 APU is AMD's 1st 4 nm SKU on TSMC's 4 nm process tech.


NVIDIA has supported full Async Compute since Volta and Turing.
 

GreatnessRD

Member

I might consider one if it drops below $800.


It's a shame the XT misses out on the RGB, way too much was cut from the XT for only being $100 cheaper.
AMD really ain't shit for that. A $900 graphics card and it doesn't have ARGB? They real lame for that. Absolutely no excuse for that.
 

MikeM

Member
Getting some pretty high memory temps on my 7900xt. Yesterday they hit 102 degrees. Where does it list the throttle limits for this? Can't find it anywhere online besides the generic 110 degrees line.
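For what it's worth, on Linux the amdgpu driver exposes the edge/junction/memory sensors through lm-sensors, so you can watch the memory temperature directly. A minimal sketch (assumes the `sensors` utility from lm-sensors is installed; the 110-degree figure people quote is the memory junction throttle point, but check your own card's documentation):

```python
# Print the amdgpu temperature sensors (edge / junction / mem) via lm-sensors.
import json
import subprocess

raw = subprocess.run(["sensors", "-j"], capture_output=True, text=True).stdout
for chip, readings in json.loads(raw).items():
    if not chip.startswith("amdgpu"):
        continue
    for label, values in readings.items():
        if not isinstance(values, dict):
            continue  # skip the "Adapter" string entry
        for key, value in values.items():
            if key.startswith("temp") and key.endswith("_input"):
                print(f"{chip} {label}: {value} °C")
```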
 

thuGG_pl

Member
I was willing to give money to AMD this time around, but they fucked it up.
I was hoping that it would at least be faster than the 4080, but it's about equal in raster. Also, it gets spanked in RT, VR performance is horrible, and the efficiency is meh.
I'm starting to consider a 4090 :O
 

Thebonehead

Banned
I was willing to give money to AMD this time around, but they fucked it up.
I was hoping that it would at least be faster than the 4080, but it's about equal in raster. Also, it gets spanked in RT, VR performance is horrible, and the efficiency is meh.
I'm starting to consider a 4090 :O
[Star Wars "hate" GIF]
 

Kenpachii

Member
I think it's really just a handful of very prolific AMD-haters / Nvidia-lovers in this thread who are creating that perception.
Ever since HWU got blacklisted by Nvidia for not following their "RT is the most important thing" script back in December 2020 and got Nvidia massively shat on for doing so, many of Nvidia's fans (or astroturfers) made it their goal to shit on them at every opportunity.

HWU actually puts a lot more effort into testing RT than, for example, LTT or Jay2Cents. It looks like the more audience (and therefore editorial independence) a publication has, the less effort they put into following GPU makers' review guidelines, and for GPUs that just means they test RT in games a lot less. Linus specifically is quite vocal about RT in most games not being worth the performance deficit.

But, because HWU are rather small they're easy targets for forum hate and astroturfing.

And then there's this:







"There's more clicks for AMD" when they have 10% of the discrete GPU marketshare. Yes that makes total sense, people love being told they bought the GPU from the "wrong" IHV.

https://cdn.mos.cms.futurecdn.net/ga4ysdRtWufiCuUx3tFmHD-1200-80.png.webp


Lol dude, they were shilling hard for AMD for a long time.
 

PaintTinJr

Member
1. AMD's Intrinsic functions allow native GCN userland stream instructions to be supplied by game developers. Both RDNA and GCN support Wave64.

Half of NAVI 31's stream processors support the Wave64 instruction set. RDNA supports both the Wave32 and Wave64 instruction sets.
So the answer to the question was no, because userland functions can't directly access the hardware and are still at the mercy of the kernel. On Linux with OpenGL/Vulkan that wouldn't be an issue, since the kernel is open source and can be debugged alongside the userland graphics calls, but on Windows it's closed source and you can't debug the kernel's DX HAL that handles all GPU access, so if it haphazardly misses timings - e.g. with AMD drivers - resulting in a cascade of latencies, there is nothing anyone but Microsoft (or maybe Nvidia) can do to eliminate those issues.

2. AMD's GCN and RDNA RBE (ROPS) have MSAA hardware.

Using Shaders for MSAA will reduce the available shader resource for other workloads.
But again you sidestep the point: you only need resources for MSAA if it's the optimal solution - and it isn't, because newer TSR, FSR, DLSS and cognitive image enhancement yield higher PSNR measured over one or successive frames, which MSAA can't do. So committing more bandwidth and silicon to what will be a deprecated ASIC feature isn't a win for Nvidia as a design choice.
3. https://docs.unrealengine.com/5.0/en-US/ray-tracing-performance-guide-in-unreal-engine/

Unreal Engine 5 has support for hardware-accelerated BVH-based raytracing.



The NVIDIA Branch of Unreal Engine 5 (NvRTX 5.0) is now available. This feature-rich branch is fully compatible with Unreal Engine 5 and has all of the latest developments in the world of ray tracing.
I didn't say it didn't have the features. I was pointing to the efficiency with which UE5's systems (Nanite/Lumen) run on the best of the RDNA 2 hardware designs (PS5) compared to Nvidia's RTX 20xx or 30xx series, where the Nvidia hardware fails to scale with its numbers, because Nanite runs lighter on PS5 than on all other designs AFAIK. RDNA 2 with RPM gets twice the micro-polygon throughput for its native FLOP/s compared to Nvidia cards, which don't get a 2x gain from RPM.

Nvidia cards have the full API for async, but as all the documentation mentions, async on RTX is for light use, because excessive bandwidth use quickly turns the async gain negative on RTX, while UE5 heavily exploits the full-bandwidth async that AMD offers in RDNA 2 on the Series consoles and PS5.

4. PS5's RDNA has RDNA 2's RT core implementation with RDNA 1 primitive shaders, and is missing RDNA 2's variable-rate shading hardware. PS5's kitbashed RDNA 2 is inferior to NVIDIA's Ampere RT core implementation.
Yeah, that has been debunked here on GAF many times, and we are still awaiting comprehensive info about the PS5 GPU beyond the broad-stroke numbers. We know that the lead architect, Cerny, chose the RT solution for the flexibility to leverage more efficient RT algorithms like those he holds patents for, and he believes it is an area that will undergo massive software advancement through the generation, and that a less flexible set of core accelerators competing for CU memory bandwidth, as in the RTX solution, won't serve the problem nearly as well at the PS5's performance tier.
The reason for the blocking behavior is that the RT traversal workload is done on the shader cores instead of being shifted onto separate traversal hardware; using shaders for RT traversal reduces the shader resources available for other workloads.
In Cerny's Road to PS5 reveal he states that shading and RT work are done at the same time on the PS5 GPU: you kick off a BVH accelerator query and continue shading until the result is returned. Whether that is achieved via the superior async on AMD GPUs or via some other unrevealed aspect specific to the PS5's RDNA 2 GPU design isn't clear, but async compute on AMD GPUs uses the gaps in between the graphics shader work, so it isn't a resource the graphics shader could have used anyway.
At a given generation, NVIDIA RTX Ampere (GA102 vs NAVI 21) and Ada Lovelace (AD102 vs NAVI 31) handle AMD's DXR Tier 1.1 defined spec with NVIDIA winning performance results. LOL

RDNA 3 has hardware-accelerated RT traversal.

The RTX 4090's AD102 has 512 TMUs (load-store units for compute shaders) and 128 RT cores; the full AD102 has 576 TMUs and 144 RT cores. NVIDIA has yet to release the full-AD102 RTX 4090 Ti and a slightly cut-down / lower-cost AD102 RTX 4080 Ti SKU.

NAVI 31 has 384 TMUs (load-store units for compute shaders) and 96 RT cores (NAVI 31 has 96 CUs). What happened to the 128 CU / 128 RT core version? And on a per-CU basis, why didn't AMD scale TMUs by 2X when shader cores were scaled by 2X?


5. " PS5 GPU has a pixel-rate/texture rate performance between RTX 3070-3080" is FALSE.
TechPowerUp has the pixel rate for the PS5 in its database too; go compare for yourself.
....


NVIDIA has supported full Async Compute since Volta and Turing.
I'm not talking about the API, but about asynchronous utilisation of the hardware at low latency and high bandwidth - as cutting-edge game rendering demands of the problem. Async on Nvidia is considered "lite" because, from what I've read, it can only be used sparingly, or when doing deep processing with low bandwidth needs and high-latency access.
 

RoboFu

One of the green rats
Just a PSA: don't forget to enable Resizable BAR. I forgot to on my new build and it makes a big difference.
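For anyone who wants to verify it on Linux, here's a minimal sketch that greps `lspci -v` output for the GPU's prefetchable BAR sizes; with Resizable BAR off the region is typically only 256M, while with it on the region is roughly the size of the VRAM (on Windows, GPU-Z or the vendor control panel reports the same thing). This assumes `lspci` is installed and is only a rough heuristic:

```python
# Rough heuristic for spotting Resizable BAR on Linux: inspect the GPU's
# prefetchable memory regions in `lspci -v`. ~256M usually means ReBAR is off;
# a region close to the VRAM size usually means it's on.
import re
import subprocess

out = subprocess.run(["lspci", "-v"], capture_output=True, text=True).stdout
for block in out.split("\n\n"):
    if "VGA compatible controller" in block or "3D controller" in block:
        sizes = re.findall(r", prefetchable\) \[size=(\w+)\]", block)
        print(block.splitlines()[0])
        print("  prefetchable BAR sizes:", ", ".join(sizes) or "not reported")
```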
 

rnlval

Member
1. So the answer to the question was no, because userland functions can't directly access the hardware and are still at the mercy of the kernel. On Linux with OpenGL/Vulkan that wouldn't be an issue, since the kernel is open source and can be debugged alongside the userland graphics calls, but on Windows it's closed source and you can't debug the kernel's DX HAL that handles all GPU access, so if it haphazardly misses timings - e.g. with AMD drivers - resulting in a cascade of latencies, there is nothing anyone but Microsoft (or maybe Nvidia) can do to eliminate those issues.


2. But again you sidestep the point: you only need resources for MSAA if it's the optimal solution - and it isn't, because newer TSR, FSR, DLSS and cognitive image enhancement yield higher PSNR measured over one or successive frames, which MSAA can't do. So committing more bandwidth and silicon to what will be a deprecated ASIC feature isn't a win for Nvidia as a design choice.

I didn't say it didn't have the features. I was pointing to the efficiency with which UE5's systems (Nanite/Lumen) run on the best of the RDNA 2 hardware designs (PS5) compared to Nvidia's RTX 20xx or 30xx series, where the Nvidia hardware fails to scale with its numbers, because Nanite runs lighter on PS5 than on all other designs AFAIK. RDNA 2 with RPM gets twice the micro-polygon throughput for its native FLOP/s compared to Nvidia cards, which don't get a 2x gain from RPM.

3. Nvidia cards have the full API for async, but as all the documentation mentions, async on RTX is for light use, because excessive bandwidth use quickly turns the async gain negative on RTX, while UE5 heavily exploits the full-bandwidth async that AMD offers in RDNA 2 on the Series consoles and PS5.


4. Yeah, that has been debunked here on GAF many times, and we are still awaiting comprehensive info about the PS5 GPU beyond the broad-stroke numbers. We know that the lead architect, Cerny, chose the RT solution for the flexibility to leverage more efficient RT algorithms like those he holds patents for, and he believes it is an area that will undergo massive software advancement through the generation, and that a less flexible set of core accelerators competing for CU memory bandwidth, as in the RTX solution, won't serve the problem nearly as well at the PS5's performance tier.

In Cerny's Road to PS5 reveal he states that shading and RT work are done at the same time on the PS5 GPU: you kick off a BVH accelerator query and continue shading until the result is returned. Whether that is achieved via the superior async on AMD GPUs or via some other unrevealed aspect specific to the PS5's RDNA 2 GPU design isn't clear, but async compute on AMD GPUs uses the gaps in between the graphics shader work, so it isn't a resource the graphics shader could have used anyway.

5. TechPowerUp has the pixel rate for the PS5 in its database too; go compare for yourself.

I'm not talking about the API, but about asynchronous utilisation of the hardware at low latency and high bandwidth - as cutting-edge game rendering demands of the problem. Async on Nvidia is considered "lite" because, from what I've read, it can only be used sparingly, or when doing deep processing with low bandwidth needs and high-latency access.
1. Your open-source argument that degrades Linux's memory protection and multi-user design is meaningless for the majority of the desktop Linux use cases.

Linux followed Unix's supervisor and userland apps model and MMU usage is mandatory for the mainstream Linux kernel.

Linux has its own design issues e.g. (Linus Torvalds on why desktop Linux sucks).

For SteamOS, Valve's graphics API preference is DirectX API on top of Vulkan via Proton-DXVK. SteamOS effectively killed native Linux games for cloned Windows Direct3D-based APIs. Gabe Newell was the project manager for Windows 1.x to 3.x and Windows 9x's DirectX's DirectDraw. Proton-DXVK effectively stabilized Linux's userland APIs.

On PS4 and PS5, game programmers operate within the userland environment on a thin graphics API layer and a Direct3D-like layer. Third-party PS4/PS5 game programmers don't have x86 Ring 0 (kernel-level) access.

AMD GCN and RDNA support hardware virtualization.

2. Facts: Radeon HD 3870 has the full MSAA hardware.

NVIDIA's DLSS is powered by fixed-function Tensor cores that are separate from shader-based CUDA cores.

For Ryzen 7040 APU, AMD added "Ryzen AI" based on Xilinx's XDNA architecture that includes Xilinx's FPGA fabric IP. Expect AMD's future SKUs to include XDNA architecture from Xilinx (part of AMD) e.g. AMD also teased that it will use the AI engine in future Epyc CPUs and future Zen 5 SKUs would include XDNA for desktop AMD Ryzens since all Ryzen 7000 series are APUs (Ref 1). AMD ported Zen 4, RDNA 3, and XDNA IP blocks into TSMC's 4 nm process node for the mobile Ryzen 7040 APU.

Ref 1. https://www.msn.com/en-us/news/tech...er-ai-ambitions-in-cpu-gpu-roadmap/ar-AAYhYv6
It also teased the next-generation Zen 5 architecture that will arrive in 2024 with integrated AI, and machine learning optimizations along with enhanced performance and efficiency.

For AMD's AI-related silicon, AMD has multiple teams from CDNA's (WMMA, Wave Matrix Multiply-Accumulate hardware), Xilinx's XDNA architecture, and Zen 4's CPU AVX-512's VNNI extensions. AMD promised that it will unify previously disparate software stacks for CPUs, GPUs, and adaptive chips from Xilinx into one, with the goal of giving developers a single interface to program across different kinds of chips. The effort will be called the Unified AI Stack, and the first version will bring together AMD's ROCm software for GPU programming (GCN/RDNA/CDNA-WMMA), its CPU software (AVX/AVX2/AVX-512), and Xilinx's Vitis AI software.

Like dedicated MSAA hardware, dedicated AI and RT core hardware exists to reduce the workload on the GPU's shader cores.

3. For a given generation, the async-compute-heavy Doom Eternal shows Ampere RTX and Ada RTX performance leadership.

https://www.techpowerup.com/review/amd-radeon-rx-7900-xtx/15.html

[Chart: Doom Eternal, 3840x2160 benchmark]


Hint: the async compute shader path extensively uses TMU read/write IO instead of ROP read/write IO. The RTX 4090's (512 TMUs) lead over the RX 7900 XTX (384 TMUs) reflects its ~33% TMU advantage. The fully enabled AD102 (e.g. a future GeForce RTX 4090 Ti) has 576 TMUs.

For GA102, the ratio is 1 RT core : 4 TMUs.
For AD102, the ratio is 1 RT core : 4 TMUs.

For NAVI 21, the ratio is 1 RT core (missing traversal) : 4 TMUs.
For NAVI 31, the ratio is 1 RT core : 4 TMUs.
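A quick sanity check of the ~33% figure and the ratios above, using the unit counts quoted in this post (the Navi 21 row assumes the fully enabled 80 CU part):

```python
# Unit counts as quoted above; the "superiority gap" argument is just the ratio.
cards = {
    # name:            (TMUs, RT cores)
    "RTX 4090":        (512, 128),
    "Full AD102":      (576, 144),
    "RX 7900 XTX":     (384, 96),
    "Navi 21 (80 CU)": (320, 80),
}

base_tmu, base_rt = cards["RX 7900 XTX"]
for name, (tmus, rt) in cards.items():
    print(f"{name:16s} TMU ratio {tmus / base_tmu:.2f}x, "
          f"RT-core ratio {rt / base_rt:.2f}x, TMU:RT = {tmus // rt}:1")
```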

Spider-Man Remastered DX12 Raster

[Chart: Spider-Man Remastered DX12 raster results]

Like Doom Eternal, the RTX 4090's (512 TMUs) lead reflects its ~33% TMU advantage over the RX 7900 XTX (384 TMUs).


Spider-Man Remastered DX12 DXR

[Chart: Spider-Man Remastered DX12 DXR results]

Like Doom Eternal, the RTX 4090's (128 RT cores, 512 TMUs) lead reflects its ~33% RT advantage over the RX 7900 XTX (96 RT cores, 384 TMUs).

From https://www.kitguru.net/components/graphic-cards/dominic-moass/amd-rx-7900-xtx-review/all/1/


4. Prove it. PS5's RT results are within the RDNA 2 RT power rankings e.g. Xbox Series X's Doom Eternal RT performance results are superior when compared to PS5's.

PS5's RT core is the same as any other RDNA 2 RT core, i.e. missing the traversal feature. Prove me wrong!

PS5's RT cores did NOT deliver RDNA 3's RT core results.

One of the RDNA 2 RT optimizations (for XSX, PS5, and PC RDNA 2) is to keep the traversal dataset chunks small.



5. Pure GPU pixel rate is useless without the memory bandwidth factor. Read the GDC 2014 lecture on the ROP alternative via the TMU path.
 

ToTTenTranz

Banned
Chipsandcheese's Clamchowder did an excellent piece with low-level testing of the Navi 31's memory and ALU subsystems.


Everything seems to be working as expected and there's no hardware fault identified.
The new dual-issue functionality of the FP32 units is working as intended but as I mentioned it's dependent on compiler optimization, so we should expect some performance boosts as time goes on.
[Chart: RDNA 3 FP32 throughput test (Vulkan), from Chips and Cheese]

However it seems to work only on a limited number of instructions which is why AMD isn’t claiming these FP32 units as "dual", since the number of situations where performance is boosted may also be limited.


If anything, it's a further indicator of how Navi 31 was never meant to compete with the AD102-based 4090, since its execution units and memory subsystems are much closer to the AD103 / RTX 4080 in scale.
And the clocks are more or less aligned with Ada's, considering the N5 -> 4N improvements. The belief that these GPUs would reach 4 GHz seems to be the fake leakers' fault.
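To put the dual-issue point in numbers, here's a back-of-the-envelope sketch of Navi 31's theoretical FP32 throughput with and without dual-issue; the 2.5 GHz boost clock is an assumption for illustration, not a measured figure.

```python
# Theoretical FP32 throughput for Navi 31 (96 CU x 64 ALUs = 6144 shader ALUs).
# An FMA counts as 2 FLOPs; dual-issue (VOPD) can double that, but only when
# the compiler manages to pack two eligible instructions together.
ALUS = 96 * 64          # 6144 stream processors
CLOCK_GHZ = 2.5         # assumed boost clock, for illustration only
FLOPS_PER_FMA = 2

single_issue = ALUS * FLOPS_PER_FMA * CLOCK_GHZ / 1000  # TFLOPS
dual_issue = single_issue * 2                           # best case, fully packed

print(f"single-issue: {single_issue:.1f} TFLOPS")
print(f"dual-issue  : {dual_issue:.1f} TFLOPS (upper bound)")
```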
 

M1chl

Currently Gif and Meme Champion
Are you saying this is mainly a software issue?
With AMD, a lot of their problems are in the software department. So yeah, it is. But devs have to take it and chew it, which is why most compute environments are programmed with CUDA instead of whatever AMD has.
 

GHG

Gold Member
Shit, the 7900xtx power color is like 30 euros less than the 4080...

I'm just going to say it, get the 4080 and don't look back.

As far as I'm concerned AMD can't be trusted at the moment as far as their GPU division goes.

At least with Nvidia the only issue is pricing.

 

GymWolf

Member
I'm just going to say it, get the 4080 and don't look back.

As far as I'm concerned AMD can't be trusted at the moment as far as their GPU division goes.

At least with Nvidia the only issue is pricing.
Doesn't it matter that the one I found is made by PowerColor, one of the best AMD third parties?!
 