
RX 7900XTX and 7900XT review thread

Leonidas

Member
It's weird to use that data for a FPS per watt calculation. Max out the cards and use those figures.

What's posted is just pointing out the power scaling issues that the 7900 series currently has. It doesn't measure the performance per watt offered by the card.
Unless those issues are solved, it's a real problem though. You're using up to 2x more power on the 7900 XTX in real-world scenarios.
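To put rough numbers on the trade-off both posts are getting at - a minimal sketch with made-up placeholder figures, not data from any review - fps-per-watt measured under a frame cap mostly reflects how far a card can scale its power down, while the uncapped figure reflects its peak efficiency:

```python
# Illustration only: every wattage and fps figure below is hypothetical.
def fps_per_watt(fps: float, watts: float) -> float:
    return fps / watts

# Capped scenario: both hypothetical cards locked to 144 fps.
print(fps_per_watt(144, 350))   # card drawing 350 W at the cap -> ~0.41 fps/W
print(fps_per_watt(144, 180))   # card drawing 180 W at the cap -> 0.80 fps/W

# Uncapped scenario: both cards running flat out.
print(fps_per_watt(170, 355))   # -> ~0.48 fps/W
print(fps_per_watt(160, 320))   # -> 0.50 fps/W; the gap largely disappears
```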
 

winjer

Gold Member
Those prices make way more sense. Still on the high side, depending on real-world prices and on what Nvidia does when the Ampere stock dries up. Does the 4080 get a little cut or a big one? Because maybe it should be $150-$200 more than the 7900, but at these prices neither is a great deal.

Nvidia now doesn't have much of a reason to lower prices on the 4080.
 



Rather scathing insight into the performance results and unfulfilled promises from AMD by MLID. Considering they were one of the outlets providing early benchmarks for these cards from leaks & sources, it's good to see them outright say they bought into (and sold) the hype and admit they were wrong about expectations. They speculate on a few reasons as to what's going on, but I'm sure some ITT have arrived at similar points to speculate on.

If I had to guess, maybe it's due to some inherent limitations with the chiplet setup, or maybe AMD's implementation of the chiplet setup? There's an inherent latency with chiplets that monolithic dies don't suffer from, and maybe the shader clocks (which would affect the fixed-function backend logic clockrate too, in RDNA3's case) aren't fast enough to quite overcome that latency, however small it may be? Maybe there isn't enough cache on the MCDs to assist in that?

Just taking a few guesses.
 



Rather scathing insight into the performance results and unfulfilled promises from AMD by MLID. Considering they were one of the outlets providing early benchmarks for these cards from leaks & sources, it's good to see them outright say they bought into (and sold) the hype and admit they were wrong about expectations. They speculate on a few reasons as to what's going on, but I'm sure some ITT have arrived at similar points to speculate on.

If I had to guess, maybe it's due to some inherent limitations with the chiplet setup, or maybe AMD's implementation of the chiplet setup? There's an inherent latency with chiplets that monolithic dies don't suffer from, and maybe the shader clocks (which would affect the fixed-function backend logic clockrate too, in RDNA3's case) aren't fast enough to quite overcome that latency, however small it may be? Maybe there isn't enough cache on the MCDs to assist in that?

Just taking a few guesses.


Nvidia likely pours tons of cash and resources into the R&D of their chip design and architecture. They didn't go with an MCM approach in Lovelace, and it's been heavily rumoured now that Blackwell (50 Series) will stick with a monolithic design. Which says a lot; it means they can hit their design and performance goals without a chiplet approach.
 

DaGwaphics

Member
Unless those issues are solved, it's a real problem though. You're using up to 2x more power on the 7900 XTX in real-world scenarios.

Not really. Because you don't spend $1k to play Doom at 144hz. You are either looking for much higher frame rates than that or you are playing heavier games, both of which would utilize the max power of the card, changing the results quite a bit. If Doom at 144hz is your thing, get a weaker card to start with, don't get a 1K card and use it at half power.
 

PaintTinJr

Member
Came here to share the same thing. Yikes.

Some techtubers claimed that AMD saw Nvidia's Lovelace as "hot, loud and noisy"; MLID even claimed that Nvidia would struggle with diminishing returns in the future with Blackwell (50 Series) because they're cramming everything into Lovelace.

It's so laughable now.
The thing is, when you consider that Lovelace is using 40% more die area than this XTX, and that the 4080 variant is on par with the XTX on performance/power draw in a modern game like Callisto Protocol, maybe its design would be hot/loud if it was trying to fit the same die area. Or to put it differently, if AMD had enough market share to be able to move this card to the same die area as a 4080/4090, then it would have space to run much cooler, with less power draw.

Even if AMD aren't in a market position to release a superior product in benchmark terms at a superior price, the die area versus performance and power draw comparison definitely suggests it is a better-engineered architecture - from a theoretical standpoint - IMO, and I wouldn't be surprised if Nvidia's next iteration hits diminishing returns and they need a new architectural design sooner rather than later.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not



Rather scathing insight into the performance results and unfulfilled promises from AMD by MLID. Considering they were one of the outlets providing early benchmarks for these cards from leaks & sources, it's good to see them outright say they bought into (and sold) the hype and admit they were wrong about expectations. They speculate on a few reasons as to what's going on, but I'm sure some ITT have arrived at similar points to speculate on.

If I had to guess, maybe it's due to some inherent limitations with the chiplet setup, or maybe AMD's implementation of the chiplet setup? There's an inherent latency with chiplets that monolithic dies don't suffer from, and maybe the shader clocks (which would affect the fixed-function backend logic clockrate too, in RDNA3's case) aren't fast enough to quite overcome that latency, however small it may be? Maybe there isn't enough cache on the MCDs to assist in that?

Just taking a few guesses.

This guy should straight up be a banned source on NeoGAF.
 

DaGwaphics

Member
The thing is, when you consider that Lovelace is using 40% more die area than this XTX, and that the 4080 variant is on par with the XTX on performance/power draw in a modern game like Callisto Protocol, maybe its design would be hot/loud if it was trying to fit the same die area. Or to put it differently, if AMD had enough market share to be able to move this card to the same die area as a 4080/4090, then it would have space to run much cooler, with less power draw.

Even if AMD aren't in a market position to release a superior product in benchmark terms at a superior price, the die area versus performance and power draw comparison definitely suggests it is a better-engineered architecture - from a theoretical standpoint - IMO, and I wouldn't be surprised if Nvidia's next iteration hits diminishing returns and they need a new architectural design sooner rather than later.

Could you say that about the 4080 though? It's only 79 mm² bigger vs. just the GCD in the 7900 XTX, but that includes the memory interface and the cache on the Nvidia chip.
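For rough context on the sizes involved, a quick sketch using the commonly cited approximate die figures (treated as assumptions here, not anything verified):

```python
# Approximate published die sizes (assumed):
# AD103 (RTX 4080) ~379 mm^2; Navi 31 GCD ~300 mm^2 plus six ~37 mm^2 MCDs.
ad103 = 379                    # 4080 die, incl. memory controllers and L2 cache
n31_gcd = 300                  # 7900 XTX graphics compute die only
n31_total = n31_gcd + 6 * 37   # GCD + MCDs ~= 522 mm^2 of total silicon

print(ad103 - n31_gcd)    # ~79 mm^2 bigger than the GCD alone (the gap above)
print(ad103 - n31_total)  # negative: smaller than the full N31 package
```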
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
It is fucking maths.

The 6900XT has a 300W TBP. A card designed to hit 450W with a 50% perf/watt gain has 1.5x the power and 1.5x the performance per watt, equating to a 2.25x overall improvement at that TBP.

The 6900XT gets 77fps, so 2.25x that is 173.25 fps.

If the perf/watt improvement is the same as 5700XT to 6900XT, which was 1.64x, then the maths is 1.5x power * 1.64x performance per watt for 2.46x performance, which is 77 * 2.46 = 189.42 fps. This is an extreme upper bound though.

Now there are obvious caveats. 1) Maybe TBP won't be 450W, so the 1.5x power multiplier is wrong. 2) Maybe the design goal was 375W but the clocks are getting pushed up the v/f curve, so at 450W the perf/watt advantage falls off a cliff. 3) Maybe the baseline is the reference 6950XT instead at 335W, again changing the power multiplier.

Now the way AMD have advertised their perf/watt gains in the past was prior flagship vs new SKU. For RDNA it was Vega64 vs 5700XT and for RDNA2 it was 5700XT vs 6800XT at 1.54x perf/watt and 5700XT vs 6900XT at 1.64x perf/watt.

As such I expect the >1.5x perf/watt claim is vs the 6900XT or 6950XT and using one of the top N31 SKUs.

So TLDR: maths is just maths, and AMD's perf/watt claims have historically been a bit conservative, meaning AMD will need a 375-400W TBP to match the 4090, and higher TBPs (provided that was the initial design goal and not a later-in-the-day juicing of the clock speeds) will be faster than a 4090.

Edit: just to be clear I am talking pure raster performance here, no clue on how RT will shake out.
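(For reference, the projection arithmetic in that quote boils down to the following - a minimal sketch where every input comes from the quoted post itself, not from any benchmark:)

```python
# Inputs from the quoted post: 300 W baseline TBP, 77 fps baseline,
# 450 W assumed target, and the 1.5x / 1.64x perf-per-watt claims.
def projected_fps(base_fps: float, base_tbp_w: float, new_tbp_w: float,
                  perf_per_watt_gain: float) -> float:
    """Scale baseline fps by the power increase times the claimed perf/watt gain."""
    return base_fps * (new_tbp_w / base_tbp_w) * perf_per_watt_gain

print(projected_fps(77, 300, 450, 1.50))  # 173.25 fps, the ">1.5x" claim taken literally
print(projected_fps(77, 300, 450, 1.64))  # 189.42 fps, the RDNA1->RDNA2-style upper bound
```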
A chiplet-designed GPU will blow Nvidia's ancient monolithic architecture out of the water.
We will see, but I'd be more surprised if the 7900 XT loses in any rasterized game that isn't just straight Nvidia-favored


What say ye now?

The 7900XT beating the 4090 by a lot, because "it's fucking maths" and it has "chiplets".

Mate, it looks like it's a generation behind.
Y'all drank the Kool-Aid

kool-aid-man.jpg
 

DaGwaphics

Member



Rather scathing insight into the performance results and unfulfilled promises from AMD by MLID. Considering they were one of the outlets providing early benchmarks for these cards from leaks & sources, it's good to see them outright say they bought into (and sold) the hype and admit they were wrong about expectations. They speculate on a few reasons as to what's going on, but I'm sure some ITT have arrived at similar points to speculate on.

If I had to guess, maybe it's due to some inherent limitations with the chiplet setup, or maybe AMD's implementation of the chiplet setup? There's an inherent latency with chiplets that monolithic dies don't suffer from, and maybe the shader clocks (which would affect the fixed-function backend logic clockrate too, in RDNA3's case) aren't fast enough to quite overcome that latency, however small it may be? Maybe there isn't enough cache on the MCDs to assist in that?

Just taking a few guesses.


Had no idea that there was still that much of an issue with yields. I figured at least half would be full chips.

Shows you I have no idea how this stuff works, LOL. When the foundries give their percentages, they always act like yields are better than that now. Weird that they have so many 7900 XT cards.
 

PaintTinJr

Member
Could you say that about the 4080 though? It's only 79 mm² bigger vs. just the GCD in the 7900 XTX, but that includes the memory interface and the cache on the Nvidia chip.
That's a fair point, but surely Nvidia would have copied the superior option if they architecturally could have - presumably - because their chip would run cooler and at lower power, and then clock higher, and Nvidia customers at the top two tiers aren't really price sensitive if it added another $50 for an extra 10%-20% performance, IMO.

Going by what I've read about the dedicated DL/RT features and the async-lite capability using the tensor cores in software terms, architecturally the memory interface/caches on the Nvidia chip aren't going to be shareable and movable - further from the cores while maintaining bandwidth - without a major redesign, unless I got the wrong end of the stick.

In simple terms, the Nvidia solution with DL/RT comes at the cost of hardware design flexibility and losing out on full async compute functionality.
 

Leonidas

Member
You are either looking for much higher frame rates than that or you are playing heavier games
Seems silly to go above your max refresh. I play a range of games.

I enjoy demanding games too. Games with RT are the most demanding of all and sadly the 7900-series falls way behind in the most demanding RT games (Control, Cyberpunk, etc.)

If Doom at 144hz is your thing, get a weaker card to start with, don't get a 1K card and use it at half power.
If it did it in Doom Eternal it most likely does it in many other games with a similar load. And the situation could be even worse if you play less demanding titles like indies on occasion. It seems silly to ignore all those great games just because it's not pushing your GPU.

Why should someone spending $1000 avoid less demanding titles? That makes no sense.

I'm not saying that's all I'm playing; I enjoy demanding games on PC (Control, Cyberpunk, etc.) as they offer an experience that can't be had anywhere else. I also enjoy many less demanding games.
 

//DEVIL//

Member
Are there any AIB reviews compared to the shitty reference card? Between the power limit of the two 8-pin connectors, the annoying coil whine, and the fan noise, this card shouldn't be touched.

I really want to see the performance of an AIB OC card out of the box compared to this mess.
 

DaGwaphics

Member
Seems silly to go above your max refresh. I play a range of games.

I enjoy demanding games too. Games with RT are the most demanding of all and sadly the 7900-series falls way behind in the most demanding RT games (Control, Cyberpunk, etc.)


If it did it in Doom Eternal it most likely does it in many other games with a similar load. And the situation could be even worse if you play less demanding titles like indies on occasion. It seems silly to ignore all those great games just because it's not pushing your GPU.

Why should someone spending $1000 avoid less demanding titles? That makes no sense.

I'm not saying that's all I'm playing; I enjoy demanding games on PC (Control, Cyberpunk, etc.) as they offer an experience that can't be had anywhere else. I also enjoy many less demanding games.

Obviously you don't ignore games. You play whatever you want to play.

But still, the concern for the average desktop PC gamer is whether or not the PSU they have can handle the max power draw spikes of the card they've got, not how efficient the thing is in a low-power state. For laptops or office productivity machines this is more of an issue. The max power of the card and the performance you get under that condition are much more important than the power scaling. Though I do think the idle usage is a bit poor and needs to be fixed, as does the power draw for media playback. Though even there, that's more of an acoustics issue than anything else.

But I'm in the States and we don't have the energy issues in my region that much of the EU does, so I can see where things might change based on location.
 
Nvidia likely pours tons of cash and resources into the R&D of their chip design and architecture. They didn't go with an MCM approach in Lovelace, and it's been heavily rumoured now that Blackwell (50 Series) will stick with a monolithic design. Which says a lot; it means they can hit their design and performance goals without a chiplet approach.

I mean, sooner or later they WILL have to move on from monolithic chips for their top-end designs; that's just inevitable. A ceiling exists there, and chiplets (alongside more advanced 3D packaging techniques) are collectively the answer to it, but maybe people speculating that the death of monolithic chips would come this year were reaching way too hard after all.

In a way it's pretty impressive the improvements Nvidia are able to keep getting with monolithic designs, and it's a tried and tested approach. They'll push it as far as possible before going to chiplets, and while AMD would appear to have a head start there, I don't think a company as big as Nvidia has just been resting on their laurels, ignoring R&D into chiplet designs for when the time eventually comes to move on from monolithic.

I think that's an approach they're able to take given their size, one that a company like AMD isn't quite able to match since they aren't as flush with cash. However, I'm also wondering how much of AMD's GPU advances are thanks to R&D investments from the console makers, specifically Sony & Microsoft. We know they both contributed a LOT to RDNA2 R&D in their own ways, and AMD have taken ideas from them to move forward with RDNA2 and RDNA3, but maybe the lack of, say, Sony R&D in the process for RDNA3 more explicitly puts a bit of a hamper on how AMD themselves can make up for that, given their smaller resources compared to Nvidia?

Which, I guess if mid-gen Pro consoles are a real thing, could coincide with good news for RDNA4 being a lot closer to whatever Nvidia have ready for that time, similar to RDNA2 vs. Ampere. But mid-gen Pro refreshes are just rumor at this point.
 

Buggy Loop

Member
Nvidia likely pours tons of cash and resources into the R&D of their chip design and architecture. They didn't go with an MCM approach in Lovelace, and it's been heavily rumoured now that Blackwell (50 Series) will stick with a monolithic design. Which says a lot; it means they can hit their design and performance goals without a chiplet approach.

Nvidia has MCM for server-workload GPUs, but while MCM scales well for non-real-time productivity tasks, for gaming it's a problem. Apple also faced the same problem: while their CPU performance practically doubles, the GPU gains a mere +50% in gaming, even with a 2.5 TB/s low-latency link.

Also, according to Kopite7kimi, Nvidia had both a monolithic and an MCM solution for Ada Lovelace and waited on TSMC yield results (probably assuming a worst case using Samsung) to decide. The tweet was deleted, but a trace of the news is still on Reddit.

See here

Clearly they were impressed by the output of the monolithic.

The problems with MCM gaming latency over the links are for multi-GCD designs, so they're not applicable to RDNA 3's chiplets. AMD's engineers kind of covered that with the press: it's trickier to have multiple GCDs on a GPU than CPU CCDs. As of now the OS has native multi-CPU support and it's well understood how the system handles tasks across multiple CPUs. There's no such thing for GPUs; it has to be handled on the driver side, which is a big yikes.. but time will tell.
 

HeisenbergFX4

Gold Member
Via r/pcmasterrace


Glad to see other platforms posting the same reactions about this generation's pricing structure. Can we go back to $4-5 per fps again?
With people (like myself) willing to spend that amount of money for bleeding edge performance?

Nvidia isn't getting cheaper unless AMD forces them to, and this round ain't it
 

Amiga

Member
I never claimed it makes the game look better by default. I'm simply saying it cannot be hand-waved away once (for instance) more than half of all new games feature ray tracing. Plus, not all games run on Unreal Engine 5 (and not all games will run on UE5, a lot of devs/pubs don't want to give Epic a cut of their sales and will use their own engines - see Bethesda, EA, Ubisoft, Sony (Decima etc.)).

Yes. This is the point. RT isn't a consistent advantage. On the other hand raster is a consistent advantage for AMD.
 

Crayon

Member
With people (like myself) willing to spend that amount of money for bleeding edge performance?

Nvidia isn't getting cheaper unless AMD forces them to, and this round ain't it

Mmm, I think there's too much fault placed on AMD for Nvidia's pricing. Really it looks more like Nvidia determines the trend and AMD follows, if anything.
 

//DEVIL//

Member
AMD has lost the battle, and honestly I think they have for the next 6 years if not more. If their best card can't beat an 80-series card, they are done for.
 

//DEVIL//

Member
Looking at a couple of games… the OC accounts for 2 to 3 frames max at 4K? Heh. And they charge $200 more… I give up. I was really thinking at one point of selling my 4090, even at a profit, and getting one of those, but nope, not even close.
 
What say ye now?

The 7900XT beating the 4090 by a lot, because "it's fucking maths" and it has "chiplets".

Mate, it looks like it's a generation behind.
Y'all drank the Kool-Aid

kool-aid-man.jpg
I'm willing to admit I was wrong. I genuinely thought the goal was either to offer this current performance level for $500-700 or to outperform the 4090 in raster at the current price, but clearly AMD chased margins. Looks like I'm waiting for RDNA 4.
 
who the fuck plays games at 200fps-600fps? some PC game runs 280fps on 4090, 272fps on 4080, and 260fps on 7900xtx so AMD sucks balls and they overpriced and can go fuck themselves.

I don't give a fuck.

Get those fucking RDNA3 compute units in mini-PC's, ultrathin laptops, all-in-one desktops so I can play 1440p 60fps-80fps and upscale to 4k and have a decent experience at a reasonable price and appreciate PC gaming, cause AMD got something that shits on Intel cause they nowhere close to what they got, and NVIDIA got nothing unless you buy discrete and sell your left ball.
 

supernova8

Banned
Yes. This is the point. RT isn't a consistent advantage. On the other hand raster is a consistent advantage for AMD.
RT has been an advantage for Nvidia for the last three consecutive generations; that sounds pretty consistent. In contrast, AMD only seems to have a small raster advantage in some games (synthetic benchmarks are pointless).

This is the point: AMD's very small (and not always present) raster advantage is (IMO) already not enough to warrant hand-waving away the significant RT disadvantage, especially at the high end where people are spending around $1000 or more. This will only become more the case unless AMD figures out what to do with RT.
 

supernova8

Banned
who the fuck plays games at 200fps-600fps? some PC game runs 280fps on 4090, 272fps on 4080, and 260fps on 7900xtx so AMD sucks balls and they overpriced and can go fuck themselves.

I don't give a fuck.

Get those fucking RDNA3 compute units in mini-PC's, ultrathin laptops, all-in-one desktops so I can play 1440p 60fps-80fps and upscale to 4k and have a decent experience at a reasonable price and appreciate PC gaming, cause AMD got something that shits on Intel cause they nowhere close to what they got, and NVIDIA got nothing unless you buy discrete and sell your left ball.
You're absolutely spot on.

YouTuber ETA Prime (pretty damn awesome YouTuber, very consistent, no nonsense, gets straight to the point) recently upgraded his 6900HX-based mini PC with 6000 MHz DDR5 RAM (yeah it's expensive, but the point is it can be done) and he got much better performance (seemingly a good 20% uplift). He was able to play Forza Horizon at 1080p medium at around 100fps, which is pretty insane (you'd get around 70fps with stock 4800 MHz RAM, I think).



I suppose AMD might do it if they decide making low-end discrete GPUs just isn't worth it anymore; until then they might be wary of cannibalizing their own low-end GPU sales. Plus, if they can sell someone on an APU with an RDNA3 iGPU, they've locked that person into the AMD ecosystem for some time. Meanwhile, people buying Radeon GPUs can pretty easily switch to Nvidia if they feel like it.

Even before that (and I could be missing some technical bottleneck), why don't they just double or triple the CUs on the current top-end APUs (e.g. the 6900HX), since that APU's default power draw is only about 50W anyway? For comparison, the 5900X runs at 105W. Surely they don't need RDNA3 CUs to do what you're suggesting. If so, it's a matter of them not wanting to offer such a product for business reasons.
 

DaGwaphics

Member
Even before that (and I could be missing some technical bottleneck), why don't they just double or triple the CUs on the current top-end APUs (e.g. the 6900HX), since that APU's default power draw is only about 50W anyway? For comparison, the 5900X runs at 105W. Surely they don't need RDNA3 CUs to do what you're suggesting. If so, it's a matter of them not wanting to offer such a product for business reasons.

Probably would be bandwidth-starved unless they got very creative with the memory (a normal dual-channel DDR5-6000 setup is under 100 GB/s, and that's shared with the CPU). Maybe quad-channel DDR5-7000 or something, but it would still be working with less than a 6600.
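Roughly, the peak-bandwidth arithmetic behind those numbers looks like this (a quick sketch; the 64-bit-per-channel DDR5 width and the RX 6600's ~224 GB/s over a 128-bit GDDR6 bus are the commonly cited specs, taken as assumptions here):

```python
def ddr_bandwidth_gbs(channels: int, mt_per_s: int, bits_per_channel: int = 64) -> float:
    """Peak theoretical DDR bandwidth in GB/s."""
    return channels * bits_per_channel / 8 * mt_per_s / 1000

print(ddr_bandwidth_gbs(2, 6000))   # dual-channel DDR5-6000  -> 96 GB/s, shared with the CPU
print(ddr_bandwidth_gbs(4, 7000))   # quad-channel DDR5-7000  -> 224 GB/s
print(128 / 8 * 14)                 # RX 6600: 128-bit GDDR6 @ 14 Gbps -> 224 GB/s, dedicated
```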
 

Leonidas

Member
Hindsight is a beautiful thing but maybe they should have called these two cards the 7800 XT and the 7700 XT, leaving themselves room later to release a 7900 XT.
Agreed. It's going to look bad if the 4070 Ti, Nvidia's 3rd-tier chip, ends up close to the 7900 XT.

It's going to look bad if the 7800 XT loses 4 GB of RAM vs 6800 XT (as rumors suggest) and is barely faster than the 6800 XT.

It would look bad if the 7800 XT loses in performance to the 4070 Ti.

AMD GPU naming is meaningless...

It's funny how Nvidia was lambasted for the 4080 12GB but AMD did the same thing, at least it got called out in reviews I guess...
 

DaGwaphics

Member
Hindsight is a beautiful thing but maybe they should have called these two cards the 7800 XT and the 7700 XT, leaving themselves room later to release a 7900 XT (equivalent to the rumored 7950 XTX?). I think they fucked themselves with the whole XTX thing if they didn't have the performance to back it up. Plus if they really did have hardware issues that would give them extra time to iron those out.

One would hope they didn't want to bump the 7700 XT to the $900+ tier.
 

Crayon

Member
N33 might shed some light on what went on here. It's supposed to be monolithic, so it'll be interesting to see if it gets a bigger uplift over N23 than N31 did over N21. That might say something about how well this first round of MCM worked out.
 

tusharngf

Member


Since Turing launched, Nvidia has been ahead of AMD in compute and mining as well.

This is just GCN again. AMD needs better engineers and a higher R&D budget for RDNA 4. I used an HD 4850 and an HD 6850 in the past and never purchased an AMD card again.
 

supernova8

Banned
RT not consistently implemented well in games.
But the trend of more games using RT is definitely consistent. Plus, Nvidia has no major say in how RT is implemented, so the idea of it being implemented "well" is irrelevant (take that up with game creators, not Jetset "leather jacket" massiveWang). The only thing we can say for sure is that first-, second-, and third-gen RTX cards all offer better native performance (i.e. taking DLSS or FSR noise out of it) when RT settings are turned on, compared to RDNA 1, 2, or 3.
 

supernova8

Banned
Agreed. It's going to look bad if the 4070 Ti, Nvidia's 3rd-tier chip, ends up close to the 7900 XT.

It's going to look bad if the 7800 XT loses 4 GB of RAM vs 6800 XT (as rumors suggest) and is barely faster than the 6800 XT.

It would look bad if the 7800 XT loses in performance to the 4070 Ti.

AMD GPU naming is meaningless...

It's funny how Nvidia was lambasted for the 4080 12GB but AMD did the same thing, at least it got called out in reviews I guess...

Plus at least the 4080 and 4090 do actually seem to offer the level of performance Nvidia (pre-launch) claimed they would.
Could argue that AMD either lying or being grossly incompetent with managing its driver development is way worse than what Nvidia got slammed for.
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Looking at a couple of games… the OC accounts for 2 to 3 frames max at 4K? Heh. And they charge $200 more… I give up. I was really thinking at one point of selling my 4090, even at a profit, and getting one of those, but nope, not even close.
Wait what?
You were thinking maybe sell the 4090 and get a 7900XTX and eat the profit.
Come on mane.
Keep the 4090 and glide through the generation.

P.S:
There does seem to be rumblings that careful overclocking and voltage control of the 7900XTX can indeed get it to match 4090 levels of performance in gaming.
The real reason to buy Strix and SuprimX boards is actually their overclocking potential.
So if you really are in the mindset of selling your 4090 for profit in exchange for an XTX, ASUS and MSI are your board partners; put in a good overclock and enjoy whatever chunk of change you get from your 4090.

P.P.S:
I dont think MSI and ASUS make Strix and SuprimX versions of AMD cards anymore, so TUF and X Trio are likely your only options.
I'm willing to admit I was wrong. I genuinely thought the goal was either to offer this current performance level for $500-700 or to outperform the 4090 in raster at the current price, but clearly AMD chased margins. Looks like I'm waiting for RDNA 4.
Mate it looks like you are never building that computer you were planning on building cuz you are gonna be waiting forever for that magic bullet.
You should probably just get a Switch, PS5 and XSX for the same money then call it a day.

I dont think PC gaming is for your mindset.

Wanting 4090 performance but not being able to spend 4090 money is gonna be hard on the psyche.
Even if RDNA4 and the 5070 were to give you 4090 performance for, say, $700.
You'd look at the 5090 and think "damn, I want 5090 performance, so i should wait for the RDNA5 and 6070".
Waiting for the Zen4X3D chips will only lead to you waiting for MeteorLake, which in turn will have you waiting for Zen5 and ArrowLake.

If you were serious about getting into "high" PC gaming and dont suffer from FOMO.
Now is pretty much as good a time as any.

A 13600K or 7700X will glide you through the generation.
The 7900XTX isn't a bad card by any means; if you can find it at MSRP, you are sorted till the PS6, easy.
Otherwise find a secondhand 3090 or 3090Ti.

If you don't need best-of-the-best 4K native everything (DLSS/XeSS/FSR2 are all your friends),
then you could easily get by with a 12400/13500 + 3070 or 3080 and, with the change from not going higher, get yourself a PS5 and/or Switch.
 

//DEVIL//

Member
Wait what?
You were thinking maybe sell the 4090 and get a 7900XTX and eat the profit.
Come on mane.
Keep the 4090 and glide through the generation.

P.S:
There does seem to be rumblings that careful overclocking and voltage control of the 7900XTX can indeed get it to match 4090 levels of performance in gaming.
The real reason to buy Strix and SuprimX boards is actually their overclocking potential.
So if you really are in the mindset of selling your 4090 for profit in exchange for an XTX, ASUS and MSI are your board partners; put in a good overclock and enjoy whatever chunk of change you get from your 4090.

P.P.S:
I dont think MSI and ASUS make Strix and SuprimX versions of AMD cards anymore, so TUF and X Trio are likely your only options.

Mate it looks like you are never building that computer you were planning on building cuz you are gonna be waiting forever for that magic bullet.
You should probably just get a Switch, PS5 and XSX for the same money then call it a day.

I dont think PC gaming is for your mindset.

Wanting 4090 performance but not being able to spend 4090 money is gonna be hard on the psyche.
Even if RDNA4 and the 5070 were to give you 4090 performance for, say, $700.
You'd look at the 5090 and think "damn, I want 5090 performance, so i should wait for the RDNA5 and 6070".
Waiting for the Zen4X3D chips will only lead to you waiting for MeteorLake, which in turn will have you waiting for Zen5 and ArrowLake.

If you were serious about getting into "high" PC gaming and dont suffer from FOMO.
Now is pretty much as good a time as any.

A 13600K or 7700X will glide you through the generation.
The 7900XTX isn't a bad card by any means; if you can find it at MSRP, you are sorted till the PS6, easy.
Otherwise find a secondhand 3090 or 3090Ti.

If you don't need best-of-the-best 4K native everything (DLSS/XeSS/FSR2 are all your friends),
then you could easily get by with a 12400/13500 + 3070 or 3080 and, with the change from not going higher, get yourself a PS5 and/or Switch.
I was thinking of doing it when I saw the AMD slides during the presentation and compared them to the results that are in for the 4090. Things like 1.7x the performance of the 6950 XT in Cyberpunk made me think the card would be 20 to 25% more powerful than the 4080, which would make it about 20% less powerful than the 4090. In my head it was a simple calculation: if I can sell the 4090 for $1800 or so and get the 7900 XTX for $1100, that's $700 US in pocket, which is almost an 80% saving etc.

But seeing the performance of the card, that 4090 is staying till the 5000 series for sure, as I am not interested in a 4090 Ti for $2000 next year when the 5000 series is gonna be cheaper and way more powerful in 2024.
 

Outlier

Member
I was really hoping for the best case scenario that AMD would have the best standard performance per dollar, but as I feared... they screwed this up.

Thankfully I can wait to see what happens with prices for maybe 6 months, as I'm ready to experience high end PC gaming. Been almost 3 years with my mid ranger (5700XT+Rz3700X).
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
I’d stay away from the Samsung cards. If you want nvidia make sure it’s TSMC, which is practically all the 4000 cards on sale currently
3090s came out in 2020.
We've been using them for 2 years; why should we avoid them now?

TSMC is "practically" all the 4000 cards?
Are any Ada chips being forged by someone else?
 

Assaulty

Member
Seems silly to go above your max refresh. I play a range of games.

I enjoy demanding games too. Games with RT are the most demanding of all and sadly the 7900-series falls way behind in the most demanding RT games (Control, Cyberpunk, etc.)


If it did it in Doom Eternal it most likely does it in many other games with a similar load. And the situation could be even worse if you play less demanding titles like indies on occasion. It seems silly to ignore all those great games just because it's not pushing your GPU.

Why should someone spending $1000 avoid less demanding titles? That makes no sense.

I'm not saying that's all I'm playing; I enjoy demanding games on PC (Control, Cyberpunk, etc.) as they offer an experience that can't be had anywhere else. I also enjoy many less demanding games.

Exactly. I am playing Brotato on my 6900 XT. :messenger_tears_of_joy: And most of the games I play (on a 144Hz screen) don't use more than 60% of the GPU at 1440p, even at higher framerates. But it sure is hella nice to know that I still have a lot of headroom left for future triple-A games at 1440p.
 