
Navi 21 possibly runs at 2.2GHz with 80 CUs, Navi 22 at 2.5GHz with 40 CUs

As you can see below, the RTX 2080 Ti is on average 56% faster than the 5700 XT.

It's 46% at 1440p (the resolution people actually use these cards for). With a minimum of 30% clock speed increase and some IPC improvements, Navi 22 should be on a 2080Ti level.
 

duhmetree

Member
Big if true, unironically. 2x the CUs, 15% higher clocks, and likely ~10% more IPC compared to the 5700XT... if accurate, it should be quite a bit faster than the 3080.

Price it at $599 or less and it will be a very good deal.
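As a rough sanity check of that claim, here's a minimal back-of-the-envelope sketch. It assumes perfectly linear scaling (which never happens in practice) and a ~1.9GHz clock for the 5700XT; the 2x CU, +15% clock and +10% IPC figures are the rumoured ones from above:

```python
# Back-of-the-envelope scaling estimate for the rumoured Navi 21 vs the 5700 XT.
# Assumes perfectly linear scaling, which real GPUs never achieve.
cu_ratio = 80 / 40        # 2x the CUs of the 5700 XT
clock_ratio = 2.2 / 1.9   # ~15% higher clock (rumoured 2.2 GHz vs ~1.9 GHz)
ipc_ratio = 1.10          # the speculated ~10% IPC gain

ideal_uplift = cu_ratio * clock_ratio * ipc_ratio
print(f"Ideal uplift over the 5700 XT: {ideal_uplift:.2f}x")  # ~2.55x
```

Real-world scaling will land well below that ideal ~2.5x, because occupancy and bandwidth don't scale for free, which is exactly the caveat raised later in the thread.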
Damn... is AMD about to make a move on the GPU market as well...

Imagine that... the 5900xt approaching the 3090. Is there much room for nvidia here to improve? These cards seem to be pushed hard already. What would a ti model improve?
 
Last edited:
Damn... is AMD about to make a move on the GPU market as well...

Imagine that... the 5900xt approaching the 3090. Is there much room for nvidia here to improve? These cards seem to be pushed hard already. What would a ti model improve?

Nvidia is betting on better RT performance and of course DLSS. It's a good bet because they seem to be getting all the AAA heavy hitters on board, see Cyberpunk and Watch Dogs. So Nvidia cards still might be better value. We'll see.
 

duhmetree

Member
Nvidia is betting on better RT performance and of course DLSS. It's a good bet because they seem to be getting all the AAA heavy hitters on board, see Cyberpunk and Watch Dogs. So Nvidia cards still might be better value. We'll see.
Either way... we, the consumer/gamer win.

AMD is finally forcing competition on all fronts
 
How bout that Vega 64 vs. 1080ti eh?

I understand perfectly fine; however, TFLOPS don't tell the whole story. Continue to shill.

You obviously don't understand. No one who does would call Ampere TFLOPS "misleading" and constantly attack them as "according to Nvidia promotional slides" unless they had no idea what they were talking about. Ampere has the number of TFLOPS it has because that's how many it has. Nvidia didn't come up with some shady or new way of calculating them just for Ampere.

As I've already said, the TFLOPS for Ampere are arrived at by the EXACT SAME calculation you use for RDNA 2 TFLOPS or Xbox TFLOPS or PlayStation TFLOPS. The Ampere architecture is simply overflowing with FP32 shaders. That's why, and that's all.

TFLOPS is NOT an accurate way to gauge performance (unless the two GPUs being compared are the same or very similar architectures), but you believe that it "must be", so when the numbers don't line up with that false assumption, your mind concludes something shady is going on.

And when you consider Nvidia's modern architectures for RTX, TFLOPS make even less sense.

Nvidia GPUs are made up of 3 main components: CUDA cores (shaders), Tensor cores (AI) and RT cores (ray tracing).

The TFLOPS calculation ONLY takes into account the number of shader cores and COMPLETELY IGNORES the other 2/3 of the GPU.
 
Last edited:
Anyone here using TFLOPS to judge performance across different architectures, please stop....

OT: RDNA2 is going to be very competitive indeed. An 80CU card at 2.2GHz (assuming retail clocks are around this level), at much lower power draw than the 3080, with similar performance... game changer.

Also shows how next-gen console frequencies (the PS5's) can be so high without being massively inefficient.
 
Damn... is AMD about to make a move on the GPU market as well...

Imagine that... the 5900xt approaching the 3090. Is there much room for nvidia here to improve? These cards seem to be pushed hard already. What would a ti model improve?

There's very little room to improve 3090 performance and there's only a 10% gap between it and the 3080.

I think Nvidia will just release a 20GB 3080 with everything else identical, and a 16GB 3070 Ti/S that's 10% faster than 3070 to compete with Navi 22.

Either way, we know now why Nvidia scrambled to release Ampere; they're worried about RDNA2.
 
Last edited:
But I never said that it is 70% faster. I said it has 17 TFLOPS based on the in-game clocks I've seen in benchmarks, though I concede that they might have been overclocking it. The flight sim benchmarks I just searched on YouTube hover in the 1950s (MHz), though I've seen some that are in the 1890s. It seems to be either silicon lottery or cooling solutions that allow it to hit higher clocks.

Regardless, TFLOPS are TFLOPS. Higher clocks will give you more performance, and the same logic should apply when comparing a 12-13 TFLOPS RDNA GPU to a 16 TFLOPS RTX 2080 Ti. The point is that a 12.8 TFLOPS RDNA GPU is not going to match a 2080 Ti unless AMD has managed to improve IPC by another 25%.

As you can see below, the RTX 2080 Ti is on average 56% faster than the 5700 XT. You are not going to make that up by just increasing TFLOPS by 25%-30%; there would still be a 25-30% performance gap left.

[Image: relative performance chart, 3840×2160]




What's interesting is that we do indeed see that even Nvidia was struggling to get 1:1 performance scaling when increasing shader processors. 56% is not 65% by your TFLOPS calculations, and yet here we are. So while AMD hit that 64 CU ceiling with Vega 64, it seems that Nvidia has gotten there with the 2080 Ti, and now the 3080, which has ~8700 shader processors, 3x that of the 2080, is only offering 2x more performance.

It will be interesting to see if the 80 CU RDNA 2.0 part can offer double the performance of the 40 CU 5700XT. As for the 40 CU, 2.5GHz Navi 22, it will need a whole lot more than 2.5GHz to come even close to the 2080 Ti. IPC gains are a must if that is to happen.
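To put rough numbers on that scaling argument, here's a minimal sketch using the approximate figures quoted in these posts (the shader counts are the published ones; the performance ratios are this thread's estimates, not new benchmarks):

```python
# Rough "scaling efficiency": how much of the on-paper shader/TFLOPS advantage
# shows up as measured performance. Performance ratios are the approximate
# figures quoted in this thread, not fresh benchmark results.
def scaling_efficiency(resource_ratio, measured_perf_ratio):
    return measured_perf_ratio / resource_ratio

# 2080 Ti vs 5700 XT: ~70% more TFLOPS on paper, ~56% faster in practice.
print(scaling_efficiency(1.70, 1.56))        # ~0.92

# 3080 vs 2080: ~3x the FP32 shaders (8704 vs 2944), ~2x the performance.
print(scaling_efficiency(8704 / 2944, 2.0))  # ~0.68
```

The further the efficiency drops below 1.0, the worse the extra shaders are being fed, which is the same occupancy point made in the reply below.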

Consider why the 2080Ti has 70% more TFLOPS but at most 50% higher performance (by the way, that gap has narrowed since the TechPowerUp reviews, due to driver optimisations for Navi).

Why is that?
The answer is shader occupancy. The 2080Ti is a much wider GPU than the 5700XT. It has 4352 "lanes" that can be fed with data, but in typical gaming loads it is very rare that all of those lanes receive traffic.
Of course, even the 5700XT with its 2560 lanes doesn't see 100% occupancy. If RDNA2 can improve the architecture with, say, a better cache hierarchy that enables better utilisation, or architectural improvements that allow for better "IPC", and then also sees a 30% increase in clockspeeds, it could most certainly get uncomfortably close to the 2080Ti. Particularly at resolutions lower than 4K (higher resolutions favour more shaders; this is why Fiji and Vega tended to close the gap to Maxwell and Pascal at higher resolutions).

For evidence of this, you need only look at Pascal to Turing.

The 2080 with 2944 shaders beat the performance of a 1080Ti with 3584 shaders at similar clockspeeds, because of architectural improvements/increased "IPC".

The 2070S (and 5700XT too if you look at recent benchmarks) with 2560 shaders matches a 1080Ti with 3584 shaders due to a combination of architecture improvements and higher clockspeeds.

Note: remember, TFLOPS is literally number of shaders × 2 operations per clock (fused multiply-add) × clock speed.

A 40CU Navi 22 at 2500MHz might not have as many teraflops as a 2080 Ti at 1900MHz, but that doesn't mean it's impossible for it to match its performance. It is possible.
Whether or not it succeeds in being able to do so as yet remains to be seen. But it's not impossible.
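A minimal sketch of that formula, plugging in the clocks being discussed in this thread rather than official boost specs:

```python
# TFLOPS = shaders x 2 FMA ops per clock x clock speed. Clocks below are the
# figures discussed in this thread, not official boost specifications.
def tflops(shaders, clock_mhz):
    return shaders * 2 * clock_mhz / 1_000_000

print(tflops(4352, 1900))  # RTX 2080 Ti at ~1.9GHz in-game  -> ~16.5 TF
print(tflops(2560, 1905))  # 5700 XT at its rated boost      -> ~9.8 TF
print(tflops(2560, 2500))  # rumoured 40CU Navi 22 at 2.5GHz -> 12.8 TF
```

Whether the 12.8 TF part actually closes the gap then comes down to the occupancy and "IPC" factors described above, not the headline number.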
 

evanft

Member
Those are some pretty crazy specs. I'd love to see AMD shock everyone with a compelling product.

Hopefully they price it well. Honestly, they need to be >$100 lower than the equivalent nVidia card to get my attention.
 

jonnyp

Member
I don't trust AMD GPU leaks. They are usually wrong and only in one direction.

But wouldn't it be nice if they were true and Nvidia vs AMD was actually a choice you had to think about for once?

If getting a 20GB 3080 weren't a no-brainer and AMD actually brought worthy competition to the table, I would buy an AMD GPU for no other reason than that. My last AMD GPU was a 7970 GHz Edition (or a 280X for my wife).

Indeed, sounds too optimistic as per usual.
 

Ascend

Member
Indeed, sounds too optimistic as per usual.
Normally there was no other hardware to support any of these claims. This time the consoles do hint in that direction...


In other news... It seems like Newegg knows something... And it falls exactly in line with the leaks, but the base clocks are quite low... It could be that the clocks are placeholders. Why else would they be lower than the 5700XT?

[Image: Newegg listing screenshot]


 

Allandor

Member
That's the GPU itself, not counting the RAM (or other components like fans, RGB etc). It does seem a bit low though. But we'll see.

As for the clocks, I don't consider 2.5 GHz unrealistic for a 40CU on PC, if a console with 40CU (with 4 disabled) can reach 2.2 GHz.
1. RAM doesn't need that much power.
2. Even Cerny stated that the GPU can't reach a stable 2GHz at all settings. So 2.5 is really a bit too far off. If a 40 CU GPU could reach 2.5 GHz, the PS5 wouldn't have any problem reaching 2.2x GHz stable.

The "leaked" numbers just make no sense at all.
 
1. RAM doesn't need that much power.
2. Even Cerny stated that the GPU can't reach a stable 2GHz at all settings. So 2.5 is really a bit too far off. If a 40 CU GPU could reach 2.5 GHz, the PS5 wouldn't have any problem reaching 2.2x GHz stable.

The "leaked" numbers just make no sense at all.
1. Wrong. GDDR6 can use quite a lot of power, 20-30W. Add higher capacities and power keeps going up.
2. A console environment is not the same as a dGPU. A dGPU can have as much power allocated to just the GPU, or more, since there is no on-board CPU. PC cases are also much bigger than even the chonker PS5, and consequently have much larger dedicated heat sinks for the GPU.

A 40CU GPU at 2500MHz can have 250W all to itself. Meanwhile the PS5 has to share 250W between the GPU, the CPU (which is likely to consume about 40-50W), the SSD and all the other shit in the console. They have variable clocks on the PS5 to make sure it stays within the allotted power budget.
 

BluRayHiDef

Banned
Normally there was no other hardware to support any of these claims. This time the consoles do hint in that direction...


In other news... It seems like Newegg knows something... And it falls exactly in line with the leaks, but the base clocks are quite low... It could be that the clocks are placeholders. Why else would they be lower than the 5700XT?

[Image: Newegg listing screenshot]



The bandwidth of the 6900XT doesn't give me the impression that it'll be able to compete with the RTX 3080 (512 GB/s vs 760 GB/s), regardless of its Infinity Cache.
 

Ascend

Member
The bandwidth of the 6900XT doesn't give me the impression that it'll be able to compete with the RTX 3080 (512 GB/s vs 760 GB/s), regardless of its Infinity Cache.
I suspect they are using some sort of variant of FRC. FRC by itself is focused on GPGPU rather than graphics for gaming, but ultimately AMD might have discovered that it helps for gaming as well.
What is FRC? Here:

--------------
Finally, once simulation accuracy is increased, this thesis proposes a novel approach, called FRC (Fetch and Replacement Cache), which highly improves the GPU computational power by enhancing main memory-level parallelism. The proposal increases the number of parallel accesses to main memory by accelerating the management of fetch and replacement actions corresponding to those cache accesses that miss in the cache. The FRC approach is based on a small auxiliary cache structure that efficiently unclogs the memory subsystem, enhancing the GPU performance up to 118% on average compared to the studied baseline. In addition, the FRC approach reduces the energy consumption of the memory hierarchy by a 57%.

L2 cache misses can be handled by either normal cache entries or FRC entries, however, FRC handles misses faster than the L2 cache since, part of the main memory latency is hidden by moving eviction and invalidation actions out of the critical path. In other words, the higher the number of misses handled by FRC the better the performance.

--------------

Chapter 6.6 is particularly interesting, where they tested cache misses with Polaris and Vega. The funny thing is that FRC increases performance even on Vega, despite its astronomical bandwidth;

-------------
The Vega64 presents better [Operations Per Cycle] values with respect to the RX540 and RX570 across all the studied benchmarks thanks to the improved computational and memory capabilities and higher memory bandwidth. This fact does not prevent the FRC from boosting the OPC over the baseline cache in most applications. Although the average OPC improvements are not as high as those from the RX540 and RX570 GPUs, the FRC still boosts the OPC from 16% (+[4 entries]) to 54% (+[512 entries]) compared to the 256KB L2 cache. In this study, the 4× sized cache also reaches an average OPC improvement of 54% over the baseline. However, such a performance would be achieved with a greater energy consumption and area as discussed above.
------------

Source
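To make the quoted mechanism a bit more concrete, here's a toy average-memory-access-time model with made-up latencies (my own illustration, not the thesis's methodology). The only point is that a miss gets cheaper when eviction/invalidation work is moved off the critical path, which is what the FRC entries are described as doing:

```python
# Toy AMAT (average memory access time) model, in cycles. All numbers are
# illustrative; the thesis's actual evaluation is far more detailed.
def amat(hit_rate, hit_latency, miss_latency, evict_on_critical_path):
    miss_cost = miss_latency + evict_on_critical_path
    return hit_rate * hit_latency + (1 - hit_rate) * miss_cost

baseline = amat(hit_rate=0.6, hit_latency=20, miss_latency=300, evict_on_critical_path=60)
with_frc = amat(hit_rate=0.6, hit_latency=20, miss_latency=300, evict_on_critical_path=0)

print(baseline, with_frc)  # 156.0 vs 132.0 -> misses are serviced sooner
```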
 
Last edited:

Bboy AJ

My dog was murdered by a 3.5mm audio port and I will not rest until the standard is dead
I would love to get an AMD card but if it isn’t competitive with the 3080, pass. And that’s not just raw power, either. I have to see if they can also answer DLSS.
 
It is a bit strange if Big Navi tops out at a very mid-range 256-bit bus. Still, I'd rather have 16GB of VRAM at 512GB/s than only 10GB at 760GB/s. If there isn't a 20GB version of the 3080, I won't buy it.

Those Newegg clocks must be placeholders.
 

BluRayHiDef

Banned
I suspect they are using some sort of variant of FRC. FRC by itself is focused on GPGPU rather than graphics for gaming, but ultimately AMD might have discovered that it helps for gaming as well.
What is FRC? Here:

--------------
Finally, once simulation accuracy is increased, this thesis proposes a novel approach, called FRC (Fetch and Replacement Cache), which highly improves the GPU computational power by enhancing main memory-level parallelism. The proposal increases the number of parallel accesses to main memory by accelerating the management of fetch and replacement actions corresponding to those cache accesses that miss in the cache. The FRC approach is based on a small auxiliary cache structure that efficiently unclogs the memory subsystem, enhancing the GPU performance up to 118% on average compared to the studied baseline. In addition, the FRC approach reduces the energy consumption of the memory hierarchy by a 57%.

L2 cache misses can be handled by either normal cache entries or FRC entries, however, FRC handles misses faster than the L2 cache since, part of the main memory latency is hidden by moving eviction and invalidation actions out of the critical path. In other words, the higher the number of misses handled by FRC the better the performance.

--------------

Chapter 6.6 is particularly interesting, where they tested cache misses with Polaris and Vega. The funny thing is that FRC increases performance even on Vega, despite its astronomical bandwidth;

-------------
The Vega64 presents better [Operations Per Cycle] values with respect to the RX540 and RX570 across all the studied benchmarks thanks to the improved computational and memory capabilities and higher memory bandwidth. This fact does not prevent the FRC from boosting the OPC over the baseline cache in most applications. Although the average OPC improvements are not as high as those from the RX540 and RX570 GPUs, the FRC still boosts the OPC from 16% (+[4 entries]) to 54% (+[512 entries]) compared to the 256KB L2 cache. In this study, the 4× sized cache also reaches an average OPC improvement of 54% over the baseline. However, such a performance would be achieved with a greater energy consumption and area as discussed above.
------------

Source

Does the following summarize what you've explained or have I misunderstood you?

In short, Infinity Cache will minimize or even hide the cost of the cache misses that occur when data is being retrieved from and stored in VRAM, thereby utilizing bandwidth more efficiently. Hence, because of the more efficient usage of bandwidth, the gap between the real-world performance of the 6900 XT's bandwidth of 512 GB/s and that of the RTX 3080's 760 GB/s will be smaller than it would otherwise be (i.e. 512 GB/s will perform more like 1,116 GB/s [118% of 512 GB/s = 604 GB/s of extra effective bandwidth -> 604 GB/s + 512 GB/s = 1,116 GB/s]).
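For reference, the arithmetic behind that comparison, treating the paper's best-case +118% as an upper bound rather than a prediction for Infinity Cache:

```python
# A 118% improvement means multiplying by 2.18. The +118% figure is the FRC
# paper's best case, so treat this as an optimistic upper bound.
raw_bw = 512                       # GB/s on a 256-bit GDDR6 bus
uplift = 1.18                      # +118%
effective_bw = raw_bw * (1 + uplift)
print(effective_bw)                # ~1116 GB/s vs the 3080's 760 GB/s
```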
 
Last edited:

BluRayHiDef

Banned
It is a bit strange if Big Navi tops out at a very mid-range 256-bit bus. Still, I'd rather have 16GB of VRAM at 512GB/s than only 10GB at 760GB/s. If there isn't a 20GB version of the 3080, I won't buy it.

Those Newegg clocks must be placeholders.

VRAM capacity is more important than memory bandwidth? I'm genuinely curious.
 

JohnnyFootball

GerAlt-Right. Ciriously.
It's hard to get excited about the hype for AMD GPUs, because they never end up being quite as good as hoped, although in recent years they have ended up being very good overall. The 5700 and 5700XT merely treaded water, and my expectation is that the 6000 series will be good and do the same. Treading water is good, but we are long overdue for a knock-it-out-of-the-park home run in the GPU market. I doubt we will ever get another 9700 Pro-level bombshell that completely catches Nvidia off guard.
 

CuNi

Member
For 4K gaming it is

The hype for 4K is something I never got, tbh.
Even the push for 8K just by mentioning it etc. is very concerning to me. We had a time in GPU technology when more powerful GPUs were used to deliver better and more complex worlds, but for some time now, 90% of the power uplift has been tunneled into increased resolution. I honestly don't get this trend. I'd prefer for them to stick to 1440p at most and push cards to their limits there, instead of wasting all this computing power on 4K/8K. The exception would be VR, where you need high PPI to eliminate SDE, but those games aren't as demanding for the time being when it comes to VRAM anyway.
 

tusharngf

Member
It's hard to get excited about the hype for AMD GPUs, because they never end up being quite as good as hoped, although in recent years they have ended up being very good overall. The 5700 and 5700XT merely treaded water, and my expectation is that the 6000 series will be good and do the same. Treading water is good, but we are long overdue for a knock-it-out-of-the-park home run in the GPU market. I doubt we will ever get another 9700 Pro-level bombshell that completely catches Nvidia off guard.


I think they will be competitive this time. Navi is going with more VRAM and a smart cache this time, as the rumors suggest. A 128MB smart cache inside the GPU will scale the card's performance, just like what Microsoft did with the Xbox 360 back in the old days. AMD is not hyping their cards these days to give some competition to Nvidia. There is also a rumor that the top-end Big Navi could reach close to 3090 levels of performance.
 

llien

Member
I'd wait for official sources.
Surely 2.2 is doable, based on what we know about the PS5, but 2.2GHz on a full 80CU is not the same as on 36CU.

People who expect competitive high end GPUs will get what they want. People who are after cards beating competitor at half the price are poised to be disappointed.

Can someone translate those numbers into English?

Good enough to make fancy TAA upscaling called DLSS, RT and other buzzwords much "importanter" than they are now, to justify buying green.
 

cucuchu

Member
Some solid points made here. There is still a lot AMD needs to address, though. I need to see practical performance to believe a lot of this, and even if AMD can deliver on performance, I don't see how they can touch Nvidia's RT, which more and more games look to be implementing. It's the big selling buzzword of the upcoming gen (at least it will be during the first few years while these GPUs are current). On top of that, DLSS needs to be answered, and I've yet to see anything substantive pointing to AMD being able to answer it.

And then my largest concern is the drivers for the AMD 6000 series. They shit the bed for a solid 8 months last time....

All that being said, I want AMD to deliver performance at the 3080 level. I want legit competition, and if AMD can answer all my concerns, I will consider going AMD when the 7000 series arrives.
 

pawel86ck

Banned
I don't think the 2080Ti could reach 17TF under anything but LN2. I'd say 15 is accurate. Feel free to prove me wrong.


The 2080Ti is a 'mere' 35%-40% faster than the 5700XT on average. Do you really think that a 100% increase in the CUs (from 40 to 80) and a 15% (1.9 to 2.2) clock boost is going to net just 40%-50% in performance increase...? And that is ignoring the rumored 15% increase in IPC...

And before someone does it... You definitely can't compare TFLOPS from the RTX 3000 series to Navi. You can't even compare them to Turing (2000 series). In order to compare it to Turing, you basically have to multiply the TFLOPS by 0.7, and you get the Turing equivalent TFLOPS. So a 30 TFLOP Ampere GPU is about a 21 TFLOP Turing GPU.


Asus Strix 2080ti 2GHz on air, 17.4TF




watercooled 2130MHz, 18.5TF




The 2080 Ti is around 50% faster than the 5700XT in really demanding games. A 2080 Ti with good air cooling will run at around 1900MHz out of the box thanks to GPU Boost.

[Image: benchmark chart]
 
Last edited:

JohnnyFootball

GerAlt-Right. Ciriously.
I think they will be competitive this time. Navi is going with more VRAM and a smart cache this time, as the rumors suggest. A 128MB smart cache inside the GPU will scale the card's performance, just like what Microsoft did with the Xbox 360 back in the old days. AMD is not hyping their cards these days to give some competition to Nvidia. There is also a rumor that the top-end Big Navi could reach close to 3090 levels of performance.

I'll believe it when I see it, and even then I probably won't believe it. It's just too much of a stretch that they go from 95-97% of 2070 Super performance on their best card (5700XT) all the way to 3090 level performance. For me, it will be a huge win if they sell a card under $500 that offers legit 2080 Ti level performance. I cannot and will not allow myself to get caught up in that level of hype.
 
Seeing that, it looks like the PS5 is a cut-down version of Navi 22? 40 CUs @ 2.5GHz would put it on par with the Series X's 12.15TF.
I have no clue why Sony didn't do that with the PS5, so we're stuck with 36 CUs @ 2.2GHz for 10.15TF; if Sony had decided to use all 40 CUs, it could at least have gotten 11.15TF.
 
I'll believe it when I see it, and even then I probably won't believe it. It's just too much of a stretch that they go from 95-97% of 2070 Super performance on their best card (5700XT) all the way to 3090 level performance. For me, it will be a huge win if they sell a card under $500 that offers legit 2080 Ti level performance. I cannot and will not allow myself to get caught up in that level of hype.

Believe the hype. 'Small Navi' (or RDNA1) was slightly strange, as it wasn't a full-stack release; for whatever reason AMD decided not to release anything to compete with the 2080 and 2080 Ti. Hence it is a tiny chip.

Big Navi will scale from high end to low; it will cover the full product stack eventually.
 

SantaC

Member
I'll believe it when I see it, and even then I probably won't believe it. It's just too much of a stretch that they go from 95-97% of 2070 Super performance on their best card (5700XT) all the way to 3090 level performance. For me, it will be a huge win if they sell a card under $500 that offers legit 2080 Ti level performance. I cannot and will not allow myself to get caught up in that level of hype.
You think that 5700XT is the best card that they could make? They all waited for RDNA to mature.
 

JohnnyFootball

GerAlt-Right. Ciriously.
You think that 5700XT is the best card that they could make? They all waited for RDNA to mature.
The 5700XT wasn't a terrible card. It was much better than Nvidia's $400 offering (the 2060 Super) and performed within a few percentage points of the 2070 Super in many instances. Lacking RT hurt it marketing-wise, but at those price points RT wasn't really a factor until you went to the 2080 and, really, the 2080 Ti. It would have been a game changer if it had been priced at $300 or even $350.
 

duhmetree

Member
Does the following summarize what you've explained or have I misunderstood you?

In short, Infinity Cache will minimize or even eliminate the typical errors that occur when data is being retrieved and stored in VRAM, thereby more efficiently utilizing bandwidth. Hence, because of the more efficient usage of bandwidth, the gap between the real-world performance of the 6900 XT's bandwidth of 512 GB/s and that of the RTX 3080's 760 GB/s will be smaller than it would otherwise be (i.e. 512 GB/s will perform more like 604 GB/s [118% of 512 GB/s = 604 GB/s]).
512GB/s with Infinity Cache would be the equivalent of ~1.1TB/s of traditional bandwidth... in theory at least.

I'm assuming the PlayStation has a form of this? Waiting for October 28th?
 
Last edited:
If it does not have good RT and things like DLSS, it's an absolutely useless card; performance in this case really does not matter. I hope that AMD is focusing on that.


Nvidia doesn't have 'good' RT. And DLSS wouldn't need to exist if they did.

AMD's will be shit too; you only have to look at the next-gen consoles to see where they're at.

The silicon simply isn't there yet.

As for DLSS up-ressing games to 4K... I'd argue that if those Nvidia chips weren't dedicated to/festooned with/shackled by that tensor core bullshit, and had more ROPs and shaders, their rasterization performance would demolish 4K anyway. Alas, Nvidia loves margin, and will always focus on the enterprise-market premium they can charge and filter that architecture down to gamers... Or not, judging by how shit availability has been so far lol.
 

ZywyPL

Banned
Those clocks look a bit suspicious: up to 2.5GHz while all their previous 7nm-based GPUs could barely hit 2GHz. It's really hard for me to believe AMD could all of a sudden make such a huge jump, especially within the same process node. Still, the biggest unknown and the most interesting/important aspect is the RT performance; that's where the next-gen battleground will be. Crysis Remastered and CP2077 will show what those cards are made of.
 

Ascend

Member
Does the following summarize what you've explained or have I misunderstood you?

In short, Infinity Cache will minimize or even eliminate the typical errors that occur when data is being retrieved and stored in VRAM, thereby more efficiently utilizing bandwidth. Hence, because of the more efficient usage of bandwidth, the gap between the real-world performance of the 6900 XT's bandwidth of 512 GB/s and that of the RTX 3080's 760 GB/s will be smaller than it would otherwise be (i.e. 512 GB/s will perform more like 604 GB/s [118% of 512 GB/s = 604 GB/s]).
You calculated with 18% instead of 118%; a 118% improvement on 512 GB/s works out to roughly 1.1 TB/s.
 

geordiemp

Member
Performance-wise. 1.8GHz is slow comparatively.

Have you seen the performance comparison, then?

You're not relying on floppies again, are you? That doesn't work.

There is no XSX equivalent in the leaks so far; AMD has not shoved 14 CUs into an array yet, it seems to be kept at 10.
 
Last edited:

Ascend

Member
Do you think that the average will at least match the bandwidth of the RTX 3080 (760 GB/s)?
It should, but I don't know how it is implemented. If everything is in hardware and does everything automatically, it should work quite well. If they have to rely on drivers for this... Yeah... I'm not as optimistic.
 

LordOfChaos

Member
And this is allegedly at lower wattage than Ampere?

Jeeze, that switch to Sammy's inferior but cheaper node could /plausibly/ lose Nvidia the efficiency crown. We'll see!
 