
AMD's Radeon Navi Review Thread: 5700 Series.

Azurro

Banned
They seem to be really good cards for the price point and usually either beat or are very competitive against the 2060, 2060 Super and 2070. This is so weird, why are people like Remij and ethomaz having a meltdown over this? It's good to have competition, no?

I mean, it's not like anyone is going to use RT with those Nvidia cards when they take away 40% of the performance, it's basically a party trick for those cards.
 
They seem to be really good cards for the price point and usually either beat or are very competitive against the 2060, 2060 Super and 2070. This is so weird, why are people like Remij and ethomaz having a meltdown over this? It's good to have competition, no?

Because Nvidia has got people by the balls. They've created this marketing image that they are miles ahead of AMD, so when something comes along that disproves it, it's hard for people to accept, especially when these peeps just had their pants pulled down paying a premium for an Nvidia card and/or G-Sync monitor :messenger_winking:
 

Marlenus

Member
Needed 7nm for that lol... it just shows they are still way behind and not matching, so my comment is true.
They are forced to cut prices.
False... it has worse minimum fps, but of course you will say it's the drivers ;)

The last comment makes no sense lol

ComputerBase.de compared the 5700 and RTX 2070 (they have the same number of shaders) with both GPUs clocked at 1.5 GHz and they are neck and neck; the 5700 wins by 1%.

That means RDNA matches Turing at equal clockspeed and shader count. Unfortunately they did not compare power consumption of this setup to see which was more efficient.
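Here's a tiny Python sketch of what "matching per clock" means in that comparison; the fps numbers are made up purely to reflect the ~1% gap quoted above, since I don't have ComputerBase's raw data:

Code:
# Performance per clock = work delivered per cycle of core clock, with the clock pinned.
# Both parts have the same 2304 shaders and are locked at 1.5 GHz, so any fps gap is architectural.
def perf_per_clock(avg_fps: float, clock_ghz: float) -> float:
    """Average fps delivered per GHz of core clock."""
    return avg_fps / clock_ghz

rx5700_fps  = 101.0   # hypothetical average fps at the locked clock
rtx2070_fps = 100.0   # hypothetical average fps at the locked clock

print(perf_per_clock(rx5700_fps, 1.5))                              # ~67.3
print(perf_per_clock(rtx2070_fps, 1.5))                             # ~66.7
print(f"RDNA lead per clock: {rx5700_fps / rtx2070_fps - 1:.1%}")   # ~1.0%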
 
Last edited:

ethomaz

Banned
ComputerBase.de compared the 5700 and RTX 2070 (they have the same number of shaders) with both GPUs clocked at 1.5 GHz and they are neck and neck; the 5700 wins by 1%.

That means RDNA matches Turing at equal clockspeed and shader count. Unfortunately they did not compare power consumption of this setup to see which was more efficient.
On a smaller process... that is not exactly matching.
 
Last edited:

ethomaz

Banned
The process affects the clockspeeds that can be reached, not the performance per clock.
Process node directly affects performance per clock.

You can even look at the famous case of the Apple A9 made on TSMC's process vs Samsung's process.
 
Last edited:

FireFly

Member
Process node directly affects performance per clock.

You can even look at the famous case of the Apple A9 made on TSMC's process vs Samsung's process.
How does it affect performance per clock?

I had a quick look at the A9 case you mention, and it seems the only difference is due to power consumption?

"As expected, there's no discernible peak performance difference between the two different A9 models. All of the CPU, GPU, and system performance scores show less than a 2 percent difference, which lies within the margin of error for these tests. "

"Based on the results of our testing, it's clear that both versions of Apple's A9 SoC deliver the same level of performance, but Samsung's 14nm FinFET process appears to offer slightly better power efficiency, extending battery life between 3.5-10.8 percent. "

 

Marlenus

Member
Process node directly affects performance per clock.

You can even look at the famous case of the Apple A9 made on TSMC's process vs Samsung's process.

No, performance was the same, with the Samsung one having slightly better battery life.

From what I can tell there were no direct IPC comparisons made, just overall performance.
 

ethomaz

Banned
There is a performance difference.
And yes, process node affects performance.

[benchmark image: iPhone 6s with TSMC A9 vs iPhone 6s with Samsung A9]
 
Last edited:

Marlenus

Member
There is a performance difference.
And yes, process node affects performance.

[benchmark image: iPhone 6s with TSMC A9 vs iPhone 6s with Samsung A9]

Someone posted a review from Tom's showing the performance differential was within margin of error. Even if there was a verifiable performance difference, that does not mean the IPC is different; it just means one version can clock higher before reaching thermal or power limits.

Node affects overall performance (although far less so now than in the past), but it does not impact instructions per clock.
 
  • Like
Reactions: psn

ethomaz

Banned
Someone posted a review from Tom's showing the performance differential was within margin of error. Even if there was a verifiable performance difference, that does not mean the IPC is different; it just means one version can clock higher before reaching thermal or power limits.

Node affects overall performance (although far less so now than in the past), but it does not impact instructions per clock.
It does not affect IPC (instructions per clock)... I never said that.

It does affect performance per clock.
 
Last edited:

Marlenus

Member
It does not affect IPC (instructions per clock)... I never said that.

It does affect performance per clock.

Those two things are the same. That is why the OG PS4 and the PS4 slim perform the same.

As I said it can affect overall performance since generally smaller nodes allow for greater transistor density so you can fit more features in the same area as older designs.

Another interesting point is that the 5700 XT and the 2070 have similar transistor counts. It looks like AMD used them to add more shaders (the 5700 XT has more shaders than the 2070) and Nvidia used them to add RTX features.

So with equal transistors, equal clocks, equal memory bandwidth and equal shaders, the 5700 (vanilla) and 2070 are neck and neck on average. It is a shame there were no power consumption figures with that setup to see if one was more efficient than the other.
 
  • Like
Reactions: psn

Ascend

Member
How does it affect performance per clock?

I had a quick look at the A9 case you mention, and it seems the only difference is due to power consumption?

"As expected, there's no discernible peak performance difference between the two different A9 models. All of the CPU, GPU, and system performance scores show less than a 2 percent difference, which lies within the margin of error for these tests. "

"Based on the results of our testing, it's clear that both versions of Apple's A9 SoC deliver the same level of performance, but Samsung's 14nm FinFET process appears to offer slightly better power efficiency, extending battery life between 3.5-10.8 percent. "

He doesn't know what he's talking about. He doesn't even understand the term performance per clock, and yet pretends that he knows better... That seems to be very common among a certain crowd. The more fanatical they are, the more prominent this behavior is.

It does not affect IPC (instructions per clock)... I never said that.

It does affect performance per clock.
Can't wait to see you try and explain the difference.
 

ethomaz

Banned
Those two things are the same. That is why the OG PS4 and the PS4 slim perform the same.

As I said it can affect overall performance since generally smaller nodes allow for greater transistor density so you can fit more features in the same area as older designs.

Another interesting point is that the 5700 XT and the 2070 have similar transistor counts. It looks like AMD used them to add more shaders (the 5700 XT has more shaders than the 2070) and Nvidia used them to add RTX features.

So with equal transistors, equal clocks, equal memory bandwidth and equal shaders, the 5700 (vanilla) and 2070 are neck and neck on average. It is a shame there were no power consumption figures with that setup to see if one was more efficient than the other.
They are not.

IPC is basically how many instructions a chip can execute per clock cycle.
Performance per clock is the performance a chip can reach for a given task per clock (in our case a given game).

A chip with better IPC can deliver lower performance because game performance is not solely based on IPC.
People often label performance per clock incorrectly as IPC, and that seems to be the case here.
A chip made on different nodes can have the same designed IPC, but the way electric current travels through the silicon can affect the overall performance of the chip, increasing or decreasing the performance per clock.

It is basically impossible for the same chip A to deliver the same performance on different process nodes.

Transistor count differs from design to design... you can have wider transistors or more compact ones... so the same node/process with the same chip size can have more or fewer transistors depending on how it was designed... it is normal for a smaller chip to have more transistors than a bigger chip... it just means the design is different, where one is using more compact transistors while the other is using wider ones.
 
Last edited:

Vlightray

Member
I am satisfied with AMD, happy to see them bring the heat as a current 1060 owner. My next GPU and CPU look to be AMD. Price is a big consideration and Nvidia's prices are just too damn high.
 

CrustyBritches

Gold Member
The only thing that matters is performance and price.

What do you want to play? What kind of performance do you want? How much do you want to spend? That's all.
Bang for buck is huge, but it can be more nuanced than that. Noise levels, heat, and power consumption are a big deal, and so are overclocking performance and driver stability. I side-graded from an R9 390 to an RX 480 and the difference in power consumption and heat was noticeable. I've had loud fans in the past that are way past annoying.

Navi is actually very competitive on bang for buck, but overclocking is broken on reference models, and they're hot and very loud. Power consumption on the 5700 is great; on the 5700 XT it's good enough that it doesn't matter compared to the competition. Nobody should be buying these reference cards. Just wait for the better partner AIBs.
 

Marlenus

Member
They are not.

IPC is basically how many instructions a chip can execute per clock cycle.
Performance per clock is the performance a chip can reach for a given task per clock (in our case a given game).

A chip with better IPC can deliver lower performance because game performance is not solely based on IPC.
People often label performance per clock incorrectly as IPC, and that seems to be the case here.
A chip made on different nodes can have the same designed IPC, but the way electric current travels through the silicon can affect the overall performance of the chip, increasing or decreasing the performance per clock.

It is basically impossible for the same chip A to deliver the same performance on different process nodes.

Transistor count differs from design to design... you can have wider transistors or more compact ones... so the same node/process with the same chip size can have more or fewer transistors depending on how it was designed... it is normal for a smaller chip to have more transistors than a bigger chip... it just means the design is different, where one is using more compact transistors while the other is using wider ones.

Still the same thing. IPC is a measure of work done in a given time frame (a clock cycle).

Process nodes do not impact the IPC of an architecture if no changes have been made. They do impact power consumption, which in turn impacts thermals, so if you are running a variable-clockspeed part, the part on the better node can perform better by running at a higher clockspeed, but the IPC is constant for each given task.

When we talk about IPC we tend to mean the average over all workloads unless specifying a niche or specific piece of software / instruction set.
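To put the same point as arithmetic (a minimal sketch with purely made-up numbers):

Code:
# performance = IPC x clock. A node shrink changes the clock you can sustain inside a
# power/thermal budget; it does not change the IPC of an unchanged design.
ipc = 1.0                  # same architecture, so same IPC on either node

old_node_clock_ghz = 1.6   # sustainable clock on the older node (hypothetical)
new_node_clock_ghz = 1.75  # sustainable clock on the newer node at the same power (hypothetical)

perf_old = ipc * old_node_clock_ghz
perf_new = ipc * new_node_clock_ghz

print(f"{perf_new / perf_old - 1:.0%}")  # ~9% more performance, all of it from clockspeed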

Performance per clock is some nonsense you have made up.
 
Last edited:

ethomaz

Banned
Still the same thing. IPC is a measure of work done in a given time frame (a clock cycle).

Process nodes do not impact the IPC of an architecture if no changes have been made. They do impact power consumption, which in turn impacts thermals, so if you are running a variable-clockspeed part, the part on the better node can perform better by running at a higher clockspeed, but the IPC is constant for each given task.

When we talk about IPC we tend to mean the average over all workloads unless specifying a niche or specific piece of software / instruction set.

Performance per clock is some nonsense you have made up.
Nope.

Process node impacts performance per clock like I said before.

I even showed a real example... the Apple A9 has different performance per clock even on a similar process (just a different manufacturer).

Nonsense is anybody labeling performance per clock incorrectly as instructions per clock lol
 
Last edited:

Marlenus

Member
Nope.

Process node impacts performance per clock like I said before.

I even showed a real example... the Apple A9 has different performance per clock even on a similar process (just a different manufacturer).

Nonsense is anybody labeling performance per clock as instructions per clock lol

Someone posted a side-by-side test of both parts and the performance was within margin of error. In any event, since those parts have variable clockspeeds, any performance differential comes from clockspeed differences, not IPC differences.

Another real-world example is the PS4 vs the PS4 Slim (the Xbox One S has a clockspeed boost so it is not like for like), and performance there is the same.

Previous consoles have also had node shrinks and the performance stays the same.

Here are the Google results for performance per clock. Please stop making shit up.
 
Last edited:

FireFly

Member
I even showed a real example... the Apple A9 has different performance per clock even on a similar process (just a different manufacturer).

Nonsense is anybody labeling performance per clock incorrectly as instructions per clock lol
The two A9s don't have the same clock speeds, because the clock speed depends on the power consumption, which varies between the two chips.

"Real-world use cases other than intense gaming do not run the CPU and GPU at 100% for extended periods. Instead, the CPU and GPU run at much lower voltages and frequencies the majority of the time and only ramp up to their maximum clock speeds for short bursts of activity. Because an SoC's power versus frequency relationship is nonlinear (meaning each additional 100 MHz of frequency requires larger and larger increases in core voltage), Samsung’s advantage during normal use will be less than what we measured in our tests and is likely to be very close to Apple’s 2-3% figure. "

 
  • Like
Reactions: psn

CrustyBritches

Gold Member
So... is the 5700 worth upgrading to from an RX 580?
I'll be upgrading from an RX 480 and a GTX 1060, and the 5700 is definitely an option I'm considering, if only because I want to stay under $400 and I'm expecting 5700-series AIB cards to run $30-50 more depending on the config.

The 2060 6GB has been as low as $310 for single-fan and $320 for dual-fan models lately. Hopefully we'll see a decent 5700 option from Sapphire like their RX 480/580 Nitro that runs ~$379. Then there's the 2060S at $399. 5700 XT AIBs will probably be $30-50 more, and the $499 2070S is too rich for me.

I'm buying in anticipation of DOOM Eternal and Cyberpunk. What games do you play the most? Look at the benchmarks, or benchies for games on the same engine, and see if it's worth it.

My most played game over this gen...
 
Last edited:

psn

Member
Nope.

Process node impacts performance per clock like I said before.

I even showed a real example... the Apple A9 has different performance per clock even on a similar process (just a different manufacturer).

Nonsense is anybody labeling performance per clock incorrectly as instructions per clock lol
If I benchmark my smartphone and my girlfriend's smartphone (same models), we get slightly different scores as well. That is within the margin of error, dude. Look at every smartphone release thread and the different benchmark scores everyone gets. The difference is so small...
 

JohnnyFootball

GerAlt-Right. Ciriously.
This one's a very interesting video



The video above is coincidentally supported by this one:

The video is interesting, but I can tell you one thing that is absolutely NOT true: Nvidia is not scared of Navi. There is no way. If AMD had dropped a bombshell like a 5700 XT matching a 2080S at $399, yeah, Nvidia might have some concerns. He also says some things that aren't true. The 5700 XT does not beat the 2070S in performance.

Also, aside from Steve at GamersNexus, I can't say any of the reviews of Navi were overly harsh. JayzTwoCents, Pauls Hardware, and Hardware Unboxed actually said a lot of good things about Navi.

Steve was very hard on Navi for the choice of a blower and the heat output.
 
Last edited:

Pagusas

Elden Member
Good to see AMD being smart on price when they can't match feature parity. Should be good for gamers on a budget. If only users like thelastword didn't put such a bad taste in everyone's mouth, making them hate AMD.
 
Also, aside from Steve at GamersNexus, I can't say any of the reviews of Navi were overly harsh. JayzTwoCents, Pauls Hardware, and Hardware Unboxed actually said a lot of good things about Navi.

Steve was very hard on Navi for the choice of a blower and the heat output.

Yeah, GN only shat on the blower design and not the architecture. Rightfully so, if you ask me. Jay is only talking nicely about the last few AMD launches because he doesn't want to come across as an Nvidia shill.
 

llien

Member
So... is the 5700 worth upgrading to from an RX 580?

As per the TPU review, the 5700 is 66% faster than the RX 580 (1080p, 1440p, 4K) and consumes about 40 watts less power.
Basically you get a no-compromise, faster-than-GTX-1080 card for $349 MSRP.
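A quick back-of-the-envelope on perf-per-watt from those two figures (the RX 580 board power below is my assumption; only the "66% faster" and "about 40 watts less" come from the review numbers quoted above):

Code:
# Perf/W sketch: normalise the RX 580 to 1.0 and apply the quoted deltas.
rx580_power_w  = 185.0                 # assumed RX 580 board power
rx5700_power_w = rx580_power_w - 40.0  # "about 40 watts less"

rx580_perf  = 1.00
rx5700_perf = 1.66                     # "66% faster"

perf_per_watt_580  = rx580_perf / rx580_power_w
perf_per_watt_5700 = rx5700_perf / rx5700_power_w

print(f"perf/W gain: {perf_per_watt_5700 / perf_per_watt_580 - 1:.0%}")  # ~112% with these assumptions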

I'd wait for AIBs though. I never understood why AMD/NV even bother with reference cards, AMD with its "we can't guarantee thermals otherwise" blower design in particular. It's fucking 2019.

I can tell you one thing that is absolutely NOT true: Nvidia is not scared of Navi.
The Supers were no doubt NV's response to Navi. Pricing-wise, it was the first time AMD showed its hand so early, and it looks like it has outplayed NV.

As of April 2019, NV had 80% inventory growth Y/Y and an average inventory processing period of 116 days, vs 71 the year before, doing not much better than in January this year, when they admitted sales were 25% lower than expected.

So while one company is struggling to keep up its obscene margins (to prop up the inflated stock, which dropped sharply in January from 250 to a still hefty 160), its competitor is releasing great, competitive products.

Based on the power consumption and die size of the released chips, I'd be scared of what a 5800/5900 could do if I were Nvidia, unless Ampere is just around the corner (and it doesn't look like it's coming earlier than Q2 2020). Having AMD at 2070 level already hurts, but a bigger Navi would get AMD way past 2080 levels at what could easily be about $549. Imagine the "nightmare" choices (OMG, dropping the price and lowering the margins, or keeping the price and losing market share) that NV's Huang would need to make.
 
Last edited:
Just reading that a 5700 XT can be overclocked to near a 2100 MHz core clock. The 7nm process node combined with a much more efficient arch in RDNA has made this a really competitive release. It wasn't expected; there was so much negativity from Nvidia owners post-Radeon VII release that gave the feeling this was going to be a disaster, because they were foolishly basing Navi efficiency on the Radeon VII.
 

CrustyBritches

Gold Member
Didn't see this posted yet...
Custom Radeon RX 5700-series Only by Mid-August: AMD



Hmm. Blowers are shit, Scott. You need one running full tilt while you're trying to play a game (you probably don't game, so let's say while watching Netflix). The Super series reviews show even a 2-fan setup takes a crap on a blower setup. The 2070S is only 73°C/33 dBA while the 5700 XT is 92°C/43 dBA.

Ever since the Navi PCB leak, guys like Buildzoid and Steve from GN were up in arms over the blower config. I think reviewers have gone easy on AMD in this regard; these blower cards are hot and loud, and overclocking is broken. Now partner AIBs might be a month out?
 
Last edited:
Early 7nm vs mature 12nm... what a joke kkkkk... next time you will say 12nm (which is actually 16nm++ marketed as 12nm) is better than 7nm :D :D :D

Yep... yes.

[benchmark charts]


Indeed.

Man, can't wait for non-blower Navi cards, so all these peeps who post benches with a 2 fps difference yelling how much better Super is can shut the fuck up. :messenger_tears_of_joy:
 
Last edited:

thelastword

Banned
So... is the 5700 worth upgrading to from an RX 580?
Absolutely, the 5700 beats the 2060 Super, the 2060, Vega 64, GTX 1080, Vega 56, 1070 Ti... If you upgrade from a 580, you will have a pretty competent 1440p card...


Anyway, for your benefit and others', I'm sure you heard about the price-to-performance metrics for the 5700 and how these cards are up there at the top of the table... Here is a benchmark to further solidify my response to you above...


Here is a $350 Radeon 5700 vs a $400 RTX 2060 Super... paired with a Ryzen 3600 clocked at 4.2 GHz.

 

Armorian

Banned
Hm, curious that in games like AC and GTA the quite notable 1080p difference is gone at 1440p.

AMD drivers (at least in DX11) hammer the CPU more than Nvidia's, so at lower resolutions GTX/RTX cards can have higher framerates. It has been known for years and never fixed by AMD.

Example from HR, all the AMD cards hit a wall with an OC'd 9900K:

[benchmark chart]
 
Last edited:

thelastword

Banned
First OC breakdown on Navi, with watercooling... I'm not sure if he said there is a clock limit of 2100 MHz on the core; someone familiar with German, please chime in on that, and maybe give us English speakers a summary of what we may have missed...



Some slides from the video...

[slides from the video]


Looking at the XT vs the 2070 Super, Navi's 1% lows are better even at stock clocks...

The other thing I wanted to touch on is that by simply maxing the power limit, you get more consistent clocks from the Navi cards; well, that goes for every Radeon card since Reagan was in power... So I hope most reviewers are doing that at least... or is it too much? (FEs are already OC'd out of the box anyway)...

Here's an instance where the only thing done was to raise the power limit on the 5700 XT, and clocks averaged way over 1900 MHz throughout the tests... just a simple slider...

 

thelastword

Banned
Radeon 5700 XT vs 2080 Ti = 6 HEVC encodes vs 2
Radeon 5700 XT vs 2080 = 6 HEVC encodes vs 1

The Navi cards are really encoding and streaming beasts... Now, I know few techtubers have touched on streaming + gaming, which happens to be huge now, especially with all these esports titles... Ryzen is at the top for streaming whilst playing, so I'm a bit miffed there is not much coverage there... Yet the Radeon Navi GPUs are really on a next level there... So I always told folk, even before these products came out, that Ryzen + Navi would be a good combo, because AMD would ensure that pairing the two together would net you even better performance... Essentially they would work very well together, maybe at optimum.

So then, why is the tech press not talking more about Navi's encoding + streaming capabilities? Why aren't we hearing more about CAS (FidelityFX), which is here day 1... I heard so much about DLSS from the tech press and how it would change gaming, way before Turing launched... and I heard more about it at launch, when no game had it implemented... Yet here, at Navi's launch, we have CAS (FidelityFX) games and you hear little about it, and the feature is quite impressive too, unlike DLSS... Why isn't the press talking about Radeon Image Sharpening, which works on every game day 1? It's impressive, requires no developer input, and it just works... Why isn't the press talking about Radeon Anti-Lag? It's proven not to be Nvidia's Fast Sync (AMD has had a Fast Sync technology too all these years); RAL (Radeon Anti-Lag) is much more advanced and the results prove that... Yet so little coverage...

The media downplays PCIe 4.0 because AMD has it, but many of the benefits you see with encoding/streaming can be attributed to PCIe 4.0... Such technologies and encoding/streaming will also be vital for Stadia streaming, xCloud streaming, PS Now streaming, and streaming/encoding/game capture on future consoles (PS5/XB2), and said features are going to be so much faster and better on future consoles vs current ones... 4K 60fps encodes and streams, very high quality with H.265, are a lock, but people who are just stuck in the Nvidia loop are not thinking or looking at the results and possibilities... They just claim that AMD has no features, yet all of AMD's features are highly functional and effective, and they work well...

So much for the press trying to laugh at FidelityFX, Anti-Lag and PCIe 4.0... It's shameful that this is the type of coverage we're getting, tbh; for persons who should be jolted by tech rather than take the side with more cash and lootbags atm, it's concerning that the coverage is not more professional and thorough... I guess I'll leave this here, but I just want to clue the tech press in: PCIe 4.0 will also be vital for raytracing on future cards (mindblown.gif); surely you will need lots of bandwidth and speed to do RT worth a damn at high framerates and rez, or to push more than just one RT feature per game... And another thing it will be good for is, guess what? Physics... I can see future graphics cards coming with a physics co-processor or core; of course, with an Infinity Fabric setup on a GPU, that's very likely, and you will need all that bandwidth to run IF or multi-core GPU setups... Whilst everybody is so fixated on ho-hum hybrid raytracing that's hardly functional on monolithic dies or current Turing GPUs, the tech press fellates RTX to a clean shine with little results to show and few RTX games in circulation... On the flip side, they minimize or obscure great day-1 features for AMD and run the babble that AMD has no "features" and that Nvidia is just swimming in them... For shame...


So yes, we need to hear more about these....
[feature slides]
CAS (Contrast Adaptive Sharpening) 1440p vs 4K native

[screenshot comparisons]

The big bonus for RIS/CAS/FidelityFX is when you use GPU scaling in Radeon settings and use RIS... It's much more impressive than traditional upscaling...



And of course the performance uplift with CAS/RIS + GPU upscaling on a 5700 vs native 4K... double the framerate with a sharper image than traditional 1440p upscaled to 4K...
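A rough sketch of why the framerate roughly doubles (to a first approximation, shading cost scales with pixel count, and the sharpening pass itself is nearly free):

Code:
# Pixels shaded per frame at native 4K vs at the 1440p render target that RIS/CAS sharpens.
native_4k   = 3840 * 2160
render_1440 = 2560 * 1440

print(native_4k / render_1440)   # 2.25 -> about 2.25x fewer pixels to shade per frame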

[benchmark chart]



Then we have Radeon Anti-Lag

AMD offers 7ms less lag than Nvidia normally, and in the worst-case scenario for NV, AMD offers 16ms less lag...
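To put those milliseconds in context (a minimal sketch; the 60 and 144 fps frame times are just common reference points, not from the chart):

Code:
# Convert the quoted latency savings into fractions of a frame.
frame_ms_60  = 1000 / 60     # ~16.7 ms per frame at 60 fps
frame_ms_144 = 1000 / 144    # ~6.9 ms per frame at 144 fps

for saved_ms in (7, 16):
    print(f"{saved_ms} ms saved = {saved_ms / frame_ms_60:.2f} frames at 60 fps, "
          f"{saved_ms / frame_ms_144:.2f} frames at 144 fps")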

[latency comparison chart]



Be sure to check this video out, one of the best and most informative videos out there on YouTube for the Navi cards and their "features"...

VERY INFORMATIVE
 