
Radeon RX Vega thread

Papacheeks

Banned
The 970 launch MSRP was $329, not $399. The 1070 is, and has always been, a horrible value. The sales numbers reflected this and were quite sluggish pre-crypto boom, evidenced by slow gains in the Steam hardware survey, and reputedly why Nvidia dropped the 1080 and 1070 MSRPs to $499 and $349 earlier this year. The 1070 only recently started making large gains when custom cards began routinely selling at sale prices around $350, and of course with the crypto boom.

Yeah, I was thinking of something else; I looked up the 970 again. And wow, how the fuck do you go from $329 for the GTX 970 to $459.99 for the 1070?
 

Papacheeks

Banned
1070 launched at $380. At least if we're comparing MSRPs here and not something else.

But that's not the price they actually were. The MSRP was supposed to be $380, but Founders Editions were all there were for a while before the AIB versions launched, and those were $460. Even after the signature series you couldn't find a card for under $399.

The MSRP didn't mean shit, which is why a lot of people, myself included, were so put off.

Because the 1070 was basically the 980 replacement. Same as the 1060 was the 970 replacement.

Makes no sense; stop trying to rewrite what the cards were.

The GTX 1070 replaced the 970, the GTX 1080 replaced the 980, the GTX 1080 Ti replaced the 980 Ti.

The GTX 1060 replaced the GTX 960.

Like, where is your logic?
 

Bolivar687

Banned
Makes no sense; stop trying to rewrite what the cards were.

The GTX 1070 replaced the 970, the GTX 1080 replaced the 980, the GTX 1080 Ti replaced the 980 Ti.

The GTX 1060 replaced the GTX 960.

Like, where is your logic?

Pascal was a major value leap over Maxwell. 1060=980, 1070=980Ti.
 
They could switch to ARM like Nintendo did. The question is if they will or not. x86 is looking like a dead end in terms of future consoles though.

Well, looking at how great a reception Ryzen has had, I don't really see a reason to change camps. Not to mention BC is pretty much a must with this gen transition, so going full AMD again is going to make that far easier.
 

SRG01

Member
Perhaps if you undervolt/underclock you will reach near Polaris performance...

How about overclocking an RX 580 to the same power draw as a Vega 56 or Vega 64? What performance level would it reach?

Polaris for sure won't even come close. The Vega 64 LE was drawing high-300ish watts under power saving mode with much better performance.

What I was really getting at was: AMD's future APUs will use Vega architecture for the GPU. How well does Vega scale down in terms of performance per watt?
 
D

Deleted member 17706

Unconfirmed Member
It seems like there are some games/situations in which the RX Vega 64 comes ahead of the 1080 just a bit, but in general, the 1080 is the stronger card at the moment, especially considering the greatly reduced power consumption.
 
It seems like there are some games/situations in which the RX Vega 64 comes ahead of the 1080 just a bit, but in general, the 1080 is the stronger card at the moment, especially considering the greatly reduced power consumption.

It's important to note that the majority of the benchmarks are against the 1080 FE. Any third-party variant of the card will be substantially better thanks to factory overclocks alone. The FE cards are clocked pretty conservatively and aren't great in the cooling department. Get something like a Classified or Strix variant with a decent OC and the GTX 1080 is likely better than or as good as Vega in 90% of scenarios.
 

Marmelade

Member
Am I right in assuming this should beat the RX Vega across the board?
https://www.newegg.com/Product/Product.aspx?Item=N82E16814127942

Vega64 is very slightly behind or at best equal to a 1080FE.
Any custom 1080 will perform even better.



Either buy a 1080 or wait for custom Vega.
Unless you have a specific need for a blower, buying a reference Vega card isn't a good idea, even less so at $600.
 

ZOONAMI

Junior Member
1070 launched at $380. At least if we're comparing MSRPs here and not something else.

No it didn't. It was $450 at launch because the FE was the only card you could get. Source: myself, I bought a 1070 at launch for $450 because that was the only option. Prices still haven't really hit $380 unless you're talking about an aftermarket card on sale (with about the shittiest cooler you can get); generally the aftermarket cards and the FE are well over $400. Edit: FE still at $469. I just bought an SC for $440 like a month ago too. $380 launch price my ass.
 

ZOONAMI

Junior Member
Because the 1070 was basically the 980 replacement. Same as the 1060 was the 970 replacement.

Eh, to be fair though, the 970 was a hell of an overclocker. Not so much with the 1070. The 970 was right in line with a 780 Ti too, so I'm not really sure what's surprising about a pretty big performance boost in a generation (although AMD hasn't really managed to figure that out since the 290 series). The 1070 was the 970 replacement at a $100+ higher price point.
 

thelastword

Banned
you're a clown
A 13.7 TFLOP GPU with 27.4 TFLOPs in FP16 supports my statement. You can be as dense as you want, but the RX 480 was 15% behind the 1060 at launch, and look at it now....
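For anyone wondering where those numbers come from, here's a rough back-of-the-envelope check (a sketch in Python, assuming Vega 64's 4096 stream processors and the liquid-cooled card's ~1677 MHz boost clock; RPM simply packs two FP16 ops into each FP32 lane):

Code:
# Rough sketch of the quoted TFLOP figures (assumed specs, not official numbers)
stream_processors = 4096
boost_clock_ghz = 1.677        # liquid-cooled Vega 64 boost; the air-cooled card is lower
ops_per_clock = 2              # one fused multiply-add counts as 2 FP32 ops

fp32_tflops = stream_processors * boost_clock_ghz * ops_per_clock / 1000
fp16_tflops = fp32_tflops * 2  # Rapid Packed Math: two FP16 ops per FP32 lane

print(round(fp32_tflops, 1), round(fp16_tflops, 1))  # ~13.7 and ~27.5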

You are really adamant about this whole Vega 64 > 1080 Ti thing, aren't you? All evidence points to the Ti smashing the 64, and yet you just keep on believing. It's not drivers, it's not bad benchmarks by literally everyone that has done benchmarks. It's the card. It's just not as powerful; it's time to come back to reality.
The Ti is better atm, but I can see Vega 64 catching up with better drivers and eventually outpacing the 1080 Ti in games developed for its unique architecture. HBCC and RPM, along with the raw compute advantage Vega has over Pascal, are sure to come into play eventually...

Let's look at Vega 56 as an example. It's already solidly beating the 1070 in every test, even Nvidia-optimized games, so how much will it stretch that lead in 3 months with better drivers? And that's with Vega 56 being compared to an AIB 1070; it still beats it in every game minus PUBG and another Nvidia-optimized game, Overwatch..... In a little bit, stock Vega 56 will absolutely demolish the 1070 due to better drivers, and AIB Vega 56 will be going toe to toe with and even surpassing the GTX 1080, mark my words on this one.. It will be even more profound when developers make use of HBCC and RPM in their titles...

I see the same for Vega 64; I see huge increases forthcoming there as well. If AMD allows AIB partners full access beyond the BIOS and current VRM restrictions, then I can see some pretty impressive improvements on Vega 64 AIBs, notwithstanding the driver gains we should expect........ As it stands, AMD is focused on DX12 and the Vulkan API as opposed to DX11; this is where the cards will shine and really show their power, not so much for yesterday's games but for future titles....

Therefore, simply saying NV is faster is not really accurate. The same was said of the 1060 being a faster card than the RX 480, but it's really not that simple.....

There's something strange about those benchmarks; they don't fit with benchmarks from other sites.
Yeah, I'm looking at other benchmarks. The 1080 Ti is the current king in benches over the 64, but the 64 wins some too, and there are quite a few games where the 1080 Ti does not have a substantial lead, which is where the 64 can catch up with mature drivers. In some benchmarks, especially NV-focused titles, the margins are greater no doubt, but I'm not sure anything can be done about that. I imagine future titles that make use of the Vega features will be a hard pill for the Pascal cards in comparison as well.... On the top end, NV is in front, well, with the 1080 Ti at least... I do see that gap closing though.

You should also note that HBCC and primitive shaders are not yet enabled, for what it's worth... Sorry to say, but AMD GPUs are never firing on all cylinders on day one, though I'm sure lots will be ironed out eventually. I even expect some impressive gains for the Frontier Edition cards in gaming mode eventually... I guess it's true that they just wanted these cards out this quarter and would iron out the kinks... a bit later ;)....

Hitman, Titanfall 2, Battlefront...? Sorry, how are they optimized for Nvidia? GTA V is pretty agnostic. Metro LL, no idea - perhaps so. Care to clarify?
I didn't mean those..... In any case, Hitman, Titanfall and Battlefront have always done well on NV cards.... GTA V just prefers the Intel+NV combo, that can't even be argued (just watch any benchmark at all....); things may change with RDR 2 or GTA 6 since the GPU+CPU landscape has changed a bit now..... Metro has always had a solid lead on NV cards in every test I've seen of it.... If an AMD GPU is beating a comparable NV card in Metro with beta drivers, then it must be a supremely beastly card in terms of raw power....



OTOH


I'm hearing news of a new RX Vega firmware update?
 

TSM

Member
The Ti is better atm, but I can see Vega 64 catching up with better drivers and eventually outpacing the 1080 Ti in games developed for its unique architecture. HBCC and RPM, along with the raw compute advantage Vega has over Pascal, are sure to come into play eventually...

Meanwhile, in reality, let's see what the AMD driver team has been busy working on during the all-important launch window to reassure buyers:

https://hothardware.com/news/amd-radeon-rx-vega-mining-block-chain-ethereum

There's that raw compute advantage you were talking about. Hope you like mining.
 

Reallink

Member
Pascal was a major value leap over Maxwell. 1060=980, 1070=980Ti.

The 970 traded blows with the 780 Ti just as well as the 1070 did with the 980 Ti. The difference is the 970 did so at more than $100 less than the 1070... LOL @ major value leap? The 960 was pretty middling performance-wise, but competed with the 770 at only $199... a pretty far cry from the $260-270+ the 6GB 1060s sold for.
 
I am still wholly baffled by how AMD always seems to find new ways to bring back old problems. What exactly is the issue with AA this time?

https://www.hardocp.com/article/2017/08/14/amd_radeon_rx_vega_64_video_card_review/17

An issue we weren't expecting is traditional multi-sample or super-sample anti-aliasing performance.

Based on our testing there is indication that MSAA is detrimental to AMD Radeon RX Vega 64 performance in a big way. In three separate games, enabling MSAA drastically reduced performance on the AMD Radeon RX Vega 64, and the GTX 1080 was faster with MSAA enabled. In Deus Ex: Mankind Divided we enabled 2X MSAA at 1440p with the highest in-game settings. The GeForce GTX 1080 was faster with 2X MSAA enabled. However, without MSAA, the AMD Radeon RX Vega 64 was faster. It seems MSAA drastically affected performance on the AMD Radeon RX Vega 64.

In Rise of the Tomb Raider we enabled 2X SSAA at 1440p. Once again, we see the AMD Radeon RX Vega 64 drop in performance. The GeForce GTX 1080 was faster with 2X SSAA compared to the Radeon RX Vega 64 with SSAA. Finally, Dirt 4, which is playable at 8X MSAA, was faster on the GTX 1080.

Combined, this is evidence enough that enabling forms of anti-aliasing like MSAA or SSAA is for some reason performance-impacting on the AMD Radeon RX Vega 64. We need to do more testing on this for sure.

The conclusion so far is thus: when using shader-based AA methods like SMAA, FXAA, temporal AA, or CMAA, the AMD Radeon RX Vega 64 performs much better and can compete with the GTX 1080. However, if enabling any level of MSAA or SSAA, performance decreases more on the AMD Radeon RX Vega 64 and the GTX 1080 gives more performance in that scenario. Therefore, currently, the AMD Radeon RX Vega 64 is best played with shader-based AA methods versus traditional MSAA or SSAA in games for now. It will be interesting to see if this can be addressed in a driver update.

inb4 "Nobody uses MSAA anymore past DX9 and deferred rendering engines became the standard"

(Which is largely true; however, if SSAA is broken that also means downsampling is broken. Not that downsampling has ever really worked on the AMD side: VSR has ridiculous limitations which make it useless 99% of the time. Nvidia has had DSR working with any arbitrary resolution for years now.)

Meanwhile, in reality, let's see what the AMD driver team has been busy working on during the all-important launch window to reassure buyers:

https://hothardware.com/news/amd-radeon-rx-vega-mining-block-chain-ethereum

There's that raw compute advantage you were talking about. Hope you like mining.

AMD FineMine™ Driver Technology
 

Durante

Member
inb4 "Nobody uses MSAA anymore past DX9 and deferred rendering engines became the standard"

(Which is largely true ...)
Actually, in this age of VR being one of the major reasons to buy a high-end GPU it's less true than it has been in a long time.

Almost all the high-end VR games use MSAA. AMD was already behind in real-world VR performance; this will just make that worse (unless fixed).
 
Miners will make the VEGA 64 impossible to get for months. I fucking hate Bitcoin.

Sigh, I guess I'll just dump more money and get a 1080 Ti. I really don't want to, because Nvidia's driver support has been absolutely terrible the last year or so. But seriously, unless I get advance knowledge of Vega 64 stock, sit with my finger on the refresh button for hours beforehand, and get LUCKY on top of that, the odds of beating out some scummy Bitcoin miner are nil. I really want this card; I can only imagine how much performance there will be to gain via tweaks. Plus I'm buying the 12-core Threadripper pretty soon, and an all-AMD system with a new case and a solid, tasteful red LED design on the inside really appeals to me. It'll take months for the Vega 64 to be easy to buy online without having to deal with scalpers.
 
Miners will make the VEGA 64 impossible to get for months. I fucking hate Bitcoin.

Sigh, I guess I'll just dump more money and get a 1080 Ti. I really don't want to, because Nvidia's driver support has been absolutely terrible the last year or so. But seriously, unless I get advance knowledge of Vega 64 stock, sit with my finger on the refresh button for hours beforehand, and get LUCKY on top of that, the odds of beating out some scummy Bitcoin miner are nil. I really want this card; I can only imagine how much performance there will be to gain via tweaks. Plus I'm buying the 12-core Threadripper pretty soon, and an all-AMD system with a new case and a solid, tasteful red LED design on the inside really appeals to me. It'll take months for the Vega 64 to be easy to buy online without having to deal with scalpers.

Miners? Shit seems to just be understocked all around. Grabbed mine 4 minutes after it went live. I don't know if anyone has gotten a shipping date for ordering off Amazon. Supposedly reps are telling people it won't ship out until September.
 
This sneaky price increase is pretty shady if true, lol.

From what I've seen, that extra $100 of MSRP is for the "black pack," which includes 2 games. Still shady. Not many had just the card at $499. I'm so close to just grabbing a G-Sync monitor to use with my 970 and waiting a few more months for Black Friday. The whole experience has soured me on AMD.
 

llien

Member
That 2% boost (TechPowerUp measured 1%) from going from "balanced" to "turbo" mode while consuming 70W extra... which bright mind at AMD thought that made sense?


Raven Ridge APU will be interesting. It could get into premium notebooks, so with those margins AMD could afford to slap in underclocked Vega (so power consumption won't be an issue).


Miners will make the VEGA 64 impossible to get for months. I fucking hate Bitcoin.

No availability problems in Germany. €650 and it is yours (the price was quickly bumped by €50).
Mining "rumors" appeared to be BS. (I don't get why people expected the RX to perform 3 times better than the FE at mining.)

I'd wait for AIB cards, anyhow.

This sneaky price increase is pretty shady if true, lol.

What do you mean? I don't see anything special going on. Some shop figured it could sell it at +€50, and others followed suit.
If supply isn't a problem, prices will drop to reasonable levels soon.
 
Some info on Vega 56 pricing/UK pricing for those in Blighty:

Gibbo said:
OK, let me start with: AMD have rumoured a launch price of $399. Judging by how much AMD seem to be favouring the £££, one can hope that will mean a launch price of £349, but whether they allow us to sell 100, 1,000 or 10,000 at this price is still unknown. As of now we don't even have stock of this part. But at £349 this thing will sell like crazy; that's not much more than some of the faster-clocked RX 580s went for before the whole mining craze started.

As such, if we see £349 in the UK, AMD are being very kind. I'd not be surprised to see a more 1:1 conversion on that part where $399 is £399, but for sure fingers crossed for £349 and that they let OcUK sell thousands at that price.

But remember this: as of right NOW, AMD are saying a LAUNCH PRICE of $399. That means once the volume they set is sold, the price will be at the pack prices, which are unknown to me, but you can bet they are like $449-$499.

No chance imo that these release at £350. I'm pretty sure they'll launch at £400, which is cheaper than the 1070s that now start from around £430. Hopefully custom Vega 56s will sell for close to £400, but when they release in September, Nvidia could easily lop £50 off the 1070 MSRP.
 

ISee

Member
OTOH
I'm hearing news of a new RX Vega firmware update?

Probably just to improve compatibility with overclocking and monitoring software, but who knows. Even WattMan seems to barely work with Vega. I wouldn't bet my money on any kind of performance improvements, especially because reviews and tests are already up.
If we are lucky the newest firmware will include a BIOS that allows Vega 56 to use more than 300W of power, but I'm not even holding my breath for that.
 
Probably just to improve compatibility with overclocking and monitoring software, but who knows. Even WattMan seems to barely work with Vega. I wouldn't bet my money on any kind of performance improvements, especially because reviews and tests are already up.
If we are lucky the newest firmware will include a BIOS that allows Vega 56 to use more than 300W of power, but I'm not even holding my breath for that.

There is large scope for performance to improve over time with Vega though, more so than with Polaris, which went from 10-12% behind the 1060 at launch to 0-4% about 6 months later. You've got the HBC and rapid packed math inside Vega as well. I reckon these will mature very nicely over time.
 

ISee

Member
There is large scope for performance to improve over time with Vega though, more so than with Polaris, which went from 10-12% behind the 1060 at launch to 0-4% about 6 months later. You've got the HBC and rapid packed math inside Vega as well. I reckon these will mature very nicely over time.

Hmm, most claims that Polaris is faster now than before seem to be based on newer RX 580 benchmarks. The problem with that is that the 580 is basically an overclocked 480. Some people even tend to OC an RX 580 even further and benchmark it against a 1060 FE, which is problematic.
It's hard to find conclusive evidence for the "Polaris improved significantly" statement, and I tried. Let's take a look at some benchmarks from pcgh.de from 2016 (480 release) and 2017 (580 release).

1. Witcher 3 (different benchmark scenes, but the delta shouldn't be affected by this in a big way)
2016 said:
The GTX 980 is 13% faster than the RX 480

2017 said:
The GTX 980 is 10% faster than an RX 480, and 6% faster than an RX 580

2. GTA V
2016 said:
+13% for the GTX 980

2017 said:
+16% for the GTX 980 over the RX 480, but just 9% for the 980 over the 580.

3. Crysis 3
2016 said:
+25% for the 980

2017 said:
+23% for the 980 over the RX 480, +14% for the 980 over the RX 580


I picked 980 results because we all know that Nvidia doesn't care about old tech once new stuff comes out, and there aren't any driver improvements for the 9xx series anymore (not actually true, but widely believed). In the end, from what I see, there are performance improvements because of driver updates. There always are, no matter the vendor. But it's not as significant as most people try to tell me. The huge gains came from the higher clocks of the RX 580 over the RX 480. Which is fine, but the huge driver improvements are rather questionable. Look for yourself (2017 benchmarks) ; (2016 benchmarks). The 480 is mostly behind the 1060/980 in 2016/17, while the 580 is mostly very close to the 980/1060.
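Averaging those three PCGH deltas (just a quick sanity check in Python on the numbers quoted above), the driver-only narrowing is tiny compared to what the 580's clocks add:

Code:
# GTX 980 lead over the RX 480/580 in the three PCGH tests above (percent)
lead_2016_over_480 = [13, 13, 25]   # Witcher 3, GTA V, Crysis 3 at the 480 launch
lead_2017_over_480 = [10, 16, 23]   # same titles re-tested around the 580 launch
lead_2017_over_580 = [6, 9, 14]

avg = lambda xs: sum(xs) / len(xs)
print(avg(lead_2016_over_480))  # 17.0
print(avg(lead_2017_over_480))  # ~16.3 -> drivers closed well under one point
print(avg(lead_2017_over_580))  # ~9.7  -> the rest comes from the 580's higher clocks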
 

Locuza

Member
Computerbase has a great update with Vega 64 vs. Fury X, the HBCC impact, and separate overclocking of the GPU and memory:
https://www.computerbase.de/2017-08/radeon-rx-vega-64-56-test/7/#abschnitt_vega_64_vs_56_vs_fury_x_bei_gleichem_takt


1) Vega @ 1.05 GHz with 512 GB/s vs. Fury X @ 1.05 GHz with 512 GB/s.
On average Vega (Gen 5) is only 6% faster per clock than the Fury X (Gen 3).
Polaris 10 (GCN Gen 4) was already 7% faster than Tonga (Gen 3) (but keep in mind that the games and settings may differ):
https://www.computerbase.de/2016-08/amd-radeon-polaris-architektur-performance/2/#abschnitt_die_gpugenerationen_im_benchmark

Looking at the individual games there is a big variance in performance.
Ashes of the Singularity and Gears of War 4 perform 6% better on the Fury X, while in Battlefield 1 Vega wins with 25% better performance per clock!
Another big winner is Titanfall 2 with 24%.



-----

2.) HBCC.
CB tested the HBCC with 8GB of system memory allocated (out of 32GB) and the power target maxed.
On average the results were nearly identical, both for average performance and the 99th percentile, but again there is variance.

Certain games perform worse overall with the HBCC, certain games perform worse but have a better 99th percentile, and some games perform better or identically but with a worse 99th percentile.

It's quite interesting to see that there can be a bit better performance and up to 12% better 99th percentile, so there is potential even without a strict VRAM limit.

This feature should improve with time. (fine wine and so on)

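If it helps to picture what HBCC is doing, here's a toy sketch of the general idea (demand-paging a bigger address space through local VRAM, LRU-style). Purely illustrative Python, nothing to do with AMD's actual hardware implementation:

Code:
from collections import OrderedDict

# Toy model: VRAM acts as a cache of pages backed by a larger system-memory pool.
class ToyHBC:
    def __init__(self, vram_pages):
        self.vram_pages = vram_pages
        self.resident = OrderedDict()  # page id -> True, ordered by recency
        self.faults = 0

    def touch(self, page):
        if page in self.resident:
            self.resident.move_to_end(page)        # recently used pages stay resident
        else:
            self.faults += 1                       # page has to be fetched over PCIe
            if len(self.resident) >= self.vram_pages:
                self.resident.popitem(last=False)  # evict the least recently used page
            self.resident[page] = True

cache = ToyHBC(vram_pages=4)
for p in [1, 2, 3, 4, 1, 2, 5, 1]:
    cache.touch(p)
print(cache.faults)  # 5: the extra fault is the cost of not fitting everything in "VRAM"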


----

3.) Separate overclocking of the GPU and memory shows little scaling on either side, and fortunately Vega 64 isn't bandwidth starved.
15% more bandwidth only results in 3% better performance on average.
There are games like Watch Dogs 2 and Titanfall 2 where the scaling is 6-7%.
Roughly 4.4% more GPU clock results in 2% more performance.
Together, about 6% can be gained on average.
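To put those deltas in perspective (reusing only the Computerbase numbers above), the scaling efficiency is well under 1:1 on both axes:

Code:
# Scaling efficiency from the Computerbase OC deltas quoted above
mem_perf_gain, mem_bw_gain = 0.03, 0.15    # +15% bandwidth -> +3% performance
gpu_perf_gain, gpu_clk_gain = 0.02, 0.044  # +4.4% core clock -> +2% performance

print(mem_perf_gain / mem_bw_gain)   # 0.2   -> only ~20% of the extra bandwidth shows up
print(gpu_perf_gain / gpu_clk_gain)  # ~0.45 -> less than half of the extra clock shows up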
 
There is large scope for performance to improve over time with Vega though, more so than with Polaris, which went from 10-12% behind the 1060 at launch to 0-4% about 6 months later. You've got the HBC and rapid packed math inside Vega as well. I reckon these will mature very nicely over time.
You realise HBC is only useful if you run out of VRAM, right? On an 8GB card that isn't happening any time soon.
 

horkrux

Member
Hmm, most claims that Polaris is faster now than before seem to be based on newer RX 580 benchmarks. The problem with that is that the 580 is basically an overclocked 480.

I don't even think it's that. This seems to be the most popular source claiming it, but the thing is that the metrics are kinda skewed since they added new benchmarks. Or maybe that's not actually the impression they meant to give and people are just drawing the wrong conclusions from it.
The biggest gains to the average performance came through improvements in DX12 titles and newer games that weren't tested back when the card first came out (if I'm reading this right). Looking only at the DX11 games that were also tested previously, there doesn't seem to be much of a change.

So yes, it's gotten better, but not really like the ripening banana people are suggesting.
 

Dehnus

Member
Meanwhile, in reality, let's see what the AMD driver team has been busy working on during the all-important launch window to reassure buyers:

https://hothardware.com/news/amd-radeon-rx-vega-mining-block-chain-ethereum

There's that raw compute advantage you were talking about. Hope you like mining.

Aaand they even call it "fortunate". Greaaat. So fortunate that this thing will soon reach 45 MH/s for the 56 model. The model that gamers will want, as there is no need for the top-of-the-line model unless you wish to brag.

Sigh, why even bother? Let the miners just have it all; I'll be focusing on Raspberry Pi gaming from now on. Far more fun in building, programming things yourself, and the games. I'm THROUGH with AMD for graphics cards.

For a moment I had hoped that this thing would totally suck for miners. But meh... it's Advanced Mining Devices.
 

dr_rus

Member
Computerbase has a great update with Vega 64 vs. Fury X, the HBCC impact, and separate overclocking of the GPU and memory:
https://www.computerbase.de/2017-08/radeon-rx-vega-64-56-test/7/#abschnitt_vega_64_vs_56_vs_fury_x_bei_gleichem_takt


1) Vega @ 1.05 GHz with 512 GB/s vs. Fury X @ 1.05 GHz with 512 GB/s.
On average Vega (Gen 5) is only 6% faster per clock than the Fury X (Gen 3).
Polaris 10 (GCN Gen 4) was already 7% faster than Tonga (Gen 3) (but keep in mind that the games and settings may differ):
https://www.computerbase.de/2016-08/amd-radeon-polaris-architektur-performance/2/#abschnitt_die_gpugenerationen_im_benchmark

Looking at the individual games there is a big variance in performance.
Ashes of the Singularity and Gears of War 4 perform 6% better on the Fury X, while in Battlefield 1 Vega wins with 25% better performance per clock!
Another big winner is Titanfall 2 with 24%.


The AotS result is most likely due to the lower actual memory bandwidth, considering how bandwidth-heavy this engine is and that it can't be hidden by DCC.

The BF1 result is likely related to the DCC improvements between GCN3 and 5 though.

2.) HBCC.
CB tested the HBCC with 8GB of system memory allocated (out of 32GB) and the power target maxed.
On average the results were nearly identical, both for average performance and the 99th percentile, but again there is variance.

Certain games perform worse overall with the HBCC, certain games perform worse but have a better 99th percentile, and some games perform better or identically but with a worse 99th percentile.

It's quite interesting to see that there can be a bit better performance and up to 12% better 99th percentile, so there is potential even without a strict VRAM limit.

This feature should improve with time. (fine wine and so on)


A useless feature for gaming; no idea why AMD felt the need to even expose it in the driver. No game on the PC market will be ignoring VRAM limitations any time soon. System memory paging isn't exactly lightning fast either, so chances are that games which did rely on it would run significantly worse than those that didn't.

It is certainly a nice feature for the Instinct and SSC lines though.
 

Locuza

Member
The AotS result is most likely due to the lower actual memory bandwidth, considering how bandwidth-heavy this engine is and that it can't be hidden by DCC.
Actually, the bandwidth results for Vega seem fine now.
PCGH used AIDA as a second bandwidth test, which gives 375 GB/s for the RX Vega 64.
Vega FE only achieved 303 GB/s, and the Fury X shows 366 GB/s.
http://www.pcgameshardware.de/Radeon-RX-Vega-64-Grafikkarte-266623/Tests/Benchmark-Preis-Release-1235445/3/

Carsten Spille from PCGH assumes the micro-benchmark from the Beyond3D suite might not run long enough to reach Vega's practical peak.

Also, the game isn't bandwidth hungry; it gains just 3% performance from 15% more bandwidth at max GPU clocks:
https://www.computerbase.de/2017-08/radeon-rx-vega-64-56-test/7/#diagramm-ashes-of-the-singularity-escalation-takt-skalierung

The BF1 result is likely related to the DCC improvements between GCN3 and 5 though.
And/or tessellation, DSBR, fewer cache flushes, or whatever else might have been optimized in addition.
Edit: I think I'm being too soft here.
I hope you understand how absurd the theory is that Vega would beat the Fury X by about 25% at 1 GHz GPU clocks just because DCC improves the bandwidth.
It's related, of course, but under the hood something much more significant happened.

A useless feature for gaming; no idea why AMD felt the need to even expose it in the driver. No game on the PC market will be ignoring VRAM limitations any time soon. System memory paging isn't exactly lightning fast either, so chances are that games which did rely on it would run significantly worse than those that didn't.
And Computerbase tested it with both positive and negative results.
It's not a useless feature in general, and it clearly shows gains.
 

ethomaz

Banned
Isn't that rather VERY bad scaling though?
So, Fury X * 1.06 (6% higher IPC) * 1.55 (55% higher clock) = 1.64x expected.
Which renders only 25-30% higher perf than Fury X in practice, quite in line with the 2%-from-4.4% clock scaling.

But the 1080 got a 24% higher 3DMark score with a 22% overclock:
https://www.eteknix.com/oc-gtx-1080-scales-well-3dmark/
Vega is pretty much at its limit in terms of clock... more clock won't show big gains.

Or perhaps it's better to say GCN is at its limit in terms of clock.
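For reference, the arithmetic in that quote works out like this (a quick sketch using only the figures mentioned in the thread):

Code:
# Expected vs. observed Vega 64 gain over the Fury X, using the figures quoted above
expected = 1.06 * 1.55   # +6% per clock * +55% clock -> ~1.64x
observed = 1.275         # midpoint of the quoted 25-30% real-world lead

print(round(expected, 2))                         # 1.64
print(round((observed - 1) / (expected - 1), 2))  # ~0.43 -> well under half the expected gain
# Compare the 1080: a 22% OC reportedly gave a 24% higher 3DMark score, i.e. roughly 1:1.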
 

llien

Member

It's not like Nvidia cards weren't drastically affected:

1080 Ti drops from 126/81 to 93/70
1080 drops from 100/72 to 71/58
1070 AIB drops from 85/65 to 61/50
1070 FE drops from 83/64 to 60/50

https://www.youtube.com/watch?time_continue=250&v=zEfRi5pBQr4
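Reading those pairs as average/minimum fps before and after enabling MSAA (which is how I understand the video), the relative hit works out to roughly:

Code:
# Relative drop from the fps pairs listed above (avg/min before -> avg/min after)
cards = {
    "1080 Ti":  ((126, 81), (93, 70)),
    "1080":     ((100, 72), (71, 58)),
    "1070 AIB": ((85, 65), (61, 50)),
    "1070 FE":  ((83, 64), (60, 50)),
}
for name, ((avg0, min0), (avg1, min1)) in cards.items():
    print(name, f"avg -{1 - avg1 / avg0:.0%}", f"min -{1 - min1 / min0:.0%}")
# Averages drop ~26-29%, minimums ~14-23%, so the MSAA cost is hardly Vega-exclusive.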


Vega is pretty much at its limit in terms of clock... more clock won't show big gains.

Or perhaps it's better to say GCN is at its limit in terms of clock.

The 4.4%-clock-for-2%-more-perf behaviour is seen not only in the 1600-1700 MHz range (where thermal limits could have kicked in) but all the way from 1050 to 1630 MHz, quite consistently.
 

ISee

Member
The biggest gains to the average performance came through improvements in DX12 titles and newer games that weren't tested back when the card first came out (if I'm reading this right). Looking only at the DX11 games that were also tested previously, there doesn't seem to be much of a change.

So yes, it's gotten better, but not really like the ripening banana people are suggesting.

Sounds reasonable.
 

thelastword

Banned
Probably just to improve compatibility with overclocking and monitoring software, but who knows. Even WattMan seems to barely work with Vega. I wouldn't bet my money on any kind of performance improvements, especially because reviews and tests are already up.
If we are lucky the newest firmware will include a BIOS that allows Vega 56 to use more than 300W of power, but I'm not even holding my breath for that.

There is large scope for performance to improve over time with Vega though, more so than with Polaris, which went from 10-12% behind the 1060 at launch to 0-4% about 6 months later. You've got the HBC and rapid packed math inside Vega as well. I reckon these will mature very nicely over time.

Here's the thing about some of the reviews: they pit Vega 56 (a reference blower-style card) against some of the best 1070 AIB cards out there. Hardware Unboxed did this and Vega 56 still won there by 2%. He had a follow-up video explaining why he did it, but his reasoning does not make much sense... I mean, how are you going to benchmark Battlefield in DX11 on AMD hardware when there's a DX12 option that offers significant gains? Still, I'm not here to knock him; I like how he does his tests generally and how many games he uses, I just have some issues with some early reviews, which explains why they have Vega 56 just barely beating or matching the 1070 ATM....

I mean, if you are going to put reference Vega against AIB NV cards, there are at least some basic things you can do to make it semi-fair. E.g. just maxing the power limit slider alone on Vega gives you a big boost, and that was the case with the RX 480/580 too, so I'm not sure why that was not done. The card can then be undervolted to keep temps down, and you can get a nice boost on the memory and a slight boost on the core whilst keeping temps below 75 degrees.... boom!.. much better performance with only a few clicks of the mouse.... Now keep in mind that this is a reference cooler with BIOS restrictions for the end user...... but look at Joker's results against an AIB 1070.... so far less than the perf uptick we will get when AIB cards come out.

Video


I just think it's unacceptable that tech sites are not doing such basic things, yet have no issues pairing it up with AIB NV cards...

OTOH

Apparently, Vega 56 crushes the 1080 Ti in Dirt 4 at 1080p and also has significantly higher minimums at 1440p; that's mostly due to the AA method (CMAA).. Apparently NV does better in Dirt with MSAA on, but Vega 56 still beats the 1070 with 8x MSAA on.... It's just strange that some tech sites will use 8x MSAA for a Dirt test against reference Vega, yet Vega 56 at 2x to 4x MSAA still matches the GTX 1080 in Dirt and crushes it with CMAA or no AA on. Vega 56 also matches a GTX 1080 in Warhammer at 1440p, and we're talking reference Vega 56 vs AIB GTX 1070 and 1080 cards... 8x MSAA Dirt from HardOCP though.. smh........


https://www.youtube.com/watch?v=lQCal8t-qvQ

https://www.youtube.com/watch?v=zEfRi5pBQr4


Watch these two videos back to back for reference. Now, if he did the little tweaks Joker did, you don't even have to imagine how much better his results would be for Vega, because Joker's benchmarks already prove what it is...

On the flip side, Paul's Hardware has been tinkering and has since suggested he can get Vega 56 to 1650-1680 MHz clock speeds with higher voltage. Not sure how he is getting voltages over 1.2V though... although he indicates that his fans are on max.....

As I've said before, I think it's early days; so many benches were rushed for Vega 56 and even Vega 64 to some extent... The coming days and weeks will be interesting... I don't see many quotes or even a thread for DF's testing either; those were fair tests of the Vega 56 against the NV FE cards and no one is talking about them.

Hmmm....I've also seen some interesting benchmarks for Bioshock Infinite as well ;)
 

TSM

Member
Judging by all the articles testing the performance of these new cards with AMD's beta mining driver, I think the card's gaming specs almost become irrelevant. These cards will be perpetually sold out at above MSRP for as long as the mining bubble doesn't pop.
 

ethomaz

Banned
The 4.4%-clock-for-2%-more-perf behaviour is seen not only in the 1600-1700 MHz range (where thermal limits could have kicked in) but all the way from 1050 to 1630 MHz, quite consistently.
I consider the base clock of Vega to already be pushing past GCN's limits... that is why the power draw goes to hell.

1200-1300 MHz (perhaps lower?) is what Vega should be running for the best perf/watt, but that would have killed the card, so AMD needed to push it beyond its limits.

It's clear they pushed clocks no matter what to reach some "ideal" performance... GCN was not made to run at these high clocks, and that shows in the bad power draw and poor performance scaling.
 

dr_rus

Member
Actually, the bandwidth results for Vega seem fine now.
PCGH used AIDA as a second bandwidth test, which gives 375 GB/s for the RX Vega 64.
Vega FE only achieved 303 GB/s, and the Fury X shows 366 GB/s.
http://www.pcgameshardware.de/Radeon-RX-Vega-64-Grafikkarte-266623/Tests/Benchmark-Preis-Release-1235445/3/

Carsten Spille from PCGH assumes the micro-benchmark from the Beyond3D suite might not run long enough to reach Vega's practical peak.

Also, the game isn't bandwidth hungry; it gains just 3% performance from 15% more bandwidth at max GPU clocks:
https://www.computerbase.de/2017-08/radeon-rx-vega-64-56-test/7/#diagramm-ashes-of-the-singularity-escalation-takt-skalierung
AotS is very bandwidth hungry due to how its engine functions. It's also hungry in the compute part of the rendering process, which means that no bandwidth-saving technique works there, and thus Fiji can have a lead.

I would be cautious about memory OC benchmarks indicating much in Vega's case, as it has been stated that OCing the memory (or GPU) doesn't affect the clock of the IF connecting them, and thus it's unclear that a memory clock change even affects the actual bandwidth here.

And/or tessellation, DSBR, fewer cache flushes, or whatever else might have been optimized in addition.
Edit: I think I'm being too soft here.
I hope you understand how absurd the theory is that Vega would beat the Fury X by about 25% at 1 GHz GPU clocks just because DCC improves the bandwidth.
It's related, of course, but under the hood something much more significant happened.
DCC improvements between GCN3 and 5 may involve more than just bandwidth. Some things can be done in fewer cycles, some things can't be done at all on older archs. Tessellation is unlikely as there are no big changes between 3 and 5 there; the rest is just guessing.

And Computerbase tested it with both positive and negative results.
It's not a useless feature in general, and it clearly shows gains.
What it clearly shows is no change on average, made up of both minor gains and minor losses, which means it's useless in current games.


https://www.youtube.com/watch?v=zEfRi5pBQr4

Not the first indication of MSAA having an unusually high cost on Vega. Might be due to either bandwidth or some issues with the new backend (RBEs/caches).
 
1) Vega @ 1.05 GHz with 512 GB/s vs. Fury X @ 1.05 GHz with 512 GB/s.
On average Vega (Gen 5) is only 6% faster per clock than the Fury X (Gen 3).
Polaris 10 (GCN Gen 4) was already 7% faster than Tonga (Gen 3) (but keep in mind that the games and settings may differ):
https://www.computerbase.de/2016-08/amd-radeon-polaris-architektur-performance/2/#abschnitt_die_gpugenerationen_im_benchmark

So Vega is just a big Polaris chip in terms of performance per clock? Wasn't Vega supposed to be the largest architectural improvement that GCN ever got?
 
Hmm, most claims that Polaris is faster now than before seem to be based on newer RX 580 benchmarks. The problem with that is that the 580 is basically an overclocked 480. Some people even tend to OC an RX 580 even further and benchmark it against a 1060 FE, which is problematic.
It's hard to find conclusive evidence for the "Polaris improved significantly" statement, and I tried. Let's take a look at some benchmarks from pcgh.de from 2016 (480 release) and 2017 (580 release).

SNIP .

Simpler to use TechPowerUp's performance summaries.


Very clearly gets faster relative to the 1060 over time on average.
 