
Radeon RX Vega thread

Simpler to use TechPowerUp's performance summaries.



Very clearly gets faster relative to the 1060 over time on average.

Not too much different. The 1060 was 10% faster, then 5% faster, then 4% faster (than the 480) in those three benchmarks. In any case, I think AMD's performance is generally close enough to the corresponding Nvidia competitor that you should just choose whichever one you like better (and FreeSync vs G-Sync).
 

Locuza

Member
AotS is very bandwidth hungry due to how its engine functions. It's also heavy on the compute part of the rendering process, which means no bandwidth-saving technique works there, and thus Fiji can have a lead.

I would be cautious about memory OC benchmarks indicating much in Vega's case, as it has been stated that OCing the memory (or GPU) doesn't affect the clock of the IF which connects them, so it's unclear whether a memory clock change even affects the actual bandwidth here.
Okay, what's the theory behind Gears of War 4 with Unreal Engine 4, which also runs 6% faster on Fury X?
The Infinity Fabric has its own clock domain, while the user can manipulate the core and memory clocks. You can theorize that the memory OC doesn't scale that much because of the IF domain, but the results at least indicate a change in performance:
3% on average from the 15% of extra bandwidth that was set.
Watch Dogs 2 shows 6% scaling, Titanfall 2 7%, Gears of War 4 2%, Battlefield 1 3% and AotS 3%.

The results don't correlate with a bandwidth-starvation theory for AotS or Gears of War 4.
If anything, Watch Dogs 2 and Titanfall 2 are the bandwidth-hungry ones, but on a per-clock basis Watch Dogs 2 runs 6% faster and Titanfall 2 a whopping 24%.
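To put those scaling numbers on one scale (a quick Python sketch using only the percentages cited above; a ratio of 1.0 would mean perfectly bandwidth-bound scaling):

```python
# Per-game perf gain divided by the bandwidth gain from the memory OC.
# Near 1.0 = bandwidth-bound; near 0 = bandwidth barely matters.
BANDWIDTH_GAIN = 0.15  # +15% memory clock, assuming bandwidth scales with it

gains = {
    "Watch Dogs 2": 0.06,
    "Titanfall 2": 0.07,
    "Gears of War 4": 0.02,
    "Battlefield 1": 0.03,
    "AotS": 0.03,
}

for game, gain in gains.items():
    print(f"{game:15s} +{gain:.0%} perf -> sensitivity {gain / BANDWIDTH_GAIN:.2f}")
```

Even the most sensitive titles land well under 0.5, which is why these results don't look like bandwidth starvation.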

DCC improvements between GCN3 and 5 may be about more than just bandwidth. Some things can be done in fewer cycles; some things can't be done at all on older architectures. Tessellation is unlikely, as there are no big changes between 3 and 5 there; the rest is just guessing.
There is nothing from AMD indicating that the DCC got any fundamental improvements.
Gen 4 increased the efficiency and brought new compression ratios, but it's grotesque to assume that because of it BF1 profits 25% per clock, as if the Fury X with over 350 GB/s of effective bandwidth were hugely starved in BF1.
Again, the bandwidth results don't indicate anything like that.

Tessellation got big changes between 3 and 5.
Gen 4 fixed the strip-format performance and brought the Primitive Discard Accelerator and an index cache inside the geometry engines.
Gen 5 further doubles the size of the parameter cache and can also cache vertex parameter data in the L2$.
You clearly see the results in practice:
https://www.computerbase.de/2017-08/radeon-rx-vega-64-56-test/4/#abschnitt_vega_legt_bei_tessellation_zu

I get your remark about possibly different bottlenecks in games, but that's the real-world case, and that's what matters in the end.
When the chip clearly behaves better with tessellation, then its bottlenecks got improved wherever they were, perhaps in spots that weren't covered by the small micro-benchmarks in use.

What it clearly shows is no change on average, made up of both minor gains and minor losses, which means it's useless in current games.
It's useless, yet it improves the 99th percentile in Deus Ex by 12% at the same average performance.
Or 11% in Dishonored 2, 6% in For Honor, 12% in Mafia 3.

How could it be anything but useless? No prospects there at all; AMD should remove it from the driver options and stop working on it.

https://www.youtube.com/watch?v=zEfRi5pBQr4

Not the first indication of MSAA having an unusually high cost on Vega. Might be due to either bandwidth or some issue with the new backend (RBEs/caches).
PCGH sees no MSAA issues in Crysis 3, which leaves an open field of problems with no clear answers yet.

So Vega is just a big Polaris chip in terms of performance per clock? Wasn't Vega supposed to be the largest architectural improvement GCN ever got?
It is, but the results are obviously a disaster.
I'm quite certain that the driver is just miserable.
Vega FE had no DSBR working for games and showed strange results in many synthetic tests; there was also no power management for the new deep-sleep mode.
RX Vega got at least all of these things fixed, but the HBCC is still unfinished and only time will tell if it can be useful as a default setting.
Primitive Shaders don't seem to be (widely) implemented, possibly affecting DSBR and geometry performance.
 

martino

Member
Simpler to use TechPowerUp's performance summaries.



Very clearly gets faster relative to the 1060 over time on average.

Or it's mostly that the percentages are computed relative to better and better hardware, which makes it appear that way.
The better the reference hardware, the smaller the relative gap between the two gets, even with zero gains (not saying there are none).
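To make that concrete (a toy sketch; the fps values are invented purely for illustration):

```python
# A fixed absolute fps gap shrinks in percent as baseline fps grows,
# so a percentage summary can narrow with zero driver gains.
pairs = [(50, 55), (80, 85), (120, 125)]  # (RX 480 fps, GTX 1060 fps), gap = 5 fps

for amd, nv in pairs:
    print(f"1060 lead: {nv - amd} fps = {nv / amd - 1:.1%}")
# 10.0%, 6.2%, 4.2%
```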
 

ISee

Member
Simpler to use TechPowerUp's performance summaries.



Very clearly gets faster relative to the 1060 over time on average.


In July 2016 the difference between RX 480 and 1060 FE is 10%
16 Games tested:
Anno 2205, AC:S, BF3, BF4, Batman AK, Blops 3, Crysis 3, FO4, Far Cry Primal, GTA 5, Hitman, JC3, Rainbow Six Siege, RotTR, Witcher 3, WoW

In October 2016 the difference between RX 480 and 1060 FE is 5.5%
18 Games tested:
Anno 2205, AC:S, BF4, Batman AK, Blops 3, Deus Ex:MD, DOOM, F1 2016, FO4, Far Cry Primal, GTA V, Hitman, JC3, NMS, Rainbow Six Siege, RotTR, Witcher 3, Warhammer

In Feb 2017 the difference between RX 480 and 1060 FE is 4.25%
21 Games tested:
Anno 2205, AC:S, BF4, Batman AK, Civ 6, Deus Ex:MD, DOOM, Dishonored 2, F1 2016, FO4, Far Cry Primal, GTA V, Hitman, JC3, Mafia 3, Rainbow Six Siege, RotTR, Witcher 3, Shadow Warrior 2, Warhammer, Watch Dogs 2

The difference between July 2016 and Feb 2017 seems amazing: a 6% performance gain just from improved drivers. But that's not the right conclusion. All we can say, based on this data, is that the underlying data changed (half of the games benchmarked in Feb 2017 weren't benchmarked in July 2016; that's too much).

So let's dive in even deeper and take a closer look at the benchmarks that all data sets have in common (12 games total):

Anno 2205:
RX 480 (-25% fps on average vs. a 1060 FE); RX 480 (-26%); RX 480 (-25%)
--> no significant difference over the time period

AC:S
RX 480 (-23%); RX 480 (-25%); RX 480 (-20%)
--> no significant difference over the time period

BF1
RX 480 (-25%); RX 480 (-26%); RX 480 (+4.3%)
--> significant improvement

Batman:AK
RX 480 (+3.3%); RX 480 (+1.35%); RX 480 (+7.5%)
--> small improvement

FO4:
RX 480 (-9.3%); RX 480 (-17.5%); RX 480 (-9.35%)
--> no significant difference over the time period

Far Cry Primal
RX 480 (-10.2%); RX 480 (-9.5%); RX 480 (-8%)
--> no significant difference over the time period

GTA V
RX 480 (-10%); RX 480 (-11.8%); RX 480 (-11.73%)
--> no significant difference over the time period

Hitman
RX 480 (+1.7%); RX 480 (+3%); RX 480 (+7%)
--> small improvement

Just Cause 3
RX 480 (+3.3%); RX 480 (+4.2%); RX 480 (+6%)
--> small improvement

Rainbow Six: Siege
RX 480 (-4.6%); RX 480 (-3.4%); RX 480 (-1.3%)
--> no significant difference over the time period

Rise of the Tomb Raider
RX 480 (-11.4%); RX 480 (-3.3%); RX 480 (-0.6%)
--> significant improvement

Witcher 3
RX 480 (-11%); RX 480 (-10.7%); RX 480 (-4.5%)
--> small improvement

Out of 12 games we have 2 games (17%) that improved significantly (~20.55% faster) and 4 games (33%) that saw small improvements (~4.78% faster).

So yes, overall the RX 480 improved because of driver optimizations. Those improvements are huge in two cases, but on average we get a much smaller improvement of 2.1%, because half of the games didn't improve at all and 33% only barely.
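For anyone who wants to redo the arithmetic, here is a minimal sketch of that averaging, using only the per-game deltas listed above (the simple mean comes out higher than 2.1%; dropping the mixed BF4/BF1 row, as the edit below notes, pulls it down):

```python
# RX 480 delta vs. 1060 FE in percentage points (negative = slower),
# July 2016 and Feb 2017 values for the 12 common games.
july = {"Anno 2205": -25, "AC:S": -23, "BF": -25, "Batman:AK": 3.3,
        "FO4": -9.3, "FC Primal": -10.2, "GTA V": -10, "Hitman": 1.7,
        "JC3": 3.3, "R6S": -4.6, "RotTR": -11.4, "Witcher 3": -11}
feb = {"Anno 2205": -25, "AC:S": -20, "BF": 4.3, "Batman:AK": 7.5,
       "FO4": -9.35, "FC Primal": -8, "GTA V": -11.73, "Hitman": 7,
       "JC3": 6, "R6S": -1.3, "RotTR": -0.6, "Witcher 3": -4.5}

shifts = {g: feb[g] - july[g] for g in july}
for g, d in sorted(shifts.items(), key=lambda kv: -kv[1]):
    print(f"{g:10s} {d:+6.2f} pp")
print(f"mean shift: {sum(shifts.values()) / len(shifts):+.2f} pp")
```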

In the end I stand by my previous statement: there isn't enough data to support the claim that "Polaris improved over time in a significant way".

Sorry for grammar and spelling mistakes; I'm obviously not a native speaker and it's sometimes a bit hard to phrase everything correctly.

edit:
Nice analysis, although the one thing I wanted to point out is that BF1 is only in the 3rd benchmark; the first two use BF4.

Yeah, sorry, I messed up there. But my point stands, because the improvements look even less relevant without BF... not that it really matters. People will keep repeating the same statements based on unrepresentative data.
 

Eternia

Member
The difference between July 2016 and Feb 2017 seems amazing: a 6% performance gain just from improved drivers. But that's not the right conclusion. All we can say, based on this data, is that the underlying data changed (half of the games benchmarked in Feb 2017 weren't benchmarked in July 2016; that's too much).

So let's dive in even deeper and take a closer look at the benchmarks that all data sets have in common (12 games total):

*snip*

BF1
RX 480 (-25%); RX 480 (-26%); RX 480 (+4.3%)
--> significant improvement
Nice analysis, although the one thing I wanted to point out is that BF1 is only in the 3rd benchmark; the first two use BF4.
 

bomblord1

Banned
[struck through] Can anyone recommend a 1080 for the same or lower price than what the Vegas are going for when they are in stock ($599)?
edit: actually this would probably make more sense in the PC thread, sorry
 

Nezacant

Member
The biggest reason AMD hardware ages better is that it typically performs better in new games as the years go on.

Is there any sort of source for this claim? I've run both for many years and never noticed one brand aging better than the other.
 

ethomaz

Banned
Nice summary.

In my view it only shows a few games improving, maybe because there are some issues with the driver, but overall it's basically a very small difference. This idea that Radeon drivers improve performance over time while GeForce drivers don't is a myth created by some.

Both vendors' drivers improve a bit in some games.
 
Is there any sort of source for this claim? I've run both for many years and never noticed one brand aging better than the other.

It's why AMD cards have consistently gained performance against their Nvidia counterparts since the 7970 vs GTX 680. It happens at virtually every single price point.
 
It's why AMD cards have consistently gained performance against their Nvidia counterparts since the 7970 vs GTX 680. It happens at virtually every single price point.

Except it isn't. AMD cards gained on Kepler-era GPUs because the PS4 was released and game design shifted toward using more shaders and less dedicated hardware like ROPs. It's very easy to see that in cross-gen games like GTA V, where Kepler stayed strong.
 
Except it isn't. AMD cards gained on Kepler-era GPUs because the PS4 was released and game design shifted toward using more shaders and less dedicated hardware like ROPs. It's very easy to see that in cross-gen games like GTA V, where Kepler stayed strong.

Except it is. It happened with the 390 vs 970, Fury X vs 980 Ti, 380 vs 960, and 480 vs 1060.

Of course using different games to build the summary will show big changes...

But where are the big changes in performance in the same games when compared with Nvidia?

Which was exactly my point. I never said improvements in existing games were a major source of AMD's better aging.
 

Nezacant

Member
It's why AMD cards have consistently gained performance against their Nvidia counterparts since the 7970 vs GTX 680. It happens at virtually every single price point.

AMD was always good about undercutting Nvidia's prices on competing cards. The cards aren't magically aging any better. They may cut prices more as the hardware ages, but that doesn't mean anything for Joe Schmoe the Gaming Bro who bought the card at full price when it was new. It ages... just like any other hardware.
 
AMD was always good about undercutting Nvidia's prices on competing cards. The cards aren't magically aging any better. They may cut prices more as the hardware ages, but that doesn't mean anything for Joe Schmoe the Gaming Bro who bought the card at full price when it was new. It ages... just like any other hardware.

That has nothing to do with the point I'm debating.
 
The difference between July 2016 and Feb 2017 seems amazing: a 6% performance gain just from improved drivers. But that's not the right conclusion. All we can say, based on this data, is that the underlying data changed (half of the games benchmarked in Feb 2017 weren't benchmarked in July 2016; that's too much).

*snip*


There's a misunderstanding going on here. People who claim AMD cards 'age well' don't just mean it purely from the perspective of what you've presented above, i.e. improvements from drivers to older games. It's about the fact that Polaris cards perform relatively stronger in newer games, i.e. games released after launch, as well. That's part of the reason why it closed the gap: older games are swapped out of older benchmark suites for newer titles which have been better optimized for AMD hardware, so on average its performance is now nigh on identical to the 1060 in today's relevant games (let's say 2017 titles).

It's not just about drivers from AMD bringing up performance. It's about devs optimizing for Polaris and other AMD cards once they've been released. So in this sense performance on average improves over time, compared to Nvidia where performance is much better on day 1.

Here's a task for you: compare the gap in performance between the cards using mid-2016 games versus games released from 2017 onwards and report your findings to me, I'd love to see that.
 

ethomaz

Banned
There's a misunderstanding going on here. People who claim AMD cards 'age well' don't just mean it purely from the perspective of what you've presented above, i.e. improvements from drivers to older games. *snip*
Or maybe there are just more games that run better on AMD in 2017 than in 2016... 2018 could be the opposite.

Think about it a bit...

2016: 5 (AMD favor) + 6 (NV favor) + 9 (tie)
2017: 6 (AMD favor) + 5 (NV favor) + 9 (tie)
2018: 4 (AMD favor) + 7 (NV favor) + 9 (tie)

That has nothing to do with a card aging well, just with the different games released in each period... the games released in 2017 are different and use different engines than the games released in 2016... the quantity is different too... the benchmarks can't even be compared because there are so many differences.
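A toy illustration of that suite-composition effect (all numbers invented; the per-game results never change, only which games get averaged):

```python
# The "average gap" moves purely because the benchmark suite changes.
perf = {"game A": -20, "game B": -10, "game C": 0, "game D": 5, "game E": 15}

suite_2016 = ["game A", "game B", "game C"]  # older, NV-leaning titles
suite_2017 = ["game C", "game D", "game E"]  # newer, AMD-leaning titles

avg = lambda suite: sum(perf[g] for g in suite) / len(suite)
print(avg(suite_2016), avg(suite_2017))  # -10.0 vs. ~+6.7, with zero driver gains
```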
 
Or maybe there are just more games that run better on AMD in 2017 than in 2016... 2018 could be the opposite. *snip*

How can you still fail to understand what we are saying?
 

ethomaz

Banned
How can you still fail to understand what we are saying?
I understand what you are saying, in the same way that in 2018 somebody could claim "Nvidia cards aged well" using your own metrics.

When you change the data used for the comparison you get different results, and they can favor either side.
 
Or maybe there are just more games that run better on AMD in 2017 than in 2016... 2018 could be the opposite. *snip*

(gif)
 

Nydus

Member
Oh boy, the fandom is strong in these kids.

Instead of "giving tasks" to people who show you proof with factual data, why don't you show something worthwhile and intelligent? Because posting "funny memes" is not an argument, it's an insult to everyone reading this thread.

And I can't hear the "ages well" stuff anymore. I want to play games NOW and not in 2 years. Also, I had a Fury... it's a POS in 2017. So leave me alone with your FineWine scams 😑
 

ethomaz

Banned
Fury x v 980ti at launch
https://www.techpowerup.com/reviews/AMD/R9_Fury_X/31.html

Fury x v 980ti now
https://www.techpowerup.com/reviews/AMD/Radeon_RX_Vega_56/31.html

You can do this same type of comparison for almost every AMD GPU vs its Nvidia equivalent for the last 6 years.
That is exactly the flawed claim I explained before... if you chose other games, it would show the 980 Ti increasing its lead over the Fury X.

You are comparing two different sets of data to make a point. I can make any card look different from launch if I choose games that lead to bad or good results.

People in the thread asked for proof but you show different data sets to try to create a point.
 
That is exactly the flawed claim I explained before... if you chose other games, it would show the 980 Ti increasing its lead over the Fury X.

You are comparing two different sets of data to make a point.

I can make any card look different from launch if I choose games that lead to bad or good results.

I'm not choosing any games, though. I don't work for TechPowerUp. Btw, Richard from DF has also weighed in on the fact that AMD GPUs have been performing better in newer AAA games.

I'm also not an AMD fanboy, ffs.
 

ISee

Member
There's a misunderstanding going on here. People who claim AMD cards 'age well' don't just mean it purely from the perspective of what you've presented above, i.e. improvements from drivers to older games. *snip*

Here's a task for you: compare the gap in performance between the cards using mid-2016 games versus games released from 2017 onwards and report your findings to me, I'd love to see that.

I see, and it could very well be true. But it is impossible to prove that "fact".
Mainly because we'd have to take 1060 performance as fixed over time to prove that Polaris "aged well" (we need some sort of baseline to work with), and that assumption is obviously false, because Nvidia and graphics engineers also spend time developing and improving their engines/games for Pascal.

But okay, let's push some data. Not because you gave me a task, btw, just because I want to find this out for myself.

edit: and done -->

Older/2014/2015/2016 games
(in comparison with GTX 1060 FE)

Assassin's Creed Unity
RX 480 -13%

COD Advanced Warfare
RX 480 -11%

Dragon Age Inquisition
RX 480 -12%

Far Cry 4
RX 480 +4%

GTA 5
RX 480 -4%

Metro LL
RX 480 -18%

Witcher 3
RX 480 -5%

Doom (on release)
RX 480 +15%

Dirt Rally
-12%

F1 2015
RX 480 -12%

Fallout 4
RX 480 -13%

FC: Primal
RX 480 -6%

Deus Ex:MD
+3%

Watch_Dogs 2
-5%

GoW 4
-5%

Titanfall 2
+7%

Mafia 3
-10%

BF 1 (DX12)
+10%

NMS
-24%

Dishonored 2
-20%

--> on average the RX 480 is 7.25% slower than a 1060 FE (20 games total).

2017 Games only

For Honor
-3%

Hellblade
-5%

Mass Effect Andromeda
-9%

Prey
-12%

RE 7
+20%

Ghost Recon
-10%

PUBG
-20%

The Surge
+1%

Rime
-27%

Nier
+4%

DoW 3
+0%

Halo Wars 2
-3%

Dirt 4
-5%

Styx: Shards of Darkness
-25%

Sniper: Ghost Warrior 3
+9%

The RX 480 is on average 4.7% slower than a GTX 1060 FE (18 games total)

Performance improved by 2.53% this year (if 1060 FE performance is consistent). Is that something?
[Benchmarks are mostly from computerbase, pcgh, gamegpu and techpowerup]
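If anyone wants to re-run the averaging, here it is over the deltas exactly as listed above (a sketch; note only 15 of the stated 18 2017 entries appear in the list, so the means won't match the quoted figures exactly):

```python
# RX 480 vs. GTX 1060 FE deltas in percent, copied from the lists above.
games_2016 = [-13, -11, -12, 4, -4, -18, -5, 15, -12, -12,
              -13, -6, 3, -5, -5, 7, -10, 10, -24, -20]
games_2017 = [-3, -5, -9, -12, 20, -10, -20, 1, -27, 4, 0, -3, -5, -25, 9]

mean = lambda xs: sum(xs) / len(xs)
print(f"2014-2016 suite: {mean(games_2016):+.2f}%")  # about -6.6%
print(f"2017 suite:      {mean(games_2017):+.2f}%")  # about -5.7%
```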
 

Nydus

Member
Btw, what's up with those numbers from TechPowerUp? 192 fps in Doom @1080p with a 1080 Ti?? Hell, TechSpot (Hardware Unboxed) gets more fps at 1440p, lol.

Also, TechPowerUp has the Vega 56 winning in Doom... against a 1080 (with the Fury just 3 fps behind it, lol). Is this Joker guy running the site? I think I don't want to trust this site anymore 🤣
 
Btw, what's up with those numbers from TechPowerUp? 192 fps in Doom @1080p with a 1080 Ti?? Hell, TechSpot (Hardware Unboxed) gets more fps at 1440p, lol.

Also, TechPowerUp has the Vega 56 winning in Doom... against a 1080 (with the Fury just 3 fps behind it, lol). Is this Joker guy running the site? I think I don't want to trust this site anymore 🤣

Yeah, you're clearly much more intelligent. The fact that different levels in a game can have vastly different performance is clearly lost on you. And a Vega 56 beating a 1080 in Doom isn't a surprise at all. Between Vulkan and shader intrinsics it's likely the best-optimized PC game there is for AMD.

I see, and it could very well be true. But it is impossible to prove that "fact". *snip*

1060 v 480 at launch
https://www.techpowerup.com/reviews/NVIDIA/GeForce_GTX_1060/26.html

1060 v 480 now
https://www.techpowerup.com/reviews/AMD/Radeon_RX_Vega_56/31.html
 

Deleted member 325805

Unconfirmed Member
I'm watching the DF 1070 vs 56 review and it's great to see the 56 keeping up with the 1080 when overclocked. The card seems pretty great; I just hope the price isn't too ridiculous.
 
This data was shown to me already and I even replied to it on this very page. I explained there in detail why this data is invalid and I even ran a quick analysis of it. Just scroll up and you'll find it with ease.

You are talking about drivers improving existing games. I'm not debating that at all. I'm saying AMD's performance increases relative to Nvidia's in many game averages because newer games tend to perform better on AMD. If you disagree with that, what's your explanation for AMD gaining performance year after year at most price points relative to Nvidia?
 

ISee

Member
You are talking about drivers improving existing games. I'm not debating that at all. I'm saying AMD's performance increases relative to Nvidia's in many game averages because newer games tend to perform better on AMD. If you disagree with that, what's your explanation for AMD gaining performance year after year at most price points relative to Nvidia?

What are you talking about? I literally just analyzed the performance of 20 games from 2014-2016 and 18 games from 2017. On average the performance improved by just 2.5% (from 2014/15/16 to 2017). Not that this way of looking at the data is valid in the first place.
But please, YOU even quoted that very post yourself. Did you even bother reading it? Or did you just care about posting something that proves your belief, without really understanding it?
 

thelastword

Banned
With reference to what I posted earlier, DF has now tinkered with some of the basic improvements in the AMD GPU software: overclocking the memory alone gives a 5% uptick in games on Vega 56, and pushing the power limit to the max gives an increase of about 11% in games... all on a reference cooler.

I'm happy that DF has tinkered with WattMan etc., but pushing the power limit, memory and core will push up wattage. However, there is a way to maintain that 11% overclock while lowering power usage: they could increase the fan speed and undervolt the VRM to 1.1 V instead of the default 1.2 V... temps and wattage would decrease significantly while maintaining performance...

I hope more tech sites get the memo and do some decent overclocks, or at least do it the best way known so far... I can't wait to see what the AIB cards come up with, however. That will be the most interesting thing in the coming weeks...

As I've said before, you can see how a simple/limited overclock on a stock cooler, albeit with no leeway on the BIOS and sporting beta drivers, can match and even outpace the GTX 1080 in a few games... even some DX11 games at that. So what do we expect from future titles targeting AMD hardware and low-level APIs where AMD hardware shines? Yeah, you get it... Don't forget also that HBCC and primitive discarding are off atm...

I'll wait for DF's and Joker's benches on the Vega 64 WC/Air; I imagine going through the same process as what's being done for the Vega 56 should bring a nice boost to the 64s as well...

(video)
 
Fury x v 980ti at launch
https://www.techpowerup.com/reviews/AMD/R9_Fury_X/31.html

Fury x v 980ti now
https://www.techpowerup.com/reviews/AMD/Radeon_RX_Vega_56/31.html

You can do this same type of comparison for almost every AMD GPU vs its Nvidia equivalent for the last 6 years.

Initially I thought it would be plausible for the Fury X to close in on the GTX 1070, but after taking a second look at its performance in games at the time, and the difference in VRAM capacity, I thought it would be unlikely. I then somehow confused myself with the RX 480 and the Fury X in the process.

Your post from a couple of months ago that I responded to:

i meant more along the lines of performance in future titles that are more demanding without needing to run at 4k. for example i wouldn't be too surprised if this time next year, the furyx is equal to or faster than the 1070 at 1440p in performance summaries

My initial thoughts:

The Fury X is a lovely card, but that 4GB of memory is not going to do it any favours vs 8GB; there are even games at 1080p that want over 4GB of memory if you max out the texture quality setting.

However, if you're not maxing out the texture quality setting, it's definitely got more to give in the games that don't play well with its memory. I too wouldn't be too surprised if it is equal to or faster than the Fury X in such scenarios.

I think it's the only flagship I've seen AMD release in the past 5 years with a flaw which could impact its capabilities in the future. The 7970 and R9 290X had more VRAM than their competitors, so they were already secured in that area for the future; they pretty much bested the competing products when you see how they perform now. The Fury X, however, is not so future-proof in that aspect, unfortunately.

EDIT: Equal to or faster than the Fury X? That doesn't make any sense; the 1070 is what I meant.

My doubt after confusing myself with the Fury X and RX 480 and looking back on its performance:

Mistakenly I appear to have somehow mixed up the Fury X and GTX 1070 with the RX 480 and the performance discrepancies it has with the GTX 1060 6GB. My apologies.
Looking back on the performance of the Fury X, I see that it would be unlikely for it to close the gap between it and the GTX 1070, regardless of the memory it has. Meanwhile the situation with the RX 480 and GTX 1060 is entirely different, with the RX 480 clawing back the majority of the performance lead the GTX 1060 once had.

GTX 1060 vs. RX 480 - An Updated Review



I'm not quite sure how I managed to mix up the Fury X and RX 480 while still talking about the Fury X; epic fail on my part, sorry.

It was a lovely little card, but its performance wasn't so hot for its price point; it only had 4GB of VRAM and it also didn't have much overclocking headroom compared to the 980 Ti in roughly the same market segment.

Looking at the Vega 56 review from TechPowerUp kind of shows that my thoughts after taking a second look at the Fury X's performance were wrong, and that your thoughts have some validity to them. The irony is that my initial thoughts of the Fury X getting closer appear to be somewhat valid too, the same thoughts I went back on after confusing myself and taking a second look at the Fury X's performance.

It hasn't even been a year and the Fury X is close to, or even faster than, the GTX 1070 at 1440p in some of the newer games in TechPowerUp's Vega 56 review. I find it rather intriguing that it's able to hold up so well in some of the newer games against the GTX 1070. IIRC there was a slightly larger performance deficit between the cards?

Here are some of the games where this happens, as well as the overall performance summary, which has the GTX 1070 leading. Unfortunately these are only average frame rates; I do wonder how the minimum frame rates would compare, though.


(Links to make the post smaller, sorry about all of the images!)
Prey and Sniper Elite 4
Relative performance summary

Hehe, I guess the real "epic fail" was my second look at the performance and coming to the seemingly wrong conclusion.
 

SRG01

Member
With reference to what I posted earlier, DF has now tinkered with some of the basic improvements in the AMD GPU software, overclocking memory alone gives a 5% uptick in games on Vega 56. Pushing the power to the max gives an increase of about 11% in games.....all on a reference cooler.

I'm more interested in seeing how memory OC actually impacts Vega at much higher speeds. There might actually be a memory bottleneck somewhere...
 
Initially I thought it would be plausible for the Fury X to close in on the GTX 1070, but after taking a second look at its performance in games at the time, and the difference in VRAM capacity, I thought it would be unlikely. I then somehow confused myself with the RX 480 and the Fury X in the process.

*snip*

It hasn't even been a year and the Fury X is close to, or even faster than, the GTX 1070 at 1440p in some of the newer games in TechPowerUp's Vega 56 review.

Yeah, at the 1070's launch it was 14% faster than a Fury X at 1440p. Now it's 8% faster in the Vega review.
 

thelastword

Banned
I'm more interested in seeing how memory OC actually impacts Vega at much higher speeds. There might actually be a memory bottleneck somewhere...


Watching this video, it seems Steve Burke from Gamers Nexus can do a 180 MHz OC on the memory and it remains stable... All in all, he was able to get a 20% perf increase on the Vega 56 in this video in the final analysis, but he is quite adamant that they can do even better with more time and tweaking... He even suggests they will do a watercooling solution on Vega 56 to see how things improve for much bigger gains.

There are still constant reports of the primitive shaders, DSBR and HBCC being off in Vega right now... from other sites mostly. However, Steve has contacted AMD on one thing: "trying to get them to open up the BIOS"... One tech reviewer was able to push more power through the Vega FE via a Windows registry entry; who knows if he will have any luck with Vega 56, but AMD is considering opening it up...
 
Pascal was a major value leap over Maxwell. 1060=980, 1070=980Ti.

The power leap was nice, but value is lacking compared to Maxwell.

The 780Ti was $700 at launch in Nov 2013

The 970 was $330 at launch less than a year later, matching the 780Ti in some games, losing in others and winning in some.

The 980Ti was $650 ($50 less than the 780Ti) in May 2015.

The 1070 was $380 (an MSRP it never really hit, and $50 more than the 970), launched a year after the 980Ti, and its comparison to the 980Ti is equivalent to the 970 vs the 780Ti.

I don't know what actual prices were at the other launches, but the 1070 has been north of $400 for a year. So you have a $370 gap between the 780Ti and the 970 vs a $250 gap between the 980Ti and the 1070.
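The gap arithmetic, from the prices cited in this post (a sketch; the second gap uses the ~$400 street price mentioned above, which is where the $250 figure comes from):

```python
# Launch prices quoted above, plus the 1070's typical street price.
launch = {"780 Ti": 700, "970": 330, "980 Ti": 650, "1070": 380}
street_1070 = 400  # "north of $400 for a year"

print(launch["780 Ti"] - launch["970"])  # 370: Maxwell's undercut of Kepler
print(launch["980 Ti"] - street_1070)    # 250: Pascal's smaller effective undercut
```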

Now consider the 1070 is currently $450, and at this point in its life the 970 was still $330 (and sometimes less) with game bundles. Maxwell was a much better value than Pascal, but NVIDIA had to come hard because the 290 and 390 were killer values.

I've pointed out before that the 70-series has historically been about $400, but the 290 had been sitting at the $400 price point for a year and the 970 wasn't faster. This led to NVIDIA undercutting the fuck out of AMD to maintain market share. This was great for consumers, but it set a value expectation that won't be repeated unless there is a huge breakthrough, like a super-competitive AMD.

I'm glad I bought a Gigabyte G1 Gaming 1070 for $430 last August. They are $600 now? Can I mine bitcoins with this bitch or what is going on?
 