
Radeon RX 480 Review Thread, Launching Now!

So the Sapphire Nitro+ is the one to go with? It's the only one available here at the moment, so I might as well just get it.

Dunno, ComputerBase says the Devil is the most well-rounded and best overall. I think it or the Nitro would be good choices. The Strix is good too, but too expensive.

I think the Nitro is the fastest, the Strix second, and the Devil third at stock (by stock I mean at boost/OC mode). Unfortunately, they all seem to be waiting on some kind of driver update for either the proper OC software or a fan temp target issue. AMD WattMan is said to be buggy and doesn't actually increase voltage for OCs.
 
Some quick replies:

1. For memory amount: basically, no. Even 4GB will probably be enough for the fair lifetime of these cards. Even today they don't fare well at anything above 1080p, and this will not improve. At 1080p you will not benefit in the least from mega texture packs à la Doom's 5GB-requiring Nightmare mode; the eye simply cannot tell the difference in texture quality at this resolution beyond a certain point, and this has been verified by people like Digital Foundry. Bandwidth, on the other hand, will always be an issue. Some games like Far Cry Primal are already very dependent on fillrate and bandwidth, and it will become more critical as your desired antialiasing method and level increase.
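To put a rough number on the bandwidth point: peak memory bandwidth is just the per-pin data rate times the bus width. A quick sketch (the 8 Gbps / 256-bit figures are the commonly quoted reference RX 480 specs; treat the inputs as assumptions):

```python
def memory_bandwidth_gbps(data_rate_gbps_per_pin: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width, over 8 bits/byte."""
    return data_rate_gbps_per_pin * bus_width_bits / 8

# Reference RX 480 8GB: 8 Gbps GDDR5 on a 256-bit bus
print(memory_bandwidth_gbps(8.0, 256))   # 256.0 GB/s
# Many 4GB boards ship with slower 7 Gbps memory instead
print(memory_bandwidth_gbps(7.0, 256))   # 224.0 GB/s
```

That ~12% bandwidth gap is part of why the 4GB and 8GB cards don't always perform identically even when the extra VRAM itself goes unused.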

2. The 1060 cannot catch up in DX12; that's a dream. The Pascal architecture is basically Maxwell shrunk to 16nm FinFET, meaning it is critically lacking in hardware schedulers. This is why it is so power efficient, but also why it is so terrible at parallelized tasks and low-level API async compute operations. This will not improve. It will likely continue to be the case for Volta, which is very likely going to be Pascal (Maxwell) with HBM.

3. I suggest you watch this from the 16 minute mark onwards. It will not get better for the 1060, especially once Nvidia catches up with the times and produces a new architecture with adequate hardware schedulers to take on DX12/Vulkan, at which point the Maxwell/Pascal(/Volta) cards will become greatly gimped.

4. The RX 470 4GB will probably be the undisputed 1080p price/performance king, at the expense of around 20% less performance and lower memory bandwidth than the 480. What that means is the RX 480 will likely keep you comfortable for about a year longer than the RX 470 would, but both will be great at 1080p.

5. Don't worry about this shit, like, at all.

6. In my mind, there is no reason to buy a G-Sync monitor unless you are terribly invested in the Nvidia ecosystem with shit like Shield and whatnot. But then, why should you be? Freesync is an open standard, comparable monitors are significantly cheaper, and by investing in it you only force Nvidia to cave in and adopt it.

7. Never owned a GPU with a backplate, never understood the appeal. Some people are incredibly anal about the way the inside of their cases look, I can't be assed, like, at all. Even though mine has a window and shit, I just go for a practical bang for buck approach with everything. YMMV.

8. I don't believe the 8-pin PCIe connector was around in 2010, so your computer may lack one for custom 480s/1060s. These cards don't draw very heavy loads on the rail though, so a 6-pin connector converted to 8-pin with a cheap adapter would be sufficient. People will tell you to upgrade your PSU, but fuck that; if it's a decent branded PSU, that 6-pin should be able to handle the current.
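On point 8, the nominal PCIe power limits make the arithmetic easy: 75 W from the x16 slot itself, 75 W per 6-pin connector, 150 W per 8-pin. A sketch of the nominal budget (spec limits, not measured draw):

```python
# Nominal PCIe power limits in watts, per the PCIe spec
SLOT_W = 75      # delivered through the x16 slot itself
PIN6_W = 75      # one 6-pin auxiliary connector
PIN8_W = 150     # one 8-pin auxiliary connector

def board_power_budget(n_6pin: int = 0, n_8pin: int = 0) -> int:
    """Total nominal power available to a card: slot plus auxiliary connectors."""
    return SLOT_W + n_6pin * PIN6_W + n_8pin * PIN8_W

print(board_power_budget(n_8pin=1))  # 225 W -- comfortable for a ~150 W class card
print(board_power_budget(n_6pin=1))  # 150 W -- the reference RX 480 configuration
```

This is why a 6-to-8-pin adapter is usually fine on these cards: the nominal 8-pin budget far exceeds what a 480 or 1060 actually pulls.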


1. Does a 3-5 year lifetime seem possible for these cards? I'm hoping to keep the next GPU I upgrade to until then, and when the time comes I'll build an entirely new computer, because I expect there will have finally been big enough advances in CPUs by then (isn't Zen something everyone's looking forward to?) to warrant a total overhaul. As it is, my 2500K and DDR3 are apparently still in great shape.

I'm worried upgrading to just 2x my current VRAM to 4GB won't be enough to last 3-5 years and will become a huge bottleneck like 2GB has for me now. I opted for 2GB over 1GB back in 2011 and I'm sure upgrading would have become absolutely mandatory much earlier if I hadn't. Even if I don't get excellent 1440p or 4k performance in the future on this card, it seems like 4GB could become a huge bottleneck just to run future game engines (I think my 6950 would do a fair bit better than mid 20s fps in DOOM if not for the 2GB VRAM, for example).

2 & 3. So if buying for the future, like I'm trying to, 480 may pan out as a better investment because it'll be more competitive with future API advances?

4. Hmm, if the 480 likely has another year of life in it over the 470, it seems like the 480 would be the better option. But is there a possibility of the 470 being OC'd to within a couple percent of the 480's base performance? If so I may consider it.

5. Cool. Crossfire with the second card in an x8 PCIe slot will affect things though, I assume? I think the motherboard I have (Asus P8Z68-V Gen 3 [Pro?]) has a system where the first two slots are x16, but if both are used at the same time the speed is reduced to x8 for each. Would that affect things, and how much of a percentage drop in total Crossfire potential would that cause? Are most boards like this, and thus SLI or Crossfire deals with dual x8 all the time?
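For reference on the x8/x16 question, per-lane PCIe throughput can be estimated from the transfer rate and encoding overhead; a sketch with nominal spec numbers (real-world throughput is a bit lower):

```python
# Per-lane throughput in GB/s after encoding overhead, per PCIe generation
PCIE = {
    2.0: 5.0 * (8 / 10) / 8,     # 5 GT/s with 8b/10b encoding   -> 0.5 GB/s per lane
    3.0: 8.0 * (128 / 130) / 8,  # 8 GT/s with 128b/130b encoding -> ~0.985 GB/s per lane
}

def link_bandwidth(gen: float, lanes: int) -> float:
    """Nominal one-direction link bandwidth in GB/s."""
    return PCIE[gen] * lanes

print(link_bandwidth(2.0, 16))  # 8.0 GB/s -- a full x16 slot on a Sandy Bridge board
print(link_bandwidth(2.0, 8))   # 4.0 GB/s -- each card in a dual-x8 CrossFire setup
```

Even at 4 GB/s per card, mid-range GPUs rarely saturate the link, which is why dual-x8 setups usually lose only a few percent.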

6. Yeah, I'm leaning towards Freesync. Any chance TV manufacturers will adopt either? I use my PC on my living room TV, and my most recent monitor is 1920x1200 but has horrible color balance, so I don't plan on going back to it. It seems like Freesync/G-Sync support is seen as more beneficial than 144Hz screens, because Freesync/G-Sync make sub-60fps seem smooth, whereas 144fps is just "super smooth" while still requiring a huge amount of power. How is the upgrade from 1080p to 1440p or 4K (if you've done it)?

7. So there is no need to worry about "warping" or "sagging"? It's only cosmetic?

8. Yeah I checked and I don't think I have an 8-pin cord. Two 6-pins are powering my MSI 6950 2GB, is a pair of 6-pins the standard for power? It seems 8-pins are a bit of a rarity at the power levels of the 1060 and 480. I assume when they are used, it is just one 8-pin port, so a dual 6-pin to 8-pin converter would probably work well.

My PSU is something like 770W, I think. Cards appear to have gotten much less power hungry over time; I checked old reviews for the 6950, and it seems even the 480, which is less efficient than the 1060, has a much lower TDP than my current card, so I'm not worried about being able to feed it enough power.

Last thing, which custom cards for either the 480 or 1060 would you recommend? It seems people like to bring up the EVGA SC, but it seems a little barebones to me, and the Nitro is said to be pretty good by some, but others say it is louder and hotter than expected?

Edit:

A word on backplates: most of the time they are fucking nonsense. Like on my Asus Strix 970, where it has NO function but makes thermals worse.
I fucking hate it; manufacturers advertise something that will hurt your card.

So if you have a choice, always go without backplate.

Thanks. Haven't heard many people say that it increases heat.

Be aware that the RX 480 requires a UEFI BIOS, which I think became common from 2012 and onwards. You might want to go with a 1060 if you don't know if your mobo supports UEFI or not.

Thanks. It's a good thing my motherboard is UEFI. I think when I bought it I specifically went for it because isn't it also a necessity for booting GPT hard disks or something? I was going for a motherboard that could last a long time, and it looks like it worked out. PCIe 3.0 was another feature I was looking at, but it seems like upgrading my 2500K to an Ivy Bridge CPU wouldn't be worth doing for the small benefits PCIe 3.0 seems to give, so I'm glad I've only got to upgrade my GPU to bring it back up to speed.
 

Jaagen

Member
Dunno, ComputerBase says the Devil is the most well-rounded and best overall. I think it or the Nitro would be good choices. The Strix is good too, but too expensive.

I think the Nitro is the fastest, the Strix second, and the Devil third at stock (by stock I mean at boost/OC mode). Unfortunately, they all seem to be waiting on some kind of driver update for either the proper OC software or a fan temp target issue. AMD WattMan is said to be buggy and doesn't actually increase voltage for OCs.

I'll probably get the Nitro, as some stores are starting to get it in stock already. The Strix doesn't seem to have a price yet in Norway and I can't find any release info on the Powercolor Devil.
 

Ganzor

Member
Having a hard time deciding if I should go with the blower stock fan or an aftermarket Nitro. My PC is an mATX build in a Bitfenix Phenom M, so that limits the airflow. I want the best noise-to-performance balance since I mainly game with speakers. The price difference is also $60 between the two cards here in Denmark (not much cheaper to buy from other places in the EU because of shipping).
 
Dunno, ComputerBase says the Devil is the most well-rounded and best overall. I think it or the Nitro would be good choices. The Strix is good too, but too expensive.

I think the Nitro is the fastest, the Strix second, and the Devil third at stock (by stock I mean at boost/OC mode). Unfortunately, they all seem to be waiting on some kind of driver update for either the proper OC software or a fan temp target issue. AMD WattMan is said to be buggy and doesn't actually increase voltage for OCs.

How much more is the Strix compared to the Nitro+? I haven't seen any prices for the 480 Strix in Germany.
In the Hexus review the 480 Strix is £35 less than the 1060 Strix.


Preordered the Nitro two days ago on Amazon. Now another German retailer is saying that the Nitro won't be available before September. WTF? I guess I can just wait for Vega then :-X
 
Just read the ComputerBase review of the 480 Devil, and it seems like it's the best of the lot for sure. A lot quieter than the Nitro and, more interestingly, in 'Quiet mode' it consumes the same power as the GTX 1060 while offering 93% of the performance.

https://translate.google.co.uk/translate?hl=en&sl=de&u=https://www.computerbase.de/&prev=search

AMD are shocking at setting voltages for their reference cards. This means that, in terms of efficiency, the Pascal 1060 GPU in this case is only 8.6% more efficient than the Polaris.

Where are those people that were claiming that the Pascal GPU is a gen ahead of Polaris? The two GPUs are likely even closer in terms of efficiency down at the 460-470 vs 1050 level.
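For what it's worth, the efficiency comparison above boils down to performance per watt. A sketch of the calculation (the 93%-at-equal-power scenario comes from the quoted review; the absolute fps and wattage figures here are placeholders, and the review's own readings produce a slightly different percentage):

```python
def perf_per_watt(fps: float, watts: float) -> float:
    return fps / watts

def relative_efficiency(fps_a: float, watts_a: float, fps_b: float, watts_b: float) -> float:
    """How much more efficient card A is than card B, as a percentage."""
    return (perf_per_watt(fps_a, watts_a) / perf_per_watt(fps_b, watts_b) - 1) * 100

# Hypothetical: card B delivers 93% of card A's frame rate at identical power draw
print(round(relative_efficiency(100, 120, 93, 120), 1))  # 7.5 (%)
```

At equal power draw, the efficiency gap is simply the inverse of the performance ratio, so "same power, 93% of the performance" puts the two cards in the same single-digit ballpark the post describes.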
 

chaosblade

Unconfirmed Member
Devil looks solid. Temp target is a touch high, but it makes up for that a bit by running pretty quietly based on that review.
 

Ptaaty

Member
I preordered both 4GB and 8GB Nitros from Amazon... also bought an 8GB reference that was in stock for a bit today... MSI for $239... should be able to recoup once the Nitro ships. What's funny is I would have splurged for a 1070 if it supported Freesync. Just ordered an Acer XF270HU on shellshocker.
 

CONCH0BAR

Member
I preordered both 4GB and 8GB Nitros from Amazon... also bought an 8GB reference that was in stock for a bit today... MSI for $239... should be able to recoup once the Nitro ships. What's funny is I would have splurged for a 1070 if it supported Freesync. Just ordered an Acer XF270HU on shellshocker.

Did the 4GB Nitro go up on Amazon US or did you buy from another region's Amazon?
 
@Valkyri von Thanatos

1. Keeping the RX 480 is my 3-4 year plan. That said, I don't think any GPU can truly extend beyond 3 years unless your expectations go down. This is especially the case now, with iterative consoles and lower-level APIs pushing development at a previously unseen pace. In the case of the RX 480, beyond 2 years you will have to judge it as a low-mid tier card. What that means is twofold: first, do not expect to use your RX 480 as a 1440p card beyond its first two years at most (4K is right out, let's not get ahead of ourselves here); second, the 8GB VRAM will very likely lose its meaning faster than you think, i.e. the card will actually lack the kind of horsepower to drive 8GB worth of textures anyway. That's why I do not advocate the 8GB RX 480 strongly. In the case of your 6950, would actually having 4GB help your card? No, it's a pre-GCN architecture that fares pretty badly in all modern titles regardless. A 7850 with 2GB would probably do much better than your 6950 with 4GB. So yeah, buy the 8GB if you want some extra future proofing, but I'm really not sold on it.

2 & 3. Basically yes, Pascal is Maxwell at higher clockrates and Maxwell will fare progressively worse as DX12/Vulkan become mainstream.

4. We don't know this yet. Theoretically, there is no reason why the 470 should overclock that much; it's the same GPU on a similar board, possibly with 4 power phases instead of 6. The GPU is cut down but the die size is the same, meaning thermal thresholds will be similar. And it will be using slower RAM for sure. I really doubt the 470 will OC to 480 levels; it will just be a very solid $150 GPU and probably overclock to GTX 970 levels.

5. I believe it's pretty well established that dual 8x PCIe isn't a significant bottleneck, but I'm not too sure on this.

6. AMD is pushing Freesync for TVs so that might happen. TV manufacturers adopting a proprietary tech is less likely.

7. There's no warping issue for a card as small as the RX 480, especially with a blower type fan that keeps it straight regardless.

8. Converter will work well, your 770W would probably be enough to drive a 1080 with its dual 6-pins each converted to 8-pins IMO.

Personally I don't see the point of paying a premium for any AIB RX 480, the overclocking headroom is just not there to warrant it. The $199 reference 4GB board is the best value there is in my view, especially since it can unlock to 8GB but it's hard to find. If you can land a reference 8GB for $239 it's also great value. But an RX 480 for $250+ is not to my taste.

Cheers
 

Locuza

Member
Just read the ComputerBase review of the 480 Devil, and it seems like it's the best of the lot for sure. A lot quieter than the Nitro and, more interestingly, in 'Quiet mode' it consumes the same power as the GTX 1060 while offering 93% of the performance.

https://translate.google.co.uk/translate?hl=en&sl=de&u=https://www.computerbase.de/&prev=search

AMD are shocking at setting voltages for their reference cards. This means that, in terms of efficiency, the Pascal 1060 GPU in this case is only 8.6% more efficient than the Polaris.

Where are those people that were claiming that the Pascal GPU is a gen ahead of Polaris? The two GPUs are likely even closer in terms of efficiency down at the 460-470 vs 1050 level.
I was positively surprised by the numbers, but Pascal can also scale down significantly.
I'm curious to see how the 470 and P11 do against the 1050 and GP107/8; especially since the latter chips should be manufactured at Samsung, which makes for a better comparison.
 

Marlenus

Member
Radeon RX 480 Red Devil review: PowerColor's red devil is civilized and quiet



So far the only bench it "owns" when compared to the 1060 is Doom VK, which is in beta with a lack of performance features for NV h/w, known issues with vsync, etc. Hitman's 480 advantage is present in both DX11 and DX12, so that's not a sign of better DX12 performance. In all other DX12 benchmarks the 1060 is at least on par or faster, which can either be a good or bad thing depending on how you factor the prices into the equation.

Untrue. Proof.

If you compare fastest API to fastest API.

Stock 1060 wins
Ashes
Tomb Raider

Stock 480 wins
Doom
Forza
Hitman
Quantum Break
Gears

They tie in
Total War: Warhammer.

The 1060 has performance regressions in
Ashes
Doom
Hitman
Total War: Warhammer
Tomb Raider

The 480 has regressions in
Tomb Raider

To me it looks like the 480 is a safer bet if you intend to keep the card for more than a year. It performs better in most games with a low level API and it also gains performance when using the low level API.
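Tallying the lists above makes the "safer bet" argument concrete (same data as the lists, just counted):

```python
# Best-API-vs-best-API results from the post above
wins = {
    "1060": ["Ashes", "Tomb Raider"],
    "480":  ["Doom", "Forza", "Hitman", "Quantum Break", "Gears"],
    "tie":  ["Total War: Warhammer"],
}
# Titles where enabling the low-level API *lowers* a card's performance
regressions = {
    "1060": ["Ashes", "Doom", "Hitman", "Total War: Warhammer", "Tomb Raider"],
    "480":  ["Tomb Raider"],
}

print({k: len(v) for k, v in wins.items()})         # {'1060': 2, '480': 5, 'tie': 1}
print({k: len(v) for k, v in regressions.items()})  # {'1060': 5, '480': 1}
```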
 
I was positively surprised by the numbers, but Pascal can also scale down significantly.
I'm curious to see how the 470 and P11 do against the 1050 and GP107/8; especially since the latter chips should be manufactured at Samsung, which makes for a better comparison.

Not sure that's technically true. The 1080 is an efficiency beast. But scale down to the 1060 and Pascal loses a load of efficiency. Scale down again to the 1050 and it's likely very close to AMD's equivalent 470 (460?). Remember, Pascal is basically Maxwell at higher clockspeeds. Once you take the high clockspeeds away, you're left with numbers very similar to last gen.
 

Locuza

Member
Actually, I meant the power consumption per chip and SKU.
You can scale the GTX 1070 down to consuming only 50% of the power while holding 75% of the original performance:
http://www.tomshardware.com/reviews/nvidia-geforce-gtx-1070-8gb-pascal-performance,4585-8.html

It's not like a custom RX 480 with a tighter power target vs. the stock GTX 1060 is the right comparison for the efficiency difference between Pascal and Polaris.
Take a better point on the efficiency curve for the 1060 and I'm sure the difference will be bigger than just 9%.
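The Tom's Hardware figure quoted above implies a simple perf-per-watt ratio; a quick check (0.75 and 0.50 are the fractions cited in the post, not my measurements):

```python
def efficiency_gain(perf_fraction: float, power_fraction: float) -> float:
    """Relative perf-per-watt vs. stock when a card is power-limited."""
    return perf_fraction / power_fraction

# Cited above: a 1070 held at 50% power keeps 75% of its performance
print(efficiency_gain(0.75, 0.50))  # 1.5 -> 50% better perf/W than stock
```

This is the general point about efficiency curves: the same silicon can look dramatically more or less efficient depending on where on the voltage/frequency curve the vendor ships it.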
 
Actually, I meant the power consumption per chip and SKU.
You can scale the GTX 1070 down to consuming only 50% of the power while holding 75% of the original performance:
http://www.tomshardware.com/reviews/nvidia-geforce-gtx-1070-8gb-pascal-performance,4585-8.html

It's not like a custom RX 480 with a tighter power target vs. the stock GTX 1060 is the right comparison for the efficiency difference between Pascal and Polaris.
Take a better point on the efficiency curve for the 1060 and I'm sure the difference will be bigger than just 9%.

Is that really a metric, though? Because the RX 480, if anything, is much easier to undervolt while retaining clockspeeds. Power efficiency this isn't.
 
Untrue. Proof.

If you compare fastest API to fastest API.

Stock 1060 wins
Ashes
Tomb Raider

Stock 480 wins
Doom
Forza
Hitman
Quantum Break
Gears

They tie in
Total War: Warhammer.

The 1060 has performance regressions in
Ashes
Doom
Hitman
Total War: Warhammer
Tomb Raider

The 480 has regressions in
Tomb Raider

To me it looks like the 480 is a safer bet if you intend to keep the card for more than a year. It performs better in most games with a low level API and it also gains performance when using the low level API.
Apparently AMD just released a new driver that improves Tomb Raider performance by 10 percent, so that gap is closing as well. Can't wait to see Guru3D's AIB 480 reviews with the latest drivers installed.
 
Actually, I meant the power consumption per chip and SKU.
You can scale the GTX 1070 down to consuming only 50% of the power while holding 75% of the original performance:
http://www.tomshardware.com/reviews/nvidia-geforce-gtx-1070-8gb-pascal-performance,4585-8.html

It's not like a custom RX 480 with a tighter power target vs. the stock GTX 1060 is the right comparison for the efficiency difference between Pascal and Polaris.
Take a better point on the efficiency curve for the 1060 and I'm sure the difference will be bigger than just 9%.

Sure, but there is a slight misconception that some tried to peddle: that the whole Polaris GPU family is miles behind Pascal in terms of efficiency. It depends on what performance segment you look at.

We need to remember that Polaris was introduced at a lower price and a lower performance tier than enthusiast-level chips. It's a tiny GPU. Anyway, let's wait and see numbers on the 460 and 470. As I said, I'm predicting these are going to be pretty much best in class in terms of efficiency.
 

Locuza

Member
Sure, but there is a slight misconception that some tried to peddle: that the whole Polaris GPU family is miles behind Pascal in terms of efficiency. It depends on what performance segment you look at.

We need to remember that Polaris was introduced at a lower price and a lower performance tier than enthusiast-level chips. It's a tiny GPU. Anyway, let's wait and see numbers on the 460 and 470. As I said, I'm predicting these are going to be pretty much best in class in terms of efficiency.
But you need to put things in perspective: the GP106 is smaller than the P10 chip (~200 mm² vs. ~232 mm²) while clocking much higher, using less voltage, consuming less power, and still performing better.
Without any doubt the Pascal products are leading in this respect by no small margin, although to be fair, the software side is also very important for performance and efficiency, and that isn't in AMD's favor.
 

DonMigs85

Member
AMD just doesn't have the R&D budget to make their designs as efficient as Nvidia's. Their parts are also very ALU-heavy which probably doesn't do wonders for power consumption relative to their Nvidia counterparts to begin with.
 
AMD just doesn't have the R&D budget to make their designs as efficient as Nvidia's. Their parts are also very ALU-heavy which probably doesn't do wonders for power consumption relative to their Nvidia counterparts to begin with.

This is not the whole story. AMD certainly has at least the budget to reverse engineer Maxwell, and they likely did. The thing is, they don't believe in what nVidia are doing. nVidia basically took Kepler, which was a flop, and in order to improve their own efficiency largely cut down on the hardware schedulers, which are responsible for parallelizing tasks for multithreaded computing, async compute and low level APIs. AMD did not do this, and refuse to do this, because they apparently believe sacrificing efficiency now and adopting a future proof architecture into conventional TDP envelopes is the way to go.

I can only agree after seeing RX 480 vs GTX 1060 DX12 and Vulkan benchmarks. Pascal is a dead end for nVidia. Next year, a lot of people will lament their choices.
 

Bittercup

Member
Has anyone seen the PowerColor RX 480 Red Devil in a store somewhere? Preferably mainland Europe.
ComputerBase writes that the card should be available today in small quantities, with more coming next week. But none of the stores listed on their price comparison site have it in stock, and the dates given are only for some time later in August.
 

dr_rus

Member
PowerColor Radeon RX 480 RED DEVIL review

RX 480 Sapphire Nitro+ vs GTX 1060 Gainward: a matchup at €280

Untrue. Proof.

If you compare fastest API to fastest API.

Stock 1060 wins
Ashes
Tomb Raider

Stock 480 wins
Doom
Forza
Hitman
Quantum Break
Gears

They tie in
Total War: Warhammer.

The 1060 has performance regressions in
Ashes
Doom
Hitman
Total War: Warhammer
Tomb Raider

The 480 has regressions in
Tomb Raider

To me it looks like the 480 is a safer bet if you intend to keep the card for more than a year. It performs better in most games with a low level API and it also gains performance when using the low level API.
These tests are not the only ones on the Internet, but even they show pretty much what I've said: the only "new API" game where the 480 "owns" the 1060 is Doom Vulkan. The rest of them are very close.
 

dr_rus

Member
Quantum Break, Forza and Killer Instinct as well

Direct3D12-Vulkan-Test-13.png


GTX1060_Forza-600x359.jpg


Killer Instinct is DX11. Which again reinforces the fact that it's not an issue of new APIs but mostly of bad renderer optimizations when porting console GCN code to PC.

This situation isn't nearly as clear as some make it sound. The 480 isn't universally faster than the 1060 in DX12 at all.
 

Locuza

Member
This is not the whole story. AMD certainly has at least the budget to reverse engineer Maxwell, and they likely did. The thing is, they don't believe in what nVidia are doing. nVidia basically took Kepler, which was a flop, and in order to improve their own efficiency largely cut down on the hardware schedulers, which are responsible for parallelizing tasks for multithreaded computing, async compute and low level APIs. AMD did not do this, and refuse to do this, because they apparently believe sacrificing efficiency now and adopting a future proof architecture into conventional TDP envelopes is the way to go.

I can only agree after seeing RX 480 vs GTX 1060 DX12 and Vulkan benchmarks. Pascal is a dead end for nVidia. Next year, a lot of people will lament their choices.
They didn't and that's definitely not the reason why Maxwell/Pascal are more efficient than GCN.
 
Direct3D12-Vulkan-Test-13.png


GTX1060_Forza-600x359.jpg


Killer Instinct is DX11. Which again reinforces the fact that it's not an issue of new APIs but mostly of bad renderer optimizations when porting console GCN code to PC.

This situation isn't nearly as clear as some make it sound. The 480 isn't universally faster than the 1060 in DX12 at all.

That's pretty disingenuous, dude. Literally one click away from the medium settings Quantum Break benchmark you linked is this:

qxgnTAL.png


And since their Forza Apex benchmark doesn't show the 1060 winning, I guess it's time to look somewhere else?

FcWyeU1.png
 
They didn't and that's definitely not the reason why Maxwell/Pascal are more efficient than GCN.

Sorry, but what? They did exactly what I said. Kepler had strong SMs with access to large CUDA clusters, so they could better manage parallel tasks, whereas Maxwell has weak SMs that can only access smaller CUDA clusters. In essence they serialized the architecture for DX11, while also gimping compute and cutting out FP64, which resulted in less power consumption and easier high clockspeeds. With that clockspeed and with good drivers, they are basically bruteforcing a technically inferior architecture through DX11 and OpenGL, but hitting a wall with DX12 and Vulkan.
 

cyen

Member
That's pretty disingenuous, dude. Literally one click away from the medium settings Quantum Break benchmark you linked is this:

qxgnTAL.png


And since their Forza Apex benchmark doesn't show the 1060 winning, I guess it's time to look somewhere else?

FcWyeU1.png

He always cherry-picks the benchmarks; I don't know why I even bother.
 

thelastword

Banned
That's pretty disingenuous, dude. Literally one click away from the medium settings Quantum Break benchmark you linked is this:

qxgnTAL.png


And since their Forza Apex benchmark doesn't show the 1060 winning, I guess it's time to look somewhere else?

FcWyeU1.png
It's pretty clear that the RX 480 outperforms the 1060 in many recent titles: Gears, Quantum Break, Forza, Hitman, Doom, etc. So going forward, we will see even more titles running better on the 480, since most of these titles are developed using the low-level APIs of the consoles, which happen to use AMD GPUs.
 

Locuza

Member
Sorry, but what? They did exactly what I said. Kepler had strong SMs with access to large CUDA clusters, so they could better manage parallel tasks, whereas Maxwell has weak SMs that can only access smaller CUDA clusters. In essence they serialized the architecture for DX11, while also gimping compute and cutting out FP64, which resulted in less power consumption and easier high clockspeeds. With that clockspeed and with good drivers, they are basically bruteforcing a technically inferior architecture through DX11 and OpenGL, but hitting a wall with DX12 and Vulkan.
Hooijdonk, nearly everything you say is wrong.

Kepler had very bad and inefficient SMs: 192 ALUs with few resources in terms of register file and caches, never able to reach their theoretical peak in practice.
Maxwell fixed this inefficient design by cutting down the ALU count and improving latency and the caches.

You can look at the theoretical throughput and compare it to the practical numbers, and also take a look at the latency of the operations:
http://techreport.com/review/28513/amd-radeon-r9-fury-x-graphics-card-reviewed/4

There is nothing Nvidia cut down in regard to the schedulers or their ability to achieve higher efficiency; the opposite is the case.
With the big Kepler chip GK110, Nvidia introduced dynamic parallelism and Hyper-Q, which manages up to 32 compute queues.
The small Kepler chips didn't support dynamic parallelism or Hyper-Q; Maxwell does, for every chip.
http://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading

Nvidia didn't gimp anything in comparison to Kepler; they improved on all fronts.
Schedulers, compute performance and so on.
Well, there is one exception, and that is the FP64 rate.
Kepler supported 1:24 for the small chips; Maxwell only does 1:32.
And there wasn't a chip with a high DP rate like the old GK110, which offered a DP:SP ratio of 1:3.

Pascal went a few steps further, implementing dynamic load balancing for graphics/compute reservation and making task preemption a lot faster.
Nvidia also has a Pascal implementation with only 64 ALUs, again improving the resources per ALU and implementing high FP64 rates as well as FP16.
But this time around it is not a product for consumers.
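Those DP:SP ratios translate directly into FP64 throughput; a small sketch (the ratios are the ones named above, but the FP32 TFLOPS inputs are placeholders, not actual chip specs):

```python
def fp64_tflops(fp32_tflops: float, ratio: str) -> float:
    """Double-precision throughput from a single-precision rate and a DP:SP ratio like '1:32'."""
    dp, sp = map(int, ratio.split(":"))
    return fp32_tflops * dp / sp

# Same hypothetical 5 TFLOPS FP32 chip at the different rates mentioned above
print(round(fp64_tflops(5.0, "1:3"), 2))   # 1.67 -- GK110-class 1:3 rate
print(round(fp64_tflops(5.0, "1:24"), 3))  # 0.208 -- small-Kepler 1:24 rate
print(round(fp64_tflops(5.0, "1:32"), 3))  # 0.156 -- consumer-Maxwell 1:32 rate
```

The order-of-magnitude gap between 1:3 and 1:32 is why FP64 rate is the one area where the consumer chips genuinely regressed.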
 

dr_rus

Member
That's pretty disingenuous, dude. Literally one click away from the medium settings Quantum Break benchmark you linked is this:

qxgnTAL.png


And since their Forza Apex benchmark doesn't show the 1060 winning, I guess it's time to look somewhere else?

FcWyeU1.png

I used the link that Marlenus posted saying that the 480 wins in QB to show that it doesn't, even according to that same link. The only ones cherry-picking stuff here are the guys saying that the 480 wins (let alone "owns") in DX12. When you look at all the benchmarks out there, it's pretty clear that it doesn't win at all; it just hovers at the same level. Whatever cumulative figures we have for DX12+VK show that as well.
 
Hooijdonk, nearly everything you say is wrong.

Kepler had very bad and inefficient SMs: 192 ALUs with few resources in terms of register file and caches, never able to reach their theoretical peak in practice.
Maxwell fixed this inefficient design by cutting down the ALU count and improving latency and the caches.

You can look at the theoretical throughput and compare it to the practical numbers, and also take a look at the latency of the operations:
http://techreport.com/review/28513/amd-radeon-r9-fury-x-graphics-card-reviewed/4

There is nothing Nvidia cut down in regard to the schedulers or their ability to achieve higher efficiency; the opposite is the case.
With the big Kepler chip GK110, Nvidia introduced dynamic parallelism and Hyper-Q, which manages up to 32 compute queues.
The small Kepler chips didn't support dynamic parallelism or Hyper-Q; Maxwell does, for every chip.
http://www.anandtech.com/show/9124/amd-dives-deep-on-asynchronous-shading

Nvidia didn't gimp anything in comparison to Kepler; they improved on all fronts.
Schedulers, compute performance and so on.
Well, there is one exception, and that is the FP64 rate.
Kepler supported 1:24 for the small chips; Maxwell only does 1:32.
And there wasn't a chip with a high DP rate like the old GK110, which offered a DP:SP ratio of 1:3.

Pascal went a few steps further, implementing dynamic load balancing for graphics/compute reservation and making task preemption a lot faster.
Nvidia also has a Pascal implementation with only 64 ALUs, again improving the resources per ALU and implementing high FP64 rates as well as FP16.
But this time around it is not a product for consumers.

This is Nvidia's tune. What I'm saying is what actually happened. What you call more 'efficient' Maxwell schedulers are basically weaker schedulers more oriented towards serial tasks. Nvidia butchered the cards' compute performance for a DX11 gaming-oriented architecture. You are basically mouthing Nvidia PR at this moment, and I don't really care for that. You can just take the chip diagrams and die photos and see what happened with your own eyes.
 

Locuza

Member
Nothing lost, nothing gained.
Okay, I try to be open minded.
Could you please explain to me how Maxwell or Pascal are more oriented towards serial tasks than Kepler?
How do you know that the compute performance was butchered, and how do you interpret the die shots? Where do you clearly see that Maxwell reduced the capabilities?
 

notBald

Member
Uhh? I'm pretty sure those videos testing whether the RX 480 fried motherboards used some old AM3+ boards that were not UEFI. Also, I remember flashing a UEFI vBIOS on my R7 260X and it worked fine.

That was what was claimed in the thread I linked. I haven't actually tried the card with a non-UEFI motherboard myself.
 

JohnnyFootball

GerAlt-Right. Ciriously.
Well, I will say that the 470 benchmarks could really make that card the one for the best performance on a budget.

If the $160 price point is correct, I could easily see myself using that card in a second PC.
 

Sotha_Sil

Member
Well, I will say that the 470 benchmarks could really make that card the one for the best performance on a budget.

If the $160 price point is correct, I could easily see myself using that card in a second PC.

If it can ruin last-gen games with high performance, I might pick one up as a cheap holdover for Vega. So many games I've missed out on that I could play for the time between then and now.
 

JohnnyFootball

GerAlt-Right. Ciriously.
If it can ruin last-gen games with high performance, I might pick one up as a cheap holdover for Vega. So many games I've missed out on that I could play for the time between then and now.

My current plan is to build a new system from the ground up and transfer my 1070 over to a home theater PC dedicated exclusively to gaming, using a Corsair Bulldog, and leave that hooked up to the TV.

I would probably just put a 470 in my current rig so that it has some gaming power.
 

wachie

Member
That's pretty disingenuous, dude. Literally one click away from the medium settings Quantum Break benchmark you linked is this:

[image: Quantum Break benchmark chart (qxgnTAL.png)]


And since their Forza Apex benchmark doesn't show the 1060 winning, I guess it's time to look somewhere else?

[image: Forza Motorsport 6: Apex benchmark chart (FcWyeU1.png)]
Welcome to the club.
 
This is Nvidia's tune. What I'm saying is what actually happened. What you call more 'efficient' Maxwell schedulers are basically weaker schedulers, more oriented towards serial tasks. Nvidia butchered the card's compute performance for a DX11 gaming-oriented architecture. You are basically mouthing Nvidia PR at this moment, and I don't really care for that. You can just take the chip diagrams and die photos and see what happened with your own eyes.

Your information is incorrect. Kepler is when Nvidia dropped the fine-grained hardware scheduling; it was last seen in Fermi. Everything he told you is correct. Though I should point out that dynamic parallelism and Hyper-Q are only usable under CUDA when doing compute-only work; I don't believe they're usable at all for games.
 
Seems the fabled AMD driver improvements over time are happening faster than usual. AMD released a new RotTR driver today, and Hexus benched the game with it for their Nitro 4GB and 8GB review.

Nearly a 20fps increase over the reference 480 at 1080p. What's surprising is that the Nitro now beats even the Fury X in this game and is almost identical to the 1060 Gaming X at 1440p:

http://hexus.net/media/uploaded/2016/7/5a4242a2-0d5e-4afa-a033-fc16a532210f.png

Hexus said:
Hold up, what's going on here, then? Why are the Sapphire cards almost 20fps faster at 1080p than the reference RX 480? The release notes of the Crimson 16.7.3 driver show there's a performance uplift compared to older drivers, but we didn't expect this much. This is the only game affected in the transition between drivers, and it feels as if it's been 'Vulkanised'.

The uplift peters out at higher resolutions, however, yet it still serves to show the importance of software optimisations. The improvements put the RX 480 on a much closer footing against the GTX 1060.

RotTR is a major factor in why the 1060 has an aggregate (+5-7%) performance advantage over the 480, and now that is greatly diminished. I would like to see how the two cards stack up with the new Tomb Raider drivers and with DOOM benched in Vulkan, you know, the free mode where everyone gets more performance. I think the performance gap would be even narrower.

And I predict it will continue to shrink as AMD slowly optimizes more games for Polaris arch.
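As a rough illustration of how much one title can move the aggregate, here's a quick geometric-mean sketch in Python. The per-game ratios are hypothetical, made up only to show the shape of the effect, not real benchmark numbers:

```python
from statistics import geometric_mean

# Hypothetical fps ratios (GTX 1060 / RX 480) across a five-game suite.
before = {"RotTR": 1.25, "Game B": 1.08, "Game C": 1.05,
          "Game D": 0.95, "Game E": 0.96}
after = dict(before, RotTR=1.02)  # only RotTR updated post-16.7.3

print(f"{geometric_mean(before.values()):.3f}")  # roughly a 5% aggregate lead
print(f"{geometric_mean(after.values()):.3f}")   # shrinks to roughly 1%
```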
 

DSix

Banned
With the Nitro coming onto the market, I'm seeing the default blower models going down in price quite a bit.

Should I buy a stock RX 480 or is it important I wait for AIBs?
 