
AMD Radeon Fury X review thread

mkenyon

Banned
I kind of assume they had active cooling over the VRMs given the lack of fan on the card itself. Almost all (good) waterblocks have water pass over the VRM area in order to keep them cool. Even EK's block for the Fury X has this.

But, it looks like they just have some heatpipes connected to some additional copper which runs off the main waterblock. Bummer.
This is silly. They also commented that their video card reviews literally always come down to the wire and that they test as many games as is realistic in the few days they have before the NDA lifts. Brent also mentioned he always pulls all-nighters for his reviews. To say there is a lack of effort, or that they can't be bothered to do this or that, is quite frankly insulting and disrespectful to the time and effort they put into their reviews.

There are already two other sites which do FCAT, and they give very detailed charts and graphs of many games. If FCAT is what you love, go read TPU and PCPer and pretend there isn't HardOCP. I greatly value HardOCP because they are literally the only site who don't just run a bunch of benchmarks, make graphs, and call it a day. They actually sit down and extensively play the games they are testing, which is incidentally why they can't test more than 5: that's all they could fit in before the deadline.

There will always be a place for objective scientific measurements like FCAT but HardOCP are the only guys doing the heavy lifting and going in there and experiencing what it's like to actually play games on the cards they test and they provide interesting subjective feedback on what they are seeing with their eyes. So no, I don't want HardOCP to become like the other 28 sites who run canned benchmarks, prepare pretty bar charts, and talk about how pretty their charts are. There are already 28 other sites which do this. HardOCP are the only site which plays the actual game and tells us how it is, and I would like them to keep it that way.

P.S. I read reviews from multiple websites anyways.
Good.

To be 100% frank, Kyle has been quite an asshole about the situation, to the point of banning people on the forums for requesting that they give better data. The reason I bring this up is that I'd prefer the hits from GAF to go to websites that give accurate, detailed, objective information in their reviews rather than to websites with petulant editors who refuse to get with the times.

Also, I don't think you understand how benchmarks work. You do have to play it in order for the benchmark to happen. It's not some passive thing in game. You have to repeat the same section 5+ times per game through active participation.
 
"stacking" seems to be a new thing now.

Samsung uses stacked NAND in their newer SSDs too.

I wonder if everyone is going to start going 'vertical' instead of horizontal with their silicon architecture now?
 

Pakkidis

Member
AMD is really an attractive choice in Canada because the 980ti goes for 800+ here. One of the reasons I usually choose AMD is the price/performance ratio.
 
Also, I don't think you understand how benchmarks work. You do have to play it in order for the benchmark to happen. It's not some passive thing in game. You have to repeat the same section 5+ times per game through active participation.

Huh? Tons of PC games have passive benchmark modes. I'm guessing 90% of these benchmark results are the in-game passive benchmark mode.

HardOCP does actually play the games for their results.
 

mrklaw

MrArseFace
honestly the water cooling sounds less like 'ooh super quiet and lots of OC headroom' and more like 'if we don't do water cooling we'll have to downclock it a lot to stop it frying'
 
This is the one reason keeping me from buying the Fury X.

For those with more GPU knowledge: will this be a bad thing in the long run, assuming the card will not be overclocked? I'm just worried the card will fry after a few months.

I have no idea how high the VRMs temps can go.

Those VRMs are technically rated up to 125C but... Yeah. I wouldn't actually want to test how long they could go at 125C before frying.
 

tuxfool

Banned
"stacking" seems to be a new thing now.

Samsung uses stacked NAND in their newer SSDs too.

I wonder if everyone is going to start going 'vertical' instead of horizontal with their silicon architecture now?

Yes. It is an "easy" way to:
- make use of older processes or larger feature sizes
- increase transistor count
- improve failure tolerance
- reduce package sizes

This is something that has been a long time coming (ever since IBM published the first paper describing stacked dies).
 

Sinistral

Member
Why do people keep acting like Windows 10 is going to do anything for the Fury X that it won't do for the 980 Ti? They're both DX12 cards.

It's more or less about AMD's drivers. They're not well tailored for DX11 multithreading, whereas with DX12 that seems to be alleviated. A synthetic test:

http://www.hardwareluxx.com/index.p...5798-reviewed-amd-r9-fury-x-4gb.html?start=21

Games retrofitting DX12 enhancements, and those built from the ground up for it, are the unknown though. Not something to bank on. The new Deus Ex is the first ground-up DX12 game, right?
 

mkenyon

Banned
Huh? Tons of PC games have passive benchmark modes. I'm guessing 90% of these benchmark results are the in-game passive benchmark mode.

HardOCP does actually play the games for their results.
Nope.

TechReport posts all the videos from the sections that they use for the benchmark, if you want to see what it is that they are testing. This is how most websites do it. Very few use in-game benchmarks, and if they do, then they point out that it's an in game benchmark and kind of separate it from the rest of the data.

Again, I think people are buying a bit too much into what Kyle says about HardOCP's testing methodology. They're not drastically different from other people; they just don't want to take on the time crunch of putting frame time data together in a digestible format. Understandably so: it took me probably ~40 hours when I was going through the data for my first time. But they're paid to do it, and I wasn't. :p
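(To give a sense of what that crunching involves, here's a minimal Python sketch. It assumes a hypothetical frametimes.csv log with one per-frame render time in milliseconds per line, the kind of file FRAPS-style capture tools produce; the file name and metric choices are mine, not any site's actual pipeline.)

```python
import statistics

def summarize_frametimes(path):
    """Summarize a per-frame render time log (one value in ms per line)."""
    with open(path) as f:
        times_ms = sorted(float(line) for line in f if line.strip())
    n = len(times_ms)
    avg_fps = 1000.0 / statistics.mean(times_ms)
    # 99th percentile frame time: all but the worst 1% of frames were faster
    p99_ms = times_ms[min(n - 1, int(n * 0.99))]
    return {"frames": n, "avg_fps": round(avg_fps, 1), "p99_ms": round(p99_ms, 2)}
```

The point of numbers like the 99th percentile is that a card can post a great average FPS while the percentile figure exposes the hitches.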
Those VRMs are technically rated up to 125C but... Yeah. I wouldn't actually want to test how long they could go at 125C before frying.
They won't fry, you'll get all sorts of errors way before then with crashing.
 
So is Fury X pretty much all there is from AMD in the short term? ie for the rest of the year in terms of highest performance, regardless of cost?
 
So is Fury X pretty much all there is from AMD in the short term? ie for the rest of the year in terms of highest performance, regardless of cost?

No, there is the Fury, Nano and the dual-GPU Fury X2 (or Fury Maxx). So all is not lost for them this year, they got some interesting products coming out.

I want to see what Sapphire, Gigabyte et al can do with the Fiji chip in their non-reference cards.
 

mkenyon

Banned
So is Fury X pretty much all there is from AMD in the short term? ie for the rest of the year in terms of highest performance, regardless of cost?
Yeah, as noted above, they have 3 more Fury products. The Fury X2 is going to be more or less two Fury X GPUs (possibly downclocked) on one PCB, so it's like having Crossfire on a single board.
 

tuxfool

Banned
It's more or less about AMD's drivers. They're not well tailored for DX11 multithreading, whereas with DX12 that seems to be alleviated. A synthetic test:

http://www.hardwareluxx.com/index.p...5798-reviewed-amd-r9-fury-x-4gb.html?start=21

Games retrofitting DX12 enhancements, and those built from the ground up for it, are the unknown though. Not something to bank on. The new Deus Ex is the first ground-up DX12 game, right?

This probably makes sense. AMD has a head start with low-level APIs. They may have even been able to appropriate large sections of Mantle code to work with DX12.
 

Nachtmaer

Member
So is Fury X pretty much all there is from AMD in the short term? ie for the rest of the year in terms of highest performance, regardless of cost?

They showed the board of a dual Fiji which is probably going to take a few months. Besides that and a few cut down versions of Fury X, that's pretty much it for this generation. The same probably goes for Nvidia.

The last few years have really been a bore when it comes to PC tech. Edit: At least when it comes to CPUs and GPUs, it's nice to see some progress in the monitor business.
 

Randam

Member
PC Games Hardware had a similar finding:
[Image: Fiji_Cooler_Master_Heat_Full_Load_380Watt-pcgh.jpg]


I think this could be rather easily alleviated though. The cooler can clearly handle more heat, but they need to transport it off the VRMs better.

what does this mean for the air cooled non X Fury?
 

mkenyon

Banned
what does this mean for the air cooled non X Fury?
It'll actually likely be in a better spot with air cooling, as it will have either active cooling, or fans moving air over the passive coolers, which does not happen on the water cooled card.
Just saw something interesting on reddit (for once).

Three different revs of Catalyst 15.15 were released on the 15th, 17th, and 20th. There's speculation that variation between different sites' test results might have to do with which driver rev was used.

https://www.reddit.com/r/buildapc/comments/3b30bt/discussionfury_x_possibly_reviewed_with_incorrect/
I really hope this is the case.
 

dr_rus

Member
Well, the Fury X is using 4GB at higher resolutions.

You can't expand RAM with the driver but you can make sure that what is being stored is what is most used. From the quotes I read AMD haven't cared about managing this before and large chunks of what is stored is barely ever accessed, so is just wasting space.

Benchmarks have already proven that Fury's 4GB are not different at all to Hawaii's 4GB. AMD was just controlling the damage with lies, as usual.
 

mephixto

Banned
It'll actually likely be in a better spot with air cooling, as it will have either active cooling, or fans moving air over the passive coolers, which does not happen on the water cooled card.

I really hope this is the case.

Well, not only was the card overhyped and below expectations, now they are telling us they are an incompetent company that can't manage the launch of a key product, pissing off reviewers and potential customers. Fuck off AMD.
 

mkenyon

Banned
Well, not only was the card overhyped and below expectations, now they are telling us they are an incompetent company that can't manage the launch of a key product, pissing off reviewers and potential customers. Fuck off.
Those things aren't additive though, it's an either/or.
 

Van Owen

Banned

mkenyon

Banned
Amd loyalists grasping. Wait for new drivers. Wait for DX12. It goes on and on.
Not sure if you read it, but they're indicating that due to a fuck up on AMD's part, websites used different drivers. It's not, "wait for new drivers", as Guru3D apparently used the correct ones on the Day 0 review.
 

mephixto

Banned
Not sure if you read it, but they're indicating that due to a fuck up on AMD's part, websites used different drivers. It's not, "wait for new drivers", as Guru3D apparently used the correct ones on the Day 0 review.

Even if Guru3D used the correct drivers, it makes little (1 or 2 fps) or no difference at all. If this fuck-up of AMD's is true, it's not a good look to invalidate and throw away the work of almost every review site.
 

FireFly

Member
Benchmarks have already proven that Fury's 4GB are not different at all to Hawaii's 4GB. AMD was just controlling the damage with lies, as usual.
I found this Computerbase review (translated from German):

https://translate.google.co.uk/tran...5-06/amd-radeon-r9-fury-x-test/11/&edit-text=

On the linked page there are a couple of titles where the Fury uses less memory than the 390X 4GB. This is in cases where the 390X is very close to or exceeding the 4GB limit. At the very least this shows that AMD manage Fury's memory differently.

The reviewer also notes that the frame time discrepancies do not disappear at lower resolutions, so are unlikely to be caused by memory bottlenecks.
 
Nope.

TechReport posts all the videos from the sections that they use for the benchmark, if you want to see what it is that they are testing. This is how most websites do it. Very few use in-game benchmarks, and if they do, then they point out that it's an in game benchmark and kind of separate it from the rest of the data.

I very much doubt most websites do it.

Tom's Hardware says which are in-game and which are not, but they don't separate them at all. Anandtech doesn't even say which they are using unless you can glean it from the text, which is what you will find looking at their 980 Ti review (where they admit the frame times for GTAV came from the in-game benchmark but don't otherwise say they used it). Guru3d uses the in-game benchmarks and doesn't mention what they are using unless you can figure it out yourself. TechPowerUp is the same way.

I could go on like this. I wouldn't trust any review to contain something other than the in-game benchmark unless they explicitly say it is not, rather than the other way around.
 

dr_rus

Member
I found this Computerbase review (translated from German):

https://translate.google.co.uk/tran...5-06/amd-radeon-r9-fury-x-test/11/&edit-text=

On the linked page there are a couple of titles where the Fury uses less memory than the 390X 4GB. This is in cases where the 390X is very close to or exceeding the 4GB limit. At the very least this shows that AMD manage Fury's memory differently.

The reviewer also notes that the frame time discrepancies do not disappear at lower resolutions, so are unlikely to be caused by memory bottlenecks.

Of course it manages the memory differently, it's a completely different memory controller after all. But the amount of cases where Fiji is using less memory than Hawaii is basically the same as the opposite so all in all the results from memory utilization POV are the same.

As for frametimes: Hilbert of Guru3D used FCAT for testing, which limited him to 1080p, and he doesn't see any stuttering in SoM at this resolution - http://www.guru3d.com/articles_pages/amd_radeon_r9_fury_x_review,31.html
At the same time the Hexus.net review used something else to capture frametimes (Fraps?) and what they have in 4K is clearly different - http://hexus.net/tech/reviews/graphics/84170-amd-radeon-r9-fury-x-4gb/?page=9

Here's another direct comparison of frametimes in 1080p and 4K in one place - http://www.hardwareluxx.com/index.p...5798-reviewed-amd-r9-fury-x-4gb.html?start=18 - the SoM results clearly show that Fury X is hitting its VRAM limit there in 4K.
The same can be seen in PCars and a couple of other games in this review.
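(For what it's worth, the kind of spike those frame time charts show is easy to flag programmatically. A rough Python sketch, assuming a plain list of per-frame times in milliseconds; the 2x-median threshold is an illustrative choice of mine, not what FCAT or any review site actually uses.)

```python
from statistics import median

def find_stutter_spikes(times_ms, factor=2.0):
    """Flag frames whose render time exceeds `factor` times the median.

    Outliers like these show up as spikes in FCAT/Fraps frame time plots
    and are perceived as stutter, even when the average FPS looks healthy.
    """
    baseline = median(times_ms)
    return [(i, t) for i, t in enumerate(times_ms) if t > factor * baseline]
```

A steady run at ~16.7 ms per frame with one 60 ms frame would get flagged as a single hitch, which is exactly the pattern a VRAM-limited card produces when it has to swap textures over the bus.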
 

FireFly

Member
Of course it manages the memory differently, it's a completely different memory controller after all. But the amount of cases where Fiji is using less memory than Hawaii is basically the same as the opposite so all in all the results from memory utilization POV are the same.
Suppose that this holds for all games, but the cases where Fiji uses less memory are those where Hawaii would be at 4GB. Would you rather not hit the limit, or consume less memory when you are already within the limit?

As for frametimes: Hilbert of Guru3D used FCAT for testing, which limited him to 1080p, and he doesn't see any stuttering in SoM at this resolution - http://www.guru3d.com/articles_pages/amd_radeon_r9_fury_x_review,31.html
At the same time the Hexus.net review used something else to capture frametimes (Fraps?) and what they have in 4K is clearly different - http://hexus.net/tech/reviews/graphics/84170-amd-radeon-r9-fury-x-4gb/?page=9

Here's another direct comparison of frametimes in 1080p and 4K in one place - http://www.hardwareluxx.com/index.p...5798-reviewed-amd-r9-fury-x-4gb.html?start=18 - the SoM results clearly show that Fury X is hitting its VRAM limit there in 4K.
The same can be seen in PCars and a couple of other games in this review.
In those benches it certainly seems to get worse with resolution, but even Mordor isn't hitting 4GB in that Hardwareluxx review. But let's say you are right. Witcher 3 has clear frame pacing issues that seem to manifest at high resolutions, yet according to that review it only uses about 2GB of RAM. What explains those issues?
 

Sinistral

Member
Of course it manages the memory differently, it's a completely different memory controller after all. But the amount of cases where Fiji is using less memory than Hawaii is basically the same as the opposite so all in all the results from memory utilization POV are the same.

As for frametimes: Hilbert of Guru3D used FCAT for testing, which limited him to 1080p, and he doesn't see any stuttering in SoM at this resolution - http://www.guru3d.com/articles_pages/amd_radeon_r9_fury_x_review,31.html
At the same time the Hexus.net review used something else to capture frametimes (Fraps?) and what they have in 4K is clearly different - http://hexus.net/tech/reviews/graphics/84170-amd-radeon-r9-fury-x-4gb/?page=9

Here's another direct comparison of frametimes in 1080p and 4K in one place - http://www.hardwareluxx.com/index.p...5798-reviewed-amd-r9-fury-x-4gb.html?start=18 - the SoM results clearly show that Fury X is hitting its VRAM limit there in 4K.
The same can be seen in PCars and a couple of other games in this review.

How do you get VRAM limits affecting Frame times from those charts? Nowhere does it list VRAM usage. From that same review:
http://www.hardwareluxx.com/index.p...5798-reviewed-amd-r9-fury-x-4gb.html?start=19

Their conclusions differ from yours. Not to mention that variable VRAM caching is different from explicitly required VRAM usage; getting a number from Afterburner isn't even a solid gauge for what is really going on. There really is no solid distinction, and it's something reviewers should develop more methods of testing. After all, when working, SLI/Crossfire 4GB cards will perform better than a single 6GB 980 Ti or 12GB Titan X under the same settings.
 

matmanx1

Member

Digital Foundry was actually very positive in their review of the Fury X. They also noted that the upcoming Fury and Nano are probably going to be very exciting based on the tech in the Fury X. Based on the limited availability and timing I am thinking that AMD may not care all that much how many Fury X's they sell at $649 as I am sure they had initially wanted to charge more. It's a halo product and I guess some excitement and buzz are better than none at all.

I'm not giving them a pass on the driver issues (including the locked voltage and the frame timing) but they still have a chance to redeem themselves somewhat with the Fury and especially with the Nano.

I've chosen the 980Ti for now but I hope AMD gets their stuff together by the time the Nano hits.
 
Is AMD going to use this architecture the next time they move to a new process? From what I understand nVidia's Pascal could be launching within the next 6-8 months, and it seems like they would have a significant advantage over AMD with a new architecture, manufacturing process and HBM2.
 

Sinistral

Member
Is AMD going to use this architecture the next time they move to a new process? From what I understand nVidia's Pascal could be launching within the next 6-8 months, and it seems like they would have a significant advantage over AMD with a new architecture, manufacturing process and HBM2.

HBM2 and both the TSMC and GloFo 16nm processes are expected to ramp up mass production in 2H 2016. So unless Pascal is radically changed, nVidia is stuck behind this as much as AMD. Last I read, anyway.
 

tuxfool

Banned
Is AMD going to use this architecture the next time they move to a new process? From what I understand nVidia's Pascal could be launching within the next 6-8 months, and it seems like they would have a significant advantage over AMD with a new architecture, manufacturing process and HBM2.

From what I understand the AMD equivalent, the Arctic Islands (4xx) series, will be based on a new microarchitecture (GCN 2.0), also at 14/16nm and with HBM2. When those launch relative to Nvidia's is anyone's guess, though typically AMD launches first on node shrinks (or used to; this long-ass 28nm period has changed a lot of things).

HBM2 and both the TSMC and GloFo 16nm processes are expected to ramp up mass production in 2H 2016. So unless Pascal is radically changed, nVidia is stuck behind this as much as AMD. Last I read, anyway.

Got a source? I was under the impression that 16nm processes were already ready for mass production at the end of this year.
 

Nachtmaer

Member
Is AMD going to use this architecture the next time they move to a new process? From what I understand nVidia's Pascal could be launching within the next 6-8 months, and it seems like they would have a significant advantage over AMD with a new architecture, manufacturing process and HBM2.

Good question. GCN has held up relatively well considering its age. I'm not sure if they will go for something entirely new, but they could do a revision to fix current bottlenecks and whatnot. Maybe it'll be something like Maxwell, which wasn't as big a change from Kepler as Kepler was from Fermi.

Pascal could be similar. A new node, that NvLink thingy and HBM are already a lot of big changes. Adding another architecture overhaul would add even more risk.
 

Justinh

Member
Just saw something interesting on reddit (for once).

Three different revs of Catalyst 15.15 were released on the 15th, 17th, and 20th. There's speculation that variation between different sites' test results might have to do with which driver rev was used.

https://www.reddit.com/r/buildapc/comments/3b30bt/discussionfury_x_possibly_reviewed_with_incorrect/

FFS, why couldn't they name them differently to avoid confusion? Like at least 15.1515/15.1517/15.1520 or letters or something? Just seems...ugghh...

I very much doubt most websites do it.

Tom's Hardware says which are in-game and which are not, but they don't separate them at all. Anandtech doesn't even say which they are using unless you can glean it from the text, which is what you will find looking at their 980 Ti review (where they admit the frame times for GTAV came from the in-game benchmark but don't otherwise say they used it). Guru3d uses the in-game benchmarks and doesn't mention what they are using unless you can figure it out yourself. TechPowerUp is the same way.

I could go on like this. I wouldn't trust any review to contain something other than the in-game benchmark unless they explicitly say it is not, rather than the other way around.
I don't have any sources or anything, but I always thought that sites use custom runs for benchmarks, not benchmarking tools, unless specifically noted. I always got the notion of distrust for such in-game benchmarking tools.
 
FFS, why couldn't they name them differently to avoid confusion? Like at least 15.1515/15.1517/15.1520 or letters or something? Just seems...ugghh...


I don't have any sources or anything, but I always thought that sites use custom runs for benchmarks, not benchmarking tools, unless specifically noted. I always got the notion of distrust for such in-game benchmarking tools.

Take all these benchmarks with a grain of salt. If you don't see them doing custom runs or saying they are, and the game has in-game benchmarking, I'd always assume the latter. It's just far too easy to run the in-game benchmark and write down the results, and most people don't care that much since they assume the differences aren't that substantial between the benchmark and "real world" gameplay. Frankly, I think they are two different things: the benchmarks are the benchmarks and gameplay is a subjective thing. Some of the sites actually tell you if they are doing that, but most don't, and they certainly don't "separate" them like the person I originally replied to said.

Like I said, among the ones I listed, many are using in-game benchmarking and just not saying it. For example, the Anandtech review only mentions that the GTAV benchmark reports its own frame time results so they don't have to capture them, meaning they must be using the in-game benchmarking tool. Guru3d mentions that they think their SoM results might be high because the in-game benchmark runs high, so you know they are doing that, even though the graphs aren't labeled as such.

The person I originally replied to said you "have to play it for the benchmarking to happen," which is most definitely not true. At least some, and probably many, are using the in-game benchmarks. A few sites refuse to use them at all.
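(As an aside, this is also why manual runs get repeated 5+ times: a single run is noisy. A toy Python sketch of how repeated runs of the same section might be combined; purely illustrative, not any site's actual methodology.)

```python
def aggregate_runs(runs_fps):
    """Combine average FPS from repeated runs of the same game section.

    Returns the overall mean and the run-to-run spread; a large spread
    suggests the manual run isn't repeatable enough to trust.
    """
    means = [sum(run) / len(run) for run in runs_fps]  # avg FPS per run
    overall = sum(means) / len(means)
    spread = max(means) - min(means)
    return round(overall, 1), round(spread, 1)
```

With, say, three runs averaging 61, 59, and 62 FPS, you'd report ~60.7 FPS with a 3 FPS spread, which is tight enough to call repeatable.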
 

Sinistral

Member
Got a source? I was under the impression that 16nm processes were already ready for mass production at the end of this year.

While 16nm is starting mass production soon, it's for mobile and lower-power devices (mostly Apple, because they pay the most); the high-performance parts are apparently trickier.
http://wccftech.com/tsmcs-16nm-finfet-faces-delays-qualcomm-jumps-ship-samsung/

HBM2's timeline is 2Q 2016:
http://www.kitguru.net/components/g...irms-mass-production-of-first-gen-hbm-memory/

Anyway, my readings were from other forums discussing the matter. Would be amazing to see Pascal in 8 months.
 
While 16nm is starting mass production soon, it's for mobile and lower-power devices (mostly Apple, because they pay the most); the high-performance parts are apparently trickier.
http://wccftech.com/tsmcs-16nm-finfet-faces-delays-qualcomm-jumps-ship-samsung/

HBM2's timeline is 2Q 2016:
http://www.kitguru.net/components/g...irms-mass-production-of-first-gen-hbm-memory/

Anyway, my readings were from other forums discussing the matter. Would be amazing to see Pascal in 8 months.

I will bet my left bollock that Pascal is not coming in 8 months on HBM2's production timeline alone.
 