
AMD Radeon Fury X review thread

Gritesh

Member
Gsync sounds great, but I can't justify an expensive monitor with only one DP input, which the 1440p gsyncs seem to be limited to. And yet I also find it hard to buy a 1440p monitor *without* gsync because it feels like a waste. Someone make a nice 27-28" 1440p gsync monitor with DP, HDMI and DVI connections.


Asus is coming out with one that apparently has HDMI and DisplayPort, is IPS, 27", 144 Hz, G-Sync.

I haven't gone and picked up my stuff, but I might hold off on the monitor and wait for this Asus one to come out.
 

Chiggs

Member
This isn't a small loss for AMD. It's an outright flop. This card does nothing to make the masses switch from Nvidia the way its 9700 and 4870 did back in the day.

Next time around, they really need to stack this thing with more RAM and then sell it at a small loss.

The Nano card is pretty awesome, though.
 
Yep. And the ultimate point is that HardOCP doesn't do it because it takes too much effort. Everything they say is a rationalization of that point.

It's not because of lack of effort. I think HardOCP puts a lot more effort into their reviews than most, since they actually play each game a minimum of 30 minutes at different settings to arrive at their results. Everyone else just sits there and watches a benchmark run over and over.
 

oxidax

Member
Still no news from AMD regarding the voltage? :(
I really wanted this card. I'm still waiting for something that tells me that I should get it. Some hope..
 

spicy cho

Member
This isn't a small loss for AMD. It's an outright flop. This card does nothing to make the masses switch from Nvidia the way its 9700 and 4870 did back in the day.

Next time around, they really need to stack this thing with more RAM and then sell it at a small loss.

The Nano card is pretty awesome, though.
If there is a next time. You don't beat the competition by cutting your R&D budget. That said, I do hope AMD can compete and stay afloat, because cards are getting too expensive and historically their cards have been a better value.
 

Crisium

Member
It's not because of lack of effort. I think HardOCP puts a lot more effort into their reviews than most, since they actually play each game a minimum of 30 minutes at different settings to arrive at their results. Everyone else just sits there and watches a benchmark run over and over.

No, it is from a lack of effort that they do not include frame times. They said this in the Hardforum post on this very review. Not taking away from anything else you said, since it's true they have a more time consuming way of benchmarking. But because that takes up so much time they can't be bothered to do more than 5 games or FCAT.
 

tokkun

Member
It doesn't necessarily need to be seen to be believed. Assume that you are targeting playing a game at an average of 60 FPS. That equates to 1 frame per (1000 ms / 60), or roughly 1 frame per 16.7 ms.

Going to grab one of the graphs mkenyon posted...

[image: 99th-percentile frame time chart]


This essentially shows how quickly these cards pump out frames during 99% of the test scenario. More specifically, 99% of the time, we should not expect frametimes that are faster than the listed values.

Again, if we're targeting an average of 60 FPS, we want frametimes to be a close to 16.7 ms as possible. In this case, the Fury X is taking more than twice the time needed to display two frames to simply display one frame, 99% of the time.

As some of the reviewers pointed out, this results in a subjectively inferior experience whereas the 980 Ti provides smoother gameplay, regardless of the (minor) difference in average framerate.

If I have completely misunderstood 99th percentile frame times, please let me know. I don't want to cause any confusion if that is the case.

Yeah, you have misunderstood the meaning of 99th percentile. What it means is that 1% of all frames take that long or longer, not 99%.
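
To make the definition concrete, here's a minimal sketch in Python (my own toy numbers and helper, not any review site's actual code):

```python
# A minimal sketch of what a 99th-percentile frame time means: the value that
# 99% of frames beat, i.e. the slowest 1% of frames take this long or longer.
def percentile_99(frame_times_ms):
    ordered = sorted(frame_times_ms)
    index = int(0.99 * len(ordered))              # first frame inside the slowest 1%
    return ordered[min(index, len(ordered) - 1)]

# A 60 FPS target means ~16.7 ms per frame (1000 ms / 60).
frames = [16.7] * 99 + [40.0]                     # 99 smooth frames, one 40 ms hitch
print(percentile_99(frames))                      # 40.0: the slowest 1% are this slow or worse
```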
 

Patchy

Banned
Fury X has done goofed with only 4GB RAM, I am now going to a 6GB solution via the 980Ti and I HATE the way Nvidia does business.

A 980ti will last me twice as long as a Fury X.

Really sad about this.
 

x3sphere

Member
I'm fairly certain the OC gains you referred to in those graphs were indeed tested with voltage unlocked; graphs typically show the top overclock available and don't limit themselves to stock voltage.

Well regardless, most people are getting 1350-1400 MHz on the 980 Ti at stock volts, and the standard boost clock is 1075 MHz. That's a 25-30% clockspeed increase - the Fury is only seeing a ~10% boost in the reviews I've read.

Perhaps adding volts could change things drastically, we will see. But you don't need to add volts to get a good OC on Maxwell.
 

Crisium

Member
Yeah, you have misunderstood the meaning of 99th percentile. What it means is that 1% of all frames take that long or longer, not 99%.

Really? I always interpreted it the other way around.

But it makes sense now how these results look so much worse than even PCPer themselves had. You can see that the Fury X has slightly worse frametimes in the PCPer review, but nothing nearly as bad as the ratio from techreport.

Same with Guru 3D.

[image: Guru3D Fury X frametime graph over a ~55 second run]

http://www.guru3d.com/articles_pages/amd_radeon_r9_fury_x_review,28.html

on this ~55 second run the graphics card manages to remain roughly below 24 ms; as you can see, there are no stutters recorded. This is spot on perfect rendering (frametime wise lower is better).

In-between frames there is a differential of say 1 ms, other than that, close to perfection is what this is.

That would mean the 99th percentile is basically those big spikes we see between the 40 and 50 second marks in the above image. So techreport takes the worst of the worst? There's value in that, certainly, but that doesn't seem typical of what the end user experiences for most of a play session and seems misleading...
 

tokkun

Member
Really? I always interpreted it the other way around.

You're probably more familiar with percentiles being used to describe test results in school, where 99th percentile means you did as well as or better than 99% of students.

In computing systems, when we talk about 99th percentile latencies, we are talking about results that are the same or worse than 99% of other events.

Think about it this way: If 99% of the frames were slow and 1% were fast, then we would see the effect in the average frame rate. However, when 99% of frames are fast and 1% are slow, it is possible for those 1% to not be evident when only looking at the average. That is why frame latencies need to be considered separately from traditional measurements of average frame rate.

Although you might think that means that 1% of frames being slow isn't a big deal, remember that if you are running a game at 60 fps, you are going to see a slow frame every 2 seconds (assuming they are uniformly distributed; often they aren't which is why it's good to graph the frame latencies over time). If you see a microstutter every few seconds, it will be annoying, even if your overall framerate average is good.
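
A rough illustration of that point, with made-up numbers rather than anything from the reviews: one slow frame out of every hundred barely moves the average frame rate, but it sets the 99th-percentile frame time.

```python
# Made-up numbers, just to illustrate: 1% of frames stutter at 60 ms while the rest
# hit the 16.7 ms target. The average FPS still looks fine; the percentile does not.
smooth = [16.7] * 99                   # ~1.7 seconds' worth of 60 FPS frames
hitch = [60.0]                         # one 60 ms stutter frame (1% of frames)
frames = (smooth + hitch) * 36         # roughly a minute of gameplay

avg_fps = 1000.0 / (sum(frames) / len(frames))
p99 = sorted(frames)[int(0.99 * len(frames))]

print(round(avg_fps, 1))               # ~58.4 FPS: the average looks nearly flawless
print(p99)                             # 60.0 ms: the percentile exposes the recurring stutter
```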
 

Crisium

Member
Makes sense. But is the 99th percentile still based on averages? Say once every 15 seconds you get an utterly massive spike. It would only last a small moment and only happens every 15 seconds on average. But wouldn't it raise the 99th percentile even though it's a tiny fraction, way less than 1% of the frames?
 
No, it is from a lack of effort that they do not include frame times. They said this in the Hardforum post on this very review. Not taking away from anything else you said, since it's true they have a more time consuming way of benchmarking. But because that takes up so much time they can't be bothered to do more than 5 games or FCAT.

This is silly. They also commented that their videocard reviews literally always come down to the wire and they test all the games it is realistic for them to in the few days they have before NDA lifts. Brent also mentioned he always pulls all-nighters for his reviews. To say there is a lack of effort or they can't be bothered to do this or that is quite frankly insulting and disrespectful to the time and effort they put into their reviews.

There are already 2 other sites which do the FCAT and they give very detailed charts and graphs of many games. If FCAT is what you love, go read TPU and PCPer and pretend there isn't HardOCP. I greatly value HardOCP because they are literally the only site who don't just run a bunch of benchmarks, make graphs, and call it a day. They actually sit down and extensively play the games they are testing, which is incidentally why they can't test more than 5 because that's all they could fit in before the deadline.

There will always be a place for objective scientific measurements like FCAT but HardOCP are the only guys doing the heavy lifting and going in there and experiencing what it's like to actually play games on the cards they test and they provide interesting subjective feedback on what they are seeing with their eyes. So no, I don't want HardOCP to become like the other 28 sites who run canned benchmarks, prepare pretty bar charts, and talk about how pretty their charts are. There are already 28 other sites which do this. HardOCP are the only site which plays the actual game and tells us how it is, and I would like them to keep it that way.

P.S. I read reviews from multiple websites anyways.
 

tokkun

Member
Makes sense. But is the 99th percentile still based on averages? Say once every 15 seconds you get an utterly massive spike. It would only last a small moment and only happens every 15 seconds on average. But wouldn't it raise the 99th percentile even though it's a tiny fraction, way less than 1% of the frames?

By definition, the 99% latency value only tells us that 99% of frames were as good or better and that 1% were worse. It does not tell us how much worse. You could take that periodic spike and make it 100X worse, and it would not change the 99th percentile value.
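
A tiny numeric sketch of that (again, invented numbers): a huge spike once every ~15 seconds is far less than 1% of frames, so the 99th-percentile value doesn't even register it, and making the spike 100x worse doesn't change the number at all.

```python
# Invented numbers: at 60 FPS, one spike per ~15 seconds is about 0.1% of frames,
# so it sits beyond the 99th percentile and its size never shows up in that value.
p99 = lambda xs: sorted(xs)[int(0.99 * len(xs))]

per_15s   = [16.7] * 899 + [100.0]       # ~15 s of smooth frames plus one big spike
far_worse = [16.7] * 899 + [10000.0]     # same trace with the spike made 100x worse

print(p99(per_15s * 4))                  # 16.7
print(p99(far_worse * 4))                # 16.7, identical despite the far worse spike
```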
 

Crisium

Member
That'd explain why no memory overclocking. Hopefully they unlock voltage and we can get higher shader/TMU clocks than right now. 390X mainly has gains over 290X because of memory bandwidth though, so this could mean Fury X also craves bandwidth and no OC could spell trouble.
 
Well regardless, most people are getting 1350-1400 MHz on the 980 Ti at stock volts, and the standard boost clock is 1075 MHz. That's a 25-30% clockspeed increase - the Fury is only seeing a ~10% boost in the reviews I've read.

Perhaps adding volts could change things drastically, we will see. But you don't need to add volts to get a good OC on Maxwell.

It depends on how conservative AMD was with stock voltage; I believe it will be unlocked pretty soon and then we'll know. Nvidia cards have clocked higher on stock voltage for the last few generations anyway, so it's not surprising.
 

Chiggs

Member
If there is a next time. You don't beat the competition by cutting your R&D budget. That said, I do hope AMD can compete and stay afloat, because cards are getting too expensive and historically their cards have been a better value.

I agree wholeheartedly. I'm not a huge Nvidia fan, but they've got the best products out there.
 
I'm disappointed that the Fury X is literally the same price as the 980Ti down under (AUS) barring 1-2 non-reference models that are $20-$30 higher.

I'm not even remotely knowledgeable when it comes to specs or components etc, but does the cooling / noise factor into the equation here, despite the lesser performance of the Fury X Vs. 980 Ti?

I'm currently using 680 SLI (2GB) and looking to upgrade. My current cards run so insanely loud and hot that it's borderline torture in the summer. Should I bother upgrading now, or just wait? Performance-wise my current GPUs are solid/okay for now (barring the VRAM). I intend for my next upgrade to last me the remainder of this console generation.
 

Durante

Member
Good to see people explaining frametime percentiles in here.

This is so bogus, because it leaves so much up to subjectivity. FCAT and Frame Time testing shows this objectively. You have it backwards.
Yeah, I can't say that I'm a huge fan of HardOCP's testing methodology.
 

dr_rus

Member
Well, it seems even Shadow of Mordor is only using 3.8 GB at 4K on the Fury.

http://www.extremetech.com/gaming/2...erformance-power-consumption-and-4k-scaling/2

If the spikes are caused by memory management issues, how do we know AMD can't improve things in a future driver? According to AMD, up until now they have done very little work on improving their memory optimisation. So, I think there is still some hope that things will get better.

How much RAM do you expect it to use? It's basically using all the RAM there is - the 0.2 GB difference is negligible and can be attributed to driver allocation issues.

You can't expand the RAM with the driver. SoM is one game which really suffers from 4 GB at anything higher than 2560x1440. It works, but the bus traffic is heavy and leads to stuttering and slowdowns. Not much anyone can do about this besides increasing the onboard RAM size.
 

RE4PRR

Member
Good to see people explaining frametime percentiles in here.

Yeah, I can't say that I'm a huge fan of HardOCP's testing methodology.

What? Of actually playing the game for 30 minutes instead of just running built-in game benchmarks? So what if they leave out FCAT; others do it anyway.
 

Sanjay

Member
Is there any additional information about this other than your link?

It would be nice to know more, because a DP -> single link DVI adapter is a neat extra, but really just ~ €10 if you buy it (and readily available). Active DP -> dual link DVI adapters are a lot more rare and expensive.

I have no idea yet as this was just posted now.

What is the difference between the two and why is the other type more expensive?
 

Durante

Member
What? Of actually playing the game for 30 minutes instead of just running built-in game benchmarks? So what if they leave out FCAT; others do it anyway.
No, reporting subjective feelings rather than objective data. I'm more of a science/empirical evidence guy.

(Not that it matters in this particular case, since the subjective impressions and objective data line up)

I have no idea yet as this was just posted now.

What is the difference between the two and why is the other type more expensive?
Single-link has a limit of 1920×1200@60 Hz; dual-link goes up to 2560×1600. Therefore, for people with 2560×1440 Korean monitors that only have DVI, dual-link is pretty important.

As for why they are more expensive, I'm not an expert but I assume they are simply more complex to build.
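
For anyone curious, a rough back-of-the-envelope check of why those limits fall where they do (the blanking totals below are approximate reduced-blanking assumptions, not exact monitor timings): single-link DVI tops out at a ~165 MHz pixel clock, and dual-link roughly doubles that.

```python
# Rough sketch with approximate CVT reduced-blanking totals (assumptions, not exact timings).
SINGLE_LINK_MHZ = 165.0       # single-link DVI TMDS pixel clock ceiling
DUAL_LINK_MHZ = 330.0         # two links in parallel

def pixel_clock_mhz(h_total, v_total, refresh_hz):
    # Total pixels per frame (active + blanking) times refresh rate.
    return h_total * v_total * refresh_hz / 1e6

def fits(pixel_clock, link_mhz):
    return pixel_clock <= link_mhz

clk_1200p = pixel_clock_mhz(2080, 1235, 60)   # ~154 MHz for 1920x1200@60
clk_1440p = pixel_clock_mhz(2720, 1481, 60)   # ~242 MHz for 2560x1440@60

print(fits(clk_1200p, SINGLE_LINK_MHZ))       # True: single-link is enough
print(fits(clk_1440p, SINGLE_LINK_MHZ))       # False: exceeds single-link
print(fits(clk_1440p, DUAL_LINK_MHZ))         # True: hence dual-link adapters
```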
 

FireFly

Member
How much RAM do you expect it to use? It's basically using all the RAM there is - the 0.2 GB difference is negligible and can be attributed to driver allocation issues.

You can't expand the RAM with the driver. SoM is one game which really suffers from 4 GB at anything higher than 2560x1440. It works, but the bus traffic is heavy and leads to stuttering and slowdowns. Not much anyone can do about this besides increasing the onboard RAM size.
Well, the Fury X is using 4GB at higher resolutions.

You can't expand RAM with the driver, but you can make sure that what is being stored is what is most used. From the quotes I read, AMD haven't cared about managing this before, and large chunks of what is stored are barely ever accessed, so they're just wasting space.
 

wildfire

Banned
They don't offer the same data comparisons at all. That's the point :p

They do. That's the whole point of the apples-to-apples sections of their reviews. They don't show off 6-12 GPUs in those charts, but their results with just 3 are sufficient since their testing method doesn't change much across GPU products in adjacent pricing tiers.


This isn't a small loss for AMD. It's an outright flop. This card does nothing to make the masses switch from Nvidia the way its 9700 and 4870 did back in the day.

Next time around, they really need to stack this thing with more RAM and then sell it at a small loss.

Out of all the compromises they made, for the time being, RAM was the least important problem. Their problem is that the way they think about targeting their audience is totally wrong. Anyone with actual business sense could tell before this product shipped that people would react negatively on multiple fronts. Their marketing team declared a month ago that they don't want to be known for making second-tier products, but the people outside of the marketing department actually do want to make and sell us exactly that.
 
Fury X has done goofed with only 4GB RAM, I am now going to a 6GB solution via the 980Ti and I HATE the way Nvidia does business.

A 980ti will last me twice as long as a Fury X.

Really sad about this.

Let's see what happens six months from now when the Fury X gets new drivers and support for the 980 Ti ceases.
 

pj

Banned
Let's see what happens six months from now when the Fury X gets new drivers and support for the 980 Ti ceases.

It'll MAYBE slowly close the gap until they're about even in 18 months? Terrific.

I put off my build 2 months waiting for this card because I didn't want to support nvidia. The Fury X is a huge disappointment to me and it's ridiculous to pray that it will maybe someday get better with drivers or dx12 games that are years away.
 

riflen

Member
What are you talking about?

He's alluding to the assertions flying around recently that Nvidia have neglected Kepler GPUs since Maxwell 2 was released. It's all bollocks, of course.
Nvidia seem to be a popular whipping boy on this forum of late. They're also apparently responsible for the poor quality of the latest Arkham game, if you believe some people.
 

KingSnake

The Birthday Skeleton
He's alluding to the assertions flying around recently that Nvidia have neglected Kepler GPUs since Maxwell 2 was released. It's all bollocks, of course.
Nvidia seem to be a popular whipping boy on this forum of late. They're also apparently responsible for the poor quality of the latest Arkham game, if you believe some people.

That's bollocks. My 780 is still well supported with all the new games (the ones that are not broken, of course).
 

Durante

Member
He's alluding to the assertions flying around recently that Nvidia have neglected Kepler GPUs since Maxwell 2 was released. It's all bollocks, of course.
Nvidia seem to be a popular whipping boy on this forum of late. They're also apparently responsible for the poor quality of the latest Arkham game, if you believe some people.
It's not really "of late"; it's since they had the temerity to note the higher performance of PCs around the console release, when console fanboyism was at its most ferocious. Since then, they've been known as "salty" for missing out on the incredibly lucrative (not) console hardware supplier market. (This stopped completely derailing every single NV thread only after moderator intervention.)

I think the fact that they are associated with high-end PC gaming also automatically puts them on some posters' shitlists.
 

mrklaw

MrArseFace
PC Games Hardware had a similar finding:
[image: PC Games Hardware thermal shot of the Fiji cooler at full load (~380 W)]


I think this could be rather easily alleviated though. The cooler can clearly handle more heat, but they need to transport it off the VRMs better.

When you say 'easily' alleviated, you mean by removing the stock cooler and replacing it with a custom one? Sounds quite advanced when the stock cooler was supposed to have 'huge headroom for overclocking', as they said on stage at the reveal.
 

NeOak

Member
PC Games Hardware had a similar finding:
[image: PC Games Hardware thermal shot of the Fiji cooler at full load (~380 W)]


I think this could be rather easily alleviated though. The cooler can clearly handle more heat, but they need to transport it off the VRMs better.

The fuck? 100 degrees Celsius with water? Wat.
 
This is beyond disappointing. Literally near Bulldozer levels for me, tbh. I am normally an AMD supporter as much as I can be. I don't have too many issues with their drivers, and when there is an issue I don't mind spending a little time to find a workaround.

But god damn did they mislead us here. The benchmarks before launch showed it beating the Ti at stock, and it was supposed to be "an overclocker's dream". Meanwhile the VRMs are sitting at 100°C under load.

Seriously, they fucked this launch up, and now we see why the NDA ended on release day. Yesterday I bought the MSI 6G 980 Ti, when I had every intention of getting a Fury X. If they could have released the Fury X with some decent overclocking room at $550 it would have been an awesome price proposition: beats the 980 handily, and could overclock to around stock Ti levels.
 
PC Games Hardware had a similar finding:
[image: PC Games Hardware thermal shot of the Fiji cooler at full load (~380 W)]


I think this could be rather easily alleviated though. The cooler can clearly handle more heat, but they need to transport it off the VRMs better.

This is the one reason keeping me from buying the Fury X.

For those with more GPU knowledge: will this be a bad thing in the long run, assuming the card is not overclocked? I'm just worried the card will fry after a few months.

I have no idea how high VRM temps can go.
 

matmanx1

Member
The last 24 hours have been interesting. I woke up yesterday morning with the intention of buying a Fury X and was refreshing several different web pages in a vain attempt to locate some stock at 8am EST. Of course I struck out but went on and "pre-ordered" the Gigabyte version from Amazon with the expectation that I would give it a few days or a week and see how stock levels were when I returned from vacation next week.

But as the day went on and I read more reviews and thought about the whole situation I began to seriously question my decision. As keen as I am on AMD being a valid competitor to Nvidia and as much as I like the idea of a small'ish, powerful water-cooled HBM equipped GPU, at $650+ why am I not buying the single best performing card for the money? It just doesn't make sense, especially not knowing how long it is going to take Amazon and Gigabyte to actually fulfill my order.

So this morning I ordered an EVGA 980Ti SC+ from Newegg, who has them in stock and can ship immediately so that I can get on with my PC upgrade.
It was a decision made solely by my brain as my heart really wanted the Fury X. But there's no point in paying that kind of money for a slightly worse experience. That would just defeat the purpose of why I game on the PC to start with.
 
Dunno, drivers, unlocked voltage for overclocking (this is certainly happening soon), and gaming with the Fury X on Windows 10 may see AMD's new flagship power right past the 980 Ti eventually.

But that's a lot of hoops to jump through and may be a lot of waiting. Right now a good non-reference 980 Ti is the better choice.
 

Durante

Member
When you say 'easily' alleviated, you mean by removing the stock cooler and replacing it with a custom one? Sounds quite advanced when the stock cooler was supposed to have 'huge headroom for overclocking', as they said on stage at the reveal.
No, I meant more by slightly modifying the stock cooler (in a "v2" or so) to improve the heat transport off the VRMs.

This is the one reason keeping me from buying the Fury X.

For those with more GPU knowledge: will this be a bad thing in the long run, assuming the card is not overclocked? I'm just worried the card will fry after a few months.

I have no idea how high VRM temps can go.
There are VRMs which are validated for continuous operation at these temperatures. I'm sure it will be fine at stock clocks.
 
Dunno, drivers, unlocked voltage for overclocking (this is certainly happening soon), and gaming with the Fury X on Windows 10 may see AMD's new flagship power right past the 980 Ti eventually.

But that's a lot of hoops to jump through and may be a lot of waiting. Right now a good non-reference 980 Ti is the better choice.

Why do people keep acting like Windows 10 is going to do anything for the Fury X that it won't do for the 980 Ti? They're both DX12 cards.
 