AMD Radeon Fury X review thread

Now that the dust has settled a little bit, I think too many of you are being too harsh on this card. It's more or less as fast as a Titan X in 4K res. Not too shabby imo.

Obviously the 1080p performance looks bad comparatively, but consider this:

- The drivers used for the reviews are likely not the best as per usual
- The voltage is locked, so you aren't going to get good overclocks when the core voltage is on the limit (the minimum) of what you can feed the GPU. Apparently this is so AMD could keep power draw and temp down. This explains the paltry overclocking headroom. I am certain this card will offer at least 15-20% core clock overclocks with more voltage.
- On the overclocking point, this card runs cooler than any other top GPU out there. Again, not too shabby.

 

Durante

Member
Now that the dust has settled a little bit, I think too many of you are being too harsh on this card. It's more or less as fast as a Titan X in 4K res. Not too shabby imo.
It's like a Titan X at 4k if you look at FPS. It's closer to a 980 if you look at frametimes.

It's sad really that even years later there are still just a few sites who have moved over to frametime testing.
 

mephixto

Banned
It's like a Titan X at 4k if you look at FPS. It's closer to a 980 if you look at frametimes.

It's sad really that even years later there are still just a few sites who have moved over to frametime testing.

Well, it's Tom's Hardware... they were something ages ago, now it looks like the IGN of hardware sites.
 

Sorral

Member
It's like a Titan X at 4k if you look at FPS. It's closer to a 980 if you look at frametimes.

It's sad really that even years later there are still just a few sites who have moved over to frametime testing.

I was going to get it until I saw the frame time charts... Granted Tom's Hardware's frame time charts aren't as bad as the ones Mk posted earlier, but they were still not that good. Do you think there is any hope that AMD will improve them this year?

Maybe I'm too paranoid, but I just don't want to grab a 980/980 Ti that might end up like Kepler is now once Pascal releases next year... even though I actually need an upgrade now. :/

Well, it's Tom's Hardware... they were something ages ago, now it looks like the IGN of hardware sites.

They did frame time tests. Click the third and fourth picture to see them.
 
It's like a Titan X at 4k if you look at FPS. It's closer to a 980 if you look at frametimes.

It's sad really that even years later there are still just a few sites who have moved over to frametime testing.

The thing is, I'm not even sure what frametimes are. Are you sure it makes this much of a difference? Honest question, as it's peculiar that something that is such a big deal apparently isn't even covered in most of the reviews.

In the review that I saw that did mention this, it said AMD will likely fix these issues mostly with drivers.
 
The thing is, I'm not even sure what frametimes are. Are you sure it makes this much of a difference? Honest question, as it's peculiar that something that is such a big deal apparently isn't even covered in most of the reviews.

In the review that I saw that did mention this, it said AMD will likely fix these issues mostly with drivers.

Frametimes are the individual time for each frame to be rendered. They are important because you could have a normal 59 fps on average, but one (or more) of those frames within that second could be taking 40 or 60 ms to be displayed. This would look like the game stuttering.

Measuring frametimes (which should be a steady and similar number) is more important than measuring average framerate.
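To make that concrete, here is a minimal sketch (the numbers are made up for illustration, not taken from any review tool) of how an average-FPS figure can look healthy while individual frames spike:

Code:
# Illustrative numbers only (nothing from the reviews): how a healthy-looking
# average FPS can hide individual slow frames that read as stutter.

frame_times_ms = [15.0] * 56 + [40.0, 60.0, 60.0]   # 59 frames rendered in exactly one second

avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)
worst_ms = max(frame_times_ms)

print("average FPS :", round(avg_fps))   # 59 -- looks smooth on a bar chart
print("worst frame :", worst_ms, "ms")   # 60 ms (almost four missed 16.7 ms slots) -- a visible hitch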
 

dr_rus

Member
The thing is, I'm not even sure what frametimes are. Are you sure it makes this much of a difference? Honest question, as it's peculiar that something that is such a big deal apparently isn't even covered in most of the reviews.

In the review that I saw that did mention this, it said AMD will likely fix these issues mostly with drivers.

Worse frametimes mean (micro)stuttering. AMD is unlikely to fix this at 4K, as Fury's frametime issues come from the need to fetch data over PCIe because 4 GB of RAM is not enough for 4K even today.
 
Thanks for the explanations.

This is something I will have to see with my own eyes to really understand how much of a big deal it is. But then again, we're comparing metrics like one card offering 84 fps whilst the other offers 87 fps even though it is literally impossible to notice the difference between 84 fps and 87 :/
 

mkenyon

Banned
Frametimes are the individual time for each frame to be rendered. They are important because you could have a normal 59 fps on average, but one (or more) of those frames within that second could be taking 40 or 60 ms to be displayed. This would look like the game stuttering.

Measuring frametimes (which should be a steady and similar number) is more important than measuring average framerate.

VIRTUAL_FIST_BUMP.png


It makes me so happy to see so many people on board with this on GAF.
 

Saintruski

Unconfirmed Member
Looks like they still haven't really done anything to improve their tessellation... which, believe it or not, is more important than compute power

amd-radeon-r9-fury-x-tessmark.jpg
 

Jzero

Member
Damn it, I was thinking of getting the regular Fury when that came out and pairing it with an ultrawide FreeSync monitor.
 

FLAguy954

Junior Member
Yeah, I feel like this is the GPU equivalent of the Bulldozer release. I was so hyped for it.

You (and others like HardOCP) say this, but this is nothing like the Bulldozer release; that is a huge exaggeration.

Bulldozer was literally equal to or worse than the previous generation AMD CPU and brought nothing new to the table save for a few AMD-specific instructions. Since it did not improve over the previous generation, that also meant it failed to close the growing gap versus Intel CPUs.

The Fury X, on the other hand, is equipped with next-gen HBM and is, on average, 30-50% more powerful than AMD's previous flagship GPU. All of this while providing massive gains in efficiency (only drawing an average of 20 more watts than the 980 Ti) and actually closing the performance gap further against Nvidia's 980 Ti (where it is on average 10-15% slower).

The only fuck-up I see is the pricing, as it is not worth $650. $600 is good and $550 is better. I hope that the normal Fury is just the Fury X on air, because it would be the perfect price point for a Fury product @ $550 imo.
 
It makes me so happy to see so many people on board with this on GAF.
A GPU with higher power, high efficiency, steady frametimes, and a decent price point, coupled with a 144 Hz ultrawide FreeSync/ULMB 3440x1440 monitor, is what I would consider gaming nirvana. I can't wait to get my hands on something like this.
 

Elman

Member
Thanks for the explanations.

This is something I will have to see with my own eyes to really understand how much of a big deal it is. But then again, we're comparing metrics like one card offering 84 fps whilst the other offers 87 fps even though it is literally impossible to notice the difference between 84 fps and 87 :/

It doesn't necessarily need to be seen to be believed. Assume that you are targeting an average of 60 FPS. That equates to one frame per 1000 ms / 60, or roughly one frame every 16.7 ms.

Going to grab one of the graphs mkenyon posted...

w3-99th.gif


This essentially shows how quickly these cards pump out frames during 99% of the test scenario. More specifically, 99% of the time, we should not expect frametimes that are faster than the listed values.

Again, if we're targeting an average of 60 FPS, we want frametimes to be as close to 16.7 ms as possible. In this case, the Fury X is taking more than twice the time needed to display a single frame, 99% of the time.

As some of the reviewers pointed out, this results in a subjectively inferior experience whereas the 980 Ti provides smoother gameplay, regardless of the (minor) difference in average framerate.

If I have completely misunderstood 99th percentile frame times, please let me know. I don't want to cause any confusion if that is the case.


EDIT:

Yeah, you have misunderstood the meaning of 99th percentile. What it means is that 1% of all frames take that long or longer, not 99%.

Now I'm confused. Going by what I've read online, it seems to fit mkenyon's definition in the Frame Latency thread:

99th Percentile - This shows in milliseconds how long one can expect 99% of all frames to be rendered. This is more or less a more accurate assessment of 'Average FPS'.
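For what it's worth, both wordings point at the same computation; here is a small sketch (hypothetical frame-time data, simple nearest-rank percentile) of how a 99th-percentile frame time is typically produced and how to read it:

Code:
# Hypothetical data, nearest-rank percentile (review sites vary in the exact method):
# sort every frame time and take the value that 99% of frames fall at or below.
# Read it as "99% of frames finished in p99 ms or less; the slowest 1% took longer."

def percentile_ms(frame_times_ms, pct):
    ordered = sorted(frame_times_ms)
    rank = max(0, round(pct / 100.0 * len(ordered)) - 1)
    return ordered[rank]

frame_times_ms = [16.7] * 985 + [35.0] * 15   # mostly smooth, with a ~1.5% tail of slow frames

print(percentile_ms(frame_times_ms, 99))      # 35.0 -- that slow tail is exactly what the metric exposes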
 

tuxfool

Banned
Looks like they still haven't really done anything to improve their tessellation... which, believe it or not, is more important than compute power

amd-radeon-r9-fury-x-tessmark.jpg

Mhmm. I'm not entirely certain extreme tessellation factors are that useful yet; they don't see that widespread usage, but who knows, they might be in the future.
 

Gritesh

Member
Bah tiger direct just contacted me to let me know it was a pricing error and they are not honoring it now.

I cancelled it and my local store just got in the evga 980ti sc+ so I bought that and the new acer ips gsync monitor.

Sorry AMD I really wanted to go with you but I just couldn't do it...
 

endtropy

Neo Member
Mhmm. I'm not entirely certain extreme tessellation factors are that useful yet; they don't see that widespread usage, but who knows, they might be in the future.

It's something I would consider "important". As the power divide between PCs and consoles this generation grows, tessellation is one area where PCs can see a large improvement while requiring minimal resources. I.e. a developer can create high-quality textures with depth information that run on consoles and PCs, but on PCs also get the added deformation and complexity that tessellation can provide, again all with minimal additional work required. Tessellation is really one of those "breakthrough" technologies IMO, right up there with the non-fixed graphics pipeline and geometry acceleration before that. I don't think it's playing second fiddle because AMD doesn't think it's important; I think it got second fiddle because this version of GCN barely moved the needle. I'd be shocked if the next iteration of Fury didn't see a tremendous improvement in the tessellation engine.
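As a toy illustration of that point (CPU-side and conceptual only; the real tessellator is a fixed-function GPU stage and works nothing like this), one coarse authored triangle can be amplified into hundreds of triangles with no extra art work:

Code:
# Concept-only toy example (not how GPU tessellation hardware is implemented):
# recursively split a triangle at its edge midpoints to amplify geometry.

def midpoint(a, b):
    return tuple((ai + bi) / 2.0 for ai, bi in zip(a, b))

def subdivide(tri, levels):
    """Split one triangle into 4 per level; 1 triangle -> 4**levels triangles."""
    if levels == 0:
        return [tri]
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    out = []
    for child in [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]:
        out.extend(subdivide(child, levels - 1))
    return out

coarse = ((0, 0, 0), (1, 0, 0), (0, 1, 0))   # one authored triangle
dense = subdivide(coarse, 4)                  # four levels of subdivision
print("1 ->", len(dense), "triangles")        # 1 -> 256; displacement would then move the new vertices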
 
Bah tiger direct just contacted me to let me know it was a pricing error and they are not honoring it now.

I cancelled it and my local store just got in the evga 980ti sc+ so I bought that and the new acer ips gsync monitor.

Sorry AMD I really wanted to go with you but I just couldn't do it...

Thanks for the running commentary of what's been happening with you and your purchases today. Lol. Another small loss for AMD.
 

Rival

Gold Member
It seems to be a pretty good card, but I can't help but feel as though they missed an opportunity to hit a real home run on this.
 

Ionic

Member
Bah tiger direct just contacted me to let me know it was a pricing error and they are not honoring it now.

I cancelled it and my local store just got in the evga 980ti sc+ so I bought that and the new acer ips gsync monitor.

Sorry AMD I really wanted to go with you but I just couldn't do it...

I think I'm about ready to do the same thing. I even tried to buy a 290x about 6 months ago, but that order was cancelled because they ran out of stock and couldn't ship mine. So I decided to wait until AMD's new cards and it just happens again. Somebody upstairs is just telling me to switch to Nvidia. I really, really don't like the idea of shelling out for a stupid Gsync monitor over a Freesync one though.
 
Now that the dust has settled a little bit, I think too many of you are being too harsh on this card. It's more or less as fast as a Titan X in 4K res. Not too shabby imo.

Obviously the 1080p performance looks bad comparatively, but consider this:

- The drivers used for the reviews are likely not the best as per usual
- The voltage is locked, so you aren't going to get good overclocks when the core voltage is on the limit (the minimum) of what you can feed the GPU. Apparently this is so AMD could keep power draw and temp down. This explains the paltry overclocking headroom. I am certain this card will offer at least 15-20% core clock overclocks with more voltage.
- On the overclocking point, this card runs cooler than any other top GPU out there. Again, not too shabby.


That Tom's Hardware review is the only outlier which shows the Fury X performing better. And if you look at their "methodology" it's easy to see why: they reuse results from previous reviews. They aren't even comparing apples to apples because drivers are not the same on every card shown.

Tech Report, PCPer, TPU all benchmarked a lot of games using the same machine and the latest drivers on all cards and they did not find such results.

Of course HardOCP are the ones who actually play through games as part of their review, and they found the experience subjectively inferior on the Fury X. Ultimately we are playing games, not bar charts, and HardOCP was scathing about how they felt games played on the Fury X compared to the 980 Ti. You don't need fancy FCAT charts to know when frametimes suck; you can FEEL it while gaming, and HardOCP commented on it, as did other sites including Tom's. Of course PCPer and TPU did FCAT, which shows objectively what HardOCP felt.

For $650, the 980 Ti is the better product hands down. Too bad AMD didn't launch this thing at $550 and kill the 980 instead taking a futile swing at the 980 Ti and missing.
 

Gritesh

Member
I think I'm about ready to do the same thing. I even tried to buy a 290x about 6 months ago, but that order was cancelled because they ran out of stock and couldn't ship mine. So I decided to wait until AMD's new cards and it just happens again. Somebody upstairs is just telling me to switch to Nvidia. I really, really don't like the idea of shelling out for a stupid Gsync monitor over a Freesync one though.


Me neither, bro, but when you dive into it, G-Sync really does some things that FreeSync can't do, i.e. sync all the way up to 144 Hz, and below 30 Hz it still does some magic. Plus, with the bonuses of using Nvidia, the cost offset is worth it.
It was a good deal to get the Fury X at $549.99, but not at $679.99; it comes in too close to the 980 Ti.
 

Ionic

Member
Me neither, bro, but when you dive into it, G-Sync really does some things that FreeSync can't do, i.e. sync all the way up to 144 Hz, and below 30 Hz it still does some magic.

The BenQ Freesync monitor I was looking at went from 40Hz-144Hz ): And I still have a niggling objection to all the PhysX stuff. The last time I used it was in Cryostasis with my 8800GT. It's cool but I'm still not a big fan of Nvidia's emphasis on proprietary middleware, but if I get a screamin' Nvidia card I'll obviously use it and fuel that stuff. I think I just need to not think about it and enjoy a new card finally. AMD can always recapture me a few generations down the line.
 
Quick glances at a few reviews seem to peg the Fury X as a flop

Hyperbole much? Lol, Jesus, this kind of response is silly. Aside from the HardOCP review that reads like an nVidia fanboy riposte inserting a ton of subjective opinion, the majority of reviews are even-keeled & seem to indicate the Fury X is a nice step in the right direction for AMD as well as reaching general price/performance parity with the 980 Ti while being smaller, quieter, and just as if not more efficient.

AMD probably should have come in at $599 (or maybe $550 in an effort to beat the 980) but I think we can expect AIBs to make that move with rebates at some point soon once the supply/demand curve settles in.

On a side note, market share is won at far below the $649 price point. The Fury X is a great flagship product & proof of concept that'll be extended/cut down to the entire Arctic Islands product stack in 2016-2017. But as far as "winning the consumer mindshare battle" in the here & now, that job will be up to more reasonably priced products like the Nano (& perhaps others that AMD is holding close to the vest for now like a potential uncut Tonga XT). The Nano is the far more interesting market share product & how it stacks up against the 970 &/or 980 will be far more important to AMD in terms of market performance than the Fury X.

By any measure, the Fury X is impressive & only "fails" in the minds of people who held it to the standard of trouncing the Titan X as the only possible measure of success. Bottom line is AMD can still compete & produce products worth considering. Any real criticism should be leveled at the 300 series which is obviously filler to sell out old silicon stock & perform as a placeholder while HBM yields improve. If HBM was already spinning out at needed yields I'm sure we'd already see an entire stack based on Fiji. Which is where they need to go.

In any case, the Nano remains the most interesting product in the stack in terms of performance/watt & potential ability to become AMD's 970 in the marketplace if they can price it right.
 
That Tom's Hardware review is the only outlier which shows the Fury X performing better. And if you look at their "methodology" it's easy to see why: they reuse results from previous reviews. They aren't even comparing apples to apples because drivers are not the same on every card shown.

Tech Report, PCPer, TPU all benchmarked a lot of games using the same machine and the latest drivers on all cards and they did not find such results.

Of course HardOCP are the ones who actually play through games as part of their review, and they found the experience subjectively inferior on the Fury X. Ultimately we are playing games, not bar charts, and HardOCP was scathing about how they felt games played on the Fury X compared to the 980 Ti. You don't need fancy FCAT charts to know when frametimes suck; you can FEEL it while gaming, and HardOCP commented on it, as did other sites including Tom's. Of course PCPer and TPU did FCAT, which shows objectively what HardOCP felt.

For $650, the 980 Ti is the better product hands down. Too bad AMD didn't launch this thing at $550 and kill the 980 instead taking a futile swing at the 980 Ti and missing.

Comparing reference 980 Ti to Fury X, then yeah, I would say the Ti is the better deal, but not by a landslide at 1440p/4k. Reference 980 Ti is bigger, noisier and hotter than the Fury X and we'll have to wait and see about drivers and OC and where that settles too.

And for the purpose of fairness, I'm comparing the Fury X to the reference 980 Ti's. We still need to see what's up with the Fury non-reference cards that swing into the picture at $550 next month and what performance they'll offer overclocked. So the sensible bet is to wait a bit.

But I totally understand people thinking 'screw it, we've waited long enough'. I would agree tbh, it feels like the Fury X was originally meant to release months ago. Can you imagine the positives if this thing had come out pre-Titan X? Narrative would have obviously been massively different. But alas AMD are late. Again.
 

mrklaw

MrArseFace
I think I'm about ready to do the same thing. I even tried to buy a 290x about 6 months ago, but that order was cancelled because they ran out of stock and couldn't ship mine. So I decided to wait until AMD's new cards and it just happens again. Somebody upstairs is just telling me to switch to Nvidia. I really, really don't like the idea of shelling out for a stupid Gsync monitor over a Freesync one though.

Me neither, bro, but when you dive into it, G-Sync really does some things that FreeSync can't do, i.e. sync all the way up to 144 Hz, and below 30 Hz it still does some magic. Plus, with the bonuses of using Nvidia, the cost offset is worth it.
It was a good deal to get the Fury X at $549.99, but not at $679.99; it comes in too close to the 980 Ti.

Gsync sounds great, but I can't justify an expensive monitor with only one DP input, which the 1440p gsyncs seem to be limited to. And yet I also find it hard to buy a 1440p monitor *without* gsync because it feels like a waste. Someone make a nice 27-28" 1440p gsync monitor with DP, HDMI and DVI connections
 
Thanks for the explanations.

This is something I will have to see with my own eyes to really understand how much of a big deal it is. But then again, we're comparing metrics like one card offering 84 fps whilst the other offers 87 fps even though it is literally impossible to notice the difference between 84 fps and 87 :/

If those frame times are fine then you won't notice the difference, but if you have a few late frames, the FRAPS-style count won't catch them and will still report 84 fps. For example, you're playing the game and you notice some stutter or torn frames while still seeing 84 fps; when you look at each frame in a frame time capture you'll see some are dropped yet FRAPS still counts them, or literally one line of pixels contains a new frame that you obviously don't benefit from seeing, yet it's counted by frame rate counters.

PCPer has lots of videos and explanations.

An extreme example is dual cards: they often have high frame rates yet the experience is terrible. Frame times just confirm what you're seeing.
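To illustrate the difference (thresholds and numbers are hypothetical, not FCAT's actual runt/drop rules), a per-frame capture flags the late frames that a plain FPS counter averages away:

Code:
# Rough sketch with hypothetical thresholds (not FCAT's actual rules): a per-frame
# capture exposes late frames that an averaged FPS counter smooths over.

from statistics import median

def flag_late_frames(frame_times_ms, factor=1.5):
    """Return (index, ms) for frames that took much longer than the typical frame."""
    typical = median(frame_times_ms)
    return [(i, t) for i, t in enumerate(frame_times_ms) if t > factor * typical]

capture = [16.7] * 80 + [50.0, 16.7, 45.0] + [16.7] * 5   # 88 frames with two obvious hitches

fps_counter = len(capture) / (sum(capture) / 1000.0)       # what a FRAPS-style counter reports
print("FPS counter reads ~", round(fps_counter))           # ~58, looks fine
print("late frames:", flag_late_frames(capture))           # [(80, 50.0), (82, 45.0)] -- the stutter you felt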
 

FireFly

Member
Worse frametimes mean (micro)stuttering. AMD is unlikely to fix this at 4K, as Fury's frametime issues come from the need to fetch data over PCIe because 4 GB of RAM is not enough for 4K even today.
Well, it seems even Shadow of Mordor is only using 3.8 GB at 4K on the Fury.

http://www.extremetech.com/gaming/2...erformance-power-consumption-and-4k-scaling/2

If the spikes are caused by memory management issues, how do we know AMD can't improve things in a future driver? According to AMD, up until now they have done very little work on improving their memory optimisation. So, I think there is still some hope that things will get better.
 

FLAguy954

Junior Member
May I present the EK waterblocks for the Fury X, beautiful as hell and enabling single-slot water cooling:

Code:
[IMG]http://cdn.overclock.net/b/b5/b5d7452f_FC-R9-Fury-X_1_1200.jpeg[/IMG]

[IMG]http://cdn.overclock.net/b/ba/ba5334f2_FC-R9-Fury-X_2_1200.jpeg[/IMG]
 

mephixto

Banned
May I present the EK waterblocks for the Fury X, beautiful as hell and enabling single-slot water cooling:

Code:
[IMG]http://cdn.overclock.net/b/b5/b5d7452f_FC-R9-Fury-X_1_1200.jpeg[/IMG]

[IMG]http://cdn.overclock.net/b/ba/ba5334f2_FC-R9-Fury-X_2_1200.jpeg[/IMG]

OMG!! looks amazing.
 

mkenyon

Banned
Of course HardOCP are the ones who actually play through games as part of their review, and they found the experience subjectively inferior on the Fury X. Ultimately we are playing games, not bar charts, and HardOCP was scathing about how they felt games played on the Fury X compared to the 980 Ti. You don't need fancy FCAT charts to know when frametimes suck; you can FEEL it while gaming, and HardOCP commented on it, as did other sites including Tom's. Of course PCPer and TPU did FCAT, which shows objectively what HardOCP felt.
This is so bogus, because it leaves so much up to subjectivity. FCAT and Frame Time testing shows this objectively. You have it backwards.
 

Crisium

Member
Well, it seems even Shadow of Mordor is only using 3.8 GB at 4K on the Fury.

http://www.extremetech.com/gaming/2...erformance-power-consumption-and-4k-scaling/2

If the spikes are caused by memory management issues, how do we know AMD can't improve things in a future driver? According to AMD, up until now they have done very little work on improving their memory optimisation. So, I think there is still some hope that things will get better.

I disagree with Dr. Rus saying the bad frametimes are from 4GB. It'll be fixed if AMD are up to the task. Things look far worse when a card is actually at its VRAM limit. Especially since The Witcher 3 frametimes chart is getting passed around a lot and the game is well known to be lenient on VRAM.
 

tuxfool

Banned
It's something I would consider "important". As the power divide between PCs and consoles this generation grows, tessellation is one area where PCs can see a large improvement while requiring minimal resources. I.e. a developer can create high-quality textures with depth information that run on consoles and PCs, but on PCs also get the added deformation and complexity that tessellation can provide, again all with minimal additional work required. Tessellation is really one of those "breakthrough" technologies IMO, right up there with the non-fixed graphics pipeline and geometry acceleration before that. I don't think it's playing second fiddle because AMD doesn't think it's important; I think it got second fiddle because this version of GCN barely moved the needle. I'd be shocked if the next iteration of Fury didn't see a tremendous improvement in the tessellation engine.

One of the issues with hardware tessellation (for geometry, as I'm not sure they use it for texturing of any kind) is that it is somewhat inflexible but very fast in cases where no feedback is required from the object being tessellated. There is a Civ 5 GDC presentation where they opted to use compute in order to be able to get the desired effect. But really the issue stems from the fact that the consoles don't have a strong tessellation engine, so things that use high tessellation factors are relegated to PC-specific effects like HairWorks (which, despite the superiority of Nvidia's engine, still murder frame rates).
 

tuxfool

Banned
This is so bogus, because it leaves so much up to subjectivity. FCAT and Frame Time testing shows this objectively. You have it backwards.

I'd have to agree. Often you cannot feel the frametime variability unless it gets to extreme levels. Other times you may not attribute the feeling correctly. If a hardware site is going to make qualitative statements on something (easily measurable) then they need to have the data to back it up.
 

tuxfool

Banned
Anyone talked about the Frame Rate Limiting yet? There are third party methods to do similar things, but this is built into the drivers and looks very effective.

http://semiaccurate.com/2015/06/24/frame-rate-limiting-with-amds-fury-x/

Frame-Rate-Limiting-Fury-X1.png

(Total system measure)

This is really only important if it offers more consistent frame times than a solution like RTSS. On the other hand, it probably works universally across any API, as it is implemented at the driver level, unlike RTSS.
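For context, the basic idea behind any frame-rate cap, whether done in the driver or in a tool like RTSS, is just to pad every frame out to a fixed budget. A bare-bones sketch of that concept (not AMD's implementation; real limiters are far more careful about timer precision and where in the pipeline they wait):

Code:
# Conceptual sketch only (not AMD's driver implementation): pad each frame out to a
# fixed budget so frame times stay even and the GPU isn't racing far past the target.

import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS            # ~16.7 ms per frame

def render_frame():
    time.sleep(0.005)                      # stand-in for ~5 ms of real rendering work

for _ in range(300):                       # a few seconds' worth of frames
    start = time.perf_counter()
    render_frame()
    elapsed = time.perf_counter() - start
    if elapsed < FRAME_BUDGET:
        time.sleep(FRAME_BUDGET - elapsed) # wait out the rest of the budget before the next frame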
 
This is so bogus, because it leaves so much up to subjectivity. FCAT and Frame Time testing shows this objectively. You have it backwards.

FCAT is merely a measurement tool - its results are only as good as the test scene chosen in the game.

Anyone talked about the Frame Rate Limiting yet? There are third party methods to do similar things, but this is built into the drivers and looks very effective.

http://semiaccurate.com/2015/06/24/frame-rate-limiting-with-amds-fury-x/

Frame-Rate-Limiting-Fury-X1.png

(Total system measure)

You get the same effect by using vsync, for example. But it's nice that it was added to the drivers.
 

mkenyon

Banned
I'd have to agree. Often you cannot feel the frametime variability unless it gets to extreme levels. Other times you may not attribute the feeling correctly. If a hardware site is going to make qualitative statements on something (easily measurable) then they need to have the data to back it up.
Yep. And the ultimate point is that HardOCP doesn't do it because it takes too much effort. Everything they say is a rationalization of that point.
 
Wow, the OC vs OC comparison is so bad it's insane; totally missed this one.

lol how many times in this thread has it been mentioned the voltage was locked for reviews?

It is a fair comparison, as no one is adding volts to get massive overclocks on the 980 Ti. These are the overclocks most people will get. I'm not sure about the G1 Gaming, but the EVGA ACX is still based on the reference board design, so it has stock volts.

Now perhaps the Fury will open a lot more with voltage - but the 980 Ti does too. People with modded bios on water have gotten upwards of 1500 MHz on the core.

I'm fairly certain the OC results you referred to in those graphs are indeed gains tested with voltage unlocked; graphs typically show the top overclock available and don't limit themselves to stock voltage.
 

wildfire

Banned
Yep. And the ultimate point is that HardOCP doesn't do it because it takes too much effort. Everything they say is a rationalization of that point.

They already offer the same type of comparisons seen in other review sites in addition to their visual feel tests. Lack of effort isn't the phrase you're looking for.
 

mkenyon

Banned
They already offer the same type of comparisons seen in other review sites in addition to their visual feel tests. Lack of effort isn't the phrase you're looking for.
They don't offer the same data comparisons at all. That's the point :p
Don't PCPer and TechReport basically have to use special, expensive tools to test the frame time stutter?
Not too expensive, and IIRC the cost of the setup was mostly covered. TechReport also demonstrated that frame time capturing through something like FRAPS was basically identical outside of some special circumstances like the Crossfire issue with large frames not syncing causing runt frames.
 

tuxfool

Banned
You get the same effect by using vsync, for example. But it's nice that it was added to the drivers.

It isn't the same. For example if you have a Gsync or Freesync display, you can choose an arbitrary cap. I believe that is what they were targeting as the main use case. Also it may have the benefits I noted in my previous post.
 

Jamex RZ

Banned
That isn't true; in theory it has more compute cores

Whether that makes any difference in current games is another issue

And HDMI 2.0, which for me and many people with 4k TVs matters a lot. That sealed the deal for me, not going to game at 30fps max lol (yes, I have my desktop in my living room).
 