
Digital Foundry - Texture filtering on consoles, what's going on?

c0de

Member
Yes, but again this is just baseless speculation as we have no evidence that the PS4 is lacking in bandwidth compared to the XB1. Correct me if I'm wrong, but the PS4 has 176 GB/s bandwidth theoretically and the XB1's DDR3 RAM has 68 GB/s theoretically. I really doubt that the PS4's significant advantage in bandwidth is used up just for 1080p. I know the XB1 has the eSRAM, but that is mainly used for frame buffer if I'm not mistaken and there is zero evidence that it is used for AF.

On the other hand, we have a few examples of AF getting patched in for games on the PS4. Furthermore, most games that lack AF on PS4 are quick cash grabs and I highly suspect that the devs didn't even bother to get some additional work done to enable AF on PS4, which seems to default to trilinear.

The article is about AF on consoles, not Xbone vs PS4.
The article also explicitly addresses low AF on consoles, not only AF being absent.
Also, posting theoretical maximum values leads nowhere, as they are not achievable in reality, as the slide in the article shows. For the PS4 it isn't even 140 GB/s, and that's in the unrealistic case where there is no CPU access to the RAM.
Low AF seems to be down to bandwidth, which also seems to be why Bluepoint use 4x and 8x AF.
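For reference, the theoretical peaks being thrown around are just bus width times effective data rate. A quick back-of-the-envelope sketch (assuming the commonly cited 256-bit buses and memory speeds for both machines, which are not figures from the article):

Code:
# Rough theoretical peak bandwidth = bus width (in bytes) * effective data rate (GT/s).
# The specs below are the commonly cited ones, not measured numbers.
def peak_bandwidth_gbs(bus_bits, data_rate_gts):
    return (bus_bits / 8) * data_rate_gts

print("PS4 GDDR5:", peak_bandwidth_gbs(256, 5.5), "GB/s")    # ~176 GB/s
print("XB1 DDR3 :", peak_bandwidth_gbs(256, 2.133), "GB/s")  # ~68 GB/s

Neither number says anything about what the GPU actually sees once CPU contention and real access patterns are factored in, which is the point above.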
 

omonimo

Banned
The whole matter is interesting, but putting the PS4 AF issue in the middle of it as if it were an established thing ruins everything. I don't see the connection. Seems like flame bait.
 

Durante

Member
Why did DF not contact the developers of games such as Dying Light or DmC remastered? Both of those games were released without AF (or using trilinear instead) on the PS4, and it was later patched in at no cost to performance. I'm sure developers from either of those games could shed some light on the issue and how they solved it. Anonymously, too.

In fact, SSFIV on the PS4 was also originally released without AF, and since then not only has AF been introduced, but framerate and input lag have been improved as well.

I'm sorry, but it is certainly not hardware.
Your line of argument doesn't really support your conclusion. E.g. if there was no performance impact ever, then why would many first party titles only use 4xAF on very prominent surfaces?

I think it's quite obvious that we are looking at 2 separate things here:
  • AF being cut back (below 16x) for performance reasons on both consoles, either because of development time constraints or because its relative performance impact is still deemed too high on console hardware.
  • AF completely missing from the PS4 versions of many games which feature it on XB1. The most likely explanation for this is still some kind of API difference.
 

omonimo

Banned
Your line of argument doesn't really support your conclusion. E.g. if there was no performance impact ever, then why would many first party titles only use 4xAF on very prominent surfaces?

I think it's quite obvious that we are looking at 2 separate things here:
  • AF being cut back (below 16x) for performance reasons on both consoles, either because of development time constraints or because its relative performance impact is still deemed too high on console hardware.
  • AF completely missing from the PS4 versions of many games which feature it on XB1. The most likely explanation for this is still some kind of API difference.
I imagine he's talking specifically about PS4 vs. Xbone hardware, not consoles in general.
 

Marlenus

Member
And that's why I added benchmarks of the R9 280 (=7950, GCN 1.0), to show that it still isn't free (and probably never will be).

In one benchmark you performed, and I have no clue how that test was run. Was it a built-in benchmark or real-world gameplay? If the latter, did you run the same course each time? Did you do the same actions? Did you run the test multiple times and average out the results? Was that the biggest difference, or was it an average? How long was that difference sustained, or was it merely a blip which could be some other rendering hitch? How do I know the frame rate differences are due to AF on its own and not run-to-run variance? Your benchmarks are worthless without answers to some of those questions and, considering the general performance impact of AF, ideally all of them, as you need to remove as many variables as possible given how small the difference is. In the Digital Foundry results the differences between AF and no AF are within margin of error, as anything less than 5% is pretty much a tie unless you run the test lots of times and scrap the outliers.

Actually it dates back even further. When ATi released the Radeon 8500, at first glance many marveled at the performance of their AF implementation in contrast to the GeForce 3 (which already offered almost perfect AF quality). The reason it was so fast, though, was not only that it was heavily angle-dependent; it also only used a bilinear filter with AF. The latter was improved by the Radeon 9700 (R300 series).

The 9700 was pretty much the first time you could use the performance AF setting at almost no cost, but it only really worked on vertical or horizontal surfaces (maybe 45 degrees too, but I can't remember). At the time that was fine because, generally speaking, the geometry in games was pretty simple, but as games evolved so did AF. Just compare the synthetic AF test results of the 9700 and 5870 to see the progression in angle independence. NV were later to the party with angle-independent AF, but they got there in the end.

I wish it would, but the drivers on both AMD and Nvidia's sides still default to "standard" texture filtering quality and have to be changed manually in order to get the best texture filtering quality. No doubt in order to gain a percent here or there in benchmarks.

You would not notice the performance gain, as these AF enhancements have come without a performance penalty. They are probably so ingrained in the algorithm that they cannot be turned off. Even if you could, chances are you would notice the drop in IQ more than the increase in FPS, if there even was one.

The article is about AF on consoles, not Xbone vs PS4.
The article also explicitly addresses low AF on consoles, not only AF being absent.
Also, posting theoretical maximum values leads nowhere, as they are not achievable in reality, as the slide in the article shows. For the PS4 it isn't even 140 GB/s, and that's in the unrealistic case where there is no CPU access to the RAM.
Low AF seems to be down to bandwidth, which also seems to be why Bluepoint use 4x and 8x AF.

If it were due to bandwidth, the Kaveri rig with less than 40 GB/s of bandwidth would have highlighted it. Just because a dev says something does not make it true; I hate arguments from authority, and devs are just people who can be wrong.

The idea that AF is more expensive on consoles, when lower-performing parts of the same architecture can do it on PC where the software stack also has higher overhead, is just dumb dumb dumb.
 

thelastword

Banned
If it were due to bandwidth, the Kaveri rig with less than 40 GB/s of bandwidth would have highlighted it. Just because a dev says something does not make it true; I hate arguments from authority, and devs are just people who can be wrong.

The idea that AF is more expensive on consoles, when lower-performing parts of the same architecture can do it on PC where the software stack also has higher overhead, is just dumb dumb dumb.
Dumb dumb dumb is an understatement. No matter how much it is shown that AF is not a hardware issue... devs patching high levels of AF in at no visible performance cost, so many devs just leaving it out, games defaulting to trilinear (even the most un-optimized games don't use trilinear, but in 2015 it's a thing). People are trying to give the impression that the PS4 is so starved of GPU resources that it can only muster trilinear filtering in some cases... SMH... whilst the XB1 can do anisotropic filtering. The biggest takeaway from the article is the chart: less than a 1 fps hit in so many games tested on a severely weak APU, but instead people want to hang on to some vague answers from devs.

Why didn't DF interview From Software or the RE-R2 devs? These games have 16xAF. Why didn't they interview Techland, Ninja Theory and Other Ocean? All of these devs patched 8xAF and up into their games. Why didn't they go to the tech guys from the ICE team and get them to weigh in? Instead, they went to the Evolve devs (marketing deal with MS), who said that the XB1 does better AF because of 900p, which just isn't true.

Why didn't they get hold of the Strider devs to admit what's really going on... that they just didn't bother?
 

dark10x

Digital Foundry pixel pusher
Why didn't DF interview From Software or the RE-R2 devs? These games have 16xAF. Why didn't they interview Techland, Ninja Theory and Other Ocean? All of these devs patched 8xAF and up into their games. Why didn't they go to the tech guys from the ICE team and get them to weigh in? Instead, they went to the Evolve devs (marketing deal with MS), who said that the XB1 does better AF because of 900p, which just isn't true.

Why didn't they get hold of the Strider devs to admit what's really going on... that they just didn't bother?
I'm not sure what process was used for this article, but getting in contact with people that can answer these kinds of questions is insanely difficult. Publishers do not want the press talking to their engineers, ESPECIALLY at Japanese companies. FROM and Capcom? Are you kidding? There is no way in hell they would EVER talk about this. People have been banging on FROM's door for ages over the Bloodborne frame-pacing issues. They've never spoken of it. These developers are very private and hold this stuff close to their chest.

Techland has also proven difficult to talk to. I've been bugging them about the frame-rate cap issue on PS4 while indoors for months and they've never responded or acknowledged it.

Ninja Theory hasn't released a game on PS4 yet, have they? QLOC ported DmC.

It's not that questions aren't being asked, rather, that few answers will ever be given. Bluepoint and the guys that made Ethan Carter blew me away with how open they were in comparison to most.

It is as Durante says - two different issues.
 

JawzPause

Member
I thought this would be a more popular thread on GAF...
Anyhow, this really is a mystery and it's a shame that DF didn't get to the bottom of it.
 

KKRT00

Member
Dumb dumb dumb is an understatement. No matter how much it is shown that AF is not a hardware issue... devs patching high levels of AF in at no visible performance cost, so many devs just leaving it out, games defaulting to trilinear (even the most un-optimized games don't use trilinear, but in 2015 it's a thing). People are trying to give the impression that the PS4 is so starved of GPU resources that it can only muster trilinear filtering in some cases... SMH... whilst the XB1 can do anisotropic filtering. The biggest takeaway from the article is the chart: less than a 1 fps hit in so many games tested on a severely weak APU, but instead people want to hang on to some vague answers from devs.

Why didn't DF interview From Software or the RE-R2 devs? These games have 16xAF. Why didn't they interview Techland, Ninja Theory and Other Ocean? All of these devs patched 8xAF and up into their games. Why didn't they go to the tech guys from the ICE team and get them to weigh in? Instead, they went to the Evolve devs (marketing deal with MS), who said that the XB1 does better AF because of 900p, which just isn't true.

Why didn't they get hold of the Strider devs to admit what's really going on... that they just didn't bother?
Why didn't they interview Ready at Dawn or Guerrilla Games, who use 2xAF on many textures?

There is some problem with AF on consoles; it's a fact, and many games prove it. Now we need to know exactly why, and what kind of bandwidth is required for 8xAF.
 

Durante

Member
The idea that AF is more expensive on consoles, when lower-performing parts of the same architecture can do it on PC where the software stack also has higher overhead, is just dumb dumb dumb.
Actually, the way you formulate this argument indicates a lack of deeper understanding of how software and hardware interact. If a given software stack has a higher performance overhead, then -- if there is any impact at all -- it is more likely for a feature which only induces more GPU work, but not more API interactions, to cause a lower relative performance impact than if the API interactions induced lower overhead costs. Anisotropic filtering is such a feature.
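A toy calculation of that last point, with invented numbers and a deliberately crude serial model of a frame (fixed CPU/API overhead plus GPU work), shows how the same absolute AF cost shrinks in relative terms when the rest of the stack costs more:

Code:
# Crude serial model: frame time = fixed CPU/API overhead + GPU work + AF cost.
# The AF cost is held constant; only the overhead differs between the two "stacks".
def relative_af_impact(overhead_ms, gpu_ms, af_ms):
    without_af = overhead_ms + gpu_ms
    with_af = without_af + af_ms
    return (with_af - without_af) / with_af * 100  # AF's share of total frame time, in %

print(relative_af_impact(overhead_ms=8.0, gpu_ms=10.0, af_ms=1.0))  # high-overhead stack: ~5.3%
print(relative_af_impact(overhead_ms=2.0, gpu_ms=10.0, af_ms=1.0))  # low-overhead stack:  ~7.7%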
 

Marlenus

Member
It is as Durante says - two different issues.

I doubt it. Based on the admittedly small sample size, the hardware seems perfectly capable, so the only other explanation is software. The PC has a software disadvantage due to higher overhead in the API, so that leaves game code, which is squarely on the devs' shoulders, and I doubt they will admit it.

It would be nice to do an update, if there is time, to test the Kaveri system with more games to see if any show a higher cost, and to broaden the number of engines and rendering types tested.
 

dark10x

Digital Foundry pixel pusher
I doubt it. Based on the admittedly small sample size, the hardware seems perfectly capable, so the only other explanation is software. The PC has a software disadvantage due to higher overhead in the API, so that leaves game code, which is squarely on the devs' shoulders, and I doubt they will admit it.

It would be nice to do an update, if there is time, to test the Kaveri system with more games to see if any show a higher cost, and to broaden the number of engines and rendering types tested.
You doubt it? Why exactly?

In a last generation port, such as DmC, of course it's an implementation issue - not a performance problem. Games where the PS4 version lacks AF while we see it on Xbox One all fall into that category.

For the more demanding titles, though? You really think Ready at Dawn was being "lazy" when they opted for low levels of AF in The Order 1886?
 

Durante

Member
I doubt it. Based on the admittedly small sample size, the hardware seems perfectly capable, so the only other explanation is software.
How do you explain the fact that first-party titles widely heralded for their technical excellence often use as low as 4xAF?
 

thelastword

Banned
Ninja Theory hasn't released a game on PS4 yet, have they? QLOC ported DmC.
Well, they hired QLOC for the port job, and communication for the patch was naturally handled through them as well... so if you were going to get anything, it would be from them. NT are not shy towards the community; they share quite a bit, and Tameem Antoniades is quite a vocal and expressive chap. Just look at all the developer diaries they do for their games; they are certainly more forthcoming than most.

dark10x said:
It's not that questions aren't being asked, rather, that few answers will ever be given. Bluepoint and the guys that made Ethan Carter blew me away with how open they were in comparison to most.
Hence why some of these answers were so vague, even from those you deem forthcoming. Some of the answers did not gel with the reality of the situation, in that so many devs patched in AF at no (visible) cost in performance. I think some devs will just make up excuses when they're found wanting. Did those Evolve devs ever patch AF into the PS4 version? As forthcoming as you think they are, lots of devs just want to protect their skin and avoid backlash, so it's not far-fetched that they may want to shift focus/blame elsewhere. The only thing I trust from that article is the cold hard evidence, and it's in line with all the devs who patched in AF without issue.

As a matter of fact, when I was reading through the first few paragraphs of that article, I said, OK, I hope you guys do something tangible evidence-wise, I hope you guys test the A10 APU, whose ability to do 16xAF I've strangely been highlighting in Witcher 3 tech threads. Next paragraph, BAM! I saw you guys mentioned the A10 APU in AF tests and I said, finally, that's how you do it. It's the most important and verifiable result in that whole article. I just hope you guys keep trying some of these devs; perhaps, if not interviews, you could engage them on Twitter or Facebook or even B3D.
 

Marlenus

Member
Actually, the way you formulate this argument indicates a lack of deeper understanding of how software and hardware interact. If a given software stack has a higher performance overhead, then -- if there is any impact at all -- it is more likely for a feature which only induces more GPU work, but not more API interactions, to cause a lower relative performance impact than if the API interactions induced lower overhead costs. Anisotropic filtering is such a feature.

If AF induces no API interactions, then the higher overhead on PC makes no difference and it means it is a purer hardware test, which, with the samples provided, shows that even hardware substantially weaker than the consoles can handle 16xAF with a performance cost that is within the margin of error.

You doubt it? Why exactly?

In a last generation port, such as DmC, of course it's an implementation issue - not a performance problem. Games where the PS4 version lacks AF while we see it on Xbox One all fall into that category.

For the more demanding titles, though? You really think Ready at Dawn was being "lazy" when they opted for low levels of AF in The Order 1886?

Not lazy, but console devs have habitually used lower AF settings, and changing that mindset could be a challenge. Also, devs are human; they make mistakes or don't spot things, and if AF is not on their mind it is easily overlooked.

How do you explain the fact that first-party titles widely heralded for their technical excellence often use as low as 4xAF?

Same way you explain why UFC was 900p with 4xMSAA on PS4 when in all likelihood 1080p with 2x MSAA would offer superior IQ and better performance. If it's not being thought about it can easily be missed.
 

Dazza

Member
You doubt it? Why exactly?

In a last generation port, such as DmC, of course it's an implementation issue - not a performance problem. Games where the PS4 version lacks AF while we see it on Xbox One all fall into that category.

For the more demanding titles, though? You really think Ready at Dawn was being "lazy" when they opted for low levels of AF in The Order 1886?

I don't get why you can say this on a forum and leave it open to interpretation in the article.

I agree with the others saying this is a poorly researched article, it sheds no more light than what we've already unveiled here at NeoGAF.

All it does is lend credence to the often-held opinion that games journalism isn't real journalism.
 

Durante

Member
If AF induces no API interactions, then the higher overhead on PC makes no difference and it means it is a purer hardware test, which, with the samples provided, shows that even hardware substantially weaker than the consoles can handle 16xAF with a performance cost that is within the margin of error.
Again, you are not considering the full set of potential implications of what you are talking about. What is measured if you look at a metric such as FPS is the final, aggregate performance of a complex stack of interacting hardware and software components. This means that individual effects on performance might well be underrepresented, or not represented at all, in this number. For example, if a game is limited primarily by CPU and API overheads, or ROPs, or shader processing, or vertex setup -- and so on and so forth -- on a given platform, then changing the AF level won't have a measurable impact on aggregate performance even though it might have one on another platform where the resource bottlenecks contributing to the final measured performance are distributed differently.

Same way you explain why UFC was 900p with 4xMSAA on PS4 when in all likelihood 1080p with 2x MSAA would offer superior IQ and better performance. If it's not being thought about it can easily be missed.
I don't see how "missing" something can result in a complex set and distribution of distinct AF levels across a variety of surfaces.
 

Javin98

Banned
The article is about AF on consoles, not Xbone vs PS4.
The article also explicitly addresses low AF on consoles, not only AF being absent.
Also, posting theoretical maximum values leads nowhere, as they are not achievable in reality, as the slide in the article shows. For the PS4 it isn't even 140 GB/s, and that's in the unrealistic case where there is no CPU access to the RAM.
Low AF seems to be down to bandwidth, which also seems to be why Bluepoint use 4x and 8x AF.
Still, I'm saying we have literally zero evidence that the XB1 has better AF than the PS4 in some games because of the PS4 having bandwidth issues. In almost every one of those cases, the PS4 lacks AF entirely, which makes me think this is a dev oversight instead. Right now, I think the best theory is that additional work is required to enable AF on PS4, or else it defaults to trilinear.

Edit: Also, if the memory bandwidth of the PS4 doesn't hit the theoretical value of 176 GB/s, you can be damn sure the XB1 doesn't hit 68 GB/s either.
 

darkinstinct

...lacks reading comprehension.
In one benchmark you performed, and I have no clue how that test was run. Was it a built-in benchmark or real-world gameplay? If the latter, did you run the same course each time? Did you do the same actions? Did you run the test multiple times and average out the results? Was that the biggest difference, or was it an average? How long was that difference sustained, or was it merely a blip which could be some other rendering hitch? How do I know the frame rate differences are due to AF on its own and not run-to-run variance? Your benchmarks are worthless without answers to some of those questions and, considering the general performance impact of AF, ideally all of them, as you need to remove as many variables as possible given how small the difference is. In the Digital Foundry results the differences between AF and no AF are within margin of error, as anything less than 5% is pretty much a tie unless you run the test lots of times and scrap the outliers.

Margin of error is less than 2 %.
 

c0de

Member
This forum is going nuts in technical discussions lately. People prefer to trust their own beliefs or misunderstandings instead of what the devs, who actually wrote code for these consoles, say.
It doesn't make sense to them, but they probably don't know all the facts and dependencies needed to say with any authority that it doesn't make sense. Instead, opinions about how devs work are posted from the comfort of a chair at home; armchair development at its best.
 

Fafalada

Fafracer forever
Marlenus said:
If AF induces no API interactions then the higher overhead on PC makes no difference and it means it is a more pure hardware test
Framerate is a product of hundreds of workloads running in parallel. If any one of those grows longer than the rest (e.g. CPU time spent in the driver), the time for others (like, say, texture filtering) can be extended "for free" without any impact on framerate.
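A crude sketch of that point, with invented numbers: if the frame time is set by the longest of several stages that overlap, extra filtering time only shows up once it exceeds the slack under the current bottleneck.

Code:
# Crude model: overall frame time is the longest of the overlapping workloads.
# All numbers are invented purely for illustration.
def frame_time_ms(cpu_driver_ms, vertex_ms, texturing_ms):
    return max(cpu_driver_ms, vertex_ms, texturing_ms)

# CPU/driver-bound frame: adding 2 ms of texture filtering is invisible.
print(frame_time_ms(16.0, 6.0, 9.0))        # 16.0 ms
print(frame_time_ms(16.0, 6.0, 9.0 + 2.0))  # still 16.0 ms -> AF looks "free"

# Texturing-bound frame: the same 2 ms shows up in full.
print(frame_time_ms(8.0, 6.0, 14.0))        # 14.0 ms
print(frame_time_ms(8.0, 6.0, 14.0 + 2.0))  # 16.0 ms -> AF clearly costs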

c0de said:
This forum is going nuts in technical discussions lately
We demand black-or-white answers, otherwise someone isn't telling the truth or is being 'lazy'.
 

Marlenus

Member
Again, you are not considering the full set of potential implications of what you are talking about. What is measured if you look at a metric such as FPS is the final, aggregate performance of a complex stack of interacting hardware and software components. This means that individual effects on performance might well be underrepresented, or not represented at all, in this number. For example, if a game is limited primarily by CPU and API overheads, or ROPs, or shader processing, or vertex setup -- and so on and so forth -- on a given platform, then changing the AF level won't have a measurable impact on aggregate performance even though it might have one on another platform where the resource bottlenecks contributing to the final measured performance are distributed differently.

I don't see how "missing" something can result in a complex set and distribution of distinct AF levels across a variety of surfaces.

There is no need to be patronising. I am well aware that different hardware has different bottlenecks and that it is not always obvious. Take the 7790: it is very shader-heavy and has obvious bandwidth/ROP bottlenecks, as it has similar shader performance to the 7850. Or the Fury (X), which should show better scaling than it does at stock settings, yet surprisingly scales well with memory overclocking despite already offering lots of bandwidth.

The devs in the article talk about a) bandwidth and b) AF being expensive. If (a) were true, it would show up in the Kaveri system, as that has far less bandwidth than both the PS4 and Xbox One. It would also suffer the same memory contention issues that the consoles can have, since it too is a shared memory pool. If (b) were true, it would also show up in the Kaveri system, as it is massively underpowered versus the console GPUs.

The results show that AF is not expensive (if it were, something as weak as the A10-7800 would suffer when going from 0x to 16x) and that it is not a bandwidth issue (if it were, the 38.4 GB/s of the Kaveri system would also highlight it). Is AF limited by TMU performance? If it were, you would expect greater than 2% drops across the board, considering the A10-7800 only has 32 TMUs running at 720MHz, while the Xbox One has 48 running at 853MHz and the PS4 has 72 running at 800MHz. Every bottleneck you look at would appear first on the Kaveri platform if it existed, yet it does not show up in any of the tested games.
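Taking the TMU counts and clock speeds quoted above at face value, the peak texel rates work out roughly as follows (theoretical upper bounds only, saying nothing about what each chip actually achieves):

Code:
# Peak texel rate = TMU count * core clock (GHz), in Gtexels/s.
# TMU counts and clocks are the figures quoted above, taken at face value.
def peak_texel_rate(tmus, clock_ghz):
    return tmus * clock_ghz

print("A10-7800:", peak_texel_rate(32, 0.720), "GT/s")  # ~23.0
print("Xbox One:", peak_texel_rate(48, 0.853), "GT/s")  # ~40.9
print("PS4     :", peak_texel_rate(72, 0.800), "GT/s")  # ~57.6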

A larger selection of games would be a lot more conclusive so I do hope DF follow up when they have time with some more games to see if there are any situations which show larger drops after turning on AF.

Missing was the wrong word to use; it is more a case of it not really being considered in any deep way. This is speculation, but I think it can pretty easily be explained in two ways:

1) Low AF on both consoles - This could be a simple case of workflows. Timelines are tight, and finding the time to change the process flow and then test higher AF levels could just be something that studios do not want to commit to. It seems like it should be an easy win to make 8xAF the baseline, and hardware-wise it looks like there is little to no performance impact, but perhaps the issue is neither hardware nor software but simply a lack of time to change and test it.

2) No AF on PS4 but AF on Xbox One - As already stated, this is probably some sort of default API setting difference and a simple oversight: the dev thinks it's enabled, it is not picked up in QA for some reason, and the game ships with the setting turned off on PS4 and turned on on Xbox One.

In light of the above, I do think I was a bit harsh with my comments stating that it was squarely on the devs' shoulders, as I did not really consider the time impact that such a change could have.

Margin of error is less than 2 %.

The error margin is not a fixed percentage; it depends on the repeatability of the test and the number of runs performed. A scripted benchmark is more repeatable than a gameplay test, so the error bars on the former are smaller than on the latter. Doing more repeats also tightens the error bars, as you have more data points. Generally 5% is pretty reasonable if you are doing a basic three-run test where you exclude the outliers (if any) and average the remaining results.
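A minimal sketch of that kind of run-averaging, with invented sample numbers, might look like this:

Code:
import statistics

# Invented frame-rate samples from repeated runs of the same benchmark pass.
runs_af_off = [61.2, 60.8, 61.0, 58.1]  # 58.1 looks like a hitch -> outlier
runs_af_16x = [60.4, 60.1, 60.6]

def trimmed_mean(samples, max_dev=1.5):
    """Drop samples more than max_dev fps from the median, then average the rest."""
    med = statistics.median(samples)
    kept = [s for s in samples if abs(s - med) <= max_dev]
    return statistics.mean(kept)

off = trimmed_mean(runs_af_off)
on = trimmed_mean(runs_af_16x)
print(f"AF off: {off:.1f} fps, 16xAF: {on:.1f} fps, delta: {(off - on) / off * 100:.1f}%")

With deltas this small, the outlier handling and the number of repeats matter more than the raw averages.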
 

tuxfool

Banned
This forum is going nuts in technical discussions lately. People prefer to trust their own beliefs or misunderstandings instead of what the devs, who actually wrote code for these consoles, say.
It doesn't make sense to them, but they probably don't know all the facts and dependencies needed to say with any authority that it doesn't make sense. Instead, opinions about how devs work are posted from the comfort of a chair at home; armchair development at its best.

The problem stems from the fact that these developers said one thing. Then you have tweets from people on the ICE team saying another. Both could be correct and talking past each other but a conclusive answer still isn't within grasp.
 
In one benchmark you performed, and I have no clue how that test was run. Was it a built-in benchmark or real-world gameplay? If the latter, did you run the same course each time? Did you do the same actions? Did you run the test multiple times and average out the results? Was that the biggest difference, or was it an average? How long was that difference sustained, or was it merely a blip which could be some other rendering hitch? How do I know the frame rate differences are due to AF on its own and not run-to-run variance? Your benchmarks are worthless without answers to some of those questions and, considering the general performance impact of AF, ideally all of them, as you need to remove as many variables as possible given how small the difference is. In the Digital Foundry results the differences between AF and no AF are within margin of error, as anything less than 5% is pretty much a tie unless you run the test lots of times and scrap the outliers.

I benchmarked two games where I noticed larger differences: Trackmania 2 and BioShock Infinite. It was just real-world screenshot benchmarking at a specific location (see the screenshots), not a timedemo or anything. Of course I made sure that there were no fluctuations in frame rate and that the results were reproducible in a second run.
There are also differences of about 7-8% in BioShock Infinite's integrated timedemo, but I purposely picked that location because I wanted to demonstrate a "bad case" scenario - as this is what matters if you want to hold a stable frame rate.

The 9700 was pretty much the first time you could use the performance AF setting at almost no cost, but it only really worked on vertical or horizontal surfaces (maybe 45 degrees too, but I can't remember). At the time that was fine because, generally speaking, the geometry in games was pretty simple, but as games evolved so did AF. Just compare the synthetic AF test results of the 9700 and 5870 to see the progression in angle independence. NV were later to the party with angle-independent AF, but they got there in the end.

As I said, you could already use AF on the Radeon 8500 with the same low performance penalties (with the quality drawbacks I mentioned).

And I have to disagree that the compromises in quality were fine or less noticeable back then. Here's an old screenshot I made back in the day, displaying an old game with simple terrain geometry (Gothic 2):

[screenshot: af8eu0d.png]

This is what 16x AF looked like on R300 when you didn't have a straight plane in front of you. :)
It was already a lot better on the GeForce 3, almost angle-independent (see http://www.vogons.org/viewtopic.php?f=46&t=44035#p431135). Nvidia only reduced the quality on later cards in order to save performance. The quality was back to good levels with the release of the 8800 GTX (G80).

You would not notice the performance gain, as these AF enhancements have come without a performance penalty. They are probably so ingrained in the algorithm that they cannot be turned off. Even if you could, chances are you would notice the drop in IQ more than the increase in FPS, if there even was one.

But you can still turn the quality up and down! And it defaults to a medium quality setting. To be fair, this medium setting is already quite good and most people won't notice any difference in image quality, but it's there. And I'm pretty sure AMD and Nvidia would've gotten rid of the setting (and defaulted it to maximum quality) if there were never a performance penalty in any situation.
 

Durante

Member
There is no need to be patronising.
You start with that, and then you write a long meandering argument based on the exact same misconceptions that I tried to explain to you (and which Fafalada explained as well, more concisely).

To boil it down to the bare minimum: there is no way to determine the relative performance impact of any individual setting on a platform A from aggregate performance measurements on a different hardware and software platform B.
 
I think we need a more nuanced view of what the performance costs of AF are in modern games in a variety of scenarios, and I think i can post something that will point it out quite well.

I decided to test my GPU in two settings, 4K and 1080p: both to stress shading and bandwidth with different degrees. Here are the results in a game with complex surface shading and extremely high quality textures:

3840X2160 1x AF = 41.8 fps
3840X2160 16x Driver AF = 35.8 fps
1920X1080 1x AF = 109.6 fps
1920X1080 16x Driver AF = 104.0 fps

The cost of AF when the GPU is less stressed is decidedly less than the cost when it is being heavily utilized. IMO, if I can induce significant framerate differences (42 vs 36 fps) in a context where the hardware is already quite stressed, then AF definitely is not "free". I imagine most modern AAA console games are fighting for resources and utilizing the shit out of the hardware as is, hence why AF is declared expensive... especially since bandwidth is scarcer and more fought over in console hardware. Mind you, the images and numbers above were made with decidedly more powerful, more modern hardware than that found in consoles. One can imagine that the relative costs on consoles are probably even more drastic.

This is of course just extrapolating, but I think we cannot really say anymore that it "costs nothing". Rather, the cost is "nothing" IMO given what it does to the image.
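One way to read those numbers, just converting the posted fps figures into frame times and nothing more:

Code:
# Convert the posted fps figures into frame times to see the absolute AF cost.
def frame_time_ms(fps):
    return 1000.0 / fps

for label, no_af, with_af in [("3840x2160", 41.8, 35.8), ("1920x1080", 109.6, 104.0)]:
    added = frame_time_ms(with_af) - frame_time_ms(no_af)
    print(f"{label}: 16xAF adds ~{added:.2f} ms per frame")
# 3840x2160: 16xAF adds ~4.01 ms per frame
# 1920x1080: 16xAF adds ~0.49 ms per frame

So in this particular test the extra per-frame work is far larger at 4K, which is consistent with the point that the cost grows when the GPU is already heavily loaded.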
 

Renekton

Member
I think we need a more nuanced view of what the performance costs of AF are in modern games in a variety of scenarios, and I think i can post something that will point it out quite well.

I decided to test my GPU in two settings, 4K and 1080p: both to stress shading and bandwidth with different degrees. Here are the results in a game with complex surface shading and extremely high quality textures:

3840X2160 1x AF = 41.8 fps

3840X2160 16x Driver AF = 35.8 fps

1920X1080 1x AF = 109.6 fps

1920X1080 16x Driver AF = 104.0 fps

The cost of AF when the GPU is less stressed is decidedly less than the cost when it is being heavily utilized. IMO, if I can induce significant framerate differences (42 vs 36 fps) in a context where the hardware is already quite stressed, then AF definitely is not "free". I imagine most modern AAA console games are fighting for resources and utilizing the shit out of the hardware as is, hence why AF is declared expensive... especially since bandwidth is scarcer and more fought over in console hardware. Mind you, the images and numbers above were made with decidedly more powerful, more modern hardware than that found in consoles. One can imagine that the relative costs on consoles are probably even more drastic.

This is of course just extrapolating, but I think we cannot really say anymore that it "costs nothing". Rather, the cost is "nothing" IMO given what it does to the image.
Thanks for the finding.

Seems like it's not as much of a freebie as I thought.
 

Durante

Member
Great job testing that! Though I think even some of your conclusions are going a bit too far given how little we can really tell from coarse-grained performance metrics.

In any case, that's around 5% performance impact at 1080p and 17% at 4k in your testing scenario.
 
Though I think even some of your conclusions are going a bit too far given how little we can really tell from coarse-grained performance metrics.

Yeah, you are right (especially in light of your post preceding mine). So I probably am over-extrapolating, but I can only imagine that the worst-case situation on advanced hardware is better than the worst case on less advanced hardware. It would be surprising if it weren't, IMO.

But at least there is some context!
 

c0de

Member
Great job testing that! Though I think even some of your conclusions are going a bit too far given how little we can really tell from coarse-grained performance metrics.

In any case, that's around 5% performance impact at 1080p and 17% at 4k in your testing scenario.

All we can do is a “cold case“ analysis and observe results, sadly. It would need profiling data to see what's really happening. The thing is that people here decide to draw conclusions from the given results alone, without taking into account that there is a complex net of dependencies behind what you see on screen.
That is why we get posts like “I don't understand why game x doesn't do feature y, game z looks way worse, it should be easy to do it better, so lazy dev“.
 
If anyone would like to perform the same test with/without AF at 4K and with/without at 1080p in other games on other hardware... it would also be cool to see the differences.

Or heck... the same test in SC with different hardware would be just as interesting.
 

Nheco

Member
I think it is somewhat related to the shared memory system. Xbox One games are generally ahead in this matter, and it has two memory pools, unlike the PS4.
 

Marlenus

Member
You start with that, and then you write a long meandering argument based on the exact same misconceptions that I tried to explain to you (and which Fafalada explained as well, more concisely).

To boil it down to the bare minimum: there is no way to determine the relative performance impact of any individual setting on a platform A from aggregate performance measurements on a different hardware and software platform B.

If we were comparing different architectures then I would 100% agree. While using the Kaveri platform as a surrogate for the consoles' unified memory architecture is not perfect, the fact that DF found very little performance difference on that platform is a strong indicator that hardware is not the issue, especially as Kaveri is weaker in that regard than either console.

Four games is not a conclusive sample, but when you add in the other games where AF was added in a patch with no performance impact, it adds to the overall picture that the AF situation is not a hardware issue.

I think we need a more nuanced view of what the performance costs of AF are in modern games in a variety of scenarios, and I think i can post something that will point it out quite well.

I decided to test my GPU in two settings, 4K and 1080p: both to stress shading and bandwidth with different degrees. Here are the results in a game with complex surface shading and extremely high quality textures:

3840X2160 1x AF = 41.8 fps

3840X2160 16x Driver AF = 35.8 fps

1920X1080 1x AF = 109.6 fps

1920X1080 16x Driver AF = 104.0 fps


The cost of AF when the GPU is less stressed is decidedly less than the cost when it is being heavily utilized. IMO, if I can induce significant framerate differences (42 vs 36 fps) in a context where the hardware is already quite stressed, then AF definitely is not "free". I imagine most modern AAA console games are fighting for resources and utilizing the shit out of the hardware as is, hence why AF is declared expensive... especially since bandwidth is scarcer and more fought over in console hardware. Mind you, the images and numbers above were made with decidedly more powerful, more modern hardware than that found in consoles. One can imagine that the relative costs on consoles are probably even more drastic.

This is of course just extrapolating, but I think we cannot really say anymore that it "costs nothing". Rather, the cost is "nothing" IMO given what it does to the image.

What GPU are you using? I also find it curious that, while the % difference is quite different in the two cases, you are talking about a 6 fps and a 5.4 fps decrease, almost as though it were a fixed fps cost regardless of other settings. On the other hand, the DF test with Tomb Raider went from a 0.6 fps drop at low settings to a 1.2 fps drop at medium settings, so perhaps your conclusion is correct. We need more data points, really.
 

KKRT00

Member
If anyone would like to perform the same test with/without AF at 4K and with/without at 1080p in other games on other hardware... it would also be cool to see the differences.

Or heck... the same test in SC with different hardware would be just as interesting.

Here you go:

GTX 970 said:
1080p AF x1 - High Performance setting - 179.3fps
[screenshot: K18b.png]


1080p AF x16 - Quality Setting - 169.6fps
[screenshot: M18b.png]


4k AF x1 - High Performance setting - 58.5fps
[screenshot: vermintide_2015_10_25u9skd.png]


4k AF x16 - Quality Setting - 46fps
[screenshot: vermintide_2015_10_25ges8u.png]
 

Durante

Member
Another instance where the relative difference is massively higher at higher resolutions.

It makes sense really, regardless of what the exact reasons are, it's obvious that changing the resolution shifts the load more towards pixel shading (which is what AF falls under), while reducing the relative impact of lots of operations unrelated to AF (everything vertex-related, everything on the CPU, setup etc.).
 

c0de

Member
The problem stems from the fact that these developers said one thing. Then you have tweets from people on the ICE team saying another. Both could be correct and talking past each other but a conclusive answer still isn't within grasp.

The ICE Team said the PS4 is capable of AF, which nobody here doubts. That statement in itself is of course true and there is no point in doubting it.
 
Perhaps include x8 AF

edit
1440p ultra
AF x16
[screenshot: bNOIO77.jpg]

AF off
[screenshot: 9mvc2ox.jpg]


I did 3 separate tests; each time it sat clearly at 36fps with 16xAF and at 39fps with AF off in the driver. Not sure it's fully off, though, but you can see the changes are obvious. The game doesn't have any AF setting in the options.

Interestingly, I compared x8 to x16 and the cost was almost the same for each; x8 only had a 0.2-0.3 frame advantage. So x8 does cost quite a bit.
 

HTupolev

Member
Interestingly, I compared x8 to x16 and the cost was almost the same for each; x8 only had a 0.2-0.3 frame advantage. So x8 does cost quite a bit.
Probably because most of the texturing can be nicely-sampled with less than or equal to 8 taps. The tap count in the setting is a maximum, and in most scenes, only a small fraction of the texturing needs greater than 8.
By contrast, if you created a scene made up entirely of ultra-oblique surfaces with large unique textures, you could probably find a significant perf difference between the two settings.

This is possibly part of why console games tend to make harsher compromises than PC users. A PC gamer is looking at performance aggregates, while a console developer may be in a position to say something like "drawing these regions into the g-buffer could be sped up by 20% if we dial back to 8x for these types of surfaces."
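A minimal sketch of that tap-selection idea (a simplified model in the spirit of the common EWA-style approximation, not any specific GPU's implementation): the anisotropy ratio of the pixel footprint is clamped to the setting's maximum, so most surfaces never actually reach the cap.

Code:
import math

# Simplified model: the number of AF taps is the footprint's anisotropy ratio,
# clamped to the driver/game maximum. Not any specific GPU's implementation.
def af_taps(view_angle_deg, max_aniso):
    # For a flat textured plane, the footprint stretches roughly by 1/cos(angle).
    ratio = 1.0 / max(math.cos(math.radians(view_angle_deg)), 1e-6)
    return min(math.ceil(ratio), max_aniso)

for angle in (0, 45, 75, 85, 89):
    print(angle, "deg ->", af_taps(angle, 8), "taps at 8x,", af_taps(angle, 16), "taps at 16x")
# Only the extremely oblique cases (beyond roughly 83 degrees in this model) ask for
# more than 8 taps, which is why 8x and 16x often end up costing about the same.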
 

RoboPlato

I'd be in the dick
For those on PC doing comparisons, could we get screen/performance comparisons of 8x and 16x?

Why didn't they interview Ready at Dawn or Guerrilla Games, who use 2xAF on many textures?

There is some problem with AF on consoles; it's a fact, and many games prove it. Now we need to know exactly why, and what kind of bandwidth is required for 8xAF.

Both The Order and Shadow Fall use variable levels of AF set by artists, which I think is smart for console games, especially ones with controlled environments like The Order. SF has better AF overall, although I don't think either drops below 4x. I do think there's something API-related that causes AF to take up more bandwidth than usual on consoles at this point.
 
Perhaps include x8 AF
For those on PC doing comparisons, could we get screen/performance comparisons of 8x and 16x?

You betcha. Getting a last-gen game like RE5 to be GPU-bound was pretty hard; I had to inject AO, PPAA, 4K... but it eventually worked, though even then there are some scenes in the benchmark which are CPU-bound. In the end, varying the AF level really did not change the performance: a 1-2 frame difference at best. Sadly, the gameplay screens that show that 1-2 fps difference are too large to upload at native resolution.

1x
8x
16x
What GPU are you using?
Titan X
 
Some WD results, 1440p ultra. No AA; I run an SMAA injector but disabled it for the tests.


Some frame variance there, as x16 is slightly higher than x8, but in general it's again a 2-3 fps difference with x8/x16 compared to off.

I also tried setting filtering to high performance instead of high quality but can't replicate how no AF looks on console
[screenshot: L28b.jpg]

This is with no AF on the PS4 version of DMC; even the lower third around the character is very bad. This raises some other issues with comparing console to PC GPUs: think of the frames saved on console with selective filtering, from very low to completely off, and at much worse quality. I know DMC received a patch, by the way.

I've tested AF off, x2, x4, x8 and x16; all show the expected improvements, but with AF off it didn't look bad until the top half of the image, x2 looks pretty good, and so on. When articles speculate that there's hardly any AF in a console game and call it x2 or x4, the x2/x4 I'm seeing on PC looks way better, and even 'off' on PC is decent enough. I was expecting an abomination when setting AF to off, as I've run AF at x8/x16 for around a decade.
 

dr_rus

Member
Another instance where the relative difference is massively higher at higher resolutions.

It makes sense really, regardless of what the exact reasons are, it's obvious that changing the resolution shifts the load more towards pixel shading (which is what AF falls under), while reducing the relative impact of lots of operations unrelated to AF (everything vertex-related, everything on the CPU, setup etc.).

I think this has more to do with caching than moving the bottleneck.
 
Some interesting Assetto Corsa results. The game has crude but high-res textures everywhere.

x8
[screenshot: 6AtQCKM.jpg]

off
[screenshot: 0uYvvAV.jpg]


x16
[screenshot: 9BiDiB4.jpg]

off
[screenshot: zCIZWto.jpg]

Seeing an 11-13 fps drop with AF enabled. The impact on image quality is quite drastic. So about a 16-17% cost, and the same percentage at 4K: 30fps with 16xAF and 35fps with AF off.
 

Saty

Member
Article updated with comments by Krzysztof Narkowicz, lead engine programmer at Shadow Warrior developer Flying Wild Hog: http://www.eurogamer.net/articles/digitalfoundry-2015-vs-texture-filtering-on-console

"It's not about unified memory and for sure it's not about AI or gameplay influencing AF performance. It's about different trade-offs. Simply, when you release a console game you want to target exactly 30 or 60fps and tweak all the features to get best quality/performance ratio. The difference between x8 and x16 AF is impossible to spot in a normal gameplay when the camera moves dynamically," he says.

"AF x4 usually gives the best bang for the buck - you get quite good texture filtering and save some time for other features. In some cases (eg when you port a game without changing content) at the end of the project you may have some extra GPU time. This time can be used for bumping AF as it's a simple change at the end of the project, when content is locked. Applying different levels of AF on different surfaces is a pretty standard approach. Again it's about best bang for the buck and using extra time for other more important features (eg rock-solid 30fps)."
More at the link.
 

Kysen

Member
For sure AF is not free. AF requires multiple taps and has to sample a lower mipmap (very bad for texture cache), so it's a pretty heavy feature.
And there it is. Hopefully this ends the "but it should be free" comments we see on GAF all the time.
 

Slaythe

Member
Bad AF on consoles and bad AF on ps4 are two completely different problems.

1) AF can be taxing so it's lowered to keep framerate or features stable on consoles, with a chance at improving it during the final optimization

2) The Xbox One has 8xAF and the PS4 has NO AF - not x2 or x4, none - and it then gets patched to x8 with no impact on performance whatsoever.

These two problems are completely unrelated.

And the question is: if the PS4 is the lead platform, is it possible it's also responsible for some cases of terrible AF in specific games, affecting both consoles? That would be case number 3, caused by the same issue as number 2.
 

Lord Error

Insane For Sony
Great job testing that! Though I think even some of your conclusions are going a bit too far given how little we can really tell from coarse-grained performance metrics.

In any case, that's around 5% performance impact at 1080p and 17% at 4k in your testing scenario.
Let's say the difference is 10% on consoles. That's still quite a lot, and I think that's the reason devs go out of their way to set AF on a per-surface basis in some games. I think every developer would rather their game stick to 30 or 60fps as often as possible than enable high-level AF, which IMO doesn't add anywhere near enough visual impact to be worth even a 10% performance penalty. Our brain does an amazing job of justifying and ignoring partially blurred, more distant portions of the FOV - because that is similar to what it receives when our eyes look at the real world around us - whereas it has no way of ignoring artifacts like edge crawl or aliasing.
 
which IMO doesn't add anywhere near enough visual impact to be worth even a 10% performance penalty.
Assuming that were the cost associated with it, I can only disagree. It basically ruins all the work and artistry expended on making shading function in a more realistic way, the work that goes into differentiating objects via material settings and normal maps, the work that goes into highly detailed diffuse textures, etc.

Low levels of filtering basically turn all the surface properties you use to differentiate materials into the same thing, except now they just differ by primary color.


It kind of relegates all those huge advances in asset production and shading technology to a 1-4 meter sphere surrounding the player's FOV. IMO, a horrible decision.
 