
GeForce GTX 970s seem to have an issue using all 4GB of VRAM, Nvidia looking into it

Status
Not open for further replies.

pestul

Member
Did I say you can't be annoyed? I said it doesn't constitute false advertising.
Every single GTX970 is advertised as having a 256-bit memory bus. If that is not the case (i.e. 208-bit), then that would be false advertising.

I'm hoping that, based on the 980M results with a similar chip layout and cut-down SMMs, this is just a bug.
 

riflen

Member
Every single GTX970 is advertised as having a 256-bit memory bus. If that is not the case (i.e. 208-bit), then that would be false advertising.

I'm hoping that, based on the 980M results with a similar chip layout and cut-down SMMs, this is just a bug.

The memory interface width is advertised as 256bit, which is correct because the design features 4 x 64bit controllers. They do not make a statement about the effective width. This is my point.
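For reference, the arithmetic behind those figures, as a sketch: the 7.0 GT/s data rate comes from Nvidia's published GTX 970 spec, and the 7-of-8-channel split for the fast segment from their later disclosure.

```python
# GTX 970 memory figures. Assumptions: 4 x 64-bit controllers (8 x 32-bit
# channels), 7.0 GT/s effective GDDR5 data rate, and Nvidia's disclosed
# split of 7 channels for the 3.5GB segment and 1 for the 0.5GB segment.

BUS_WIDTH_BITS = 4 * 64        # four 64-bit controllers -> 256-bit interface
DATA_RATE_GTPS = 7.0           # effective GDDR5 transfer rate

# Peak bandwidth = bus width in bytes * transfers per second
peak_gbs = (BUS_WIDTH_BITS / 8) * DATA_RATE_GTPS    # advertised 224 GB/s

fast_segment_gbs = peak_gbs * 7 / 8                 # 3.5GB segment: 196 GB/s
slow_segment_gbs = peak_gbs * 1 / 8                 # 0.5GB segment: 28 GB/s

print(peak_gbs, fast_segment_gbs, slow_segment_gbs)  # 224.0 196.0 28.0
```

So the advertised interface width and peak bandwidth are defensible on paper; the dispute is about the effective figure once a segment only has one channel behind it.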
 

LilJoka

Member
Here's Shadow of Mordor from my system:
i7 3770 @ 4.2GHz
Asus P8Z77-I Deluxe
Samsung 2x4GB 2133MHz CL10
GTX 970 1500/3880MHz

The 1080p run was limited to 62fps by RTSS. The 2880p run was playable, not stuttery.

ayTNBQd.jpg

Mean: 18.5ms
Variance: 90.16
Standard Deviation: 9.49531

3XFSXsV.jpg

Mean: 33.4ms
Variance: 159.5
Standard Deviation 12.6

More heavy VRAM usage
yqGL4kU.jpg
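For anyone wanting to check figures like these against their own RTSS/FRAPS logs, here's a minimal sketch. The sample values are made up for illustration, not the capture above; note the quoted numbers are population stats (the standard deviation squared equals the variance).

```python
# Reduce a frametime log (values in ms) to the summary stats quoted above.
import statistics

frametimes_ms = [16.7, 18.2, 35.1, 17.0, 16.9, 33.4, 17.5, 16.8]  # illustrative

mean = statistics.mean(frametimes_ms)
variance = statistics.pvariance(frametimes_ms)   # population variance
stdev = statistics.pstdev(frametimes_ms)         # population std deviation

print(f"Mean: {mean:.1f}ms  Variance: {variance:.2f}  Standard Deviation: {stdev:.2f}")
```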
 

Thrakier

Member
Now we add differing perceptions as well to these "benches". To some people a constantly fluctuating framerate below 60FPS seems "non-stuttery", as the guy above states.
 

LilJoka

Member
Now we add differing perceptions as well to these "benches". To some people a constantly fluctuating framerate below 60FPS seems "non-stuttery", as the guy above states.

Stutter, to me, is literally game-stopping, with huge frame time differences. Anyway, the data is there, so what difference does it make?
 

Durante

Member
People are discussing a real problem that will easily be front and center when games start creeping by 3.5GB.
Some people are doing that. Lots of people are playing up drama.

whether or not we are happy with current performance is only one part of the story. If the cards cannot be used to their full potential then I think that is reasonable grounds to be concerned.
Absolutely, and I never said anything counter to that. It's not reasonable grounds to act like the sky is falling or like we should replace our 970s with 960s though.

Especially as long as we have no idea what exactly the issue is.
 

LilJoka

Member
Now we need hardware sites to stop copy-pasting screenshots from forums into news and instead do detailed benchmarking analysis of Nvidia's claim.

My shots show the 3500MB priority range. It looks like they did know, and are using clever drivers to try to mitigate the "problem", probably by shifting memory around and keeping the more active areas in the faster 3500MB section. That is why, when you test that specific 500MB range, it has such poor bandwidth.
 

Thanks! Added to OP. Not sure what to make of it yet.

So the cut-SMM theory as the cause was true?

From the other thread:

Their testing methodology is not good at showing what occurs. Better would be:
1. Run the game with low textures at something like 4K, so the VRAM doesn't fill even to 3.5GB, then measure performance.

2. Then set the textures to Ultra, filling beyond 3.5GB at 4K, and measure performance again. This would show the actual cost of filling that last 512MB.

Why even bother testing? We know what the performance of that last .5GB is, and it's super shit; that has been tested extensively already. Whatever they do in their drivers to juggle VRAM around to mitigate it is beside the point. They're just skirting the issue, with no guarantee that they can avoid it in all cases.

I just don't understand why they had to lie when this card would have been perfectly fine sold as a 3.5GB GPU.
 

Costia

Member
So they acknowledged the issue but still haven't answered the actual question:
What is the performance of that last 0.5GB segment?
The game performance suggests they have a workaround for the problem, but they still didn't say how severe the actual problem is.
 

Alasfree

Member
From Techreport: http://techreport.com/news/27721/nvidia-admits-explains-geforce-gtx-970-memory-allocation-issue

Interesting. We explored a similar datapath issue related to the GTX 970's disabled SMs in this blog post. In that case, we looked at why the GTX 970 can't make full use of its 64 ROP partitions at once when drawing pixels. Sounds to me like this issue is pretty closely related.

Beyond satisfying our curiosity, though, I'm not sure what else to make of this information. Like the ROP issue, this limitation is already baked into the GTX 970's measured performance. Perhaps folks will find some instances where the GTX 970's memory allocation limits affect performance more dramatically than in Nvidia's examples above. If so, maybe we should worry about this limitation. If not, well, then it's all kind of academic.
The blog they mention: http://techreport.com/blog/27143/here-another-reason-the-geforce-gtx-970-is-slower-than-the-gtx-980
 

LilJoka

Member
Basically if you want actual 4GB, you need a GTX 980.
Otherwise GTX 970 is 3.5GB.

Case... closed?

Yes, but that's not really good enough for people who bought a "4GB GTX 970". The performance drop-off does seem minimal though, so maybe Nvidia are right that the impact is minimal due to some clever driver work.
 

Serandur

Member
Yes, but that's not really good enough for people who bought a "4GB GTX 970". The performance drop-off does seem minimal though, so maybe Nvidia are right that the impact is minimal due to some clever driver work.
An average FPS number tells you nothing about the low points, stuttering, and frametime inconsistencies. It's just an average, not representative of the actual performance and issues.

The only clever thing here is how Nvidia are trying to cover up the problem by using vague and imprecise data. No clever driver work can make up for a massive bandwidth deficit and the cases where stuttering occurs because of either a refusal to use more than 3.5 GBs or using more but it not being fast enough.
 

LilJoka

Member
An average FPS number tells you nothing about the low points, stuttering, and frametime inconsistencies. It's just an average, not representative of the actual performance and issues.

The only clever thing here is how Nvidia are trying to cover up the problem by using vague and imprecise data. No clever driver work can make up for a massive bandwidth deficit and the cases where stuttering occurs because of either a refusal to use more than 3.5 GBs or using more but it not being fast enough.

I have posted frame time stats in this thread. The game didn't feel like it was running out of VRAM; it just felt like low fps. Not huge stutters, but stutters every time it needs to load more into VRAM, since it's got to deal with the slow 500MB partition. It's not ideal, but the effect is minimal. I still think it's wrong of them to do this.

The only reason I say it's clever on the driver side is that one would expect huge, frequent slowdowns, but that isn't the case; somehow it manages it.
 

Serandur

Member
I have posted frame time stats in this thread. The game didn't feel like it was running out of VRAM; it just felt like low fps. Not huge stutters, but stutters every time it needs to load more into VRAM, since it's got to deal with the slow 500MB partition. It's not ideal, but the effect is minimal. I still think it's wrong of them to do this.

The only reason I say it's clever on the driver side is that one would expect huge slowdowns, but that isn't the case; somehow it manages it.

Those are some erratic frametime spikes, but perhaps it's less noticeable to you because you're only running a single 970 (getting low FPS, which may cover up the issues a bit). I've got two in SLI with enough horsepower to maintain a constant 60 FPS in anything (or at least a capped 30 for testing), and VRAM is the reason I've had any trouble with smoothness/deviations from that consistent 60 in several games.

I usually get pegged at 3.5 GBs with stuttering as if I need more, sometimes it goes over but I still get issues. Low GPU usage, high general FPS, but stuttering and in some cases massive FPS drops (in Space Engine) once it reaches or crosses that 3.5 GBs point.
 

LilJoka

Member
Those are some erratic frametime spikes, but perhaps it's less noticeable to you because you're only running a single 970 (getting low FPS, which may cover up the issues a bit). I've got two in SLI with enough horsepower to maintain a constant 60 FPS in anything (or at least a capped 30 for testing), and VRAM is the reason I've had any trouble with smoothness/deviations from that consistent 60 in several games.

I usually get pegged at 3.5 GBs with stuttering as if I need more, sometimes it goes over but I still get issues. Low GPU usage, high general FPS, but stuttering and in some cases massive FPS drops (in Space Engine) once it reaches or crosses that 3.5 GBs point.

I totally agree.
 

kanuuna

Member
Should this thing influence my choice of GPU for a PC build I have under way? I'm shooting for 3440x1440 at 60Hz, and I had picked 2x GTX 970 for the setup. Would a single GTX 980 have been the better choice?
 

LilJoka

Member
Should this thing influence my choice of GPU for a PC build I have under way? I'm shooting for 3440x1440 at 60Hz, and I had picked 2x GTX 970 for the setup. Would a single GTX 980 have been the better choice?

Really, for a dual-GPU config you might be better off with AMD; their frame times have been better than SLI's recently. See the FCAT analyses at the PCPer site. The loss is PhysX and any other Nvidia tech you may want, though.
 

kanuuna

Member
Really, for a dual-GPU config you might be better off with AMD; their frame times have been better than SLI's recently. See the FCAT analyses at the PCPer site. The loss is PhysX and any other Nvidia tech you may want, though.
I only ever considered Nvidia. I've been using SLI / dual-GPU cards for some time and haven't had too many issues. I like the options the drivers give you and everything you can force (AA and more) with Inspector.
 

dr_rus

Member
An average FPS number tells you nothing about the low points, stuttering, and frametime inconsistencies. It's just an average, not representative of the actual performance and issues.

The only clever thing here is how Nvidia are trying to cover up the problem by using vague and imprecise data. No clever driver work can make up for a massive bandwidth deficit and the cases where stuttering occurs because of either a refusal to use more than 3.5 GBs or using more but it not being fast enough.

Go ahead and test the cards yourself. It's not like they're using some unreleased games or proprietary tools to do the testing.

"Fast enough" for what? The cards are performing exactly like in all the benchmarks. This issue was there right from the beginning.
 
An average FPS number tells you nothing about the low points, stuttering, and frametime inconsistencies. It's just an average, not representative of the actual performance and issues.

This is the problem with NVIDIA's response. Good news: they've admitted it's hardware related. Bad news: they're trying to play it off like it's a "feature".

We're not even sure how they came to their averages. Did they throw out outliers (i.e. the very low frametimes and very high frametimes)? On the surface, their numbers seem a little too perfect (oh, look, it's exactly 25% slower!) and, as stated already, averages are great for hiding fluctuations. Essentially, this is a typical PR response that addresses next-to-nothing. Tech sites need to continue doing their own tests on the issue and not take NVIDIA's word as gospel, as they're intentionally trying to obfuscate the issue.
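To illustrate the point about averages with synthetic numbers (not Nvidia's data): two frametime traces with the identical mean can feel completely different to play.

```python
# Two made-up frametime traces (ms) that report the same average FPS.
import statistics

smooth = [20.0] * 100                 # locked ~50 FPS, every frame on time
spiky = [15.0] * 90 + [65.0] * 10     # same mean, but 10% of frames hitch hard

# Both traces average 20 ms per frame, i.e. 50 FPS.
assert statistics.mean(smooth) == statistics.mean(spiky) == 20.0

for name, trace in (("smooth", smooth), ("spiky", spiky)):
    avg_fps = 1000 / statistics.mean(trace)
    print(f"{name}: {avg_fps:.0f} FPS average, worst frame {max(trace):.0f} ms")
```

An average-only comparison rates these two runs identically; a percentile or worst-frame metric immediately separates them.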
 

bj00rn_

Banned
Tech sites need to continue doing their own tests on the issue and not take NVIDIA's word as gospel, as they're intentionally trying to obfuscate the issue.

Include internet drama-queens and their crazy conspiracy theories as well in this anti-gospel procedure and we'll be on to something..

Anyway, already 677 posts and still no tangible, real-world scientific examples or videos. I'm close to chalking this one up as a whole lot of nothing.
 
Include internet drama-queens and their crazy conspiracy theories as well in this anti-gospel procedure and we'll be on to something..

Anyway, already 677 posts and still no tangible, real-world scientific examples or videos. I'm close to chalking this one up as a whole lot of nothing.

Let's wait for the FCAT and frametime analyses from somewhere like Guru3D or PCPer. I assume they will attempt to corroborate or refute Nvidia.
 

Serandur

Member
Go ahead and test the cards yourself. It's not like they're using some unreleased games or proprietary tools to do the testing.

"Fast enough" for what? The cards are performing exactly like in all the benchmarks. This issue was there right from the beginning.
I have tested my two 970s in multiple titles and I have had issues with VRAM and freezes/stuttering, best represented by frametime spikes. I posted graphs of AC Unity earlier in this thread and have been closely following the issue for a couple of weeks, also posting descriptions of my findings in Skyrim and Space Engine (specifically how stubborn the 970s are in going past 3.5 GBs, with stuttering and low GPU usage accompanying it) on Overclock.net, and some other stuff, including slower VRAM load-in on MSI Kombustor after 3.5 GBs.

FPS doesn't best reflect the problem at all, especially not a single average FPS figure from Nvidia. As can be seen in my AC Unity graph to demonstrate how FPS does not accurately reflect frametimes, small dips to 27 FPS (from 30) in the Excel chart are accompanied by much-more-telling frametime spikes way above 33.3 ms.
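The budget math behind that observation, sketched with illustrative numbers: a per-second FPS counter averages over all frames in that second, so a reading around 27 FPS can hide individual frames far beyond the 33.3 ms budget of a 30 FPS target.

```python
# FPS <-> frametime conversion, and how a one-second average hides spikes.

def frame_budget_ms(fps: float) -> float:
    """Time budget per frame at a given framerate."""
    return 1000.0 / fps

# Roughly one second of frames: mostly on the 30 FPS budget, two 80 ms hitches.
frames_ms = [33.3] * 25 + [80.0] * 2
avg_fps = len(frames_ms) / (sum(frames_ms) / 1000.0)

print(round(frame_budget_ms(30), 1))  # 33.3 ms budget at 30 FPS
print(round(avg_fps, 1))              # ~27 FPS, despite the 80 ms spikes
```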
 

cheezcake

Member
I have tested my two 970s in multiple titles and I have had issues with VRAM and freezes/stuttering, best represented by frametime spikes. I posted graphs of AC Unity earlier in this thread and have been closely following the issue for a couple of weeks, also posting descriptions of my findings in Skyrim and Space Engine (specifically how stubborn the 970s are in going past 3.5 GBs, with stuttering and low GPU usage accompanying it) on Overclock.net, and some other stuff, including slower VRAM load-in on MSI Kombustor after 3.5 GBs.

FPS doesn't best reflect the problem at all, especially not a single average FPS figure from Nvidia. As can be seen in my AC Unity graph to demonstrate how FPS does not accurately reflect frametimes, small dips to 27 FPS (from 30) in the Excel chart are accompanied by much-more-telling frametime spikes way above 33.3 ms.

Do you have a game other than Unity to test with? Unity stutters a fair amount; it's a well-known issue with the game.

Edit: sleepy and skimmed post a bit too much.
 

cheezcake

Member
Here's Shadow of Mordor from my system:
i7 3770 @ 4.2GHz
Asus P8Z77-I Deluxe
Samsung 2x4GB 2133MHz CL10
GTX 970 1500/3880MHz

The 1080p run was limited to 62fps by RTSS. The 2880p run was playable, not stuttery.

ayTNBQd.jpg

Mean: 18.5ms
Variance: 90.16
Standard Deviation: 9.49531

3XFSXsV.jpg

Mean: 33.4ms
Variance: 159.5
Standard Deviation 12.6

More heavy VRAM usage
yqGL4kU.jpg

These are good stats but we really need a control with a 980 to actually infer something.
 

filly

Member
I think this is so fucked up. If I bought 1GB of RAM and 10-20% of it was throttled down, I would be furious. Having 4GB of GDDR5 means something. It means its future is going to be above the 3GB standard that third parties are coding for (as the Xbox is 3GB). It means you will always have the edge over a console, by 1GB in most cases. But by gimping the last chunk of memory, this shows a 50% reduction in that critical part. I understand that real-world specs are held to a high standard. But 4GB of GDDR5 should be a consistent speed. It's false advertising and it's not right.

I bought a 970 thinking that it has potential and now that has been stolen. Time to trade up.
 

Zane

Member
I think this is so fucked up. If I bought 1GB of RAM and 10-20% of it was throttled down, I would be furious. Having 4GB of GDDR5 means something. It means its future is going to be above the 3GB standard that third parties are coding for (as the Xbox is 3GB). It means you will always have the edge over a console, by 1GB in most cases. But by gimping the last chunk of memory, this shows a 50% reduction in that critical part. I understand that real-world specs are held to a high standard. But 4GB of GDDR5 should be a consistent speed. It's false advertising and it's not right.

I bought a 970 thinking that it has potential and now that has been stolen. Time to trade up.

It does have "potential," whatever that means. I'm playing all my games at 1080p at max settings at 60 fps and no stutter, Shadow of Mordor included. AC Unity gives my PC a workout but show me any PC that can max that game out at playable framerates and frametimes. I fully expect this card to last me and anyone else who bought it for the next 4 years.
 
I am trying to catch up with all the posts people have made so apologies if this has already been spoken of.

I ran that memory test program (Nai's benchmark) and my VRAM speed drops by a significant amount when it hits just over 3000MB. It tails off even more after 3500.

I don't know how credible a test this is but I have seen a lot of other people's numbers from the nVidia forums where their speeds drop when it hits 3000MB.

So is the problem that bandwidth is dying at 3GB and not 3.5GB?


I have tried Shadow of Mordor and it is hard to get it to hit 3500MB consistently (with ultra textures). It didn't seem to cause performance issues when I did get it to hit that value.
 
I am trying to catch up with all the posts people have made so apologies if this has already been spoken of.

I ran that memory test program (Nai's benchmark) and my VRAM speed drops by a significant amount when it hits just over 3000MB. It tails off even more after 3500.

I don't know how credible a test this is but I have seen a lot of other people's numbers from the nVidia forums where their speeds drop when it hits 3000MB.

So is the problem that bandwidth is dying at 3GB and not 3.5GB?



I have tried Shadow of Mordor and it is hard to get it to hit 3500MB consistently (with ultra textures). It didn't seem to cause performance issues when I did get it to hit that value.

Try going above it. That is where the performance problems should occur.
 

Bastardo

Member
From Techreport: http://techreport.com/news/27721/nvidia-admits-explains-geforce-gtx-970-memory-allocation-issue
Interesting. We explored a similar datapath issue related to the GTX 970's disabled SMs in this blog post. In that case, we looked at why the GTX 970 can't make full use of its 64 ROP partitions at once when drawing pixels. Sounds to me like this issue is pretty closely related.

Beyond satisfying our curiosity, though, I'm not sure what else to make of this information. Like the ROP issue, this limitation is already baked into the GTX 970's measured performance. Perhaps folks will find some instances where the GTX 970's memory allocation limits affect performance more dramatically than in Nvidia's examples above. If so, maybe we should worry about this limitation. If not, well, then it's all kind of academic.
The blog they mention: http://techreport.com/blog/27143/here-another-reason-the-geforce-gtx-970-is-slower-than-the-gtx-980

Wow, in my opinion this is even more severe and should be investigated as a priority.

After hearing about the 3.5→4GB memory issue, my first thought was to simply disable everything above 3.5GB. That would always lead to predictable performance. The problem is not really the lower amount of memory; the problem is the unpredictability of the performance loss.
The above statement would, however, be even more severe in that case. If you cannot use all 64 ROPs at once and an unknowing game programmer does exactly that (i.e. optimizes his code for the number of ROPs), this will lead to two passes in the rendering step instead of one. Here too, the only solution would be to deactivate some of the ROPs completely to avoid using them all at once.
 
Try going above it. That is where the performance problems should occur.

I put Watch Dogs at 4K resolution with 8xMSAA, but the frame rate was crawling, so it was impossible to tell anything. VRAM usage was up at 3900MB.

Running at 4K with 4xMSAA got to 3526MB but never any higher. It hitched in places, but that's Watch Dogs, so I can't use it as a definitive answer on problems over 3500MB.

I am finding it very difficult to get a game to go significantly over 3500MB.

Is it the case that the card has to be 'forced' to use the last 500MB? MSI Afterburner is very rarely showing above 3500MB. Does the 980 provide its VRAM more readily, whereas the 970 doesn't?

I will have to read more otherwise I am probably asking questions that have been asked / answered etc.
 

dr_rus

Member
I put Watch Dogs at 4K resolution with 8xMSAA, but the frame rate was crawling, so it was impossible to tell anything. VRAM usage was up at 3900MB.

Running at 4K with 4xMSAA got to 3526MB but never any higher. It hitched in places, but that's Watch Dogs, so I can't use it as a definitive answer on problems over 3500MB.

I am finding it very difficult to get a game to go significantly over 3500MB.

Is it the case that the card has to be 'forced' to use the last 500MB? MSI Afterburner is very rarely showing above 3500MB. Does the 980 provide its VRAM more readily, whereas the 970 doesn't?

I will have to read more otherwise I am probably asking questions that have been asked / answered etc.

That's because their driver avoids allocating the slow 500MB of memory in games which can live without it. What we're looking at, basically, is the fact that all these games are running just fine with 3.5GB of VRAM.

I think that it'll be some time (a couple of years probably) before we'll get the game which will _require_ more than 3.5 GB of VRAM to run at playable framerate. And this is the place where problems may start to occur.

I don't know about you but I plan to switch to a new card in two years. So for me this issue is a non-issue at the moment.
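A toy model of the allocation behavior described above. The heuristic here is pure speculation, only the segment sizes come from Nvidia's disclosure; the point is just that a "prefer the fast segment, spill only when full" policy reproduces what MSI Afterburner readings in this thread show.

```python
# Two-tier VRAM placement: prefer the fast 3.5GB segment, spill to the slow
# 0.5GB segment only when the fast one is full. The real driver heuristics
# are unknown outside Nvidia; this just models the observed preference.

FAST_MB, SLOW_MB = 3584, 512

class TieredVram:
    def __init__(self) -> None:
        self.fast_used = 0
        self.slow_used = 0

    def alloc(self, size_mb: int) -> str:
        """Return which segment an allocation lands in."""
        if self.fast_used + size_mb <= FAST_MB:
            self.fast_used += size_mb
            return "fast"
        if self.slow_used + size_mb <= SLOW_MB:
            self.slow_used += size_mb
            return "slow"
        raise MemoryError("out of VRAM")

vram = TieredVram()
placements = [vram.alloc(512) for _ in range(8)]  # 4GB in 512MB buffers
print(placements)  # seven land in the fast segment, the eighth spills to slow
```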
 
That's because their driver avoids allocating the slow 500MB of memory in games which can live without it. What we're looking at, basically, is the fact that all these games are running just fine with 3.5GB of VRAM.

I think that it'll be some time (a couple of years probably) before we'll get the game which will _require_ more than 3.5 GB of VRAM to run at playable framerate. And this is the place where problems may start to occur.

I don't know about you but I plan to switch to a new card in two years. So for me this issue is a non-issue at the moment.


I just got Mordor up to 3700MB during a stormy scene at 1440p. The frame rate was a little lower but that is understandable because of the effects.

I didn't notice any performance drops or stuttering that I would construe as to be related to the VRAM. All seemed to perform as expected really.

I looked at my Afterburner charts and framerate / frametime didn't dive when the RAM was being used; the frame rate drop was due to effects happening I would say. I think I will hang on for others to run in depth checks!

Edit: GPU usage didn't waver.
 

Makareu

Member
Try going above it. That is where the performance problems should occur.

But that is not the case; there is no significant performance drop when you access the last 500MB. I did the test and posted the results here. And every time an actual 970 owner bothers to test it, the results are the same.

The 970 just went from a 90/100-rated GPU to an 88/100 GPU. While this is disappointing from Nvidia, it's not a scandal either. But it's almost like some people have an outrage quota to reach.
 

pestul

Member
Everyone keeps citing SoM. We need to test a lot more games to be sure. We can't even be sure MSI Afterburner is reading the memory properly, either.
 

wazoo

Member
The above statement would, however, be even more severe in that case. If you cannot use all 64 ROPs at once and an unknowing game programmer does exactly that (i.e. optimizes his code for the number of ROPs), this will lead to two passes in the rendering step instead of one. Here too, the only solution would be to deactivate some of the ROPs completely to avoid using them all at once.

There is no such optimization in PC gaming. Nobody will assume a particular ROP/VRAM count.
 

Mengy

wishes it were bannable to say mean things about Marvel
Basically if you want actual 4GB, you need a GTX 980.
Otherwise GTX 970 is 3.5GB.

Case... closed?

Yes, that does seem to be the reality of it.

So... What are they going to do about it?

My hunch is Nvidia won't do anything about it, and will keep very quiet hoping that it just goes away. They made their money, they'll move on to the new and better things, and just generally try their best to ignore it.

If they can fix it with drivers then that will be fantastic, but I don't believe they'll be able to.
 

THE:MILKMAN

Member
My brother just bought a 970 SSC but he isn't bothered about it. I guess going from a Radeon HD6850 helps.

In the EU there might be an issue with the claimed RAM being 4GB though. I used to own a Fiesta ST which has 197 BHP but because it is a "temporary" overboost figure Ford could only list it as 180 BHP/182 PS.

In the real world on real roads the full 197 BHP was always available.

If NVIDIA are writing their drivers to avoid using the last 500MB, or devs are keeping their games from going above 3.5GB, it could be a problem in the EU.
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
The memory interface width is advertised as 256bit, which is correct because the design features 4 x 64bit controllers. They do not make a statement about the effective width. This is my point.


Seems the specs state "Memory Bandwidth (GB/sec): 224". That's clearly not the case for the last .5GB.

And LOL at the corporate ball lickers in here trying to weasel on the side of the manufacturer on this issue. SMH.
 

bootski

Member
i've been testing this out as extensively as i'm able to and i've arrived at a number of conclusions.

1) the card does not like going past 3.5GiB. it stubbornly sits at 3.5GiB for as long as it's able to and does not go into that 2nd .5GiB partition unless forced. this is the most noticeable issue i could find. here's a youtube vid showing COD:AW, i think, demonstrating this issue (my screenshots are further down). watch mem usage.

2) when it does break that mark, for the second that it does, frametimes skyrocket outside the norm, even when loading at blackscreens. it's extremely hard to notice and i'd consider it a non-issue.

3) during its stay >3.5GiB, frametimes will occasionally spike, causing a noticeable short stutter. it doesn't happen often. also, in game, the gpu usage started fluctuating wildly between 85-100% whereas it was staying pegged at 100% before.

how i tested:

1) msi kombustor testing w/ gpu maxed out but mem usage <3.5GiB; 1440p

hdrdof.PNG

lotx64.PNG


both of these show the card pegged, fps fluctuating a bit, frametime doing the same but not abnormally so.

2) msi kombustor testing w/ gpu maxed out but mem usage >3.5GiB; 1440p + 720p

memburn.PNG

memburn2.PNG


goes past the 3.5GiB mark when forced; the weird gpu usage thing shows up here. i'm unsure as to what's causing this. the actual graphic looked fine.

3) sleeping dogs; 1800p (3200x1800); every setting maxed but no ultra-tex pack downloaded; gpu maxed, <3.5GiB

sdogs.PNG


i wouldn't ever play the game at these settings but wanted to see what the rest of the stats looked like when i brought the framerate down to the sub 30 area and kept the mem usage below 3.5GiB. as you can see, the framerate/time start jumping around but not that crazily. game was sluggish but not unplayable, no noticeable stuttering.


4) fc4; 1800p medium; 1800p ultra but no AA; all <3.5GiB although sitting ON the border for both the 1800p tests and 1800p ultra with full AA >3.5GiB

fc1800p.png


ok. this image shows 3 play sessions, as marked on the image and described above. the BIG dips in gpu usage are during the game restarts or when i hit a vendor screen, menu etc. here, it was quite obvious that the game hilariously did not want to cross the 3.5GiB mark. you can see even going from 1800p medium settings to ultra did not do anything to the mem usage but sat RIGHT on the line. only when TXAA4+ or whatever the highest AA setting was added on did it crack that mark, and then i started seeing the frametimes really get going and had a tiny bit of stuttering, but nothing over the top. i had to expand the graph from 50 to 100 here to show most of the range. the gpu usage also really starts acting a fool.

in closing
there's an issue with the cards, certainly. they are most definitely under-utilizing the full 4GiB of memory. however, outside synthetics, to break the 3.5GiB mark i need to crank games up to ludicrous settings i'd never use when normally playing. all in all, the issue exists but it's certainly been blown out of proportion from what i've seen. i did these tests to see if i needed to return the card and just go right to the 980 series. i still might, i have until friday to decide, but this issue would certainly not have me trading the card in for a 290x, and certainly not down to a 960, eww.

if anyone knows of any games i can test, outside shadow of mordor, that run well while utilizing tons of VRAM, i'd love to hear about it. shadow of mordor has been tested to death post-3.5GiB and it certainly doesn't seem to have any major stuttering issues in any of the videos that have been posted.

also, in testing i used downsampling. to do it properly, i had to disconnect my 2nd monitor :( as when running at a downsampled res, even though it'd show properly on my first monitor, it would black out a portion of my 2nd monitor. anyone know how to fix this?
 

pestul

Member
bootski, you've just done the best analysis thus far available online. Great job. I look forward to seeing some more benches done this way.
 