
8 GB of VRAM is not enough even for 1080p gaming.

It still boggles my mind why MS hasn't invested in making its own line of gaming PCs. I know they can do a lot better than Alienware.
 

Loxus

Member
I don't really understand what exactly it says.
A method for graphics processing. The method including rendering graphics for an application using a plurality of graphics processing units (GPUs). The method including dividing responsibility for the rendering geometry of the graphics between the plurality of GPUs based on a plurality of screen regions, each GPU having a corresponding division of the responsibility which is known to the plurality of GPUs.

From my understanding.
This is a TV screen divided into 4 and labeled A, B, C, D.


This is the multi-GPU chip, where you can see GPU A, GPU B, GPU C, and GPU D.
Each of these is only supposed to render the corresponding letter on the screen.


On 3 nm, these chiplets will be small, similar in size to Zen CPU chiplets.
If each chiplet is capable of 4K60 on its own, four of these chiplets should be capable of producing an 8K60 image using this method.
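Purely as an illustration of that split (and not the patent's actual implementation), here is a minimal Python sketch of a fixed quadrant-to-GPU assignment and the resulting per-GPU pixel budget; the 2x2 layout, GPU names and resolutions are assumptions taken from the diagrams above.

```python
# Hypothetical sketch of the screen-region split described above.
# Assumes a fixed 2x2 quadrant layout (A/B/C/D) known to every GPU,
# as in the diagrams; not the patent's actual implementation.

QUADRANT_TO_GPU = {
    (0, 0): "GPU A",  # top-left
    (1, 0): "GPU B",  # top-right
    (0, 1): "GPU C",  # bottom-left
    (1, 1): "GPU D",  # bottom-right
}

def gpu_for_pixel(x, y, width, height):
    """Return which GPU owns a given pixel under the fixed 2x2 split."""
    quad = (1 if x >= width // 2 else 0, 1 if y >= height // 2 else 0)
    return QUADRANT_TO_GPU[quad]

# An 8K frame split four ways gives each GPU one 4K quadrant:
W, H = 7680, 4320
per_gpu_pixels = (W // 2) * (H // 2)      # 3840 x 2160 = one 4K region
print(per_gpu_pixels == 3840 * 2160)      # True
print(gpu_for_pixel(100, 100, W, H))      # GPU A
print(gpu_for_pixel(7000, 4000, W, H))    # GPU D
```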

There will most likely be a lot of custom hardware to make this possible, similar to the I/O complex.

Cost can be worked out, but in a different thread. Don't want to derail this one.
 
Truth is, the more resources you give to developers, the more bloated their code becomes, because they care less about optimizing it. Give them 50 GB of VRAM today and they will max it out in a couple of years without improving the graphics all that much.
Yeah, we see this with the file sizes of games. When they were no longer constrained by DVD-ROM capacity, they went wild.
 

Knightime_X

Member
What settings are being used though?
Shadow, texture, and volumetric lighting set to ultra?

Many of these settings can be set to medium, and outside of VRAM usage you can't even tell the difference.
 

Bojji

Member
What settings are being used though?
Shadow, texture, and volumetric lighting set to ultra?

Many of these settings can be set to medium, and outside of VRAM usage you can't even tell the difference.

Those settings barely make a difference in new games in terms of VRAM. Textures are the biggest factor and you can tell the difference between settings.
 

Knightime_X

Member
Those settings barely make a difference in new games in terms of VRAM. Textures are the biggest factor and you can tell the difference between settings.
Textures, yes, but there's no point in using 4K textures at 1080p. Even going just slightly higher would be enough.

As for the rest, they all add up.
If people would learn to ration their resources, it would make a noticeable difference in their favor.
 

yamaci17

Member
Textures, yes, but there's no point in using 4K textures at 1080p. Even going just slightly higher would be enough.

As for the rest, they all add up.
If people would learn to ration their resources, it would make a noticeable difference in their favor.
Even in A Plague Tale: Requiem, Asobo Studio did it. Dropping from ultra to high textures makes very little difference at 1080p but gives massive VRAM savings, and it still looks decent on a 1080p screen.

But yeah, we're led to believe it's impossible by certain doomsayers here. What can you say, lol.
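To put rough numbers on why dropping one texture tier saves so much VRAM at 1080p, here is a back-of-envelope Python sketch; the BC7-style rate (1 byte per texel) and full mip chain are assumptions, not what any particular game actually ships.

```python
# Back-of-envelope VRAM cost of one texture at different resolutions,
# assuming BC7-style compression (1 byte per texel) and a full mip chain
# (~4/3 overhead). Illustrative numbers, not measurements from a real game.

def texture_mib(side, bytes_per_texel=1.0, mip_chain=True):
    texels = side * side
    if mip_chain:
        texels *= 4 / 3  # each mip level is 1/4 the size of the previous one
    return texels * bytes_per_texel / (1024 ** 2)

for side in (4096, 2048, 1024):
    print(f"{side}x{side}: ~{texture_mib(side):.1f} MiB")
# 4096x4096: ~21.3 MiB
# 2048x2048: ~5.3 MiB
# 1024x1024: ~1.3 MiB
# Dropping from "4K" to "2K" textures cuts per-texture cost roughly 4x,
# while a 1080p framebuffer rarely has enough pixels to show the difference.
```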
 

Black_Stride

do not tempt fate do not contrain Wonder Woman's thighs do not do not
Those settings barely make a difference in new games in terms of VRAM. Textures are the biggest factor and you can tell the difference between settings.
Please don't use The Last of Us as your example of going from ultra to high to medium textures, because TLoU was broken at launch.
Most games going from high to medium won't completely fuck up the look of the game.
The latest patch has helped with VRAM usage and made medium not actually look like absolute ass.
TLoU on high is even okay... its medium settings were clearly broken.


I find that difficult to believe.
Their medium environment texture setting was/is broken; the environment textures would load some Switch-port-level shit for the environment.
A totally blurry mess where you could barely make out what the texture was supposed to be.
It was clearly a mistake or some heavy, heavy oversight.

Because in all my years of gaming I've never seen a texture quality disparity that massive between high and medium.
Even in A Plague Tale: Requiem, Asobo Studio did it. Dropping from ultra to high textures makes very little difference at 1080p but gives massive VRAM savings, and it still looks decent on a 1080p screen.

But yeah, we're led to believe it's impossible by certain doomsayers here. What can you say, lol.
Exactly.
A relatively small studio managed to compress their textures in such a way that they don't suddenly become a blurry mess.
TLoU's medium environment textures are worse than some games' low textures.
 
I have a 3070 and I play Cyberpunk on ultra at 1440p in Overdrive mode with DLSS Quality at 30-35 fps.

And that's with a triple NPC and traffic mod on... and ReShade.

Gamers just eat up whatever they can these days.
 

SlimySnake

Flashless at the Golden Globes
Interesting, but that person's issue is that dwm.exe is using GPU processing power. My GPU is idle, but my VRAM allocation towards dwm.exe is 700MB.


I really have no idea lol. I'm using a 3070 and Windows 10 on this particular PC, for reference.
Same. I have tried a lot of things from Googling around; still at 700 MB. Fucking Microsoft.

I just installed the EA app to play Star Wars and it is 600 MB, lol. Though while I was in game, it brought it back down to 200 MB.
 

rnlval

Member
It is unfortunate that games use so much VRAM, but you don't have to run games at Ultra quality and use ray tracing.
Rename "RTX 3070 Ti" into "GTX 3070 Ti". :messenger_grinning_sweat::messenger_grinning_smiling:

My MSI "RTX 3070 Ti" Suprim X GPU card in my HTPC. :messenger_tears_of_joy:



I think it is funny how many of these are AMD-sponsored titles that look no better than titles from 2018.

That's a new all-time low for Hardware Unboxed, milking the cow to the max here.

Alas, the biggest question is: how come TLOU Part I runs perfectly on a system with 16 GB of total memory, when on PC you need 32 GB of RAM plus 16 GB of video RAM to even display textures properly?

https://www.techpowerup.com/306713/...s-simultaneous-access-to-vram-for-cpu-and-gpu
Microsoft has implemented two new features into its DirectX 12 API - GPU Upload Heaps and Non-Normalized sampling have been added via the latest Agility SDK 1.710.0 preview, and the former looks to be the more intriguing of the pair. The SDK preview is only accessible to developers at the present time, since its official introduction on Friday 31 March. Support has also been initiated via the latest graphics drivers issued by NVIDIA, Intel, and AMD. The Microsoft team has this to say about the preview version of GPU upload heaps feature in DirectX 12: "Historically a GPU's VRAM was inaccessible to the CPU, forcing programs to have to copy large amounts of data to the GPU via the PCI bus. Most modern GPUs have introduced VRAM resizable base address register (BAR) enabling Windows to manage the GPU VRAM in WDDM 2.0 or later."

A shared pool of memory between the CPU and GPU will eliminate the need to keep duplicates of the game scenario data in both system memory and graphics card VRAM, therefore resulting in a reduced data stream between the two locations. Modern graphics cards have tended to feature very fast on-board memory standards (GDDR6) in contrast to main system memory (DDR5 at best). In theory, the CPU could benefit greatly from exclusive access to a pool of ultra quick VRAM, perhaps giving an early preview of a time when DDR6 becomes the daily standard in main system memory.



One step closer to AMD's decade-old "Fusion" hUMA/HSA dream for the PC.
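As a rough way to see what removing the duplicate copy buys, here is a back-of-envelope sketch; the asset sizes are made up, and this only models memory footprint and copy traffic, not the actual D3D12 API.

```python
# Back-of-envelope model of the duplicate-copy point above: with a classic
# upload path the CPU keeps a staging copy in system RAM while the GPU copy
# lives in VRAM; with a CPU-visible VRAM heap the CPU writes straight into
# the single VRAM copy. Asset sizes below are made-up assumptions.

ASSETS_GIB = {"textures": 4.0, "geometry": 1.5, "constants": 0.5}

def footprint(shared_heap: bool):
    total = sum(ASSETS_GIB.values())
    vram = total
    sysram_staging = 0.0 if shared_heap else total   # duplicate staging copies
    pcie_copy_gib = 0.0 if shared_heap else total    # explicit upload traffic
    return vram, sysram_staging, pcie_copy_gib

for shared in (False, True):
    v, s, p = footprint(shared)
    label = "GPU upload heap" if shared else "classic staging upload"
    print(f"{label}: VRAM {v} GiB, extra system RAM {s} GiB, copy traffic {p} GiB")
```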

 

yamaci17

Member
Same. I have tried a lot of things from Googling around; still at 700 MB. Fucking Microsoft.

I just installed the EA app to play Star Wars and it is 600 MB, lol. Though while I was in game, it brought it back down to 200 MB.
If you have multiple screens, it can have an impact.

And this is why the "hey, I have 10/12 GB, I will last the whole gen because it's equal to or more than the PS5's VRAM budget!" argument doesn't actually work. Quite literally no one takes into account how Steam, EA, DWM and plenty of other stuff gobble up a lot of VRAM even at idle, and then these GPUs are expected to push premium fancy stuff above the consoles too, with a similar or lower VRAM budget.

Even if you reduce clutter a lot, Spider-Man, Forspoken and certain other games will refuse to use the last 10-15% of VRAM anyway (most likely to account for the stuff above).

This is why it is 16 GB or bust. Even 12 GB GPUs are not worth it from now on.
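To make that concrete, here is a quick sketch using the rough overhead figures mentioned in this thread (DWM around 0.7 GB, a launcher around 0.6 GB, games leaving the last 10-15% of the card untouched); all of these numbers are assumptions and will vary per system.

```python
# Rough "usable VRAM" estimate for a PC GPU, using overhead figures quoted
# in this thread (DWM ~0.7 GB, EA app ~0.6 GB, and games refusing to touch
# the last ~10-15% of the card). All numbers are assumptions, not measurements.

def usable_vram_gb(card_gb, idle_overhead_gb=0.7 + 0.6, reserve_fraction=0.125):
    headroom = card_gb * (1 - reserve_fraction)  # part the game will actually use
    return headroom - idle_overhead_gb

for card in (8, 10, 12, 16):
    print(f"{card} GB card -> ~{usable_vram_gb(card):.1f} GB left for the game itself")
# 8 GB -> ~5.7 GB, 10 GB -> ~7.5 GB, 12 GB -> ~9.2 GB, 16 GB -> ~12.7 GB (roughly)
```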
 

lestar

Member
This situation is going to get worse when developers start to use AI language models in games; even 24 GB will not be enough.
 

RobRSG

Member
https://www.techpowerup.com/306713/...s-simultaneous-access-to-vram-for-cpu-and-gpu
Microsoft has implemented two new features into its DirectX 12 API - GPU Upload Heaps and Non-Normalized sampling have been added via the latest Agility SDK 1.710.0 preview, and the former looks to be the more intriguing of the pair. The SDK preview is only accessible to developers at the present time, since its official introduction on Friday 31 March. Support has also been initiated via the latest graphics drivers issued by NVIDIA, Intel, and AMD. The Microsoft team has this to say about the preview version of GPU upload heaps feature in DirectX 12: "Historically a GPU's VRAM was inaccessible to the CPU, forcing programs to have to copy large amounts of data to the GPU via the PCI bus. Most modern GPUs have introduced VRAM resizable base address register (BAR) enabling Windows to manage the GPU VRAM in WDDM 2.0 or later."

A shared pool of memory between the CPU and GPU will eliminate the need to keep duplicates of the game scenario data in both system memory and graphics card VRAM, therefore resulting in a reduced data stream between the two locations. Modern graphics cards have tended to feature very fast on-board memory standards (GDDR6) in contrast to main system memory (DDR5 at best). In theory, the CPU could benefit greatly from exclusive access to a pool of ultra quick VRAM, perhaps giving an early preview of a time when DDR6 becomes the daily standard in main system memory.



One step closer to AMD's decade-old "Fusion" hUMA/HSA dream for the PC.

Thanks for showing the article, but I don't see how it relates to The Last of Us Part I.

In reality, the game is now fixed with the latest optimization patch, even without ReBAR enabled.

They fixed VRAM utilization without any visible compromise, and even improved the lower texture quality at the low and medium settings.
 

rnlval

Member
16 GB total, with 2 GB reserved for the OS. IIRC console games allocate about a third for system memory and two thirds for VRAM, so about 9 GB of VRAM.
PS5 has 16 GB GDDR6-14000 and 512 MB DDR4.

16 GB - 2 GB = 14 GB.

Using Killzone Shadow Fall's PS4 split (about 3.1 GB GPU : 1.5 GB CPU) as an example, a PS5 game with 14 GB of usable memory could have 9.3 GB to 11 GB for the GPU. A CPU side using 3 GB can store twice the geometry control-point data and AI compared to Killzone Shadow Fall's 1.5 GB on PS4.

Windows 10/11 DWM has its own discrete VRAM usage, e.g. 0.6 GB to 1 GB on a 4K desktop.
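As a sketch of where a 9-11 GB GPU figure can come from, here is the Killzone ratio scaled up to a ~14 GB PS5 game budget; the 2 GB OS reservation and the Killzone split are taken from the posts above, not official figures.

```python
# Sketch of the split above: scale Killzone Shadow Fall's PS4 GPU:CPU memory
# ratio (3.1 GB : 1.5 GB) up to a ~14 GB PS5 game budget (16 GB minus ~2 GB
# assumed reserved for the OS). Numbers come from the posts, not official docs.

KZ_GPU_GB, KZ_CPU_GB = 3.1, 1.5
PS5_GAME_BUDGET_GB = 16 - 2

gpu_share = KZ_GPU_GB / (KZ_GPU_GB + KZ_CPU_GB)
gpu_gb = PS5_GAME_BUDGET_GB * gpu_share
cpu_gb = PS5_GAME_BUDGET_GB - gpu_gb
print(f"GPU-side: ~{gpu_gb:.1f} GB, CPU-side: ~{cpu_gb:.1f} GB")
# ~9.4 GB GPU / ~4.6 GB CPU with the same ratio; a CPU-heavier split
# (e.g. 3 GB CPU-side) pushes the GPU share toward the ~11 GB end.
```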
 

rnlval

Member
Thanks for showing the article, but I don't see how it relates to The Last of Us Part I.

In reality, the game is now fixed with the latest optimization patch, even without ReBAR enabled.

They fixed VRAM utilization without any visible compromise, and even improved the lower texture quality at the low and medium settings.

 

rnlval

Member
It is not the same, not because of targets but because of actual VRAM targets.

2 GB really went obsolete because it fell extremely short of what the consoles had.

However, 4 GB never had any problem at 1080p/console settings, similar to the PS4. The PS4/Xbox One usually allocated 3.5-4 GB for GPU memory data and 1.5-2 GB for CPU memory data to games.

You can play almost any recent peak PS4 game at console settings just fine with 4 GB of VRAM.

Problems stem from wanting to go above the consoles in resolution and texture quality with 4 GB. And if you have a 1050 Ti/1650 Super/970, you will have to use CONSOLE-equivalent settings to get playable framerates regardless.

In the same respect, the new consoles have around 10 GB of allocatable memory for GPU operations and 3.5 GB for CPU memory operations, but they do this at 4K/upscaling with 4K textures. Reducing textures just a bit and using a lower resolution target (1440p/1080p/DLSS combinations) will allow 8 GB to last quite a while yet. The problem stems from angry 3070 users who bought that VRAM-starved GPU for upwards of 1000+ bucks. This is why HW Unboxed is super salty about this topic, as the same budget could be spent on a 6700 XT/6800 and even leave money over on top.

For me personally, I always saw the 3070 as a high/reduce-textures-a-bit card. But I practically got it for almost free, and I'd never pay full price for it. I always discouraged people from getting it.

But people with a 2070 Super/2060 Super/2070 should be fine; they should just reduce background clutter to a minimum and not chase ultra settings. The 3070/3070 Ti is a bit different, as it can push ultra with acceptable framerates in certain titles, which puts people into these situations, sadly.

People who act like 4 GB was dead in recent years (2018 to 2021) are people who believe 4 GB is dead for 1440p/high settings, whereas most 4 GB cards are not capable of that to begin with. So I don't know where this "4 GB was dead in such-and-such year" argument began.

4 GB of VRAM was enough for almost all PS4 ports at PS4-equivalent settings. Almost.
The problem with 4 GB VRAM is with the mid-gen PS4 Pro (8 GB GDDR5 + 1 GB DDR3, allowing 5.5 GB of GDDR5 for games) and X1X (12 GB GDDR5, allowing 9 GB? for games).

The GTX 970's 4 GB VRAM is not fully real when it's actually 3.5 GB of full-speed VRAM with 0.5 GB of gimped VRAM, and NVIDIA was sued over the issue. https://www.eurogamer.net/digitalfo...g-lawsuit-over-gtx-970-deceptive-conduct-blog

The GTX 980 Ti 6 GB can cover the entire XBO/PS4 to PS4 Pro/X1X generation.
 

yamaci17

Member
The problem with 4 GB VRAM is with the mid-gen PS4 Pro (8 GB GDDR5 + 1 GB DDR3, allowing 5.5 GB of GDDR5 for games) and X1X (12 GB GDDR5, allowing 9 GB? for games).

The GTX 970's 4 GB VRAM is not fully real when it's actually 3.5 GB of full-speed VRAM with 0.5 GB of gimped VRAM, and NVIDIA was sued over the issue. https://www.eurogamer.net/digitalfo...g-lawsuit-over-gtx-970-deceptive-conduct-blog

The GTX 980 Ti 6 GB can cover the entire XBO/PS4 to PS4 Pro/X1X generation.
That doesn't really matter; the 4 GB 1650 Super still runs most games like a champ. Same goes for the 970.

You can play RDR2 at 1080p with PS4-equivalent settings on a 970 smoothly as well. RDR2 is quite literally the peak of PS4 capability, and if a 970 can run that game smoothly at equivalent settings, it is game over for this discussion. Same for Spider-Man.

5.5 GB is the total budget; 1.5-2 GB of it will be used for sound, game logic, etc. that does not have to reside in VRAM on PC. Most PS4 games use around 3.5-4 GB of VRAM on PS4. Games like Horizon Zero Dawn, with a minimal amount of simulation, most likely use upwards of 4 GB of VRAM for graphics. So 3.5 GB barely scrapes by, and 4 GB simply plays these games fine. There are tons of 4 GB users who played late PS4-gen ports without any problem whatsoever, not even texture degradation (you can use PS4-equivalent textures in practically all of them with a 4 GB buffer at 1080p).

The 1650S is another beast.

PS4-equivalent settings in these games don't even fill up the 4 GB buffer in some cases (see above).

So no, the 4 GB 1650 Super/1050 Ti (and to some extent the '3.5 GB' 970) can cover almost the entire PS4 generation. The GTX 970 only falters in Horizon Zero Dawn, since that game uses a very minimal amount of CPU data to begin with.

Also, the One X targets 4K and the PS4 Pro targets 1440p, but that's not the topic. What is important is the baseline 5.5 GB budget the PS4 has, since the 970/1650 Super similarly target 1080p. And at that resolution, with matched settings, you get high, stable framerates without any VRAM issue.

Barring the 970 and extreme outlier situations, 4 GB was never a problem in the PS4 era with PS4-equivalent settings at PS4 resolution.

I hope I'm being clear.

Once you go past PS4-equivalent settings, both the 1650 Super and the 970 will drop below 40 FPS in most games, so it is not an argument point either. You will likely want to use PS4-centric optimized low/medium/high mixed settings to hit upwards of 50+ frames in late PS4 games regardless. Which brings us to the original point: 4 GB is not a problem on such cards, and never has been.
 

winjer

Gold Member
That doesn't really matter; the 4 GB 1650 Super still runs most games like a champ. Same goes for the 970.

You can play RDR2 at 1080p with PS4-equivalent settings on a 970 smoothly as well. RDR2 is quite literally the peak of PS4 capability, and if a 970 can run that game smoothly at equivalent settings, it is game over for this discussion. Same for Spider-Man.

5.5 GB is the total budget; 1.5-2 GB of it will be used for sound, game logic, etc. that does not have to reside in VRAM on PC. Most PS4 games use around 3.5-4 GB of VRAM on PS4. Games like Horizon Zero Dawn, with a minimal amount of simulation, most likely use upwards of 4 GB of VRAM for graphics. So 3.5 GB barely scrapes by, and 4 GB simply plays these games fine. There are tons of 4 GB users who played late PS4-gen ports without any problem whatsoever, not even texture degradation (you can use PS4-equivalent textures in practically all of them with a 4 GB buffer at 1080p).

The 1650S is another beast.

PS4-equivalent settings in these games don't even fill up the 4 GB buffer in some cases (see above).

So no, the 4 GB 1650 Super/1050 Ti (and to some extent the '3.5 GB' 970) can cover almost the entire PS4 generation. The GTX 970 only falters in Horizon Zero Dawn, since that game uses a very minimal amount of CPU data to begin with.

Also, the One X targets 4K and the PS4 Pro targets 1440p, but that's not the topic. What is important is the baseline 5.5 GB budget the PS4 has, since the 970/1650 Super similarly target 1080p. And at that resolution, with matched settings, you get high, stable framerates without any VRAM issue.

Barring the 970 and extreme outlier situations, 4 GB was never a problem in the PS4 era with PS4-equivalent settings at PS4 resolution.

I hope I'm being clear.

Once you go past PS4-equivalent settings, both the 1650 Super and the 970 will drop below 40 FPS in most games, so it is not an argument point either. You will likely want to use PS4-centric optimized low/medium/high mixed settings to hit upwards of 50+ frames in late PS4 games regardless. Which brings us to the original point: 4 GB is not a problem on such cards, and never has been.


But games in the PS4 era had to contend with an HDD, so streaming systems were very slow on console.
This means a game on PC, even with a simple SATA SSD, could stream data into the GPU much faster than the PS4 could. So PC games could get by with 4 GB of VRAM and stream the rest of the data very easily.
But the PS5 not only has 16 GB, it also has a decompression system and a file system that are light years ahead of what PC games use today.
This means the streaming system on PC cannot keep up, so it has to cache more data in VRAM.
And considering that the adoption of DirectStorage has been very, very slow, things on PC are going to get worse.
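A crude way to model that trade-off: the slower the effective streaming path, the more of the upcoming working set has to be cached in VRAM ahead of time. The workload numbers below are made up purely for illustration.

```python
# Crude model of the streaming argument above: the slower the effective
# streaming rate, the more of the upcoming working set has to sit in VRAM
# ahead of time. Workload numbers are assumptions for illustration only.

def extra_cache_gb(demand_gb_per_s, stream_rate_gb_per_s, lookahead_s=2.0):
    """VRAM needed to cover `lookahead_s` seconds of data the drive can't deliver in time."""
    shortfall = max(0.0, demand_gb_per_s - stream_rate_gb_per_s)
    return shortfall * lookahead_s

demand = 3.0  # GB/s of new data the scene could require (assumption)
for name, rate in [("HDD ~0.1 GB/s", 0.1), ("SATA SSD ~0.5 GB/s", 0.5),
                   ("NVMe + DirectStorage ~4 GB/s", 4.0)]:
    print(f"{name}: extra VRAM cache ~{extra_cache_gb(demand, rate):.1f} GB")
# The faster the effective streaming path, the less VRAM has to be spent
# caching data "just in case".
```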
 

rnlval

Member
That doesn't really matter; the 4 GB 1650 Super still runs most games like a champ. Same goes for the 970.

You can play RDR2 at 1080p with PS4-equivalent settings on a 970 smoothly as well. RDR2 is quite literally the peak of PS4 capability, and if a 970 can run that game smoothly at equivalent settings, it is game over for this discussion. Same for Spider-Man.

5.5 GB is the total budget; 1.5-2 GB of it will be used for sound, game logic, etc. that does not have to reside in VRAM on PC. Most PS4 games use around 3.5-4 GB of VRAM on PS4. Games like Horizon Zero Dawn, with a minimal amount of simulation, most likely use upwards of 4 GB of VRAM for graphics. So 3.5 GB barely scrapes by, and 4 GB simply plays these games fine. There are tons of 4 GB users who played late PS4-gen ports without any problem whatsoever, not even texture degradation (you can use PS4-equivalent textures in practically all of them with a 4 GB buffer at 1080p).

The 1650S is another beast.

PS4-equivalent settings in these games don't even fill up the 4 GB buffer in some cases (see above).

So no, the 4 GB 1650 Super/1050 Ti (and to some extent the '3.5 GB' 970) can cover almost the entire PS4 generation. The GTX 970 only falters in Horizon Zero Dawn, since that game uses a very minimal amount of CPU data to begin with.

Also, the One X targets 4K and the PS4 Pro targets 1440p, but that's not the topic. What is important is the baseline 5.5 GB budget the PS4 has, since the 970/1650 Super similarly target 1080p. And at that resolution, with matched settings, you get high, stable framerates without any VRAM issue.

Barring the 970 and extreme outlier situations, 4 GB was never a problem in the PS4 era with PS4-equivalent settings at PS4 resolution.

I hope I'm being clear.

Once you go past PS4-equivalent settings, both the 1650 Super and the 970 will drop below 40 FPS in most games, so it is not an argument point either. You will likely want to use PS4-centric optimized low/medium/high mixed settings to hit upwards of 50+ frames in late PS4 games regardless. Which brings us to the original point: 4 GB is not a problem on such cards, and never has been.

Note that the PS4 (November 2013) was released in the same year as the GTX 770 2 GB (May 2013) and GTX 780 3 GB (May 2013). Kepler GPUs were gimped on async compute.

The GTX 970 was released in Sep 2014, about 11 months late.

The PS4 Pro was released in Nov 2016 as a mid-gen console release.

The Xbox One X was released in Nov 2017 as a mid-gen console release.

The "fine wine" GPUs during the PS4's 2013 launch window were the Radeon HD 7950 3 GB and 7970 3 GB, when compared to NVIDIA's Kepler Refresh SKUs.
-------------
Note that the PS5 (November 2020) was released in the same year as the RTX 3070 8 GB (Sep 2020) and RTX 3080 10 GB (Sep 2020).

The GTX 1650 was released in April 2019; it didn't exist in the PS4's 2013 launch window.

The RTX 3060 12 GB was released in Jan 2021.
The RTX 3070 Ti 8 GB was released in May 2021.
The RTX 4070 Ti 12 GB was released in January 2023.
 

yamaci17

Member
But games in the PS4 era had to contend with an HDD, so streaming systems were very slow on console.
This means a game on PC, even with a simple SATA SSD, could stream data into the GPU much faster than the PS4 could. So PC games could get by with 4 GB of VRAM and stream the rest of the data very easily.
But the PS5 not only has 16 GB, it also has a decompression system and a file system that are light years ahead of what PC games use today.
This means the streaming system on PC cannot keep up, so it has to cache more data in VRAM.
And considering that the adoption of DirectStorage has been very, very slow, things on PC are going to get worse.
That's their problem to solve/fix/remedy. The implications of what you say are too grave; it could even mean that not even 16 GB is enough. But if a solid solution were presented, the reverse would happen, where even 8 GB would flourish and endure.

rnlval, we're talking about memory budgets here, so a 4 GB GPU released in 2022 is not any better or worse for PS4 ports. I've given the 1650 example because it actually supports async and scales well with recent ports.
You can find a 4 GB 980 from 2014 and it performs fine too.

I don't even know what you are trying to argue, however. 4 GB was and still is fine for PS4 ports. You were originally arguing that 4 GB wasn't okay with PS4 ports, but now the 970 is okay because it was released 11 months after? What are you trying to say here? Please be consistent with what you're arguing.

And do not derail the discussion either.

The original user is saying 8 GB became the new 2 GB, whereas 8 GB became the new 4 GB instead. Even logic tells us that, and 3-4 GB GPUs do fine at console-equivalent settings.

The Series S alone is proof that 8 GB GPUs will endure and live on. I know it sounds weird, but there's a reason why NVIDIA and AMD can still confidently release 8 GB GPUs. For 1080p / optimized console settings, 8 GB will most likely last the whole generation, unless what winjer described happens (and if that happens, such an implication would also make 12-16 GB GPUs obsolete too).

4 GB was the entry level, but it still produced good enough graphics (not PS2 textures or garbage graphics, mind you). 8 GB will be the same; no reason not to.
 

winjer

Gold Member
That's their problem to solve/fix/remedy. The implications of what you say are too grave; it could even mean that not even 16 GB is enough. But if a solid solution were presented, the reverse would happen, where even 8 GB would flourish and endure.

Graphics cards need to have more than 8 GB of VRAM from now on.
But on the streaming side, companies have to implement DirectStorage. We have only one game that uses it on PC; this is really bad.
AMD, Intel and NVIDIA are so busy pushing RT, FSR, DLSS, XeSS, HairWorks, CACAO, etc., that they're forgetting something much more important: DirectStorage.
I bet that if DirectStorage were the standard for most games, we wouldn't be talking so much about stutters, CPU usage and performance limitations on PC.
 

rnlval

Member
That's their problem to solve/fix/remedy. The implications of what you say are too grave; it could even mean that not even 16 GB is enough. But if a solid solution were presented, the reverse would happen, where even 8 GB would flourish and endure.

rnlval, we're talking about memory budgets here, so a 4 GB GPU released in 2022 is not any better or worse for PS4 ports. I've given the 1650 example because it actually supports async and scales well with recent ports.
You can find a 4 GB 980 from 2014 and it performs fine too.

I don't even know what you are trying to argue, however. 4 GB was and still is fine for PS4 ports. You were originally arguing that 4 GB wasn't okay with PS4 ports, but now the 970 is okay because it was released 11 months after? What are you trying to say here? Please be consistent with what you're arguing.

And do not derail the discussion either.

The original user is saying 8 GB became the new 2 GB, whereas 8 GB became the new 4 GB instead. Even logic tells us that, and 3-4 GB GPUs do fine at console-equivalent settings.

The Series S alone is proof that 8 GB GPUs will endure and live on. I know it sounds weird, but there's a reason why NVIDIA and AMD can still confidently release 8 GB GPUs. For 1080p / optimized console settings, 8 GB will most likely last the whole generation, unless what winjer described happens (and if that happens, such an implication would also make 12-16 GB GPUs obsolete too).

4 GB was the entry level, but it still produced good enough graphics (not PS2 textures or garbage graphics, mind you). 8 GB will be the same; no reason not to.
1. You're citing Maxwell v2 GPUs that were released 11 months after PS4's release.

2. Being on par with PS4 with higher gaming PC cost is LOL. For a given console generation, the higher-cost gaming PC should be delivering superior performance when compared to game consoles.
 

yamaci17

Member
1. You're citing Maxwell v2 GPUs that were released 11 months after PS4's release.

2. Being on par with PS4 with higher gaming PC cost is LOL. For a given console generation, the higher-cost gaming PC should be delivering superior performance when compared to game consoles.
2. No, the GTX 970 is not merely on par with the PS4. With such settings you get 50-75 fps depending on the game; that's a 2x-2.5x framerate increase over the PS4 in most cases.

1. And here we're talking about the viability of 8 GB GPUs nearly 2.5 years after the consoles launched. Yet they will be viable, at 1080p, with console-equivalent settings. It would be better if they cost 300 bucks at most, but they will be here and will continue to be the new 4 GB of yore, whether some like it or not.
 

rnlval

Member
That doesn't really matter; the 4 GB 1650 Super still runs most games like a champ. Same goes for the 970.

You can play RDR2 at 1080p with PS4-equivalent settings on a 970 smoothly as well. RDR2 is quite literally the peak of PS4 capability, and if a 970 can run that game smoothly at equivalent settings, it is game over for this discussion. Same for Spider-Man.

5.5 GB is the total budget; 1.5-2 GB of it will be used for sound, game logic, etc. that does not have to reside in VRAM on PC. Most PS4 games use around 3.5-4 GB of VRAM on PS4. Games like Horizon Zero Dawn, with a minimal amount of simulation, most likely use upwards of 4 GB of VRAM for graphics. So 3.5 GB barely scrapes by, and 4 GB simply plays these games fine. There are tons of 4 GB users who played late PS4-gen ports without any problem whatsoever, not even texture degradation (you can use PS4-equivalent textures in practically all of them with a 4 GB buffer at 1080p).

The 1650S is another beast.

PS4-equivalent settings in these games don't even fill up the 4 GB buffer in some cases (see above).

So no, the 4 GB 1650 Super/1050 Ti (and to some extent the '3.5 GB' 970) can cover almost the entire PS4 generation. The GTX 970 only falters in Horizon Zero Dawn, since that game uses a very minimal amount of CPU data to begin with.

Also, the One X targets 4K and the PS4 Pro targets 1440p, but that's not the topic. What is important is the baseline 5.5 GB budget the PS4 has, since the 970/1650 Super similarly target 1080p. And at that resolution, with matched settings, you get high, stable framerates without any VRAM issue.

Barring the 970 and extreme outlier situations, 4 GB was never a problem in the PS4 era with PS4-equivalent settings at PS4 resolution.

I hope I'm being clear.

Once you go past PS4-equivalent settings, both the 1650 Super and the 970 will drop below 40 FPS in most games, so it is not an argument point either. You will likely want to use PS4-centric optimized low/medium/high mixed settings to hit upwards of 50+ frames in late PS4 games regardless. Which brings us to the original point: 4 GB is not a problem on such cards, and never has been.




Horizon Zero Dawn, at original PS4 graphics settings on a GTX 970: 4 GB of VRAM has been reached.
 

Spyxos

Gold Member
NVIDIA researchers have developed a novel compression algorithm for material textures.



In a paper titled “Random-Access Neural Compression of Material Textures”, NVIDIA presents a new algorithm for texture compression. The work targets the increasing requirements for computer memory, which now stores high-resolution textures as well as many properties and attributes attached to them to render high-fidelity and natural-looking materials.

The NTC is said to deliver 4 times higher resolution (16x more texels) than BC (Block Compression), which is a standard GPU-based texture compression available in many formats. NVIDIA’s algorithm represents textures as tensors (three dimensions), but without any assumptions like in block compression (such as channel count). The only thing that NTC assumes is that each texture has the same size.

Random and local access is an important feature of the NTC. For GPU texture compression, it is of utmost importance that textures can be accessed at a small cost without a delay, even when high compression rates are applied. This research focuses on compressing many channels and mipmaps (textures of different sizes) together. By doing so, the paper claims that the quality and bitrate are better than JPEG XL or AVIF formats.

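To put the quoted "4 times higher resolution (16x more texels)" claim in perspective, here is some quick size arithmetic for a block-compressed material stack; the map count and the BC7 rate are assumptions, and the NTC line simply restates the article's claim rather than the paper's measured bitrates.

```python
# Quick size arithmetic for the claim above. A "material" here is assumed to
# be a stack of BC7-compressed maps at 8 bits per texel (BC7's fixed rate);
# the NTC line just restates the article's "16x more texels at similar cost"
# claim, not the paper's measured bitrates.

def bc7_mib(side, num_maps):
    bytes_total = side * side * 1 * num_maps  # BC7 = 1 byte per texel
    return bytes_total / (1024 ** 2)

maps = 3  # e.g. albedo, normal, roughness/AO (assumed stack)
print(f"BC7, 3 maps at 4096^2:  ~{bc7_mib(4096, maps):.0f} MiB")
print(f"BC7, 3 maps at 16384^2: ~{bc7_mib(16384, maps):.0f} MiB (16x the texels)")
print("NTC claim: ~16384^2 worth of texels in roughly the footprint of the 4K stack")
```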
 

Bojji

Member
NVIDIA researchers have developed a novel compression algorithm for material textures.



In a paper titled “Random-Access Neural Compression of Material Textures”, NVIDIA presents a new algorithm for texture compression. The work targets the increasing requirements for computer memory, which now stores high-resolution textures as well as many properties and attributes attached to them to render high-fidelity and natural-looking materials.

The NTC is said to deliver 4 times higher resolution (16x more texels) than BC (Block Compression), which is a standard GPU-based texture compression available in many formats. NVIDIA’s algorithm represents textures as tensors (three dimensions), but without any assumptions like in block compression (such as channel count). The only thing that NTC assumes is that each texture has the same size.

Random and local access is an important feature of the NTC. For GPU texture compression, it is of utmost importance that textures can be accessed at a small cost without a delay, even when high compression rates are applied. This research focuses on compressing many channels and mipmaps (textures of different sizes) together. By doing so, the paper claims that the quality and bitrate are better than JPEG XL or AVIF formats.


This won't be used in games for years. We still haven't seen usage of SFS (which could potentially help VRAM usage) or mesh shaders, and that's 2018 tech.
 

winjer

Gold Member

Recently, it was discovered that a company called Gxore will soon unveil a groundbreaking development in the graphics card market. The company plans to offer its latest innovation, RTX 3070 cards with 16 GB of standard memory. This exciting news is spreading like wildfire among technology enthusiasts and gamers who are already waiting impatiently.
 

mrcroket

Member
It's funny to see PC gamers who always cry that "consoles constrain the graphics because of their specs" now crying because games need more VRAM because of console specs.
 

Fuz

Banned
And here's me with my 3070 playing Fallen Order at Ultra settings with native 4k......
And here's me with my 1060 3 GB playing System Shock, Genshin, ToF and almost all the other stuff I like at 1080p at maximum settings and 60 fps.
 

SF Kosmo

Al Jazeera Special Reporter
A lot of these games are written for consoles with unified RAM and ported lazily, without much effort paid to VRAM management. Look at the post-release patches for games like The Last of Us and Forspoken, which manage to use textures that are 4-8x higher res while using LESS VRAM than the launch version, and you get a sense of how much of this lies at the feet of developers.

That said, the fact that 8 GB should be plenty is cold comfort when these shoddy ports continue to be so common.
 

yamaci17

Member
A lot of these games are written for consoles with unified RAM and ported lazily, without much effort paid to VRAM management. Look at the post-release patches for games like The Last of Us and Forspoken, which manage to use textures that are 4-8x higher res while using LESS VRAM than the launch version, and you get a sense of how much of this lies at the feet of developers.

To achieve that, devs most likely had to, or rather would have to, spend more money and work hours than they'd actually earn back from 8 GB GPU owners. At that point, devs will see how fruitless it is to do so. It is not about being able to do it or not, it is about profitability. It's the exact same reason why many devs bitched, and keep bitching, about the Series S and its limited memory buffer. It is always possible to optimize things, but the tighter the budget, the more work it requires.

We're at a time when NVIDIA still releases brand new 8 GB GPUs, and most likely NVIDIA funded those studios to fix textures for 8 GB cards (this is not a joke; they themselves said in their VRAM-defending post how games received "patches", which means they most likely motivated these studios to do actual, high-impact fixes). But that won't last either. It may last for another 1.5-2 years (to keep the 4060 / 4060 Ti 8 GB relevant for another 2 years of scamming end users).
 
Bought a 4060 Ti (yeah, come at me bro) from Best Buy for ~$300 (10% off code and I had $70 in certificates), plus an extra 10% cash back on top of the normal 5% for using my BB credit card. Replacing an AMD 6800 that was horrible with 1% lows and performance all over the place in my eGPU setup. NO plausible explanation for this, given the 6800 is a significantly more powerful card, particularly when OC'd, other than AMD laziness and horrible drivers. The 4060 Ti doesn't drop below 60 in any of the games I play at 1440p or higher, while the AMD card would go into the 30-40 range at times for seemingly no reason. I thought it was Thunderbolt bandwidth limitations, but turns out nope, just terrible AMD optimization.
 