And that is why a console is usually a better option at the beginning of a generation. I'll wait another year before updating my PC; video cards will likely come with much more memory.

As soon as the PS4 was announced with 8GB of GDDR5, it should have been clear as day to anyone that 4GB would be the minimum to get you comfortably through this generation. 3GB cards should hold on for a while yet, but that's the absolute minimum VRAM capacity that even a midrange user should be aiming for.
People are still recommending stuff like a 2GB GTX 770 every day in the PC thread and it's never sat well with me.
Same applies to the 780 Ti. If you're buying it for the long term, then the 6GB option is the only one you should be considering. If you change your card every 12 months, then that's a different matter.
Indeed.

Fuck. That's pretty hardcore.
How well does the PS4 version run?
The hardware fragmentation inherent in the PC space means that developers cannot rely on the user's GPU having sufficient GPGPU capability. So they don't take advantage of GPGPU as often, and the burden is placed on the CPU for calculations such as particle interactions and other complex physics simulation.
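To make that concrete, here's a minimal sketch (hypothetical code, not from any shipping engine) of the kind of pairwise particle interaction that ends up on the CPU when a developer can't assume GPGPU support on the user's hardware:

```python
import math

def update_particles(particles, dt=0.016):
    """Naive O(n^2) pairwise interaction step of the sort that lands on
    the CPU when a title can't rely on GPGPU support on the user's GPU.
    Each particle is a tuple (x, y, vx, vy)."""
    for i, (px, py, vx, vy) in enumerate(particles):
        fx = fy = 0.0
        for j, (qx, qy, _, _) in enumerate(particles):
            if i == j:
                continue
            dx, dy = qx - px, qy - py
            dist = math.hypot(dx, dy) + 1e-6  # avoid division by zero
            # toy inverse-square attraction between every pair
            fx += dx / dist**3
            fy += dy / dist**3
        particles[i] = (px + vx * dt, py + vy * dt, vx + fx * dt, vy + fy * dt)
    return particles
```

On a console, where the compute capability is a known quantity, every pair in this loop is independent and the whole step is a natural candidate for a compute shader instead.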
Indeed.

Thank god I went with a 4GB 770.
I know how it works.

That's not how it works. That 8GB is not all VRAM; it's really 5GB of everything-RAM, since 3GB is used by the OS. You can't lump it all together and say "ha, they have so much VRAM", because they don't. They have to use a lot of that RAM the same way a PC uses its system RAM to load and run stuff.
It's not really the hardware so much as the driver stack. UAV serialization is the biggest problem with GPGPU in the PC space, although it can be avoided in some cases with auxiliary libraries that hint to the driver underneath the stack, and/or some other tricky techniques.
The bigger issue is render-target memory. Right now a variety of techniques are being utilized and explored on console because consoles have essentially unbounded render-target memory. Things like cached shadow backing stores allow us to avoid regenerating shadow maps every frame, but they require large amounts of render-target memory to be effective. These have the double benefit on PC of reducing draw-call counts per frame, but are the least likely to be leveraged there.
And then there are forward looking techniques like OIT to volume render targets, of course.
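For illustration, a toy sketch (all names hypothetical) of the cached shadow backing-store idea described above: shadow maps stay resident in render-target memory and are only re-rendered when their light is marked dirty, trading memory for fewer per-frame renders and draw calls:

```python
class ShadowCache:
    """Toy model of a cached shadow backing store: shadow maps are kept
    resident in render-target memory and re-rendered only when their
    light (or the geometry it covers) changes, not every frame."""

    def __init__(self):
        self.maps = {}    # light_id -> rendered shadow map (stand-in string)
        self.renders = 0  # counts actual shadow-map renders performed

    def get(self, light_id, dirty=False):
        if dirty or light_id not in self.maps:
            self.maps[light_id] = f"shadowmap:{light_id}"  # pretend render
            self.renders += 1
        return self.maps[light_id]

cache = ShadowCache()
for frame in range(100):                 # 100 frames, 3 static lights
    for light in ("sun", "lamp", "torch"):
        cache.get(light)
# static lights get rendered once each, not once per frame (3, not 300)
```

The memory cost is that all three maps must stay allocated at once, which is cheap on a console with a unified pool but eats into a 2-3GB PC card's budget.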
Btw, UE4 wasn't optimized for consoles until 4.1, so that explains it.
Indeed.
Nobody wanted to hear that VRAM usage would go way up this gen. Thankfully I listened to myself and not to the advice of others.
PCs have DDR3 RAM for the stuff that doesn't require such high bandwidth.

It was developed with 8GB of GDDR5 in mind; I'm surprised it even runs on most PCs!
Currently?

I've never hidden the fact that I sit in team green for all manner of reasons (stability, S3D, G-Sync, faster driver support, less hitching, and previously Nvidia Inspector, though RadeonPro looks to have surpassed it now), but they're currently dropping the ball in terms of VRAM. At every important price point AMD seem to give you an extra 1GB of VRAM, and if you're buying a card to last you 2+ years (which I firmly believe covers most gamers) then that's going to make a huge difference.
The GTX 770 has the horsepower to see through this console generation yet it barely has another year or two's life in it because 95% of cards are sold with a piddling 2GB GDDR5.
It's probably a PS4 port. I think they aren't using the PC's DDR3 RAM at all?

Judging by how the game looks, I'd put it down to terrible optimisation. It doesn't look nearly good enough for the resources it hogs.
Why would devs want to simulate looking at a game through a pair of bad/cheap corrective glasses?
780s are damn expensive.

I bought my 770 4GB open box and used for $250; you can find deals out there.

How so? People recommending the GTX 760/770 are doing that based on the numbers we do know. The main point is that if you get a GTX 770 4GB you might as well get a GTX 780 for a bit more, and it will destroy the 770 in the vast majority of situations, even in the future. The GTX 760 does not have the computational power to justify 4GB of VRAM, based on the numbers we do have. PC-GAF tries to give you the best value for your money. If you want to waste money on very specific scenarios, be my guest.
No arguments with the 4GB, but I replaced my 770 with a 780 during the price drop last September, and there is no 6GB option available. What then? Are you saying I should have stuck with my 770 4GB instead of going for the 780 3GB, and that the 770 would have lasted me longer than the 780?
There must be something we are all missing here.
It's probably a PS4 port. I think they aren't using the PC's DDR3 RAM at all?
Why do 780s only have 3GB when 760s and 770s have versions with 4GB? I've always found that odd, and stupid too.
Speaking of sufficient RAM: I knew it was going to be expensive, but actually seeing that price listed next to a graphics card makes you feel like you're taking crazy pills.
https://www.komplett.se/search?q=295x2
Roughly $2100.
Arkham knight is going to melt cards left, right and centre.
780Ti better than 780 SLI, lol? And 780Ti SLI equal to 780Ti?
What's that? CPU bottleneck?
The UE4 Elemental demo only uses about 1.5 GB and the UE4 Cave demo uses about 700MB (and Daylight does not look better than them), so it is safe to say that we are looking at some kind of caching.
As in, it simply uses as much RAM as is available for caching, but doesn't necessarily require that memory? Sorry if I missed it in the thread, but are people experiencing the telltale plummet in performance that comes with running out of VRAM?
Looking at the graph a few posts up, you'd expect the min framerate of 2 GB cards to be single digits if it was running out of VRAM that it needed.
It's min and average. You don't average minimums.

It's probably an average of minima, not the lowest point.
Then again, reviewers never clarify, so...
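A quick illustration of the difference, with made-up numbers: averaging each second's minimum FPS gives a much friendlier figure than the single worst moment of the run.

```python
# Per-second minimum FPS samples over a hypothetical 5-second benchmark run.
per_second_min_fps = [48, 52, 12, 50, 53]   # one bad hitch in second 3

global_minimum = min(per_second_min_fps)                     # the true worst case
average_of_minima = sum(per_second_min_fps) / len(per_second_min_fps)

print(global_minimum)     # 12   -- what "min FPS" suggests
print(average_of_minima)  # 43.0 -- what an average-of-minima chart reports
```

So a review chart can show a "minimum" in the 40s while the game still hitched into the low teens, which is why the 2GB cards' numbers don't settle the VRAM question either way.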
So the 3GB 780 Ti is beating the 6GB Titan at 1080p here.
http://gamegpu.ru/action-/-fps-/-tps/daylight-test-gpu.html
Does the game hitch or something? Does anyone sprechen Sie Deutsch?
So as I thought, UE4 is just intelligently caching.
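A toy model of what that caching behavior looks like (all numbers and names hypothetical): an engine that opportunistically fills whatever VRAM budget exists will report near-full usage on a big card, yet the same workload still fits on a small card because least-recently-used assets get evicted rather than crashing the game.

```python
from collections import OrderedDict

class VRAMCache:
    """Toy model of opportunistic VRAM caching: textures fill whatever
    budget exists, and the least-recently-used ones are evicted when a
    smaller card runs out of room."""

    def __init__(self, budget_mb):
        self.budget = budget_mb
        self.used = 0
        self.cache = OrderedDict()  # texture_id -> size in MB

    def request(self, tex_id, size_mb):
        if tex_id in self.cache:             # cache hit: mark recently used
            self.cache.move_to_end(tex_id)
            return
        while self.used + size_mb > self.budget and self.cache:
            _, evicted = self.cache.popitem(last=False)  # evict LRU entry
            self.used -= evicted
        self.cache[tex_id] = size_mb
        self.used += size_mb

textures = [(f"tex{i}", 256) for i in range(20)]  # 5GB of streamable assets

big = VRAMCache(6144)    # 6GB card: everything stays resident
small = VRAMCache(2048)  # 2GB card: same workload, evictions instead of a crash
for tex, size in textures:
    big.request(tex, size)
    small.request(tex, size)
```

Under this model, a monitoring tool would report ~5GB "used" on the 6GB card even though the game only strictly needs whatever the smaller card's working set is, plus some streaming hitches.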
No, that's why you don't build multi-GPU setups.
It's not taking advantage of SLI at all at the moment. That's the issue.
Seems like an Unreal Engine 4 thing. Both the Realistic Rendering and Cave demos take up ~2.5GB of VRAM, while the Elemental demo hits 3+GB.