To establish a frame of reference for current GPU hardware: the 8800GTX launched back in 2006, around the same time as the PS3 and a year after the Xbox 360, with 768MB of VRAM, 50% more than either console's total RAM. The 8800GTX was a $600-650 monster of its day that trashed both consoles in performance. In comparison, today's 780/290/970-class cards hold at least as large an advantage over the latest consoles, if not a greater one. However, they do not have more VRAM than those systems have in total memory; they don't even match it, and potentially don't even match the portion actually usable for GPU-centric assets. Given the strength of current GPUs, it would not be unprecedented for games to utilize more VRAM than we've generally been given. The 780/Ti in particular is far more than 4x as powerful as the old 8800GTX, yet has only 4x the VRAM. So where's the problem here?
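To put rough numbers on that compute-versus-VRAM gap, here's a quick back-of-the-envelope sketch. The figures are approximate theoretical FP32 throughput and stock VRAM taken from public spec sheets (the 8800GTX's GFLOPS number in particular varies by source depending on how the extra MUL is counted), so treat them as ballpark rather than exact:

```python
# Back-of-the-envelope: compute growth vs. VRAM growth since the 8800GTX.
# Spec figures are approximate theoretical FP32 GFLOPS and stock VRAM (MB)
# from public spec sheets; exact numbers vary by source, the trend doesn't.
cards = {
    "8800GTX (2006)":   {"gflops": 346,  "vram_mb": 768},
    "GTX 770 (2013)":   {"gflops": 3213, "vram_mb": 2048},
    "GTX 780 (2013)":   {"gflops": 3977, "vram_mb": 3072},
    "GTX 780 Ti (2013)": {"gflops": 5040, "vram_mb": 3072},
    "GTX 970 (2014)":   {"gflops": 3494, "vram_mb": 4096},
}

base = cards["8800GTX (2006)"]
for name, spec in cards.items():
    compute_x = spec["gflops"] / base["gflops"]   # raw shader throughput ratio
    vram_x = spec["vram_mb"] / base["vram_mb"]    # memory capacity ratio
    print(f"{name:18s} compute {compute_x:5.1f}x   VRAM {vram_x:4.1f}x")
```

Run it and the 780 Ti lands somewhere around 10-15x the 8800GTX's theoretical throughput (depending on which FP32 figure you use for the old card) while sitting at exactly 4x the VRAM; the 770 is roughly 9x the compute with well under 3x the memory.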
The problem is that GPU manufacturers (especially Nvidia) have not been scaling VRAM amounts properly alongside increases in power; they've been very deliberately restricting VRAM to bare minimums outside of their new (ridiculous) premium Titan line. As (arguably) the world's leading GPU manufacturer, they should have known requirements would rise, yet they have elected to do nothing about it, or have perhaps even actively restricted VRAM in the hopes that people would be forced into arbitrary upgrades over it. Assigning blame between developers and hardware manufacturers is a tricky line to draw, but at the end of the day the result is the same. Nvidia should have, and probably did, foresee this, but they've taken no preventative measures and have been far too cheap on VRAM. People with less can make do, certainly, but this restricts high-end hardware from rendering the high-resolution textures it is fully capable of handling, and SLI setups especially may face significant AA or resolution limitations, since VRAM is mirrored across cards rather than pooled.

The 780 in particular having only 3GB, and likewise the 770 with 2GB, is actually rather disgusting, and regrettably, I sold my 780 for a sidegrade to a 970 because I was already slamming into my 780's VRAM wall in certain scenarios (modded Skyrim, Space Engine, Watch Dogs, tested scenarios of certain games downsampled from 4K, etc.). It's an ugly realization, but the blame rests squarely on the hardware manufacturers for this situation, regardless of whether developers are using too much or not. It is Nvidia's (and AMD's) job to create the balanced hardware necessary to render the latest and upcoming software and to make smart engineering decisions for their products, and Nvidia have dropped the ball miserably here because it's a lucrative business opportunity. No excuses.