> Most are speculating at HardOCP now that they're referring to the GK110 numbers (680) instead of the GK104 (670) for the 50% increase. This makes a lot more sense to me. Not that I believe random numbers from China. :S
> And I just bought a new computer with a 7970...

Chiphell usually comes through with believable numbers before launch.
> What the fuck? Only 2GB?
> Guess I'm waiting until someone releases a 4GB card then.
> Seriously, why the fuck are we still stuck with 2GB cards? Even now we're bumping into that. That raw power is useless if you can't actually fit it all into the VRAM.

Because nVidia knows that 2GB is enough for 95%+ of people buying that card (single monitor). No need to spend extra money on the 'base' SKU; people who want 4GB can buy a 4GB card for more.
It's my impression that when you have vRAM hitting above 1.2GB it's just not optimized at all (Crysis 2) and the graphics boosts you get are negligible.
> Well, it took care of my stutter in SWTOR on fleet. 2x GTX 480: good frames but stutter. 1x 7970: no stutter.

Did you try disabling SLI when you have two cards?
> Well, I'm guessing he likes an uber-high resolution that you just need 2+GB for these days.

It's a legitimate concern for people rocking giant displays; it's just not a good business decision.
> Right about what? Maybe 1 of the 100 wacky things he said? He also delivered some weird spiel that GK104 would have a built-in PhysX chip and would only be faster than Tahiti in PhysX-accelerated games, or some nonsense. http://semiaccurate.com/2012/02/01/physics-hardware-makes-keplergk104-fast/
> I swear Charlie retroactively changed his article, though. I'm 99% sure he originally said GK104 would have dedicated PhysX hardware (which everybody thought was nonsense), but now the article just talks about PhysX optimizations, not dedicated hardware.

He was right about the die size. We shall see if the others, like pricing and performance, will pan out like he said they would.
> It's a legitimate concern for people rocking giant displays, it's just not a good business decision.

I think Nvidia will flood the market with 2GB cards to start with.
7950 and 7970 are 3GB. There's your answer. I'd guess the 680 would match that or have 4GB, but a 2GB one wouldn't surprise me. People who needed the 4GB version would spend another $100 for that model.
> has this been posted?

Nope.
Interesting. This goes against yesterday's "confirmed" news of an April launch.
If you're not gaming at ultra-high resolutions or using multiple monitors, you don't need a GTX 680 or HD 7970. There's just nothing out there currently that can give our current cards issues. The consoles' long life span is keeping the lowest common denominator low, so PC games haven't really utilized what we have now. The cards we have are plenty strong; they're just not being used to their potential, because we're getting port jobs left and right.
I'll upgrade from my GTX 460 once new consoles are out that push the low bar higher and we start getting better-quality games built from the ground up instead of console ports with a slab of paint on them.
I can't make a thread... perhaps someone should? A new thread for new news?
In some cases the cards we use, even in quad-fire etc., cannot overcome poor coding. I'm not pinning that on The Witcher 2, because the game looks phenomenal, but there's no reason to get too upset when in some cases these cards have more than enough muscle. Some devs are including these uber-modes just to tease us and, obviously, to help push people toward crazy configurations (as we've seen from some of the setups on GAF alone). It really helps drive the enthusiast market.
> Lol at the Facebook post. I guess it's a good sign they are talking shit; I was starting to wonder after reading this thread.

It's strange you should say that, because distinctly in the memory banks was all the shit talking that preceded the disastrous FX series almost a decade ago. :S
If Charlie Demerjian of all people says it's going to kick ass, it's going to kick ass o_o
> So if all the rumors are to be believed, a chip with a smaller die than the 7970 and a 256-bit memory bus beats it by 20%? Certainly not impossible, but reason to be skeptical.

About 10% on average. Basically they'll be pretty even, which is an accomplishment since GK104 is supposedly simpler.
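The skepticism has some arithmetic behind it: a narrower bus only keeps pace on bandwidth if the memory runs much faster. A rough sketch of the math (the data rates are illustrative assumptions; 5.5 Gbps matches the 7970's launch spec, the Kepler figure is hypothetical):

```python
# Peak memory bandwidth = bus width in bytes * per-pin data rate.
# Illustrative figures only: 5.5 Gbps is the 7970's launch memory speed;
# the 6.0 Gbps for a 256-bit Kepler card is an assumed value.
def bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    return bus_width_bits / 8 * data_rate_gbps

print(bandwidth_gb_s(384, 5.5))  # Tahiti / HD 7970: 264.0 GB/s
print(bandwidth_gb_s(256, 6.0))  # hypothetical 256-bit Kepler: 192.0 GB/s
```

Even with faster memory, a 256-bit card gives up a sizable chunk of raw bandwidth, so beating Tahiti would have to come from efficiency elsewhere.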
> AMD will make 2.5GB+ cards because that's what they do. nVidia always cuts back a bit.

AMD makes the 7900s 3GB because they have a 384-bit bus, which leaves two options: 1.5GB or 3GB. 1.5GB is low for a top card these days. 2GB should be enough for most games for several years to come.
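The 1.5GB/3GB split falls straight out of the bus width. A quick sketch of the arithmetic (assuming the usual one-chip-per-32-bit-channel layout and the 1 Gb / 2 Gb GDDR5 chip densities common at the time):

```python
# Each GDDR5 chip sits on its own 32-bit channel, so chip count = bus / 32.
# Capacity options then follow from the available chip densities (in gigabits).
def vram_options_gb(bus_width_bits, chip_densities_gbit=(1, 2)):
    chips = bus_width_bits // 32
    return [chips * d / 8 for d in chip_densities_gbit]  # Gbit -> GB

print(vram_options_gb(384))  # 7900-series bus: [1.5, 3.0]
print(vram_options_gb(256))  # 256-bit bus: [1.0, 2.0]
```

Which is why a 256-bit card naturally lands on 2GB, with 4GB requiring doubled-up (clamshell) memory chips.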
> I don't think he said anything about die size; it was just extrapolated by others from other stuff he said. I think most of that info was known or speculated before anyway.

http://semiaccurate.com/2012/02/06/how-big-is-the-keplergk104-die/

He was the first one on the web to claim it was smaller than Tahiti, and later on, with his article, he got the die size bang on correct.
> We stated earlier, Kepler wins in most ways vs the current AMD video cards. How does Nvidia do it with a $299 card? Is it raw performance? Massive die size? Performance per metric? The PhysX 'hardware block'? Cheating? The easy answer is yes, but let's go into a lot more detail.
> GK104 is the mid-range GPU in Nvidia's Kepler family, has a very small die, and the power consumption is far lower than the reported 225W.

Folks at B3D put the die size between 320-340mm², smaller than Tahiti (7970). Charlie was right.
putting the range from 324-361mm^2.
> Yeah, but is it really a feat to guess that the device with their midrange part code would have a smaller die than the competitor's top end?

There isn't much difference in the die size of Cayman/GF114, and there's no guarantee the same ratios would hold, especially when both AMD and Nvidia are moving to new architectures.
> Nvidia will launch GK104/Kepler/GTX680 in a week - March 12 Unveil.
> http://semiaccurate.com/2012/03/05/nvidia-will-launch-gk104keplergtx680-in-a-week/

I could be wrong, but from what I was told (last week) this is incorrect.
> It says "a lot of people" received launch event invitations with the GTX680 mentioned, so I'm not sure where there is room to be incorrect, short of a deliberate fabrication.

AMD and Nvidia both tend to do media briefings prior to launch. It doesn't mean launch day is the same as press day. Remember AMD did a press day on an aircraft carrier; did we get reviews the same day?
> Remember AMD did a press day on an aircraft carrier; did we get reviews the same day?

We didn't, because it took a while for the carrier to go ashore.
Can someone please explain the branding of Kepler to me? I don't understand: is the 680 the successor to the 580 or not?
> We didn't, because it took a while for the carrier to go ashore.

lol?
> can someone please explain the branding of kepler to me? I don't understand, is the 680 the successor to 580 or not?

GK104, going by Nvidia's normal nomenclature, should have ended up as a GTX 660 Ti, but the latest rumors suggest it would end up as a GTX 670 Ti, and now a GTX 680.
> Kepler is real, people have seen it, editors are getting cards and even notebooks based on it as we speak, and the launch happens later this month.
> We can confirm a few things now. Kepler has a 256-bit memory interface, and the fastest one to launch in the second half of March is GK104. The GK106 and GK108 are also on the way, and of course they will end up slower than GK104.
> GK104 is much smaller than Tahiti, the Radeon HD 7970 family, and according to our intel it should end up faster in some cases. In all DirectX 11 games, Nvidia's GK104-based Kepler card should end up faster; at least this is what a few sources are confirming.
> Nvidia has been producing its 28nm GK104 chips for a while, and the focus is on big OEMs, especially notebook manufacturers, but there will be plenty of add-in-board partners with cards at launch, at least until they sell out.
> Sources close to Nvidia are confident that the green team will win this round, but as always let's take this with a grain of salt. Either way, AMD already has 28nm products in the shops and commands a clear lead over Nvidia, so there is a lot of catching up to do.
> I swear charlie retroactively changed his article though. I'm 99% he originally said gk104 would have dedicated physx hardware (which everybody thought was nonsense) but now the article just talks about physx optimizations not dedicated hardware.

This actually happened. The guy has 0 integrity.
> can someone please explain the branding of kepler to me? I don't understand, is the 680 the successor to 580 or not?

What's a "successor"? GK104/680 will be faster than the 580, but it won't be the fastest of the GK GPUs.
> And the rumors keep rumoring.
> http://fudzilla.com/home/item/26179-kepler-256-bit-faster-in-dx11-games-than-7970

Polymorph Engine 2.0