
Nvidia Kepler - GeForce GTX 680 Thread - Now with reviews

J-Rzez

Member
Nice, my income tax refund will be back in time for me to buy one of these new ones and sell my current setup. Also, I have a belated bday present coming from my gf, so I may actually go nuts and go all out this time with video cards. No real reason for me to jump from the 2600K yet, I don't believe.
 

Hazaro

relies on auto-aim
Most at HardOCP are now speculating that they're referring to the GK110 numbers (680) instead of the GK104 (670) for the 50% increase. This makes a lot more sense to me. Not that I believe random numbers from China. :S
Chiphell usually comes through with believable numbers before launch.

Honestly I wouldn't be surprised at those numbers. It's new arch + new process. That's what we should be expecting. Whether it's true is another matter.
 

Izayoi

Banned
What the fuck? Only 2GB?

Guess I'm waiting until someone releases a 4GB card then.

Seriously, why the fuck are we still stuck with 2GB cards? Even now we're bumping into that limit. That raw power is useless if you can't actually fit everything into VRAM.
 

Hazaro

relies on auto-aim
What the fuck? Only 2GB?

Guess I'm waiting until someone releases a 4GB card then.

Seriously, why the fuck are we still stuck with 2GB cards? Even now we're bumping into that limit. That raw power is useless if you can't actually fit everything into VRAM.
Because nVidia knows that 2GB is enough for 95%+ of people buying that card (Single Monitor).
No need to spend extra money on the 'base' SKU; people who want 4GB can buy a 4GB card for more.

It's my impression that when a game's VRAM use goes above 1.2GB it's just not optimized at all (Crysis 2), and the graphics boost you get is negligible. I was surprised to see how low the performance impact of going from 1080p to 3x1080p was, though.
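
Rough math on that resolution jump, for anyone curious. This is a back-of-envelope sketch (Python, with assumed numbers: RGBA8 color, 4x MSAA, triple buffering, none measured from a real game), showing that the render targets themselves triple but stay small next to a 2GB budget; textures are what actually blow past VRAM.

```python
# Back-of-envelope: color render-target memory at 1080p vs. triple-wide.
# Assumed: 4 bytes/pixel (RGBA8), 4x MSAA, triple buffering. Real VRAM
# use is dominated by textures and geometry, so this is a sketch only.

def render_targets_mb(width, height, bytes_per_pixel=4, msaa=1, buffers=3):
    return width * height * bytes_per_pixel * msaa * buffers / 1024**2

for label, w, h in [("1x1080p", 1920, 1080), ("3x1080p", 5760, 1080)]:
    print(f"{label}: ~{render_targets_mb(w, h, msaa=4):.0f} MB of color buffers")

# 1x1080p: ~95 MB of color buffers
# 3x1080p: ~285 MB of color buffers
```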

AMD will make 2.5GB+ cards because that's what they do. nVidia always cuts back a bit.
 

gatti-man

Member
Because nVidia knows that 2GB is enough for 95%+ of people buying that card (Single Monitor).
No need to spend extra money on the 'base' SKU; people who want 4GB can buy a 4GB card for more.

It's my impression that when a game's VRAM use goes above 1.2GB it's just not optimized at all (Crysis 2), and the graphics boost you get is negligible.

Well, it took care of my stutter in SWTOR on the fleet. With 2x GTX 480 I had good frames but stutter; with a single 7970, no stutter.
 

Hazaro

relies on auto-aim
Well, I'm guessing he likes an uber-high resolution that you just need 2+GB for these days.
It's a legitimate concern for people rocking giant displays; it's just not a good business decision.
The 7950 and 7970 are 3GB. There's your answer. I'd guess the 680 would match that or have 4GB, but a 2GB one wouldn't surprise me. People who needed the 4GB version would spend another $100 for that model.
 

artist

Banned
Right about what? Maybe 1 of the 100 wacky things he said? He also delivered some weird spiel that GK104 would have a built-in PhysX chip and only be faster than Tahiti in PhysX-accelerated games, or some nonsense. http://semiaccurate.com/2012/02/01/physics-hardware-makes-keplergk104-fast/

I swear Charlie retroactively changed his article, though. I'm 99% sure he originally said GK104 would have dedicated PhysX hardware (which everybody thought was nonsense), but now the article just talks about PhysX optimizations, not dedicated hardware.
He was right about the die size. We shall see if the other claims, like pricing and performance, pan out like he said they would.

It's a legitimate concern for people rocking giant displays; it's just not a good business decision.
The 7950 and 7970 are 3GB. There's your answer. I'd guess the 680 would match that or have 4GB, but a 2GB one wouldn't surprise me. People who needed the 4GB version would spend another $100 for that model.
I think Nvidia will flood the market with 2GB cards to start with.
 
has this been posted?

http://www.techpowerup.com/161587/GK104-Based-Products-Arriving-March-23.html

Expreview cited sources among AIC (add-in card) vendors in pinning the launch of GeForce Kepler 104 (GK104) based products to March 23. The products launched are expected to be NVIDIA's first in its next generation. Some label the top part based on GK104 as "GeForce GTX 670 Ti", while others call it "GeForce GTX 680". A March 23 launch explains reports of hectic activity in the green camp starting this week. NVIDIA typically enters NDAs with its partners over a wide time range; this one probably extends into April (since the launch is now reported to be in late March), which led some to believe Kepler was "delayed" to April. NVIDIA recently posted on its Facebook wall that people will be rewarded for their patience with an "unbeatable" product.
 

ta-va

Banned
If you're not gaming at ultra-high resolutions or using multiple monitors, you don't need a GTX 680 or HD 7970. There's just nothing out there currently that can give our current cards issues. The consoles' long life span is keeping the lowest common denominator low, so PC games haven't even really utilized what we have now. The cards we have now are plenty strong; they're just not being utilized correctly because we're getting port jobs left and right.

I'll upgrade from my GTX 460 once new consoles are out that push the low bar higher and we start getting better quality games built from the ground up instead of a console port with a slab of paint on it.
 

tokkun

Member
It's my impression that when a game's VRAM use goes above 1.2GB it's just not optimized at all (Crysis 2), and the graphics boost you get is negligible.

I would probably ignore VRAM for the most part if not for modded games. Throw a lot of mods and high-res texture packs on and watch the memory footprint soar.
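
For a sense of scale, here's a hedged sketch (Python; made-up texture counts, RGBA8 as the raw case, and an assumed 4:1 DXT-style compression ratio) of how quickly high-res texture packs eat VRAM:

```python
# Rough texture-budget math: why mods and hi-res packs make VRAM soar.
# Assumed: square RGBA8 textures with a full mip chain (~33% overhead)
# and 4:1 block compression for the compressed case. Ballpark only.

def texture_mb(size, bytes_per_texel=4, compression=1):
    base = size * size * bytes_per_texel / compression
    return base * 4 / 3 / 1024**2  # mip chain adds about one third

for count in (50, 200):
    packed = count * texture_mb(2048, compression=4)
    raw = count * texture_mb(2048)
    print(f"{count} x 2048^2: ~{packed:.0f} MB compressed, ~{raw:.0f} MB raw")

# 50 x 2048^2: ~267 MB compressed, ~1067 MB raw
# 200 x 2048^2: ~1067 MB compressed, ~4267 MB raw
```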
 

Hawk269

Member
If you're not gaming at ultra-high resolutions or using multiple monitors, you don't need a GTX 680 or HD 7970. There's just nothing out there currently that can give our current cards issues. The consoles' long life span is keeping the lowest common denominator low, so PC games haven't even really utilized what we have now. The cards we have now are plenty strong; they're just not being utilized correctly because we're getting port jobs left and right.

I'll upgrade from my GTX 460 once new consoles are out that push the low bar higher and we start getting better quality games built from the ground up instead of a console port with a slab of paint on it.

Witcher 2 with Ubersampling at 1080p says "HELLO". Sorry to burst your bubble, but some of us want the highest possible IQ in our games, and a 580 or 7xxx at 1080p does not cut it at max settings or in Witcher 2's Uber mode. I have 2x 580 EVGA Classifieds paired with an OC'd 2600K at 4.6GHz, and I can't sustain 60fps at 1080p in Witcher 2 with Ubersampling. The best I can get is an unsteady 28-40fps in that game.

So while a 580/7970 can handle most games at 60fps with all the bells and whistles, there are some games that these cards can't pull off, even in SLI/Crossfire. Yes, people like me are insane and want the very best without a drop below 60fps, but telling people they don't need a 680 or the AMD equivalent is not accurate if they want to max everything out. Sure, in most games we can, but for those of us looking for the ultimate, no-sacrifices IQ at 1080p, we do need a 680 or better.
 

pestul

Member
In some cases the cards we use, even in quad-fire etc., cannot overcome poor coding. I'm not pinning that on The Witcher 2, because the game looks phenomenal, but there's no reason to get too upset when in some cases these cards have more than enough muscle. Some devs are including these uber-modes just to tease us and obviously to help force people to buy crazy configurations (as we've seen from some of the setups on GAF alone). It really helps to drive the enthusiast market.
 

Hawk269

Member
In some cases the cards we use, even in quad-fire etc., cannot overcome poor coding. I'm not pinning that on The Witcher 2, because the game looks phenomenal, but there's no reason to get too upset when in some cases these cards have more than enough muscle. Some devs are including these uber-modes just to tease us and obviously to help force people to buy crazy configurations (as we've seen from some of the setups on GAF alone). It really helps to drive the enthusiast market.

You are correct. I have learned that if a PC game does not run well, for the most part it is due to coding... still, some games are coded well and just require an insane amount of power to run at full settings, 60fps, 1080p (Witcher 2 Ubersampling, Crysis 2 DX11 with Ultra textures), etc.
 

Jtrizzy

Member
Lol at the Facebook post. I guess it's a good sign that they're talking shit; I was starting to wonder after reading this thread.
 

pestul

Member
Lol at the Facebook post. I guess it's a good sign that they're talking shit; I was starting to wonder after reading this thread.
It's strange you should say that, because what sits distinctly in the memory banks is all the shit talking that preceded the disastrous FX series almost a decade ago. :S
 
He was right about the die size. We shall see if the other claims, like pricing and performance, pan out like he said they would.

I don't think he said anything about die size; it was just extrapolated by others from other stuff he said.

I think most of that info was known or speculated before anyway.

If Charlie Demerjian of all people says it's going to kick ass, it's going to kick ass o_o

He said it was going to kick ass, but then he reneged, saying on forums it was slower than the 7970 and then writing that PhysX article to seemingly cover all bases (it'll be super fast, but with a catch! Only in PhysX games). No matter what the card is, he won't be "wrong", since he never said anything specific that I know of.

Main thing with this card is the pricing; seems like it will come in close to the 7970, +/-, which is good enough for me. Now, will it be $399 or less?

March 23 would be nice, only a couple of weeks after the Pitcairn launch. I kind of want to wait until at least both Pitcairn and GK104 are on the table before making a purchase decision.
 

dr_rus

Member
So if all the rumors are to be believed, a chip with a smaller die size than the 7970 and a 256-bit memory bus beats it by 20%?

Certainly not impossible, but reason to be skeptical.
About 10% on average. Basically they'll be pretty even, which is an accomplishment since GK104 is supposedly simpler.
No info on GK110 yet.
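
For reference, the bus-width worry is really a bandwidth worry. A quick sketch (Python; the 6.0 Gbps data rate for GK104 is my assumption, not a confirmed spec, while the 7970's 5.5 Gbps is public):

```python
# Peak memory bandwidth = (bus width / 8) bytes per transfer x data rate.
# The 6.0 Gbps GK104 figure below is an assumption; 5.5 Gbps (1375 MHz
# GDDR5) is the 7970's published spec.

def bandwidth_gbs(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps

print("256-bit @ 6.0 Gbps:", bandwidth_gbs(256, 6.0), "GB/s")  # 192.0
print("384-bit @ 5.5 Gbps:", bandwidth_gbs(384, 5.5), "GB/s")  # 264.0
# GK104 would have to cover a ~27% bandwidth deficit with higher memory
# clocks or better efficiency to keep pace with Tahiti.
```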

AMD will make 2.5GB+ cards because that's what they do. nVidia always cuts back a bit.
AMD makes the 7900 series 3GB because it has a 384-bit bus, which gives them two options: 1.5GB or 3GB. 1.5GB is low for a top card these days. 2GB should be enough for most games for several years to come.
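
The arithmetic behind that, as a minimal sketch (Python; assumes 32-bit GDDR5 chips and the 1 Gbit / 2 Gbit densities common at the time):

```python
# Bus width fixes the GDDR5 chip count (one 32-bit chip per channel);
# chip density then fixes the capacity options. Clamshell mode doubles
# the chip count, which is how 4GB cards on a 256-bit bus are built.

def vram_options_gb(bus_bits, densities_gbit=(1, 2), clamshell=False):
    chips = bus_bits // 32
    if clamshell:
        chips *= 2  # two chips share each 32-bit channel
    return [chips * d / 8 for d in densities_gbit]  # Gbit -> GB

print("256-bit:", vram_options_gb(256))                            # [1.0, 2.0]
print("384-bit:", vram_options_gb(384))                            # [1.5, 3.0]
print("256-bit clamshell:", vram_options_gb(256, clamshell=True))  # [2.0, 4.0]
```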
 
1.5GB is not enough, but 2GB is enough for "several years"?

I do agree, though, that 2GB is OK; it's 1GB and even 1.5GB I'd steer away from.
 
http://semiaccurate.com/2012/02/06/how-big-is-the-keplergk104-die/

He was the first one on the web to claim it was smaller than Tahiti, and later on, with his article, he got the die size bang on.

Dunno if he was "the first on the web" but anyway...

We stated earlier, Kepler wins in most ways vs the current AMD video cards. How does Nvidia do it with a $299 card? Is it raw performance? Massive die size? Performance per metric? The PhysX ‘hardware block’? Cheating? The easy answer is yes, but let's go into a lot more detail.

GK104 is the mid-range GPU in Nvidia’s Kepler family, has a very small die, and the power consumption is far lower than the reported 225

http://semiaccurate.com/2012/02/01/physics-hardware-makes-keplergk104-fast/

Charlie will probably be wrong on all that. I don't see $299, for sure. If it is $299, it's going to upend a lot.

Edit:

Folks at B3D put the die size between 320-340mm², smaller than Tahiti (7970). Charlie was right.

Well, Charlie said
putting the range from 324-361mm².

If you're giving a range, it's easy to be "right". But fair enough, I cede the point.

It's not that big a deal, though; Tahiti is 350mm², almost the same. Guess it's just rare for Nvidia to actually compete on performance/mm²; they haven't for several gens now.
 

tokkun

Member
Yeah, but is it really a feat to guess that the device with their midrange part code would have a smaller die than the competitor's top-end?

Wasn't his original estimate just based on the idea that it would be about the same size as the equivalent code from the previous line?
 

artist

Banned
Yeah, but is it really a feat to guess that the device with their midrange part code would have a smaller die than the competitor's top-end?
There isn't much difference in the die sizes of Cayman and GF114; there's no guarantee the same ratio would hold, especially when both AMD and Nvidia are moving to new architectures.
 

Sethos

Banned
If you're not gaming at ultra-high resolutions or using multiple monitors, you don't need a GTX 680 or HD 7970. There's just nothing out there currently that can give our current cards issues.

There's plenty out there that will give the current generation of cards problems if you're looking for that magic 60FPS at maxed-out settings: it's called anti-aliasing, supersampling, and especially AO.

These things can murder your framerate but are becoming the norm for high-end settings in so many games; Nvidia is even putting a lot of focus on AO through its drivers. So if you want the best IQ, the current generation of cards is still not optimal.
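
To put numbers on the supersampling point, a naive sketch (Python; assumes framerate scales inversely with shaded pixels, which only roughly holds when fully shading-bound):

```python
# Naive supersampling cost model: NxN SSAA shades N^2 as many pixels,
# so framerate drops by about N^2 when shading-bound. Illustrative only.

def ssaa_fps(base_fps, n):
    return base_fps / (n * n)

for n in (1, 2):
    print(f"{n}x{n} SSAA: ~{ssaa_fps(60, n):.0f} fps from a 60 fps baseline")

# 1x1 SSAA: ~60 fps
# 2x2 SSAA: ~15 fps -- why ubersampling-style modes murder framerates
```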
 

tokkun

Member
I could be wrong but from what I was told (last week) this is incorrect.

It says "a lot of people" received launch event invitations with the GTX680 mentioned, so I'm not sure where there is room to be incorrect, short of being a deliberate fabrication.
 

artist

Banned
It says "a lot of people" received launch event invitations with the GTX680 mentioned, so I'm not sure where there is room to be incorrect, short of being a deliberate fabrication.
AMD and Nvidia both tend to do media briefings prior to launch. It doesn't mean launch day is the same as press day. Remember, AMD did a press day on an aircraft carrier; did we get reviews the same day?
 

Corky

Nine out of ten orphans can't tell the difference.
Can someone please explain the branding of Kepler to me? I don't understand; is the 680 the successor to the 580 or not?
 

artist

Banned
We didn't, because it took a while for the carrier to go ashore.
lol?

Can someone please explain the branding of Kepler to me? I don't understand; is the 680 the successor to the 580 or not?
GK104, going by Nvidia's normal nomenclature, should end up as the GTX 660 Ti, but the latest rumors suggest it will end up as the GTX 670 Ti and now the GTX 680.

The move is likely based on the performance of GK104, the delay of GK110, or both.
 
And the rumors keep rumoring.

http://fudzilla.com/home/item/26179-kepler-256-bit-faster-in-dx11-games-than-7970

Kepler is real, people have seen it, editors are getting cards and even notebooks based on it as we speak, and the launch happens later this month.

We can confirm a few things now. Kepler has a 256-bit memory interface and the fastest one to launch in the second half of March is GK104. The GK106 and GK108 are also on the way, and of course they will end up slower than GK104.

The GK104 is much smaller than Tahiti, the chip behind the Radeon HD 7970 family, and according to our intel it should end up faster in some cases. In all DirectX 11 games, Nvidia's GK104-based Kepler card should end up faster, at least this is what a few sources are confirming.

Nvidia has been producing its 28nm GK104 chips for a while, and the focus is on big OEMs, especially notebook manufacturers, but there will be plenty of add-in-board partners with cards at launch, at least until they sell out.

Sources close to Nvidia are confident that the green team will win this round, but as always let’s take this with a grain of salt. Either way, AMD already has 28nm products in the shops and it commands a clear lead over Nvidia, so there is a lot of catching up to do.
 

Durante

Member
This is good timing; the fan on my 460 is starting to act up.

I swear Charlie retroactively changed his article, though. I'm 99% sure he originally said GK104 would have dedicated PhysX hardware (which everybody thought was nonsense), but now the article just talks about PhysX optimizations, not dedicated hardware.
This actually happened. The guy has 0 integrity.
 

Corky

Nine out of ten orphans can't tell the difference.
What's a "successor"? GK104/680 will be faster than 580 but it won't be the fastest of GK GPUs.

460 -> 560
470 -> 570
480 -> 580 -> x

So the fastest one will be the 690? And it won't be dual-GPU?
 