
Nvidia Kepler - Geforce GTX680 Thread - Now with reviews

squicken

Member
I wouldn't spend $500 on a card with only 2GB of VRAM. I still don't know why they did that. I'm sure the reviews will have very good explanations, but 1.5GB 580s were hitting a VRAM wall. At least with the 7950 and 7970 getting passed by, early adopters can talk themselves into CFX down the line
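For a rough sense of scale, here's a back-of-envelope sketch of what the render targets alone cost at 2560x1600 (illustrative numbers only; actual VRAM use is dominated by textures and varies by engine and driver):

```python
# Back-of-envelope VRAM estimate for render targets at 2560x1600.
# Illustrative only: real usage is dominated by texture/asset data and
# depends on the engine and driver, so treat these as rough numbers.

def surface_mib(width, height, msaa=1, bytes_per_sample=4):
    """Approximate size of one color or depth surface, in MiB."""
    return width * height * msaa * bytes_per_sample / (1024 ** 2)

W, H = 2560, 1600
for msaa in (1, 4, 8):
    total = surface_mib(W, H, msaa) * 2   # one color + one depth/stencil surface
    print(f"{W}x{H} at {msaa}xMSAA: ~{total:.0f} MiB for color + depth")

# Roughly 31 / 125 / 250 MiB -- render targets alone are small, so the 1.5GB
# "wall" people hit on a 580 is mostly texture data at high settings.
```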
 

mr2xxx

Banned
What's the consensus on it winning "handily" against the 7900 series? From the leaked benches it seems like it's probably 10-15% better; solid, but not winning handily unless the benches were wrong.
 

squicken

Member
What's the consensus on it winning "handily" against the 7900 series? From the leaked benches it seems like it's probably 10-15% better; solid, but not winning handily unless the benches were wrong.

Granted, a lot of this is what I've pieced together from rather spirited partisan debates, but the 680 is supposed to be more energy efficient AND win the benches.
 

lowrider007

Licorice-flavoured booze?
Do not even consider buying this card if you already have any of the following:

GTX 570, GTX 580, or GTX 590.

Why? Is this card not a big jump over the 500 series? I have a 570 and I'm looking for a pretty big jump in performance, although I could probably only afford a 670 rather than the 680, tbh.
 
[attached image: amazon1.jpg]

and I'm selling my gfx card =P

Damn, how long have you been saving gift cards?
 

J-Rzez

Member
Wait a bit more...bigger and better things are on the near horizon.

I'd prefer the non-reference versions myself; hopefully they won't take long to get to market. Unless you're talking about a better series, which I'd be interested in. Pricing is rather stiff on this so far, if the rumors are true: not $600 ouch, but $500-530 ouchie. I wonder how long it will take for other versions and/or models to arrive.
 

jambo

Member
I'm currently running the following system and wanted some general advice and opinions on upgrading over the next few months:

Core i7 870
8GB DDR3
2x GTX 570 1280 in SLI
HP lp3065 @ 2560x1600

I'm thinking the best thing to do would be to wait for the 690, grab one and then a few months later grab another 690 and SLI again.

Is there a rough ETA on the 690 or whatever other cards NVIDIA are bringing out?

Thanks.
 

_woLf

Member
I currently have two GTX 260s and I'm definitely interested in the 680, but I feel like it'll be replaced by a 3GB/4GB 690 really quickly... does anyone have any idea how long that'll take to come out? Probably another six months, yeah?
 

artist

Banned
Why are we even discussing 4K at this point? Can you even purchase one? If so, for how much? And even then, wouldn't you need three of the highest-end Kepler cards to play a game?

Also, what is EVFC?
That is NVENC, the thing Charlie confused for a dedicated PhysX block in Kepler.

It does video encoding/transcoding.
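As a hedged aside on what that encode block is used for: NVENC was later exposed through ffmpeg's h264_nvenc encoder (it wasn't accessible that way at launch), so a GPU-offloaded transcode looks roughly like the sketch below; the file names are just placeholders.

```python
# Minimal sketch of a GPU-offloaded transcode using NVENC via ffmpeg's
# h264_nvenc encoder. Assumes an ffmpeg build compiled with NVENC support;
# the input/output file names are placeholders.
import subprocess

def transcode_nvenc(src, dst, bitrate="8M"):
    """Re-encode src to H.264 on the GPU's NVENC block instead of the CPU."""
    subprocess.run(
        ["ffmpeg", "-i", src,
         "-c:v", "h264_nvenc",   # hand video encoding to the NVENC hardware
         "-b:v", bitrate,        # target video bitrate
         "-c:a", "copy",         # pass the audio stream through untouched
         dst],
        check=True,
    )

transcode_nvenc("gameplay_capture.mkv", "gameplay_nvenc.mp4")
```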
 

mxgt

Banned
Tempted to replace my 570 with it, but the 570 is more than enough at the moment, so I'll wait for the better cards to arrive.
 
Whoa, nearly the same power consumption as the 560 Ti. Pretty clear-cut victory with the benchmarks, too. Bring on the $200 variant, I like where this is going.
 

Durante

Member
Power usage under load is very impressive.

Edit: On the other hand, this is really a GTX 660-class chip, and its power draw is more or less on par with the 560's.
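To put that in perspective, here's a rough sketch from paper specs (approximate reference clocks and board TDPs; theoretical FP32 throughput, not measured gaming power):

```python
# Rough perf-per-watt comparison from paper specs (approximate reference
# clocks and board TDPs); theoretical FP32 throughput, not measured power.

cards = {
    # name: (cores, shader clock MHz, board TDP W)
    "GTX 560 Ti": (384, 1645, 170),
    "GTX 580":    (512, 1544, 244),
    "GTX 680":    (1536, 1006, 195),
    "HD 7970":    (2048, 925, 250),
}

for name, (cores, clock_mhz, tdp_w) in cards.items():
    gflops = 2 * cores * clock_mhz / 1000.0   # 2 flops per core per clock (FMA)
    print(f"{name:10s}: {gflops:6.0f} GFLOPS, {tdp_w} W, {gflops / tdp_w:5.1f} GFLOPS/W")
```

On paper that's better than double the GFLOPS per watt of the big Fermi parts, which is why the load numbers look so good.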
 

Rootbeer

Banned
Well, waiting to see the price on the eVGA model, but I am strongly considering replacing my eVGA 570 SC with a 680 :) Slot it into my current aging rig (i7 720, 6GB DDR3, X58 mobo) and wait it out until Intel Ivy Bridge rolls out so I can build a new PC around it.
 

bee

Member
Definitely getting one, although I think I will leave it two weeks to avoid the price gouging. Looking forward to the triple-screen benches, and yes, 2GB is enough for that as it's what I run now.
 

lowrider007

Licorice-flavoured booze?
I think after looking at those benches I may stay with my overclocked 570 for this gen; the minimums aren't really a massive jump, and I don't really care about frame rates beyond 60fps.
 
Minimums in general (except for Battlefield 3) are better on the HD 7970, which I consider the better architecture, with more potential and more elegance. Add to that the long-idle TDP advantage, which also matters if you leave your rig on at night...
 

artist

Banned
This may or may not get covered in the reviews, but here goes...

How did Nvidia manage to triple the core count in such a small die? They ditched the hotclocks! Yes, probably half of the 3D geek world already knows that answer (hello dr_rus), but what else did Nvidia change? For starters, they increased the density of the SMs. Second, in the instruction-issue pipeline, they replaced the several Fermi blocks dedicated to scheduling with a single, more intelligent scheduler. Third, each core was optimized, both in terms of area and timing.

The seed of the hotclocks was planted with G80. Back then, on 90nm, Nvidia needed clocks above 1GHz to achieve their performance goals, so they decided to lengthen the pipelines, a trick they had learned from G70 (7800 GTX), which spent ~20M transistors on pipeline-lengthening logic to hit its clock. That let them architect G80 with 128 cores @ 1.35GHz rather than going for 256 cores at ~700MHz; 256 cores wouldn't have been feasible on 90nm anyway, so the hotclock was an outcome of working around a constraint. They continued the hotclock approach with GT200 (GTX 280) and GF100/GF110 (GTX 480/GTX 580) simply because those process nodes could not reach the clock targets Nvidia had in mind. However, the clock target got closer with each new node. With 28nm, they knew that even in the worst case, if the target clocks were missed, they could work on leakage, timing, wiring lengths and other aspects to almost get there.

With Kepler, Nvidia hit all their desired goals: minimize die area by aggressively tuning each unit of the core, remove the hotclocks, invest the reclaimed area in a higher core count, and reduce power consumption (removing the hotclocks helps there too).

So why 1536 and not 768? GF114 (560 Ti) normalized, i.e. without hotclocks, would be 768 cores (2 x 384). With a new architecture on a new node, the expectation is to double the raw performance: 768 x 2 = 1536.
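To put rough numbers on that, here's a quick sketch using approximate reference specs (384 cores at a ~1645MHz hotclock for the 560 Ti, 1536 cores at ~1006MHz for the 680); this is theoretical FP32 throughput only, not game performance.

```python
# Back-of-envelope: hotclock-normalized core count and theoretical FP32 rate.
# Uses approximate reference specs; paper FLOPS, not game performance.

def fp32_gflops(cores, shader_clock_mhz):
    """Peak FP32 throughput assuming one FMA (2 flops) per core per clock."""
    return 2 * cores * shader_clock_mhz / 1000.0

# GF114 (GTX 560 Ti): 384 cores at a ~1645 MHz hotclock (2x the ~822 MHz base)
gf114_gflops = fp32_gflops(384, 1645)
gf114_normalized_cores = 384 * 2          # equivalent cores at base clock -> 768

# GK104 (GTX 680): 1536 cores, no hotclock, ~1006 MHz base
gk104_gflops = fp32_gflops(1536, 1006)

print(f"GF114 normalized cores: {gf114_normalized_cores}")
print(f"GK104 cores:            1536 ({1536 // gf114_normalized_cores}x)")
print(f"Theoretical FP32: {gf114_gflops:.0f} vs {gk104_gflops:.0f} GFLOPS "
      f"(~{gk104_gflops / gf114_gflops:.1f}x)")
```

Doubling the normalized core count plus the higher base clock is where the roughly 2.4x jump in paper throughput over GF114 comes from; actual game performance scales less than paper FLOPS, of course.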
 
Finally, a real anti-aliasing solution for real-time rendering. I expect many torches and pitchforks to land at Nvidia's door from those who like an over-sharpened, aliased mess, but in time most should recognise the benefits.

As long as there is none of the ghosting prevalent in older temporal AA solutions, I'm all for it.
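For anyone wondering where that ghosting comes from, here's a toy sketch of plain temporal accumulation (a simplified illustration, not Nvidia's actual TXAA): each frame is blended with the accumulated history, so without reprojection or history rejection, samples from previous frames linger on moving or disappearing objects.

```python
# Toy sketch of temporal AA accumulation, not Nvidia's actual TXAA algorithm.
# Each new frame is blended with the accumulated history; if the history isn't
# reprojected/rejected for moving pixels, stale samples linger as ghosting.
import numpy as np

def temporal_accumulate(current, history, alpha=0.1):
    """Exponential moving average of frames: low alpha = smoother but more ghosting."""
    return alpha * current + (1.0 - alpha) * history

# Fake 4x4 grayscale "frames": an object present in frame 0 disappears in frame 1.
frame0 = np.zeros((4, 4)); frame0[1:3, 1:3] = 1.0
frame1 = np.zeros((4, 4))

history = frame0
history = temporal_accumulate(frame1, history)
print(history[1:3, 1:3])   # ~0.9 where the object used to be -> the ghost trail
```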
 