
Nvidia Kepler - Geforce GTX680 Thread - Now with reviews

cilonen

Member
Asus had better be ready with a DirectCU II model around launch, since I delayed my first build all because of this waitforKepler/Buya7970 fiasco

I'm in exactly the same position. Man, last night I was considering dual GTX 580s, when just last week I was sure I would go 7970, right after deciding SLI looked like too much trouble...

Still no closer to a firm decision on my GPU. I give it another month or so until I start getting really twitchy.
 

Monarch

Banned
Fiasco? Are you new to the hardware scene?

Yep, and I realize it's commonplace to wait several months for two different architectures to come out, but it's my first rig, so the wait is unbearable :(
Even harder when I have all my other parts ready next to my couch
 
I'm in exactly the same position. Man, last night I was considering dual GTX 580s, when just last week I was sure I would go 7970, right after deciding SLI looked like too much trouble...

Still no closer to a firm decision on my GPU. I give it another month or so until I start getting really twitchy.

Yep. The GPU is what I'm waiting for, and have been for some time. I've been close to buying the GTX 580 and the 7970, and have been waiting on Kepler. I just want it over with already.
 

Sethos

Banned
Yep, and I realize it's commonplace to wait several months for two different architectures to come out, but it's my first rig, so the wait is unbearable :(
Even harder when I have all my other parts ready next to my couch

Oh okay, was just about to say :p

I understand you; I've been in a similar situation before. Honestly, I'd probably just jump on the 7970, get the Asus DirectCU II, overclock it even further, and you'll have a card that can easily rival Nvidia's upcoming cards. The only reason I'm waiting is that I have a 580 which can tide me over, and despite all my bad experiences I want to go dual card again. Nvidia are much better at dual-card solutions because they have tech built into their cards to eliminate microstutter etc., and their support is far superior. I doubt their single-card solutions will be that much better.

What has me worried is Nvidia's first card isn't even from the top-tier of cards, so I'd have to wait even longer :|
 

artist

Banned
The rule of thumb is the games you play. If you're not getting the performance you want in the games you play, you splurge and get the best bang-for-buck card. Enjoy it as much as you can; if the rumor mill is exceedingly positive about upcoming cards, sell it and get the new ones.
 
http://www.brightsideofnews.com/new...k1042c-geforce-gtx-670680-specs-leak-out.aspx

First and foremost, in NVIDIA's internal nomenclature, this part should be named GeForce GTX 660 (the company is debating GeForce GTX 660, 670 or 680 - and the final verdict will 99% be GTX 680). This is a 349-399 dollar part which would conventionally replace the 300-dollar "GeForce GTX 560 Ti 2GB", but will offer higher performance than the GTX 580. Significantly higher… and more importantly, not just beating the $449 Radeon HD 7950 3GB, but also endangering the $549 Radeon HD 7970. Yeah, it is that fast.

Why? Because we're talking about 1536 CUDA cores divided into four Graphics Processing Clusters (GPC), each of which contains four Streaming Multiprocessors (SM).

The GPU clock is estimated at 950MHz, but our sources are telling us that there are different clocks running in the lab: 772MHz for clock-per-clock comparison versus the GTX 580, 925MHz for clock-per-clock comparison versus Tahiti XT, while the clock range for the shipping parts is between 950 and 1000MHz. We were told that NVIDIA did not laugh too much at the Verdetrol performance-enhancing pills http://www.legitreviews.com/news/12443/ and that the company is trying to tweak the BIOS (more importantly, the thermal envelope) in order to get the parts running at 1GHz. If NVIDIA fails, the partners are certain to offer a 1GHz board (just like in the case of Tahiti XT and 3rd-party vendors).

The memory is set at 1.25 GHz in Quad Data Rate (QDR, i.e. 5GHz "effective"). This 25% boost over GF100/GF110 is something that thrilled NVIDIA engineers, since this is the first time their memory controllers were able to reach AMD's with a stable default clock frequency. Remember, unlike GDDR3 memory, GDDR5 is "actively driven" and the memory controller does much more than it used to. Given that AMD is actually the company that created the memory standard, AMD's GPU engineers have a good advantage in terms of just how high they can clock GDDR5 memory.

This clock results in 160GB/s of video memory bandwidth, a drop from the GTX 580 (192.4GB/s), but a big boost over the GTX 560 Ti and its 128.27GB/s (excluding the OEM versions), and just a bit higher than the GTX 560 Ti OEM (GF110 die), GTX 560 Ti 448 Cores LE and GTX 570, all of which have the same GDDR5 memory clock and a bandwidth of 152GB/s.

All of this results in 2.9 to 3.05 TFLOPS single-precision, i.e. 486-500 GFLOPS double-precision. Quadro and potential Tesla versions of this board will feature unlocked double-precision, meaning an identically clocked board would have around the same amount of DP GFLOPS as the GTX 580 had in single precision… an impressive boost indeed. In any case, higher than what Fermi-based Quadros and Teslas were able to achieve.

Looks tasty. The only negative I see is that it will be a little bandwidth-crippled at the high end. But near-7970 performance for $349-399 sounds great.
 

artist

Banned
Real NVIDIA Kepler, GK104, GeForce GTX "670/680" Specs Leak Out

In the past few weeks, we've seen various fishy rumors on the product specifications of the first discrete GPU using the upcoming 28nm Kepler architecture, the GK104. While we have known parts of the specifications, such as no hot clocks, the doubling of the Streaming Multiprocessor (SM) node from 48 to 96 CUDA cores (i.e. Stream Processors), and the 256-bit memory controller, the real specifications are (finally) here... even though our information differs minimally from the information originally posted on 3DCenter.org.

First and foremost, in NVIDIA's internal nomenclature, this part should be named GeForce GTX 660 (the company is debating GeForce GTX 660, 670 or 680 - and the final verdict will 99% be GTX 680). This is a 349-399 dollar part which would conventionally replace the 300-dollar "GeForce GTX 560 Ti 2GB", but will offer higher performance than the GTX 580. Significantly higher… and more importantly, not just beating the $449 Radeon HD 7950 3GB, but also endangering the $549 Radeon HD 7970. Yeah, it is that fast.

Why? Because we're talking about 1536 CUDA cores divided into four Graphics Processing Clusters (GPC), each of which contains four Streaming Multiprocessors (SM). Given that there are 96 Stream Processors (or CUDA cores; NVIDIA seemingly cannot make up its mind what to call them) per SM, we can see that, for instance, the entry-level Kepler has a single SM unit with 96 CUDA cores/Stream Processors. Can you say… a mobile GPU part that allegedly taped out ages ago… and just by some accident, ended up in a Samsung notebook? Only time will tell for those.

The base combinations for NVIDIA's future GPUs are now 96 (1 SM), 384 (1 GPC), 768 (2 GPC), 1536 (4 GPC) and 2304 CUDA cores/Stream Processors (6 GPC). Given that our sources are telling us the big monolithic die comes with 2304 SP, the question is what can be done with the memory controller. Logic dictates Kepler can come with the following memory controller configurations: 64-bit, 128-bit, 192-bit, 256-bit, 320-bit and 512-bit. To us, it is most logical that we see 64-bit low-end, 128-bit mainstream, 256-bit high-end and either 384-bit / 512-bit on the high-end compute side - and the GeForce GTX 690, but this time as a single monolithic die instead of the typical mix'n'match of two high-end GPUs.
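
As a quick sanity check on that scaling, here's a minimal Python sketch of the rumored building blocks; the cores-per-SM, SMs-per-GPC and GPC counts are just the unconfirmed figures from the article above:

def cuda_cores(gpcs, sms_per_gpc=4, cores_per_sm=96):
    # Rumored Kepler layout: each GPC holds 4 SMs, each SM holds 96 CUDA cores
    return gpcs * sms_per_gpc * cores_per_sm

for gpcs in (1, 2, 4, 6):
    print(gpcs, "GPC ->", cuda_cores(gpcs), "CUDA cores")
# 1 GPC -> 384, 2 GPC -> 768, 4 GPC -> 1536 (GK104), 6 GPC -> 2304 (rumored big die)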

Continuing with the GK104 GPU, the chip has the same amount of fixed-function logic as the competing Tahiti XT - 32 ROPs (Raster OPeration units) and 128 TMUs (Texture Memory Units). As you can see in our architectural mockup, the decision to go with a 256-bit memory controller results in 2GB of GDDR5, and this is the only part where NVIDIA really loses to AMD: both the 7950 and 7970 come with 3GB of GDDR5 memory. True, the difference in planned price is estimated at $100 less for NVIDIA boards ($349-399 versus $449/7950 and $549/7970), which should mitigate the paper advantage of the HD 7900 Series.

How high can it go?
Just like GF110, the GK104 comes in two different versions: the GeForce board will run double-precision at one quarter rate - while Quadro and Tesla will run at half-rate.

The GPU clock is estimated at 950MHz, but our sources are telling us that there are different clocks running in the lab: 772MHz for clock-per-clock comparison versus the GTX 580, 925MHz for clock-per-clock comparison versus Tahiti XT, while the clock range for the shipping parts is between 950 and 1000MHz. We were told that NVIDIA did not laugh too much at the Verdetrol performance-enhancing pills http://www.legitreviews.com/news/12443/ and that the company is trying to tweak the BIOS (more importantly, the thermal envelope) in order to get the parts running at 1GHz. If NVIDIA fails, the partners are certain to offer a 1GHz board (just like in the case of Tahiti XT and 3rd-party vendors).

The memory is set at 1.25 GHz in Quad Data Rate (QDR, i.e. 5GHz "effective"). This 25% boost over GF100/GF110 is something that thrilled NVIDIA engineers, since this is the first time their memory controllers were able to reach AMD's with a stable default clock frequency. Remember, unlike GDDR3 memory, GDDR5 is "actively driven" and the memory controller does much more than it used to. Given that AMD is actually the company that created the memory standard, AMD's GPU engineers have a good advantage in terms of just how high they can clock GDDR5 memory.

This clock results in 160GB/s of video memory bandwidth, a drop from the GTX 580 (192.4GB/s), but a big boost over the GTX 560 Ti and its 128.27GB/s (excluding the OEM versions), and just a bit higher than the GTX 560 Ti OEM (GF110 die), GTX 560 Ti 448 Cores LE and GTX 570, all of which have the same GDDR5 memory clock and a bandwidth of 152GB/s.
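
If you want to check those bandwidth figures yourself, peak bandwidth is just the effective memory clock times the bus width; a minimal sketch using the clocks and bus widths quoted above (the GK104 numbers are still rumor):

def bandwidth_gbs(base_clock_mhz, bus_width_bits, data_rate=4):
    # effective clock (MHz) x bus width (bytes), returned in GB/s
    return base_clock_mhz * data_rate * 1e6 * (bus_width_bits / 8) / 1e9

print(bandwidth_gbs(1250, 256))   # rumored GK104: 160.0 GB/s
print(bandwidth_gbs(1002, 384))   # GTX 580: ~192.4 GB/s
print(bandwidth_gbs(1002, 256))   # GTX 560 Ti: ~128.3 GB/s
print(bandwidth_gbs(950, 320))    # GTX 570 / 560 Ti 448: 152.0 GB/s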

All of this results in 2.9 to 3.05 TFLOPS single-precision, i.e. 486-500 GFLOPS double-precision. Quadro and potential Tesla versions of this board will feature unlocked double-precision, meaning an identically clocked board would have around the same amount of DP GFLOPS as the GTX 580 had in single precision… an impressive boost indeed. In any case, higher than what Fermi-based Quadros and Teslas were able to achieve.
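
Likewise, the TFLOPS figures fall out of cores x 2 FLOPs (one fused multiply-add) per clock; a quick sketch with the rumored numbers:

def sp_gflops(cores, clock_mhz):
    # 2 floating-point ops (one FMA) per CUDA core per clock
    return cores * 2 * clock_mhz / 1000.0

print(sp_gflops(1536, 950))    # ~2918 GFLOPS, the low end of the quoted range
print(sp_gflops(1536, 1000))   # ~3072 GFLOPS, just above the quoted 3.05 TFLOPS
# The quoted 486-500 GFLOPS double-precision figures work out to roughly SP/6.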

You won't need to wait too long, as NVIDIA is already starting pre-sale activities and getting ready to counter AMD and their momentum with the Radeon 7700 (Cape Verde, February 15), 7800 (Pitcairn, March 6) and 7900 Series (released).

GK104
  • 1536 Cuda Cores
  • 32 ROPs
  • 128 TMUs
  • Hot clock gone (for shaders)
  • Core clock between 950MHz-1GHz
  • 256-bit
  • 160GB/s memory bandwidth (GDDR5 @ 5GHz)
  • Will be branded as GTX680
  • 2 GB frame buffer
  • $349-399

GK110/GK112
  • 2304 Cuda Cores
  • 384-bit/512-bit
  • Will be branded as GTX690
  • 3/4 GB frame buffer

Sound good?

edit: Damn got beat .. bbbbut, I was bolding, summarizing! :p
 

Hazaro

relies on auto-aim
Sounds good if true. What I would actually expect from a new arch and process.
Give me that low end magic card too.
 

Hawk269

Member
For me, I game on an HDTV at 1080p. I'm running an Intel i7-2600K OC'd to 4.6GHz and 2x GTX 580 3GB Classifieds. Everything I play runs great at a rock solid 60fps, but on occasion I run across some games that can't manage it if everything is maxed out.

So... would it be possible that any card coming out in 2012 will be able to run, let's say, The Witcher 2 with ubersampling at 60fps, 1080p? I'm mainly referring to a single card.
 

PowerK

Member
For me, I game on an HDTV at 1080p. I'm running an Intel i7-2600K OC'd to 4.6GHz and 2x GTX 580 3GB Classifieds. Everything I play runs great at a rock solid 60fps, but on occasion I run across some games that can't manage it if everything is maxed out.

So... would it be possible that any card coming out in 2012 will be able to run, let's say, The Witcher 2 with ubersampling at 60fps, 1080p? I'm mainly referring to a single card.

I doubt it since you mentioned "single" card. I don't think any single card is going to beat GTX580 SLI this year.
 

Corky

Nine out of ten orphans can't tell the difference.
I doubt it since you mentioned "single" card.

Not only that, but at that point people have to give up on the idea of 60fps+ with no dips and all settings maxed in every single game, regardless of GPU.

SLI 680? Beast of a setup. But bet your ass that during a 12-month span after buying those cards (again, just taking dual flagship GPUs as an example) there will be a game (assuming you play on a monitor that justifies the need for $1500 worth of GPUs) that will bring the setup under 60 at some point, be it momentarily or constantly. It doesn't even have to be a crazy technical/beautiful game at that; it could be driver issues, it could be software issues with the game itself, it could be bad optimization, etc.

edit : just for the sake of posterity (and yes, I'm well aware that benchmark results can differ greatly from real-world results), take into consideration the Alan Wake benchmarks: a 7970 3GB gives you 35 (?) fps at 2560x1440 and max settings. Slap in another 7970 and you're at a whopping 70 avg, given flawless CF scaling. And anyone who knows benchmarks knows that a 70 fps average doesn't mean 65 fps half the time and 75 fps the other half.
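
Just to put numbers on that (a made-up illustration, not from any benchmark), a run can average 70 fps while spending whole seconds far below it:

# invented example: segments of (seconds spent, fps during that stretch)
segments = [(8.0, 80), (2.0, 30)]
frames = sum(seconds * fps for seconds, fps in segments)
duration = sum(seconds for seconds, _ in segments)
print(frames / duration)   # 70.0 "average" fps, despite two full seconds at 30 fps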

I'm still dreaming of a future where the fucking flagship GPU would "be enough"... I'd happily pay 900 dollars for a GPU if there was absolutely no reason to go dual GPU.
 

Hawk269

Member
Not only that, but at that point people have to give up on the idea of 60fps+ with no dips and all settings maxed in every single game, regardless of GPU.

SLI 680? Beast of a setup. But bet your ass that during a 12-month span after buying those cards (again, just taking dual flagship GPUs as an example) there will be a game (assuming you play on a monitor that justifies the need for $1500 worth of GPUs) that will bring the setup under 60 at some point, be it momentarily or constantly. It doesn't even have to be a crazy technical/beautiful game at that; it could be driver issues, it could be software issues with the game itself, it could be bad optimization, etc.

edit : just for the sake of posterity (and yes, I'm well aware that benchmark results can differ greatly from real-world results), take into consideration the Alan Wake benchmarks: a 7970 3GB gives you 35 (?) fps at 2560x1440 and max settings. Slap in another 7970 and you're at a whopping 70 avg, given flawless CF scaling. And anyone who knows benchmarks knows that a 70 fps average doesn't mean 65 fps half the time and 75 fps the other half.

I'm still dreaming of a future where the fucking flagship GPU would "be enough"... I'd happily pay 900 dollars for a GPU if there was absolutely no reason to go dual GPU.

One can dream Corky, one can dream.

So based on the stuff above, the 690 will be Nvidia's single-GPU flagship card then? So this would be the next-gen 580? Their numbering is a bit confusing. You would think a 690 would be dual-GPU like the 590 is today.

Based on the cost estimates for the new 680, what should we expect a 690 to cost? And for those more involved in the rumors/speculation, is the 690 going to be the big gun for Nvidia, or is there something bigger coming later in the year (again, single GPU)?
 

mr_nothin

Banned
So is the midrange card being named a 680? Or is this just a 580 refresh to make sure the market isn't leaving Nvidia behind (in terms of "new cards")?
 
The rule of thumb is the games you play. If you're not getting the performance you want in the games you play, you splurge and get the best bang-for-buck card. Enjoy it as much as you can; if the rumor mill is exceedingly positive about upcoming cards, sell it and get the new ones.
Or use it for PhysX. It's why I went with a 550 Ti on my recent build. Saved me $$ and so far it has performed well in Skyrim and Space Marine @ 1080p. I have to downgrade a few "ultra" settings to "high" and scale back the AF/AA some, but I'm not complaining. Looks sexy on the big screen.
 
Here's part of the full monty, according to Lenzfire (original source):

GTX 690: 2x1.75GB, 2x6.4 billion transistors, $999, Q3 2012
GTX 680: 2GB, 6.4 billion transistors, $649, April
GTX 670: 1.75GB, 6.4 billion transistors, $499, April
GTX 660 Ti: 1.5GB, 6.4 billion transistors, $399, Q2/Q3 2012
GTX 660: 2GB, 3.4 billion transistors, $319, April
GTX 650 Ti: 1.75GB, 3.4 billion transistors, $249, Q2/Q3 2012
GTX 650: 1.5GB, 1.8 billion transistors, $179, May
GTX 640: 2GB, 1.8 billion transistors, $139, May

Lenzfire posted plenty of other details about each GPU, but what's really interesting is how Kepler's performance supposedly scales. According to Lenzfire's charts, the GTX 680 and 670 will outpace AMD's Radeon HD 7970 by around 45 percent and 20 percent, respectively, and the GTX 670 will run around 20 percent faster than the 7950.

Source: Maximum PC

Legit?
 

Sethos

Banned
The VRAM amount is disappointing if true. Though it seems completely unlikely.

EDIT: Oh, it's the same drivel that was posted before.
 

Corky

Nine out of ten orphans can't tell the difference.
Not interested in the GK104 but dat GK110....come to me

lol smokey I could've sworn I read you saying "I'm done... that's it, I'm done with the build" less than 48h ago in the I need a pc thread :3
 

Sethos

Banned
lol smokey I could've sworn I read you saying "I'm done... that's it, I'm done with the build" less than 48h ago in the I need a pc thread :3

Well, there's nothing more to do then - Perfect time to think about his next build :p

Think I might go all-out this gen and go for 2 x 690s
 

kevm3

Member
1 690 would destroy just about every game. 2 690s seems more than excessive. Aren't the x90 series essentially two cards in one chassis?
 

Corky

Nine out of ten orphans can't tell the difference.
Well, there's nothing more to do then - Perfect time to think about his next build :p

Think I might go all-out this gen and go for 2 x 690s

Lol, well when you put it that way...

Also, I'm strongly leaning towards going single gpu and OCing the shit out of it. I have a strong feeling that the prices are going to be just as mental as the AMD cards, so I probably won't even be able to afford SLI 660 Tis...

Rather get 1 680 I guess.

Just release them already, sheesh. Also, SLI 690s, Sethos? GPUs for (probably) 2 grand? I want your job :I
 

artist

Banned
New (bad) news ..

Nvidia in for a tough year, pressure on both high end and low end?

By now, it's clear that the very high-end Nvidia Kepler parts aren't going to see the light of day until much later this year, while the GK104 definitely can't fight AMD's GPUs for the performance crown. On top of that, Intel is eating away at the low end, where Sandy Bridge Xeons chip away at the profitable entry-level Quadro business, and Ivy Bridge is to follow...

Nvidia, as a corporate entity, still suffers from certain heavy structural weaknesses - the lack of an x86 CPU cannot be compensated for by ARM Tegra, as there are a dozen other ARM vendors, plus the hungry upcoming Chinese mobile CPU companies, fighting for the same market, some of them with far greater financial resources - and, in a few cases, their own fabs - than Nvidia itself. Add to that a speculative story I heard in China - that only a very large order of many thousands of Nvidia Tesla GPGPU cards for the Tianhe multi-petaflop supercomputer a few years ago saved Nvidia from a rumoured Intel takeover attempt, not that I would recommend such a thing to Intel for its own corporate health anyway, by artificially boosting Nvidia's stock price at the time - and the situation becomes truly intriguing.

Having exploited its own past performance leads to the fullest, Nvidia knows well the benefits of the 'waterfall effect', where many users, including our own readers, watch who holds the current speed lead at a given time and, based on that, steer their buying decisions for lower-cost parts towards the same vendor. Right now, AMD holds that lead with the HD 7970 and, soon, the HD 7990.

The GK100 'Kepler' and its GK110 follow-on were supposed to wrest that lead back for Nvidia, just a month or two from now. However, right now that looks like a lost cause, at least for the next half year. For whatever reason, the GK104 upper-mid-range part, a follow-on to the GeForce 560 family, is now the top-end part to launch, and it can't - I mean can't - win over the HD 7970 in any meaningful use. That role was reserved for the GK100 and its follow-ons.

Either way, the lack of a premium 'crown holder' part is bound to affect Nvidia's image. Unlike AMD, with its - at times problematic - CPU business, and with its chipset presence long gone, Nvidia has no other major PC foothold aside from GPUs. AMD can, at least, alternate between attractive CPU/APU and GPU launches to keep the market buzzing, even if some things go wrong. On the other hand, the 'fashion fad' tablets and smartphones, with zero CPU vendor loyalty to boot, will never fully replace the long-term stability of the PC market.

Meanwhile, as covered here before, Intel has - at truly minimal cost to itself - suddenly started eroding Nvidia's key profit center and 'leadership refuge' of sorts, where it invested top dollar and gained top profits in return: the Quadro OpenGL professional GPU family. Even though the Sandy Bridge Xeon E3 integrated GPUs can only target the entry level of the Quadro line, the 'good enough yet certified performance' approach simply works well for the polygon-based, zero-effects but large and profitable 3-D CAD market. Ivy Bridge, with a further performance doubling and more app certifications, will deepen the attack as well.

So, Nvidia has to address both the high-end and low-end GPU market problems this year. The performance leadership is lacking, and that will affect things down the product line too. On the high end, besides obviously pushing the true high-end part forward if circumstances allow, it should also embrace one good idea that AMD has, if it wants more acceptance in mainstream GPGPU compute in the future - no deliberate crippling of double-precision FP math functionality in high-end consumer parts. AMD never curtailed DP FP performance in its high-end HD 6900 and HD 7900 lines, and gained more traction even in HPC because of it. After all, the HD 7990 is a teraflop-class double-precision FP monster.

On the low end, cutting Quadro margins to keep the line alive might be a good long-term idea, to give users a reason to still bother with the products. Simply put, even if the Intel OpenGL graphics in their processors are somewhat slower (and how much speed beyond vertex processing do you need anyway for typical polygon-based 3-D building or machine models?), they come fully certified, supported and - free. The company has invested so much over the years in this flagship product brand and the market segments it covers that it would be a pity to see it squeezed hard by Intel from the low end and by AMD's increasingly assertive FirePro OpenGL line - with improved drivers and app certifications now - at the high end...

Read more: http://vr-zone.com/articles/nvidia-...igh-end-and-low-end-/14937.html#ixzz1myJS9iWR

Report: NVIDIA is releasing GK104 in March as GeForce GTX 670 Ti

A report from Sweclockers claims that NVIDIA is finally set to release the first Kepler GPU in March. As expected, first in line will be GK104 - to be branded as GeForce GTX 670 Ti.

The GTX 670 Ti branding was first revealed by VR-Zone Chinese earlier this month, and Sweclockers' sources - Taiwanese AIB partners - confirm the same. They peg the release date as March, after CeBIT 2012. This suggests a release some time between March 11th and March 31st. The GeForce GTX 670 Ti is vaguely indicated to have performance "better than GeForce GTX 580 and Radeon HD 7950", which implies average performance somewhere between the HD 7950 and HD 7970.

Rumours suggest GK104 will bring a radically new architecture, with specifications including 1536 shaders, 128 TMUs, 32 ROPs and 2 GB of GDDR5 memory on a 256-bit bus. The core clock for the top bin, the GTX 670 Ti, is expected to be around 950 MHz.

As the nomenclature suggests, GK104 / GTX 670 Ti is more of a successor to the GF114 / GTX 560 Ti. The flagship GTX 580 successor is at least half a year away, and will not be released till Q3 or Q4 2012. Till then, AMD is expected to hold on to the single-GPU crown with the Radeon HD 7970, with the GTX 670 Ti and HD 7950 following behind.

Read more: http://vr-zone.com/articles/report-...s-geforce-gtx-670-ti/14952.html#ixzz1myJbWRqh
 

1-D_FTW

Member
Eh. I could give two rips about the high end market. If it truly is superior to the 580 and has excellent performance/watt like rumored, that'll be a sexy card in my eyes.

And kind of sensationalist by VR-Zone.

http://www.engadget.com/2012/02/15/nvidia-q4-2012-earnings/

Nvidia released their financials a couple of days ago and they doubled their income in 2011 (up to $581 million last year). Very few tech companies are even making money in this climate, and they're raking in almost $600 million. I'd think the doubters would finally concede at some point that they're an efficiently run company with smart people setting the road maps. Blips in the manufacturing process aside, they keep their financials in order and stay on a steady path.
 

pestul

Member
Well, that about wraps it up then. If Nvidia drags their feet that much on the flagship parts, AMD will absolutely crush them in the fall with a refresh (actually, the 7990 will be crushing them all along anyway). Now AMD just needs to moneyhat the devs to get some driver issues sorted out.
 

tokkun

Member
Were all of you guys really looking forward to a card rumored at $750?

As long as the GK104 part is still on track to bring down prices in the sub-$400 market, I'm happy.
 

pestul

Member
Were all of you guys really looking forward to a card rumored at $750?

As long as the GK104 part is still on track to bring down prices in the sub-$400 market, I'm happy.

Yeah, this is a very good point. I hope it's a good overclocker too, since that would really force AMD on prices.
 