
Nvidia Kepler - Geforce GTX680 Thread - Now with reviews

artist

Banned
When and why did $300 become the accepted midrange price point? $200-$220 for a 1GB GTX460 was the sweet spot a couple of years ago, now it's $300?
It didn't?

$549 and above = Ultra Enthusiast
$349-549 = Enthusiast
$249-349 = Performance
$179-249 = Midrange
$129-179 = Mainstream
$129 and lower = Budget


PhysX needs to die already. Just take it out behind the shed and put it out of its misery.
lol, I remember saying the below when Nvidia bought out AGEIA:
Same reason why I think Nvidia was better off buying VIA over that shithole AGEIA.
 
I see proprietary crap like PhysX, which seems to exist only to ensure games run worse on the competitor's hardware, as nothing but bad for the PC platform.
 

LiquidMetal14

hide your water-based mammals
I see proprietary crap like PhysX, which seems to exist only to ensure games run worse on the competitor's hardware, as nothing but bad for the PC platform.

Considering PhysX is not allowed to offload onto the CPU and detects when ATI/foreign GPUs are connected and cripples performance, I see nothing other than shady tactics on Nvidia's part. I never liked PhysX because of that alone.
 
How do you compete against that if you're AMD? If PhysX really takes off, AMD either becomes irrelevant or they come up with their own proprietary crap that runs horribly on Nvidia's cards, and push that hard. It's such a bad direction to send things.
 

artist

Banned
How do you compete against that if you're AMD? If PhysX really takes off, AMD either becomes irrelevant or they come up with their own proprietary crap that runs horribly on Nvidia's cards, and push that hard. It's such a bad direction to send things.
AMD has Bullet Physics and Havok (owned by Intel) as PhysX's main competitors, and both of them should be neutral to AMD/Nvidia hardware.

FYI, Bullet Physics is open source and was developed by an R&D guy at Sony. :D
 
If GK104 is $299 and really does have 'peaky' performance (whereas my OC'd GTX 570 has very consistent performance), I will probably end up either waiting for high-end Kepler or saying fuck it and buying a custom-PCB/cooler 7950 and OC'ing it to 1.3-1.4 GHz.

Let's hope Nvidia delivers or AMD is going to curbstomp them.
 

Reallink

Member
It didn't?

$549 and above = Ultra Enthusiast
$349-549 = Enthusiast
$249-349 = Performance
$179-249 = Midrange
$129-179 = Mainstream
$129 and lower = Budget



lol, I remember saying the below when Nvidia bought out AGEIA:

GK104 is rumored to be the GTX660 though, isn't it? Even assuming $299 is a Ti/higher-VRAM variant, it still represents a price increase over the then-equivalent 560 Ti ($250?) and certainly the 1GB 460 ($220?) series. Just wondering where the bang for buck is going to be here. A $299 reference price will likely result in $330+ street prices once they charge their brand-name, OC, and supply-constrained price premiums. Historically, that would be getting more into the X70 tier than the X60.
 

dr_rus

Member
It would have taken much less time for you to look it up versus typing out two big paragraphs. Again, unless you back up your claims it's nothing but hot air.
What makes you think that I'll find it faster than you? I've tried, but most of the results lead to guys trying to add a dedicated PhysX GeForce to their Radeons. If you don't trust me -- it's your choice.

Again, it's probably a reference to the PPU. They might have wanted to differentiate between PhysX (SDK) and the PPU. Charlie's using the AGEIA reference too, and for good reason.
That PPU was called PhysX P1. It's pretty easy to differentiate P1 from the SDK since the first is a chip/card and the second is software (which is kind of hard to build into hardware).

Charlie's using a lot of stuff in that piece and most of it doesn't even make sense -- although that's normal for his write-ups.

Charlie said:
Kepler is said to have a very different shader architecture from Fermi, going to much more AMD-like units, caches optimised for physics/computation, and clocks said to be close to the Cayman/Tahiti chips.
What the hell are "AMD-like units"? If that's "GCN-like units" then these units are in turn a lot like NV's CUDA cores, which have been used since G80. So if Kepler's using these then it's basically using the same units as all NV GPUs since G80. Why call them "AMD-like"? And I don't think that NV has gone VLIW (which would have been AMD-like) in Kepler, since Kepler is supposed to be a continuation of the Tesla->Fermi architecture, plus it just doesn't make any sense for them to do it now when even AMD has switched to G80-like scalar execution.

Charlie said:
Performance is likewise said to be a tiny bit under 3TF from a much larger shader count than previous architectures. This is comparable to the 3.79TF and 2048 shaders on AMD’s Tahiti, GK104 isn’t far off either number. With the loss of the so called “Hot Clocked” shaders, this leaves two main paths to go down, two CUs plus hardware PhysX unit or three. Since there is no dedicated hardware physics block, the math says each shader unit will probably do two SP FLOPs per clock or one DP FLOP.
There are no CUs in NVIDIA's GPUs. What is he talking about? Why not use NV's terms for NV's GPUs, especially since they've been around for a much longer time than GCN's "CU"? And what does having or not having a "dedicated hardware physics block" have to do with how many FLOPs a "shader" can do? This whole paragraph just doesn't make sense.

Charlie said:
but also leads to questions of how those shaders will be fed with only a 256-bit memory path
Yeah, that's a new problem. It's not like we've had it for ages already. Kepler is surely unique here. Next time Charlie will make the discovery that compute capabilities are improving much faster than off-chip bandwidth, and after that he'll discover that on-chip caches are there for a reason and that complex GPU effects/features like POM, DoF and tessellation are all there basically to create something on-chip while waiting for external data. So much to learn.

Charlie said:
The net result is that shader utilisation is likely to fall dramatically, with a commensurate loss of real world performance compared to theoretical peak.
On what code? "Shader" utilisation (I think it's time to start calling them compute cores or CUDA cores or shader processors, because "shaders" are programs and programs don't have "utilisation") is always relative to whatever is running on them. Without software, SP utilisation is always zero. So if we have a game like Crysis 2 DX11, which uses a lot of shader calculations, then SP utilisation will be high. And if we have some GLQuake running, then utilisation will be low. SP utilisation isn't something that can be judged outside of what's running on them.

Charlie said:
In the same way that AMD’s Fusion chips count GPU FLOPS the same way they do CPU FLOPS in some marketing materials, Kepler’s 3TF won’t measure up close to AMD’s 3TF parts.
It is completely the other way around right now. The newly launched HD7950 has 2.9 TFLOPS of peak compute performance, while a GTX580 card, which is more or less on par with it in performance, has only 1.58 TFLOPS peak. Why would that change so suddenly with Kepler? Charlie's reasoning about it being severely limited by the 256-bit bus makes zero sense here. A 256-bit bus with fast GDDR5 will give GK104 more bandwidth than the GTX580 had. Also...

Charlie said:
Benchmarks for GK104 shown to SemiAccurate have the card running about 10-20% slower than Tahiti. On games that both heavily use physics related number crunching and have the code paths to do so on Kepler hardware, performance should seem to be well above what is expected from a generic 3TF card. That brings up the fundamental question of whether the card is really performing to that level?
What level? 10-20% slower than Tahiti, which has 3.79 TFLOPS peak, is 3.0-3.4 TFLOPS, which is exactly what he's saying GK104 will have. If it beats Tahiti with such specs in some physics- or compute-optimised benchmarks, that'd be great, since judging purely from Charlie's own raw numbers it looks like it shouldn't. (By the way, what benchmarks are these? There are zero new PC game releases between now and Metro LL in Q4, so all the benchmarks are already here -- what are these "games that both heavily use physics related number crunching and have the code paths to do so on Kepler hardware"? Batman AC? BF3? Crysis 2?)
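(Quick back-of-the-envelope check of those peak numbers -- a minimal Python sketch; the shader counts and clocks are the published specs of the existing cards, and 2 FLOPs per shader per clock is the usual single-precision multiply-add assumption for both vendors:)

# peak single-precision throughput = shaders * 2 FLOPs/clock * clock (GHz) -> GFLOPS
def peak_gflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz

print(peak_gflops(1792, 0.800))   # HD 7950: ~2867 GFLOPS, the ~2.9 TFLOPS above
print(peak_gflops(512, 1.544))    # GTX 580 (hot-clocked shaders): ~1581 GFLOPS, the ~1.58 TFLOPS above
print(peak_gflops(2048, 0.925))   # Tahiti / HD 7970: ~3789 GFLOPS, Charlie's 3.79 TFLOPS
print(3.79 * 0.8, 3.79 * 0.9)     # "10-20% slower than Tahiti" in raw terms: ~3.0 to ~3.4 TFLOPS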

Charlie said:
If physics code is the bottleneck in your app, A goal Nvidia appears to actively code for, then uncorking that artificial impediment should make an app positively fly. On applications that are written correctly without artificial performance limits, Kepler’s performance should be much more marginal.
If PhysX code is the bottleneck in your app, then chances are that this app, with GPU PhysX on, is faster on some GTX560 than on an HD7970 right now. No need for Kepler/GK104. Problem solved. facepalm.jpg

Charlie said:
Since Nvidia is pricing GK104 against AMD’s mid-range Pitcairn ASIC, you can reasonably conclude that the performance will line up against that card, possibly a bit higher. If it could reasonably defeat everything on the market in a non-stacked deck comparison, it would be priced accordingly, at least until the high end part is released.
Is this basically all the reasoning behind everything written above? Sure, Charlie, it's not like any company ever started a price war by putting a product on the market at a much lower price than its competitors'. Pitcairn is supposed to be at GF114/GTX560 levels of performance. I don't know how badly NV would need to screw up to end up with a GK104 that has the same performance as GF114. It's just impossible to do, because a straight shrink of GF114 to 28nm would give them better performance while being a much smaller chip than GK104 is supposed to be.
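(For the shrink argument, a rough sketch -- the ~360 mm² GF114 die size and the ~300 mm² ballpark floating around for GK104 are assumptions plugged in purely for illustration, and real 40nm-to-28nm scaling is worse than the ideal factor:)

# ideal area scaling for a straight 40 nm -> 28 nm shrink
gf114_mm2 = 360.0                 # assumption: commonly cited GTX 560 Ti (GF114) die size
ideal_scale = (28.0 / 40.0) ** 2  # ~0.49; real shrinks scale worse than this
print(gf114_mm2 * ideal_scale)    # ~176 mm^2 -- far smaller than the ~300 mm^2 ballpark rumoured for GK104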

Charlie said:
All of the benchmark numbers shown by Nvidia, and later to SemiAccurate, were overwhelmingly positive. How overwhelmingly positive? Far faster than an AMD HD7970/Tahiti for a chip with far less die area and power use, and it blew an overclocked 580GTX out of the water by unbelievable margins. That is why we wrote this article. Before you take that as a backpedal, we still think those numbers are real, the card will achieve that level of performance in the real world on some programs.

The problem for Nvidia is that once you venture outside of that narrow list of tailored programs, performance is likely to fall off a cliff, with peaky performance the likes of which haven’t been seen in a long time. On some games, GK104 will handily trounce a 7970, on others, it will probably lose to a Pitcairn.
A smart man would assume that he's been shown cherry-picked benchmarks in which GK104 is much faster than the 7970 (which is very impressive by itself) and that in all the other, not-so-cherry-picked benchmarks they'll end up being more or less close to each other, with GK104 losing 10-20% on average (which would be impressive as well, considering that GK104 supposedly has fewer FLOPS, at best 67% of the bandwidth, and is a less complex design). But Charlie somehow arrives at the conclusion that it'll lose even to Pitcairn in other programs, which is baffling to say the least. Why not go straight to Cape Verde while we're at it?

Charlie said:
Nvidia is going out of their way to have patches coded for games that tend to be used as benchmarks by popular sites.
I've written this down. I'll see how many benchmarks get Kepler-specific patches. (I'll be amused if it's more than two or three of them in total, yeah. It's generally close to impossible to push an ISV to even patch his own bugs out of his own game.)

Charlie said:
Since Nvidia’s Fermi generation GPUs are very good at handling stencil buffers, they perform very well on this code.
As far as I remember, Fermi is identical to Evergreen and NI in its handling of stencil buffers (I suppose he's talking about depth buffer fillrate here).

Charlie said:
Since most modern GPUs can compute multiple triangles per displayable pixel
They can't. That would simply kill performance, even on Fermi.

Charlie said:
Since most modern GPUs can compute multiple triangles per displayable pixel on any currently available monitor, usually multiple monitors, doubling that performance is a rather dubious win. Doubling it again makes you wonder why so much die area was wasted.
Sure, because every PC game now has a lot of triangles in every screen pixel. And tessellation is everywhere and it doesn't kill performance on Radeons at all. Clearly we don't need more tessellation performance since it's all triangles everywhere now. doublefacepalm.jpg

Charlie said:
If the purported patch does change performance radically on specific cards, is this legitimate GPU performance? Yes. How about if it raises performance on Kepler cards while decreasing performance on non-Kepler cards to a point lower than pre-patch levels? How about if it raises performance on Kepler cards while decreasing performance only on non-Nvidia cards? Which scenario will it be? Time will tell.
How about you shut up until that time tells you something, then? This is the reason why I don't like his write-ups. He usually has some solid info, but it's almost lost in between loads of such bullshit coming from him and him only.

Charlie said:
This is important because it strongly suggests that Nvidia is accelerating their own software APIs on Kepler without pointing it out explicitly. Since Kepler is a new card with new drivers, there is no foul play here, and it is a quite legitimate use of the available hardware.
What software APIs are those? PhysX? So they accelerate only two games from 2011 while basically ignoring all the others like Crysis 2, The Witcher 2, BF3? That's a smart move. /sarcasm

Charlie said:
Then again, they have been proven to degrade the performance of the competition through either passive or active methods.
And competition has been proven to do the same to them. News at eleven.

Charlie said:
Since Nvidia controls the APIs and middleware used, the competition can not ‘fix’ these ‘problems with the performance of their hardware’.
Again what APIs is he talking about? Crysis 2 doesn't use any NVIDIA APIs. Battlefield 3 doesn't use any NVIDIA APIs.

Charlie said:
Is the performance of Kepler cards legitimate? Yes. Is it the general case? No. If you look at the most comprehensive list of supported titles we can find, it is long, but the number of titles released per year isn’t all that impressive, and anecdotally speaking, appears to be slowing.

When Kepler is released, you can reasonably expect extremely peaky performance. For some games, specifically those running Nvidia middleware, it should fly. For the rest, performance is likely to fall off the proverbial cliff. Hard. So hard that it will likely be hard pressed to beat AMD’s mid-range card.
And for the third time: what NVIDIA middleware? Why would the performance of a 3 TFLOPS part "fall off the proverbial cliff" "so hard that it will likely be hard pressed to beat AMD's mid-range card", a card which is rumoured to have only 1408 SPs -- which would give it ~2.68 TFLOPS @ 950 MHz -- and exactly the same 256-bit GDDR5 memory bus? And what is he smoking, and where can I get that too?
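(Penciling that rumour out -- a minimal sketch; the 1408 SPs @ 950 MHz and the 5.5 Gbps GDDR5 are just the rumoured/typical figures, nothing confirmed:)

print(1408 * 2 * 0.950)   # ~2675 GFLOPS (~2.7 TFLOPS) peak for the rumoured mid-range Pitcairn
print(256 / 8 * 5.5)      # ~176 GB/s on a 256-bit bus at 5.5 Gbps -- the same interface class as GK104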

Charlie said:
What does this mean in the end?
I wonder.

This, and especially when Nvidia is trying to get devs to use their special stuff.
No thanks man.
Yes, because better graphics and interaction is bad. Oh, wait.

Considering PhysX is not allowed to offload onto the CPU and detects when ATI/foreign GPUs are connected and cripples performance, I see nothing other than shady tactics on Nvidia's part. I never liked PhysX because of that alone.
PhysX runs fine on the CPU (the PhysX 3.x SDK uses all CPU cores automatically now) and nothing gets crippled on any non-NVIDIA GPU.
 

artist

Banned
GK104 is rumored to be the GTX660 though, isn't it? Even assuming $299 is a Ti/higher-VRAM variant, it still represents a price increase over the then-equivalent 560 Ti ($250?) and certainly the 1GB 460 ($220?) series.
We don't know what branding it will eventually end up with, but it's certainly in the same category as GF104/GF114 (one step down from the monolithic die). Also keep in mind that prices haven't been the same as in previous gens, which is why we're now seeing AMD price their cards at $549/449 as opposed to $379/299 in previous gens.

If the really small die and low power consumption figures are true, couple that with the 256-bit memory interface and we should see prices settle quickly.
 

artist

Banned
What makes you think that I'll find it faster than you? I've tried, but most of the results lead to guys trying to add a dedicated PhysX GeForce to their Radeons. If you don't trust me -- it's your choice.
Finally conceding the point. You didn't have to be a dick about it by saying "Google was banned?"

That PPU was called PhysX P1. It's pretty easy to differentiate P1 from the SDK since the first is a chip/card and the second is software (which is kind of hard to build into hardware).
It's easy, but what if it's not dumbed down enough for the readers to get it?

It is completely the other way around right now. The newly launched HD7950 has 2.9 TFLOPS of peak compute performance, while a GTX580 card, which is more or less on par with it in performance, has only 1.58 TFLOPS peak. Why would that change so suddenly with Kepler? Charlie's reasoning about it being severely limited by the 256-bit bus makes zero sense here. A 256-bit bus with fast GDDR5 will give GK104 more bandwidth than the GTX580 had. Also...
lol

<rant snipped>
You need to relax. Take deep breaths; you sure you're not in an Nvidia office right now? Brace yourself, as Charlie is just getting started .. :p GK104 sounds promising despite Charlie's twisted talk, enough information for me to be interested in buying it.
 

dr_rus

Member
Finally conceding the point. You didn't have to be a dick about it by saying "Google was banned?"
OK, you're right. But I've read the same stuff before the GT200 launch and then again before Fermi's unveiling. And it didn't make sense even then; it makes even less sense now. I'm sorry that I don't save links to all the stuff I read on the Internet.

It's easy, but what if it's not dumbed down enough for the readers to get it?
Who needs dumb readers?

The GTX580 has 192.4 GB/s of bandwidth. Currently available GDDR5 chips can be clocked up to 1750 MHz. You'd need to clock them at 1.5 GHz to reach the same b/w on a 256-bit bus as on the GTX580. "Lol"?
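(The arithmetic, as a minimal sketch -- GDDR5 transfers 4 bits per pin per command-clock cycle, so a 1.5 GHz command clock is 6 Gbps per pin:)

def bandwidth_gbs(bus_bits, cmd_clock_ghz):
    return bus_bits / 8 * cmd_clock_ghz * 4   # bytes per cycle across the bus * 4 transfers per cycle

print(bandwidth_gbs(384, 1.002))  # GTX 580: ~192.4 GB/s (384-bit @ ~4 Gbps)
print(bandwidth_gbs(256, 1.500))  # 256-bit @ 1.5 GHz (6 Gbps): 192 GB/s -- parity with the 580
print(bandwidth_gbs(256, 1.750))  # 256-bit with the fastest 1.75 GHz (7 Gbps) chips: 224 GB/s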

You need to relax. Take deep breaths; you sure you're not in an Nvidia office right now?
I'm relaxed, thanks. And please address such questions to people writing such articles.
 

Durante

Member
This is just a thought but why not ignore the lunatic?
That would be too easy. I can't believe he's still around; I stopped reading his ramblings what seems like a decade ago.

Anyway, I hope NV pushes PhysX hard, particularly APEX clothing and hair. I feel it's the only way we'll see that get to the level it could be in games in this decade.
 

artist

Banned
OK, you're right. But I've read the same stuff before the GT200 launch and then again before Fermi's unveiling. And it didn't make sense even then; it makes even less sense now. I'm sorry that I don't save links to all the stuff I read on the Internet.
I'm pretty sure that if this rumor has been repeated for a third time now (as per your as-yet-unproven claim), then it should be easy to search for and locate. Hell, I can easily find rumors dating 4-5 years back that have not been repeated thrice.

Who needs dumb readers?
This.is.Sparta.Internet

The GTX580 has 192.4 GB/s of bandwidth. Currently available GDDR5 chips can be clocked up to 1750 MHz. You'd need to clock them at 1.5 GHz to reach the same b/w on a 256-bit bus as on the GTX580. "Lol"?
Available != Nvidia will use them. I doubt Nvidia will go any higher than 1.5 GHz, at which point you'd be just equalling the 580, let alone getting more bandwidth. Let's mark this one down for March/April and see who is right, shall we?

I'm relaxed, thanks. And please address such questions to people writing such articles.
Good, now you need to practice what you're saying. And why would I address such a question to Charlie? It's clearly and widely known what his agenda is.
 

1-D_FTW

Member
He has got the only rumors on Kepler as of now, and it was VERY positive to start with (and unbelievable, coming from him).

Even his latest piece, if you take out his hate .. GK104 still sounds very promising. Like I said earlier, for 80-90% of the 7970's performance at $299, I might get one.

Really comes down to power draw. Hopefully it is a great mid-range card and it truly is well below a 225W TDP.

As someone who's kind of had the wind taken out of his sails on 3D, I'm not really seeing any personal need for super high end anyways. If it could absolutely nail 1080p @ 60fps for absolutely any game I threw at it (including physics-heavy games), and was able to do it at, say, 150 watts, I'd consider that an outstanding card.
 

dr_rus

Member
Available != Nvidia will use them. I doubt Nvidia will go any higher than 1.5 GHz, at which point you'd be just equalling the 580, let alone getting more bandwidth. Let's mark this one down for March/April and see who is right, shall we?
OK, I think that they'll use the same chips that are used in the 7970 and clock them accordingly. Kepler should have better b/w utilisation than Fermi due to the new architecture and improved caches, so having lower b/w than GF110 on a mid-range part won't hurt it that much.

Good, now you need to practice what you are saying. And why would I address such question to Charlie? Its clearly and widely known what his agenda is.
Practice what? This article is so full of shit that it just can't be taken seriously. The only plausible bits in it are the raw TFLOPS figures and a hint that the TDP is quite a bit lower than 225W. I was just reacting to the rest of it. Your summary is quite OK.
(And while we're at it -- no, it's not 1024 SPs. The hot clock is gone and the chip is supposed to be close to 3 TFLOPS. If you do the math you'll see that you need ~1500 SPs @ 1 GHz to reach that number.)
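(The math, for reference -- a tiny sketch assuming 2 FLOPs per SP per clock and a ~1 GHz base clock with the hot clock gone:)

target_gflops = 3000.0
clock_ghz = 1.0            # assumption: no hot clock, ~1 GHz shader/base clock
flops_per_clock = 2        # one multiply-add per SP per clock
print(target_gflops / (flops_per_clock * clock_ghz))   # ~1500 SPs needed for ~3 TFLOPS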
 

artist

Banned
OK, I think that they'll use the same chips that are used in the 7970 and clock them accordingly. Kepler should have better b/w utilisation than Fermi due to the new architecture and improved caches, so having lower b/w than GF110 on a mid-range part won't hurt it that much.
You're backpedalling more than Charlie now .. :D


Practice what? This article is so full of shit that it just can't be taken seriously. The only plausible bits in it are the raw TFLOPS figures and a hint that the TDP is quite a bit lower than 225W. I was just reacting to the rest of it. Your summary is quite OK.
(And while we're at it -- no, it's not 1024 SPs. The hot clock is gone and the chip is supposed to be close to 3 TFLOPS. If you do the math you'll see that you need ~1500 SPs @ 1 GHz to reach that number.)
Practice relaxing. And not having a meltdown over a Charlie article.

If you want to act like you're in the know, then why don't you come forward and admit it?

The 1500 SPs have been speculated for a while now, and the hot clock being gone has also been speculated for a while .. if you really are in the know then tell us something we don't?
 

52club

Member
Looking forward to these new cards driving down prices. Nvidia has earned my business by making pretty solid drivers, compared to some of the situations I've heard about AMD (ATI).
 

artist

Banned
New hint on possible launch date.

RussianJ said:
CUDA demo on unspecified live 28nm card vs GTX 580 was roughly 28% faster. No clue again what card. Also showed 580 vs Tesla cards.

When asked on the new cards launch date I was told "your next 6 paychecks so save up now." Take it as you will, I'm thinking under 45 days out.
That probably puts the launch in mid-April.
 

Antiochus

Member
http://lenzfire.com/2012/02/entire-nvidia-kepler-series-specifications-price-release-date-43823/

It appears to corroborate (somewhat) what Messrs Obrovsky and Demerjian have been saying regarding GK104/GTX 660. It appears the GTX 660 and GTX 660 Ti have certain....aspects....of their specs switched, in a most comedic fashion.

If, and if, this chart proves to be prophetic, we will truly be in a troubling new GPU generation. Marginal improvements at radically marked up prices, a la the GTX 670-680.

Kepler based Nvidia GTX 600 series GPUs – Highlights
Nvidia Kepler GTX690

750MHz Core clock
2×1.75 GB 4.5GHz GDDR5 Memory
2×1024 Stream Processors
2x448bit Bus Width
Priced at $999
Both AMD and Nvidia are following the same trend in maintaining the order of the series in GPUs. Like the HD 7990, the GTX 690 has a lower core clock than the GTX 680 and a bus width similar to the GTX 670. Compared to the AMD HD 7990's price point of $849, the GTX 690 is priced at $999. But considering the performance the GTX 690 can offer, the HD 7990 is definitely overpriced, and AMD will surely have to reduce their prices drastically after the release of the GTX 600 Kepler series.

Nvidia Kepler GTX680

850MHz Core clock
2 GB 5.5GHz GDDR5 Memory
1024 Stream Processors
512bit Bus Width
Priced at $649
45% faster than HD 7970
Nvidia has a clear winner on its hands, and that's why they have commented that they had expected much more from AMD. But we already know that AMD is preparing its revised GCN HD 8000 series graphics cards, which may be released in the latter half of this year.

Nvidia Kepler GTX670

850MHz Core clock
1.75 GB 5GHz GDDR5 Memory
896 Stream Processors
448bit Bus Width
Priced at $499
20% faster than HD 7970
Nvidia Kepler GTX660Ti

850MHz Core clock
1.5 GB 5GHz GDDR5 Memory
768 Stream Processors
384bit Bus Width
Priced at $399
10% faster than HD 7950
Nvidia Kepler GTX660

900MHz Core clock
2 GB 5.8GHz GDDR5 Memory
512 Stream Processors
256bit Bus Width
Priced at $319
Performance similar to GTX580
Nvidia Kepler GTX650Ti

850MHz Core clock
1.75 GB 5.5GHz GDDR5 Memory
448 Stream Processors
224bit Bus Width
Priced at $249
Performance similar to GTX570
Nvidia Kepler GTX650

900MHz Core clock
1.5 GB 5.5GHz GDDR5 Memory
256 Stream Processors
192bit Bus Width
Priced at $179
Performance similar to GTX560
Nvidia Kepler GTX640

850MHz Core clock
2 GB 5.5GHz GDDR5 Memory
192 Stream Processors
128bit Bus Width
Priced at $139
Performance similar to GTX550Ti
 

Antiochus

Member
As usual with these rumors, solid kernels of truth mixed in with rodent excrement for disguise. The performance comparisons are what's key.
 
I'm in a real bind.

My GTX590 somehow died on me the other day.
Not sure how it happened, but I'm sure it wasn't a temperature problem as I always monitor the temps.

And even better is that I can't find my receipt, so I can't send the card back.

So now I don't know if I should buy another one because they are pretty expensive (although I did love the performance from that card).
And also, with these new Nvidia cards coming out soon, I don't know how long it will take, so I'll be missing all my gaming until then.

What should I do?
 

Ramblin

Banned
I'm in a real bind.

My GTX590 somehow died on me the other day.
Not sure how it happened, but I'm sure it wasn't a temperature problem as I always monitor the temps.

And even better is that I can't find my receipt, so I can't send the card back.

So now I don't know if I should buy another one because they are pretty expensive (although I did love the performance from that card).
And also, with these new Nvidia cards coming out soon, I don't know how long it will take, so I'll be missing all my gaming until then.

What should I do?

Find that receipt!
 

Gav47

Member
What should I do?
By losing the receipt I presume you bought it at a brick-and-mortar store and have lost the physical receipt, and you haven't just deleted the confirmation email from an online retailer, right? Contact the store. If you bought the card with a credit card, your statement might be enough to convince them with a little pleading.
But really you should just RMA it with the card manufacturer; it should still be under warranty. It'll take longer (I've seen people wait 4-6 weeks) than just walking into the store and leaving with a replacement, but you'll get your new card all the same.
 
Video card that's not out yet will outperform video card that's out now.

News at 11.


Same thing I was thinking. Tahiti is out first, then nVidia drops Kepler, then AMD drops Pitcairn... Cape Verde, etc., and the battle continues. IMO if AMD is out now, they're winning now; when nVidia launches, if it's on top, they'll be winning then until AMD releases the next GPU on their roadmap. It's not like AMD won't have an answer soon after Kepler. Not sure I understand the logic of comparing cards launching at different times; simultaneous releases would be another story.
 

artist

Banned
I'm in a real bind.

My GTX590 somehow died on me the other day.
Not sure how it happened, but I'm sure it wasn't a temperature problem as I always monitor the temps.

And even better is that I can't find my receipt, so I can't send the card back.

So now I don't know if I should buy another one because they are pretty expensive (although I did love the performance from that card).
And also, with these new Nvidia cards coming out soon, I don't know how long it will take, so I'll be missing all my gaming until then.

What should I do?
1. Try to find the receipt. If you find it, proceed with the RMA.
2. Buy the 7950, OC to hell. You could probably get within striking distance of the 590 in a few games without all the multi-GPU headache.
3. Watch the rumor mills. If the buzz is very positive about Kepler in regards to pricing, sell the 7950 and wait for Kepler to drop.

Same thing I was thinking. Tahiti is out first, then nVidia drops Kepler, then AMD drops Pitcairn... Cape Verde, etc., and the battle continues. IMO if AMD is out first, they're winning. AMD will have an answer for nVidia as well; not sure I understand the logic of comparing cards launching at different times. Simultaneous releases would be another story.
I think, like the person you quoted, a lot of people are just reading the title of the thread and hitting the "post reply" button. The rumor here is that Nvidia's performance part, a la the GTX460 or 560 Ti, is going to handily beat the 7970. That hasn't happened in, well, five years .. so this is not your typical "news at 11" thing.
 

bill0527

Member
I'm in a real bind.

My GTX590 somehow died on me the other day.
Not sure how it happened, but I'm sure it wasn't a temperature problem as I always monitor the temps.

And even better is that I can't find my receipt, so I can't send the card back.

So now I don't know if I should buy another one because they are pretty expensive (although I did love the performance from that card).
And also, with these new Nvidia cards coming out soon, I don't know how long it will take, so I'll be missing all my gaming until then.

What should I do?

Did you buy it from an online retailer? You should easily be able to find a copy of your receipt in order history.

Did you register the card with the manufacturer when you bought it? That usually registers your warranty also.
 

Pimpbaa

Member
If, and if, this chart proves to be prophetic, we will truly be in a troubling new GPU generation. Marginal improvements at radically marked up prices, a la the GTX 670-680.

The high end is ridiculous, but midrange has a card that's as fast as the 580 for $319. With 2GB, the Nvidia Kepler GTX660 is definitely gonna be a favorite for a lot of people.
 
Did you buy it from an online retailer? You should easily be able to find a copy of your receipt in order history.

Did you register the card with the manufacturer when you bought it? That usually registers your warranty also.

Nah, I bought it from a store with my credit card.

I found it on my credit card statement, although it doesn't say the item.
Hopefully that is enough.
 

Moaradin

Member
Damn, I hope the 660 isn't any more than $300. Been waiting to upgrade my 460 but didn't want to do it with the 560. $400 seems ridiculous for Nvidia's top-selling "budget" 460/560 line.
 

Hazaro

relies on auto-aim
Nah, I bought it from a store with my credit card.

I found it on my credit card statement, although it doesn't say the item.
Hopefully that is enough.
You can just go to the store help desk and they should be able to bring up something.
They keep track of everything.
 

TheExodu5

Banned
Why are people complaining about those specs/prices? 45% faster than 7970 at $650 is ridiculously good. GTX 580 performance for $319 is equally ridiculously good.
 

godhandiscen

There are millions of whiny 5-year olds on Earth, and I AM THEIR KING.
Why are people complaining about those specs/prices? 45% faster than 7970 at $650 is ridiculously good. GTX 580 performance for $319 is equally ridiculously good.

Exactly. You have to look at performance per price.
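(Taking the rumoured chart above at face value -- and it is only a rumour -- a quick sketch of the perf-per-dollar comparison, normalised to the 7970 at its $549 launch price:)

hd7970 = 1.00 / 549        # baseline: relative performance per dollar of the HD 7970
gtx680 = 1.45 / 649        # rumoured: 45% faster than the 7970 at $649
print(gtx680 / hd7970)     # ~1.23 -> roughly 23% more performance per dollar, if the rumour holds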
 

subversus

I've done nothing with my life except eat and fap
It's good to watch this hardware race from the sidelines knowing that I'm not buying a new card soon. I mean, if a game doesn't run @30-40 fps maxed out on a top card from the previous gen, the game is not worth it due to horrible optimization. Why should I pay for someone's laziness?

These rumoured prices are ridiculous.
 

TheExodu5

Banned
It's good to watch this hardware race from the sidelines knowing that I'm not buying a new card soon. I mean, if a game doesn't run @30-40 fps maxed out on a top card from the previous gen, the game is not worth it due to horrible optimization. Why should I pay for someone's laziness?

That's a naive way of looking at things. Some games simply demand more because they do a lot more. Metro 2033 demands performance for a reason.
 

sk3tch

Member
It's good to watch this hardware race from the sidelines knowing that I'm not buying a new card soon. I mean, if a game doesn't run @30-40 fps maxed out on a top card from the previous gen, the game is not worth it due to horrible optimization. Why should I pay for someone's laziness?

These rumoured prices are ridiculous.

Some games are just worth it. I'm hooked on Crysis 2 MP these days because it A) absolutely taxes my 7970, so I can validate that my OCs are stable, and B) is gorgeous. Even at 1080p I can't max it out completely; I have everything except particles, post-processing, and water set to Extreme. I can maintain 60 FPS with vsync.

I don't know how you can play at 30-40 FPS. My metric is 60. I, for one, am happy that I can get 45% more performance than my 7970 (rumored) with the new GTX 680. I'll probably do SLI, eventually (I said I wouldn't go back...but I'm just nuts...it has taken every ounce of energy to not buy another 7970 for CrossFireX - even knowing they are HORRIBLE with CFX drivers).
 

subversus

I've done nothing with my life except eat and fap
That's a naive way of looking at things. Some games simply demand more because they do a lot more. Metro 2033 demands performance for a reason.

Metro runs at 40 fps average on my card maxed out (minus DoF), according to its benchmark, if I remember right.


Some games are just worth it. I'm hooked on Crysis 2 MP these days because it A) absolutely taxes my 7970, so I can validate that my OCs are stable, and B) is gorgeous. Even at 1080p I can't max it out completely; I have particles, post-processing, and water set to Extreme. Everything else is maxed and I can maintain 60 FPS with vsync.

I don't know how you can play at 30-40 FPS. My metric is 60. I, for one, am happy that I can get 45% more performance than my 7970 (rumored) with the new GTX 680. I'll probably do SLI, eventually (I said I wouldn't go back...but I'm just nuts...it has taken every ounce of energy to not buy another 7970 for CrossFireX - even knowing they are HORRIBLE with CFX drivers).

If you want stable 60 fps for ANY game, then yes, it's worth it. Most games run at 60 fps for me; BF3 runs at 40-60 fps with dips into the 30s, which is fine by me. These are not fast-paced games. Racers, twitch shooters, slashers, fighters, brawlers and platformers should run @ 60 fps, I agree. I can't remember a single taxing game in these genres that was released recently.
 