Zombie James
Wouldn't an updated driver increase the performance?
> Wouldn't an updated driver increase the performance?

Not necessarily, no.
http://benchmark3d.com/amd-catalyst-12-1-preview-driver-benchmark
That's scandalous. I never liked Tom's because they felt like a poor man's Anandtech.
I have 3 spare GPUs in case I need to buy a new one: a spare 8800 GT, the iGPU on my 2500K, and a spare 1.28 GB GTX 570 sitting in a box. I plan to swap one of those in if I sell my 2.5 GB GTX 570 and upgrade to a 680.
I don't know if I trust that. It could just be some marketing guy fumbling, right? Even then, it's just for their implementation.
Images of the leaked Tom's Hardware charts have been deleted! These are UNAPPROVED images taken from Tom's Hardware's system. Don't post any more images of the charts from the leaked info until the official release of the GTX 680. The NDA for the GTX 680 has not passed, and releasing this information will not be tolerated at Tom's Hardware.
Thanks!
Tom's Hardware Moderators!
> I don't know if I trust that. It could just be some marketing guy fumbling, right? Even then, it's just for their implementation.

1536 is the correct amount, according to this:
Are those full CUDA cores, or did nVidia opt for simpler ones, similar to AMD?
> Sounds like a silly setup (Renderstream) given GTX680's gimped compute ability (going by Tom's leaked slides lol)..

I don't believe that NVIDIA have gimped it one bit. On the contrary, just think about how big their GPGPU business is.
> Sounds like a silly setup (Renderstream) given GTX680's gimped compute ability (going by Tom's leaked slides lol).. 7970s would be much more sensible in a workstation. :S

RenderStream stuff isn't for general GPGPU, it's for... rendering. And in that use case gimped double precision doesn't matter much.
> I don't believe that NVIDIA have gimped it one bit. On the contrary, just think about how big their GPGPU business is.

Their GPGPU business is profitable because they are selling the same chips for 10x the price. And that's why they "gimp" the GPGPU capabilities of their non-Tesla cards.
> When I see those words, I think 199.99 or 219.99. Not 299.99. If it's the low 200s, I'll give you an e-kiss. Seems way too hard to believe. But I want to. I wanna believe, GAF. Believe.

670 will be faster than 7950, which is a little bit faster than 580.
EDIT: Grr. Misread and thought you said the 670 was going to be priced at that point. If it has 570-level performance, it had better have really low power numbers. Low power and a low price is the only combo that would tempt me to make that type of upgrade.
> Are we getting anything but the 680 these next couple of weeks? I'm looking for a 250-300 dollar GPU, and if we're not getting a 660 or something for that price, I'll just go with a 7850...

Next couple of weeks -- no. Next couple of months -- yes.
> AMD needs to get a decent performance driver out for the 79xx series (5-10% increase) to compete in the 1080p segment.

AMD isn't the only one who can make decent performance drivers. Any driver-based performance improvements will be more or less equal for both AMD and NV.
> So any idea on when the new Nvidia GPUs are actually coming out? I've been dying to upgrade my 5770 for about a year--ATI's had a shitty couple of years for compatibility--and I'd either get a 570 or 680, depending on the price of each at the 680's launch.

Tomorrow for 680.
> I'm well aware the 1500 number is correct, just not whether they're as beefy as the old-style cores.

The cores are the same; you can't get any less "beefy" than a MADD core. They are organized differently now, though.
> DP is gimped, no doubt, but I don't think compute performance is optimized yet; at least it shouldn't be that low.

GK104 is a gaming-oriented GPU which was meant for the mid-range market (a GF114 replacement). As it is, it's not compute-oriented at all, thus it being worse at compute than Tahiti and on par with Pitcairn isn't that surprising.
I'm well aware the 1500 number is correct, just not whether they're as beefy as the old-style cores.
> So, when will that architecture hit $100? Not that card, but something based on that. Prices for high-end stuff are so dumb; it's not THAT much better than consoles.

120Hz, 120fps, 1080p. That's an order of magnitude better than console output.
So, when will that architecture hit $100? Not that card, but something based on that. Prices for high-end stuff are so dumb; it's not THAT much better than consoles.
> GK104 is a gaming-oriented GPU which was meant for the mid-range market (a GF114 replacement). As it is, it's not compute-oriented at all, thus it being worse at compute than Tahiti and on par with Pitcairn isn't that surprising.

No one is contesting that, or why. Last time, however, the x80 model from Nvidia served well as a Tesla product. That isn't the case this time around.
> lol

And yeah, you laughed, but it is in production already.
> So, when will that architecture hit $100? Not that card, but something based on that. Prices for high-end stuff are so dumb; it's not THAT much better than consoles.

Nvidia is filling the low end ($100) with Fermi shrinks, so not any time soon.
You realize a GTX 680 is roughly around 10-15x more powerful than a 360/PS3 GPU, if not more? It's far better than the hardware in consoles. In fact, the GTX 680/7970 are most likely better GPUs than what will be in Xbox 3/PS4.
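For rough scale, here's a back-of-the-envelope version of that multiplier using commonly cited theoretical peaks; peak FLOPS is a crude proxy for real game performance, so treat the ratio as a ceiling, not a measured speedup:

```python
# Rough sanity check on the "10-15x" claim using commonly cited peak
# single-precision shader throughput. Theoretical peaks only.
XENOS_GFLOPS = 240                  # Xbox 360 GPU, commonly cited figure
GTX680_GFLOPS = 1536 * 2 * 1.006    # cores * 2 FLOPs per FMA * GHz ~= 3090

print(f"GTX 680 peak: {GTX680_GFLOPS:.0f} GFLOPS")
print(f"vs. Xenos:    {GTX680_GFLOPS / XENOS_GFLOPS:.1f}x")  # ~12.9x
```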
> So, when will that architecture hit $100? Not that card, but something based on that. Prices for high-end stuff are so dumb; it's not THAT much better than consoles.

Buy a 6870 if you want good value.
Plus, those PS3 games you played are being rendered at lower than 1080p, despite being on a 1080p display.
Oh, I know they are 720p and then upscaled. I'm not a retard. Sure, real 1080p looks way clearer, but that's really it. I have not done a comparison of the same game, but say KZ3 in 720p vs Singularity/L4D2/Crysis 2 etc in 1080p is not an OMGWTFBBQHAX difference. And spending $200 more on a comp, I would want to get that.
The folks at B3D are taking Tom's apart; at this rate, we'll need to discard the review.
7970/7950 becoming 10% slower, exact same benchmark and settings.
> They're not. They're a simpler design. If it was 1536 CUDA cores, Fermi-style, they'd need at least a 384-bit or 512-bit bus to have the memory bandwidth to keep up with that amount of processing power. Not to mention the card would be HUGE.

They're the same. Compute density is rising much faster than bandwidth; it has been like this since the beginning of computers. As for the card being huge, there are many changes in the things surrounding the compute cores that make this number possible in approximately the same transistor budget as that of GF110. Losing hot clocks for the cores is one such change.
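To put that compute-versus-bandwidth point in rough numbers, here's a quick sketch using the commonly cited reference specs for the two cards (approximate figures, single-precision peaks only):

```python
# FLOPS-per-byte comparison: compute throughput roughly doubles from
# GF110 to GK104 while memory bandwidth stays essentially flat.
cards = {
    # name: (shader cores, shader clock in MHz, memory bandwidth in GB/s)
    "GTX 580 (GF110)": (512, 1544, 192.4),   # hot-clocked shaders
    "GTX 680 (GK104)": (1536, 1006, 192.3),  # no hot clock, 256-bit bus
}

for name, (cores, mhz, bw) in cards.items():
    gflops = cores * 2 * mhz / 1000  # 2 FLOPs per core per clock (FMA)
    print(f"{name}: {gflops:.0f} GFLOPS, {bw} GB/s, "
          f"{gflops / bw:.1f} FLOPS/byte")
# GTX 580: ~1581 GFLOPS -> ~8.2 FLOPS per byte
# GTX 680: ~3090 GFLOPS -> ~16.1 FLOPS per byte
```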
> What I was suggesting was that despite the DP being gimped, the compute performance in other workloads will not be that dismal.

DP isn't the only thing that gets gimped in gaming-grade GPUs which aren't made for Teslas. I think it's pretty telling that they're not launching a new CUDA version with GK104.
> There is a difference between sampling and production.

Really? 8) So should I say again that it's already in production? As in, not sampling, but already in production.
> Oh, I know they are 720p and then upscaled. I'm not a retard. Sure, real 1080p looks way clearer, but that's really it. I have not done a comparison of the same game, but say KZ3 in 720p vs Singularity/L4D2/Crysis 2 etc in 1080p is not an OMGWTFBBQHAX difference. And spending $200 more on a comp, I would want to get that.

Gotcha. Well, like I said, it's not just about 1080p; the 120Hz makes a huuuuuuuuge difference. One of those weird issues that straddle the line between objective and subjective. Not that big of a deal to you, so no worries.
> DP isn't the only thing that gets gimped in gaming-grade GPUs which aren't made for Teslas. I think it's pretty telling that they're not launching a new CUDA version with GK104.

Isn't that obvious?
> Really? 8) So should I say again that it's already in production? As in, not sampling, but already in production.

Ok, keep going on; I guess if you keep repeating it, it might actually become true!
The question now is when (and IF) they will decide to make a GeForce on it. That will depend on TSMC's capacity and AMD's ability to push something faster than the 680 out there. Right now they seem to feel very comfortable selling a 294 mm^2 GPU on cards retailing for $500/€500 and don't see much reason to make a GK110-based GeForce at all.
> Oh, I know they are 720p and then upscaled. I'm not a retard. Sure, real 1080p looks way clearer, but that's really it. I have not done a comparison of the same game, but say KZ3 in 720p vs Singularity/L4D2/Crysis 2 etc in 1080p is not an OMGWTFBBQHAX difference. And spending $200 more on a comp, I would want to get that.

It is if you're using a computer monitor.
> Isn't that obvious?

Apparently not, if you're thinking that GK104's compute performance should be higher than that of Pitcairn just because it has more cores.
> Apparently not, if you're thinking that GK104's compute performance should be higher than that of Pitcairn just because it has more cores.

Quite a few jumps you are making there. You'll see its compute performance come in quite a bit higher than Pitcairn's, irrespective of the number-of-CUDA-cores argument.
I do 1080p on lots of titles on a 4830 at 30-50fps. Sure, it is better, but it's not really "this was worth $500" better. I plugged my PS3 into the same monitor (24" 1080p) and KZ3/Uncharted 2 looked amazing too. But I'm going off topic.
BP, hot clock is gone.
W1zzard (admin of TechPowerUp and author of GPU-Z) removed the "shader clock" reading and replaced it with "boost" in the latest revision.
Nice benches for 1080p; might have to pick this up now. Would love to see a Witcher 2 @ 1080p ultra/4xAA benchmark.
Joy at being able to play Metro 2033 at 60fps with max settings and 4xAA at 1080p.
In all honesty, as someone who has been eagerly waiting for the next Nvidia cards, I don't think "low/mid/high"-end/range means shit anymore if this is the pricing model they're using.
This is an enthusiast, top-of-the-line card retailing for ~$550. Their 680/790/4200 Ti/whatever Kepler flagship can be twice as fast as the 7970, but what's the point for 99.99% of the "enthusiast" PC market if it ends up with a price tag close to a grand?
Fuck, I miss the days of the GTX 460.
Bingo... But I want to see Witcher 2 at 1080p with Ubermode enabled. On my current 580 SLI cards I get around 28-45fps... Based on what I can see in the Tom's Hardware benches, and knowing what my FPS are in the games they used, it seems like 2x 680s will finally deliver solid 60fps in Witcher 2 in Ubermode... at least I hope.
I am not sure I understand the obsession with uber mode. It doesn't really look that much better; it's just supersampling. And you realize it can be set to Uber x2, x3, x4, and so on and so forth...? So actually, there are still levels of uber beyond the default. It's a pointless goal to spend money on.
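For anyone unclear on what that means: supersampling just renders the frame at a multiple of the target resolution and averages it back down. A minimal sketch of the idea (the function names here are mine, not anything from the game), which also shows why each uber level multiplies the shading cost by k squared:

```python
import numpy as np

# Minimal sketch of supersampling: render at k times the target resolution
# per axis, then average each k x k block down to one output pixel.

def downsample(frame: np.ndarray, k: int) -> np.ndarray:
    """Box-filter a (H*k, W*k, 3) image down to (H, W, 3)."""
    hk, wk, c = frame.shape
    return frame.reshape(hk // k, k, wk // k, k, c).mean(axis=(1, 3))

k = 2  # "Uber x2": 4x the pixels shaded, hence roughly 4x the GPU cost
hi_res = np.random.rand(1080 * k, 1920 * k, 3)  # stand-in for a rendered frame
final = downsample(hi_res, k)
print(final.shape)  # (1080, 1920, 3)
```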
Jorona said:
Yeah, Nvidia's typical naming convention is G(Core Designation)(Revision)(Performance Class).
And before GT200 it was just G(Revision)(Performance Class).
0 is the highest performance class, 8 the lowest.
So GF100 was Graphics Core Fermi Revision 1.0 Top Performance Class (GTX 480).
GT215 was Graphics Core GT Revision 2.1 Low-Mid Performance (GT 240).
G80 was Graphics Core Revision 8 Top Performance Class (8800 GTS/GTX/Ultra).
GK104 is Graphics Core Kepler Revision 1.0 High-Mid Performance Class. It should get the 660 designation, but it's being pushed into the high-end role. It's either because GK100 or GK110 has issues, or Nvidia truly believes they don't need a bigger gun for this fight.
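Purely as illustration, that convention is mechanical enough to sketch in a few lines. This is my own toy decoder of the scheme as described above; the field names aren't official NVIDIA terminology:

```python
import re

# Decode an NVIDIA GPU codename per the convention described above.
FAMILIES = {"F": "Fermi", "T": "Tesla (GT2xx era)", "K": "Kepler"}

def parse_codename(name: str) -> dict:
    m = re.fullmatch(r"G([A-Z]?)(\d{2,3})", name)
    if not m:
        raise ValueError(f"unrecognized codename: {name}")
    family, digits = m.groups()
    if len(digits) == 3:              # post-GT200: G + letter + three digits
        revision, perf = f"{digits[0]}.{digits[1]}", int(digits[2])
    else:                             # pre-GT200: G + two digits
        revision, perf = digits[0], int(digits[1])
    return {
        "family": FAMILIES.get(family, family or "pre-GT200"),
        "revision": revision,
        "performance_class": perf,    # 0 = highest, 8 = lowest
    }

print(parse_codename("GK104"))  # Kepler, rev 1.0, class 4 (upper mid-range)
print(parse_codename("G80"))    # pre-GT200, rev 8, class 0 (flagship)
```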
Tom's is now back-pedalling, in full damage control. Review numbers will be different?
> I don't get a sense that they're denying or downplaying the numbers, just that they don't want to get sued for breaching the NDA.

You are right. Kyle Bennett confirmed it...
Kyle_Bennett, HardOCP Editor-in-Chief, said:
While I cannot vouch for the validity of the data, and I never would for another website, I can tell you this. Chris Angelini, THG Editor-in-Chief, did verify that "someone accessed our internally-facing CMS and exported all of the charts that I had uploaded in preparation for my GTX 680 review." So those are THG charts and such.