
Nvidia Kepler - Geforce GTX680 Thread - Now with reviews

Can't wait to hear more about this. My CrossFire 1GB 5870s are getting a little long in the tooth. If the price/performance is right, I'll be switching back to Nvidia next round.
 

Doc Holliday

SPOILER: Columbus finds America
Oh oH.......

http://hardforum.com/showthread.php?t=1668756

Stay classy Nvidia! Viral marketing discovered at chiphell, Kepler rumors debunked
Quote:
Several domestic media outlets have been running all sorts of articles claiming Kepler will launch at low prices and that the HD7900 will have to dive in price to follow, while the foreign media haven't reported any of it. Combined with certain media editors posting explosive so-called "leaked material" on the major forums, it's clear these moves are Nvidia's astroturf marketing ("water army") working behind the scenes to suppress the HD7900 and its sales momentum. When I see these posts on CHH I will lock them, and I hope members can show some restraint.
Rough translation from the Chinese. ALL of the posts from Chiphell claiming Kepler at $299, along with the speculated performance figures, were from marketing shills (proven by the owners of the forum) and have been banned.

Charlie from SA was bamboozled as well. Keep it classy Nvidia.
 

Shambles

Member
Not shocking at all. This level of desperation would lead us to believe that Kepler is going to fare poorly. Either way, just the existence of competition will bring prices down somewhat. I'll be waiting for IB/Kepler to do a build, even if just to get the price of the 7950 down to a more reasonable level.
 
Not shocking at all. This level of desperation would lead us to believe that Kepler is going to fare poorly. Either way, just the existence of competition will bring prices down somewhat. I'll be waiting for IB/Kepler to do a build, even if just to get the price of the 7950 down to a more reasonable level.

Just die-shrinking Fermi should get Nvidia competitive easily, and they won't have to do much to beat the 7970.

Now, blowing away the 7970 for $299 or whatever is what was always questionable.

It will likely settle out about like it is right now. The 7970 may eventually have to be cut to $350 or so; it will be a nice card but not the performance leader, just like the 6970 is now. It's a small chip, so that shouldn't be a problem. AMD is just milking it while they can.
 
As someone looking to build in the next 2-3 weeks with a GTX 580, am I better off waiting, or downgrading with the idea of upgrading once the Kepler cards are out?
 

Doc Holliday

SPOILER: Columbus finds America
As someone looking to build in the next 2-3 weeks with a GTX 580, am I better off waiting, or downgrading with the idea of upgrading once the Kepler cards are out?

Just wait till the dust settles. Too many new cards coming out in the next few months. No point paying close to 500 bucks for a card that's getting replaced soon.
 
Just wait till the dust settles. Too many new cards coming out in the next few months. No point paying close to 500 bucks for a card that's getting replaced soon.
Well, I don't really want to put off building the system for several months. Is it too much of a bottleneck to massively downgrade the card (like a GTX 550 Ti) and then replace it later with something beastly?
 

Doc Holliday

SPOILER: Columbus finds America
Well, I don't really want to put off building the system for several months. Is it too much of a bottleneck to massively downgrade the card (like a GTX 550 Ti) and then replace it later with something beastly?


Hmm, a buddy of mine was in the same boat. He bought a 560 Ti and is planning on upgrading when Kepler shows up. I mean, you can even pick up a cheap used 6850 or 560 and it will be perfectly fine for most games until then.
 

Jtrizzy

Member
Well, I don't really want to put off building the system for several months. Is it too much of a bottleneck to massively downgrade the card (like a GTX 550 Ti) and then replace it later with something beastly?

I have a solution for you...buy a 460 to tide you over till the new cards come out. When they come out, you can use the 460 for PhysX.
 

Theonik

Member
You missed the ones where he basically said it won't be as fast as the 7970.

Or not sure if those were tweets or forum posts...

Edit: here http://semiaccurate.com/forums/showp...&postcount=321
It could either mean "it won't be a significant enough difference to justify dropping a card you just bought", or "not as powerful", with several sub-interpretations if that statement were put into context. Pretty vague, I say.
Less hinting at stuff, more actual details so I can decide on a 7950 or whatever the nVidia price point equivalent is.
Yup.
Edit: I sort-of expected this to be viral marketing. Sounds pretty sound from nVidia's perspective at this point.
 

artist

Banned
Nvidia's Kepler suffers wobbly perturbations

Today's AMD Radeon 7950 release saw the embargo lifted on a funny little saga which you can read about here. But another hell-on-earth yarn from the rumour mill is even more interesting. TechEye has heard Nvidia has been asking hacks why the reviews have been so good.

Nvidia absolutely denies the rumour which suggests representatives have been asking journalists to file a short report on what is exactly so great about the 7950. Whether or not Nvidia is canvassing reviewers, there may be reasons the Green Goblin would like hacks to do its homework for it.

More industry whispers suggest the reason Nvidia's upcoming hope, the GK104, is late to the table is all the redesigns it has had to go through. Parts on the 32nm had to be redesigned for 28nm, but along the way there were challenges with packaging and interconnects. Some industry watchers suggest that Nvidia gave up a lot of space on its chip, trying to buff up Kepler by bringing Ageia to the hardware.

But the murmurs suggest Nvidia has been dedicating a lot of resources to get physics and fluid dynamics operating properly, which has so far, allegedly, taken half of its gaming engineers and six months to get right.


One industry watcher said to TechEye the company is in "holy s**t" mode - having been confident that the GK104 would fight off and trounce the competition, but the timing is out of whack. When Nvidia does get its high-end Kepler chip out in the second half of the year, the competition is going to be ready with something else.

There are no doubts that full fluid dynamics is going to wow the crowds - on demos specifically catering to fluid dynamics. Writing code for games to get the performance right, though, is trickier. While Nvidia's team is working overtime, its rivals just may be able to clean up.

"Their SDK is great," someone familiar with the matter said to TechEye. "But it was foolish of them to try putting Ageia hardware onto a GPU. The amount of engineering work you have to do to make that useful in games - you get one game every six months that gives you the benefits of that."

Read more: http://news.techeye.net/chips/nvidias-kepler-suffers-wobbly-perturbations#ixzz1l4sR8M8Q

First time a rumor has mentioned PhysX getting its own dedicated logic rather than being done by the CUDA cores in an Nvidia GPU. Could be great, but it's a bullet point for sure .. I'd trade that in for 10% extra performance :p
 
It was idiotic of Nvidia to add custom hardware onto the GPU for PhysX if this is true. They should've kept focusing on utilizing CUDA cores for it so they wouldn't compromise on the design and raw power of the card.

Even with hardware PhysX that's currently designed to run on CUDA cores, most developers simply don't give a fuck unless they are moneyhatted by Nvidia. It is a lose/lose proposition for Nvidia to try to pull an Ageia and have yet another way for physics to be run. They need to cut the bullshit, focus on pushing CPU PhysX + GPU PhysX if that's so important to them, and put out the best Kepler products they can. If King Kepler doesn't arrive till later this year and Prince Kepler is sent as a substitute, I see Nvidia getting their teeth kicked in with AMD's 8xxx refresh.

I really hope a massive fuck-up like this is something they avoid, because it'll mean that they'd have gone almost two years between release of the GTX 580 and its Kepler successor. That is a fiasco along the lines of Intel's Pentium 4 and AMD's Bulldozer v1.
 
It's too bad PhysX just didn't work out; a game looks extra nice with it, but it's such a killer on performance. I wish there were something better than Havok that everyone could use and implement without it being a performance killer.
 
It's too bad PhysX just didn't work out; a game looks extra nice with it, but it's such a killer on performance. I wish there were something better than Havok that everyone could use and implement without it being a performance killer.

It was stupid to have it as an exclusive feature.

So few games use it; why buy a feature that will hardly be used? They should have converted it to OpenCL so it would be more widely adopted.
 

1-D_FTW

Member
I'd love dedicated physics if it actually performed well. Right now it's not supported because it's a performance killer and only people with dual cards really benefit. If they could make this a standard every future Nvidia owner would have, I'm sure support would greatly improve in the future.
 

dr_rus

Member
First time a rumor has mentioned PhysX getting its own dedicated logic rather than being done by the CUDA cores in an Nvidia GPU. Could be great, but it's a bullet point for sure .. I'd trade that in for 10% extra performance :p
That rumour is like a replay of the GF100 launch, and that was a replay of the same stupid shit from the GT200 launch. It was false two times already. Third time's the charm?
 
Got a 7950 in November (I think? It was like not-quite top of the line) and I'm pretty happy with it. No reason for me to upgrade; I can still basically run new games on 'ultra' settings. But I'm not into the brand loyalty thing, so I will quickly turn my back on AMD if I feel like a new Nvidia card would be a worthy investment.
 

chaosblade

Unconfirmed Member
Got a 7950 in November (I think? It was like not-quite top of the line) and I'm pretty happy with it. No reason for me to upgrade; I can still basically run new games on 'ultra' settings. But I'm not into the brand loyalty thing, so I will quickly turn my back on AMD if I feel like a new Nvidia card would be a worthy investment.

6950? 7950 just came out like yesterday. I'd hope you wouldn't feel the need to upgrade again already!
 
Giving Charlie Demerjian's insane rantings any sort of credence is like an Internet IQ test.

Sadly people keep failing it repeatedly.
 

Scotch

Member
Got a 7950 in November (I think? It was like not-quite top of the line) and I'm pretty happy with it. No reason for me to upgrade; I can still basically run new games on 'ultra' settings. But I'm not into the brand loyalty thing, so I will quickly turn my back on AMD if I feel like a new Nvidia card would be a worthy investment.

6950, most likely. The 7950 came out yesterday. :)

edit: beaten
 

artist

Banned
Banned in Google?
Tried but nothing came up.

I don't see any point in refreshing anything.
Onus is on you to provide links to back up your claim.

The one who wrote that bullshit this time doesn't even know that there is no Ageia anymore.
Probably in reference to the Ageia PPU chip that existed before they were bought by Nvidia:

[Image: Ageia PhysX PPU chip]
 

dr_rus

Member
Tried but nothing came up.
Onus is on you to provide links to back up your claim.
I've seen it on multiple forums before both new architecture announcements. I don't have time to search the Internet for it now. GT200 was supposed to have "PhysX hardware", Fermi was supposed to have it, and now the same people think that Kepler will surely have it. It seems that many people just don't get that "physics hardware" and "graphics hardware" are both essentially "compute hardware" and are the same thing.

I don't like Charlie's write-ups, but he's right when he says that "it is simply shaders and optimisations". And as such, being almost pure software, they can't have any effect on GK104 development (well, they can at the stage of designing a new architecture, but certainly not at the point of making a chip on that architecture).
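
To make that point concrete, here is a minimal, hypothetical CUDA sketch of a physics step written as an ordinary compute kernel. This is not Nvidia's PhysX code, just an illustration that physics work is plain data-parallel math running on the same CUDA cores/shaders that handle graphics compute.

Code:
// Hypothetical illustration only -- not PhysX. A physics step expressed as a
// plain compute kernel: it runs on the same CUDA cores that execute shader
// work, with no dedicated "physics hardware" involved.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void integrate(float3 *pos, float3 *vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    vel[i].y += -9.81f * dt;          // gravity
    pos[i].x += vel[i].x * dt;        // explicit Euler position update
    pos[i].y += vel[i].y * dt;
    pos[i].z += vel[i].z * dt;
}

int main()
{
    const int n = 1 << 16;            // 65,536 particles
    float3 *pos, *vel;
    cudaMalloc(&pos, n * sizeof(float3));
    cudaMalloc(&vel, n * sizeof(float3));
    cudaMemset(pos, 0, n * sizeof(float3));
    cudaMemset(vel, 0, n * sizeof(float3));

    integrate<<<(n + 255) / 256, 256>>>(pos, vel, n, 1.0f / 60.0f);
    cudaDeviceSynchronize();
    printf("Integrated %d particles on ordinary compute cores.\n", n);

    cudaFree(pos);
    cudaFree(vel);
    return 0;
}

Any "dedicated block" claim would then amount to extra ops or cache tweaks that feed kernels like this faster, which is exactly the "shaders and optimisations" reading.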

Probably in reference to the AEGIA PPU chip that existed before they were bought by Nvidia:
Ageia was a company, not hardware that can be "built in" anywhere.
 

artist

Banned
New article up from Charlie, I got a headache now ..

Physics hardware makes Kepler/GK104 fast
That is the marketing claim anyway

Nvidia’s Kepler/GK104 chip has an interesting secret, a claimed Ageia PhysX hardware block that really isn’t. If you were wondering why Nvidia has been beating the dead horse called PhysX for so long, now you know, but it only gets more interesting from there.
Sources tell SemiAccurate that the ‘big secret’ lurking in the Kepler chips are optimisations for physics calculations. Some are calling this PhysX block a dedicated chunk of hardware, but more sources have been saying that it is simply shaders, optimisations, and likely a dedicated few new ops. In short, marketing may say it is, but under the heat spreader, it is simply shaders and optimisations.

The market has treated hardware PhysX like an unexplained sore that shows up a week after a night you can’t remember through a tequila induced haze. Numbers vary about the absolute magnitude of PhysX’s overwhelming success, but counts of 2011 game releases supporting hardware acceleration range from a low of two to a high of six. The snowball has pretty much stopped rolling, or to be more accurate, it never started, all the developers SemiAccurate spoke with indicate that their use of PhysX hardware acceleration was a cash flow positive experience, but we didn’t talk to all six listed.
With this new bit of information, one big question is answered, but specific hardware implementation details are a bit murky. Is the ‘hardware block’ dedicated to physics calculations when there are some being issued, or is it an AMD/GCN-like multiple instruction issue? Is it just shaders with an added op or two that speed up math routines heavily used by physics simulations? How much die area is spent on this functionality? This isn’t very clear, and given the marketing materials SemiAccurate has seen, explanations will only serve to impede the impending hype.

That said, SemiAccurate is told Kepler/GK104 will be marketed as having a dedicated block, and this will undoubtedly be repeated everywhere, truth notwithstanding. Luckily, since most of the target audience isn’t technically literate, it may “become fact” through the VIECOOCDF (Vast Internet Echo Chamber Of Often Repeated Dubious Facts). Lowering the collective intelligence can be profitable if not ethical. Luckily, the story doesn’t end here; it gets much worse.

This part ties in to the story SemiAccurate published a few weeks ago saying that Nvidia would win this generation. A lot of people have been asking about Kepler/GK104 performance and if it is really that good. The short story is yes and no, depending on your views on some very creative ‘optimisations’ around physics.

As we stated earlier, Kepler wins in most ways vs the current AMD video cards. How does Nvidia do it with a $299 card? Is it raw performance? Massive die size? Performance per metric? The PhysX ‘hardware block’? Cheating? The easy answer is yes, but let's go into a lot more detail.

GK104 is the mid-range GPU in Nvidia’s Kepler family, has a very small die, and the power consumption is far lower than the reported 225W. How low depends on what is released and what clock bins are supported by the final silicon. A1 stepping cards seen by SemiAccurate had much larger heatsinks than the A2 versions, and recent rumours suggest there may be an A3 to fix persistent PCIe3 headaches.

To date, an A3 spin has not been confirmed, but if it is necessary, it will likely push out the late March/early April release date by at least a month. One other possibility is for Nvidia to pull an Intel and release cards without the official PCI SIG stamp, adding it when A3 silicon is available. In any case, the number of PCIe3-supporting computers on the market is minimal, so functionally speaking, it doesn’t matter. You may lose a small bit of theoretical performance, but for a mid-range part, it is unlikely to be noticeable. Marketing is a completely different story though, one not closely tied to the reality most of us live in.

The architecture itself is very different from Fermi, SemiAccurate’s sources point to a near 3TF card with a 256-bit memory bus. Kepler is said to have a very different shader architecture from Fermi, going to much more AMD-like units, caches optimised for physics/computation, and clocks said to be close to the Cayman/Tahiti chips. The initial target floating among the informed is in the 900-1000MHz range. Rumours have it running anywhere from about 800MHz in early silicon to 1.1+GHz later on, with early stepping being not far off later ones. Contrary to some floating rumours, yields are not a problem for either GK104 or TSMC’s 28nm process in general.

Performance is likewise said to be a tiny bit under 3TF from a much larger shader count than previous architectures. This is comparable to the 3.79TF and 2048 shaders on AMD’s Tahiti, GK104 isn’t far off either number. With the loss of the so called “Hot Clocked” shaders, this leaves two main paths to go down, two CUs plus hardware PhysX unit or three. Since there is no dedicated hardware physics block, the math says each shader unit will probably do two SP FLOPs per clock or one DP FLOP.

This would be in line with the company’s earlier claims of a large jump in compute capabilities, but also leads to questions of how those shaders will be fed with only a 256-bit memory path. Given the small die sizes floating around, it is unlikely to be Itanium-esque brute forcing through large caches. The net result is that shader utilisation is likely to fall dramatically, with a commensurate loss of real world performance compared to theoretical peak.

In the same way that AMD’s Fusion chips count GPU FLOPS the same way they do CPU FLOPS in some marketing materials, Kepler’s 3TF won’t measure up close to AMD’s 3TF parts. Benchmarks for GK104 shown to SemiAccurate have the card running about 10-20% slower than Tahiti. On games that both heavily use physics related number crunching and have the code paths to do so on Kepler hardware, performance should seem to be well above what is expected from a generic 3TF card. That brings up the fundamental question of whether the card is really performing to that level?

This is where the plot gets interesting. How applicable is the “PhysX block”/shader optimisations to the general case? If physics code is the bottleneck in your app, a goal Nvidia appears to actively code for, then uncorking that artificial impediment should make an app positively fly. On applications that are written correctly without artificial performance limits, Kepler’s performance should be much more marginal. Since Nvidia is pricing GK104 against AMD’s mid-range Pitcairn ASIC, you can reasonably conclude that the performance will line up against that card, possibly a bit higher. If it could reasonably defeat everything on the market in a non-stacked deck comparison, it would be priced accordingly, at least until the high end part is released.

All of the benchmark numbers shown by Nvidia, and later to SemiAccurate, were overwhelmingly positive. How overwhelmingly positive? Far faster than an AMD HD7970/Tahiti for a chip with far less die area and power use, and it blew an overclocked 580GTX out of the water by unbelievable margins. That is why we wrote this article. Before you take that as a backpedal, we still think those numbers are real, the card will achieve that level of performance in the real world on some programs.

The problem for Nvidia is that once you venture outside of that narrow list of tailored programs, performance is likely to fall off a cliff, with peaky performance the likes of which haven’t been seen in a long time. On some games, GK104 will handily trounce a 7970, on others, it will probably lose to a Pitcairn. Does this mean it won’t actually do what is promised? No, it will. Is this a problem? Depends on how far review sites dare to step outside of the ‘recommended’ list of games to benchmark in the reviewer's guide.
Ethically, this could go either way, and in a vacuum, we would be more than willing to say that the cards are capable of very high performance. The problem is that the numbers that Nvidia will likely show off at the launch are not in a vacuum, nor are they very real, even considering the above caveats. Nvidia is going out of their way to have patches coded for games that tend to be used as benchmarks by popular sites.

Once again, this is nothing new, and has been done many times before. One example that is often mentioned is Starcraft II’s use of stencil buffers. People with inside knowledge of that game’s development have said that Nvidia gave Blizzard help in coding some parts of the game during the final ‘crunch’ period. The code is said to heavily use stencil buffers to fix some issues and patch over minor glitches. Again nothing unusual, AMD, Intel, and almost everyone else does this on a case by case basis, especially for AAA titles released in conjunction with new hardware.

Since Nvidia’s Fermi generation GPUs are very good at handling stencil buffers, they perform very well on this code. Again, this is normal practice, Nvidia put in the effort and now reaps the benefits, good for them. What is odd about this case is that several knowledgeable sources have said that the code actually net decreases performance on both cards. The above tale may be anecdotal, but Starcraft 2's release code sure seemed to use stencil buffers a lot more than you would expect, unreasonably so according to many coders. This, however, doesn’t constitute proof in any way, but it fits what SemiAccurate has seen Nvidia do in prior cases.

More to the point is antialiasing (AA) in Batman: Arkham Asylum. If you recall, AMD started complaining about that game’s AA routines upon release. They directly stated that if AMD cards were detected, the game would disable AA for non-technical reasons. (Note: The original post that TechPowerUp refers to has the pertinent sections in the comments, not on the front page. It takes a little searching to find the post that also talks about several other games having similar ‘bugs’.) It goes on to state that if the card IDs were changed, the AA in the game functioned correctly on ATI hardware.

Short story, this turned into the proverbial “epic pissing match”, with Nvidia claiming that it was Eidos that owned the code, and they were free to do with it as they see fit. This is technically true. Unfortunately, emails seen by SemiAccurate directly contradict this. Those emails state unequivocally that Eidos should not change code written by Nvidia and provided to Eidos as a part of Batman: Arkham Asylum. When questioned on why, Eidos said they could not do anything due to the advice of their attorneys.
Since it was the attorneys objecting, not the coders, we can only speculate that this was due to Nvidia’s financial sponsorship of the game, not any technical reason. Since sources tell SemiAccurate that Batman: Arkham Asylum only uses standard DirectX calls to implement AA, and it appears to function if the graphics card IDs are changed, this seems to be nothing other than Nvidia directly sabotaging their competition and not allowing AMD to remove the lockout. Go and re-read the statements from AMD/ATI, Nvidia, and Eidos, then draw your own conclusions.

Why do we bring these two cases up in a Kepler article? Well, we hear that it is happening again. Both AMD and Nvidia have developers that they can and do ‘embed’ at game companies. This is an old and quite legitimate practice for GPU and non-GPU hardware companies. Everyone does it. It can be done ethically or not, with net performance gains for the end user or not, and with the intent to hurt or harm. In general, the more marketing money involved, the more most developers are willing to go out on a shaky ethical limb.

One last really good example: tessellation. High end Fermi cards (GF100/110, GTX480/GTX580) are heavily biased toward geometry performance. Since most modern GPUs can compute multiple triangles per displayable pixel on any currently available monitor, usually multiple monitors, doubling that performance is a rather dubious win. Doubling it again makes you wonder why so much die area was wasted.

Since Nvidia did waste that die area, helping games show that prowess off is a good thing for users, right? Look at Crysis 2, a AAA title that is heavily promoted by Nvidia, it positively flies on Fermi based cards, but performance on AMD GPUs is far less impressive. Why? The amazing detail in things like the concrete blocks, brick walls, and vast expanses of realistically modelled water. Breathtaking isn’t it? All thanks to Nvidia’s efforts to make the game experience better on their hardware. How could this be interpreted as anything but a win for users by a reasonable observer?

Nvidia is said to have around 15 developers they can embed at companies to help ‘optimise’ their code, ‘fix bugs’, and work out ‘performance problems’, even if those problems are not on Nvidia hardware. The count for other companies is less clear, but unlikely to be much different. Sources tell SemiAccurate that about half of them are currently working at Eidos on, wait for it, a patch for the recently released Batman: Arkham City game. Since both the original and the new Batman games are flag bearers for Nvidia’s hardware/GPU PhysX acceleration, it doesn’t take a genius to connect the dots. Since neither the patch nor Kepler based video cards are out yet, we can only wait to see what the end result is.

If the purported patch does change performance radically on specific cards, is this legitimate GPU performance? Yes. How about if it raises performance on Kepler cards while decreasing performance on non-Kepler cards to a point lower than pre-patch levels? How about if it raises performance on Kepler cards while decreasing performance only on non-Nvidia cards? Which scenario will it be? Time will tell.

How many other games have had this level of attention and optimisation gifted upon them is another open question. One thing we can say is that the list of benchmarks shown off by Nvidia where Kepler has an overwhelming advantage all support PhysX. This is not to say that they are all hardware/GPU PhysX accelerated, they are not, most use the software API.

This is important because it strongly suggests that Nvidia is accelerating their own software APIs on Kepler without pointing it out explicitly. Since Kepler is a new card with new drivers, there is no foul play here, and it is a quite legitimate use of the available hardware. Then again, they have been proven to degrade the performance of the competition through either passive or active methods. Since Nvidia controls the APIs and middleware used, the competition can not ‘fix’ these ‘problems with the performance of their hardware’.

Going back to Kepler, we see that this happy and completely ethical game is going to be starting round 3, or round 17, depending on how you count. Nvidia appears to be stacking the playing field to both cripple the competition and raise their own performance. Is the performance of Kepler cards legitimate? Yes. Is it the general case? No. If you look at the most comprehensive list of supported titles we can find, it is long, but the number of titles released per year isn’t all that impressive, and anecdotally speaking, appears to be slowing.

When Kepler is released, you can reasonably expect extremely peaky performance. For some games, specifically those running Nvidia middleware, it should fly. For the rest, performance is likely to fall off the proverbial cliff. Hard. So hard that it will likely be hard pressed to beat AMD’s mid-range card.

What does this mean in the end? Is it cheating? Is it ethical? Is Kepler/GK104 going to be worth the money? Will it beat AMD’s 7970? These are all subjective decisions for you to make. What software will Nvidia show off as benchmarks to promote Kepler’s performance? That list is a little narrower. What will happen to sites that dare to test software that is not ‘legitimately accelerated’? No idea, but history offers some clues. One thing you can say for sure is that the information released prior to and with the card is unlikely to be the whole story. Legitimacy, performance, honesty, and ethics are unlikely to resemble the official talking points, and the whole truth is likely to be hidden from prying eyes for very partisan reasons. Big grains of salt around this one people, be very skeptical of everything you hear, and take nothing at face value.

Cliffnotes:
  • GK104 compute is 3TFlop (compared to 1.5TFlops of the GTX580). 1024CC??? (see the arithmetic sketch after this list)
  • Power consumption MUCH lower than 225W.
  • Good yields, A2 silicon out right now. If A3 is required, launch may be pushed a month back to May.
  • No more hot clocks for shaders
  • Clocks are close to Tahiti, could go up to 1.1GHz
  • Performance is on average 10-20% slower than the 7970 but faster than the 7870 (Pitcairn)
  • Inconsistent performance - in best case scenarios it beats the 7970 and an overclocked GTX580 easily, but in worst case scenarios it loses to the 7870 (Pitcairn) - probably due to the memory bandwidth, but why would it be losing to Pitcairn, which is also 256b?
  • Nvidia lobbying developers to adopt their PhysX code into games to ensure wins against AMD; will probably recommend reviewers test best case scenarios.
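
For reference, a quick back-of-envelope check of the compute and bandwidth figures quoted above, assuming the usual peak-rate formulas (cores × shader clock × 2 FLOPs per clock for a multiply-add, and bus width × per-pin data rate for memory). The GK104 core count and memory data rate here are placeholders for the rumours, not confirmed specs.

Code:
// Back-of-envelope check of the figures quoted above. The GTX 580 and HD 7970
// numbers are known specs; the GK104 core count and memory data rate are
// placeholders for the rumours (hence the "???" in the cliffnotes).
#include <cstdio>

static double peak_tflops(int cores, double shader_clock_ghz)
{
    // Peak single-precision rate: cores * clock * 2 FLOPs (multiply-add).
    return cores * shader_clock_ghz * 2.0 / 1000.0;
}

static double peak_bandwidth_gbs(int bus_bits, double data_rate_gbps)
{
    // Peak memory bandwidth: bus width in bytes * per-pin data rate.
    return bus_bits / 8.0 * data_rate_gbps;
}

int main()
{
    // GTX 580 (Fermi): 512 cores at a 1.544 GHz hot clock -> ~1.6 TFLOPS,
    // in line with the ~1.5 TFLOPS figure above.
    printf("GTX 580 : %.2f TFLOPS\n", peak_tflops(512, 1.544));

    // HD 7970 (Tahiti): 2048 shaders at 0.925 GHz -> ~3.79 TFLOPS, and
    // 384-bit GDDR5 at 5.5 Gbps -> 264 GB/s.
    printf("HD 7970 : %.2f TFLOPS, %.0f GB/s\n",
           peak_tflops(2048, 0.925), peak_bandwidth_gbs(384, 5.5));

    // Rumoured GK104: with no hot clock, 1024 cores at ~1.0 GHz only gives
    // ~2 TFLOPS, so "near 3 TFLOPS" needs more cores or much higher clocks.
    // The 5.5 Gbps on a 256-bit bus is an assumption, not a reported spec.
    printf("GK104(?): %.2f TFLOPS, %.0f GB/s\n",
           peak_tflops(1024, 1.0), peak_bandwidth_gbs(256, 5.5));
    return 0;
}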
 

artist

Banned
I've seen it on multiple forums before both new architecture announcements. I don't have time to search the Internet for it now. GT200 was supposed to have "PhysX hardware", Fermi was supposed to have it, and now the same people think that Kepler will surely have it. It seems that many people just don't get that "physics hardware" and "graphics hardware" are both essentially "compute hardware" and are the same thing.

I don't like Charlie's write-ups, but he's right when he says that "it is simply shaders and optimisations". And as such, being almost pure software, they can't have any effect on GK104 development (well, they can at the stage of designing a new architecture, but certainly not at the point of making a chip on that architecture).
It would have taken much less time for you to look it up than to type out two big paragraphs. Again, unless you back up your claims, it's nothing but hot air.

Ageia was a company, not hardware that can be "built in" anywhere.
Again, it's probably a reference to the PPU. They might have wanted to differentiate between PhysX (the SDK) and the PPU. Charlie's using the Ageia reference too, and for good reason.
 

artist

Banned
I would be willing to take a card 10-20% slower than the 7970 for $299, but then I'd like to see the worst-case scenarios just to make sure the trade-off is right.
 

cilonen

Member
Hmm, I don't like what I'm hearing; mostly I don't like the talk of delays. Would it kill Nvidia to put out a press release telling people what they're up to?

Unless of course, these parts are nowhere near ready, which in turn makes things even worse.

I'm about a week away from running out of patience for hearing something concrete from Nvidia and opting for a 7xxx Radeon; it might not be the fastest thing out there in six months but at least they're shipping units today rather than hiding behind smoke and mirrors of "possibly end of March or early April if we're lucky"
 

Corky

Nine out of ten orphans can't tell the difference.
Eh... I'm waiting for these cards, but I really hope that they (Nvidia) don't go out of their way like that to ensure better results than AMD. I mean, make a better GPU, not a GPU that just runs your proprietary physics code better than your competitor's.

PhysX is far from relevant in my eyes, and that's with me having played Batman AC/AA and games like Mirror's Edge with PhysX; the trade-off is almost never worth it.
 
That article is troubling if accurate. It would mean Nvidia would push proprietary middleware even harder, and the differences in game performance depending on the brand of card you own would get larger. That's going to be confusing and annoying.
 

cilonen

Member
That article is troubling if accurate. It would mean Nvidia would push proprietary middleware even harder, and the differences in game performance depending on the brand of card you own would get larger. That's going to be confusing and annoying.

Only if it actually gains some traction with devs. Hopefully Nvidia won't moneyhat too many projects and it'll be solidly ignored.

If Nvidia can't demonstrate a big performance increase across the majority of titles then they're going to have a hard time explaining why their cards suck.
 

artist

Banned
Hmm, I don't like what I'm hearing; mostly I don't like the talk of delays. Would it kill Nvidia to put out a press release telling people what they're up to?
They've done that in the past - talked to selected media outlets on NV30 (GeForce 5800 series) and GF100 (GTX 480 series) - and it didn't turn out too well.
 
New article up from Charlie, I got a headache now ..

This is just a thought but why not ignore the lunatic?

Hmm, I don't like what I'm hearing; mostly I don't like the talk of delays. Would it kill Nvidia to put out a press release telling people what they're up to?

Because if they issued a press release every time some moron on the Internet wrote bad articles full of hearsay and conjecture they'd do nothing but write press releases. Sure we could make exceptions for the biggest horse's ass but that would only encourage people to be the biggest horse's ass.
 

artist

Banned
This is just a thought but why not ignore the lunatic?
He has got the only Kepler rumors as of now, and they were VERY positive to start with (and unbelievable, coming from him).

Even in his latest piece, if you take out his hate .. GK104 still sounds very promising. Like I said earlier, for 80-90% of the 7970's performance at $299, I might get one.
 

Reallink

Member
He has got the only Kepler rumors as of now, and they were VERY positive to start with (and unbelievable, coming from him).

Even in his latest piece, if you take out his hate .. GK104 still sounds very promising. Like I said earlier, for 80-90% of the 7970's performance at $299, I might get one.

When and why did $300 become the accepted mid-range price point? $200-$220 for a 1GB GTX 460 was the sweet spot a couple of years ago; now it's $300?
 