
Fermi (Nvidia Next Gen) GCPU Architecture: Thread of promises, waiting and 2010

brain_stew said:
Hardly, Nvidia's GPUs have been outbalanced in terms of ALU:TEX (in favour of the TMUs) since forever, and GT200 was a texturing monster. This move makes all kinds of sense to me.

Could be interesting to see its impact on Crysis, mind, as the standard flythrough benchmark has always seemed to be texture bound (hence the middling performance of ATI's pre-RV870 parts in it), so it could result in some bad PR. It's a shitty benchmark of course, as shader performance is much more important in-game, but no doubt plenty of sites will continue to use it and potentially give consumers the "false" illusion that Crysis performance isn't so hot.
I think you misinterpreted, or I wasn't clear enough. I meant to say that with this move Nvidia has kind of given up on the texturing advantage they traditionally had.

Durante said:
If much of the delay was caused by parallelizing the whole setup/geometry stuff (as Anand speculates), then this is something ATI will have to do sooner or later as well -- considering that clock speed increases have slowed to a crawl, 1 tri/clock isn't going to cut it forever. They'll probably have better timing for it though.

@irfan: where did you get that clock for the GF100 TMUs?
Rys and others have mentioned it running at half the main hot clock (1400MHz). Where A3 will end up is anyone's guess.
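Durante's 1 tri/clock point is just arithmetic, for what it's worth. A rough sketch (the 850MHz and four-way numbers below are illustrative guesses on my part, not anyone's specs):

```python
# Back-of-envelope: why 1 triangle/clock becomes a ceiling.
# The clock speeds and setup-unit counts here are illustrative
# numbers, not vendor specs.

def tri_throughput(tris_per_clock, clock_mhz):
    """Peak triangle setup rate in millions of triangles/sec."""
    return tris_per_clock * clock_mhz

# A single fixed-function setup unit at 850 MHz tops out at
# 850 Mtris/s no matter how wide the shader core grows:
assert tri_throughput(1, 850) == 850

# Parallelising setup (as GF100 reportedly does, one unit per
# raster partition) scales with chip width instead of clock:
assert tri_throughput(4, 700) == 2800
```

That's why parallel setup wins once clocks stall: the only lever left is adding more units.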

Thanks gofreak, value your opinion because you do not wander off in the twilight (like most at B3D do) with just the tech stuff. :P
 
cryptic said:

First I've seen of it.

--
Charlie Demerjian, historically, has been pretty anti-nVidia, most recently due to some PR kerfuffle that happened in the last couple years. Now, that's not to say he's wrong (and to me, at least, a lot of what he's saying in that article rings true, especially the nVidia-not-showing-Fermi-as-compared-to-a-5890 bit), but I'd take this with a grain of salt. We'll definitely see what's actually going on here by March-ish, and the die size, wattage numbers, costs, clock speeds, and shader numbers on those cards will tell the tale.

Still, if even a portion of this is true, things don't look good for nVidia on the consumer front. Let's remember that most of the money is made in the low-to-midrange markets, and even if Fermi blows Cypress/Hemlock out of the water, that isn't necessarily going to translate into revenue, especially if its lower-priced, lower-performance derivatives are months away.
 
gofreak said:
When do we expect ATI's a) next cards and b) next 'big' refresh (if they're not one and the same)?
It's as good as anyone's guess, but I expect them to ride out the majority of this year on Evergreen: get the yields perfect, BOMs down, clean up the product stack and possibly streamline inventory levels.

I don't think they'll go for a larger GPU as a refresh; the last time they did that was five years ago (X1800 -> X1900). They'll probably wait for the 32nm or 28nm node for that.

That Charlie article is fluff, especially the software tessellator part; I haven't seen the TDP figure disproved yet.
 
adg1034 said:
First I've seen of it.

--
Charlie Demerjian, historically, has been pretty anti-nVidia, most recently due to some PR kerfuffle that happened in the last couple years. Now, that's not to say he's wrong (and to me, at least, a lot of what he's saying in that article rings true, especially the nVidia-not-showing-Fermi-as-compared-to-a-5890 bit), but I'd take this with a grain of salt. We'll definitely see what's actually going on here by March-ish, and the die size, wattage numbers, costs, clock speeds, and shader numbers on those cards will tell the tale.

Still, if even a portion of this is true, things don't look good for nVidia on the consumer front. Let's remember that most of the money is made in the low-to-midrange markets, and even if Fermi blows Cypress/Hemlock out of the water, that isn't necessarily going to translate into revenue, especially if its lower-priced, lower-performance derivatives are months away.

Isn't that the guy who shits on Nvidia all the time, while having ATI ads plastered all over his site?
 
I was hoping to see some competition from Nvidia so that ATI would have to drop their prices soon. Unfortunately it doesn't look that way: even if the new Nvidia boards offer great performance, they'll likely be much more expensive and won't put enough pressure on ATI to drop prices.

I actually hope Nvidia has the next thing in the works already and hopefully we'll see some real price/performance wars in Q4 of this year.
 
I actually hope Nvidia has the next thing in the works already and hopefully we'll see some real price/performance wars in Q4 of this year.

Nvidia is not going to launch two generations in an 8-month time span; that is absurd to hope for. We might see upgraded versions of GF100 by the end of 2010. Of course, it is probably just as likely that GF100 is never delivered in great quantities in 2010.
 
x3n05 said:
Same, I am so close to pulling the trigger on a 5970. Now do I wait, for hope of a price drop, but the risk of another delay, or strike while the iron's hot...

Wait. There's no major rush for a new card. Hell, I'm on a 4850 and I have no issues playing anything new so far; you're on an even more powerful card, so you should be fine.

I know I'm personally going to try and wait until next year before I upgrade. Grab either Intel's Sandy Bridge or AMD's Bulldozer-based CPUs, along with ATI's 6x card (for AMD+AMD Fusion) or whatever Nvidia has around that time.

Unless of course I attempt to play something this year that brings my machine to its knees, then I will have to upgrade :lol
 
Technosteve said:
On ATI using .NET to create their drivers: a lot of games use .NET to hook into the graphics card. Fallout 3 does, so do most Unreal Tournament games I believe, and Games for Windows Live. I also think PhysX uses a .NET API. Just from installing games I have four or five different .NET versions on my computer, and it annoys me how many different versions each game installs.
The drivers aren't made in .NET (I don't think it would even be possible), only the control panel is. The CCC is also optional: you can download and install only the drivers if having a .NET app in the background bothers you that much. There's an app called ATI Tray Tools which does a great job of replacing the CCC (and is written in C++).
 
x3n05 said:
Same, I am so close to pulling the trigger on a 5970. Now do I wait, for hope of a price drop, but the risk of another delay, or strike while the iron's hot...

Dual GPU setups are pretty much universally horrible, if you've got the cash for it, GF100 almost certainly looks like your best bet at the high end.
 
M3d10n said:
The drivers aren't made in .NET (I don't think it would even be possible), only the control panel is. The CCC is also optional: you can download and install only the drivers if having a .NET app in the background bothers you that much. There's an app called ATI Tray Tools which does a great job of replacing the CCC (and is written in C++).

It doesn't support x64 OSes, so it's not worth mentioning.
 
cryptic said:

Getting back to the architecture itself, Jen-Hsun was mocking Intel's Larrabee as "Laughabee" while making the exact same thing himself. As we stated last May, GF100 has almost no fixed function units, not even the tessellator. Most of the units that were fixed in G200 are now distributed, something that is both good and bad.

Isn't this essentially proven false with the recent release of info?
 
Sutanreyu said:
Isn't this essentially proven false with the recent release of info?

From Anandtech:

The use of a fixed-function pipeline in their eyes was a poor choice given the geometric complexity that a tessellator would create, and hence the entire pipeline needed to be rebalanced. By moving to the parallel design of the PolyMorph Engine, NVIDIA’s geometry hardware is no longer bound by any limits of the pipelined fixed-function design (such as bottlenecks in one stage of the pipeline), and for better or for worse, they can scale their geometry and raster abilities with the size of the chip.

Anand (or Ryan Smith, to be precise) basically confirms that fixed-function units are gone, but that NVIDIA thinks that that's actually okay. We'll see.
 
Sutanreyu said:
Isn't this essentially proven false with the recent release of info?

Demerjian really isn't worth listening to. He's got many hobby horses in the form of companies he hates, and nVidia is probably numero uno. If you were to believe him, anything and everything nVidia puts out will be a complete disaster. Google News even lists him under satire :lol
 
evlcookie said:
Wait. There's no major rush for a new card. Hell, I'm on a 4850 and I have no issues playing anything new so far; you're on an even more powerful card, so you should be fine.

I know I'm personally going to try and wait until next year before I upgrade. Grab either Intel's Sandy Bridge or AMD's Bulldozer-based CPUs, along with ATI's 6x card (for AMD+AMD Fusion) or whatever Nvidia has around that time.

Unless of course I attempt to play something this year that brings my machine to its knees, then I will have to upgrade :lol

You're right, there is no 'need' for me to upgrade at the moment, the worst case scenario is I still go with ATI but the cards are cheaper.


brain_stew said:
Dual GPU setups are pretty much universally horrible, if you've got the cash for it, GF100 almost certainly looks like your best bet at the high end.

When I first got my X2 I would have agreed with you, but lately it has been fine for the most part; only some games don't utilise the second GPU. Having said that, a single-GPU card of comparable power is always going to be better than a dual-GPU setup.

That said, advice taken, and I will hold out until March before I make any rash decisions. Honestly, the only reasons I was going to upgrade were tessellation for AVP :lol and, to a lesser extent, Eyefinity (the technology), something that Nvidia also supports, albeit with only 2 monitors per card.
 
x3n05 said:
You're right, there is no 'need' for me to upgrade at the moment, the worst case scenario is I still go with ATI but the cards are cheaper.




When I first got my X2 I would have agreed with you, but lately it has been fine for the most part; only some games don't utilise the second GPU. Having said that, a single-GPU card of comparable power is always going to be better than a dual-GPU setup.

That said, advice taken, and I will hold out until March before I make any rash decisions. Honestly, the only reasons I was going to upgrade were tessellation for AVP :lol and, to a lesser extent, Eyefinity (the technology), something that Nvidia also supports, albeit with only 2 monitors per card.

By all accounts tessellation should be much better/faster on GF100 (they've redesigned their whole fixed-function pipeline to better accommodate it), so if it's tessellation you're interested in then GF100 definitely seems to be your best bet.
 
gofreak said:
Demerjian really isn't worth listening to. He's got many hobby horses in the form of companies he hates, and nVidia is probably numero uno. If you were to believe him, anything and everything nVidia puts out will be a complete disaster. Google News even lists him under satire :lol


There's no probably about it.
 
I'll build a new PC soon after March. Just a little project for me and I'm not too fussed about price really. Yeah, stupid I know but if I buy components individually I don't feel it as much.

I'm excited to see what I'll be able to get GPU wise come the time. Is it realistic to expect any major revisions from ATI in that timescale? If not NV will probably be my choice.
 
slider said:
I'll build a new PC soon after March. Just a little project for me and I'm not too fussed about price really. Yeah, stupid I know but if I buy components individually I don't feel it as much.

I'm excited to see what I'll be able to get GPU wise come the time. Is it realistic to expect any major revisions from ATI in that timescale? If not NV will probably be my choice.

I doubt Nvidia GF100s will be readily available in March if all the rumors are true, and no, I am not talking just about the SemiAccurate blog. We are most likely going to see very limited quantities from the manufacturers. No, it is not realistic to expect an ATI revision by then. But one can always hope for that, as well as hope for Nvidia to have a lot of GF100s at launch.
 
DennisK4 said:
If you have impressions from using this latest version, please share.

You have to run ATT as an administrator or disable UAC in order to get it to work; otherwise the Low Level Driver and the program will not load. While IMO it's not as good as nHancer (forcing AA doesn't always work), it gets the job done and is much more flexible than CCC. Being able to create custom game profiles is a blessing.

One of the reasons I think nHancer is better is that ATI Tray Tools has some minor issues. For example, I have to disable the Steam overlay in Steam games when using the OSD, because it simply doesn't work. ATT also looks a bit unpolished, with icons and widgets everywhere. I haven't encountered a single program crash yet, though.
 
adg1034 said:
Did you even read my post?

but I'd take this with a grain of salt

I didn't think you used a strong enough figure of speech there ;-).
If I remember correctly, he predicted the death of Nvidia last year... Until that thread on GAF, I never knew there were ATI and Nvidia fanboys.
 
spwolf said:
I didn't think you used a strong enough figure of speech there ;-).
If I remember correctly, he predicted the death of Nvidia last year... Until that thread on GAF, I never knew there were ATI and Nvidia fanboys.
I think you are confusing doom and gloom with death; I think even Charlie realises that AMD is more likely to go kaput than Nvidia. Amongst his overflowing hate for Nvidia, he's got quite a few things right, which, to be fair, is more than I'd like to call luck. He has good sources, but that's about it. I keep saying this over and over: Charlie's articles are painful to read, but you can pick the real bits and pieces out and leave the trash.

It looks like the folks at B3D have arrived at some interesting bits of info:

http://i50.tinypic.com/24o0o47.png <- Looks like a moot bench, like most custom benches. It is also utilising PhysX, which makes it even more irrelevant.

http://i47.tinypic.com/17bxg8.png <- The increase over GT200 seems exaggerated, as GT200 doesn't perform too well at that resolution and GF100 also has a larger framebuffer, among other things, making the bench pointless.

It looks like everyone is now settling back into the wait-and-watch approach and has taken all these figures with a grain of salt. Only showing FC2 among real games also raises some eyebrows... I wish they'd shown at least some real performance figures. I don't think these paper reveals are slowing down sales of the 5000 series in favor of Fermi.
 
M3d10n said:
The drivers aren't made in .NET (I don't think it would even be possible), only the control panel is. The CCC is also optional: you can download and install only the drivers if having a .NET app in the background bothers you that much. There's an app called ATI Tray Tools which does a great job of replacing the CCC (and is written in C++).
You still have to go into CCC for dual-display tinkering, because Ray thinks dual displays are stupid.
 
JADS said:
You have to run ATT as an administrator or disable UAC in order to get it to work; otherwise the Low Level Driver and the program will not load. While IMO it's not as good as nHancer (forcing AA doesn't always work), it gets the job done and is much more flexible than CCC. Being able to create custom game profiles is a blessing.

Well, you can still do that in CCC (although the process is pretty convoluted), but it wouldn't automatically launch the profile once the .exe is detected as running; instead you have to manually launch it from within CCC. Same goes for older versions of Tray Tools, and honestly, between Steam and Windows Game Explorer I really don't need/want another way to launch my games. If it's not automatic, it's just not worth the bother. Is this sorted in the new beta then?

I'd pretty much given up hope of there being an x64-compatible version of Tray Tools, as development seemed to have halted, so it's nice to see the project is up and running again.
 
irfan said:
I don't think these paper reveals are slowing down sales of the 5000 series in favor of Fermi.

My guess is that Nvidia is trying to "Dreamcast" the 5000 series at this point. They have nothing to sell, they won't for another couple of months, and even then we don't know a thing about pricing or availability. All we have now are promises and some cherry-picked benchmarks that show them well ahead. Similarly, Sony sent its hype/FUD machine into overdrive to keep people from buying Dreamcasts while the PS2 was still months away, promising graphics leaps and bounds better than the DC. It wasn't entirely true, but it worked. People saved their money for the PS2 and believed that it was going to destroy the DC out of the gate graphically. We all now know that it took a year or so for that to happen, and by then the DC was dead.
 
Dr. Zoidberg said:
My guess is that Nvidia is trying to "Dreamcast" the 5000 series at this point. They have nothing to sell, they won't for another couple of months, and even then we don't know a thing about pricing or availability. All we have now are promises and some cherry-picked benchmarks that show them well ahead. Similarly, Sony sent its hype/FUD machine into overdrive to keep people from buying Dreamcasts while the PS2 was still months away, promising graphics leaps and bounds better than the DC. It wasn't entirely true, but it worked. People saved their money for the PS2 and believed that it was going to destroy the DC out of the gate graphically. We all now know that it took a year or so for that to happen, and by then the DC was dead.
Your comparison is correct in the sense of strategy, but not in how it might or will play out; GPU cycles are way shorter (6-12 months) than console cycles (4-5 years). Quite a lot of Evergreens have been shipped already (2+ million), and the floodgates open with Redwood (5600) and Cedar (5500/5400) next month. These parts will sell like hot cakes, primarily because of the DX11 tag and also because their volume is 10-20 times that of the market for GF100.
 
irfan said:
Your comparison is correct in the sense of strategy, but not in how it might or will play out; GPU cycles are way shorter (6-12 months) than console cycles (4-5 years). Quite a lot of Evergreens have been shipped already (2+ million), and the floodgates open with Redwood (5600) and Cedar (5500/5400) next month. These parts will sell like hot cakes, primarily because of the DX11 tag and also because their volume is 10-20 times that of the market for GF100.
The fact that Redwoods can't really run DX11 games well, however, is a bit of a turn-off for me.
 
irfan said:
Your comparison is correct in the sense of strategy, but not in how it might or will play out;

Yes, I only meant that they were using the strategy, not that it would have the same success. Sony's campaign succeeded wildly, whereas most PC gamers are smart enough to take a wait-and-see approach to GF100 (unless they are fanboys).
 
Having a strong NVIDIA card would be awesome; I really want to try out 3D and PhysX. But GTX 295s are still over 400 euro here, and I doubt Fermi will go lower. Too bad.

And adding the cost of a 120Hz monitor (without 3D I'll play on my TV) and the shutter glasses on top of that would almost double the price of the PC I'll buy. Fuck D:
 
Mr. Wonderful said:
The fact that Redwoods can't really run DX11 games well, however, is a bit of a turn-off for me.
Depends on the resolution. The shitload of PCs sold by OEMs have cards like 9500GTs or 4350s (if they are not IGP-based already), and Redwood should run circles around them.

Fuck, I seriously want a brand new midrange GPU from Nvidia. Remember the likes of the 6600GT, 7600GT, 8800GT? Sigh. Since the 8800GT they're just relying on rebrands: 8800GT, 8800 GTS 512, 9800 GTX, 9800 GT, 9800 GTX+, 9800 GX2, 9600 GSO, GTS 150, GTS 250, GTS 240... and that's without the mobile versions.
 
TSMC says 40nm yield issues resolved

TSMC has improved yield rates on its 40nm manufacturing process, with quality now at about the same level as its 65nm node, according to Mark Liu, Senior VP of Operations at Taiwan Semiconductor Manufacturing Company (TSMC). During a company event yesterday, Liu stated that the chamber matching problems that had impacted yield rates for the company's 40nm node have been resolved.

Liu did not elaborate any further.

TSMC on January 19 held a ceremony marking the completion of a new factory building (Phase 5), which is part of the company's Fab 12 located at the Hsinchu Science Park (HSP), Taiwan. Phase 5 is on track to enter volume production of 28nm products in the third quarter of 2010.

TSMC is also planning to construct Phase 6 of Fab 12, which will be mainly used as its 22nm production base, according to Liu.

http://www.digitimes.com/news/a20100120PD204.html

40nm yields now being good is good news for both Nvidia and ATI. The 28nm bit leads me to think the Evergreen refresh (or next gen) will be out Q4 this year, if all goes well.
 
DieH@rd said:
Excellent. And what is the standard yield for the 65nm chips?
There are different node types: LP (low power), GT (high perf), G (general). So it varies, and it also depends on a lot of things like chip complexity, granularity, redundancy etc.
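For the curious, the usual first-order way yield scales with die area is a simple Poisson defect model. A quick sketch (the defect density here is a made-up illustrative number, not TSMC data; die areas are the commonly cited ballparks for GF100 and Cypress):

```python
import math

def poisson_yield(die_area_cm2, defects_per_cm2):
    """First-order (Poisson) die yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * defects_per_cm2)

# Illustrative defect density of 0.5/cm^2 (assumed, not real data);
# ~529 mm^2 (GF100-class) vs ~334 mm^2 (Cypress-class) dies.
big = poisson_yield(5.29, 0.5)    # roughly 7%
small = poisson_yield(3.34, 0.5)  # roughly 19%

# On the same troubled process, the smaller die yields far better:
assert small > 2 * big
```

Which is the whole argument for ATI's small-die strategy on a rough process: yield falls off exponentially with area, not linearly.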
 
I didn't realise 28nm would be in volume production so soon. I guess that kills any chance of an ATI product refresh before Q3/Q4 then, it'd make little sense to have one when waiting a few more months could let them move to a smaller process node.

So die shrunk RV870 for the holidays and a brand new architecture for March 2011?

That would be my guess, unless they skip the die shrink and just use 28nm for their next generation architecture.
 
brain_stew said:
I didn't realise 28nm would be in volume production so soon. I guess that kills any chance of an ATI product refresh before Q3/Q4 then, it'd make little sense to have one when waiting a few more months could let them move to a smaller process node.

So die shrunk RV870 for the holidays and a brand new architecture for March 2011?

That would be my guess, unless they skip the die shrink and just use 28nm for their next generation architecture.

I'm guessing you would be about right. I figured they would release the 6 series around the same time as Bulldozer so they can flog their Fusion tech. So early 2011 at least.

As for a refresh, ATI has a lot of headroom. They aren't anywhere near the 300W limit of PCIe, unlike Nvidia with the GF100. So I wouldn't be surprised to see something like a 5990 that's just balls-out insane and uses 250W+.
 
brain_stew said:
I didn't realise 28nm would be in volume production so soon. I guess that kills any chance of an ATI product refresh before Q3/Q4 then, it'd make little sense to have one when waiting a few more months could let them move to a smaller process node.

So die shrunk RV870 for the holidays and a brand new architecture for March 2011?

That would be my guess, unless they skip the die shrink and just use 28nm for their next generation architecture.
Am I reading you right? You don't think there will be ANY kind of refresh before very late 2010? That would be, what, almost a year since launch without a refresh? Seems too long to me.
 
DennisK4 said:
Am I reading you right? You don't think there will be ANY kind of refresh before very late 2010? That would be, what, almost a year since launch without a refresh? Seems too long to me.

Not "very late", just sometime in Q3 or Q4 (probably early-mid Q4). It's too late for a refresh in Q1, and why would ATI refresh in Q2 when waiting three months means they could make it a proper refresh with a die shrink? I don't see them increasing the die size of their high-end part anytime soon.

RV770, RV790 and RV870 all had increasingly larger dies; they won't want to go much bigger than they are already, and they've been preaching against monolithic dies for a while now.
 
brain_stew said:
I didn't realise 28nm would be in volume production so soon. I guess that kills any chance of an ATI product refresh before Q3/Q4 then, it'd make little sense to have one when waiting a few more months could let them move to a smaller process node.

So die shrunk RV870 for the holidays and a brand new architecture for March 2011?

That would be my guess, unless they skip the die shrink and just use 28nm for their next generation architecture.
I think 28nm in Q4 is a best-case scenario, and with the way things have gone with TSMC's 40nm, I don't think either Nvidia or ATI is counting on 28nm this year. I think we will see an Evergreen refresh before Q3.

DennisK4 said:
Am I reading you right? You don't think there will be ANY kind of refresh before very late 2010? That would be, what, almost a year since launch without a refresh? Seems too long to me.
RV770 -> RV790 = 10 months
 
Hazaro said:
Huh... Any insight on why 28nm before 22nm?

28nm just mainly for GPU?

It's a half node, just like 40nm and 55nm. GPUs skipped 45nm and they'll skip 32nm as well. 22nm isn't going to be available this year, not even from Intel.
 
Techreport's take:

Speaking of efficiency, that will indeed be the big question about the Fermi architecture and especially about the GF100. How efficient is the architecture in its first implementation?

The chip isn't in the wild yet, so no one has measured its exact die size. Nvidia, as a matter of policy, doesn't disclose die sizes for its GPUs (they are, I believe, the last straggler on this point in the PC market). But we know the transistor count is about three billion, which is, well, hefty. How so large a chip will fare on TSMC's thus far troubled 40-nm fabrication process remains to be seen, but the signs are mixed at best.

Although we don't yet have final product specs, Nvidia's Drew Henry set expectations for the GF100's power consumption by admitting the chip will draw more power under load than the GT200. That fact by itself isn't necessarily a bad thing—Intel's excellent Lynnfield processors consume more power at peak than their Core 2 Quad predecessors, but their total power consumption picture is quite good. Still, any chip this late and this large is going to raise questions, especially with a very capable, much smaller competitor already in the market.

With the new information we have about the GF100's graphics bits and pieces, we can revise our projections for its theoretical peak capabilities. Sad to say, our earlier projections were too bullish on several fronts, so most of our revisions are in a downward direction.

We don't have final clock speeds yet, but we do have a few hints. As I pointed out when we were talking about texturing, Nvidia's suggestion that the GF100's theoretical texture filtering capacity will be lower than the GT200's gives us an upper bound on clock speeds. The crossover point where GF100 would match the GeForce GTX 280 in texturing capacity is a 1505MHz core clock, with the texturing hardware running at half that frequency. We can probably assume the GF100's clocks will be a little lower than that.

We have another nice hint that running the texturing hardware at half the speed of the shaders rather than on a separate core clock will impart a 12-14% frequency boost. In this case, I'm going to be optimistic, follow a hunch, and assume the basis of comparison is the GT200b chip in the GeForce GTX 285. A clock speed boost in that range would get us somewhere near 725MHz for the half-speed clock and 1450MHz for the shaders. The GF100's various graphics units running at those speeds would yield the following peak theoretical rates.

I should pause to explain the asterisk next to the unexpectedly low estimate for the GF100's double-precision performance. By all rights, in this architecture, double-precision math should happen at half the speed of single-precision, clean and simple. However, Nvidia has made the decision to limit DP performance in the GeForce versions of the GF100 to 64 FMA ops per clock—one fourth of what the chip can do. This is presumably a product positioning decision intended to encourage serious compute customers to purchase a Tesla version of the GPU instead. Double-precision support doesn't appear to be of any use for real-time graphics, and I doubt many serious GPU-computing customers will want the peak DP rates without the ECC memory that the Tesla cards will provide. But a few poor hackers in Eastern Europe are going to be seriously bummed, and this does mean the Radeon HD 5870 will be substantially faster than any GeForce card at double-precision math, at least in terms of peak rates.

Otherwise, on paper, the GF100 projects to be superior to the Radeon HD 5870 only in terms of ROP rate and memory bandwidth. (Then again, it's now suddenly notable that we're not estimating triangle throughput. The GF100 will have a clear edge there.) That fact isn't necessarily a calamity. The GeForce GTX 280, for example, had just over half the peak shader arithmetic rate of the Radeon HD 4870 in theory, yet the GTX 280's delivered performance was generally superior. Much hinges on how efficiently the GF100 can perform its duties. What we can say with certainty is that the GF100 will have to achieve a new high-water mark in architectural efficiency in order to outperform the 5870 by a decent margin—something it really needs to do, given that it's a much larger piece of silicon.

Obviously, the GF100 is a major architectural transition for Nvidia, which helps explain its rather difficult birth. The advances it promises in both GPU computing and geometry processing capabilities are pretty radical and could be well worth the pain Nvidia is now enduring, when all is said and done. The company has tackled problems in this generation of technology that its competition will have to address eventually.

In attempting to handicap the GF100's prospects, though, I'm struggling to find a successful analog to such a late and relatively large chip. GPUs like the NV30 and R600 come to mind, along with CPUs like Prescott and Barcelona. All were major architectural revamps, and all of them conspicuously ran hot and underperformed once they reached the market. The only positive examples I can summon are perhaps the R520—the Radeon X1800 XT wasn't so bad once it arrived, though it wasn't a paragon of efficiency—and AMD's K8 processors, which were long delayed but eventually rewrote the rulebook for x86 CPUs. I suppose we'll find out soon enough where in this spectrum the GF100 will reside.
http://techreport.com/articles.x/18332/5

These guys (Scott & Rys) were wildly optimistic re GF100, and even they have now gone somewhat soft.

The decision to lower TMU processing power still doesn't make sense, and artificially limiting the DP performance is a pure cockblock move.
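For what it's worth, Techreport's peak-rate arithmetic checks out on paper. A quick sketch using the figures from the article (64 TMUs at half the hot clock, 512 ALUs, ~1450MHz shaders are all rumors/estimates, not confirmed specs):

```python
# Sanity-checking the article's back-of-envelope numbers.
# All GF100 figures here are rumored/estimated, not official specs.

def texel_rate(tmus, tmu_clock_mhz):
    """Peak bilinear texel fill rate in Mtexels/s."""
    return tmus * tmu_clock_mhz

# GTX 280: 80 TMUs at the 602MHz core clock. The crossover hot clock
# where GF100's 64 half-speed TMUs would match it is ~1505MHz:
assert texel_rate(80, 602) == texel_rate(64, 1505 / 2) == 48160

def dp_gflops(fma_per_clock, shader_clock_mhz):
    """Peak double-precision rate in GFLOPS; one FMA = two flops."""
    return fma_per_clock * 2 * shader_clock_mhz / 1000

full_rate = dp_gflops(256, 1450)   # half-of-SP rate: 512 ALUs / 2
geforce_cap = dp_gflops(64, 1450)  # GeForce parts capped at 64 FMA/clk

assert full_rate == 742.4
assert geforce_cap == 185.6        # exactly 1/4 of the uncapped rate
```

So at the rumored clocks, the capped GeForce DP rate lands well under the 5870's ~544 GFLOPS, which is the article's point.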
 
http://www.xbitlabs.com/news/video/display/20100121233418_ATI_s_Next_Generation_Graphics_Processors_on_Track_for_the_Second_Half_of_2010_AMD_s_CEO.html

Chief executive officer of Advanced Micro Devices said during a conference call with financial analysts that the company’s graphics division ATI was on-track to refresh its lineup of graphics cards in the second half of calendar 2010. The mystery, though, is with what the family of graphics processors will be renewed.

“We are ramping the ATI Radeon HD 5000 series now and look forward to refreshing the entire lineup in the second half of next year*,” said Dirk Meyer, chief executive officer of AMD, during quarterly conference call with financial analysts.

A little earlier this month AMD's worldwide developer relations manager said that ATI hopes to maintain its leadership position in terms of product performance "for the majority of 2010". Still, it is not completely clear what AMD plans to refresh ATI's lineup with going forward.

The company has two ways of getting even faster products to market: it can redesign existing ATI Radeon HD 5000-series graphics chips so as to boost clock speeds tangibly, or it can introduce a brand new family of ninth-generation Radeon chips code-named Northern Islands, which is widely believed to feature a new architecture.


Not a lot is known about Northern Islands. Some sources claim that the new chips will be made using 32nm fabrication process, but the others believe that the new chips will be made on 28nm node. There are also reports that Northern Islands will have richer feature-set compared to Evergreen. According to some other reports, ATI’s next-generation family is code-named Hecatonchires and will feature code-named Cozumel, Ibiza and Kauai chips, which are not northern, but southern islands. As a result, it is possible that ATI may be working on two new architectures.

Earlier this week TSMC said that it would start mass production of chips using 28nm process technology in Q4 2010.

* Considering the fact that the call was dedicated to AMD’s financial results in the Q4 of fiscal 2009, the AMD executive referred to the second half of fiscal 2010 as to the “next year”, which is basically the second half of calendar 2010.

I think it needs to be said twice: he's talking in fiscal years, so he means the second half of this year.
 