The X1900 series was pretty awesome too; ATI implemented a driver hack that forced AA in Oblivion. The Nvidia equivalent at the time had no such function. I pretty much bought the X1900XTX just to play Oblivion back in the day.
Come to think of it, the only two ATI/AMD cards I've ever owned are the 9800 Pro during the Half-Life 2 era and the X1900XTX.
X1900XTX was a great card, yeah. The time between its release and the GF8800 was one of those periods where I used Radeon almost exclusively.
He's talking about a final build of the game, not the source code, and how they couldn't optimize their drivers because they didn't have that build almost until the release date. I'm sure that CDPR sent final builds to AMD as well as to a lot of other h/w vendors. TW3 went gold on April 16th, which means that the whole industry had a month to tweak their stuff against the final build of the game. The first patch was the day 1 patch AFAIK, and it didn't break anything on any GPU.
The talk here is mostly about stability, with some "small" TressFX improvements (this is basically a copy of the patch notes - see below). Thing is, the biggest performance improvement for TR's TressFX option on GeForce came with a new driver from NV, not from the patch made by CD.
TressFX (and Hairworks) is a rather complex piece of software which a game developer is unlikely to want, or even be able, to optimize on their own. This is why we have them as separate "plugins" from IHVs in the first place.
Crystal Dynamics and Nvidia were both able to look at the TressFX code; Nvidia made changes on their driver side and CD made changes on the game side so that TressFX could work on Nvidia cards. Thirty seconds with Google would have shown you this. I thought this was common knowledge, so I didn't bother with a source originally.
Well, since you're talking about Google...
1. TR was released on March 5th, 2013.
2. On March 15th a GeForce driver was released which provided up to 60% (!) performance improvements in the game. I myself remember that the game was unplayable for me on a GTX680 until the new driver came and made it very smooth. I don't think that they could've optimized the TressFX source code for GeForces, tested it with AMD on Radeons - 'cause that's how such changes are usually implemented - and published the update in 10 days. It is far more likely that almost all TressFX optimizations were done by NV in the new driver - this limits the amount of work significantly and makes it possible to produce such optimizations in two weeks.
3. There were two patches released by CD between 5th and 15th of March. The first one said only this about TressFX: "- Some small improvements to TressFX hair rendering." Note that it doesn't say anything about performance at all. The second one said this:
- Cost of TressFX reduced, especially in combination with SSAA.
- TressFX memory usage reduced for AMD Eyefinity and NVIDIA Surround setups.
- TressFX simulation and graphical quality improvements.
And also this:
We've been working closely with NVIDIA to address the issues experienced by some Tomb Raider players. In conjunction with this patch, NVIDIA will be releasing updated drivers that help to improve stability and performance of Tomb Raider on NVIDIA GeForce GPUs.
Nowhere does it say anything about them working on TressFX with Nvidia. From the notes it is clear that the cost of TressFX went down for everyone, not just for GF users. Later patches don't mention TressFX performance anywhere. The biggest performance increase came with the new driver from NV, which was probably using special optimization paths for TR's (and TressFX's) shaders.
To my knowledge, this type of assistance isn't available with Hairworks.
For instance...
http://www.overclock3d.net/articles...nvidia_hairworks_unoptimizable_for_amd_gpus/1
And this quote from CD Projekt's Marcin Momot...
So unlike Tomb Raider and TressFX, which Crystal Dynamics was able to optimize to work better with Nvidia cards (and even Intel iGPUs), CD Projekt and AMD cannot do the same with Hairworks.
We're getting pretty far off topic at this point. If you'd like to discuss this further or if you have access to any other information on the topic please PM.
What NV did with their driver optimization for TR can certainly be done for TW3 by AMD's driver team - if the performance is really lowered by sub-optimal shaders and not by the h/w basics themselves, like slower tessellation or some other feature which runs faster on Maxwell. No one can stop AMD from doing this. NV is doing this with basically every Gaming Evolved title released.
This is one of the reasons why I tend to stick to NV GPUs lately - I don't care why a game is performing badly on my h/w; I paid an IHV to provide me with the best possible experience, and it's not my problem if some other company is somehow blocking them from providing that experience. There is always a way to fix stuff, which is what NV is doing more often than not. As a result, all GE titles work fine on GeForces while some TWIMTBP titles can run like crap on Radeons. This is happening not because AMD is providing some things as "open" but because NV puts a hell of an effort into driver-level shader optimization.
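For anyone wondering what "driver level shader optimization" means mechanically: conceptually, the driver can recognize a game's shaders when they come in for compilation and silently swap in hand-tuned replacements, without any patch from the game developer. A toy sketch of the idea (all names here are hypothetical for illustration; this is not any real driver's API):

```python
import hashlib

# Hypothetical per-title substitution table, keyed by a fingerprint of
# the shader source the game submits. A "driver update with game
# optimizations" is, in this model, just a bigger table.
REPLACEMENTS: dict[str, str] = {}

def register_replacement(original_source: str, tuned_source: str) -> None:
    """Ship a hand-optimized replacement for a known game shader."""
    key = hashlib.sha256(original_source.encode()).hexdigest()
    REPLACEMENTS[key] = tuned_source

def compile_shader(source: str) -> str:
    """At compile time, substitute the tuned version if this shader is known,
    otherwise fall back to compiling the game's own source."""
    key = hashlib.sha256(source.encode()).hexdigest()
    return REPLACEMENTS.get(key, source)

# Example: the IHV recognizes a game's hair-simulation shader and
# substitutes a version tuned for its own architecture.
register_replacement("tressfx_sim_v1", "tressfx_sim_v1_tuned")
print(compile_shader("tressfx_sim_v1"))   # tuned version used
print(compile_shader("unknown_shader"))   # unmodified, passes through
```

The game never knows the swap happened, which is why this kind of fix can ship in a driver in days while a game-side patch would need a full test cycle.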
It is quite interesting, actually, how things will unfold with DX12, as this "thin" API should theoretically limit IHVs' options for driver-level optimizations. It may actually lead to more GE games performing badly on NV hardware.