Is there a video of this demo for us non-owners? Looks pretty.
How does that 7970 DirectCU II compare to two 6950 DirectCU IIs?
I really wanna go back to one card instead of two, and I think I'd only have to put in 50 euros to get the 7970 if I sell my two 6950s.
Performance is comparable, but considering how much easier it is to OC one GPU rather than two, I wouldn't be surprised if a 7970 could beat two 6950s with ease. You'd also get rid of all the dual-GPU headaches, and for only 50 euros I'd do it in a heartbeat.
May I ask how you managed to lose only 50 euros in the trade?
Well, the card costs 270 euros here in Holland, and someone gave me 450 for both.
So I'm going to sell them Sunday and wait for the 7970.
Will it arrive within two weeks?
Ooooh Leo tech demo looks nice.
http://www.youtube.com/watch?v=4gIq-XD5uA8
EDIT: Dave Baumann of AMD confirmed that it works on 5XXX/6XXX series cards as well, but obviously it will run like shit compared to the 7970 with the features implemented.
Runs smooth on a 6970 at 1920x1200; it uses up to 1.5GB of video memory, the highest I've seen.
Anandtech Article said:
In a traditional (forward) renderer, the rendering process is rather straightforward and geometry data is preserved until the frame is done rendering. And while this normally is all well and good, the one big pitfall of a forward renderer is that complex lighting is very expensive to run because you don’t know precisely which lights will hit which geometry, resulting in the lighting equivalent of overdraw where objects are rendered multiple times to handle all of the lights.
In deferred rendering however, the rendering process is modified, most fundamentally by breaking it down into several additional parts and using an additional intermediate buffer (the G-Buffer) to store the results. Ultimately through deferred rendering it’s possible to decouple lighting from geometry such that the lighting isn’t handled until after the geometry is thrown away, which reduces the amount of time spent calculating lighting as only the visible scene is lit.
The downside to this however is that in its most basic implementation deferred rendering makes MSAA impossible (since the geometry has been thrown out), and it’s difficult to efficiently light complex materials. The MSAA problem in particular can be solved by modifying the algorithm to save the geometry data for later use (a deferred geometry pass), but the consequence is that MSAA implemented in such a manner is more expensive than usual both due to the amount of memory the saved geometry consumes and the extra work required to perform the extra sampling.
(Battlefield 3 G-Buffer)
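The split the article describes (a geometry pass that fills a G-buffer, then a lighting pass that never sees the geometry) can be sketched in a few lines of Python. The buffer layout and the trivial Lambert term here are illustrative assumptions, not any real engine's format:

```python
# Toy deferred pipeline: geometry pass fills a per-pixel G-buffer,
# then a separate lighting pass reads it back.

W, H = 4, 2

# Geometry pass: for each covered pixel store (albedo, normal_z, depth).
# The triangles themselves can be discarded afterwards - the lighting
# pass below never touches geometry, only this buffer.
gbuffer = [[None] * W for _ in range(H)]
for x in range(W):  # pretend one quad covers the top row of pixels
    gbuffer[0][x] = {"albedo": 0.8, "nz": 1.0, "depth": 0.5}

def lighting_pass(gbuffer, light_intensity):
    """Runs once per pixel, only on what is actually visible."""
    out = [[0.0] * W for _ in range(H)]
    for y in range(H):
        for x in range(W):
            px = gbuffer[y][x]
            if px is None:
                continue  # background pixel: nothing to light
            # trivial Lambert term against a light pointing along +z
            out[y][x] = px["albedo"] * max(px["nz"], 0.0) * light_intensity
    return out

lit = lighting_pass(gbuffer, 1.0)
print(lit[0][0], lit[1][0])  # 0.8 on the quad, 0.0 on the background
```

The MSAA problem the article mentions falls straight out of this sketch: by the time `lighting_pass` runs, there is only one sample per pixel left and no triangle edges to resample.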
For this reason developers have quickly been adopting post-process AA methods, primarily NVIDIA’s Fast Approximate Anti-Aliasing (FXAA). Similar in execution to AMD’s driver-based Morphological Anti-Aliasing, FXAA works on the fully rendered image and attempts to look for aliasing and blur it out. The results generally aren’t as good as MSAA (and especially not SSAA), but it’s very quick to implement (it’s just a shader program) and has a very small performance hit. Compared to the difficulty of implementing MSAA on a deferred renderer, this is faster and cheaper, and it’s why MSAA support for DX10+ games is anything but universal.
(AMD's Leo tech demo)
But what if there was a way to have a forward renderer with performance similar to that of a deferred renderer? That’s what AMD is proposing with one of their key tech demos for the 7000 series: Leo. Leo showcases AMD’s solution to the forward rendering lighting performance problem, which is to use a compute shader to implement light culling such that the compute shader identifies the tiles that any specific light will hit ahead of time, and then using that information only the relevant lights are computed on any given tile. The overhead for lighting is still greater than pure deferred rendering (there’s still some unnecessary lighting going on), but as proposed by AMD, it should make complex lighting cheap enough that it can be done in a forward renderer.
As AMD puts it, the advantages are twofold. The first advantage of course is that MSAA (and SSAA) compatibility is maintained, as this is still a forward render; the use of the compute shader doesn’t have any impact on the AA process. The second advantage relates to lighting itself: as we mentioned previously, deferred rendering doesn’t work well with complex materials. On the other hand forward rendering handles complex materials well, it just wasn’t fast enough until now.
Leo in turn executes on both of these concepts. Anti-aliasing is of course well represented through the use of 4x MSAA, but so are complex materials. AMD’s theme for Leo is stop motion animation, so a number of different material types are directly lit, including fabric, plastic, cardboard, and skin. The total of these parts may not be the most jaw-dropping thing you’ve ever seen, but the fact that it’s being done in a forward renderer is amazingly impressive. And if this means we can have good lighting and excellent support for real anti-aliasing, we’re all for it.
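The compute-shader light culling the article describes can be sketched on the CPU. This is a toy Python version, not AMD's actual Leo code: the 16-pixel tile size, the `Light` fields, and the circle/rectangle intersection test are all my own assumptions. It builds the per-tile light lists that the forward shading pass would then consume:

```python
from dataclasses import dataclass

TILE = 16  # screen-space tile size in pixels (assumed; 16x16 is a common choice)

@dataclass
class Light:
    x: float       # screen-space centre of the light's influence
    y: float
    radius: float  # screen-space radius of influence

def cull_lights(width, height, lights):
    """For every TILE x TILE screen tile, list the indices of lights that
    can touch it. Shading then evaluates only the short per-tile list
    instead of every light for every pixel."""
    tiles_x = (width + TILE - 1) // TILE
    tiles_y = (height + TILE - 1) // TILE
    grid = {}
    for ty in range(tiles_y):
        for tx in range(tiles_x):
            x0, y0 = tx * TILE, ty * TILE
            x1, y1 = min(x0 + TILE, width), min(y0 + TILE, height)
            hits = []
            for i, l in enumerate(lights):
                # closest point on the tile rectangle to the light centre
                cx = min(max(l.x, x0), x1)
                cy = min(max(l.y, y0), y1)
                if (cx - l.x) ** 2 + (cy - l.y) ** 2 <= l.radius ** 2:
                    hits.append(i)
            grid[(tx, ty)] = hits
    return grid

lights = [Light(8, 8, 10), Light(100, 100, 5)]
grid = cull_lights(64, 32, lights)
print(grid[(0, 0)])  # only light 0 reaches the tile at the origin
```

On the GPU each tile would map to a compute shader thread group, but the payoff is the same as here: because this is still a forward pipeline, geometry (and therefore MSAA) is untouched, and only the lighting cost shrinks.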
I know that having all that done real time is impressive tech wise but knowing you can get the same results by faking a lot of it makes it look blah. Samaritan looks better.
I get like 1 FPM on 6780x2 unless I lower the resolution from 1080p to 720p; then I get about 30-50 FPS.
Memory hog!
A nice demo, better than their usual monkey and toad demos...
Quote:
I don't know if this has been asked before, but is there any news on when the 1.5GB version of the 7970 will be released? I mean 3GB is nice but not worth the price.

Expect the 7950 SKUs to come with 1.5GB.
We managed to achieve a highest stable GPU overclock at 1050MHz, which is 250MHz over the stock frequency, an increase of 31%. We managed to get the memory up to 1500MHz, which is also 250MHz over stock, or an increase of 20%. We had to have the fan set manually to run at 50% which kept it cool enough to allow this. These are not the highest values allowable in Overdrive, but these are close. We definitely feel that with some Voltage tweaking and custom cooling we should be able to achieve 1.1GHz. For now, with the stock cooler and Voltages 1050MHz/6GHz is the final stable overclock.
As you can see, the overclock has made a significant difference in performance in every game. The overclocked HD 7950 is 26% faster than it was at stock frequencies in Batman. In BF3 the overclocked Radeon HD 7950 is also 26% faster than it was at stock frequencies. In Deus Ex the overclock yielded a 23% performance increase.
Our testing is clear, this Radeon HD 7950 overclocked well, and provided a significant performance improvement. Of course, ours is a reference video card, and we can't say that all retail cards will overclock similar to ours. As we evaluate more HD 7950 cards we will see how those overclock, especially with Voltage tweaking and custom cooling. This at least shows the potential that overclocking can have with the Radeon HD 7950.
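The percentages in the HardOCP quote check out; a quick sanity check in Python, with the stock clocks inferred from the stated 250MHz deltas:

```python
# Stock clocks inferred from the quote: 1050 - 250 = 800MHz core,
# 1500 - 250 = 1250MHz memory.
core_stock, core_oc = 800, 1050
mem_stock, mem_oc = 1250, 1500

core_gain = (core_oc - core_stock) / core_stock * 100  # 31.25 -> quoted as 31%
mem_gain = (mem_oc - mem_stock) / mem_stock * 100      # 20.0  -> quoted as 20%
print(f"core +{core_gain:.0f}%, memory +{mem_gain:.0f}%")

# GDDR5 transfers four bits per clock per pin, which is why the 1500MHz
# memory clock is quoted as a 6GHz effective data rate.
print(f"{mem_oc * 4 / 1000:.0f}GHz effective")
```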
Quote:
If we see the price drop to $399.99 or below in 60 days, we will change the award to a GOLD (from Silver).

:lol
Ryan (PCPer) addresses the pricing concern:
Putting the pressure on AMD now.
PCPer review: http://pcper.com/reviews/Graphics-Cards/AMD-Radeon-HD-7950-3GB-Graphics-Card-Review
Quote:
At $100 less than the Radeon HD 7970 3GB, the HD 7950 3GB actually seems like a pretty good buy; you can overclock the performance enough to ALMOST reach the reference performance of the more expensive card. Compared to the GeForce GTX 580 1.5GB average selling price of $499, the HD 7950 again looks like a great card considering it outperforms the NVIDIA option. I can't shake the feeling that AMD is doing a disservice to itself in the long run by keeping prices this high, though I understand WHY they are doing it.
The fact is that AMD is likely still capacity constrained by the 28nm process at TSMC, and pricing any of these cards lower might actually HURT the company's reputation. If the HD 7950 3GB were being released today for $349, I think just about every sane gamer on the planet would want one, but if AMD doesn't have the inventory to cater to that many buyers, what is the point of offering it at that price? It would cause anger and resentment from the very audience they are trying to cater to, so instead we see higher prices that allow AMD to make a bit more profit and temper demand until capacity catches up.

What if the SRP is $349 and, due to demand, retailers price it at $400+?
Quote:
No one, absolutely no one, should be even contemplating, let alone going out, to buy the 7900 series until the GK104/GTX 660 Ti arrives in April. The temporary pleasure will not be worth the pain in the wallet two months later.

As many GAFfers have already said - they'll enjoy the 7900 series for a few months, sell it, and go for the cheaper GK104 if it's truly worth it for them.
As to be expected, at this point in time AMD is mostly focusing on improving performance on a game-by-game basis to deal with games that didn't immediately adapt to the GCN architecture well, while the fact that they seem to be targeting common benchmarks first is likely intentional. Crysis: Warhead is the biggest winner here as minimum framerates in particular are greatly improved; we're seeing a 22% improvement at 1920, while at 2560 there's still an 11% improvement. Metro 2033 and DiRT 3 also picked up 10% or more in performance versus the release drivers, while Battlefield 3 has seen a much smaller 2%-3% improvement. Everything else in our suite is virtually unchanged, as it looks like AMD has not targeted any of those games at this time.
As one would expect, as a result of these improvements the performance lead of the 7970 versus the GTX 580 has widened. The average lead for the 7970 is now 19% at 1920 and 26% at 2560, with the lead approaching 40% in games like Metro that specifically benefited from this update. At this point the only game where the 7970 still seems to have trouble pulling well ahead of the GTX 580 is Battlefield 3, where the lead is only 8%.
Quote:
I think it's actually priced well right now given the competition. It's cheaper than the GTX 580 and clearly offers better performance, and then blows it away when overclocked. If you're doing a new build now, you'd be crazy not to get one. If you can wait, the 79xx series has room to drop for sure when the Nvidia cards come out. I hope for Nvidia's sake that it really is a lot more powerful, because AMD has a lot of flexing room both in clock speeds and in pricing. Not to mention the fab process will be a lot more mature, yielding more cards.
I'm very surprised how well it scales clock to clock versus the 7970.

Yup, clock for clock the 7950 is only 4% slower, so the disabled CU (cut-down shader count) is not such a big deal.
It's just exactly where it should be.
The reviews of the 7950 make it look like it is a pretty good card, but I'm still waiting to see what Kepler brings. I'd probably instabuy the 7950 if it was $100 cheaper though.