It wouldn't have the same impact in pictures. ATI wants you to notice it is running on that many monitors as a showcase of how powerful these cards are.

Luigiv said: Impressive, though I've always hated multi-monitor displays. It would have been nicer if they used UDTVs (2160p) or UHDTVs (4320p) instead.
What kind of bullshit is this?

nib95 said: Not really. Well, high end if you consider enthusiast level to be 5870X2 (which is only coming a few weeks later). But then I doubt the 5890 will be long after.
If the 5870 does launch at $399, I kind of hope Nvidia busts out with a card that is not only much better performance wise (as unlikely as that sounds at the moment) but cheaper too, and steals a huge segment of the market back from ATI. I honestly can't stand it when companies get complacent like this.
I had my hopes up that this would be a 9700 Pro style affair. Not only ultra affordable, but seriously class leading. Seems like this new release will only tick the latter box. Even then, not by a whole lot given the price differences.
MickeyKnox said:That's because CE3 is basically a lateral movement from CE2, though there are some really nice improvements to the already insane lighting engine without a performance hit.
vazel said:I've been playing games released this year at 1080p fine on my 4850. I'm sure I'll be okay until next gen consoles come out since most big pc games are console ports.
nib95 said: $400 is too much. ATI returning to overpriced antics. Are they slipping into complacency again? I mean... this Eyefinity stuff is more directed at people with more money than sense. I mean... 9 screens... really? 3 I can just about understand...
The reason ATI has done well this last gen of cards is its price-to-performance ratios. Now they're back to increasing prices again.
The 5870 should be launching at $299. $350 max. $399 is pushing it.
Well I guess you do have a point. Still find the borders around the individual screens distracting.

godhandiscen said: It wouldn't have the same impact in pictures. ATI wants you to notice it is running on that many monitors as a showcase of how powerful these cards are.
Minsc said:Then you've been doing so with some combination of lower/no AA, under 60fps, and lower game settings.
I just looked up a 4890, and benchmarks list that it can't run 1200p at >60fps in the vast majority of new games, especially console ports, like Durante has mentioned.
For people who want no sacrifices, a 4850 is not going to cut it. These new cards look great; I can't wait to see official benches.
Durante said:If we're talking about 1920*1200 with good AA, the vast majority of console ports still fail to reach a stable 60 fps on current GPUs. Remember that this is around 4 times the image quality and (more than) twice the framerate of the console versions -- and it's what I would like to play every game at.
Gestahl said:So like do shitty console ports determine hardware upgrade needs now? Someone will have to enlighten me on what all these system crushing games with eye popping visuals supposedly are, Crysis be our name.
JudgeN said: Finally someone besides me said it. Every time I read the "running all games at full 1080p with massive AA/AF at 60FPS" posts I die a little on the inside, because I know they're lying or just playing Source games. Current gen cards are in no way, shape or form close to locking that shit.
Driver version 8.66 (Catalyst 9.10) or above is required to support ATI Eyefinity technology, and to enable a third display you need one panel with a DisplayPort connector.
ATI Eyefinity technology works with games that support non-standard aspect ratios, which is required for panning across three displays.
Dr. Light said:Well, this is two 4890s in Crossfire, which should be at least 85-90% of a 5870, just to give you an idea of what to expect:
(all at max settings, 1080p, 8xAA)
http://i25.tinypic.com/2jg8q4w.jpg
http://i32.tinypic.com/2ylahyp.jpg
http://i25.tinypic.com/35ioo4x.jpg
I just Fraps-benchmarked an entire level in Crysis Warhead (still playing through it) and got just over 38fps (max settings at 1080p, no AA, which kills Crysis). The implementation of Crossfire in that game still needs more optimization, and it's CPU constrained during the heavy action scenes.
President and CEO of NVIDIA Corp. (NVDA) Jen Hsun Huang sells 260,475 shares of NVDA on 09/11/2009 at an average price of $16.07 a share.
Binabik15 said: That's what I was thinking.
How do the current CPUs compare to those new cards? Can they keep up while those babies munch on Crysis at unbelievable resolutions?
My brother wants me to build him a new rig and I really hope those cards will crush the current prices. "Only" 1080p will have to be enough for us :lol
marsomega said: Not at all. Parts of the new GPU underwent major re-engineering (the scheduler). Some parts did not change, while others underwent major changes equivalent in magnitude to the R400-to-R500 transition. CrossFire scaling alone is nowhere near what a GPU with double the power of a 4890 and no architecture changes (i.e. the equivalent of two 4890 GPUs as one GPU) would accomplish.
If you increase the resolution and don't get a drop in framerate (or get an increase), you are CPU limited.

MotherFan said: So how do you know if this card would make you cpu constrained, or even if you are cpu constrained?
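That rule of thumb can be sketched as a toy frame-time model (all numbers made up for illustration; a real test means watching a Fraps counter while you change resolution): the frame takes roughly as long as the slower of the CPU's fixed per-frame work and the GPU's resolution-scaled work.

```python
# Toy model of the "raise the resolution" test for spotting a CPU bottleneck.
# Numbers are hypothetical; a real test uses Fraps on a real game.

def frame_time_ms(cpu_ms, gpu_ms_per_mpix, width, height):
    """CPU and GPU work overlap, so the slower of the two sets the frame time."""
    megapixels = width * height / 1e6
    gpu_ms = gpu_ms_per_mpix * megapixels
    return max(cpu_ms, gpu_ms)

def fps(cpu_ms, gpu_ms_per_mpix, width, height):
    return 1000.0 / frame_time_ms(cpu_ms, gpu_ms_per_mpix, width, height)

# CPU-limited rig: 16 ms of game logic per frame, fast GPU.
low  = fps(16.0, 4.0, 1280, 720)   # GPU only needs ~3.7 ms -> CPU bound
high = fps(16.0, 4.0, 1920, 1200)  # GPU needs ~9.2 ms -> still CPU bound
# Framerate doesn't move when resolution goes up => you are CPU limited.
```

If the framerate had dropped between the two runs instead, the GPU side of the `max()` would have been the one setting the frame time.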
I can say the same about AMD, yet I don't see such bailout moves there. Maybe it's just a grim internal outlook at Nvidia, or JHH really needs some urgent cash to buy a Ferrari. :lol

venne said: To be fair, the stock is riding a 12 month high.
Dr. Light said: I'm aware they added new features, etc.; I'm talking in terms of things like number of transistors, stream processors, teraflops, etc. That's roughly the comparison you're looking at. That's more realistic when talking about existing games; it's not like DX11 is going to be a huge factor when most people are barely touching DX10 as it is.

Not again; that's delusional at best. From Anandtech's Revealing The Power of DirectX 11:

There is more than just the compute shader included in DX11, and since our first real briefing about it at this year's NVISION, we've had the chance to do a little more research, reading slides and listening to presentations from SIGGRAPH and GameFest 2008 (from which we've included slides to help illustrate this article). The most interesting things to us are more subtle than just the inclusion of a tessellator or the addition of the Compute Shader, and the introduction of DX11 will also bring benefits to owners of current DX10 and DX10.1 hardware, provided AMD and NVIDIA keep up with appropriate driver support anyway.
Many of the new aspects of DirectX 11 seem to indicate to us that the landscape is ripe for a fairly quick adoption, especially if Microsoft brings Windows 7 out sooner rather than later. There have been adjustments to the HLSL (high-level shader language) that should make it much more attractive to developers, the fact that DX10 is a subset of DX11 has some good transitional implications, and changes that make parallel programming much easier should all go a long way to helping developers pick up the API quickly. DirectX 11 will be available for Vista, so there won't be as many complications from a lack of users upgrading, and Windows 7 may also inspire Windows XP gamers to upgrade, meaning a larger install base for developers to target as well.
The bottom line is that while DirectX 10 promised features that could bring a revolution in visual fidelity and rendering techniques, DirectX 11 may actually deliver the goods while helping developers make the API transition faster than we've seen in the past. We might not see techniques that take advantage of the exclusive DirectX 11 features right off the bat, but adoption of the new version of the API itself will go a long way to inspiring amazing advances in realtime 3D graphics.
From DirectX 6 through DirectX 9, Microsoft steadily evolved their graphics programming API from a fixed function vehicle for setting state and moving data structures around to a rich, programmable environment enabling deep control of graphics hardware. The step from DX9 to DX10 was the final break in the old ways, opening up and expanding on the programmability in DX9 to add more depth and flexibility enabled by newer hardware. Microsoft also forced a shift in the driver model with the DX10 transition to leave the rest of the legacy behind and try and help increase stability and flexibility when using DX10 hardware. But DirectX 11 is different.
Rather than throwing out old constructs in order to move towards more programmability, Microsoft has built DirectX 11 as a strict superset of DirectX 10/10.1, which enables some curious possibilities. Essentially, DX10 code will be DX11 code that chooses not to implement some of the advanced features. On the flipside, DX11 will be able to run on down-level hardware. Of course, not all of the features of DX11 will be available, but it does mean that developers can stick with DX11 and target both DX10 and DX11 hardware without the need for two completely separate implementations: they're both the same but one targets a subset of functionality. Different code paths will be necessary if something DX11-only (like the tessellator or compute shader) is used, but this will still definitely be a benefit in transitioning to DX11 from DX10.
Running on lower spec'd hardware will be important, and this could make the transition from DX10 to DX11 one of the fastest we have ever seen. In fact, with lethargic movement away from DX9 (both by developers and consumers), the rush to bring out Windows 7, and slow adoption of Vista, we could end up looking back at DX10 as merely a transitional API rather than the revolutionary paradigm shift it could have been. Of course, Microsoft continues to push that the fastest route to DX11 is to start developing DX10.1 code today. With DX11 as a superset of DX10, this is certainly true, but developer time will very likely be better spent putting the bulk of their effort into a high quality DX9 path with minimal DX10 bells and whistles while saving the truly fundamental shifts in technique made possible by DX10 for games targeted at the DX11 hardware and timeframe.
We are especially hopeful about a faster shift to DX11 because of the added advantages it will bring even to DX10 hardware. The major benefit I'm talking about here is multi-threading. Yes, eventually everything will need to be drawn, rasterized, and displayed (linearly and synchronously), but DX11 adds multi-threading support that allows applications to simultaneously create resources or manage state and issue draw commands, all from an arbitrary number of threads. This may not significantly speed up the graphics subsystem (especially if we are already very GPU limited), but this does increase the ability to more easily and explicitly thread a game and take advantage of the increasing number of CPU cores on the desktop.
With 8 and 16 logical processor systems coming soon to a system near you, we need developers to push beyond the very coarse grained and heavy threads they are currently using that run well on two core systems. The cost/benefit of developing a game that is significantly assisted by the availability of more than two cores is very poor at this point. It is too difficult to extract enough parallelism to matter on quad core and beyond in most video games. But enabling simple parallel creation of resources and display lists by multiple threads could really open up opportunities for parallelizing game code that would otherwise have remained single threaded. Rather than one thread to handle all the DX state change and draw calls (or very well behaved and heavily synchronized threads sharing the responsibility), developers can more naturally create threads to manage types or groups of objects or parts of a world, opening up the path to the future where every object or entity can be managed by its own thread (which would be necessary to extract performance when we eventually expand into hundreds of logical cores).
The fact that Microsoft has planned multi-threading support for DX11 games running on DX10 hardware is a major bonus. The only caveat here is that AMD and NVIDIA will need to do a little driver work for their existing DX10 hardware to make this work to its fullest extent (it will "work" but not as well even without a driver change). Of course, we expect that NVIDIA and especially AMD (as they are also a multi-core CPU company) will be very interested in making this happen. And, again, this provides major incentives for game developers to target DX11 even before DX11 hardware is widely available or deployed.

And that's just multi-threading; there is also the Compute Shader, Tessellation, etc. Also keep in mind that DICE ported their entire Frostbite 2 engine to DX11 in under 3 hours.
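The per-thread recording the article describes can be sketched with a toy model (plain Python threads standing in for DX11-style deferred contexts; none of this is real Direct3D): worker threads each record a command list for their own group of objects in parallel, and a single immediate context then executes the lists in order, keeping final submission linear and synchronous.

```python
# Toy sketch of DX11-style deferred contexts: worker threads record command
# lists in parallel; one "immediate context" then executes them in order.
# Illustration of the threading model only, not real Direct3D code.
import threading

class DeferredContext:
    """Records draw commands on its own thread; nothing touches the GPU yet."""
    def __init__(self):
        self.commands = []

    def draw(self, obj):
        self.commands.append(f"draw {obj}")

    def finish(self):
        return self.commands  # the finished "command list"

def record(objects, out, index):
    ctx = DeferredContext()
    for obj in objects:
        ctx.draw(obj)
    out[index] = ctx.finish()

# Each thread manages a group of objects, as the article suggests.
groups = [["terrain", "water"], ["npc0", "npc1"], ["ui"]]
lists = [None] * len(groups)
threads = [threading.Thread(target=record, args=(g, lists, i))
           for i, g in enumerate(groups)]
for t in threads: t.start()
for t in threads: t.join()

# Immediate context: submission itself stays linear and synchronous.
submitted = [cmd for command_list in lists for cmd in command_list]
```

The point is that only the cheap final loop is serialized; all the per-object recording work happened concurrently.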
What? ATI is releasing a card that is 20% stronger than Nvidia's premium dual card, the $500 GTX 295. A product that also has DX11 support and can display across 6 monitors easily, and is still $100 less than its competition, and you call ATI complacent? How long did it take you to think of an argument to keep rooting for Nvidia? Think harder next time.

nib95 said: Not really. Well, high end if you consider enthusiast level to be 5870X2 (which is only coming a few weeks later). But then I doubt the 5890 will be long after.
If the 5870 does launch at $399, I kind of hope Nvidia busts out with a card that is not only much better performance wise (as unlikely as that sounds at the moment) but cheaper too, and steals a huge segment of the market back from ATI. I honestly can't stand it when companies get complacent like this.
I had my hopes up that this would be a 9700 Pro style affair. Not only ultra affordable, but seriously class leading. Seems like this new release will only tick the latter box. Even then, not by a whole lot given the price differences.
Dr. Light said:I'm aware they added new features, etc., I'm talking about in terms of things like number of transistors, stream processers, teraflops, etc. that's roughly the comparison you're looking at. That's more realistic when talking about existing games, it's not like DX11 is going to be a huge factor when most people are barely touching DX10 as it is.
irfan said: And that's just multi-threading; there is also the Compute Shader, Tessellation, etc. Also keep in mind that DICE ported their entire Frostbite 2 engine to DX11 in under 3 hours.
I know, I'm just posting it again because people seem to rubbish DX11 as DX10 part II.

TOAO_Cyrus said: The reason it was so easy is because DX11 is a superset of DX10. Basically all they had to do was copy-paste everything to a DX11 environment and they had a working engine. From there they just need to modify and extend the existing code to add DX11 features like tessellation. This was impossible going from DX9 to DX10.
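The superset point can be sketched like so (a hypothetical render path, not real D3D11 API calls): one codebase targets both generations, and the DX11-only stages are simply skipped when the hardware reports a lower feature level, so the DX10 path is literally a subset of the DX11 one.

```python
# Toy sketch of "one codebase, two feature levels": a single DX11-style
# render path that only enables DX11-only stages (tessellation) when the
# hardware reports feature level 11. Stage names are illustrative.
def render_path(feature_level):
    passes = ["input_assembler", "vertex_shader"]
    if feature_level >= 11.0:
        # DX11-only stages; skipped on down-level (DX10) hardware.
        passes += ["hull_shader", "tessellator", "domain_shader"]
    passes += ["pixel_shader", "output_merger"]
    return passes

dx10_path = render_path(10.0)  # same code, subset of functionality
dx11_path = render_path(11.0)
```

This is why the Frostbite port was mostly copy-paste: the existing DX10 logic already was valid DX11 logic, and only the new stages needed fresh code.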
Minsc said:Then you've been doing so with some combination of lower/no AA, under 60fps, and lower game settings.
I just looked up a 4890, and benchmarks list that it can't run 1200p at >60fps in the vast majority of new games, especially console ports, like Durante has mentioned.
For people who want no sacrifices, a 4850 is not going to cut it. These new cards look great; I can't wait to see official benches.
Try Assassin's Creed. There are console ports that cannot run at a locked 60fps with AA at 1080p on a GTX 295 (from experience). So it would be worse on a 260 I guess.

brain_stew said: Well that's just not true, considering even my GTX 260 can manage that just fine. My evidence is from actually playing the games on my rig.
irfan said:Here is an awesome video of two of the DX11 goodies: http://vimeo.com/6122205
1. HDAO: Frame rate is more than doubled via Compute Shader
2. Tessellation: Parallax mapping drops the frame rate from 500 to 70. Tessellation brings it back up to 270, with higher detail and geometry. Also, the amount of detail possible via Tessellation is mind blowing.
Well I'm glad that at least some of the GAFfers like you understand the benefits of DX11 on DX10 hardware.

M3d10n said: IMO compute shading is the best thing in DX11. First because you won't need a DX11 card to use it: DX10 and DX10.1 hardware will be able to use compute shaders (DX11 hardware has a few extra CS features). Second because many effects can be done *much* faster using a compute shader than via traditional means, like kernel-based post processing (SSAO and variants, as an example), since data can be moved around in a more direct way. Coupled with multi-threaded rendering, it makes it very appealing for developers to write DX11 render paths for their games (even if they don't add any new graphics effects), because they can get some nice performance gains on existing hardware that way.
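The kernel-based post-processing point can be made concrete with back-of-the-envelope fetch counting (an illustrative model, not shader code): a compute shader thread group can stage a tile plus its border ("apron") in shared memory once and let every thread reuse it, while a pixel shader re-fetches the whole kernel neighbourhood for every pixel.

```python
# Rough fetch-count model of why kernel post-processing (blur, SSAO-style
# sampling) benefits from compute shaders with group shared memory.
# Tile and kernel sizes are hypothetical.

def fetches_pixel_shader(tile, radius):
    # Every pixel in the tile reads its full (2r+1)^2 neighbourhood
    # straight from texture memory, with massive overlap between pixels.
    return tile * tile * (2 * radius + 1) ** 2

def fetches_compute_shader(tile, radius):
    # The thread group loads the tile plus its apron into shared memory
    # exactly once; all threads then sample from that staged copy.
    return (tile + 2 * radius) ** 2

naive = fetches_pixel_shader(8, 4)    # 8x8 tile, 9x9 kernel -> 5184 fetches
shared = fetches_compute_shader(8, 4)  # one 16x16 staged load -> 256 fetches
```

The exact win depends on cache behaviour on real hardware, but the overlap this model counts is the "data moved around in a more direct way" that M3d10n describes.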
GT300 = Nvidia's DX11 GPU. According to partner roadmaps, it's slated for Q1 2010. If you are in the market for a midrange part like the 8800GT, then you're out of luck, as GT300 derivatives are Q2 next year at the earliest.

Mrbob said: Not attempting to start a fight, curious to know when Nvidia plans on offering DX11 cards?
Also, the ebb and flow of computer hardware amazes me. The CPU side still seems to lean towards Intel, but the switch in power between ATI and Nvidia throughout the years is insane. When I bought my 8800GT Nvidia was the top dog; now it looks like ATI might be trying to reclaim the throne. I love it. Better hardware at cheaper prices for all!
godhandiscen said: Try Assassin's Creed. There are console ports that cannot run at a locked 60fps with AA at 1080p on a GTX 295 (from experience). So it would be worse on a 260 I guess.
Mrbob said:Not attempting to start a fight, curious to know when Nvidia plans on offering DX11 cards?
Also, the ebb and flow of computer hardware amazes me. The CPU side still seems to lean towards Intel, but the switch in power between ATI and Nvidia throughout the years is insane. When I bought my 8800GT Nvidia was the top dog; now it looks like ATI might be trying to reclaim the throne. I love it. Better hardware at cheaper prices for all!
SapientWolf said:If you increase the resolution and don't get a drop in framerate (or get an increase) you are CPU limited.
Kaako said:The 2GB version of the card would be best suited for 1080P with full AA would it not?
A Chinese leak today has spilled many core details of AMD's next ATI notebook graphics platform. The Mobility Radeon HD 5000 series is expected to be a direct translation of the desktop models and should have similar product lines. At the top, the 5870, 5850 and 5830 will all have a hefty 1,600 shader units (effects processors) and should support CrossFire on gaming-oriented desktop replacements. The top two will support fast GDDR5 memory while the 5830 will make do with GDDR3.
The 5770, 5750 and 5730 will all be close parallels to the 5800 graphics chips, including shaders and memory, but won't support CrossFire. At least one of the GDDR5 models should have a 1.3GHz memory clock speed.
AMD's mainstream 5650 and 5600 don't have many details but should be more obviously feature-reduced compared to the higher-end video chipsets; the 5470, 5450 and 5430 will cater to the entry level and use a reduced 64-bit memory interface. They should respectively support GDDR3, regular DDR3 and DDR2 memory, with the speeds of each helping to dictate performance.
The finished products may not show until early 2010, or the season after the desktop cards are available.
Nvidia said: Because we support GPU-accelerated physics, our $129 card that's shipping today is faster than their new RV870 (code name for the new AMD chips) that sells for $399.
godhandiscen said: Try Assassin's Creed. There are console ports that cannot run at a locked 60fps with AA at 1080p on a GTX 295 (from experience). So it would be worse on a 260 I guess.
Chittagong said:So is this the graphics generation that will be dumbed down for next gen consoles, or is there one more round?
jett said: AC doesn't support AA at 1080p, and my lowly 4670 runs it at 30+ fps at that resolution.
Surely a 4850 hits 60fps with no issues.
brain_stew said: Please note that this will change from game to game, as well as from situation to situation within a game. As long as you can manage a decent enough frame rate, being CPU bottlenecked isn't all that bad. Take the example of Ghostbusters: it's a huge hog on the CPU because of its physics engine, and you won't get near a smooth 60fps framerate on any dual core.
However, it gives you the option to lock at 30fps, which a decent dualie will manage just fine. Now you still have a renderer that is able to run way in excess of 60fps, so you can jump into the config file and enable in-engine 2x2 supersampling for the most insanely gorgeous image quality. The CPU and GPU load were damn near equal on my rig after that, even though this was a game with such a highly complex physics engine and my CPU was far from cutting edge. The point is that you can always give the GPU some more work to do, so I often find it a little silly to talk about GPU bottlenecks.
As long as your CPU is giving you a framerate you're happy with, there's no reason to go and upgrade it, even if people are claiming you're "CPU bottlenecked"; just go ahead and give that GPU some more work. You can never have enough GPU power as far as I'm concerned; there's always something it can be put to work on.
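The Ghostbusters trick is really just budget arithmetic (all timings below are hypothetical; measure your own with Fraps): at a 30fps lock the frame budget is about 33ms, so a renderer with lots of spare GPU headroom can absorb the 4x pixel cost of 2x2 supersampling and still fit alongside a heavy physics CPU load.

```python
# Budget arithmetic for the 30fps-lock + 2x2 supersampling trick.
# All timings are hypothetical examples.
FRAME_BUDGET_MS = 1000.0 / 30.0   # ~33.3 ms per frame at a 30fps cap

cpu_ms = 30.0                     # physics-heavy game, close to the budget
gpu_ms_native = 7.0               # renderer running "way in excess of 60fps"
gpu_ms_ssaa = gpu_ms_native * 4   # 2x2 supersampling = 4x the pixels

# CPU and GPU work overlap, so the slower side must still fit the budget.
fits_at_30fps = max(cpu_ms, gpu_ms_ssaa) <= FRAME_BUDGET_MS
```

With these example numbers the GPU at 28ms and the CPU at 30ms end up damn near equal, which is exactly the balanced load described above.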
brain_stew said:Well that's just not true, especially the console ports part, considering even my GTX 260 can manage that just fine. My evidence is from actually playing the games on my rig, with a FRAPS counter running if necessary.
Dr. Light said: Do you even want to know the framerate that Flight Simulator X puts out at max settings on my rig (the one posted above)?
nib95 said: If the 5870 does launch at $399, I kind of hope Nvidia busts out with a card that is not only much better performance wise (as unlikely as that sounds at the moment) but cheaper too.
I honestly can't stand it when companies get complacent like this.