
ATI 58XX Preview - Media Stuff goes in here.

gillty

Banned
1 card - dual DVI, dual DisplayPort, and dual HDMI
demo02.jpg

Another card with 6x DisplayPort
demo01.jpg

demo05.jpg

demo03.jpg
 

Luigiv

Member
Impressive, though I've always hated multi-monitor displays. It would have been nicer if they used UDTVs (2160p) or UHDTVs (4320p) instead.
 

godhandiscen

There are millions of whiny 5-year olds on Earth, and I AM THEIR KING.
Luigiv said:
Impressive, though I've always hated multi-monitor displays. It would have been nicer if they used UDTVs (2160p) or UHDTVs (4320p) instead.
It wouldn't have the same impact in pictures. ATI wants you to notice that it is running on that many monitors as a showcase of how powerful these cards are.
 

artist

Banned
nib95 said:
Not really. Well, high end if you consider enthusiast level to be 5870X2 (which is only coming a few weeks later). But then I doubt the 5890 will be long after.

If the 5870 does launch at $399, I kind of hope Nvidia busts out with a card that is not only much better performance-wise (as unlikely as that sounds at the moment) but cheaper too, and steals a huge segment of the market back from ATI. I honestly can't stand it when companies get complacent like this.

I had my hopes up that this would be a 9700 Pro style affair. Not only ultra affordable, but seriously class leading. Seems like this new release will only tick the latter box. Even then, not by a whole lot given the price differences.
What kind of bullshit is this?

9700 Pro launched at $399. FACT. 5870 is not going to launch at a higher price. FACT. You have to consider that AMD cannot ALWAYS keep pricing at $299. There are a lot of factors behind it.
1. Only DX11 capable hardware
2. King of the hill performance
3. 40nm yields still improving
4. 5870 more complex than 4870 (for obvious reasons)

I mean, just looking at the board itself it's pretty obvious that the BoM for the 5870 is much higher than the 4870's.

AMD is still the underdog here, which is why their pricing is still very competitive (they are coming in $100 lower than Nvidia). And you are rooting for Nvidia? :lol So you like rebranding, like the G92 sold as the 8800, 9800, 9800+, GTS 250 ... guess what, you will get G21x parts rebranded as G300 now just because they won't have anything DX11. Maybe you also like it when a vendor COCKBLOCKs devs from adopting a new DX revision (as with DX10.1), and now you will hear Nvidia touting that DX10.1 does matter and DX11 can wait ...

You seriously have no grip on the situation.
 
MickeyKnox said:
That's because CE3 is basically a lateral movement from CE2, though there are some really nice improvements to the already insane lighting engine without a performance hit.

Much better optimisation + deferred lighting are pretty major upgrades if you ask me.
 

Minsc

Gold Member
vazel said:
I've been playing games released this year at 1080p fine on my 4850. I'm sure I'll be okay until next gen consoles come out since most big pc games are console ports.

Then you've been doing so with some combination of lower/no AA, under 60fps, and lower game settings.

I just looked up a 4890, and the benchmarks I found show that it can't run 1200p at >60fps on the vast majority of new games, especially console ports, like Durante has mentioned.

For people who want no sacrifices, a 4850 is not going to cut it. These new cards look great; I can't wait to see official benches.
 
nib95 said:
$400 is too much. ATI returning to overpriced antics. Are they getting complacent and out of touch again? I mean... this Eyefinity stuff is more directed at people with more money than sense. I mean... 9 screens... really? 3 I can just about understand...

The reason ATI have done well this last gen of cards is because of price-to-performance ratios. Now they're back to increasing prices again.

The 5870 should be launching at $299. $350 max. $399 is pushing it.

$400 is completely reasonable, and other than the RV770 it's the cheapest launch since the 9700 Pro. Compare this to the 4870 at launch: the die is 23% bigger, they are using twice as much RAM, and based on the fact that the 4850 is a salvage part and the horror stories about RV770, the yields are not great. They also have no competition and don't expect any for several months; with the 48xx they were not the fastest card and were going for market share. Be glad they aren't pulling a Nvidia and charging $650, which they easily could.
 

Luigiv

Member
godhandiscen said:
It wouldn't have the same impact in pictures. ATI wants you to notice that it is running on that many monitors as a showcase of how powerful these cards are.
Well, I guess you do have a point. I still find the borders around the individual screens distracting.
 

Zaptruder

Banned
I wonder how many of the people bitching about multi-monitor support have used, much less own, a triple monitor setup?

This shit is the new standard. Resolutions are getting too high to be workable on a single monitor.
 

Gestahl

Member
Minsc said:
Then you've been doing so with some combination of lower/no AA, under 60fps, and lower game settings.

I just looked up a 4890, and the benchmarks I found show that it can't run 1200p at >60fps on the vast majority of new games, especially console ports, like Durante has mentioned.

For people who want no sacrifices, a 4850 is not going to cut it. These new cards look great; I can't wait to see official benches.

So like do shitty console ports determine hardware upgrade needs now? Someone will have to enlighten me on what all these system crushing games with eye popping visuals supposedly are, Crysis be our name.
 

chespace

It's not actually trolling if you don't admit it
I'm still cruising along with my OC'ed Yorkfield and 4870x2. No game has really even pushed my rig except Crysis.

I suppose my upgrade path looks like Core i7 first, then a new GPU. Hopefully by that time, this sucker will be in the sub $400 range.
 

JudgeN

Member
Durante said:
If we're talking about 1920*1200 with good AA, the vast majority of console ports still fail to reach a stable 60 fps on current GPUs. Remember that this is around 4 times the image quality and (more than) twice the framerate of the console versions -- and it's what I would like to play every game at.

Finally someone besides me said it. Every time I read someone's "running all games at full 1080p with massive AA/AF at 60FPS" I die a little on the inside because I know they're lying or just playing Source games. Current gen cards are in no way, shape or form close to locking that shit.


Gestahl said:
So like do shitty console ports determine hardware upgrade needs now? Someone will have to enlighten me on what all these system crushing games with eye popping visuals supposedly are, Crysis be our name.

They aren't really system-crushing games, but as I said above you aren't getting a locked 60 FPS at 1080p with AA/AF in many PC games (unless they're Source games, some UE3 games, or MT Framework games). There is still plenty of room to grow, and it seems this new generation of cards might be able to do that. Now of course if you're not the kind of PC gamer that needs to run everything at max settings with AA/AF, then these current gen cards are fantastic; they get you really close.
 

Dr. Light

Member
JudgeN said:
Finally someone besides me said it. Every time I read someone's "running all games at full 1080p with massive AA/AF at 60FPS" I die a little on the inside because I know they're lying or just playing Source games. Current gen cards are in no way, shape or form close to locking that shit.

Well, this is two 4890s in Crossfire, which should be at least 85-90% of a 5870, just to give you an idea of what to expect:

(all at max settings, 1080p, 8xAA)
2jg8q4w.jpg

2ylahyp.jpg

35ioo4x.jpg


I just Fraps benchmarked an entire level in Crysis Warhead (still playing through it) and got just over 38fps (max settings at 1080p, no aa which kills Crysis) but the implementation of Crossfire in that game still needs more optimization, not to mention it's CPU constrained during the heavy action scenes.
 

marsomega

Member
nib95 said:
$400 is too much. ATI returning to overpriced antics. Are they getting complacent and out of touch again? I mean... this Eyefinity stuff is more directed at people with more money than sense. I mean... 9 screens... really? 3 I can just about understand...

The reason ATI have done well this last gen of cards is because of price-to-performance ratios. Now they're back to increasing prices again.

The 5870 should be launching at $299. $350 max. $399 is pushing it.

No, it is not pushing it. A 5870 that outperforms a GTX 295, at a cost of $400 versus the price of a GTX 295 that goes for $500-plus?



Also...

Driver version 8.66 (Catalyst 9.10) or above is required to support ATI Eyefinity technology and to enable a third display you require one panel with a DisplayPort connector.
ATI Eyefinity technology works with games that support non-standard aspect ratios which is required for panning across three displays.

From the AMD Eyefinity page. Pretty much confirming we're getting the cards before October.
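To put that "non-standard aspect ratios" requirement in perspective, here's the quick maths for a three-panel group (my own illustrative numbers, bezels ignored, not from AMD's page):

#include <cstdio>

// What the game actually sees when Eyefinity merges three 1920x1080 panels
// into one surface: a single very wide render target.
int main() {
    const int panels = 3, width = 1920, height = 1080;
    const int groupWidth = panels * width;              // 5760
    const double aspect = double(groupWidth) / height;  // ~5.33, i.e. 48:9
    std::printf("%dx%d, aspect %.2f:1\n", groupWidth, height, aspect);
    return 0;
}

So the engine has to render and place its HUD at roughly 5.33:1 instead of assuming 16:9, which is why games need explicit wide aspect ratio support.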
 

marsomega

Member
Dr. Light said:
Well, this is two 4890s in Crossfire, which should be at least 85-90% of a 5870, just to give you an idea of what to expect:

(all at max settings, 1080p, 8xAA)
http://i25.tinypic.com/2jg8q4w.jpg
http://i32.tinypic.com/2ylahyp.jpg
http://i25.tinypic.com/35ioo4x.jpg

I just Fraps benchmarked an entire level in Crysis Warhead (still playing through it) and got just over 38fps (max settings at 1080p, no aa which kills Crysis) but the implementation of Crossfire in that game still needs more optimization, not to mention it's CPU constrained during the heavy action scenes.


Not at all. Parts of the new GPU underwent major re-engineering (the scheduler). Some parts did not change, while others underwent major changes equivalent in magnitude to the R400 to R500 changes. CrossFire scaling alone is nowhere near what a GPU with double the power of a 4890 and no architecture changes (i.e. the equivalent of two 4890 GPUs as one GPU) would accomplish.
 

Binabik15

Member
That's what I was thinking.

How are the current CPUs compared to those new cards? Can they keep up while those babies munch on Crysis at unbelievable resolutions?

My brother wants me to build him a new rig and I really hope those cards will crush the current prices. "Only" 1080p will have to be enough for us :lol
 

Xavien

Member
Binabik15 said:
That's what I was thinking.

How are the current CPUs compared to those new cards? Can they keep up while those babies munch on Crysis at unbelievable resolutions?

My brother wants me to build him a new rig and I really hope those cards will crush the current prices. "Only" 1080p will have to be enough for us :lol

The higher the resolution, the less CPU dependent a game gets, so yeah, they'll be just fine.

Being CPU constrained at 1920x1200 or 1920x1080 is a serious possibility though.
 

Dr. Light

Member
marsomega said:
Not at all. Parts of the new GPU underwent major re-engineering (the scheduler). Some parts did not change, while others underwent major changes equivalent in magnitude to the R400 to R500 changes. CrossFire scaling alone is nowhere near what a GPU with double the power of a 4890 and no architecture changes (i.e. the equivalent of two 4890 GPUs as one GPU) would accomplish.

I'm aware they added new features, etc. I'm talking in terms of things like the number of transistors, stream processors, teraflops, etc.; that's roughly the comparison you're looking at. That's more realistic when talking about existing games; it's not like DX11 is going to be a huge factor when most people are barely touching DX10 as it is.
 

SapientWolf

Trucker Sexologist
MotherFan said:
So how do you know if this card would make you CPU constrained, or even if you are CPU constrained?
If you increase the resolution and don't get a drop in framerate (or get an increase) you are CPU limited.
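A crude way to apply that rule of thumb, sketched out (the two benchmark runs and the 5% tolerance are my own arbitrary choices, just to illustrate the idea):

// Benchmark the same scene twice, once at a low resolution and once at a high
// one. If pushing far more pixels barely moves the framerate, the GPU clearly
// has headroom and the CPU is the limiter.
bool likelyCpuLimited(double avgFpsLowRes, double avgFpsHighRes) {
    return avgFpsHighRes >= 0.95 * avgFpsLowRes;  // within ~5% of each other
}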
 

artist

Banned
venne said:
To be fair, the stock is riding a 12 month high.
I can say the same about AMD, yet I don't see such bailout moves there. Maybe it's just a grim internal outlook at Nvidia, or JHH really needs some urgent cash to buy a Ferrari. :lol

Dr. Light said:
I'm aware they added new features, etc. I'm talking in terms of things like the number of transistors, stream processors, teraflops, etc.; that's roughly the comparison you're looking at. That's more realistic when talking about existing games; it's not like DX11 is going to be a huge factor when most people are barely touching DX10 as it is.
Not again, it's delusional at best. From Anandtech's Revealing The Power of DirectX 11:

There is more than just the compute shader included in DX11, and since our first real briefing about it at this year's NVISION, we've had the chance to do a little more research, reading slides and listening to presentations from SIGGRAPH and GameFest 2008 (from which we've included slides to help illustrate this article). The most interesting things to us are more subtle than just the inclusion of a tessellator or the addition of the Compute Shader, and the introduction of DX11 will also bring benefits to owners of current DX10 and DX10.1 hardware, provided AMD and NVIDIA keep up with appropriate driver support anyway.

Many of the new aspects of DirectX 11 seem to indicate to us that the landscape is ripe for a fairly quick adoption, especially if Microsoft brings Windows 7 out sooner rather than later. There have been adjustments to the HLSL (high-level shader language) that should make it much more attractive to developers, the fact that DX10 is a subset of DX11 has some good transitional implications, and changes that make parallel programming much easier should all go a long way to helping developers pick up the API quickly. DirectX 11 will be available for Vista, so there won't be as many complications from a lack of users upgrading, and Windows 7 may also inspire Windows XP gamers to upgrade, meaning a larger install base for developers to target as well.

The bottom line is that while DirectX 10 promised features that could bring a revolution in visual fidelity and rendering techniques, DirectX 11 may actually deliver the goods while helping developers make the API transition faster than we've seen in the past. We might not see techniques that take advantage of the exclusive DirectX 11 features right off the bat, but adoption of the new version of the API itself will go a long way to inspiring amazing advances in realtime 3D graphics.

From DirectX 6 through DirectX 9, Microsoft steadily evolved their graphics programming API from a fixed function vehicle for setting state and moving data structures around to a rich, programmable environment enabling deep control of graphics hardware. The step from DX9 to DX10 was the final break in the old ways, opening up and expanding on the programmability in DX9 to add more depth and flexibility enabled by newer hardware. Microsoft also forced a shift in the driver model with the DX10 transition to leave the rest of the legacy behind and try and help increase stability and flexibility when using DX10 hardware. But DirectX 11 is different.

Rather than throwing out old constructs in order to move towards more programmability, Microsoft has built DirectX 11 as a strict superset of DirectX 10/10.1, which enables some curious possibilities. Essentially, DX10 code will be DX11 code that chooses not to implement some of the advanced features. On the flipside, DX11 will be able to run on down level hardware. Of course, all of the features of DX11 will not be available, but it does mean that developers can stick with DX11 and target both DX10 and DX11 hardware without the need for two completely separate implementations: they're both the same but one targets a subset of functionality. Different code paths will be necessary if something DX11 only (like the tessellator or compute shader) is used, but this will still definitely be a benefit in transitioning to DX11 from DX10.

Running on lower spec'd hardware will be important, and this could make the transition from DX10 to DX11 one of the fastest we have ever seen. In fact, with lethargic movement away from DX9 (both by developers and consumers), the rush to bring out Windows 7, and slow adoption of Vista, we could end up looking back at DX10 as merely a transitional API rather than the revolutionary paradigm shift it could have been. Of course, Microsoft continues to push that the fastest route to DX11 is to start developing DX10.1 code today. With DX11 as a superset of DX10, this is certainly true, but developer time will very likely be better spent putting the bulk of their effort into a high quality DX9 path with minimal DX10 bells and whistles while saving the truly fundamental shifts in technique made possible by DX10 for games targeted at the DX11 hardware and timeframe.

We are especially hopeful about a faster shift to DX11 because of the added advantages it will bring even to DX10 hardware. The major benefit I'm talking about here is multi-threading. Yes, eventually everything will need to be drawn, rasterized, and displayed (linearly and synchronously), but DX11 adds multi-threading support that allows applications to simultaneously create resources or manage state and issue draw commands, all from an arbitrary number of threads. This may not significantly speed up the graphics subsystem (especially if we are already very GPU limited), but this does increase the ability to more easily explicitly massively thread a game and take advantage of the increasing number of CPU cores on the desktop.

With 8 and 16 logical processor systems coming soon to a system near you, we need developers to push beyond the very coarse grained and heavy threads they are currently using that run well on two core systems. The cost/benefit of developing a game that is significantly assisted by the availability of more than two cores is very poor at this point. It is too difficult to extract enough parallelism to matter on quad core and beyond in most video games. But enabling simple parallel creation of resources and display lists by multiple threads could really open up opportunities for parallelizing game code that would otherwise have remained single threaded. Rather than one thread to handle all the DX state change and draw calls (or very well behaved and heavily synchronized threads sharing the responsibility), developers can more naturally create threads to manage types or groups of objects or parts of a world, opening up the path to the future where every object or entity can be managed by its own thread (which would be necessary to extract performance when we eventually expand into hundreds of logical cores).

The fact that Microsoft has planned multi-threading support for DX11 games running on DX10 hardware is a major bonus. The only caveat here is that AMD and NVIDIA will need to do a little driver work for their existing DX10 hardware to make this work to its fullest extent (it will "work" but not as well even without a driver change). Of course, we expect that NVIDIA and especially AMD (as they are also a multi-core CPU company) will be very interested in making this happen. And, again, this provides major incentives for game developers to target DX11 even before DX11 hardware is widely available or deployed.
And that's just multi-threading; there is also the Compute Shader, Tessellation, etc. Also keep in mind that DICE ported their entire Frostbite 2 engine to DX11 in under 3 hours.
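For anyone wondering what that multi-threading support actually looks like on the API side, here's a minimal sketch of the D3D11 deferred-context pattern the article is describing (my own illustration, not code from the article):

#include <d3d11.h>

// Each worker thread records its state changes and draw calls into its own
// deferred context, then bakes them into a command list.
ID3D11CommandList* recordOnWorkerThread(ID3D11Device* device) {
    ID3D11DeviceContext* deferred = nullptr;
    ID3D11CommandList* commands = nullptr;
    if (FAILED(device->CreateDeferredContext(0, &deferred)))
        return nullptr;
    // ... set state and issue Draw*() calls on 'deferred' here ...
    deferred->FinishCommandList(FALSE, &commands);
    deferred->Release();
    return commands;
}

// The main thread replays the recorded lists on the immediate context.
// Final submission is still serial, but the expensive recording was parallel.
void submit(ID3D11DeviceContext* immediate, ID3D11CommandList* commands) {
    immediate->ExecuteCommandList(commands, FALSE);
    commands->Release();
}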
 

godhandiscen

There are millions of whiny 5-year olds on Earth, and I AM THEIR KING.
nib95 said:
Not really. Well, high end if you consider enthusiast level to be 5870X2 (which is only coming a few weeks later). But then I doubt the 5890 will be long after.

If the 5870 does launch at $399, I kind of hope Nvidia busts out with a card that is not only much better performance-wise (as unlikely as that sounds at the moment) but cheaper too, and steals a huge segment of the market back from ATI. I honestly can't stand it when companies get complacent like this.

I had my hopes up that this would be a 9700 Pro style affair. Not only ultra affordable, but seriously class leading. Seems like this new release will only tick the latter box. Even then, not by a whole lot given the price differences.
What? ATI is releasing a card that is 20% stronger than Nvidia's premium dual card, the GTX 295 at $500. A product that also has DX11 support and can display across 6 monitors easily, and is still $100 less than its competition, and you call ATI complacent? How long did it take you to think of an argument to keep rooting for Nvidia? Think harder next time.
 

nubbe

Member
Games need to have their in-scene complexity raised quite a bit now.

Games run at 1080p with 60fps because they are designed around an X800-class chip. The bar has been raised quite a bit.
 

artist

Banned
Here is an awesome video of two of the DX11 goodies: http://vimeo.com/6122205

1. HDAO: Frame rate is more than doubled via Compute Shader
2. Tessellation: Parallax mapping drops the frame rate from 500 to 70; tessellation bumps this back up to 270 with higher detail and geometry. Also, the amount of detail possible via tessellation is mind blowing.
 
Dr. Light said:
I'm aware they added new features, etc. I'm talking in terms of things like the number of transistors, stream processors, teraflops, etc.; that's roughly the comparison you're looking at. That's more realistic when talking about existing games; it's not like DX11 is going to be a huge factor when most people are barely touching DX10 as it is.

The thing is, doubling the physical units on a single chip generally translates to a 100% increase across the board, not to mention stuff like improved AA algorithms and such. Dual chips get you 80% at maximum, with most games under 50%. The 5870 should range from 20%-60% better than 4890 Crossfire depending on the game.

irfan said:
And that's just multi-threading; there is also the Compute Shader, Tessellation, etc. Also keep in mind that DICE ported their entire Frostbite 2 engine to DX11 in under 3 hours.

The reason it was so easy is because DX11 is a superset of DX10. Basically all they had to do was copy-paste everything into a DX11 environment and they had a working engine. From there they just needed to modify and extend the existing code to add DX11 features like tessellation. This was impossible going from DX9 to DX10.
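That superset point shows up right at device creation, too. A rough sketch (mine, simplified) of how one DX11 code path can target DX10-class hardware through feature levels:

#include <d3d11.h>

// Ask for the best feature level the installed GPU supports. The same DX11
// code path then runs on DX10/10.1 parts, with DX11-only features gated off.
bool createDevice(ID3D11Device** dev, ID3D11DeviceContext** ctx,
                  D3D_FEATURE_LEVEL* achieved) {
    const D3D_FEATURE_LEVEL wanted[] = {
        D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1, D3D_FEATURE_LEVEL_10_0
    };
    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                   wanted, 3, D3D11_SDK_VERSION,
                                   dev, achieved, ctx);
    return SUCCEEDED(hr);
}

// Elsewhere: if (*achieved >= D3D_FEATURE_LEVEL_11_0) enable tessellation, etc.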
 

artist

Banned
TOAO_Cyrus said:
The reason it was so easy is because DX11 is a superset of DX10. Basically all they had to do was copy-paste everything into a DX11 environment and they had a working engine. From there they just needed to modify and extend the existing code to add DX11 features like tessellation. This was impossible going from DX9 to DX10.
I know, I'm just posting it again because people seem to write off DX11 as DX10 part II.
 
Minsc said:
Then you've been doing so with some combination of lower/no AA, under 60fps, and lower game settings.

I just looked up a 4890, and the benchmarks I found show that it can't run 1200p at >60fps on the vast majority of new games, especially console ports, like Durante has mentioned.

For people who want no sacrifices, a 4850 is not going to cut it. These new cards look great; I can't wait to see official benches.

Well that's just not true, especially the console ports part, considering even my GTX 260 can manage that just fine. My evidence is from actually playing the games on my rig, with a FRAPS counter running if necessary.
 

godhandiscen

There are millions of whiny 5-year olds on Earth, and I AM THEIR KING.
brain_stew said:
Well that's just not true, considering even my GTX 260 can manage that just fine. My evidence is from actually playing the games on my rig.
Try Assassin's Creed. There are console ports that cannot run at a locked 60fps with AA @ 1080p on a GTX 295 (from experience). So it would be worse on a 260, I guess.
 

M3d10n

Member
irfan said:
Here is an awesome video of two of the DX11 goodies: http://vimeo.com/6122205

1. HDAO: Frame rate is more than doubled via Compute Shader
2. Tessellation: Parallax mapping drops the frame rate from 500 to 70; tessellation bumps this back up to 270 with higher detail and geometry. Also, the amount of detail possible via tessellation is mind blowing.

IMO compute shading is the best thing in DX11. First, because you won't need a DX11 card to use it - DX10 and DX10.1 hardware will be able to use compute shaders (DX11 hardware has a few extra CS features). Second, because many effects can be done *much* faster using a compute shader than via traditional means, like kernel-based post processing (SSAO and its variants, for example), since data can be moved around in a more direct way. Coupled with multi-threaded rendering, it makes it very appealing for developers to write DX11 render paths for their games (even if they don't add any new graphics effects), because they can get some nice performance gains on existing hardware that way.
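For what it's worth, there's even an explicit capability check for that "CS on DX10 hardware" case. A small sketch using the D3D11 capability query (assuming a device has already been created):

#include <d3d11.h>

// Returns true if a DX10/10.1-class card (with DX11-era drivers installed)
// exposes the limited cs_4_x compute shader path.
bool supportsComputeOnDx10Hardware(ID3D11Device* device) {
    D3D11_FEATURE_DATA_D3D10_X_HARDWARE_OPTIONS opts = {};
    if (FAILED(device->CheckFeatureSupport(D3D11_FEATURE_D3D10_X_HARDWARE_OPTIONS,
                                           &opts, sizeof(opts))))
        return false;
    return opts.ComputeShaders_Plus_RawAndStructuredBuffers_Via_Shader_4_x != 0;
}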
 

Mrbob

Member
Not attempting to start a fight; I'm just curious to know when Nvidia plans on offering DX11 cards.

Also, the ebb and flow of the ups and downs in computer hardware amazes me. The CPU side still seems to lean towards Intel, but the switch in power between ATI and Nvidia throughout the years is insane. When I bought my 8800GT Nvidia was the top dog; now it looks like ATI might be trying to reclaim the throne. I love it. Better hardware at cheaper prices for all!
 

artist

Banned
M3d10n said:
IMO compute shading is the best thing in DX11. First, because you won't need a DX11 card to use it - DX10 and DX10.1 hardware will be able to use compute shaders (DX11 hardware has a few extra CS features). Second, because many effects can be done *much* faster using a compute shader than via traditional means, like kernel-based post processing (SSAO and its variants, for example), since data can be moved around in a more direct way. Coupled with multi-threaded rendering, it makes it very appealing for developers to write DX11 render paths for their games (even if they don't add any new graphics effects), because they can get some nice performance gains on existing hardware that way.
Well I'm glad that at least some of the GAFfers like you understand the benefits of DX11 on DX10 hardware. :)

The latest rumor (not mainstream yet) is that Nvidia has resurrected the GT212 because the GT300 needs more work (probably delayed further). The GT212 is GT200 on 40nm with DX10.1 compliance and is expected to fight the 5850. If true, I'm not sure how they are going to push DX10.1 over DX11... I would sell some Nvidia shares too.

Mrbob said:
Not attempting to start a fight; I'm just curious to know when Nvidia plans on offering DX11 cards.

Also, the ebb and flow of the ups and downs in computer hardware amazes me. The CPU side still seems to lean towards Intel, but the switch in power between ATI and Nvidia throughout the years is insane. When I bought my 8800GT Nvidia was the top dog; now it looks like ATI might be trying to reclaim the throne. I love it. Better hardware at cheaper prices for all!
GT300 = Nvidia's DX11 GPU. According to partner roadmaps, it's slated for Q1 2010. If you are in the market for a midrange part like the 8800GT, then you're out of luck, as GT300 derivatives are Q2 next year at the earliest.
 
godhandiscen said:
Try Assassin's Creed. There are console ports that cannot run at a locked 60fps with AA @ 1080p on a GTX 295 (from experience). So it would be worse on a 260, I guess.

AC was such a shitty game with fugly LOD issues that I couldn't stomach it long enough to care. That being said, performance was hardly poor, and incredibly close to a steady 60fps most of the time. It's only so bad comparatively because it was unplayable on the PS3 and very often sub-30fps on the 360 as well.

If the consoles can't manage the game in the first place, of course it's not going to translate quite as well; it was far from a stuttery mess either, though. I disabled some of the ugly shaders and post effects and got a solid 60fps, and tbh the game looked better for it imo.


Mrbob said:
Not attempting to start a fight; I'm just curious to know when Nvidia plans on offering DX11 cards.

Also, the ebb and flow of the ups and downs in computer hardware amazes me. The CPU side still seems to lean towards Intel, but the switch in power between ATI and Nvidia throughout the years is insane. When I bought my 8800GT Nvidia was the top dog; now it looks like ATI might be trying to reclaim the throne. I love it. Better hardware at cheaper prices for all!

It's very unlikely you'll be able to buy a DX11 part from Nvidia this year.


SapientWolf said:
If you increase the resolution and don't get a drop in framerate (or get an increase) you are CPU limited.

Please note that this will change from game to game, as well as from situation to situation within that game. As long as you can manage a decent enough frame rate, being CPU bottlenecked isn't all that bad. Take the example of Ghostbusters: it's a huge hog on the CPU because of its physics engine, and you won't be getting near a smooth 60fps framerate on any dual core.

However, it gives you the option to lock at 30fps, which a decent dualie will manage just fine. You still have a renderer that is able to run way in excess of 60fps, so you can jump into the config file and enable in-engine 2x2 supersampling for the most insanely gorgeous image quality. The CPU and GPU load was damn near equal on my rig after that, even though this was a game with such a highly complex physics engine and my CPU was far from cutting edge. The point is that you can always give the GPU some more work to do, so I often find it a little silly to talk about GPU bottlenecks.

As long as your CPU is giving you a framerate you're happy with, there's no reason to go and upgrade it, even if people are claiming you're "GPU bottlenecked"; just go ahead and give that GPU some more work. You can never have enough GPU power as far as I'm concerned; there's always something it can be put to use on.
 
Kaako said:
The 2GB version of the card would be best suited for 1080P with full AA would it not?

Even with triple buffering and high image quality settings, your framebuffer is still going to be (significantly) below 100MB at that resolution. You're way overestimating the memory demands of increasing resolutions. Sure, it may start to become an issue when you're running 3 x 1600p monitors, but at "normal" resolutions it's not what's eating up the majority of your VRAM on a 1GB card. 512MB cards may suffer as you could be talking a significant chunk there, but even still, it's far from what's taking up the majority of your video card's memory either.
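Quick back-of-the-envelope numbers to show why (my own estimates, ignoring driver padding and any MSAA multiplier on the render targets):

#include <cstdio>

// Rough framebuffer cost at 1920x1080: RGBA8 colour targets plus a D24S8
// depth/stencil buffer, triple buffered.
int main() {
    const double mb        = 1024.0 * 1024.0;
    const double colourBuf = 1920.0 * 1080.0 * 4.0 / mb;  // ~7.9 MB per buffer
    const double depthBuf  = colourBuf;                    // D24S8 is also 4 bytes/px
    const double total     = 3.0 * colourBuf + depthBuf;   // ~31.6 MB
    std::printf("~%.1f MB\n", total);                      // nowhere near 100 MB
    return 0;
}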

Oh and on the subject of DX11, I'm with the recent posters on this. It makes no sense to continue developing for DirectX 10 at this point; I fully expect the transition from DX10 to DX11 to be basically wholesale by the end of the year. Anyone with DX10/DX10.1/DX11 hardware is going to benefit from that switch, so why not make the change?
 

K.Jack

Knowledge is power, guard it well
My usual mobile gaming report. New details on the ATI Mobility Radeon 5000 Series:

A Chinese leak today has slipped many core details of AMD's next ATI notebook graphics platform. The Mobility Radeon HD 5000 series is expected to be a direct translation of the desktop models and should follow similar lines. At the top, the 5870, 5850 and 5830 will all have a full 1,600 shader units (effects processors) and should support CrossFire on gaming-oriented desktop replacements. The top two will support fast GDDR5 memory while the 5830 will need GDDR3.

The 5770, 5750 and 5730 will all be close parallels to the 5800 graphics chips, including shaders and memory, but won't support CrossFire. At least one of the GDDR5 models should have a 1.3GHz memory clock speed.

AMD's mainstream 5650 and 5600 don't have many details but should be more obviously feature-reduced compared to the higher-end video chipsets; the 5470, 5450 and 5430 will cater to the entry level and use a reduced 64-bit memory interface. They should respectively support GDDR3, regular DDR3 and DDR2 memory, with the speeds of each helping to dictate performance.

The finished products may not show until early 2010, or the season after the desktop cards are available.


fgfaces.gif


We are pleased...

More info @ Portable4Gamers

Aaaand now I need to sell my M860ETU :lol
 

SapientWolf

Trucker Sexologist
They're right in some respects. According to physics they would fall equally fast when thrown out the window. But that's a more likely prospect for the Nvidia card.
 

jett

D-Member
godhandiscen said:
Try Assassin's Creed. There are console ports that cannot run at a locked 60fps with AA @ 1080p on a GTX 295 (from experience). So it would be worse on a 260, I guess.

AC doesn't support AA in 1080p :p, and my lowly 4670 runs it at 30+ fps at that resolution. :p Surely a 4850 hits 60fps with no issues.
 

Mrbob

Member
I might go a little more high end with my next card. I want a system that can do Crysis at 60fps/1080p. That is my goal for when I finally build my home theater PC. By mid next year ATI will be onto DX11 refreshes, ahead of the curve compared with Nvidia.


Chittagong said:
So is this the graphics generation that will be dumbed down for next gen consoles, or is there one more round?

We got 75 million more pixels to go. :p
 
Chittagong said:
So is this the graphics generation that will be dumbed down for next gen consoles, or is there one more round?

Next generation consoles won't launch until 2012 (everyone in the know, including major engine programmers, keeps repeating this figure), so there's a long way to go yet.

I think what this demonstrates is that, really, the 3 teraflop region should be the base of what we can expect out of these consoles; even with a very conservative approach to technology, that's where you end up. A DX12-level card in the 3 teraflop performance range will be low end on the PC side by 2012 and absolutely below $100, so there's no reason you couldn't have a console based on that level of hardware. This generation the 360 got the equivalent of a high end card at its launch, and the PS3 a midrange one; hoping for just a low end PC-level card is hardly pushing any boundaries.

What we're seeing here is around 20x the performance level of RSX/Xenos, so we should take that as the base leap for next generation, really. They're going to be capable of some incredible things with that sort of power behind them.


jett said:
AC doesn't support AA in 1080p :p, and my lowly 4670 runs it at 30+ fps at that resolution. :p Surely a 4850 hits 60fps with no issues.

It does, just got to edit a simple config file.
 

Dr. Light

Member
brain_stew said:
Please note that this will change from game to game, as well as from situation to situation within that game. As long as you can manage a decent enough frame rate, being CPU bottlenecked isn't all that bad. Take the example of Ghostbusters: it's a huge hog on the CPU because of its physics engine, and you won't be getting near a smooth 60fps framerate on any dual core.

However, it gives you the option to lock at 30fps, which a decent dualie will manage just fine. You still have a renderer that is able to run way in excess of 60fps, so you can jump into the config file and enable in-engine 2x2 supersampling for the most insanely gorgeous image quality. The CPU and GPU load was damn near equal on my rig after that, even though this was a game with such a highly complex physics engine and my CPU was far from cutting edge. The point is that you can always give the GPU some more work to do, so I often find it a little silly to talk about GPU bottlenecks.

As long as your CPU is giving you a framerate you're happy with, there's no reason to go and upgrade it, even if people are claiming you're "GPU bottlenecked"; just go ahead and give that GPU some more work. You can never have enough GPU power as far as I'm concerned; there's always something it can be put to use on.

Do you even want to know the framerate that Flight Simulator X puts out at max settings on my rig (the one posted above)?
 

TheExodu5

Banned
brain_stew said:
Well that's just not true, especially the console ports part, considering even my GTX 260 can manage that just fine. My evidence is from actually playing the games on my rig, with a FRAPS counter running if necessary.

You'd have trouble running Mirror's Edge at 1200p with 4xAA for sure. It'd be hovering around that mark, most likely.

Dr. Light said:
Do you even want to know the framerate that Flight Simulator X puts out at max settings on my rig (the one posted above)?

Flight Sim is the most extreme example you could pick. No one has been able to run it at much more than 30fps for the past 4 years.
 

RayStorm

Member
nib95 said:
If the 5870 does launch at $399, I kind of hope Nvidia busts out with a card that is not only much better performance-wise (as unlikely as that sounds at the moment) but cheaper too,

I believe every one of us, as a customer, hopes for this.

I honestly can't stand it when companies get complacent like this.

Yeah, how dare they! Actually trying to make as much money off us customers, who try to spend as little as possible! Those bastards!
 