
PC component performance degradation myth

Your first mistake was thinking that we'll listen to you over Carmack.
Do you ignore factual evidence in every facet of life or just in relation to gaming?

Anyways it's time for bed. I will be back tomorrow (16hrs from now) to post Alan Wake benchmarks, but I'm going to downclock my 8800GT to make it a more favorable comparison. So I need to do some research on that, and I want more info on PC settings for games that are equivalent to their console counterparts, e.g. is PS360 Tomb Raider the Low, Normal, or High PC setting, or somewhere in between? I really do want to make this a fair comparison. Peace
 

saunderez

Member
Do you ignore factual evidence in every facet of life or just in relation to gaming?
Do you think you know better than Carmack? He's a veteran in the field and has been making games as long as a lot of us have been playing them. You ran a flawed test and claimed you'd busted the myth. I'm still going to say Carmack is right unless you've got some more conclusive evidence. He does this shit for a living.
 
Do you ignore factual evidence in every facet of life or just in relation to gaming?

Anyways it's time for bed. I will be back tomorrow (16hrs from now) to post Alan Wake benchmarks, but I'm going to downclock my 8800GT to make it a more favorable comparison. So I need to do some research on that, and I want more info on PC settings for games that are equivalent to their console counterparts, e.g. is PS360 Tomb Raider the Low, Normal, or High PC setting, or somewhere in between? I really do want to make this a fair comparison. Peace

I do have access to an 8800 GT. I think I will downclock it by half, turn off some CPU cores, and downclock the processor to something which makes it benchmark similarly to some 2005-era processor. Then... see what happens.
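For reference, here's a rough sketch of what "downclock it by half" works out to, assuming Nvidia's stock reference clocks for the 8800 GT (a factory-overclocked card will differ, so these are ballpark targets only):

```python
# Rough downclock targets, assuming stock 8800 GT reference clocks
# (600 MHz core / 1500 MHz shader / 900 MHz GDDR3). Factory-OC cards differ.
stock_mhz = {"core": 600, "shader": 1500, "memory": 900}
target_mhz = {name: clock // 2 for name, clock in stock_mhz.items()}
print(target_mhz)  # {'core': 300, 'shader': 750, 'memory': 450}

# Memory bandwidth at the halved clock: 256-bit bus, GDDR3 is double data rate.
bandwidth_gbs = 2 * target_mhz["memory"] * 1e6 * (256 // 8) / 1e9
print(f"~{bandwidth_gbs:.1f} GB/s")  # ~28.8 GB/s, down from ~57.6 GB/s at stock
```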
Do you think you know better than Carmack? He's a veteran in the field and has been making games as long as a lot of us have been playing them. You ran a flawed test and claimed you'd busted the myth. I'm still going to say Carmack is right unless you've got some more conclusive evidence.
Carmack's quote is often used to talk about this most recent generation of consoles or GPU performance (when it's really talking about CPU performance under DX9).
 

Deepo

Member
I was the one who asked Carmack that question on Twitter. The reason I asked was because I felt that AMD going with Mantle might bring us back to the old 3DFX Glide days, where you had to have a card from a specific vendor. I thought that sounded like a bad idea. Turns out I was pretty much wrong there, but that tweet reply sure has been getting some mileage!
 

Nabbis

Member
Someone should test a 7600GT or 7800GT if you want to see how a PS3 GPU performs. The 8800GT is a first-generation DX10 GPU... It's also kinda hard to judge the performance since Cell was allegedly used as a bandage for the GPU.
 
I do have access to an 8800 GT. I think I will downclock it by half, turn off some CPU cores, and downclock the processor to something which makes it benchmark similarly to some 2005-era processor. Then... see what happens.

Carmack's quote is often used to talk about this most recent generation of consoles or GPU performance (when it's really talking about CPU performance under DX9).

Cool, look forward to the extra data.

Bonus pic of my PC..
8OBEX0X.jpg
 
If that's the case he's definitely right. Look what Mantle does for PCs with weak CPUs
Mantle applies to multi-GPU and to AMD CPUs mainly. So while Carmack's quote stands, it applies to certain scenarios only... and not all PCs or all game engines. Nvidia drivers, for example, don't take such a monumental performance hit under DX as AMD's do if one looks at similar benchmarks.

I think Mantle and lower-level APIs on PC are more interesting not due to low-end CPUs, but when one thinks about the level of draw calls and unique objects PC games could have under a low-level API. Something a game like the latest AssCreed could very much benefit from.
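To make the draw-call point concrete, here's a toy back-of-the-envelope model; the per-draw CPU costs are made-up illustrative figures, not measurements of DirectX, Mantle, or any real driver:

```python
# Toy model of draw-call submission overhead. The microsecond costs below are
# assumed illustrative numbers, not measured figures for any real API/driver.
FRAME_MS = 1000.0 / 60.0  # ~16.7 ms frame budget at 60 fps

def max_draws(cost_per_draw_us, submit_share=0.5):
    """How many draw calls fit if `submit_share` of the frame goes to submission."""
    return int(FRAME_MS * submit_share * 1000.0 / cost_per_draw_us)

print(max_draws(cost_per_draw_us=40.0))  # high-overhead path -> ~208 draws/frame
print(max_draws(cost_per_draw_us=5.0))   # low-level path     -> ~1666 draws/frame
```

Cut the per-draw cost and the number of unique objects you can submit per frame scales up roughly in proportion, which is exactly the kind of thing a game drowning in draw calls would notice.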
 

DonMigs85

Member
The fact that the 750 Ti can beat the PS4 and that the desktop Radeon 7870 also performs around or better than the PS4 shows that DX11 is actually fairly efficient relative to DX9/10 and OpenGL. And again, Nvidia's drivers are so well coded that they don't really need anything like Mantle, it seems.
Oh and curse the weak Jaguar cores
 
Do the OP and his defenders think they know better than John friggin CARMACK???? LMAO

Also how convenient to use a single game and an 8800GT, which isn't even the same gen and performance tier as the PS360 GPUs.
 
Do the OP and his defenders think they know better than John friggin CARMACK???? LMAO

Also how convenient to use a single game and an 8800GT, which isn't even the same gen and performance tier as the PS360 GPUs.

God I'm still awake! Read the thread. I've done two games and will do more tomorrow with my 8800GT downclocked. See you then...
 

DonMigs85

Member
Another question: back in the old days would the GeForce 3 Ti 500 always outperform the Xbox? Back then I think OS/API overhead may have been proportionately greater.
 

coastel

Member
The fact that the 750 Ti can beat the PS4 and that the desktop Radeon 7870 also performs around or better than the PS4 shows that DX11 is actually fairly efficient relative to DX9/10 and OpenGL. And again, Nvidia's drivers are so well coded that they don't really need anything like Mantle, it seems.
Oh and curse the weak Jaguar cores

Multiplat games do not get the best out of the system, and it's one year in. This is gonna bite you in the ass in a year or 2. So you think lesser hardware is gonna beat better hardware? That's no better than the people claiming the 2x performance.
 

Valkyria

Banned
OP, how long have you been playing on PC? Anyone who has been for the last 20 years knows all too well what Carmack is saying. It has happened again and again since 3D cards were introduced.
 

Serandur

Member
Mantle applies to multi-GPU and to AMD CPUs mainly. So while Carmack's quote stands, it applies to certain scenarios only... and not all PCs or all game engines. Nvidia drivers, for example, don't take such a monumental performance hit under DX as AMD's do if one looks at similar benchmarks.

I think Mantle and lower-level APIs on PC are more interesting not due to low-end CPUs, but when one thinks about the level of draw calls and unique objects PC games could have under a low-level API. Something a game like the latest AssCreed could very much benefit from.

I very much look forward to DX12 as a standard for situations like AC or, better yet, Total War. But on the note of low-level APIs only truly being an issue for CPUs and draw calls: despite that advantage, the consoles aren't faring too well, with 20-25 FPS in AC Unity vs 60 FPS being borderline attainable with a Sandy, Ivy, or Haswell quad-core. I do indeed believe low-level APIs are beneficial for CPU performance, but put into the context of matching the Jaguar CPUs in the X1/PS4, it's not remotely an issue because of just how weak those CPUs are.



Additional note for other posts in the thread: Did Carmack even remotely elaborate on that quote? No, he didn't, and no common engineering sense, benchmark comparison, or any other actual data or explanation beyond that very brief, out-of-context tweet even remotely suggests that such a blanket 2x statement could be accurate. But please, go ahead and explain how mature, modern APIs for the world's source of high-performance hardware and software advancement (with a clear focus for quite some time now on efficiency), APIs relied on for much of the world's most advanced professional, scientific, and financial work, are so ridiculously and across-the-board inefficient compared to a cobbled-together, mainstream budget electronic box, despite contrary benchmarks and common sense.
 

DonMigs85

Member
Multiplat games do not get the best out of the system, and it's one year in. This is gonna bite you in the ass in a year or 2. So you think lesser hardware is gonna beat better hardware? That's no better than the people claiming the 2x performance.
Let's see if next year's COD game looks and performs better on PS4 than the 750 Ti. I'm fully expecting Uncharted 4 to blow us away too
 

astraycat

Member
Take some video of all settings for a real comparison.

Some screenshots for a pseudo-comparison, at least. I notice there aren't any of either.
 

coastel

Member
Let's see if next year's COD game looks and performs better on PS4 than the 750 Ti. I'm fully expecting Uncharted 4 to blow us away too

Depends on the port again. The problem is we may never know how well optimised a multiplat is for consoles compared to PC. You could be right, but I just can't see how Sony could mess up the calculations so their hardware can't run at what their numbers say.
 

DonMigs85

Member
(Compared to DirectX on AMD GPUs which is up to 75% less CPU efficient than on Nvidia GPUs)
Exactly, Nvidia never needed anything like Mantle and DX12 may just widen the gap further.
You don't think there's any political shenanigans though right?
 

Durante

Member
By the way OP, good job on doing this. Sadly, I've noticed that sometimes exactly the people who should take note are also some of the most likely to simply ignore data points.

You don't think there's any political shenanigans though right?
My favourite game for demonstrating the CPU efficiency gap in DirectX is Alien:Isolation, which is AMD sponsored.
 

Ivan

Member
Try Wolfenstein or The Evil Within... games like that.

And that PC does outperform consoles in older games, but it will be completely crushed in newer ones like the two I mentioned. There are more, of course...
 
By the way OP, good job on doing this. Sadly, I've noticed that sometimes exactly the people who should take note are also some of the most likely to simply ignore data points.

My favourite game for demonstrating the CPU efficiency gap in DirectX is Alien:Isolation, which is AMD sponsored.

Yeah, it's pretty embarrassing, especially with all that wasted money on Mantle... Luckily they weren't cheap on the VRAM, so I'm still glad I went with the 280X instead of the GTX 770 when I last upgraded my PC, but my next GPU will most likely be NVIDIA again.
 

DonMigs85

Member
My favourite game for demonstrating the CPU efficiency gap in DirectX is Alien:Isolation, which is AMD sponsored.
And it doesn't matter whether you use an AMD CPU right? Even an Nvidia GPU paired with an AMD CPU? Maybe it also has something to do with that Intel compiler boost - the infamous "AMD Crippler"?
 

SURGEdude

Member
Another thing for people using older rigs to consider is how much leftover junk and improperly removed/installed/updated software and registry entries even the most careful user is likely to accumulate over many years. Especially now, in the era of auto-update everything, the amount of turnover in the system folder is quite high, and uninstallers often do a poor job of fully removing software. And that is from reputable companies; the problem is vastly worse if you share your system with people who install things randomly and assume that even if they install junk they can just uninstall it and all is well again.

And because of the increased storage and CPU speeds, installed apps are growing in size and footprint, storing or caching their own large support files that can sometimes put a lot of strain on something like a 5-year-old cheap consumer mechanical hard drive. Heck, to this day on 8.1, if the OS decides to spin up one of my half dozen internal or external drives just to index files for search, everything in Explorer grinds to a sudden pause as I hear the drive spin up from a dead sleep, no matter what other drive, folder, or task I'm working on.

I'm sure there is a legacy need for it to work that way, and it is only a minor annoyance. But like the registry and filesystem, if they could start off fresh things would be done differently, so awareness and managing free space, fragmentation, and the registry are likely here to stay even if they aren't as apparent.
 

hodgy100

Member
what about this example then?
https://www.youtube.com/watch?v=rJ-24T7W61A

The 8600GT 256MB was an awful card even when it released, being less powerful than the 7800GT, yet here it is running DmC: Devil May Cry at 30fps (without Fraps) at a higher resolution than either of the console versions.

And here is Sonic Generations on an 8600GT https://www.youtube.com/watch?v=fiuWRMpu7x0
"20 FPS with Fraps running.
40 FPS without Fraps running."

And BF4 https://www.youtube.com/watch?v=Xy2aHaVnSi8
(though to be fair it's being run at 800x600, and the PC version is very different from the 360/PS3 version).

PCs have been playing console games at much, much better settings for years now. You have to remember that most 360/PS3 games run at 720p (if that) at 30fps; it is fairly easy to get comparable performance with low-end parts.
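A quick bit of arithmetic backs that up. Ignoring everything except resolution and framerate (a big simplification), a typical last-gen console target pushes far fewer pixels per second than the usual PC target:

```python
# Raw pixel throughput comparison; ignores everything except resolution and
# framerate, so it's only a rough intuition, not a benchmark.
def pixels_per_second(width, height, fps):
    return width * height * fps

console_target = pixels_per_second(1280, 720, 30)   # common 360/PS3 target
pc_target = pixels_per_second(1920, 1080, 60)       # common PC target

print(console_target, pc_target)             # 27648000 vs 124416000
print(round(pc_target / console_target, 1))  # ~4.5x more pixels per second
```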
 

hohoXD123

Member
Would be interesting to see roughly what settings the PS3/360 are running at for Tomb Raider, could help with the comparisons.
 

Blinck

Member
I had that graphics card and it's a beast. I still have it in an old PC in the house and I shall never get rid of it!
I still think that what Carmack said makes sense. I can't see my GTX 670 running something like Uncharted 4 at 1080p 60fps, unless the game was developed directly for it like it is for the PS4.

Quick curious question:

For you guys (like me) that don't have the most high-end card, do you prefer to lower your settings a bit to get 1080p 60fps or would you rather have a fluctuating framerate but at the highest settings?
 

M3d10n

Member
Carmack said a console can run a game 2x faster than equal PC hardware. On Twitter. This isn't a scientific axiom. You can't turn that into a reversible math formula and make the comparison using 2x "faster" hardware, because establishing what "2x faster" is is not that simple.

This isn't DBZ where there's a single "power" measurement. A GPU is not a linear engine. It has a lot of hardware that performs several tasks, each one with its own upper limits. How many texels it can read, how many vertex/pixel shader instructions it can run, how long each different shader instruction takes to run, how many polygons it can assemble, the latency for fetching uncached texture data, how big the texture cache is, how many pixels it can write to the framebuffer, what the blending performance is, how fast it can perform depth/stencil tests... all of that has an impact on how a game will run, and that's not even getting into the API/driver side of things (command queuing strategy, the penalty for switching render states, etc.).

When Carmack said a console can run a game 2x faster than equal hardware, he wasn't talking about magically doing the same thing, but faster. No amount of "coding to the metal" will increase fillrate or make the GPU run shaders faster. It's about doing things differently (or not doing things at all) and finding a way for it to still look good. In a console the ceiling(s) are always in the same place, so it's possible to figure out what the bottlenecks are and what you should do to avoid them. It's about smoke and mirrors.

When Carmack said a console can run a game 2x faster than equal hardware, he was talking about equal hardware, which the GeForce 8800GT definitely isn't. The GF8 was a major breakthrough in GPU architecture, especially when it came to shader performance. It was also a DirectX 10 GPU, which brought forth several advancements to reduce DX9 draw call costs, like constant and state buffers. Its performance is in an entirely different league than the PS3 and 360.

Finally, when Carmack said a console can run a game 2x faster than equal hardware, he was talking about the PS3 and the 360. He was not postulating an axiom that applies to all hardware ever. Current-gen hardware actually lost some of last gen's advantages: they have drivers, need to virtualize GPU resources, and are stuck with subpar general-purpose x86 CPUs instead of SIMD number crunchers.
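To put "entirely different league" into rough numbers, here's a quick sketch from commonly cited public specs; the figures are approximate and cover only two of the many ceilings listed above, so treat it as an intuition aid rather than a verdict:

```python
# Back-of-the-envelope pixel fillrate and memory bandwidth from commonly
# cited public specs. Approximate values, and only two of the many possible
# bottlenecks a real game would hit.
gpus = {
    # name: (ROPs, core MHz, memory bus bits, effective memory MT/s)
    "PS3 RSX (approx.)": (8, 500, 128, 1300),
    "GeForce 7800 GTX":  (16, 430, 256, 1200),
    "GeForce 8800 GT":   (16, 600, 256, 1800),
}

for name, (rops, core_mhz, bus_bits, mem_mts) in gpus.items():
    fillrate_gpix = rops * core_mhz / 1000.0               # Gpixels/s
    bandwidth_gbs = mem_mts * 1e6 * (bus_bits / 8) / 1e9   # GB/s
    print(f"{name:18s} ~{fillrate_gpix:.1f} GP/s fill, ~{bandwidth_gbs:.1f} GB/s bandwidth")
```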
 

Unai

Member
Quick curious question:

For you guys (like me) that don't have the most high-end card, do you prefer to lower your settings a bit to get 1080p 60fps or would you rather have a fluctuating framerate but at the highest settings?

I've just bought a G-Sync monitor, so fluctuating framerate is not as bad as it used to be. I'm ok with it now.
 
Carmack said a console can run a game 2x faster than equal PC hardware. On Twitter. This isn't a scientific axiom. [...]

Very good and interesting post. Thank you.
 
As the console generation goes on, the standards just get lower and lower in regards to performance and IQ settings; we saw this last time with really awful versions like Far Cry 3 and BF4. What people don't realize is that if you drop your settings to match what the consoles are doing, then hardware that matched or outperformed them originally is still going to do so all the way through the generation. This was true last gen, has been proven with benchmarks, and will be true this generation.

The "real reason" your cards don't last through an entire generation is that your standards don't lower, and if you can spend $100-200 every few years and get a significantly better experience in FPS and IQ across the board, then that is what people choose to do, rather than ride out a card for 8 years. Who would do that? Consoles don't have a choice; they are stuck with their set hardware and have to make sacrifices as it gets long in the tooth.
 
Very good and interesting post. Thank you.

Except that it plays fast and loose with the facts. Most comically, stating that the 8800GT was a DX10 GPU and benefited from reduced draw calls, which is only true if the game was actually made using DirectX 10 and all the optimizations it offered.
Spoiler: that rarely happened until DX11 came along, and the majority of popular games from the last console generation were DX9.
 

Rafterman

Banned
The last-gen consoles' major bottleneck was RAM in the end. I've seen many of these PC comparisons over the years, and the fact that they use several times the system RAM, plus a chunk of VRAM on top, is often glossed over. How would a PC with only 512MB of total RAM cope?

Why are people ignoring this point when it's probably the most relevant in the thread?

I don't see any myth busting going on here, I just see vastly superior hardware performing superior...shocking!
 

SmokedMeat

Gamer™
The last-gen consoles' major bottleneck was RAM in the end. I've seen many of these PC comparisons over the years, and the fact that they use several times the system RAM, plus a chunk of VRAM on top, is often glossed over. How would a PC with only 512MB of total RAM cope?

We never see true like for like comparisons. I'd love to see a PC with similar 360 components and 512mb of RAM run Titanfall.
 
The PS3 GPU is very similar to a 256mb 7800gtx, except with a 128-bit memory bus instead of 256 and half the render output units disabled. Here are some comparisons with the 8800gt:

http://www.tomshardware.com/charts/gaming-graphics-charts-2008-q3/Assassins-Creed-v1.02,735.html

8800gt 57.8
7800gtx 15.5

http://www.tomshardware.com/charts/gaming-graphics-charts-2008-q3/Mass-Effect,771.html

8800gt 85.3
7800gtx 19.6

http://www.tomshardware.com/charts/gaming-graphics-charts-2008-q3/Call-of-Duty-4-v1.6,744.html

8800gt 81.6
7800gtx 21.9

http://www.tomshardware.com/charts/gaming-graphics-charts-2008-q3/Crysis-v1.21,748.html

8800gt 25.7
7800gtx 5.7

It's not even close. Almost 5x better in some of these, and remember the 7800gtx is better than the PS3. This is only taking the GPU into account, but really, nobody should be surprised to see the 8800gt blowing the PS3 away.
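Working the ratios out from those quoted framerates (nothing more than division):

```python
# Speedup ratios computed directly from the framerates quoted above.
results = {
    "Assassin's Creed": (57.8, 15.5),
    "Mass Effect":      (85.3, 19.6),
    "Call of Duty 4":   (81.6, 21.9),
    "Crysis":           (25.7, 5.7),
}
for game, (fps_8800gt, fps_7800gtx) in results.items():
    print(f"{game:17s} {fps_8800gt / fps_7800gtx:.1f}x")  # 3.7x, 4.4x, 3.7x, 4.5x
```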
 
I always assumed that Carmack quote was referring to how things run under the hood, how devs can make things more efficient etc., rather than literally 2x the powa! 60 FPS instead of 30 FPS!
 
I don't understand what's the problem with my PC, OP... I've got way better hardware than yours but can't get the fps you're getting with Tomb Raider...

GTX 660
16GB RAM
i7 CPU @ 3.4GHz

With ultra settings, without tessellation, I only get 50 fps; I have to change some settings to always get 60 fps.

Maybe reinstalling it completely would help? My PC seems slow sometimes (but I can still run some games like Alien at 60 fps and The Evil Within at 45 fps).

Yeah, sorry, when it comes to PC I'm a real noob (even if it's the thing I play the most).
 

cpooi

Member
I don't understand what's the problem with my PC, OP... I've got way better hardware than yours but can't get the fps you're getting with Tomb Raider... [...]

If you are supersampling, stop it. Also, disable TressFX.
 

danwarb

Member
Some bad scientific method

The point, I think, is that good old GPUs don't suddenly become obsolete. An 8800GT still performs well relative to the previous console generation, as you might expect. If you've got a decent CPU now, it's already faster than the PS4/X1 CPU to a much greater extent than in the OP's comparison. A 280 or 760 or whatever will run console ports with significantly higher image quality and frame rate throughout.


I'm guessing that games that are super optimized for slow x86 CPUs and familiar AMD GPUs would probably do alright on PC too. It's not like there's weirdness like CELL in there now.
 
The PS3 GPU is very similar to a 256mb 7800gtx, except with a 128-bit memory bus instead of 256 and half the render output units disabled. Here are some comparisons with the 8800gt: [...]

It's not even close. Almost 5x better in some of these, and remember the 7800gtx is better than the PS3. This is only taking the GPU into account, but really, nobody should be surprised to see the 8800gt blowing the PS3 away.
Those are framerates? Obv not at console settings though.

I think the ultimate test would be that 7800 GTX in Crysis 2 using the console Cvars to mimic the ps360 at their respective rez's.
 

derExperte

Member
The Nvidia 8800 GT was an amazing card. The only reason why I replaced it is because my card actually died.

Many 8800GTs did die sooner (mine lasted six months) or later; they were the 360s of graphics cards, unfortunately.

Single slot cooler... yikes!

The issue was more some sort of production fault which affected quite a few batches but Nvidia decided (unofficially) to sell them anyway and deal with the RMAs instead.
 

bj00rn_

Banned
Carmack said a console can run a game 2x faster than equal PC hardware. On Twitter. This isn't a scientific axiom. [...]

You indirectly explained very well why the Carmack quote is often unwittingly used as a red herring around this place. I admit the OP was somewhat unclear, but I also kinda feel that your post partly supports his apparent theory (if I understood him) that the Carmack quote isn't as directly usable to point out "PC inefficiency" across the board as some think.
 