
Are the console GPUs this cycle really underpowered?

2San

Member
Perhaps we could compare with whatever the average PC GPU is, using Steam's hardware survey data for that?
Some people use a calculator to play Counter-Strike though. :p

The most common parts are (from Steam):
Windows 7 64-bit
8 GB RAM (system RAM)
2.3 GHz to 2.69 GHz (2 CPU cores), mostly Intel
NVIDIA GeForce GTX 560 Ti (technically the Intel HD Graphics 3000 is more common, but that's an integrated GPU that comes with the more popular CPUs)
1 GB VRAM
 

StevieP

Banned
PS4 basically uses a high-end GPU of 2010 (GTX 570), which is good enough.

Wh... what?

While next-gen consoles can't rival high-end rigs, it's a certainty that they are still far above the average gaming PC in every way, which is why it's a good thing that consoles remain the common denominator.

For those who value power above all (and there are many of you on GAF), I don't see how having a console as a common denominator is positive at all. The power focus should mean developers target the highest end and scale down for everyone else.

That's not normally economically feasible, but we're not talking economics here. Not saying I agree with that way of thinking, but if you want power you don't game multiplats on consoles at all.

Why even try to compare a console GPU with a PC GPU?

Closed hardware, near 100% efficiency, coded to the metal, games coded to one specific hardware config

Vs

Open, modular, starved hardware, high overheads, ~50% efficiency, games coded for compatibility across multiple hardware configs

Ugh... c'mon dude.
 
I think using the Steam hardware survey isn't that great of a comparison unless you compare it to the average console. :p

It might make sense to compare the average contemporary dedicated graphics card but that's really a different conversation.
 
Interesting post, but if anything it seems to underscore the differences to me.

Yup. I applaud the OP's effort, but the comparison is a bit flawed in that it doesn't take into account factors such as the eDRAM or the unified shader architecture of the Xenos chip.

So, not only was the Xenos chip competing in terms of FLOPS with the high end of PC graphics, it was also ahead in architecture. The PS4 GPU competes with last year's mid-range cards and it's nothing special techwise.
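(Aside: the peak-FLOPS figures being compared here come from a simple shader-count times clock calculation. Here is a rough sketch using the commonly cited specs for Xenos and the PS4 GPU; treat the exact ALU counts and clocks as assumptions rather than official figures.)

```python
# Rough sketch of where the commonly quoted peak-FLOPS figures come from.
# Specs below are the widely cited ones, not official confirmations.

def peak_gflops(alus, flops_per_alu_per_clock, clock_ghz):
    """Theoretical peak = ALUs x FLOPs per ALU per clock x clock (GHz)."""
    return alus * flops_per_alu_per_clock * clock_ghz

# Xenos (360): 48 unified shader pipes, each a vec4+scalar unit doing a
# multiply-add -> (4 + 1) * 2 = 10 FLOPs per clock, at 500 MHz.
xenos = peak_gflops(48, 10, 0.5)    # ~240 GFLOPS

# PS4 GPU: 1152 GCN stream processors, 2 FLOPs (FMA) per clock, at 800 MHz.
ps4 = peak_gflops(1152, 2, 0.8)     # ~1843 GFLOPS, i.e. ~1.84 TFLOPS

print(f"Xenos: ~{xenos:.0f} GFLOPS, PS4 GPU: ~{ps4:.0f} GFLOPS")
```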
 
Why even try to compare a console GPU with a PC GPU?

Closed hardware, near 100% efficiency, coded to the metal, games coded to one specific hardware config

Vs

Open, modular, starved hardware, high overheads, ~50% efficiency, games coded for compatibility across multiple hardware configs

Uh, neither of those efficiency numbers is right. PC efficiency is a bit higher and console efficiency is way lower than that; the gap in efficiency between PCs and consoles is actually around 5-8%, in favor of consoles. Console developers are more likely to use visual trickery (fake it) to get a result, whereas a PC dev will accurately render it since the PC has the power.
 
The bottom line is nothing is going to target the high-end GPUs specifically... a 690 can play Battlefield 3 on ultra at a resolution of 2560x1600 at an average of 113 fps... that is insane. No one needs these things to play the current crop of games, even Crysis 3. Next-gen consoles should bump the requirements up, though.

Honestly, I think the comparison is interesting but not that informative. As I said, high-end GPUs these days are drawing an obscene amount of energy. No GPU in 2005 was maxing the most demanding game at 100 fps. Hell, when Crysis came out it was bringing the highest-end GPUs to their knees. Even SLI setups couldn't run it on ultra that well. Now, with SLI Titans, you can get Crysis 3 to run at 95 fps. That is insane, but it is only because the GPUs are drawing crazy wattage. No magic tech here.
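(For a sense of how much headroom that 690 figure implies, here is a purely illustrative pixel-throughput comparison; the 1280x720 at 30 fps "console target" is an assumption made just for the arithmetic.)

```python
# Purely illustrative pixel-throughput comparison between a console-style
# target (assumed 1280x720 at 30 fps) and the BF3-on-a-690 figure quoted
# above (2560x1600 at ~113 fps average).

def pixels_per_second(width, height, fps):
    return width * height * fps

console_target = pixels_per_second(1280, 720, 30)    # ~27.6 Mpixels/s
gtx690_bf3 = pixels_per_second(2560, 1600, 113)      # ~463 Mpixels/s

print(f"Roughly {gtx690_bf3 / console_target:.0f}x the pixel throughput")
```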
 
Feel free to criticize and add your own comparisons.

I think you can't just talk about the GPU without considering how efficiently its power can be utilized by the system. The PS4 uses an HSA-based architecture, which should allow optimal data sharing between the CPU and GPU, meaning that you should be able to get the most out of the GPU. On PC we won't see HSA before 2014, according to AMD. Yes, on PC you can buy a 4 TF GPU, but it won't make the PCIe bus any faster.
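(To make the PCIe point concrete, here is a back-of-the-envelope sketch of what shuttling data between CPU and GPU every frame can cost; the bandwidth and buffer-size numbers are assumptions for illustration, not specs or measurements.)

```python
# Back-of-the-envelope look at why per-frame CPU<->GPU traffic over PCIe
# hurts. Both numbers below are assumptions for illustration only.

PCIE3_X16_GBPS = 16.0     # rough one-way PCIe 3.0 x16 bandwidth, GB/s
FRAME_BUDGET_MS = 16.7    # one frame at 60 fps

def copy_time_ms(megabytes, gbps):
    """Time to push a buffer of the given size across the bus."""
    return (megabytes / 1024.0) / gbps * 1000.0

# Shuttling, say, 256 MB of shared data to the GPU every frame:
t = copy_time_ms(256, PCIE3_X16_GBPS)
print(f"~{t:.1f} ms of a {FRAME_BUDGET_MS} ms frame spent just copying")
# On a unified-memory (HSA-style) design that copy simply doesn't happen:
# the CPU and GPU read the same physical pool.
```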
 

benny_a

extra source of jiggaflops
So, not only was the Xenos chip competing in terms of FLOPS with the high end of PC graphics, it was also ahead in architecture. The PS4 GPU competes with last year's mid-range cards and it's nothing special techwise.

So the eDRAM used in the 360 was ahead, but the PS4's 176 GB/s of memory bandwidth, available to both the CPU and GPU, is not better than the PCI-E bandwidth available on current PCs?
 
I think you can't talk about the GPU without considering how efficiently its power can be utilized by the system. The PS4 uses an HSA-based architecture, which should allow optimal data sharing between the CPU and GPU, meaning that you should be able to get the most out of the GPU. On PC we won't see HSA before 2014, according to AMD. Yes, on PC you can buy a 4 TF GPU, but it won't make the PCIe bus any faster.
The PCIe bus has yet to be bottlenecked. Shit, you can do quad SLI with PCIe 3.0 and not bottleneck the system.
It has more than enough bandwidth.
 

sangreal

Member
So the eDRAM used in the 360 was ahead, but the PS4's 176 GB/s of memory bandwidth, available to both the CPU and GPU, is not better than the PCI-E bandwidth available on current PCs?

The thread is about GPUs, and no, the 176 GB/s memory bandwidth for the GPU is not better than current high-end PCs.

If you want to argue about comparing the entire architecture, it's an entirely different topic.
 

benny_a

extra source of jiggaflops
The PCIe bus has yet to be bottlenecked. Shit, you can do quad SLI with PCIe 3.0 and not bottleneck the system.
It has more than enough bandwidth.
There are benchmarks on HardOCP that say the opposite. Unless changing the variable from PCI-E 2.0 to PCI-E 3.0 and getting more frames means there isn't a bottleneck. Obviously it isn't like going from a regular hard disk to an SSD when talking I/O, but their results show up to a 10% advantage from changing a single variable in modern games.

The thread is about GPUs, and no, the 176 GB/s memory bandwidth for the GPU is not better than current high-end PCs.
I feel that the eDRAM part is represented in the OP so I don't know why my post isn't valid as a response. Feel free to disagree; if you feel the HSA architecture is not at all important to the GPU being discussed, then it can be discarded.
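(For reference, the theoretical one-way numbers behind that PCI-E 2.0 vs 3.0 comparison work out roughly as follows. This is only a sketch from the published transfer rates and encoding overheads; real-world throughput is lower, which is part of why benchmark deltas tend to be modest.)

```python
# Theoretical one-way bandwidth of an x16 slot, from the published transfer
# rates and encoding overheads (a sketch; real-world throughput is lower).

def pcie_x16_gb_s(transfer_rate_gt_s, encoding_efficiency):
    """16 lanes x per-lane rate x encoding efficiency, converted to GB/s."""
    return 16 * transfer_rate_gt_s * encoding_efficiency / 8

pcie2 = pcie_x16_gb_s(5.0, 8 / 10)       # 8b/10b encoding    -> ~8.0 GB/s
pcie3 = pcie_x16_gb_s(8.0, 128 / 130)    # 128b/130b encoding -> ~15.8 GB/s

print(f"PCIe 2.0 x16: ~{pcie2:.1f} GB/s, PCIe 3.0 x16: ~{pcie3:.1f} GB/s")
```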
 

Eideka

Banned
Is the bandwidth a real concern on PC?

I have never heard of this being an issue even with 3 graphics cards. PCI Express 3.0 should last a good part of next-gen.

I believe high-end PCs won't struggle to run multiplats at the very least as well as the PS4, even if the VRAM limitation is likely to be a problem later on.
 
There are benchmarks on HardOCP that say the opposite. Unless changing the variable from PCI-E 2.0 to PCI-E 3.0 and getting more frames means there isn't a bottleneck. Obviously it isn't like going from a regular hard disk to an SSD when talking I/O, but their results show up to a 10% advantage from changing a single variable in modern games.


I feel that the eDRAM part is represented in the OP so I don't know why my post isn't valid as a response.

Did they keep the exact same processor with the exact same RAM with the exact same north and south bridge with the exact same hard drive with the exact same software installed?
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
PCIe is just used for transferring the data up to the VRAM. The bandwidth of the VRAM is what matters.
 
The PCIe bus has yet to be bottlenecked. Shit, you can do quad SLI with PCIe 3.0 and not bottleneck the system.
It has more than enough bandwidth.

This is a post from a knowledgeable user on B3D:

If the PS4 or XB720 leverage shared computation between the CPU and GPU to significant effect, there are cases where PC setups can suffer if they run into PCIe latency/bandwidth restrictions. Discrete products may also lag behind the consoles in terms of shared memory space, compared to consoles that will have it at the outset.

Why not stick an Orbis-like chip on a graphics board and call it a new generation? Even a few cores could, with the help of the driver or HSA runtime, actually make some of the general GPU processing workloads that are dominated by copy and PCIe overhead reasonable to use.

http://beyond3d.com/showpost.php?p=1711211&postcount=16
 

Eideka

Banned
PCIe is just used for transferring the data up to the VRAM. The bandwidth of the VRAM is what matters.

The memory bandwidth on cards like the 7970 or the GTX 680 is not sufficient?

I swear I heard that the 680 has something like 192 GB/s....
 
Okay then, but that does not solve the bandwidth problem completely, apparently.

How will that affect PC ports?
PC ports will be fine with any mid to high end PC card for at least 3 years. If you have an SLI setup, you'll be good to go for at least 4-5 years with the equivalent of a 660 Ti or above.
 

Eideka

Banned
PC ports will be fine with any mid to high end PC card for at least 3 years. If you have an SLI setup, you'll be good to go for at least 4-5 years with the equivalent of a 660 Ti or above.

That sounds a little bit too optimistic, but I hope you are correct. I was under the impression that high-end GPUs would be necessary to run them as well as the PS4/720.

How did you get that from what I said? I was simply stating that the ~25GB/s PCIe bandwidth is by and large irrelevant for rendering.
You mentioned the VRAM bandwidth, so I felt compelled to ask whether or not high-end cards would fare well in that scenario. I hope the next batch of graphics cards will have a 384-bit bus; I don't know why Nvidia has dropped the ball to such an extent with its 600 series of cards.
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
The memory bandwidth on cards like the 7970 or the GTX 680 is not sufficient?

I swear I heard that the 680 has something like 192 GB/s....

How did you get that from what I said? I was simply stating that the ~25GB/s PCIe bandwidth is by and large irrelevant for rendering.
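(On the 192 GB/s question: VRAM bandwidth figures fall out of bus width times effective memory transfer rate. A quick sketch below using the commonly cited specs; treat the results as approximate.)

```python
# VRAM bandwidth falls out of bus width (in bytes) times the effective
# memory transfer rate. Bus widths and clocks below are the commonly
# cited specs; treat the results as approximate.

def vram_bandwidth_gb_s(bus_width_bits, effective_rate_gt_s):
    """Peak memory bandwidth in GB/s."""
    return (bus_width_bits / 8) * effective_rate_gt_s

print(f"GTX 680: ~{vram_bandwidth_gb_s(256, 6.008):.0f} GB/s")   # ~192
print(f"HD 7970: ~{vram_bandwidth_gb_s(384, 5.5):.0f} GB/s")     # ~264
print(f"PS4:     ~{vram_bandwidth_gb_s(256, 5.5):.0f} GB/s")     # ~176
```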
 
One more thing.

Wasn't the PS3 meant to be considerably more powerful (in FLOPS) than the 360? Ultimately we never saw a big difference between them.

Maybe someone could confirm; I googled a bit and it seems the PS3 was double the FLOPS of the 360?

No, not really. It was all marketing BS. Lies. RSX was something like 250 GFLOPS vs. 240 for Xenos. So a little more raw power, but Xenos had a unified shader architecture. Cell had more than double the FLOPS of Xenon, though as we know it was difficult to tap into that.

Btw, I don't understand the PS4 FLOPS numbers; it says it's 1 TFLOP, but it's 1.84 TFLOPS...

Edit: nvm I understand how to read it now. It's a ratio.
 

gconsole

Member
My point still stands. People like you love to twist facts so they suit their agenda, but tangible facts say otherwise and the meltdowns are entertaining. I'm more upset by this situation than anything else; I really wanted next-gen consoles to top high-end PCs.

That said it does not change the fact that this hardware will produce better results than what is found in the PC space.
This is what excites me the most. :)

As I said I'm eager to see where the GTX770/780 and the new 89XX fit in all of this, because that's going to be a major talking point for PC folks given how demanding next-gen multiplats are likely to be.

What's wrong with being positive? The console market is shrinking, and if the manufacturers don't try to be smart, the whole industry will go down a lot faster.

Nobody is arguing that consoles will beat PCs in terms of raw performance. It would be very stupid to compare a $500 eco car with a $2000 truck. Come on.
 
As a programmer (but a graphics layman) I'd imagine the bandwidth isn't an issue now, but latency could be more of a headache. I think what the dude is saying is that bandwidth could be an issue in a couple of years, but it's not a definite.
 

AgentP

Thinks mods influence posters politics. Promoted to QAnon Editor.
Which is not true at all; see my previous post.

I said rendering; you are talking about the CPU and GPU sharing data over a PCIe bus, which of course is avoided at present. Until PCs make a jump to UMA (besides low-end laptops), this will be the case.
 
Titan is apparently 4.5 TFLOPS, and those are Nvidia FLOPS too; that would amount to 5 or more AMD TFLOPS, I imagine.

Is there a single game in existence that takes advantage of that kind of power?

I get the impression PC GPUs have become more about penis size than about running the PC games designed today.
 

StevieP

Banned
Is there a single game in existence that takes advantage of that kind of power?

Plenty. Even some games designed with current HD console targets in mind can bring those vanity cards to their knees.

I get the impression PC GPUs have become more about penis size than about running the PC games designed today.

Nope. Unless you don't like things like resolution, framerate, and real anti-aliasing solutions.
 
PS4 basically uses a high-end GPU of 2010 (GTX 570), which is good enough.

Remember, a year after the Xbox 360 launch, PC users got a card that completely crushed the current-gen consoles' GPUs... the 8800 GTX.

In turn, the 8800 GTX is nowhere near the GTX 570 in power.

So yes, it's still a pretty big difference between this gen's GPUs and next gen's.

What? It trounces a high-end GPU from 2010 because it has a little bit more than or the same amount of FLOPS but is much, much more efficient: GCN++ (whatever it is, probably GCN 2.0), plus the fact that it's on an APU, and the much higher memory bandwidth. In real-world performance it would destroy a 3-year-old high-end PC GPU.
 
Plenty. Even some games designed with current HD console targets in mind can bring those vanity cards to their knees.

I'd be curious to hear which games. Is this more an issue of unoptimized games than of games that truly need 4 teraflops to run at max settings?

Call me a fool, but I wouldn't be surprised if the PS4 got a port of Crysis 3 this fall and ran it at near max settings.
 

Orayn

Member
Is there a single game in existence that takes advantage of that kind of power?

I get the impression PC GPUs have become more about penis size than about running the PC games designed today.

Well, it depends on what you mean by "takes advantage of." Running The Witcher 2 or Battlefield 3 at 2560x1440 with supersampling AA at 60FPS still takes one hell of a setup to accomplish. Crysis 2 and 3 start with steep requirements and only get more demanding when you start using graphical mods and messing with console variables to take the visuals even further.

I'd be curious to hear which games. Is this more an issue of unoptimized games than of games that truly need 4 teraflops to run at max settings?

Call me a fool, but I wouldn't be surprised if the PS4 got a port of Crysis 3 this fall and ran it at near max settings.

Optimization is part of it. A lot of the stuff that demands so much power tends to be inefficient specifically because the games are made with the PS3 and 360 in mind.
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
Is there a single game in existence that takes advantage of that kind of power?

Of course there is. You could run Crysis 3 with all the bells and whistles on and loads of anti-aliasing, downsampling from a high resolution, and you wouldn't be able to get 60 fps.
 

cyberheater

PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 PS4 Xbone PS4 PS4
Don't need a good GPU with all that RAM.

What's going to give you better visuals: a great GPU with 1 GB of RAM, or a crap GPU with 8 GB?

EDIT: I'm not saying the PS4 GPU is weak. I'm saying your argument is.
 

StevieP

Banned
I'd be curious to hear which games. Is this more an issue of unoptimized games than of games that truly need 4 teraflops to run at max settings?

Call me a fool, but I wouldn't be surprised if the PS4 got a port of Crysis 3 this fall and ran it at near max settings.

"Optimization" in the context it is often used on this board (i.e. to make consoles magically perform 2x better than their parts' raw power) is really just turning things down/off, removing AA or going to a post-AA blurfest solution, reducing rendering resolution, texture resolutions, toning down effects or their accuracy, and other similar smoke&mirrors type things to make the best use of the raw computational power (or lack thereof, in the case of optimization) available on the fixed target.

There is no fixed target on PC. You can take it as high or low as you want. If you want to "optimize" for your hardware, there are plenty of ways to do so either via the in-game sliders, INI/CFG editing, or similar utilities.

Crysis 3 cranked all the way up at high resolutions is bringing Titan PCs to their knees, if the Titan thread is to be believed (with some people SLI-ing them to get better results). And the new CryEngine is extremely well 'optimized' for current multi-core PCs. If you want BF3 multiplayer to retain 60 fps or above at 1080p, you need something high-end like a 680, and Frostbite 2 is also extremely well optimized on PC.
 
What? It trounces a high-end GPU from 2010 because it has a little bit more than or the same amount of FLOPS but is much, much more efficient: GCN++ (whatever it is, probably GCN 2.0), plus the fact that it's on an APU, and the much higher memory bandwidth. In real-world performance it would destroy a 3-year-old high-end PC GPU.

Probably would. The weakness is that PS4 hardware gets set in stone some time this year and won't change for 7 years, give or take. Been the way of things for a long time so I'm not really sure what people are arguing about.
 

benny_a

extra source of jiggaflops
There is no fixed target on PC. You can take it as high or low as you want. If you want to "optimize" for your hardware, there are plenty of ways to do so either via the in-game sliders, INI/CFG editing, or similar utilities.
I think Naughty Dog and Guerilla Games are doing a bit more than just changing sliders and ini-files.

I'm not arrogant enough to assume that me changing the variables in Crysis 3 is the same as what the ICE Team is doing day in and day out.
 

StevieP

Banned
I think Naughty Dog and Guerilla Games are doing a bit more than just changing sliders and ini-files.

You mean stuff like using the Cell to augment the RSX to code to the strengths of the platform?
The PS4 is a little bit less like that and a little bit more like current PCs.

The context in which people use "optimization" on this board describes exactly that: taking advantage of/knowing the platform you're coding for and putting out the best product you can.
 

benny_a

extra source of jiggaflops
The PS4 is a little bit less like that and a little bit more like current PCs.
Except for the part where it's not going through several APIs and has HSA.

The context in which people use "optimization" on this board describes exactly that: taking advantage of/knowing the platform you're coding for and putting out the best product you can.
I have no issue with the statement as it is presented here. Your previous post reads more like developers are doing the same thing I could do on my PC, which is obviously untrue. They control everything.
 

Loofy

Member
The people complaining about how underpowered the consoles will be are probably the same ones who wanted the new consoles to come out in 2011. I wonder how 2011 hardware would have compared to Durango/Orbis, lol.
 

benny_a

extra source of jiggaflops
The people complaining about how underpowered the consoles will be are probably the same ones who wanted the new consoles to come out in 2011. I bet those consoles would be underpowered, lol.
I wish the consoles were more powerful. But isn't that what most people want?

The more the baseline is moved forward the better for everyone, I think.
 

Loofy

Member
I wish the consoles were more powerful. But isn't that what most people want?

The more the baseline is moved forward the better for everyone, I think.
Sure, but it's still funny to think that a lot of people were ready to spend the next generation on a console that would have come out as early as 2010 (5 years after the 360 launch). It wasn't even that long ago that most people predicted that 2 GB would be the norm for RAM. It's 2013 and the hardware still isn't good enough.
 