
VGLeaks: PS4 GPU Hardware Balanced at 14 CUs - 4 CUs only minor boost for rendering

That goes without saying, because CUs don't scale linearly. For example, if you have 6 CUs, adding 2 more isn't going to generate the same level of performance as the initial 6; add another 2 and it's not going to match the performance of the last 8, and so on. However, that doesn't mean there isn't a performance increase at all, which is what everyone is assuming. The same thing applies to GCN hardware on desktop GPUs. You're never getting 100% of each additional CU that's added (this goes for Nvidia's CUDA cores as well), but you are getting a performance increase nevertheless. One thing we do know is that 12-14 CUs isn't the ceiling, or anywhere near it, for AMD's GCN architecture in terms of efficiency.
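To picture that diminishing-returns argument, here's a toy model in Python; the per-CU efficiency factor is invented purely for illustration and is not a real GCN figure:

```python
# Toy model of non-linear CU scaling: each additional CU contributes
# slightly less than the previous one, but total throughput still rises.
# The 0.97 efficiency factor is made up for illustration only.
def effective_cus(n, per_cu_efficiency=0.97):
    """Sum the diminishing contributions of n compute units."""
    return sum(per_cu_efficiency ** i for i in range(n))

for n in (6, 8, 12, 14, 18):
    print(f"{n:2d} physical CUs -> {effective_cus(n):5.2f} effective CUs")
```

Each step up still adds real throughput, just a bit less per CU, which is exactly the point being made above.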

A hard 14:4 split doesn't make sense, simply because it would undermine the entire point of increasing the ACEs to 8 with 8 CLs.

Gemüsepizza said:
The FLOPS performance doesn't change. And using 18 CUs for rendering is better than using 14 CUs for rendering. Like I said, Killzone uses most of its CUs for rendering and no compute for particles. But there is a possibility that if you use some of those CUs for things like particles instead of normal rendering, it will look better.

Appreciate this stuff. Anywhere I can go for good reading/learning on it?
 
Cerny's quote directly:

So Cerny said that if they had wanted throughput above all else, like Kutaragi's philosophy, they would have gone with the eDRAM solution. But instead they went with GDDR5 because it was more accessible for developers.

Did you read what I just said? They were always going with GDDR5.
It would have been eDRAM plus GDDR5 on a 128-bit bus, with smaller bandwidth.
 
You're right, that one paragraph acknowledges what Cerny said; it's just that most will read the OP and assume the worst.



It's not that the slide is old, it's that Cerny debunked it a while ago.

Cerny's quote seems to say the same thing that's in the slide.

There's no actual mandate that you have to use it for GPGPU. 14 CUs is the sweet spot for graphics but we've got some extra juice if you want to throw some GPGPU stuff in your game.

Same thing the slide says.
 
We don't know if the GPU changed.

A GCN compute unit has 64 shader cores and 4 texture mapping units. This means that FLOPS and texel fillrate are coupled to the number of compute units. Bandwidth and the number of raster operation pipelines are decoupled from the number of compute units, though. Xbox One uses 16 ROPs for 12 CUs. Maybe Sony also had a GPU with 16 ROPs in 2012 and decided later that they wanted 32 ROPs (which simply doubles the pixel fillrate and helps with high resolutions).

Just for comparison:

HD7850 = 16 CUs, 32 ROPs, 64 TMUs, 1024 shader cores, 153 GB/s
HD7870 = 20 CUs, 32 ROPs, 80 TMUs, 1280 shader cores, 153 GB/s
PS4 = 18 CUs, 32 ROPs, 72 TMUs, 1152 shader cores, 176 GB/s

I don't know why the PS4 should be bottlenecked when using its full power for rendering. The number of shader cores and TMUs is coupled to the number of CUs, and the ROPs and bandwidth look plentiful compared to AMD's desktop cards.
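To make that coupling concrete, here's a quick sketch deriving those rates from CU count, ROP count, and clock; it assumes the usual GCN figures of 64 shader cores and 4 TMUs per CU, 2 FLOPs per core per cycle, and the reported clocks of 800 MHz for PS4 and 853 MHz for Xbox One:

```python
# GCN rule of thumb: FLOPS and texel rate scale with CUs; pixel rate
# scales with ROPs. Assumes 64 cores / 4 TMUs per CU, 2 FLOPs per cycle.
def gcn_rates(cus, rops, clock_ghz):
    tflops  = cus * 64 * 2 * clock_ghz / 1000  # shader throughput, TFLOPS
    gtexels = cus * 4 * clock_ghz              # texel fillrate, GTexels/s
    gpixels = rops * clock_ghz                 # pixel fillrate, GPixels/s
    return tflops, gtexels, gpixels

print("PS4:     ", gcn_rates(18, 32, 0.800))  # ~1.84 TFLOPS, 57.6 GT/s, 25.6 GP/s
print("Xbox One:", gcn_rates(12, 16, 0.853))  # ~1.31 TFLOPS, 40.9 GT/s, 13.6 GP/s
```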


No one ever accounts for wattage :(
 
Why do some people want PS4 so badly to be weaker? Is it a misery loves company thing?

If you can take this in context, and I mean, IN CONTEXT, I was told a few months ago this verbatim:

"The true successor to the Xbox 360 is the PlayStation 4."

How and why?
- Easier to program for
- Better performing multiplatform titles
- Designed around games first
- Attempt at captivating the hardcore first and foremost.
 
Sorry, but isn't it the case that the extra logic on those 4 special CUs provides only a minor boost for rendering, as opposed to them being just normal CUs?
 
I posted this slide because it had just been made public by VGLeaks when I opened the thread. If you look at the OP, I added the comment from Cerny about it and addressed the age of the slide and the fact that things could've changed.

But I can see why you'd assume from my posting history here that I want to talk the PS4 down: I'm obviously biased towards the X1 to some degree, but that doesn't mean I want to dismiss the PS4's hardware advantage.
I don't know how you're still here. You pretend to be an insider on Twitter even though you know nothing and you're a bigger troll than senjutsu sage and reiko combined. Seriously, get this nonsense out of here. It's disgusting and you are shaming gaf with your childish behaviour.
 
No, you posted this because you think 12 CUs are better than 18 CUs.

That's not inherently wrong... With the optimum load, 12 CUs with lower latency can outperform 18 CUs with more memory-access latency.

The million-dollar question is whether graphics and compute tasks fit that scenario or not.
 
There's something else that bothers me: in that slide, Sony suggests using 4 CUs for GPGPU stuff. But now Sony encourages developers to use "asynchronous fine-grain compute", which doesn't need any fixed number of CUs.
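A toy way to see why fine-grain sharing beats a fixed split (a pure scheduling analogy in Python, nothing GPU-accurate; the per-tick demand numbers are invented): with a hard 14+4 partition, the reserved CUs idle whenever compute work is light, while a shared pool lets compute soak up whatever the renderer leaves free.

```python
# Toy scheduler: static CU partition vs. fully shared pool.
# Demand figures per "tick" are invented for illustration.
TOTAL_CUS = 18
gfx_demand = [18, 16, 12, 18, 10, 14]  # CU-ticks the renderer wants
cmp_demand = [2, 6, 0, 3, 8, 1]        # CU-ticks compute wants

def util_static(reserved):
    gfx_pool = TOTAL_CUS - reserved
    used = sum(min(g, gfx_pool) + min(c, reserved)
               for g, c in zip(gfx_demand, cmp_demand))
    return used / (TOTAL_CUS * len(gfx_demand))

def util_shared():
    used = sum(min(g + c, TOTAL_CUS)  # compute fills whatever graphics leaves idle
               for g, c in zip(gfx_demand, cmp_demand))
    return used / (TOTAL_CUS * len(gfx_demand))

print(f"static 14+4 split: {util_static(4):.0%} of CU-ticks used")
print(f"shared pool:       {util_shared():.0%} of CU-ticks used")
```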
 
Any hints as to whether this is true or not?

If Cerny himself is debunking it, it doesn't matter what anyone else is saying. More importantly, it is super funny to see the hardcore Xbox One fanboys read the title of this thread with glee, only to find themselves in a trap of nomenclature.
 
If you can take this in context, and I mean, IN CONTEXT, I was told a few months ago this verbatim:

"The true successor to the Xbox 360 is the PlayStation 4."

How and why?
- Easier to program for
- Better performing multiplatform titles
- Designed around games first
- Attempt at captivating the hardcore first and foremost.

That has definitely been evident. For further context, it also seems that the X1 is the successor to the PS3 in regard to taking on:

- More difficult to program for - Cell / ESRAM (according to devs and articles)
 
I would gladly have more computing resources rather than less.

Some of you guys need to let go. The XB1 is a done deal, and nothing short of a hardware redesign or a new product line is going to change anything.
 
There's something else that bothers me: in that slide, Sony suggests using 4 CUs for GPGPU stuff. But now Sony encourages developers to use "asynchronous fine-grain compute", which doesn't need any fixed number of CUs.

Yeah that's what I tried to point out in my post, although you worded it better.
 
If you can take this in context, and I mean, IN CONTEXT, I was told a few months ago this verbatim:

"The true successor to the Xbox 360 is the PlayStation 4."

How and why?
- Easier to program for
- Better performing multiplatform titles
- Designed around games first
- Attempt at captivating the hardcore first and foremost.

Can't argue with any of that, in any context, except maybe the first point which is most likely due to MS being behind on their SDK and drivers.
 
Also, this slide is from before the PS4 went from 4 GB to 8 GB.

Exactly what I was about to post. Could all of this be legit, and yet could the RAM increase change the way it should be interpreted? Has this so-called "balance" been shifted due to the RAM increase?
 
Even if this is true, I'm pretty sure the engineers will find a way to use those CUs for graphics.
You could probably implement a lot of post-processing effects with compute shaders.

It really doesn't matter: if the performance is there, the exclusive teams will use or abuse it for tasks it's not meant for.
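Post-processing fits compute well because every output pixel is independent. A minimal sketch of the idea in Python/NumPy (a real version would be a GLSL/HLSL compute shader; the 3×3 box blur stands in for any per-pixel pass):

```python
import numpy as np

# Each "thread" computes one output pixel from its 3x3 neighborhood,
# with no dependency on other outputs -- the shape of a compute-shader pass.
def box_blur(image):
    """3x3 box blur over a 2D single-channel float image."""
    padded = np.pad(image, 1, mode="edge")
    h, w = image.shape
    out = np.zeros_like(image)
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0

frame = np.random.rand(720, 1280).astype(np.float32)  # stand-in frame buffer
print(box_blur(frame).shape)
```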
 
If Cerny himself is debunking it, it doesn't matter what anyone else is saying. More importantly, it is super funny to see the hardcore Xbox One fanboys read the title of this thread with glee, only to find themselves in a trap of nomenclature.

The only problem with this thread is that people act like Cerny is some unbiased resource who is all about the tech and nothing else.
 
Look at all those X1 fans jumping into this thread!

18 CUs are 18 CUs, whether used for rendering or compute or raycasting or even making a sandwich. They are there to serve a purpose, and it's at the mercy of developers to do anything with them.

MS in desperation is touting 14 CUs, which, funny enough, is more than the X1's 12 CUs.

Assume a scenario where the PS4 utilizes 14 (rendering) + 4 (compute) in a multiplat; to achieve a similar compute effect, the X1 needs to use 8 (rendering) + 4 (compute).

So the 50% difference is there; it won't disappear magically.
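Spelling out that hypothetical split in numbers (the 14+4 and 8+4 figures are the scenario above, not confirmed allocations):

```python
# Hypothetical multiplat scenario: equal compute budgets on both consoles.
ps4_render, ps4_compute = 14, 4   # PS4: 18 CUs total
xb1_render, xb1_compute = 8, 4    # X1:  12 CUs total

total_gap  = (18 - 12) / 12                          # 50% more CUs overall
render_gap = (ps4_render - xb1_render) / xb1_render  # 75% more for rendering
print(f"total CU advantage:     {total_gap:.0%}")
print(f"rendering CU advantage: {render_gap:.0%}")
```

Matching compute budgets actually widens the rendering gap beyond the headline 50%.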
 
Any hints as to whether this is true or not?

That has definitely been evident. For further context, it also seems that the X1 is the successor to the PS3 in regard to taking on:

- More difficult to program for - Cell / ESRAM (according to devs and articles)

It can help justify / explain the behavior of Xbox 360 fans who oh so desperately want the Xbox One to be more than it really is. I feel for those guys. Microsoft garnered so much good will from the hardcore (the controller, Xbox Live, achievements, etc.), but they fucked it all up in a matter of months. It's as if they read their entire fan base wrong.
 
Even if this is true, I'm pretty sure the engineers will find a way to use those CUs for graphics.
You could probably implement a lot of post-processing effects with compute shaders.

It really doesn't matter: if the performance is there, the exclusive teams will use or abuse it for tasks it's not meant for.

Yes, post-processing and particles seem to be the current trend.
 
Out of curiosity, if the PS4 is balanced for rendering at 14 CUs, does anybody else think the XB1 would likely be balanced at between 6-10 CUs?

I mean:

Pixel fill rate: 13.648 GPixels/s against 25.6 GPixels/s - PS4 ~90% more
Texture fill rate: 40.9 GTexels/s against 57.6 GTexels/s - PS4 ~40% more

Estimated practical bandwidth: 140 GB/s vs. 172 GB/s - PS4 ~23% more

Depending on where the bottleneck is, the XB1 would be "balanced" when the PS4 is between 23% and 90% higher in floating-point performance.

So let's run this down with the XB1 mentality that the PS4 is balanced at 14 CUs and therefore 1.43 TFLOPS. By Microsoft's own "balance" logic, the XB1 would also be balanced somewhere between 0.75 TFLOPS and 1.16 TFLOPS, or 6.9 to 10.6 CUs, meaning the XB1's CUs would stop scaling linearly somewhere between 6 and 10.

But of course only the PS4 could possibly be affected by this.

Jus sayin'
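For anyone checking that arithmetic, here it is worked through (the bandwidth figures are the poster's estimates, and the exact pixel-rate gap of ~88% is rounded up to 90% above):

```python
# Ratios from the post above: (PS4, Xbox One) pairs.
metrics = {
    "pixel fill (GPixels/s)": (25.6, 13.648),
    "texel fill (GTexels/s)": (57.6, 40.9),
    "est. bandwidth (GB/s)":  (172.0, 140.0),   # poster's practical estimates
}
for name, (ps4, xb1) in metrics.items():
    print(f"{name}: PS4 +{(ps4 - xb1) / xb1:.0%}")

# Applying those margins to a supposed PS4 "balance point" of 1.43 TFLOPS
# (14 CUs @ 800 MHz) gives the post's implied XB1 balance range:
print(1.43 / 1.90, "to", 1.43 / 1.23, "TFLOPS")   # ~0.75 to ~1.16
```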
 
Some AMD GCN cards don't even have 4 GB of GDDR5, and some have double the CUs. I don't think RAM is the issue :P

That's true. I still don't even understand what the issue is. Is there not enough bandwidth to feed all CUs? What's this fuss about "balance" when it never seems to be an issue with standard GPUs? What could prevent all CUs from being fully utilized?
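One crude way to answer the bandwidth question is bytes of bandwidth per FLOP. By that back-of-envelope metric (reference clocks assumed: 860 MHz for the HD7850, 1000 MHz for the HD7870, 800 MHz for PS4; caches and access patterns ignored entirely), the PS4 has no less headroom than the desktop parts listed earlier:

```python
# Bandwidth per unit of shader throughput: a rough "can we feed the CUs" check.
# Clock assumptions: HD7850 860 MHz, HD7870 1000 MHz, PS4 800 MHz.
cards = {
    "HD7850": (1.76, 153.6),   # (TFLOPS, GB/s)
    "HD7870": (2.56, 153.6),
    "PS4":    (1.84, 176.0),
}
for name, (tflops, gbps) in cards.items():
    print(f"{name}: {gbps / (tflops * 1000):.3f} bytes per FLOP")
```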
 
I don't know how you're still here. You pretend to be an insider on Twitter even though you know nothing and you're a bigger troll than senjutsu sage and reiko combined. Seriously, get this nonsense out of here. It's disgusting and you are shaming gaf with your childish behaviour.

Wow. Calm down man.
 
Why do some people want PS4 so badly to be weaker? Is it a misery loves company thing?

The same kind of people that enjoy "Xbone is weaker" threads: fanboys. At the end of the day, the difference on the screen won't be as significant as some people think and most of all, it won't matter. At all.
 
[Slide image: PS4 GPU balance diagram, via VGLeaks]


http://www.vgleaks.com/playstation-4-balanced-or-unbalanced/

Note that the slide is from 2012 - so maybe Sony has since found some way to get more out of using more than 14 CUs than they could back then.

Mark Cerny on the matter:

http://www.eurogamer.net/articles/digitalfoundry-face-to-face-with-mark-cerny

It's the first time you've posted something about the PS4; every time I read one of your posts, it gives me the impression that you're trying to redeem the Xbone's hardware specs. And now this news... Forgive me, but are you Leadbetter? I can hardly believe the coincidence.
 
Wow, this is incredibly old news, and it was false in the first place.

The GPU CAN use its CUs for compute tasks, but it is not mandatory.

Resogun uses a GPGPU programming structure. I believe Housemarque stated that most of the physics and such are being done by the GPU and not the CPU.

It is all just a choice. You can if you want. But the system does not lock away 4 CUs for nothing but compute tasks; that would be a ridiculous structure.

And as people have shown, the PS4's GPU is a very respectable GPU even in the desktop world. Around a $200 card. I call it a 7860, since it seems to sit right in the middle between the 7850 and the 7870.
 