
EuroGamer: More details on the BALANCE of XB1

what.... dude, if I say something, and I say it's the truth, which I have, I never back down from it. I don't believe I'm spreading false information, and so I will stand by what I said, exactly as I've done here before.

"Actual real information" implies some sort of proof. Knowing that you don't have such proof, and hearing you say that this is, at best, third-hand telling of something and is "the interpretation of an experienced games programmer" and not the actual source material you imply when you make definitive statements like "Sony said it" is troubling. I believe you when you say you trust your source, but I believe Reiko trusted his source too. Look at where that got him.
 
It's a shame you can't elaborate more because there's a reason that curve has to exist. The PS4 GPU has nearly double the guaranteed bandwidth and double the ROPs, yet apparently there's still some bottleneck that provides diminishing returns on the extra CUs. That's why it's hard to believe and seems suspicious.

The 32 ROPs is something MS dodged for good reason. PS4 having twice the ROPs is undeniably significant. So if it's not the ROPs being the problem, then maybe the issue is bandwidth. From an article on Anandtech regarding ROPs, the more ROPs you have, the more bandwidth they eat up, and although the PS4 has a lot, twice the ROPs means a ton more bandwidth is required, so maybe 14 CUs is the max recommended before you start running into bandwidth issues... just a guess (rough numbers sketched at the end of this post).

Edit: Or just like with today's consoles, it's better to optimize at 720p/30fps for the best results possible since going higher res means you have to sacrifice visual fidelity. For the PS4 pipeline, using 14 CU for graphics and 4 CU for GPGPU is more efficient and yields better results overall than using all 18 for visuals and letting the Jaguar CPU handle what would otherwise be taken care of by the 4 CU you just jacked.
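For a rough sense of why ROPs and bandwidth get tied together, here's a back-of-envelope sketch. The assumptions are mine (32-bit colour target, no compression, blending doubling the traffic), and real workloads vary a lot, so treat it as illustration only:

# Back-of-envelope ROP bandwidth demand (all figures are illustrative assumptions)
rops = 32              # PS4 GPU colour ROPs
clock_ghz = 0.8        # assumed 800 MHz GPU clock
bytes_per_pixel = 4    # 32-bit colour target, ignoring depth and compression

pixels_per_sec = rops * clock_ghz * 1e9            # 25.6 Gpixels/s peak fill rate
write_bw = pixels_per_sec * bytes_per_pixel / 1e9  # ~102 GB/s just for colour writes
blend_bw = write_bw * 2                            # read + write when alpha blending

print(write_bw, blend_bw)   # ~102.4 and ~204.8 GB/s against ~176 GB/s of GDDR5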
 
You secretly love it. Waiting for that next big thread to explode. It's what the GAF lives for!

lol yeah I just want some 3rd party interpretations as well

Bish or maybe kidbeta as he seems quite knowledgeable on this stuff

Basically it seems to be boiling down to how much less effective are those extra 4 CU's and why
 

artist

Banned
Lol

this is funny.
ekim is the funniest guy I know.
Yeah, you are a good source.
haha, he's just too good.
 

Chobel

Member
The 32 ROPs is something MS dodged for good reason. PS4 having twice the ROPs is undeniably significant. So if it's not the ROPs being the problem, then maybe the issue is bandwidth. From an article on Anandtech regarding ROPs, the more ROPs you have, the more bandwidth they eat up, and although the PS4 has a lot, twice the ROPs means a ton more bandwidth is required, so maybe 14 CUs is the max recommended before you start running into bandwidth issues... just a guess.

The bottleneck can't be from bandwidth, otherwise Sony wouldn't encourage devs to use GPGPU. Graphics and compute share the same bandwidth.
 
"Actual real information" implies some sort of proof. Knowing that you don't have such proof, and hearing you say that this is, at best, third-hand telling of something and is "the interpretation of an experienced games programmer" and not the actual source material you imply when you make definitive statements like "Sony said it" is troubling. I believe you when you say you trust your source, but I believe Reiko trusted his source too. Look at where that got him.

Reiko was taking a chance on something (something silly and unrealistic, from my understanding) and on someone he clearly wasn't sure about, or he wasn't being honest. The source, in my view, is the proof, but short of screwing someone over I can't exactly go into detail, and hopefully people understand that much.

Had a feeling it might have a derailing effect, and my apologies seeing how that seems to be the case, since this isn't even the point of this thread, so I've probably contributed enough as it is.
 
Reiko was taking a chance on something (something silly and unrealistic, from my understanding) and on someone he clearly wasn't sure about, or he wasn't being honest. The source, in my view, is the proof, but short of screwing someone over I can't exactly go into detail, and hopefully people understand that much.

So who thus far has the source info been shared with?

I'm sure you did share it with Bish, so it's curious that he hasn't posted anything on it

Has anyone else received the goods yet?

And no offense, but I don't consider ekim to be impartial [if you have shared it with him]

If it makes you feel better ekim I don't consider myself to be impartial either
 
The bottleneck can't be from bandwidth, otherwise Sony wouldn't encourage devs to use GPGPU. Graphics and compute share the same bandwidth.

You'd have to make the argument that it's less bandwidth intensive or peaks at a different time than graphics loads.
 
The bottleneck can't be from bandwidth, otherwise Sony wouldn't encourage devs to use GPGPU. Graphics and compute share the same bandwidth.

Yeah, you can tell from that, and also looking at the bandwidth of the higher end cards.

Example: the 7870 GHz Edition has a bandwidth of 153 GB/s... effectively lower than the PS4's (granted, the PS4 will have 20 GB/s dedicated for the CPU, making them the same).

But 7870 shows a significant increase over 7850.
 
What's funny here? Hasn't it been confirmed that 2 CUs are deactivated for Xbox One?

I'm not sure what's so funny, either, but if there's any context to be added, it's that ekim was fully aware of the 2 additional CUs on the Xbox One and Microsoft experimenting with their inclusion well before that latest eurogamer article, so if he was maybe hinting about it, there's good reason.

He's playing the insider card; the funny part is that he's doing it on twitter and not here because it'll get him banned. :)

He's not really playing the insider card. He literally already knew about it. That doesn't have to mean he's playing games or not being honest, if that's the insinuation. All the eurogamer article did was just further confirm what he already knew well before.
 

Dragon

Banned
I'm not sure what's so funny, either, but if there's any context to be added, it's that ekim was fully aware of the 2 additional CUs on the Xbox One and Microsoft experimenting with their inclusion well before that latest eurogamer article, so if he was maybe hinting about it, there's good reason.

I never said Senjutsusage is a dev. I just learned that lherre is one. I won't explain it further now.

[Nathan Fillion loss-for-words reaction GIF]
 
Yeah, you can tell from that, and also looking at the bandwidth of the higher end cards.

Example: the 7870 GHz Edition has a bandwidth of 153 GB/s... effectively lower than the PS4's (granted, the PS4 will have 20 GB/s dedicated for the CPU, making them the same).

But 7870 shows a significant increase over 7850.

PS4 has all 176 GB/s available to the GPU. CPU is the one that's limited. Of course GPU bandwidth = total bandwidth - used CPU bandwidth.
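To put quick numbers on that worst case (these are just the figures quoted in this thread, so treat it as a sanity check rather than anything official):

# Worst-case GPU bandwidth on PS4 if the CPU is saturating its share (figures from the thread)
total_bw = 176.0       # GB/s, unified GDDR5
cpu_bw_max = 20.0      # GB/s, rough figure quoted for the CPU's slice
gpu_bw_worst = total_bw - cpu_bw_max   # 156 GB/s left for the GPU

hd7870_ghz_bw = 153.6  # GB/s, desktop 7870 GHz Edition, for comparison
print(gpu_bw_worst, hd7870_ghz_bw)     # 156.0 vs 153.6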
 
I'm not sure what's so funny, either, but if there's any context to be added, it's that ekim was fully aware of the 2 additional CUs on the Xbox One and Microsoft experimenting with their inclusion well before that latest eurogamer article, so if he was maybe hinting about it, there's good reason.

Right. MS admitted to toying with the idea of enabling the CUs but instead chose the up-clock (maybe spin, but who knows for sure?). Isn't Ekim's info consistent with that?
 

gofreak

GAF's Bob Woodward
It would be much worse if I were putting my own spin on the info. I told it exactly as I know it, with no changes.

If your friend told you this was some hard and fast universal rule, as you've been suggesting here, then your friend is mistaken.

where are people getting the "beyond 14 CU's" there's a dropoff nonsense?

is it because the CPU cannot feed the GPU beyond this point?

i know it's NOT a scaling issue with the architecture because the architecture is PROVEN to scale well beyond 14 CU's. (64 shader cores per CU btw). The Radeon 7970 for instance uses 32 CU's (2048 cores) and the 7870 Pitcairn chip uses 20 CU's for 1280 cores. Both the 7970 and 7870 have 32 ROP's, same as the PS4 GPU....so where is this efficiency dropoff coming from?

What makes you think those CUs are 100% utilised? Or that perf scales linearly with ALU? In what games?

We have PC benches which show they aren't necessarily.

There is no such thing as some general 100% balanced magic ratio of resources for every game. Every game is different. If you match the demands of your software to the ratios available on a GPU then indeed, you can get a good linear scaling with the resources available. That's when there's balance.

This is not a new concept. The ratio of ALU to other resources in a GPU has not been static over generations of GPU (as if there was some magic software-independent 'balance'). The ratio of ALU to other resources has been increasing, particularly on AMD cards. People have debated the merits of this on games today, pointing out that it may be relatively wasted in the shader workloads right now. Others point to tomorrow and GPGPU. We've seen that debate every time a new generation of card comes out with a shift in ratios.

Sony's example of the type of ALU a game could be leaving on the table if just thinking about graphics, is really just an echo of that.
 
PS4 has all 176 GB/s available to the GPU. CPU is the one that's limited. Of course GPU bandwidth = total bandwidth - used CPU bandwidth.

I know, I was just pointing out the worst case scenario in terms of bandwidth available to the GPU.

If your friend told you this was some hard and fast universal rule, as you've been suggesting here, then your friend is mistaken.



What makes you think those CUs are 100% utilised? Or that perf scales linearly with ALU? In what games?

We have PC benches which show they aren't necessarily.

There is no such thing as some general 100% balanced magic ratio of resources for every game. Every game is different. If you match the demands of your software to the ratios available on a GPU then indeed, you can get a good linear scaling with the resources available. That's when there's balance.

This is not a new concept. The ratio of ALU to other resources in a GPU has not been static. It has increased, particularly on AMD cards. People have debated the merits of this on games today, pointing out that it may be relatively wasted in the shader workloads right now. Others point to tomorrow and GPGPU. We've seen that debate every time a new generation of card comes out with a shift in ratios.

Sony's illustrative example of the type of ALU a game could be leaving on the table if just thinking about graphics, is really just an echo of that.

Of course it isn't linear, but it sure as fuck isn't a "significant drop in increase of performance" or whatever the hell people are saying.
 

Tripolygon

Banned
I try not to engage in these things but i'll bite.
It would be much worse if I were putting my own spin on the info. I told it exactly as I know it, with no changes.

And what you know could be (IS) wrong

That's how it was presented

1. According to Mark Cerny
That comes from a leak and is not any form of formal evangelisation
2. backed by Sony Press release
The Graphics Processing Unit (GPU) has been enhanced in a number of ways, principally to allow for easier use of the GPU for general purpose computing (GPGPU) such as physics simulation. The GPU contains a unified array of 18 compute units, which collectively generate 1.84 Teraflops of processing power that can freely be applied to graphics, simulation tasks, or some mixture of the two.
3. backed by Neil Brown, Team Lead, SCEE R&D
but maybe it's similar in a sense to the Xbox One early dev documentation where they pointed out how the Xbox 360 GPU was 53% efficient, whereas the Xbox One GPU is 100% efficient.

False equivalence, and it is common knowledge that no GPU is 100% efficient, but it is known that modern GPUs scale with CU count assuming they are not limited by bandwidth, ROPs or other parts of the GPU.

However, the context for that was that this was the difference between a much less efficient by nature 5-lane SIMD running a piece of code, and a much more efficient 1-lane SIMD, something the PS4 also possesses, running that exact same code, so rather than some revolutionary new thing that made the GPU more efficient, it was instead something rather common that will be shared on both systems. Maybe this 14 and 4 thing for the PS4 GPU is just one such similar case as this Xbox One example where the entire picture or context isn't clear, but I presented it exactly without any alterations and the initial interpretation isn't of the sort where the individual somehow wouldn't
The rumor started from vgleaks
1. PS4 14+4 split
2. then it went on to PS4 having 1 external GPU and an iGPU for 14+4 split
3. then to unified 18CU APU with hard split

Sony announced PS4 and said the APU has a unified array of 18CU that can be used however devs want.

Microsoft says their magical number for CUs is 12; by your own spin (which is wrong) Sony says it is 14, and both would be correct because it has to do with each system specifically, taking into account the bandwidth, ALU, ROPs etc. It is not a universal rule, hence why we have GPUs with 20+ CUs.

If Sony wanted developers to use 14 CUs for graphics and 4 for GPGPU, they wouldn't go out of their way to add more ACEs than needed and make customizations that allow all 18 CUs to be used for graphics while also allowing compute when certain tasks don't use 100% of the CUs.

anyway, i'm not very knowledgeable with these things.
 
Right. MS admitted to toying with the idea of enabling the CUs but instead chose the up-clock (maybe spin, but who knows for sure?). Isn't Ekim's info consistent with that?

Not just consistent, precisely dead on.

I try not to engage in these things but i'll bite.

The rumor started from vgleaks
1. PS4 14+4 split
2. then it went on to PS4 having 1 external GPU and an iGPU for 14+4 split
3. then to unified 18CU APU with hard split

Sony announced PS4 and said the APU has a unified array of 18CU that can be used however devs want.

Microsoft says their magical number for CUs is 12; by your own spin (which is wrong) Sony says it is 14, and both would be correct because it has to do with each system specifically, taking into account the bandwidth, ALU, ROPs etc. It is not a universal rule, hence why we have GPUs with 20+ CUs.

If Sony wanted developers to use 14 CUs for graphics and 4 for GPGPU, they wouldn't go out of their way to add more ACEs than needed and make customizations that allow all 18 CUs to be used for graphics while also allowing compute when certain tasks don't use 100% of the CUs.

anyway, i'm not very knowledgeable with these things.

You're preaching to the choir. :p Hehe, just playing :)
 

gofreak

GAF's Bob Woodward
Of course it isn't linear, but it sure as fuck isn't a "significant drop in increase of performance" or whatever the hell people are saying.

Going back to those DF ALU comparison benchmarks, you'll find drops 'in increase of performance' of up to two thirds vs linear. Smallest is 40% off linear.

AMD has long been skewing high on ALU and has attracted debate around the merits of that. Sony is echoing the raison d'être for that.
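To make those 'off linear' figures concrete, here's how you would compute the shortfall from a pair of benchmark runs. The FPS numbers below are made up purely for illustration, not Digital Foundry's data:

# Fraction of the expected (linear) ALU scaling that goes missing, hypothetical numbers
def off_linear(base_fps, scaled_fps, alu_ratio):
    # alu_ratio = scaled card's ALU count / base card's ALU count
    expected_fps = base_fps * alu_ratio       # perfectly linear scaling
    actual_gain = scaled_fps - base_fps
    expected_gain = expected_fps - base_fps
    return 1.0 - actual_gain / expected_gain  # share of the expected gain that was lost

# Hypothetical: 25% more ALU only buys 10% more FPS -> 60% off linear
print(off_linear(40.0, 44.0, 1.25))   # 0.6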
 
there is no 14+4 split. it's 18 CU's (1152 cores)... that's it. the split was in the rumors, there is none of that. it's up to the developer to determine how much horsepower they want to use for compute or rendering IF that's what they want to do. they give you 18 CU's, you decide how to use it. there's no forcing devs to use 14 + 4.

i'm not buying any diminishing returns theories either unless they're hitting a CPU wall where the CPU isn't fast enough to feed the GPU past a certain level of performance.
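For reference, the core counts and headline FLOPS figures being thrown around all fall out of the same GCN arithmetic (clock speeds assumed here: 800 MHz for PS4, 853 MHz for the up-clocked XB1, 925 MHz for a stock 7970):

# GCN arithmetic: 64 shader cores per CU, 2 FLOPs per core per clock (fused multiply-add)
def gcn(cus, clock_mhz):
    cores = cus * 64
    tflops = cores * 2 * clock_mhz * 1e6 / 1e12
    return cores, tflops

print(gcn(18, 800))   # (1152, ~1.84 TFLOPS) - PS4
print(gcn(12, 853))   # (768,  ~1.31 TFLOPS) - XB1 after the up-clock
print(gcn(32, 925))   # (2048, ~3.79 TFLOPS) - Radeon 7970, for scale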
 
I've gone through this thread, read every post, and understand nothing ... It just feels like pedantic argument to me.

That was my understanding as well

In essence it boils down to how effectively the 4 extra CU's can be used and as lherre states

About the CU's ...

All are the same, no differences between them.

About the "balance". As Cerny said in the interview you can use them as you wish, render or GPGPU. They simply said that those "extra" 400 Gflops are "ideal" for gpgpu tasks instead of rendering, but it is up to you how to use them. Not that you can't gain anything using the 18 CU's for rendering, obviously you will see a difference using them only for rendering versus 14 CU's but they want to encourage devs to use gpgpu compute in ps4.

So it's a misunderstanding that the GPU is unbalanced in any way.

Which still doesn't quantify how much less effective the 4 extra CU's may be

But I see no reason to believe senju's claim of significant performance loss over those 4 compared to the others without something to back it up

Incredibly pedantic from where I'm sitting
 

GameSeeker

Member
Folks need to stop believing this 14+4 nonsense.

These are the two posts to pay attention to:

About the CU's ...

All are the same, no differences between them.

About the "balance". As Cerny said in the interview you can use them as you wish, render or GPGPU. They simply said that those "extra" 400 Gflops are "ideal" for gpgpu tasks instead of rendering, but it is up to you how to use them. Not that you can't gain anything using the 18 CU's for rendering, obviously you will see a difference using them only for rendering versus 14 CU's but they want to encourage devs to use gpgpu compute in ps4.

So it's a misunderstanding that the GPU is unbalanced in any way.

This is pure insanity.

The PS4 is a good deal stronger than the XBO. Period.

These posts are by confirmed developers who have been part of NeoGAF for a long time and have a reputation for being trusted and accurate.

Don't confuse your brain listening to other nonsense.
 

artist

Banned
Going back to those DF ALU comparison benchmarks, you'll find drops 'in increase of performance' of up to two thirds vs linear. Smallest is 40% off linear.

AMD has long been skewing high on ALU and has attracted debate around the merits of that. Sony is echoing the raison d'être for that.
Which DF ALU comparison are you talking about? The Leadbetter one where he used a 7850 for the Xbone?
 

GribbleGrunger

Dreams in Digital
That was my understanding as well

In essence it boils down to how effectively the 4 extra CU's can be used and as lherre states



Which still doesn't quantify how much less effective the 4 extra CU's may be

But I see no reason to believe senju's claim of significant performance loss over those 4 compared to the others without something to back it up

Incredibly pedantic from where I'm sitting

And what's it all for anyway? To reduce the advantage down to 2 CUs by dismissing 4 CUs, in order to prop up the idea that the X1 isn't that much weaker than the PS4?
 
Going back to those DF ALU comparison benchmarks, you'll find drops 'in increase of performance' of up to two thirds vs linear.

AMD has long been skewing high on ALU and has attracted debate around the merits of that. Sony is echoing the raison d'être for that.
Are you talking about this?

http://www.eurogamer.net/articles/digitalfoundry-can-xbox-one-multi-platform-games-compete-with-ps4

Where they use a 7850 and 7870 XT for the X1 and PS4 respectively?

Where the TMUs are inflated, skewing the results even more? (Original 48 vs 72 & 64 vs 96 inflated numbers.) What about the ROPs being the same as what the PS4 has, even though the PS4 has double the ROPs of the X1?

Please, I hope you're not citing that.
 

nib95

Banned
Folks need to stop believing this 14+4 nonsense.

These are the two posts to pay attention to:

These posts are by confirmed developers who have been part of NeoGAF for a long time and have a reputation for being trusted and accurate.

Don't confuse your brain listening to other nonsense.

Pretty sure Lherre is working on the PS4 now. So if he says 18, it's 18.
 

Raydeen

Member
And all benchmarks, ROPS, ArseDRAM and bandwidth aside, if MS announced Shenmue 3 tomorrow as an Xbone exclusive it would all be redundant.

Get to it MS!!
 

lherre

Accurate
Pretty sure Lherre is working on the PS4 now. So if he says 18, it's 18.

I think no one here said that there are fewer CU's in PS4; in fact Cerny confirmed it. So I'm not sure what doubts people can have about it.

I'm only repeating what Cerny said. Not new info at all.
 
there is no 14+4 split. it's 18 CU's (1152 cores)... that's it. the split was in the rumors, there is none of that. it's up to the developer to determine how much horsepower they want to use for compute or rendering IF that's what they want to do. they give you 18 CU's, you decide how to use it. there's no forcing devs to use 14 + 4.

i'm not buying any diminishing returns theories either unless they're hitting a CPU wall where the CPU isn't fast enough to feed the GPU past a certain level of performance.

Just to clarify, nobody is suggesting that there's a 14 CU GPU, and then some 4CU GPU off by itself somewhere.

The PS4 has one GPU with 18 full Compute Units. The 14 and 4 thing is simply a recommendation by Sony to developers on what they see as an optimal balance between graphics and compute. It was never suggested, at least not by me, that devs are forced to use it this way. They're free to use all 18 on graphics, 16 on graphics with 2 on compute, 5 on graphics with 13 on compute. It's all up to devs. Sony just gave a recommendation based on what they observed taking place after 14 CUs for graphics specific tasks. The extra resources are there, there's a point of diminishing returns for graphics specific operations, so they suggest using all the extra ALU resources for compute, rather than using them for graphics and possibly not getting the full bang for your buck. That's how I see it, and probably the last I'll say on the subject.

And what's it all for anyway? To reduce the advantage down to 2 CUs by dismissing 4 CUs, in order to prop up the idea that the X1 isn't that much weaker than the PS4?

Seems to fly in the face of me always saying the PS4 is easily the stronger console, doesn't it? I've just had disagreements on how noticeable that difference will appear to be in real games, since both systems will crank out incredible looking games. There is no sinister motivation behind me saying it. It was in direct response to a post that seemed to be suggesting the claim of some kind of 14 CU balance of sorts on the PS4 was completely unfounded, and I pointed out that this wasn't exactly the case, and that there is something more to that remark than just Jedi PR mind tricks. I thought in light of what MS said recently about the CPU limiting games, and how their clock speed increase gave them more than the extra 2 CUs, that this information I've known for awhile, might now make more sense in retrospect, as a possible explanation as to how or why that performance curve I heard about exists. It wasn't an attempt to say the gap has closed, or anything silly like that. I've said on here time and time again, that the compute advantage of the PS4 would likely end up leading to things that simply couldn't be recreated on the Xbox One, and it may be in that area where the gap in power is showcased most between the two systems, rather than in raw graphics quality. Sounds reasonable, I thought, but people seem to just treat most things I say as an attempt to snipe at the PS4. Always gloss over me complimenting and praising the system for its power. :p
 

nib95

Banned
I think no one here said that there are fewer CU's in PS4; in fact Cerny confirmed it. So I'm not sure what doubts people can have about it.

I'm only repeating what Cerny said. Not new info at all.

No I appreciate that, but sometime earlier this 14+4 rumour spread which Cerny himself debunked by stating that all 18 could be used for rendering as you yourself mentioned. I don't really know why the 14+4 thing has sprung up again in this thread besides maybe people grasping at straws or being misinformed.
 

gofreak

GAF's Bob Woodward
Are you talking about this?

http://www.eurogamer.net/articles/digitalfoundry-can-xbox-one-multi-platform-games-compete-with-ps4

Where they use a 7850 and 7870 XT for the X1 and PS4 respectively?

Where the TMUs are inflated, skewing the results even more? (Original 48 vs 72 & 64 vs 96 inflated numbers.) What about the ROPs being the same as what the PS4 has, even though the PS4 has double the ROPs of the X1?

Please, I hope you're not citing that.

While it was not indicative of general performance differences between the two console GPUs - and was bad to use it to try and indicate overall performance differences - the whole point of the benchmarks was to compare what difference greater ALU could bring, which is perfect for our purposes here in asking whether 'a typical game' scales indefinitely with increased ALU. If the ROPs and other elements were as unequal in this bench as they are in the consoles it would make it impossible to determine the impact of ALU alone.
 

demolitio

Member
Folks need to stop believing this 14+4 nonsense.

These are the two posts to pay attention to:





These posts are by confirmed developers who have been part of NeoGAF for a long time and have a reputation for being trusted and accurate.

Don't confuse your brain listening to other nonsense.

No kidding. They are confirmed but also don't have skin in the game as far as defending one console goes like some of the other "sources" that people spout in here.

I just think it's all ridiculous at this point because the argument just keeps changing to adapt to whatever has been confirmed and whatever they still have wiggle room on, so they move on to the next target to "balance" the systems out.

It's just insane how far people are willing to go for something that shouldn't really be so personal to them.
 
Has Microsoft been recommending 8+4 for graphics and compute like Sony has been recommending (14+4)?

And for those mentioning diminishing returns, do the diminishing returns change depending on the number of CUs being used for graphics vs. compute?
I'm just not understanding the concept here. My tech knowledge is pretty poor.
 
Which is pretty much correct. I just provided the context for why Sony recommends that devs utilize a 14 and 4 split. As I've said earlier in this same thread. All 18 CUs are identical. Devs can use them however they like. Some aren't somehow more powerful than others. It's just that once you hit the point where you begin using 14 for graphics, any additional CUs that are apportioned for graphics specific tasks will do so, according to Sony, at a significant drop off in value.

This doesn't mean they do nothing, but the impact for graphics apparently starts to lessen after 14 CUs, hence the recommendation to use the remaining 4 CUs for compute specific tasks in order to get the most out of the remaining ALUs.

My problem isn't with you "claiming" 14+4 (which you didn't). My problem is with this.

Which is pretty much correct. I just provided the context for why Sony recommends that devs utilize a 14 and 4 split. As I've said earlier in this same thread. All 18 CUs are identical. Devs can use them however they like. Some aren't somehow more powerful than others. It's just that once you hit the point where you begin using 14 for graphics, any additional CUs that are apportioned for graphics specific tasks will do so, according to Sony, at a significant drop off in value.

This doesn't mean they do nothing, but the impact for graphics apparently starts to lessen after 14 CUs, hence the recommendation to use the remaining 4 CUs for compute specific tasks in order to get the most out of the remaining ALUs.

There is physically no reason to have a "significant" drop. A drop? Sure, yes, I can see it, but where is this "significant" coming from?
 

GribbleGrunger

Dreams in Digital
LOL. At least it's fun to watch :D

It now seems to be coalescing around the potential of diminishing returns on anything over 14 CUs. So for me, that's where the debate should be focused ENTIRELY. Just what percentage are we talking about because I've seen 25% being used and that still sounds significant as far as I'm concerned. Is that 25% per CU or 25% for the remaining 4 CUs? It just appears that Sony have advised a 14/4 split but devs can do whatever the hell they want ... so what are we even debating? LOL

This is just pedantry
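For what it's worth, even taking that 25% figure at face value and applying it to graphics work on the last 4 CUs, the hit to the whole GPU is small. A quick illustration, assuming a flat 25% penalty on those 4 CUs (my own assumption, nothing official):

# What a flat 25% effectiveness drop on the last 4 of 18 CUs would mean overall (illustration)
per_cu = 1.84 / 18                            # ~0.102 TFLOPS per CU at 800 MHz
nominal = 18 * per_cu                         # 1.84 TFLOPS if all CUs count fully
penalised = 14 * per_cu + 4 * per_cu * 0.75   # last 4 CUs counted at 75% effectiveness

print(penalised, 1 - penalised / nominal)     # ~1.74 TFLOPS, i.e. ~5.6% off the total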
 

viveks86

Member
I think no one here said that there are fewer CU's in PS4; in fact Cerny confirmed it. So I'm not sure what doubts people can have about it.

I'm only repeating what Cerny said. Not new info at all.

Would you be able to comment on Senjutsu's claims that there would be a significant decline in performance scaling once we cross 14 CUs for graphics?

It now seems to be coalescing around the potential of diminishing returns on anything over 14 CUs. So for me, that's where the debate should be focused ENTIRELY. Just what percentage are we talking about because I've seen 25% being used and that still sounds significant as far as I'm concerned.

Exactly what I am trying to find out as well
 
Just to clarify, nobody is suggesting that there's a 14 CU GPU, and then some 4CU GPU off by itself somewhere.

The PS4 has one GPU with 18 full Compute Units. The 14 and 4 thing is simply a recommendation by Sony to developers on what they see as an optimal balance between graphics and compute. It was never suggested, at least not by me, that devs are forced to use it this way. They're free to use all 18 on graphics, 16 on graphics with 2 on compute, 5 on graphics with 13 on compute. It's all up to devs. Sony just gave a recommendation based on what they observed taking place after 14 CUs for graphics specific tasks. The extra resources are there, there's a point of diminishing returns for graphics specific operations, so they suggest using all the extra ALU resources for compute, rather than using them for graphics and possibly not getting the full bang for your buck. That's how I see it, and probably the last I'll say on the subject.

Senjutsu! Watch out! It's a trap!

If third party games use 0.4 tflops for GPGPU and 1.4 tflops for rendering, there is no way to port it to X1 without severely gimping it.

Let's say the GPGPU operations are for graphical flair (hair physics, particle physics, etc.).
Then the devs will need to either:

1) Use 0.4 tflops on GPGPU ops as well on x1 and use the remaining 0.9 tflops for rendering. Which means:

1.4/0.9 ~= 56% more flops for shaders on ps4.

In this case, ps4 will be "balanced!" at around 1.4 tflops while x1 will be "unbalanced!" at 0.9 tflops!!

or

2) Not use GPGPU on X1 hence missing graphical effects. This is bad since while many people won't mind lower resolution, they will mind missing effects.

Sony wants this to happen! They are the Devious!!
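The arithmetic in that scenario roughly checks out against the nominal numbers. Quick check below; the 1.31 TFLOPS XB1 figure is my assumption of 12 CUs at 853 MHz, and the 0.4 TFLOPS compute budget is just the hypothetical from the post above:

# Rough check of the "56% more shader flops" hypothetical
ps4_tflops = 1.84
x1_tflops = 1.31      # assumed: 12 CUs x 64 cores x 2 FLOPs x 853 MHz
gpgpu_budget = 0.4    # TFLOPS reserved for compute-driven effects in the hypothetical port

ps4_render = ps4_tflops - gpgpu_budget    # ~1.44 TFLOPS left for rendering
x1_render = x1_tflops - gpgpu_budget      # ~0.91 TFLOPS left for rendering

print(ps4_render / x1_render - 1)         # ~0.58, i.e. roughly the 56%+ quoted above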
 

onQ123

Member
So is it safe to assume that the XBOX ONE games that were shown before were using 14 CU's clocked at 800MHz, but final retail games will be 12 CU's clocked at 853MHz?
 