
EuroGamer: More details on the BALANCE of XB1

Xanonano

Member
Hope lherre doesn't mind me reposting this from the other thread:
About the CUs ...

All are the same, no differences between them.

About the "balance". As Cerny said in the interview you can use them as you wish, render or GPGPU. They simply said that those "extra" 400 Gflops are "ideal" for gpgpu tasks instead of rendering, but is up to you how to use them. Not that you can't gain anything using the 18 CU's for rendering, obviosly you will see a difference using them only for rendering versus 14 CU's but they want to encourage devs to use gpgpu compute in ps4.

So it's a misunderstanding that the GPU is unbalanced in any way.
 
Correct.

The number 14 depends on the software. It might be 'typical'. The reason a 'typical' shader workload may scale less than linearly beyond that inflection point is that the ratio of its demands on ALU versus its demands on other resources (like pixel write or bandwidth) is less than the ratio available in the hardware. So other resources start forming a bound that prevents the ALU side of the system from running on all cylinders. ALUs will start to idle more as the ratio of some other resource in the hardware isn't matching what the software calls for. Again, the ratio software calls for will vary. It's why in DF's benchmarks some games gain more, some less, from additional ALU past the base card they used in that test. If you had a software mix that was very ALU-intensive you could see linear scaling with all the CUs you can throw at it.

What Sony was saying here was 'just' what many would have already concluded - that past a certain point, for a certain resolution, an x% larger amount of ALU ("flops") won't necessarily give you an x% higher framerate. In a typical case today, you might be leaving up to 4 CUs' worth of ALU time on the table. So consider using (more ALU-intensive) GPGPU to soak up the excess.
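To make the "other resources form a bound" point concrete, here is a minimal toy model in Python (my own illustration, not anything from a Sony or Microsoft document): frame time is taken as the maximum of the ALU time and a fixed non-ALU bound, so extra CUs only help while the ALU side is the limiter. The workload numbers are invented purely to show the shape of the curve.

```python
# Toy model: frame time is limited by whichever resource takes longest.
# All workload numbers below are invented purely for illustration.

ALU_PER_CU_GFLOPS = 102.4   # 64 lanes * 2 FLOP (FMA) * 0.8 GHz per CU
OTHER_BOUND_MS = 12.0       # made-up time the frame needs from ROPs/bandwidth/etc.
ALU_WORK_GFLOP = 18.0       # made-up amount of ALU work in one frame

def frame_time_ms(active_cus: int) -> float:
    """Frame time if the ALU work ran on `active_cus` CUs, floored by the non-ALU bound."""
    alu_ms = ALU_WORK_GFLOP / (ALU_PER_CU_GFLOPS * active_cus) * 1000.0
    return max(alu_ms, OTHER_BOUND_MS)

for cus in range(10, 19):
    print(f"{cus:2d} CUs -> {frame_time_ms(cus):5.2f} ms/frame")
# Frame time improves as CUs are added, then flattens once the fixed
# ROP/bandwidth bound dominates - the diminishing returns being discussed.
```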
Thanks. It essentially sounds like it will all be very software-dependent. And the idea that what was presented was just a typical use case, rather than some set-in-stone inflection point, seems sensible. Devs may see non-linear performance gains from using all 18 CUs for graphics, or they could potentially see linear gains, depending on what they want to do.

I really don't think this info would be any source of contention if it had been presented as you have.

The corollary of course, which brings us back on topic more, is that the exact same thing would presumably apply to the XB1 regardless of the balance mantra, i.e. for certain software one may already be into the region of non-linear performance gains utilising 12 CUs, due to things like fill rate and bandwidth?
 

Skeff

Member
Well, the man still stands after PM-ing Bish. I'd reserve judgments until there are more developments. :)

Well, he might be on his knees begging; all we know is that the banhammer hasn't swung yet. Then again, Bish hasn't posted regarding the reliability of his information yet. I'd say it's still up in the air; most likely he read something the way he wanted it to be read and took the wrong meaning from it.
 
Damn. Time to go away. This thread is full of garbage, with little meaningful info. I think, and always have, that the PS4 is more BW-limited than GPU-limited when it comes to graphics rendering. NO NEW INFO. Maybe the limit is somewhere else at 1080p. Who knows.
 

ekim

Member
Well, he might be on his knees begging; all we know is that the banhammer hasn't swung yet. Then again, Bish hasn't posted regarding the reliability of his information yet. I'd say it's still up in the air; most likely he read something the way he wanted it to be read and took the wrong meaning from it.

If this means anything: I can confirm that senjutsusage is legit.
 

Ebomb

Banned
So in summary, multiple sources state that Sony is encouraging a 14+4 split, but nothing in the hardware mandates any split and all CU assets on the chip are identical.

How you want to frame that is up to you, but unless Sony is some sort of GPGPU standard-bearer with some ulterior motive for championing its use, I think we can acknowledge that using 4 CUs for GPGPU offers more return than using them for shader/graphics tasks in many, but not all, use cases.
 

viveks86

Member
Well, he might be on his knees begging; all we know is that the banhammer hasn't swung yet. Then again, Bish hasn't posted regarding the reliability of his information yet. I'd say it's still up in the air; most likely he read something the way he wanted it to be read and took the wrong meaning from it.

Agreed. We should know soon enough, I hope.
 
Hope lherre doesn't mind me reposting this from the other thread:

Which is pretty much correct. I just provided the context for why Sony recommends that devs utilize a 14 and 4 split. As I've said earlier in this same thread, all 18 CUs are identical. Devs can use them however they like; some aren't somehow more powerful than others. It's just that once you hit the point where you're using 14 for graphics, any additional CUs apportioned to graphics-specific tasks will, according to Sony, come with a significant drop-off in value.

This doesn't mean they do nothing, but the impact on graphics apparently starts to lessen after 14 CUs, hence the recommendation to use the remaining 4 CUs for compute-specific tasks in order to get the most out of the remaining ALUs.

It doesn't

That was quick. :p
 

viveks86

Member
If this means anything: I can confirm that senjutsusage is legit.

Yeah I'm pretty sure he is legit in some way. He wouldn't have lasted here this long otherwise. What I really care about is this particular interpretation he has shared. He could genuinely be relaying whatever he heard, but I'm curious to find out if it has been interpreted correctly.
 

tfur

Member
If this means anything: I can confirm that senjutsusage is legit.

2 Legit

[Image: "Too Legit to Quit" MC Hammer single cover]
 

gofreak

GAF's Bob Woodward
Thanks. It essentially sounds like it will all be very software-dependent. And the idea that what was presented was just a typical use case, rather than some set-in-stone inflection point, seems sensible. Devs may see non-linear performance gains from using all 18 CUs for graphics, or they could potentially see linear gains, depending on what they want to do.

I really don't think this info would be any source of contention if it had been presented as you have.

The corollary of course, which brings us back on topic more, is that the exact same thing would presumably apply to the XB1 regardless of the balance mantra, i.e. for certain software one may already be into the region of non-linear performance gains utilising 12 CUs, due to things like fill rate and bandwidth?

Correct.

However, the relative point at which that inflection would exist for a given piece of software may be different between the two; they have significantly different ratios of ALU:ROPs:Bandwidth available. (Which makes it slightly funny for MS to cite Sony for backup here but hey...)
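As a rough illustration of those differing ratios, here is a back-of-the-envelope calculation using the commonly cited public specs (PS4: 18 CUs at 800 MHz, 32 ROPs, 176 GB/s GDDR5; Xbox One: 12 CUs at 853 MHz, 16 ROPs, 68 GB/s DDR3). ESRAM bandwidth is deliberately left out, which is exactly why this is only a sketch, not a verdict:

```python
# Rough ALU : ROP : bandwidth comparison from commonly cited public specs.
# Xbox One ESRAM bandwidth is ignored, so treat this as illustrative only.

def peak_gflops(cus: int, clock_ghz: float) -> float:
    return cus * 64 * 2 * clock_ghz   # 64 lanes per CU, 2 FLOP (FMA) per cycle

specs = {
    "PS4": {"gflops": peak_gflops(18, 0.800), "rops": 32, "bw_gbs": 176.0},
    "XB1": {"gflops": peak_gflops(12, 0.853), "rops": 16, "bw_gbs": 68.0},
}

for name, s in specs.items():
    print(f"{name}: {s['gflops']:.0f} GFLOPS, "
          f"{s['gflops'] / s['rops']:.0f} GFLOPS per ROP, "
          f"{s['gflops'] / s['bw_gbs']:.1f} FLOP per byte of main-memory bandwidth")
```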
 
So what happens to the balance of multiplatform development when a dev decides to use 14 CUs on PS4 for visuals and the remaining 4 for physics? How would they go about doing it on Xbox One considering it only has 12 CUs?

Because that's not just affecting visuals, it's affecting gameplay.
 

Spongebob

Banned
So what happens to the balance of multiplatform development when a dev decides to use 14 CUs on PS4 for visuals and the remaining 4 for physics? How would they go about doing it on Xbox One considering it only has 12 CUs?

Because that's not just affecting visuals, it's affecting gameplay.
Most devs will probably just use all 18 CUs for graphics.
 
I just provided the context for why Sony recommends that devs utilize a 14 and 4 split.
You really haven't. You sowed confusion.

Gofreak seems to have done so from trying to interpret what you've written. Frankly, if you really want to provide context I would suggest you supply the information to him.
It's just that once you hit the point where you're using 14 for graphics, any additional CUs apportioned to graphics-specific tasks will, according to Sony, come with a significant drop-off in value.

This doesn't mean they do nothing, but the impact on graphics apparently starts to lessen after 14 CUs, hence the recommendation to use the remaining 4 CUs for compute-specific tasks in order to get the most out of the remaining ALUs.
And you're again repeating this as if it's some hard and fast ubiquitous rule as opposed to being software-dependent and something that applies to any GPU, including the XB1's.

:/
 

Tsundere

Banned
So what happens to the balance of multiplatform development when a dev decides to use 14 CUs on PS4 for visuals and the remaining 4 for physics? How would they go about doing it on Xbox One considering it only has 12 CUs?

Because that's not just affecting visuals, it's affecting gameplay.

Can someone clarify for me?

Xbox One has 14 CUs total, 2 of which are deactivated in retail units. So 12 total usable for both rendering and compute?
 

kartu

Banned
OK let's see:

PS4 has unified memory with 18 CUs and a tight APU configuration

vs

X1 with fewer CUs, a tiny amount of ESRAM and slow DDR3 RAM

and we have one comment believing MS's "balance" spin.

Yep, MS has succeeded.


Nailed it. I can't believe someone is seriously discussing this "balanced" spin.
 
This was actual information presented by Sony at a devcon, so don't shoot the messenger.

...


So, there you have it. That's actual real information. So you can't exactly say that Microsoft in that case is spreading misinformation, because Sony themselves actually said that to developers.


"Actual real information." Wow. He must have the documents or slides, right?


then...


I don't know if it works this way on desktop GPUs (I doubt it), or if it's due to the CPUs in these systems, but this was information that was communicated to developers, and I found it really interesting that Microsoft themselves are now mentioning that they had tested 14 CUs -- something I heard and mentioned to another poster earlier this month -- and it was somehow determined that their GPU clock speed increase was more useful.

Sony said it.


Sounds solid. He must have those documents.


Only saying it now so I don't have to repeat it. I'm not posting a link, because it isn't a link.

Not a link...it must be documents, or photos, or something...right?

it's just information that I think proves what I'm saying is true beyond doubt, but it also carries the risk of getting someone in trouble or violating their trust, which is why I can't just say everything, and why immediately after I said it, I said I would have no issue sharing the information with a mod if it made people more comfortable that I'm being transparent and honest.


Oh, FFfffffff----

"It's just information I think proves what I'm saying?" And it sounds like somebody just told him this? And the source is mysterious and anonymous? Oh....

This isn't my interpretation. It's the interpretation of an experienced games programmer who learned this at an official Sony devcon. I'm just repeating their understanding, so there's very little getting lost in translation as a result. The only way the info could be misinterpreted is if they misinterpreted it, which I doubt.

Wait...what?

He's going to have to stop making definitive statements like the ones he was making before if that's all he's got...right?

I did :p

That exactly describes diminishing returns. There is a drop-off in the value of the extra ALU resources for graphics after a certain point, that point being 14 CUs.

I guess not.

Dude, take a chill pill. This is actual information presented by Sony to developers. After you understand that, you're free to take it however you want. This can't be applied to desktop GPU scenarios. It's purely down to the design and balance of the PS4 hardware.

Oh shit, I said balance.

But, I kinda expected this kind of silly reaction, so I'll excuse myself.

Wow. He's doubling down.

...

This sounds like Reiko all over again. He had a supposed "rock solid source" feeding him info too, IIRC...turns out someone was just trolling him.
 

tfur

Member
Nailed it. I can't believe someone is seriously discussing this "balanced" spin.

It's the observation of the precarious balance some have between reality and delusion.

Xbone, in comparison to the PS4, is anything but balanced. Fortunately for PR people, the term "balanced" is general enough to mean almost anything!


.
 

onQ123

Member
Which is pretty much correct. I just provided the context for why Sony recommends that devs utilize a 14 and 4 split. As I've said earlier in this same thread, all 18 CUs are identical. Devs can use them however they like; some aren't somehow more powerful than others. It's just that once you hit the point where you're using 14 for graphics, any additional CUs apportioned to graphics-specific tasks will, according to Sony, come with a significant drop-off in value.

This doesn't mean they do nothing, but the impact on graphics apparently starts to lessen after 14 CUs, hence the recommendation to use the remaining 4 CUs for compute-specific tasks in order to get the most out of the remaining ALUs.



That was quick. :p


But that's just going by 1080p standards. If games were being made with other resolutions in mind, say for people who have 2560x1440 monitors, 14 CUs would fall well below that drop-off point.

Also, the PS4 is said to be able to go beyond DirectX 11.2 shaders, so you could see 14 CUs end up well below the number of ALUs needed.
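The pixel arithmetic behind that resolution argument is straightforward: per-pixel shading work scales with the number of pixels drawn, so a higher target resolution demands proportionally more ALU per frame. A quick calculation (illustration only; bandwidth and ROP demands grow with resolution as well, not just ALU work):

```python
# Pixel-count arithmetic behind the resolution argument (illustration only;
# bandwidth and ROP demands also grow with resolution, not just ALU work).

resolutions = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}
base_pixels = 1920 * 1080

for name, (w, h) in resolutions.items():
    pixels = w * h
    print(f"{name}: {pixels / 1e6:.2f} Mpixels "
          f"({pixels / base_pixels:.2f}x the per-pixel work of 1080p)")
```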
 
You really haven't. You sowed confusion.

Gofreak seems to have done so from trying to interpret what you've written.
And you're again repeating this as if it's some hard and fast ubiquitous rule as opposed to being software-dependent and something that applies to any GPU.

:/

It would be much worse if I were putting my own spin on the info. I told it exactly as I know it, with no changes. That's how it was presented, but maybe it's similar, in a sense, to the early Xbox One dev documentation, where they pointed out that the Xbox 360 GPU was 53% efficient whereas the Xbox One GPU is 100% efficient.

However, the context for that was the difference between a naturally less efficient 5-lane SIMD running a piece of code and a much more efficient 1-lane SIMD (something the PS4 also possesses) running that exact same code. So rather than some revolutionary new thing that made the GPU more efficient, it was instead something rather common that both systems share. Maybe this 14 and 4 thing for the PS4 GPU is a similar case to that Xbox One example, where the entire picture or context isn't clear, but I presented it exactly as given, without any alterations, and the initial interpretation isn't the sort where the individual somehow wouldn't understand exactly what was being said.
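The "53% vs 100%" comparison described above is essentially a lane-utilization argument: a 5-wide VLIW/SIMD bundle only does useful work on the lanes the compiler manages to fill, while a scalar (1-lane) issue model fills every slot it issues. A tiny sketch with a made-up instruction mix shows how a figure like ~53% can fall out:

```python
# Lane-utilization sketch behind the "53% vs 100%" efficiency comparison.
# The instruction mix below is invented purely to illustrate the arithmetic.

VLIW_WIDTH = 5  # 5-lane bundles (vec4 + scalar style)

# Hypothetical shader: how many of the 5 lanes each issued bundle actually fills.
lanes_filled = [3, 2, 4, 1, 3, 2, 5, 1, 3, 2, 4, 2]

useful_ops = sum(lanes_filled)
issued_slots = len(lanes_filled) * VLIW_WIDTH
print(f"VLIW-5 utilization: {useful_ops / issued_slots:.0%}")  # ~53% with this mix

# A scalar-issue machine runs the same useful ops one per slot, so every
# issued slot does work: utilization is 100% of issued slots by construction.
print("Scalar-issue utilization: 100%")
```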

But that's just going by 1080p standards. If games were being made with other resolutions in mind, say for people who have 2560x1440 monitors, 14 CUs would fall well below that drop-off point.

Also, the PS4 is said to be able to go beyond DirectX 11.2 shaders, so you could see 14 CUs end up well below the number of ALUs needed.

I'm not sure; I'm jumping into speculation now. I'm not aware of it being presented as tied to any particular resolution, but not every game is the same, so I suppose anything is possible.

Wow. He's doubling down.

...

This sounds like Reiko all over again. He had a supposed "rock solid source" feeding him info too, IIRC...turns out someone was just trolling him.

What... dude, if I say something and I say it's the truth, which I have, I never back down from it. I don't believe I'm spreading false information, so I will stand by what I said, exactly as I've done here before.

#1 I'm not this Reiko individual.
#2 My source is rock solid.
#3 I'm not being trolled.
#4 And I'm not trolling anyone else.

Take it or leave it, you don't have to believe me.
 
It would be much worse if I were putting my own spin on the info. I told it exactly as I know it, with no changes. That's how it was presented, but maybe it's similar, in a sense, to the early Xbox One dev documentation, where they pointed out that the Xbox 360 GPU was 53% efficient whereas the Xbox One GPU is 100% efficient.

However, the context for that was the difference between a naturally less efficient 5-lane SIMD running a piece of code and a much more efficient 1-lane SIMD (something the PS4 also possesses) running that exact same code. So rather than some revolutionary new thing that made the GPU more efficient, it was instead something rather common that both systems share. Maybe this 14 and 4 thing for the PS4 GPU is a similar case to that Xbox One example, where the entire picture or context isn't clear, but I presented it exactly as given, without any alterations, and the initial interpretation isn't the sort where the individual somehow wouldn't understand exactly what was being said.



I'm not sure; I'm jumping into speculation now. I'm not aware of it being presented as tied to any particular resolution, but not every game is the same, so I suppose anything is possible.



What... dude, if I say something and I say it's the truth, which I have, I never back down from it. I don't believe I'm spreading false information, so I will stand by what I said, exactly as I've done here before.

#1 I'm not this Reiko individual.
#2 My source is rock solid.
#3 I'm not being trolled.
#4 And I'm not trolling anyone else.

Take it or leave it, you don't have to believe me.

It's a shame you can't elaborate more, because there's a reason that curve has to exist. The PS4 GPU has nearly double the guaranteed bandwidth and double the ROPs, yet apparently there's still some bottleneck that produces diminishing returns on the extra CUs. That's why it's hard to believe and seems suspicious.
 
Where are people getting this "beyond 14 CUs there's a drop-off" nonsense?

Is it because the CPU cannot feed the GPU beyond this point?

I know it's NOT a scaling issue with the architecture, because the architecture is PROVEN to scale well beyond 14 CUs (64 shader cores per CU, btw). The Radeon 7970, for instance, uses 32 CUs (2048 cores), and the 7870 Pitcairn chip uses 20 CUs for 1280 cores. Both the 7970 and 7870 have 32 ROPs, same as the PS4 GPU... so where is this efficiency drop-off coming from?

Where did the rumor start? It doesn't make sense, because the GPU architecture was designed with efficiency in mind and also to scale by increasing or decreasing CU counts to provide different chips at different levels of performance (7700/7800/7900 series).

It's not ROP-starved, it's not memory-bus-starved... could it be limited by the CPU? That's the only logical reason I can think of for a GPU drop-off past a certain point, and the Xbone and PS4 both have very similar CPUs...

Can anyone explain?
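For the card comparison in the question above, the arithmetic is simply CUs × 64 lanes, with peak FLOPS following from lanes × 2 (FMA) × clock. The clocks below are the commonly cited reference values; the point is only that raw ALU scales linearly with CU count, which says nothing about where a given game's bottleneck (ROPs, bandwidth, CPU) actually sits:

```python
# Shader-core and peak-FLOPS arithmetic for the GPUs mentioned above.
# Clocks are commonly cited reference values; peak numbers say nothing about
# where a real workload's actual bottleneck (ROPs, bandwidth, CPU) sits.

def peak_tflops(cus: int, clock_ghz: float) -> float:
    lanes = cus * 64                      # 64 shader lanes per GCN CU
    return lanes * 2 * clock_ghz / 1000   # 2 FLOP (FMA) per lane per cycle

gpus = [
    ("Radeon HD 7970 (Tahiti)",   32, 0.925),
    ("Radeon HD 7870 (Pitcairn)", 20, 1.000),
    ("PS4 GPU",                   18, 0.800),
    ("Xbox One GPU",              12, 0.853),
]

for name, cus, clock in gpus:
    print(f"{name:27s} {cus:2d} CUs = {cus * 64:4d} cores, "
          f"~{peak_tflops(cus, clock):.2f} TFLOPS peak")
```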
 
Where are people getting this "beyond 14 CUs there's a drop-off" nonsense?

Is it because the CPU cannot feed the GPU beyond this point?

I know it's NOT a scaling issue with the architecture, because the architecture is PROVEN to scale well beyond 14 CUs (64 shader cores per CU, btw). The Radeon 7970, for instance, uses 32 CUs (2048 cores), and the 7870 Pitcairn chip uses 20 CUs for 1280 cores. Both the 7970 and 7870 have 32 ROPs, same as the PS4 GPU... so where is this efficiency drop-off coming from?

Where did the rumor start? It doesn't make sense, because the GPU architecture was designed with efficiency in mind and also to scale by increasing or decreasing CU counts to provide different chips at different levels of performance (7700/7800/7900 series).

It's not ROP-starved, it's not memory-bus-starved... could it be limited by the CPU? That's the only logical reason I can think of for a GPU drop-off past a certain point, and the Xbone and PS4 both have very similar CPUs...

Can anyone explain?

Maybe senju will PM you with his super secret source info?

I'm sure he has something, just not sure if the interpretation is correct

And have yet to hear anyone else's interpretation of the source information except from trying to extrapolate based on what senju has said

It's rather maddening to be honest
 
Maybe senju will PM you with his super secret source info?

I'm sure he has something, just not sure if the interpretation is correct

And have yet to hear anyone else's interpretation of the source information except from trying to extrapolate based on what senju has said

It's rather maddening to be honest

You secretly love it. Waiting for that next big thread to explode. It's what the GAF lives for!
 

Kuro

Member
Where are people getting this "beyond 14 CUs there's a drop-off" nonsense?

Is it because the CPU cannot feed the GPU beyond this point?

I know it's NOT a scaling issue with the architecture, because the architecture is PROVEN to scale well beyond 14 CUs (64 shader cores per CU, btw). The Radeon 7970, for instance, uses 32 CUs (2048 cores), and the 7870 Pitcairn chip uses 20 CUs for 1280 cores. Both the 7970 and 7870 have 32 ROPs, same as the PS4 GPU... so where is this efficiency drop-off coming from?

Where did the rumor start? It doesn't make sense, because the GPU architecture was designed with efficiency in mind and also to scale by increasing or decreasing CU counts to provide different chips at different levels of performance (7700/7800/7900 series).

It's not ROP-starved, it's not memory-bus-starved... could it be limited by the CPU? That's the only logical reason I can think of for a GPU drop-off past a certain point, and the Xbone and PS4 both have very similar CPUs...

Can anyone explain?

I think the diminishing returns might just mean that with the extra CUs they can get 75 fps rather than 60, but it's better to use them for compute like physics and filters and keep the game at a steady 60 fps. Sony is probably encouraging developers to use GPGPU compute because it adds more to visual fidelity and physics than just increasing the framerate.
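That 75-vs-60 framing is easy to sanity-check: if a frame were entirely ALU-bound (it rarely is), frame rate would scale with CU count, so 18 vs 14 CUs gives roughly the jump described. Pure arithmetic, not a measurement:

```python
# If (and only if) a frame were entirely ALU-bound, frame rate would scale
# directly with CU count. Pure arithmetic, not measured data.

base_fps = 60.0
print(f"{base_fps * 18 / 14:.1f} fps")  # ~77 fps, roughly the "75 instead of 60" ballpark
```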
 