SEGAvangelist
Member
what happened to this thread?
It was derailed and turned into a PS4 specs thread, I think.
But he has so many threads to attend. I don't think he has enough time to scour old B3D posts.
what happened to this thread?
About the CU's ...
All are the same, no differences between them.
About the "balance": as Cerny said in the interview, you can use them as you wish, render or GPGPU. They simply said that those "extra" 400 GFLOPS are "ideal" for GPGPU tasks instead of rendering, but it's up to you how to use them. It's not that you can't gain anything using all 18 CUs for rendering; obviously you will see a difference using them only for rendering versus 14 CUs, but they want to encourage devs to use GPGPU compute on PS4.
So it's a misunderstanding that the GPU is unbalanced in any way.
Hope lherre doesn't mind me reposting this from the other thread:
Senjutsu happened. He has made some claims that have gotten many of us confused. Now we are just waiting for Bish or someone else with insider knowledge to weigh in and get us back on track.
Hope lherre doesn't mind me reposting this from the other thread:
Not confused, just want this ended. He's just being a spin doctor.
Thanks. It essentially sounds like it will all be very software dependent. And the idea that what was presented was just a typical use case, rather than being some set in stone inflection point, seems sensible. One may see non-linear performance gains from using all 18 CUs for graphics or they could potentially see linear gains depending upon what they want to do.
Correct.
The number 14 depends on the software. It might be 'typical'. The reason a 'typical' shader workload may scale less than linearly beyond that inflection point is that the ratio of its demands on ALU vs its demands on other resources (like pixel write or bandwidth) is less than the ratio available in the hardware. So other resources start forming a bound that prevents the ALU side of the system from running on all cylinders. ALUs will start to idle more as the ratio of some other resource in the hardware isn't matching what the software calls for. Again, what ratio the software calls for will vary. It's why in DF's benchmarks some games gain more, some less, from additional ALU past the base card they used in that test. If you had a software mix that was very ALU intensive you could see a linear scale with all the CUs you can throw at it.
What Sony was saying here was 'just' what many would have already concluded - that past a certain point for a certain resolution, a x% larger amount of ALU ("flops") won't necessarily give you a x% higher framerate. In a typical case today, you might be leaving up to 4 CUs worth of ALU time on the table. So consider using (more ALU intensive) GPGPU to soak up the excess.
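A minimal sketch of the argument above, with invented numbers: ALU capacity scales linearly with CU count, but some other resource (fill rate, bandwidth) caps effective throughput, so gains flatten past an inflection point.

```python
# Toy model of the bottleneck argument: ALU capacity grows with the
# number of CUs, but another resource caps effective throughput.
# All numbers are invented for illustration only.

def effective_throughput(active_cus, other_resource_bound=14.0):
    """Relative throughput of a shader workload: linear in CU count
    until some other hardware resource becomes the bound."""
    alu_capacity = float(active_cus)  # 1 unit of work per CU
    return min(alu_capacity, other_resource_bound)

for cus in (12, 14, 16, 18):
    # past 14 CUs, extra ALU adds nothing in this toy case
    print(cus, effective_throughput(cus))
```

In a real workload the bound is softer (gains taper rather than stop dead), and where the knee sits depends entirely on the software's ALU-to-everything-else ratio, as described above.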
Hope lherre doesn't mind me reposting this from the other thread:
Sorry guys, I'm new here. Who is lherre? First time I'm seeing a quote from this person. Insider? Dev?
Well, the man still stands after PM-ing Bish. I'd reserve judgments until there are more developments.
Well, he might be on his knees begging; all we know is that the banhammer hasn't swung yet. Then again, Bish hasn't posted regarding the reliability of his information yet. I'd say it's still up in the air. Most likely he read something the way he wanted it to read and took the wrong meaning from it.
If this means anything: I can confirm that senjutsusage is legit.
Hope lherre doesn't mind me reposting this from the other thread:
It doesn't
If this means anything: I can confirm that senjutsusage is legit.
People think he is a dev. Afaik he is not.
Lol
If this means anything: I can confirm that senjutsusage is legit.
Thanks. It essentially sounds like it will all be very software dependent. And the idea that what was presented was just a typical use case, rather than being some set in stone inflection point, seems sensible. One may see non-linear performance gains from using all 18 CUs for graphics or they could potentially see linear gains depending upon what they want to do.
I really don't think this info would be any source of contention if it had been presented as you have.
The corollary of course, which brings us back on topic more, is the exact same would presumably apply to the XB1 regardless of the balance mantra, i.e. for certain software one may already be beyond non-linear performance gains utilising 12 CUs due to things like fill rate and bandwidth?
People think he is a dev. Afaik he is not.
He's a dev.
If this means anything: I can confirm that senjutsusage is legit.
So what happens to the balance of multiplatform development when a dev decides to use 14 CUs on PS4 for visuals and the remaining 4 for physics? How would they go about doing it on Xbox One considering it only has 12 CUs?
Because that's not just affecting visuals, it's affecting gameplay.
Most devs will probably just use all 18 CUs for graphics.
I just provided the context for why Sony recommends that devs utilize a 14 and 4 split.
You really haven't. You sowed confusion.
It's just that once you hit the point where you begin using 14 for graphics, any additional CUs that are apportioned for graphics specific tasks will do so, according to Sony, at a significant drop off in value.
This doesn't mean they do nothing, but the impact for graphics apparently starts to lessen after 14 CUs, hence the recommendation to use the remaining 4 CUs for compute specific tasks in order to get the most out of the remaining ALUs.
And you're again repeating this as if it's some hard and fast ubiquitous rule as opposed to being software-dependent and something that applies to any GPU, including the XB1's.
So what happens to the balance of multiplatform development when a dev decides to use 14 CUs on PS4 for visuals and the remaining 4 for physics? How would they go about doing it on Xbox One considering it only has 12 CUs?
Because that's not just affecting visuals, it's affecting gameplay.
Yeah, you are a good source.
OK let's see:
PS4 has a unified memory with 18CUs and tight APU configuration
vs
X1 with less CUs and tiny esram and slow DDR3 ram
and we have one comment believing MS's "balance" spin.
Yep, MS has succeeded.
Can someone clarify for me?
Xbox One has 14 CUs total, 2 of which are deactivated in retail units. So 12 total useable for both rendering and compute?
That's correct.
Can someone clarify for me?
Xbox One has 14 CUs total, 2 of which are deactivated in retail units. So 12 total useable for both rendering and compute?
People think he is a dev. Afaik he is not.
This was actual information presented by Sony at a devcon, so don't shoot the messenger.
...
So, there you have it. That's actual real information. So you can't exactly say that Microsoft in that case is spreading misinformation, because Sony themselves actually said that to developers.
I don't know if it works this way on desktop GPUs (I doubt it), or if it's due to the CPUs in these systems, but this was information that was communicated to developers. I found it really interesting that Microsoft is now themselves mentioning that they had tested 14 CUs -- something I heard and mentioned to another poster earlier this month -- and that it was somehow determined that their GPU clock speed increase was more useful.
Sony said it.
Only saying it now so I don't have to repeat it. I'm not posting a link, because it isn't a link.
It's just information that I think proves what I'm saying is true beyond doubt, but it also carries the risk of getting someone in trouble or violating their trust, which is why I can't just say everything, and why immediately after I said it, I said I would have no issue sharing the information with a mod if it made people more comfortable that I'm being transparent and honest.
This isn't my interpretation. It's the interpretation of an experienced games programmer that learned this at an official Sony devcon. I'm just repeating their understanding, so there's very little getting lost in translation as a result. Only way the info could be misinterpreted is if they misinterpreted it, which I doubt.
I did
That's exactly diminishing returns: there is a drop off in the value of the extra ALU resources for graphics after a certain point, that point being 14 CUs.
Dude, take a chill pill. This is actual information presented by Sony to developers. After you understand that, you're free to take it however you want. This can't be applied to desktop GPU scenarios. It's purely down to the design and balance of the PS4 hardware.
Oh shit, I said balance.
But, I kinda expected this kind of silly reaction, so I'll excuse myself.
Nailed it. I can't believe someone is seriously discussing this "balanced" spin.
That's correct.
Yup, the GPU has 12 Compute Units.
Which is pretty much correct. I just provided the context for why Sony recommends that devs utilize a 14 and 4 split. As I've said earlier in this same thread. All 18 CUs are identical. Devs can use them however they like. Some aren't somehow more powerful than others. It's just that once you hit the point where you begin using 14 for graphics, any additional CUs that are apportioned for graphics specific tasks will do so, according to Sony, at a significant drop off in value.
This doesn't mean they do nothing, but the impact for graphics apparently starts to lessen after 14 CUs, hence the recommendation to use the remaining 4 CUs for compute specific tasks in order to get the most out of the remaining ALUs.
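As a toy illustration of that 14+4 recommendation (all numbers invented, not taken from Sony's presentation): if a frame's graphics work only keeps about 14 CUs busy, the CUs beyond that idle unless they are handed compute (GPGPU) jobs.

```python
# Toy model: graphics work saturates at 14 CUs here; any remaining
# CUs only contribute if assigned compute jobs. Invented numbers.

TOTAL_CUS = 18
GRAPHICS_SATURATION = 14  # point where graphics gains taper off in this sketch

def cu_utilization(graphics_cus, compute_cus):
    """Fraction of the 18 CUs doing useful work in this toy model."""
    busy_graphics = min(graphics_cus, GRAPHICS_SATURATION)
    return (busy_graphics + compute_cus) / TOTAL_CUS

print(cu_utilization(18, 0))  # all 18 on graphics: only ~0.78 busy
print(cu_utilization(14, 4))  # 14 graphics + 4 compute: fully busy
```

The real saturation point is software-dependent, as the thread keeps pointing out; 14 is just the 'typical' figure being discussed.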
That was quick.
You really haven't. You sowed confusion.
Gofreak seems to have done so from trying to interpret what you've written.
And you're again repeating this as if it's some hard and fast ubiquitous rule as opposed to being software-dependent and something that applies to any GPU.
:/
But that's just going by 1080p standards; if the game was being made with other resolutions in mind, like for people who have 2560x1440 monitors, 14 CUs would fall well below that drop off point.
Also, PS4 is said to be able to go beyond the DirectX 11.2 shaders, so you could see 14 CUs end up well below the ALUs needed.
Wow. He's doubling down.
...
This sounds like Reiko all over again. He had a supposed "rock solid source" feeding him info too, IIRC...turns out someone was just trolling him.
Lol
this is funny.
It would be much worse if I were putting my own spin on the info. I told it exactly as I know it, with no changes. That's how it was presented, but maybe it's similar in a sense to the Xbox One early dev documentation where they pointed out how the Xbox 360 GPU was 53% efficient, whereas the Xbox One GPU is 100% efficient.
However, the context for that was that this was the difference between a much less efficient by nature 5-lane SIMD running a piece of code and a much more efficient 1-lane SIMD (something the PS4 also possesses) running that exact same code. So rather than some revolutionary new thing that made the GPU more efficient, it was instead something rather common that will be shared on both systems. Maybe this 14 and 4 thing for the PS4 GPU is just one such case, like this Xbox One example, where the entire picture or context isn't clear. But I presented it exactly, without any alterations, and the initial interpretation isn't of the sort where the individual somehow wouldn't understand exactly what was being said.
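The 53% vs 100% comparison above can be illustrated with a toy utilization calculation: a 5-wide VLIW issue only does useful work in the lanes the compiler managed to fill, while a scalar (1-lane) pipeline fills its single lane every issue. The bundle packing below is made up for illustration, not from any dev documentation.

```python
# Toy SIMD utilization calculation. 'ops_per_issue' lists how many ops
# were actually packed into each issued bundle; 'width' is lanes per issue.
# The bundle contents are invented for illustration.

def simd_efficiency(ops_per_issue, width):
    """Fraction of lane-slots doing useful work across issued bundles."""
    useful = sum(ops_per_issue)
    return useful / (len(ops_per_issue) * width)

vliw_bundles = [3, 2, 4, 2, 3, 2]        # 2-4 ops packed per 5-wide bundle
print(simd_efficiency(vliw_bundles, 5))  # 16/30, roughly 0.53
print(simd_efficiency([1] * 6, 1))       # scalar pipeline: always 1.0
```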
I'm not sure, I'm jumping into speculation now, but I'm not aware of it being presented as being tied to any particular resolutions, but not every game is the same, so I suppose anything is possible.
what.... dude, if I say something, and I say it's the truth, which I have, I never back down from it. I don't believe I'm spreading false information, and so I will stand by what I said, exactly as I've done here before.
#1 I'm not this Reiko individual.
#2 My source is rock solid.
#3 I'm not being trolled.
#4 And I'm not trolling anyone else.
Take it or leave it, you don't have to believe me.
Yes. Very funny. Haha. Can't stop laughing. -.-
It's just you are clueless. One is a dev and the other is not.
Please don't tell me your source is that guy off psu forum.
Its just you are clueless. One is a dev and the other is not.
where are people getting the "beyond 14 CU's" there's a dropoff nonsense?
is it because the CPU cannot feed the GPU beyond this point?
i know it's NOT a scaling issue with the architecture because the architecture is PROVEN to scale well beyond 14 CUs (64 shader cores per CU btw). The Radeon 7970 for instance uses 32 CUs (2048 cores) and the 7870 Pitcairn chip uses 20 CUs for 1280 cores. Both the 7970 and 7870 have 32 ROPs, same as the PS4 GPU... so where is this efficiency dropoff coming from?
Where did the rumor start? It doesn't make sense because the GPU architecture was designed with efficiency in mind and also to scale by increases/decreasing CU counts to provide different chips for different levels of performance (7700/7800/7900 series).
It's not ROP starved, it's not memory bus starved.... could it be limited by the CPU? That's the only logical reason I can think of there being a GPU dropoff past a certain point and Xbone/PS4 both have very similar CPU's....
Can anyone explain?
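For reference, the "64 shader cores per CU" figure above gives the standard GCN peak-FLOPS arithmetic: CUs x 64 lanes x 2 ops per clock (FMA) x clock speed. The clocks below are the commonly reported ones, taken as assumptions.

```python
# Standard GCN peak-FLOPS arithmetic: each CU has 64 lanes, and an FMA
# counts as 2 floating-point ops per lane per clock.

def peak_gflops(cus, clock_ghz, lanes_per_cu=64, ops_per_lane=2):
    return cus * lanes_per_cu * ops_per_lane * clock_ghz

print(peak_gflops(18, 0.800))  # PS4-class part @ 800 MHz: 1843.2 GFLOPS
print(peak_gflops(32, 0.925))  # Radeon 7970 @ 925 MHz: 3788.8 GFLOPS
print(peak_gflops(20, 1.000))  # Radeon 7870 @ 1 GHz: 2560.0 GFLOPS
```

Note this is raw ALU throughput only; the whole thread's argument is about when other resources (ROPs, bandwidth, CPU feed) stop that peak from being reached.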
Maybe senju will PM you with his super secret source info?
I'm sure he has something, just not sure if the interpretation is correct.
And I have yet to hear anyone else's interpretation of the source information, except from trying to extrapolate based on what senju has said.
It's rather maddening, to be honest.