What's so CELL or SPU/SIMD like in XSX architecture leading to a "hidden potential" and much harder programming comparatively?

The same design that plagued the PS3 in extracting its potential: it requires workflows and processes to be structured in a more parallel manner to take advantage of the higher CU count. On top of the PS3's dual-threaded PPE, its Cell also had 8 additional SPE co-processors.
Where is the console warring in this technical take? I own neither console and couldn't care less about Sony or Microsoft and whoever wins whichever generation. You can freely disagree with my comment. I won't care about your accusations any further.

*Looks at post history*
Literally only console warring.
I don't trust anything this developer says about technical matters after what they did with WRC on Switch:
This is what I'm worried about and I can't see MS ever getting around it. MS first parties will always be tethered to a significantly weaker system while the PS5 won't (as long as it's current gen only). So MS games will always be handcuffed by the S.

That's the thing - I feel like Xbox first party is even more constricted. As I pondered in the OP, how far can GPU parallel compute development go when the lead SKU is 20 CUs? Serious question for those who have a better understanding of game development.
Also where is the DirectML they were touting? MS has pissed me off this gen with a lot of what I would consider false promises OR half steps, like shutting down their Back Compat program.

Meh. How about using them mesh and primitive shaders?
XSX's lower front-end (geometry) and back-end (RBE) performance can be worked around with mesh shaders and compute shader/texture units (AMD's Async Compute marketing). One of the main points of DX12U is moving front-end geometry work into compute shaders.

That's a very basic take coming from an actual developer, but NDAs and such, I understand to some degree. I also find the "Xbox Series X's GPU raw performance is better" comment somewhat misleading. I think what he really means is "XSX has a higher theoretical 'compute' ceiling"; 'performance' alludes to final real-world throughput, which contradicts the context, and compute is only one facet of it. More so than parallelism, XSX's main problem is that its higher CU count is coupled with a slower GPU front end and back end compared to the PS5, which naturally reduces its real, whole-GPU power. Thus I don't adhere to the "XSX is more powerful but harder to exploit" narrative as the main reason behind the real-world results. The PS5/XSX situation is very far from analogous to the PS3/X360 one.
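To make the "geometry in compute" idea above concrete, here is a toy sketch (the names and the 2D setup are mine, purely illustrative): a back-face culling pass written as one data-parallel map over all triangles, the kind of front-end work a compute or mesh shader takes over from the fixed-function pipeline.

```python
# Toy model: front-end geometry work (here, back-face culling) expressed
# as a single data-parallel pass over every triangle, the way a
# compute/mesh shader dispatch would run it. 2D triangles keep it simple.

def signed_area(tri):
    """2D signed area; negative means clockwise winding, which this toy
    convention treats as back-facing."""
    (ax, ay), (bx, by), (cx, cy) = tri
    return 0.5 * ((bx - ax) * (cy - ay) - (cx - ax) * (by - ay))

def cull_backfaces(triangles):
    """One 'dispatch': each triangle is tested independently of the
    others, so the work spreads across however many CUs exist."""
    return [t for t in triangles if signed_area(t) > 0.0]

tris = [
    [(0, 0), (1, 0), (0, 1)],  # counter-clockwise -> kept
    [(0, 0), (0, 1), (1, 0)],  # clockwise -> culled
]
print(len(cull_backfaces(tris)))  # prints 1
```

Because no triangle's result depends on any other's, nothing in the code addresses a specific "core" — which is the point being argued later in the thread.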
i think i have an idea

Sometimes people hear what they want to hear. The message is very clear: PS5 is easier to exploit, Series X has a higher performance ceiling. It's up to the devs and engine developers to prioritize where they want their resources.
How you read this and come to the conclusion that it states one is better than the other is beyond me.
it's clearly a huge deal for so many people to get their "told ya so" sentiment in whenever their console wins the latest special olympics events of ps5 vs xbox series x (titans of technology and computing power).

Most of the differences have come down to a frame dropped here and there, with many more being technically different but imperceptible outside of forensic scrutiny. Anything more significant seemed to be more the game's fault than the console's. I don't even know how DF pulls views for these comparisons when every one is a big nothingburger.
it's clearly a huge deal for so many people to get their "told ya so" sentiment in whenever their console wins the latest special olympics events of ps5 vs xbox series x (titans of technology and computing power).
It is crazy how you have no idea what you are talking about. What you mentioned is not new nor exclusive to MS, it just has a different nomenclature under DX12 - mesh and primitive shaders are basically the same, etc. But you never learn, no matter how much people tell you that you are wrong.

This is pretty much what we all knew.
Sony's big push was time to triangle. Keeping the same CU count and having very mature tools made it a lot easier to get more of its potential out earlier on.
MS is relying on a lot of tech that hasn't even been used by any devs yet, such as Mesh Shaders and Sampler Feedback Streaming.
Will be interesting to see how multiplatform games compare at the end of the gen.
This has been asked and answered ad infinitum. The same way they do it for PC.

https://gamingbolt.com/xbox-series-...s5s-but-harder-to-exploit-wrc-generations-dev
Neither the topic nor the sentiment is new, but it's always nice to hear input from 3rd party developers on platform comparisons. My question as it relates to the future is how far MS first party is able to extract Series X GPU parallelism while also developing for the more popular Series S, which has significantly fewer CUs (20 vs. 52), not to mention a lower clock speed.
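For rough scale, the CU gap the OP describes can be put into paper numbers. The figures below are the commonly published specs, and the FLOPs formula is the standard RDNA 2 back-of-envelope (64 FP32 lanes per CU, 2 ops per lane per clock via FMA) — nothing official to this thread:

```python
# Paper math on the Series S / Series X gap:
# Series X = 52 CUs @ 1.825 GHz, Series S = 20 CUs @ 1.565 GHz.

def tflops(cus, ghz, lanes_per_cu=64, ops_per_clock=2):
    """Peak FP32 TFLOPs from CU count and clock (back-of-envelope)."""
    return cus * lanes_per_cu * ops_per_clock * ghz / 1000.0

xsx = tflops(52, 1.825)   # ~12.1 TFLOPs
xss = tflops(20, 1.565)   # ~4.0 TFLOPs

cu_deficit = 1 - 20 / 52          # ~62% fewer CUs
compute_deficit = 1 - xss / xsx   # ~67% less peak compute (fewer CUs AND lower clock)

print(f"XSX {xsx:.1f} TF, XSS {xss:.1f} TF")
print(f"{cu_deficit:.0%} fewer CUs, {compute_deficit:.0%} less compute")
```

The compute deficit (~67%) exceeds the raw CU deficit (~62%) precisely because the S also clocks lower, which is the OP's point.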
FYI, RBEs (Render Back Ends, which contain the color ROPs and Z-ROPs) are not the only hardware for read/write I/O.

I feel like most of this thread is talking about GPU cores as if they're CPU cores and you have to program explicit multithreading or the extra ones just don't work. It doesn't really work like that, and I maintain that applying the term "cores" to GPU groupings of ALUs was a mistake that continues to fool uninformed people.
GPUs are "embarrassingly parallel" machines and GPU programming reflects that; the parallelism is inherent in their nature. You're not sitting there going, oh fuck, I have to make a thread for CU 51 now. If there's an issue scaling up to a higher CU count, something is bottlenecking it or not moving fast enough to feed it. The PS5's GPU gets simplified as weaker because people take shaders × clock speed = TFLOPs as everything they need to know, but the higher clock speed actually clocks other parts of its logic higher too - for example its pixel fill rate of 142 Gpixel/s vs 116 on the XSX, and likewise the command processor and the rest of the Compute Unit front-end logic, etc.
It sounds like maybe these bottlenecks are being worked around and gradually showing more of the XSX's higher peak shader performance, and the APIs, OS, and toolset are surely getting better at it as well. The XBO generation also showed their API was heavier despite being among the "low level" ones; maybe some of that is still going on and improving too. The PS5 for its part has its own leads in hardware. TFLOPs are the simplest baseline paper calculation; it's like comparing CPUs just by clock speed.
Yeah, or dynamic res that can drop to 1721p on one console and 1823p on the other.

Most of the differences have come down to a frame dropped here and there, with many more being technically different but imperceptible outside of forensic scrutiny. Anything more significant seemed to be more the game's fault than the console's. I don't even know how DF pulls views for these comparisons when every one is a big nothingburger.
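For scale, the two dynamic-res floors quoted above differ by only about 12% in pixel count (assuming a 16:9 frame; the math is mine, not from the thread):

```python
# Pixel-count gap between the two dynamic-resolution floors mentioned
# above, assuming standard 16:9 frames.

def pixels_16x9(height):
    """Total pixels in a 16:9 frame of the given height."""
    return round(height * 16 / 9) * height

lo, hi = pixels_16x9(1721), pixels_16x9(1823)
print(f"1721p: {lo:,} px | 1823p: {hi:,} px | gap: {hi / lo - 1:.1%}")
```

A ~12% pixel difference at those resolutions is the kind of gap that needs zoomed-in frame captures to spot, which is the poster's point about forensic scrutiny.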
What are you going on about? This article has nothing to do with Mark Cerny nor the Road to PS5 event. Benoit Jacquier is a dev who made a specific statement. You used it to create a fictitious claim of intent.
Also, it's easier to fully use 36 CUs in parallel than it is to fully use 48 CUs. When triangles are small, it's much harder to fill all those CUs with useful work.
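A toy model of the small-triangle problem described above. The 2×2-quad behavior is real (GPUs shade pixels in quads so they can compute derivatives, and a tiny triangle still occupies whole quads), but the packing math below is deliberately simplified — it is a best-case sketch, not real rasterizer behavior:

```python
import math

# Tiny triangles waste shading lanes: a 1-pixel triangle still occupies
# a full 2x2 quad, so 3 of its 4 lanes do throwaway "helper" work.

def lane_utilization(covered_pixels):
    """Fraction of shading lanes doing useful work, assuming the covered
    pixels pack into the minimum possible number of 2x2 quads (best case)."""
    quads = math.ceil(covered_pixels / 4)
    return covered_pixels / (quads * 4)

for px in (1, 2, 64, 1024):
    print(f"{px:>5}-pixel triangle: {lane_utilization(px):.0%} of lanes useful")
```

Even in this best case, a 1-pixel triangle keeps only 25% of its lanes busy — and that wasted work multiplies across a wider GPU, which is why feeding 48+ CUs with dense geometry is harder than feeding 36.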
Either a budget issue or it's a launch game, so they didn't fully utilize the power of the console.

WRC Generations looks no better than Dirt Rally 2.0, which came out in 2019.
Surely both the PS5 and Xbox Series X should be more than enough to run WRC Generations at a high resolution and a high framerate.
I mean fucking seriously... rally games should look waaaaaay better than this considering that they have no AI to deal with. They literally just need to render the game environment and the one vehicle (no opponent vehicles of course, since it's rally), do the physics, and that's it. I guess it's a budget issue.
Yeah, specs are nearly identical and for over two years games have been identical. It's time to move on.

This is like the 360 vs PS3 gen all over again, except the difference in theoretical power isn't as large.
At this point who cares?
Dude, that's exactly what Cerny said in The Road to PS5. And still it has nothing to do with it?
Min. 32:55, time stamped.
As a developer, our priority is to develop for the XSX when it comes to the Xbox platform. Then do what we can with the XSS. Talking to a few other developers, that is their way of thinking also.

https://gamingbolt.com/xbox-series-...s5s-but-harder-to-exploit-wrc-generations-dev
Neither the topic nor the sentiment is new, but it's always nice to hear input from 3rd party developers on platform comparisons. My question as it relates to the future is how far MS first party is able to extract Series X GPU parallelism while also developing for the more popular Series S, which has significantly fewer CUs (20 vs. 52), not to mention a lower clock speed.
They even put WRC Generations on the Switch, so that tells us they weren't exactly shooting for the stars.

Either a budget issue or it's a launch game, so they didn't fully utilize the power of the console.
As a developer, our priority is to develop for the XSX when it comes to the Xbox platform. Then do what we can with the XSS. Talking to a few other developers, that is their way of thinking also.
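A sketch of what "lead on XSX, then do what we can with XSS" might look like in practice. Every name and scale factor here is hypothetical — the point is only to illustrate deriving the small-SKU profile from the lead SKU's settings rather than authoring both from scratch:

```python
# Hypothetical per-SKU scaling: author quality settings once against the
# lead SKU, then derive the weaker SKU's profile from a rough compute
# ratio (XSS/XSX peak compute is roughly 1/3).

LEAD_SKU = {                 # authored and tuned against Series X
    "render_scale": 1.0,     # fraction of the target 4K resolution
    "shadow_map_px": 4096,
    "ray_traced_ao": True,
}

def derive_profile(lead, gpu_ratio):
    """Scale GPU-bound settings down by a compute ratio. Resolution
    scales with sqrt(ratio) since pixel count grows with area."""
    return {
        "render_scale": round(lead["render_scale"] * gpu_ratio ** 0.5, 2),
        "shadow_map_px": lead["shadow_map_px"] // 2 if gpu_ratio < 0.5
                         else lead["shadow_map_px"],
        "ray_traced_ao": lead["ray_traced_ao"] and gpu_ratio > 0.5,
    }

print(derive_profile(LEAD_SKU, 4.0 / 12.1))  # Series S profile
```

Real pipelines are far more granular than three knobs, but this is the shape of the workflow the quote describes: one lead target, the rest derived.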
As long as it runs well.

What requirements are you bound to in terms of the performance differential between the Series X and S? Or are there none and it just has to "be" on the Series S and run reasonably well-ish?
This has been asked and answered ad infinitum. The same way they do it for PC.
As a developer, our priority is to develop for the XSX when it comes to the Xbox platform. Then do what we can with the XSS. Talking to a few other developers that is their way of thinking also.
And that's my concern. That would be awful. Alternatives such as GPGPU optimization would be much more interesting than the improved resolutions you typically see in the PC environment.
Interesting. Thanks for sharing. I'm assuming you're 3rd party?
I immediately thought of that as I read this thread...
this is exactly what some people expected (hoped), hence the never-ending tools, bugs, lazy devs rhetoric whenever the tables are turned.

I think some people got caught up with what happened with One X vs. PS4 Pro and thought PS5 vs. Series X would be a repeat of that, but they couldn't have been more wrong.
Unfortunately for MS, there is no way that devs are actually going to deploy those resources for a platform with a significantly smaller user base and lower sales.

Sometimes people hear what they want to hear. The message is very clear: PS5 is easier to exploit, Series X has a higher performance ceiling. It's up to the devs and engine developers to prioritize where they want their resources.
How you read this and come to the conclusion that it states one is better than the other is beyond me.
If only there was another larger MS platform that devs developed for. That way MS could implement some form of standard template of graphical feature sets that game engine developers could implement that could also be shared "direct" with their "x"box platform. If only such a thing existed.

Unfortunately for MS, there is no way that devs are actually going to deploy those resources for a platform with a significantly smaller user base and lower sales.
Unfortunately for MS, there is no way that devs are actually going to deploy those resources for a platform with a significantly smaller user base and lower sales.
Obviously MS has been doing that for a very long time. There’s always going to be a gap between a PC running a regular OS and a console.

If only there was another larger MS platform that devs developed for. That way MS could implement some form of standard template of graphical feature sets that game engine developers could implement that could also be shared "direct" with their xbox platform. If only such a thing existed.
I am not sure you realize this, but game engines don't code and optimize themselves. Yet.

Game engines don't look at platform sales and say "nah, I'm not gonna scale to this platform".