
Call of Duty: Warzone 2.0 - PS5 vs Xbox Series X/S - The DF Tech Review!

ChiefDada

Gold Member
That's actually not what it says at all, lol. That is regarding inline raytracing with DXR 1.0 for PC vs. 1.1 with Series X, where it actually shows GAINS. The optimizations come from the jump from DXR 1.0 on PC to 1.1 with RDNA2 on Series X, not from Vulkan, ffs.

This was NOT an API comparison with PS5.

And that's all without mentioning the fact that RT is nearly identical across both consoles. Or, conversely, look at an actual UE game with RT where the RT effects are downgraded on PS5 vs. XSX, such as The Medium.

In reality, DXR is actually more mature than Vulkan regarding efficiency atm. Don't spread misinformation.

The presentation mentioned multiple issues relating to the Series RT pipeline where, again, the PS5 implementation was much more flexible. The DXR 1.1 update did allow for inline RT, but as I said before, that alone did not clear all the inefficiencies of the Series RT pipeline (DXR) relating to the different techniques Lumen was applying. As SenjutsuSage mentioned above, there are instances in the demo where ray generation shaders were more appropriate than inline. In those instances, the issue of the occupancy penalty still applies. The budget between inline and DXR 1.0 was NOT my reference point for saying PS5 was more efficient. It was the accompanying notes in the article. I really don't have the energy to argue back and forth and repost quotes from the presentation. Believe what you want.



Unless I missed something, at no stage does it actually claim the PS5 has any advantage over the Xbox Series X in the area cited. In fact, I don't see a single mention of the PlayStation 5 anywhere in the document. The only relevant mention is of DXR 1.0 leaving performance on the table, and of the Series X seeing a performance gain of 0.3 to 1.7 ms when using inline ray tracing.

[screenshots of the relevant UE5 presentation slides]




They cite that DXR 1.0 leaves performance on the table and isn't as efficient.

[slide screenshot]



Except Series X supports a more advanced tier of DXR called DXR Tier 1.1, which supports the very in-line ray tracing method utilized to bring additional performance in the UE5 document.

I'm guessing you saw some YouTube video that took the opportunity to spread misinformation about what this info was saying?

DirectX 12 Ultimate solves the issue they were having in this document. It has a higher tier of DXR support. Maybe in the actual video or accompanying audio they mention PS5? But there's certainly no mention of PS5 in the PowerPoint. They were highlighting a more advanced form of RT that Series X literally supports through DirectX 12 Ultimate. They were comparing it against a less flexible form of DXR: version 1.0.

https://devblogs.microsoft.com/directx/announcing-directx-12-ultimate/
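To make the tier distinction concrete, here is a minimal sketch (my own, not from the presentation or the linked post) of how a PC/D3D12 application can check whether a device exposes DXR Tier 1.1, the tier that adds inline ray tracing via RayQuery. It assumes the Windows 10 SDK headers and linking against d3d12.lib:

```cpp
// Minimal sketch: query whether a D3D12 device supports DXR Tier 1.1
// (the tier that adds inline ray tracing). Assumes the Windows 10 SDK.
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device))))
    {
        std::puts("No D3D12 device available.");
        return 1;
    }

    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5))))
    {
        if (opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_1)
            std::puts("DXR Tier 1.1: inline ray tracing (RayQuery) supported.");
        else if (opts5.RaytracingTier == D3D12_RAYTRACING_TIER_1_0)
            std::puts("DXR Tier 1.0 only: ray tracing via DispatchRays.");
        else
            std::puts("No DXR support.");
    }
    return 0;
}
```

Tier 1.0 hardware can still ray trace, but only through the full DispatchRays pipeline.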

[screenshots from the DirectX 12 Ultimate announcement]



In other words, there are cases where dynamic-shading ray tracing is better, and there are cases where in-line ray tracing is better. They can even be combined together.
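Roughly, the host-side difference between the two models on D3D12 looks like this (an illustrative sketch of my own, not code from the UE5 document); both paths can consume the same acceleration structure within a single frame:

```cpp
// Illustrative only: the two DXR usage models on D3D12.
// Both consume the same acceleration structure; they can be mixed per pass.
#include <d3d12.h>

// 1) Dynamic-shading ray tracing: ray-gen / hit / miss shaders bound through
//    a ray tracing state object, launched with DispatchRays.
void DispatchDynamicShadingRT(ID3D12GraphicsCommandList4* cmd,
                              ID3D12StateObject* rtPipeline,
                              const D3D12_DISPATCH_RAYS_DESC& desc)
{
    cmd->SetPipelineState1(rtPipeline);  // full RT pipeline (DXR 1.0 style)
    cmd->DispatchRays(&desc);            // shader tables decide what runs per ray
}

// 2) Inline ray tracing (DXR 1.1): an ordinary compute PSO whose HLSL uses
//    RayQuery<> directly, so there is no separate RT pipeline or shader table.
void DispatchInlineRT(ID3D12GraphicsCommandList4* cmd,
                      ID3D12PipelineState* computePso,
                      UINT groupsX, UINT groupsY)
{
    cmd->SetPipelineState(computePso);   // plain compute pipeline
    cmd->Dispatch(groupsX, groupsY, 1);  // rays are traced inside the shader
}
```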

JFC I'm too tired for this shit. You made some good points from your research here that I never even disagreed with, but your decision to attack without proof just turned me off from a potential conversation. Btw, I suggest you read the accompanying notes to the slides.
 

ChiefDada

Gold Member
I'm not a graphics programmer, but you sound like you're reaching. Are you intimately familiar enough with how this works to know exactly what the overhead would be with the different approach on Series X, and whether it makes a meaningful difference? If so, that would be more helpful than just pointing out what the slides say.

Why ask me? I was gracious enough to provide the presentation in the thread so why not just read the slides and accompanying notes and make a decision for yourself?
 

Danknugz

Member
Why ask me? I was gracious enough to provide the presentation in the thread so why not just read the slides and accompanying notes and make a decision for yourself?
Because you're interpreting the slides for everyone else and claiming they should be read as the PS5 being more efficient, and you're referring to specific technicalities in the wording that are supposed to bolster your argument, but you don't seem to understand what they actually mean.
 

ChiefDada

Gold Member
Because you're interpreting the slides for everyone else and claiming they should be read as the PS5 being more efficient, and you're referring to specific technicalities in the wording that are supposed to bolster your argument, but you don't seem to understand what they actually mean.

Edit: I speak for no one but myself. The slides I graciously provided are available for you to read, interpret, and use to constructively contribute to this discussion, which has unfortunately gone off on a tangent.

I would never pretend to be any sort of expert in this particular area, but how do you interpret these developer quotes? Which box appears more efficient and developer friendly to you based on their notes?


One solution to that problem that we have for techniques that use ray tracing shaders on XSX is to use what we call “Specialized State Objects”:
Which means the high-level ray tracing pipeline is broken down internally into multiple different ray tracing pipelines based on ray-gen VGPR
And then we select the one we want to avoid paying the occupancy penalty all the time.
And this is not needed on PS5 because there are no ray tracing pipelines as such.
Each ray gen shader is just a regular compute shader with known resource allocations.

Therefore on Xbox the compiler breaks down the ray gen shader into two parts, before and after the TraceRay call, and generates a lot of instructions to save and restore state.
On Xbox the state goes into scratch memory.
On PS5 we can allow spilling state to either LDS or scratch, or just keep it in registers.

We can specify exactly how much can spill into each category separately.

Continued directly for context of only having scratch memory for save/restore:
And to illustrate how big of a problem that actually is:
As a simplified example, assuming we want 10 gigarays per second and XSX has 500 GB/s of bandwidth, that means each ray can only read or write 50 bytes.
That includes all buffer, texture, and UAV reads and writes, and saving or loading the current state.
So saving and restoring ray state can easily dominate other operations if we are not careful.
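To spell that back-of-the-envelope budget out (my own arithmetic check, using the numbers quoted in the notes):

```cpp
// Back-of-the-envelope check of the per-ray byte budget quoted above.
#include <cstdio>

int main()
{
    const double bandwidth_bytes_per_s = 500e9; // ~500 GB/s (XSX figure in the notes)
    const double rays_per_s            = 10e9;  // 10 gigarays/s target
    const double bytes_per_ray = bandwidth_bytes_per_s / rays_per_s;
    std::printf("Budget per ray: %.0f bytes\n", bytes_per_ray); // -> 50 bytes
    return 0;
}
```

50 bytes per ray disappears quickly once a few buffer or texture reads are counted, which is why the save/restore traffic matters so much.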

Remember the goal of Matrix Awakens Demo was to achieve as much parity as possible across all consoles.

They specifically list solutions to work around the Series consoles' RT pipeline AS IT RELATES TO WHAT THEY'RE TRYING TO ACCOMPLISH IN THEIR ENGINE AND WITH THIS DEMO. Or do you think they were just picking on Series X/S and simply forgot to list the development roadblocks associated with PS5???
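For what it's worth, here is a purely hypothetical sketch of the "Specialized State Objects" idea described in those notes; every type and function name below is invented for illustration and is not Epic's or Microsoft's actual API:

```cpp
// Hypothetical illustration of "specialized state objects": pre-build several
// ray tracing pipelines, each compiled for a different ray-gen VGPR budget,
// then pick the one that avoids paying the worst-case occupancy penalty.
// All names here are invented for this sketch.
#include <array>
#include <cstddef>
#include <cstdint>

struct RayTracingPipeline;  // stand-in for a compiled RT pipeline

struct SpecializedStateObjects
{
    // One pipeline per register-pressure bucket (e.g. <=64, <=96, <=128 VGPRs).
    std::array<RayTracingPipeline*, 3> pipelines{};
    std::array<uint32_t, 3>            vgprLimits{64, 96, 128};

    // Select the smallest pipeline whose VGPR budget fits this ray-gen shader,
    // so occupancy is only reduced when the shader actually needs the registers.
    RayTracingPipeline* Select(uint32_t rayGenVgprCount) const
    {
        for (std::size_t i = 0; i < pipelines.size(); ++i)
            if (rayGenVgprCount <= vgprLimits[i])
                return pipelines[i];
        return pipelines.back();  // fall back to the largest budget
    }
};
```

Per the notes, PS5 doesn't need this indirection because each ray-gen shader is just a regular compute shader with known resource allocations.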
 

coffinbirth

Member
Edit: I speak for no one but myself. The slides I graciously provided are available for you to read, interpret, and use to constructively contribute to this discussion, which has unfortunately gone off on a tangent.

I would never pretend to be any sort of expert in this particular area, but how do you interpret these developer quotes? Which box appears more efficient and developer friendly to you based on their notes?






Continued directly for context of only having scratch memory for save/restore:


Remember the goal of Matrix Awakens Demo was to achieve as much parity as possible across all consoles.

They specifically list solutions to work around the Series consoles' RT pipeline AS IT RELATES TO WHAT THEY'RE TRYING TO ACCOMPLISH IN THEIR ENGINE AND WITH THIS DEMO. Or do you think they were just picking on Series X/S and simply forgot to list the development roadblocks associated with PS5???
The problem isn't with the hardware; it's with inefficiencies in Unreal Engine. This is exacerbated in conjunction with DirectX, and isn't isolated to RT in that regard. It's not an API issue either, because the Series RT pipeline supports Vulkan as well. Regardless, that demo runs nearly identically on both consoles. Where you read "optimizations," I read that UE still doesn't play nice with pipelines. Micro-stutter, anyone?

You speak of optimizations for this one specific module of this one engine, but in reality any other dev working on any other engine now has to incorporate Vulkan just to get RT on PS5, a console without a dedicated RT pipeline, so suggesting that it's more efficient/performant, or my favorite, "developer friendly," is just funny. When you look at a game like Control, which is considered by many to be the current benchmark for console RT and is notably not running on Unreal Engine, the proof is in the pudding as to how Series X RT compares to PS5.

That you chose to bring this into a CoD thread is pretty funny as well.

But these discussions are largely moot as what that demo really proves is that we won't be seeing games utilizing this tech in this manner on these consoles anyways because 20FPS feels like shit.
 
Warzone 2.0 neither runs on UE5 nor has any RT.

Not sure why this topic has become the battlefield for it.
To be fair, ChiefDada was simply grabbing a quick example to point out how one console might play nicer with one engine feature, while the other console might play nicer with another. And in that regard he's right on point.

This is not the droid warrior you're looking for.
 

coffinbirth

Member
To be fair, ChiefDada was simply grabbing a quick example to point out how one console might play nicer with one engine feature, while the other console might play nicer with another. And in that regard he's right on point.

This is not the droid warrior you're looking for.
Reading back over the thread, I tend to agree, and apologies to ChiefDada for assuming this was warring-related.
I believe the quote that got people's attention was this:
"because DXR was inefficient and PS5's was more flexible"
Which I personally took umbrage with, because Series can utilize the same API as PS5 and isn't limited to DXR as his take suggests, which makes that statement erroneous. Once the data got posted, though, the goalposts moved.
 

Rea

Member
Like you say, it will really come down to which designs are better suited for different next gen engines. API also plays a role. In the recent UE5 presentation, we learned that PS5 ray tracing was faster/better suited for Lumen Hardware RT than Series X/S. Not because of any issue with hardware, but because DXR was inefficient and PS5's was more flexible.
From my understanding, DXR has an automated ray tracing pipeline. That doesn't mean it's inefficient; it's more that it's harder to customize. An automated pipeline is very good for inexperienced developers and for PC hardware, given the wide variety of hardware out there. It's just "automated": devs call the ray tracing functions in their code and, voilà, it works on any RTX-class hardware. However, the downside is a performance penalty if the developer is not careful with resource allocation.
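As a rough illustration of that "just call the ray tracing function" convenience on the host side, here is a D3D12 sketch (illustrative only; device setup, shader compilation, and shader-table construction are omitted). The runtime uses the shader tables to schedule the right ray-gen/miss/hit shaders per ray, which is the "automated" part described above, and also where overhead can creep in if resource allocation is careless:

```cpp
// Illustrative only: filling in the DXR dispatch descriptor. The runtime uses
// the shader tables to pick ray-gen/miss/hit shaders per ray automatically.
#include <d3d12.h>

void TraceFrame(ID3D12GraphicsCommandList4* cmd,
                ID3D12StateObject* rtPipeline,
                D3D12_GPU_VIRTUAL_ADDRESS rayGenRecord, UINT64 rayGenSize,
                D3D12_GPU_VIRTUAL_ADDRESS missTable,   UINT64 missSize, UINT64 missStride,
                D3D12_GPU_VIRTUAL_ADDRESS hitTable,    UINT64 hitSize,  UINT64 hitStride,
                UINT width, UINT height)
{
    D3D12_DISPATCH_RAYS_DESC desc = {};
    desc.RayGenerationShaderRecord.StartAddress = rayGenRecord;
    desc.RayGenerationShaderRecord.SizeInBytes  = rayGenSize;
    desc.MissShaderTable.StartAddress  = missTable;
    desc.MissShaderTable.SizeInBytes   = missSize;
    desc.MissShaderTable.StrideInBytes = missStride;
    desc.HitGroupTable.StartAddress    = hitTable;
    desc.HitGroupTable.SizeInBytes     = hitSize;
    desc.HitGroupTable.StrideInBytes   = hitStride;
    desc.Width  = width;
    desc.Height = height;
    desc.Depth  = 1;

    cmd->SetPipelineState1(rtPipeline);  // bind the ray tracing state object
    cmd->DispatchRays(&desc);            // the runtime/driver handles per-ray scheduling
}
```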
 
I'm not a graphics programmer, but you sound like you're reaching. Are you intimately familiar enough with how this works to know exactly what the overhead would be with the different approach on Series X, and whether it makes a meaningful difference? If so, that would be more helpful than just pointing out what the slides say.

They literally made it up to make the PS5 sound better than the Series X. He totally misinterpreted the information. The method of ray tracing shown is better in some cases, and not so good in other cases. And Series X has a more advanced level of DirectX support that solves the very issue the slides demonstrate needed to be fixed a different way.
 

Zathalus

Member
I can buy Lumen being slightly faster on the PS5 API, as UE5 seems to run nearly identically on both despite the XSX's other GPU advantages.

This does not apply to all RT implementations however, as Control and Metro Exodus prove.

That being said, the PS5 API still seems to deliver some advantages compared to DX12, like this, and lower CPU overhead in some cases as well. That's one of the trade-offs for the flexibility that DX12 and Xbox have across multiple platforms.
 

Lysandros

Member
There are so many variables to consider when comparing XSX vs. PS5 game performance side by side, like the engine, but it just goes to show that whilst Xbox went for raw on-paper power, Cerny wasn't talking shit about the design and efficiency of the PS5 allowing devs to squeeze the most out of it!

I have both consoles, so I'm not console warring here, but I remember when everyone was saying that the extra flops on the XSX were going to create a wide gap between the two and dominate, yet we have still to really see it.
The confusion/prejudice was, and still is (an extremely stubborn one, apparently), to equate TFLOPS, which is only the theoretical compute metric of a GPU, with its overall 'power'. Without that blinding obstinacy, the full specs always told another story, in line with the results we have been seeing these past two years. Those machines are simply neck and neck, in accordance with their specs.
 