I have defended Alex and DF more than any other Sony fan here, but he has repeatedly downplayed the SSD, implied Sony and Cerny were lying about hardware ray tracing, and has been called out by industry programmers. TBH, I take it back. I don't care if he has an agenda or not. It doesn't matter. He's made a fool out of himself time and time again.
So he's made a few mistakes here and there; people make mistakes. I still don't see what "downplaying the SSD" actually means without some context. WRT PS5 RT, that sounds like it was a mistake or maybe a misunderstanding on his part. However, it's often been speculated that Sony's RT implementation is different from standard RDNA 2; that was actually one of the earliest rumors about the system. This can still be the case while still being hardware-accelerated.
I've seen many other trusted people over the course of next-gen speculation get things wrong multiple times, or say things in a way where it seemed they were lying by omission. Plenty of insiders made these kinds of mistakes, for example. A few of them, as well as other people with technical knowledge, made a lot of mistakes, and some were arguably misleading, not just WRT PS5 specs but also Series X and Series S specs too.
But for some odd reason Alex seems to have a target on his back from people with some hateboner for DF. I do find that kind of amusing and also kinda weird.
Based on what you are "sure"?
From what I've seen of their posts on other forums surrounding performance results.
You do understand what's missing from the full bandwidth figures, right? They practically imply the bandwidth works out to 336 + 560 GB/s, but that can't be possible. Every single time the CPU touches that RAM, it occupies half the bandwidth of the RAM bank, locking the GPU out of it.
You're describing bus contention. PS5 has a similar issue; all hUMA designs struggle with bus contention, since bus arbitration means only one processor component can drive the main bus to memory at a time.
Series X's issue with this might be somewhat more exacerbated compared to PS5, but in typical usage the bandwidth accesses should never drag down to an average equivalent of 448 GB/s; it largely depends on how CPU-intensive a game is in terms of busywork outside of handling drawlists/drawcalls for the GPU.
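As a toy model of that contention (hypothetical helper, treating arbitration as simple time-slicing, using the thread's illustrative numbers rather than measured values):

```python
# Toy time-slicing model of bus contention: if only one component can
# drive the bus at a time, the GPU's average bandwidth scales with the
# fraction of time it actually owns the bus.

def gpu_avg_bw(total_bw_gbs: float, cpu_bus_fraction: float) -> float:
    """Average GPU bandwidth when the CPU holds the bus for
    `cpu_bus_fraction` of the time (hypothetical model)."""
    return total_bw_gbs * (1.0 - cpu_bus_fraction)

# For Series X's 560 GB/s fast pool, the CPU would have to hold the bus
# a full 20% of the time for the GPU's average to fall all the way to
# PS5's 448 GB/s:
print(gpu_avg_bw(560, 0.20))  # 448.0
```

In practice the CPU's bus share is nowhere near that high in most games, which is why the average shouldn't drag down that far.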
Typically these CPUs shouldn't require more than 30 GB/s - 50 GB/s of bandwidth when in full usage. Even tacking on an extra 10 GB/s in Series X's case due to its RAM setup that's still an effective 500 GB/s - 520 GB/s for the GPU to play with, compared to 398 GB/s - 418 GB/s for PS5's GPU (factoring out what the CPU might be using).
...although this also doesn't account for audio or the SSD, both of which eat away at bandwidth. So the effective bandwidth that both Series X's and PS5's GPUs typically get to use is somewhat lower. Everything I'm describing also assumes non-GPU processes occupy the bus for an entire second, when in reality that is almost never true; they're constantly alternating access on the order of frames or, more accurately, cycles.
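The back-of-the-envelope math above can be sketched like this (all figures are the assumed ranges from this thread, not measured values, and the helper function is hypothetical):

```python
# Rough effective-GPU-bandwidth estimate using the illustrative numbers
# from this thread (30-50 GB/s assumed CPU usage, plus ~10 GB/s extra
# on Series X due to its split RAM setup).

def effective_gpu_bw(total_bw: float, cpu_bw: float, overhead_bw: float = 0.0) -> float:
    """Bandwidth left for the GPU once CPU (and other) traffic is
    subtracted from the shared bus. Ignores audio/SSD traffic."""
    return total_bw - cpu_bw - overhead_bw

# Series X: 560 GB/s peak on the fast pool.
xsx_low  = effective_gpu_bw(560, 50, overhead_bw=10)  # 500 GB/s
xsx_high = effective_gpu_bw(560, 30, overhead_bw=10)  # 520 GB/s

# PS5: 448 GB/s unified pool, same assumed CPU range.
ps5_low  = effective_gpu_bw(448, 50)  # 398 GB/s
ps5_high = effective_gpu_bw(448, 30)  # 418 GB/s

print(f"XSX GPU: {xsx_low}-{xsx_high} GB/s, PS5 GPU: {ps5_low}-{ps5_high} GB/s")
```

Again, this subtraction treats the CPU as hogging its share for a full second, which overstates the real cost.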
You're acting like PS5 doesn't have a single advantage in its GPU over XSX. While XSX may have higher compute throughput, PS5's GPU has higher pixel and rasterization throughput than XSX's. So technically PS5 also has better potential for graphics than XSX. What this means is that different games will show each console GPU's strengths and weaknesses: games that are rasterization- or pixel-fillrate-bound will run faster on PS5 (i.e. higher fps, or higher sustained resolution if dynamic res is used), and games that favor compute or texture throughput will pull ahead on XSX.
It really depends. Series X's GPU advantage isn't just theoretical compute, but also BVH intersections for RT as well as texture (texel) fillrate. The latter is interesting because while texels can be used as pixels, they don't have to be; they can also serve as LUTs for certain effect data, like shadow maps and tessellation.
PS5 definitely has advantages in pixel fillrate, culling rate, and triangle rasterization rate, but we can't forget that the GDDR6 memory is going to be a big factor here, because not all working data will be able to reside in the GPU caches. PS5 probably has an extra 512 MB of GDDR6 for game data compared to Series X (14 GB vs. 13.5 GB), but its RAM is 112 GB/s slower. Its SSD is 125% faster, but that's "only" a fill rate of up to 22 GB/s, which is just a fraction of the 112 GB/s RAM delta.
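Putting those RAM-vs-SSD numbers side by side (the figures are the ones quoted in this thread; the comparison itself is just arithmetic):

```python
# Comparing the RAM bandwidth delta against PS5's best-case SSD
# throughput, using the numbers quoted in this thread.

ps5_ram_bw = 448   # GB/s, unified pool
xsx_ram_bw = 560   # GB/s, fast pool
ram_delta = xsx_ram_bw - ps5_ram_bw   # 112 GB/s in XSX's favor

ps5_ssd_peak = 22  # GB/s, best-case compressed throughput

# Even PS5's best-case SSD rate is only a fraction of the RAM delta:
fraction = ps5_ssd_peak / ram_delta
print(f"{fraction:.3f}")  # ~0.196, i.e. under 20% of the delta
```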
Then we also need to take into account that several games so far have shown remarkably close load times (i.e., getting data from storage into memory) between Sony's and Microsoft's systems; where PS5 is pulling ahead, it's only by a couple of seconds at most, and in some cases not even that. So essentially both systems seem able to populate their free RAM in virtually identical amounts of time. (I guess if PS5's specific I/O features are going to flex anywhere, it would be in asset streaming; the question is how big that flex would be over Series X, and I'm thinking the delta probably won't be that big on this factor either.)
Basically, we can't say for certain that in EVERY case a game that favors one set of strengths will run better on Series X, or one that favors another set will run better on PS5, because you can't simply look at GPU strengths and weaknesses in isolation from the rest of the system's design, top to bottom. And just like some people have underestimated the impact of variable frequencies as a benefit of Sony's design, I think some have underestimated the impact of RAM bandwidth (even with the fast/slow RAM pool split) and the particulars of Microsoft's design in terms of memory and SSD I/O.