A fixed system at peak compute is universally better; that's inarguable, because a fixed compute ceiling can be coded against without unknown variables and pushed to capacity. There's no benefit to variance. Variance introduces convolution and exceptions into the code, and it changes tolerances. If you know the system boosts but can fall below a certain threshold, you're forced to treat the lower limit as the baseline and introduce exceptions in code for the event it can push higher. That's convoluted design, and it's developing around a limitation.
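To make that concrete, here's a minimal sketch of what "coding around the lower limit" looks like. Everything in it is hypothetical for illustration: the `BASELINE_MHZ` value and the idea that a game queries its current GPU clock are assumptions, not anything Sony actually exposes.

```python
# Hypothetical sketch: budgeting content against the worst-case clock,
# with an opportunistic exception path when the GPU happens to boost.
BASELINE_MHZ = 2000   # assumed worst-case sustained clock (illustrative)
PEAK_MHZ = 2230       # the advertised 2.23 GHz cap

def pick_quality(gpu_clock_mhz: float) -> str:
    """Content must be authored to fit BASELINE_MHZ; anything above
    the baseline is bonus headroom handled as a special case."""
    if gpu_clock_mhz <= BASELINE_MHZ:
        return "baseline"   # the level every scene must hit
    return "boosted"        # exception path: spend the extra clocks

# On a fixed-clock machine there is exactly one path to tune;
# on a variable-clock machine both paths have to be tested.
for clock in (BASELINE_MHZ, PEAK_MHZ):
    print(clock, pick_quality(clock))
```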
The only way a system of the PlayStation 5's nature would be introduced in its present capacity is because Sony is taking a lower-specced machine and pushing it in ways it was never intended. That's why it has to offset between the GPU and CPU, not because of some intelligent design. If it were intelligent design it would never have to throttle and shift resources; they would simply have given it more CUs and a fixed frequency, which would nullify this entire power-struggle mess.
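For what it's worth, the offsetting itself is easy to picture: a fixed power budget shared between CPU and GPU, where one part's slack becomes the other's headroom. A toy sketch (the 200 W budget and the split rule are made-up illustrations, not Sony's actual SmartShift algorithm):

```python
# Toy model of a shared power budget: whatever the CPU doesn't draw,
# the GPU may spend, and vice versa. All numbers are illustrative.
TOTAL_BUDGET_W = 200.0  # assumed total SoC power budget (made up)

def split_budget(cpu_demand_w: float) -> tuple[float, float]:
    """Grant the CPU what it asks for (capped), hand the rest to the GPU."""
    cpu_w = min(cpu_demand_w, TOTAL_BUDGET_W)
    gpu_w = TOTAL_BUDGET_W - cpu_w
    return cpu_w, gpu_w

print(split_budget(60.0))   # light CPU load -> GPU gets the headroom
print(split_budget(120.0))  # heavy CPU load -> GPU has to downclock
```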
Well, it's not, because when the GPU is not being stressed to its maximum potential it's still operating at 1,825 MHz. That is inefficient, but it doesn't matter because it's invisible to the user experience. Just because you say something is incoherent doesn't make it so, and efficiency isn't a single, one-dimensional aspect of a system's operation. Efficiency needs to be defined on a case-by-case basis for multiple aspects of function. You can't just say "X system is more efficient than Y system"; you have to break down in which ways it is.
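A trivial example of why "efficient" needs qualifying. The wattages and TF figures below are invented purely for illustration, not measurements of either console:

```python
# Two hypothetical machines, invented numbers: one is better per watt
# under load, the other wastes less power when lightly loaded.
# "Which is more efficient?" has no single answer.
machines = {
    "fixed-clock":    {"load_tflops": 12.0, "load_w": 180, "idle_w": 60},
    "variable-clock": {"load_tflops": 10.0, "load_w": 140, "idle_w": 30},
}
for name, m in machines.items():
    print(name,
          f"perf/watt under load: {m['load_tflops'] / m['load_w']:.3f} TF/W,",
          f"light-load draw: {m['idle_w']} W")
```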
You do realise that all new devices now work on variable clocks except MS consoles? Sony now with the PS5, even Nintendo moves clocks, PCs, Apple... shall I go on? So MS, all alone in keeping fixed clocks, is better because you say so... /s
All we know is that the paper TF number of the XSX is higher, and its bandwidth is higher for the first 10 GB. The XSX will lose the advantage per TF for larger games, as it's a shared bus with slower access to the last 6 GB.
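The paper numbers themselves are just arithmetic over the publicly stated specs (CUs × 64 shaders × 2 FLOPs per clock × frequency):

```python
# FP32 TFLOPS = CUs * 64 shaders/CU * 2 FLOPs/clock * GHz, /1000 for TF.
# CU counts and clocks are the publicly stated console specs.
def tflops(cus: int, ghz: float) -> float:
    return cus * 64 * 2 * ghz / 1000.0

print(f"XSX: {tflops(52, 1.825):.2f} TF")  # ~12.15 TF at a fixed clock
print(f"PS5: {tflops(36, 2.23):.2f} TF")   # ~10.28 TF at the 2.23 GHz cap
```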
We don't know the variability of the PS5 clocks other than Cerny stating that a 2% frequency drop gives 10% less power. That's it. Everything else you spewed is just tales and FUD nonsense for now.
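That 2%-for-10% figure is plausible on its face, since dynamic power scales roughly as f × V², and near the top of the voltage/frequency curve a small clock cut permits a larger voltage cut. A back-of-envelope check (the 4% voltage drop is my assumption, not Cerny's figure):

```python
# Dynamic power ~ f * V^2. Assume a 2% frequency cut lets voltage drop
# ~4% (steep V/f curve near peak; the 4% is an illustrative assumption).
f_scale = 0.98
v_scale = 0.96
power_scale = f_scale * v_scale ** 2
print(f"power: {power_scale:.3f} of original "
      f"(~{(1 - power_scale) * 100:.0f}% savings)")  # ~10%
```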
We don't know the max clock at which RDNA2 stops giving performance benefits. AMD implied that RDNA2 in both consoles employs EUV on some critical layers, likely on the fins and gates, allowing the AMD-stated 50% perf-per-watt improvement. They might have improved some of the semiconductor tech in the gates as well.
Cerny did say that there was no benefit above 2.23 GHz for RDNA2, if you bother to read the articles; that's all we have to go on... Oh, and you and Timdog and the Windows Central fanboys of course.
We don't know the effects of VRS, cache scrubbers, or indeed other things that may affect efficiency.
You don't know either; that DEFINITELY is for sure.
Both consoles seem bandwidth-constrained when pushing 4K and high detail compared to equivalent-TF PC parts; we don't know how well either will perform...
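Rough bandwidth-per-TF numbers from the public specs make the point, and the console figures are optimistic since the CPU shares the same bus. The 2080 Ti is just one comparison point I've picked:

```python
# GB/s of memory bandwidth per TFLOP, computed from public specs.
# Console GPUs share their bus with the CPU, so their effective
# number is lower than this headline figure.
parts = {
    "XSX (fast 10 GB pool)": (560, 12.15),
    "PS5":                   (448, 10.28),
    "RTX 2080 Ti (PC)":      (616, 13.45),
}
for name, (gbps, tf) in parts.items():
    print(f"{name}: {gbps / tf:.1f} GB/s per TF")
```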