Would you consider yourself unbiased when comparing the next gen consoles?
Really man, is that a question?
Probably (there is a chance that RDNA2 with that many cores is unstable at higher clocks), at the price of more unpredictable performance and the GPU feeding on CPU power (the chances of developers choosing the CPU over the GPU are about as big as Horizon 2 being an Xbox exclusive). I still want to know how low the PS5's clocks go, especially the CPU, because the GPU clocks they've given us are very high judging by the current generation of RDNA cards (much closer to the maximum overclock range than to base clocks).
It's not only about instability (which can happen in a bigger chip too). For the XSX to keep a locked clock under any workload, they need to underclock the system, so that when a heavy workload appears in the CPU's queue, for example an AVX instruction stream, the CPU doesn't need to reduce its clock to cut the peak in power consumption.
This is the traditional approach in the console world, nothing new. It has its advantages, like constant clocks no matter what (as long as the cooling solution is good enough), but in exchange your chip is not using all its potential, and even when the system is doing simple things it wastes a lot of energy. That's also why they can't just raise the clock of one of the chips so easily if they don't control the whole SoC the way Sony does.
In the case of the PS5, the only moment the CPU or GPU will reduce its clocks is when a workload on the CPU/GPU is so heavy that the clock has to change to stay within the power consumption budget. So its approach is different from the usual one for consoles, where thermals are managed with locked clocks. Yes, there will be some situations where a frequency reduction of a couple of percent is necessary, but this lets Sony reach much higher frequencies most of the time, which helps with more things than just increasing the FLOPs.
Cerny mentioned an example: if you reduce the clocks by 10% (say, because the current workload is easy enough that you don't need more), power consumption drops by about 27%. So going the other way, for a power reduction of only 10%, I expect the clock reduction to be only around 3-3.5%.
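Both of those numbers line up with a simple cubic model: if voltage scales roughly linearly with frequency, dynamic power scales with the cube of the clock. Here is a minimal sketch under that assumption (the cubic exponent is my assumption, not something Cerny stated):

```python
# Minimal sketch: assume voltage scales ~linearly with frequency,
# so dynamic power P ~ f * V^2 ~ f^3. The exponent of 3 is an
# assumption for illustration, not an official figure from Cerny or AMD.

def power_ratio(clock_ratio: float, exponent: float = 3.0) -> float:
    """Relative power at a given relative clock (1.0 = full clock)."""
    return clock_ratio ** exponent

# Cerny's example: a 10% clock reduction...
print(1 - power_ratio(0.90))  # ~0.27 -> about a 27% power reduction

# Inverted: how much clock do you give up for a 10% power cut?
clock = (1 - 0.10) ** (1 / 3)
print(1 - clock)              # ~0.035 -> roughly a 3.5% clock reduction
```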
One misconception about the GPU and CPU that is very common in the PC gaming community is Clock = Usage, or Clock = Energy consumed, when the reality is a little more complex. A CPU running at 3 GHz doing a simple operation like an 8-bit add will consume less energy than one doing an AVX operation; you can already see this in how PC CPUs reduce their clocks so as not to exceed the power limit set for the chip. Benchmarks show similar consumption between two chips of the same architecture because both chips are running the same basic workloads.
So the relationship is more like Energy consumed = Clock × Workload, and as we know, energy consumed is related to heat generated.
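That "workload" factor shows up in the classic dynamic power equation as an activity factor: P = α · C · V² · f. A rough sketch of the idea follows; the capacitance, voltage, and activity values are made-up illustration numbers, not real silicon figures:

```python
# Rough sketch of the classic dynamic power model: P = alpha * C * V^2 * f.
# alpha (the activity factor) stands in for "workload": how much of the
# chip is actually switching. All values below are made up for illustration.

def dynamic_power(alpha: float, cap_farads: float, volts: float, hz: float) -> float:
    """Dynamic power in watts for a given activity factor, effective
    switched capacitance, core voltage, and clock frequency."""
    return alpha * cap_farads * volts ** 2 * hz

C = 1e-9   # hypothetical effective switched capacitance (1 nF)
V = 1.1    # hypothetical core voltage
F = 3.0e9  # 3 GHz

# Same 3 GHz clock, very different power depending on the workload:
print(dynamic_power(alpha=0.1, cap_farads=C, volts=V, hz=F))  # light 8-bit adds
print(dynamic_power(alpha=0.9, cap_farads=C, volts=V, hz=F))  # heavy AVX work
```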
But even then, situations where both the CPU and GPU run at their max clocks under heavy workloads for long periods of time are not common in video games. Games are not programs whose purpose is to stress your system, so all of this will only happen at specific moments. It's also why a normal PC gamer isn't advised to run stress tests on their PC for long stretches if they want their components to last.