Yeah, after reading the additional info in the DF follow-up interview to the Road to PS5, it was interesting that the power control unit in the APU uses a simulation model of the silicon, so the specifics of each individual APU don’t matter: clock and power respond deterministically to the workload.
I was surprised to see Cerny mention that CPU, GPU and memory controller activity all played a part in the clock rate chosen by the power control unit. So in theory, an algorithm rewritten to be more compute-intensive than memory-intensive might see a clock boost (and a performance boost) if it reduced power draw, or vice versa.
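To make that concrete, here’s a toy sketch of the idea (all the numbers, names and the linear power model are my own made-up assumptions - the real PS5 controller logic isn’t public):

```python
# Toy sketch of a deterministic, model-based clock governor. Power is
# estimated from a *simulated* silicon model fed by activity counters,
# never from temperature or the quirks of the physical chip, so every
# console picks the same clock for the same workload.

POWER_BUDGET_W = 150.0  # hypothetical total SoC power budget

# Hypothetical per-unit power costs of the silicon model, in watts per
# unit of activity (each activity counter runs 0.0..1.0).
CPU_W_PER_ACT = 40.0
GPU_W_PER_ACT = 120.0
MEM_W_PER_ACT = 30.0


def modelled_power(cpu_act: float, gpu_act: float, mem_act: float) -> float:
    """Estimate SoC power from activity counters.

    This is the 'simulation model of the silicon': the same inputs always
    give the same estimate, regardless of which physical chip is running.
    """
    return (cpu_act * CPU_W_PER_ACT
            + gpu_act * GPU_W_PER_ACT
            + mem_act * MEM_W_PER_ACT)


def choose_clock_scale(cpu_act: float, gpu_act: float, mem_act: float) -> float:
    """Pick a clock multiplier so modelled power fits the budget.

    Crude assumption for the sketch: dynamic power scales roughly
    linearly with frequency, so scaling the clock scales the estimate.
    """
    power = modelled_power(cpu_act, gpu_act, mem_act)
    return min(1.0, POWER_BUDGET_W / power) if power > 0 else 1.0


# A memory-heavy workload vs. the same algorithm rewritten to hit memory
# less: if the rewrite lowers modelled power, the clock comes back up.
print(choose_clock_scale(cpu_act=0.6, gpu_act=0.9, mem_act=0.9))  # ~0.94, clock drops
print(choose_clock_scale(cpu_act=0.7, gpu_act=0.9, mem_act=0.4))  # 1.0, full clock
```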
When you consider the usage model of game consoles, the design has to work this way to maintain quality of service.
For example, if someone is playing a 60fps multiplayer game - Street Fighter, say - the game’s logic is tied to the frame rate: hit boxes, network latency correction, control response and so on.
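To see why that’s fragile, here’s a minimal fixed-timestep loop of the sort frame-tied games use (purely illustrative, not from any real engine):

```python
# Game logic advances in whole 1/60 s ticks: hit boxes, input windows
# and rollback netcode all count in frames, so two machines must stay
# in lockstep for a fair match.
import time

TICK = 1.0 / 60.0  # one logic frame


def update_game_state(frame: int) -> None:
    # Placeholder for per-frame logic: read inputs, step hit boxes,
    # reconcile with the network. Must complete within one tick.
    pass


def run(frames: int) -> None:
    next_tick = time.perf_counter()
    for frame in range(frames):
        update_game_state(frame)
        next_tick += TICK
        # If clocks dropped on this unit and update_game_state overran
        # the tick, this sleep is skipped and the player sees stutter
        # and delayed inputs - a real competitive disadvantage.
        time.sleep(max(0.0, next_tick - time.perf_counter()))


run(60)  # simulate one second of game time
```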
It would be a poor experience if someone playing in a different ambient environment (or just a “loser” in the silicon lottery) were to find they’re playing at a disadvantage because their system is dropping clocks while their opponent’s isn’t.
In fact such a scenario would spell disaster for the brand, so they can’t allow varying clocks or power to impact performance in an unpredictable way.
With traditional console design, a margin of TDP and compute power is left in reserve to ensure a stable frame rate. Couple that with fixed clocks and the experience is the same on all units. That takes a lot of testing, of course, and leaves performance on the table - and all of that is down to the developers to work out.
Sony’s approach - an SoC “model” estimating load to manage clocks, plus SmartShift for power allocation - means that each unit reacts in the same way to the code it’s running, regardless of ambient conditions or the particular piece of silicon.
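Roughly, the SmartShift side of it might look like this - a fixed total budget with slack handed between CPU and GPU (hypothetical numbers; AMD’s actual algorithm isn’t public):

```python
# Toy sketch of SmartShift-style power shuffling. Unused CPU budget is
# donated to the GPU, and vice versa, so the total SoC budget stays
# fixed while the split follows the demand the *model* reports -
# identically on every unit.

TOTAL_BUDGET_W = 150.0  # hypothetical fixed SoC budget
CPU_BASE_W = 50.0       # nominal split
GPU_BASE_W = 100.0


def split_budget(cpu_demand_w: float, gpu_demand_w: float) -> tuple[float, float]:
    """Return (cpu_W, gpu_W): each side gets its demand up to its base
    allocation, and any slack is donated to the side still hungry."""
    cpu_w = min(cpu_demand_w, CPU_BASE_W)
    gpu_w = min(gpu_demand_w, GPU_BASE_W)
    slack = TOTAL_BUDGET_W - cpu_w - gpu_w
    cpu_w += min(slack, max(0.0, cpu_demand_w - cpu_w))
    slack = TOTAL_BUDGET_W - cpu_w - gpu_w
    gpu_w += min(slack, max(0.0, gpu_demand_w - gpu_w))
    return cpu_w, gpu_w


# GPU-heavy scene: the lightly loaded CPU donates 20 W to the GPU.
print(split_budget(cpu_demand_w=30.0, gpu_demand_w=130.0))  # (30.0, 120.0)
```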
It also means devs code to max out that model - they don’t have to leave performance “on the table”, because the system is managed algorithmically at runtime; performance and response are predictable and identical on every console.
Now having said that, there are risks. If Sony haven’t done a good job, the SoC model will be too conservative and there won’t be any dev “tricks” to extract more performance - they’re handcuffed by the system. The code can only be as good as the SoC model Sony implement.
If Sony go the other way and don’t have good QA, then their model may run too close to the limit of the hardware and, in some environments, lead to instability or failures.
So a lot remains to be seen with this solution. But the tech is interesting, and perhaps revolutionary if it does lead to performance and/or development gains over traditional console design.
I’m not sure how easy it will be to modify that SoC model post-release either. Any changes to the model would be like changing the hardware - games could be affected in weird ways.
So yeah - there are a lot of implications and at the moment not much in the way of data to go on.