So how does boost work in this case? Put simply, the PlayStation 5 is given a set power budget tied to the thermal limits of the cooling assembly. "It's a completely different paradigm," says Cerny. "Rather than running at constant frequency and letting the power vary based on the workload, we run at essentially constant power and let the frequency vary based on the workload."
An internal monitor analyses workloads on both CPU and GPU and adjusts frequencies to match. While it's true that every piece of silicon has slightly different temperature and power characteristics, the monitor bases its determinations on the behaviour of what Cerny calls a 'model SoC' (system on chip) - a standard reference point for every PlayStation 5 that will be produced.
"Rather than look at the actual temperature of the silicon die, we look at the activities that the GPU and CPU are performing and set the frequencies on that basis - which makes everything deterministic and repeatable," Cerny explains in his presentation. "While we're at it, we also use AMD's SmartShift technology and send any unused power from the CPU to the GPU so it can squeeze out a few more pixels."
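The mechanism Cerny describes can be sketched as a simple control loop: estimate power draw from workload activity, cap each frequency so the modelled draw fits the budget, and hand any headroom the CPU leaves unused over to the GPU. The sketch below is purely illustrative - the power model, budget split, coefficients and clock ceilings are all invented assumptions, not real PlayStation 5 firmware behaviour - but it shows why the scheme is deterministic: the same activity levels always yield the same frequencies, regardless of chip temperature.

```python
# Hypothetical sketch of activity-based frequency capping under a shared
# power budget, loosely modelling the approach Cerny describes.
# Every number and function here is an illustrative assumption.

TOTAL_POWER_BUDGET_W = 200.0  # assumed SoC power budget (not a real figure)

def power_draw(freq_ghz: float, activity: float, coeff: float) -> float:
    """Toy power model: draw scales with workload activity and roughly
    with the cube of frequency (dynamic power ~ f * V^2, V rising with f)."""
    return coeff * activity * freq_ghz ** 3

def cap_frequency(budget_w: float, activity: float, coeff: float,
                  f_max_ghz: float) -> float:
    """Highest frequency (up to f_max_ghz) whose modelled draw fits the budget."""
    f = (budget_w / (coeff * max(activity, 1e-6))) ** (1 / 3)
    return min(f, f_max_ghz)

def set_clocks(cpu_activity: float, gpu_activity: float):
    """Deterministic clock selection from activity counters alone -
    no temperature input, so identical workloads give identical clocks."""
    cpu_budget = TOTAL_POWER_BUDGET_W * 0.35   # assumed split
    gpu_budget = TOTAL_POWER_BUDGET_W * 0.65

    cpu_f = cap_frequency(cpu_budget, cpu_activity, coeff=8.0, f_max_ghz=3.5)
    cpu_used = power_draw(cpu_f, cpu_activity, coeff=8.0)

    # "SmartShift": unused CPU power is donated to the GPU budget.
    gpu_budget += cpu_budget - cpu_used
    gpu_f = cap_frequency(gpu_budget, gpu_activity, coeff=12.0, f_max_ghz=2.23)
    return cpu_f, gpu_f
```

Under this toy model, a lightly loaded CPU pins at its clock ceiling while drawing little power, and the leftover budget lets the GPU run slightly higher than it could with a fully loaded CPU - the "few more pixels" Cerny mentions.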
It's a fascinating idea - and entirely at odds with Microsoft's design decisions for Xbox Series X - and it likely means developers will need to be mindful of power consumption spikes that could pull clocks down and lower performance. For Sony, however, it means PlayStation 5 can hit GPU frequencies way, way higher than we expected - significantly higher than anything seen from existing AMD parts in the PC space. By extension, more performance can be extracted from the 36 available RDNA 2 compute units.