If the frequencies were fixed at max values, then it would overheat, correct?
Also does the developer need to be aware of how to move the power from GPU or CPU or does the PS5 do it automatically?
If it's automatic does it mean that it somehow "knows" what's coming up? Is this feasible for most games?
For CPU-intensive games, would it mean you're mostly going to get closer to 9 TF from this thing (e.g. Assassin's Creed)?
Just curious.
TF is the rating of the CUs (with their specific parameters) multiplied by clock speed, i.e. how many floating-point operations the GPU is capable of per SECOND. If the game is CPU intensive, meaning it needs more CPU than GPU and the CPU would be the bottleneck, what exactly are you asking about? The CPU is likely going to run at full clocks alongside the GPU, and we don't have any details on power consumption. Cerny stated a few percent, so based on what he said I'd say the most you'll drop to is around 10 TF. Whether it ever drops into the 9s, who knows.
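For reference, here's a rough sketch of how that TFLOPS number is derived. The 64 shaders per CU and 2 FLOPs per clock are the standard RDNA 2 assumptions, not anything PS5-specific:

```python
# Rough sketch of how a GPU's peak FP32 TFLOPS rating is derived.
# Assumes RDNA 2-style CUs: 64 shaders per CU, each doing
# 2 floating-point operations per clock (a fused multiply-add).

SHADERS_PER_CU = 64
FLOPS_PER_CLOCK = 2  # one FMA counts as two FLOPs

def peak_tflops(compute_units: int, clock_ghz: float) -> float:
    """Peak FP32 throughput in teraFLOPS."""
    return compute_units * SHADERS_PER_CU * FLOPS_PER_CLOCK * clock_ghz / 1000

# PS5: 36 CUs at up to 2.23 GHz
print(f"PS5 peak: {peak_tflops(36, 2.23):.2f} TF")  # ~10.28 TF
```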
Here is another way to phrase your question:
If a game is CPU intensive and not optimized for heavy use of the GPU, and isn't programmed to use enough compute units because it targets a common-denominator platform (for instance the PS5, or a 5700 XT with fewer than 40 CUs), what would the Xbox's effective TFLOP figure be? Let's do the math. Say the game only keeps 40 CUs busy, which is still more than the PS5 even has, and the Xbox is clocked at 1825 MHz. That leaves 12 of its 52 CUs idle, and 40 CUs at that clock works out to roughly 9.3 TF. Meanwhile Sony's system only has 36 CUs, so it's 4 short of what the Xbox is using, yet it's still crunching at its full 10.3 TF. Why is that?
Because per CU, the Sony console has a much higher clock speed.
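A quick back-of-the-envelope version of that comparison, using the same peak-TFLOPS formula as above (CUs × 64 shaders × 2 FLOPs per clock × clock). The announced specs are real; the "only 40 CUs busy" scenario is purely a hypothetical:

```python
# Back-of-the-envelope comparison: what each console delivers if a game
# only keeps 40 CUs busy. CU counts and clocks are the announced specs;
# the 40-CU usage figure is a hypothetical assumption.

def peak_tflops(compute_units: int, clock_ghz: float) -> float:
    return compute_units * 64 * 2 * clock_ghz / 1000  # FP32 teraFLOPS

# Xbox Series X: 52 CUs at 1.825 GHz, but only 40 of them in use
print(f"Xbox, 40 of 52 CUs busy: {peak_tflops(40, 1.825):.2f} TF")  # ~9.34 TF
print(f"Xbox, all 52 CUs busy:   {peak_tflops(52, 1.825):.2f} TF")  # ~12.15 TF

# PS5: all 36 CUs at up to 2.23 GHz -- fewer CUs, higher clock per CU
print(f"PS5, all 36 CUs busy:    {peak_tflops(36, 2.23):.2f} TF")   # ~10.28 TF
```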