No one is talking about utilization. Utilization is how much of the available horsepower is in use at any time.
Games don't work only on teraflops or pixel/texture fillrate. When you load a texture to fill a triangle, the time it takes to switch textures, change shaders, issue the draw call, or wait on something stuck in the CPU is time not spent on floating-point operations. You lose TFLOPS availability every time you spend time on other things.
When Sony gives its number, it isn't talking about both peak utilization and peak availability, although in their implementation the two necessarily go hand in hand. Their system will only use enough power at any given time to match the requirements of the software's requests. Great, we got it.
Wrong. Sony gave you the number 10.3 because that is the maximum theoretical FLOPS; the system won't clock faster than that. The actual utilization is dictated by the game, and different games have different uses.
But their marketing, while acknowledging this variability, still anchors the buyer on their peak rate. So now we all have to change how we view available resources to match Sony's marketing by determining utilization vs peak and arguing THAT? Why?
Utilization depends on the game: the game dictates what has to be used, and the complexity of its frames dictates what needs to be resolved.
Clocks go hand in hand with available resources; on PC it is very common to talk about clocks and how they vary.
Sony is giving you a theoretical maximum for FLOPS for its system. This is a real value; you don't have to adapt to it any differently than to the XBSX's maximum values or any other GPU's maximum values, and webpages dedicated to cataloging GPUs don't have much to change to accommodate these specifics. And apparently you don't know that SmartShift comes from AMD, so if there is something changing how we view available resources, it's because of AMD, and AFAIK AMD is a major GPU producer.
We know utilization fluctuates but we never argued "theoretical" output in terms of efficiency by balancing the power draw between CPU and GPU before.
It's common in hardware development and in overclocking, and it's common in hardware discussions. Maybe it's new to you, or maybe I am old.
Does anybody else remember the turbo button in the '90s?
Now people want to say the 12.147 of XSX is theoretical... but it isn't. That speed isn't based on efficiency, it's based on clock and worker count.
Wrong, it IS a theoretical maximum. You can achieve it only if you do nothing but floating-point operations: no textures, no assets, nothing, just pure floating-point operations calculated one after another for a second. A game is not like that, so you don't get that maximum during a game.
The maximum output available from the GPU at any instant, and perpetually, is 12.147. The clock doesn't change even if the power draw does. You can access 12.147 at all times, all the time, and not have to worry about the power draw of the CPU in the equation. If you need to, they can both run at full speed forever with no need to change clock. You will get 12.147 if and when you need it.
It is clear you have no idea what you are talking about; games don't work like that. The 12.147 is not available "at any instant": it is the maximum number of operations during a second. A game runs in frames. If your game runs at 60 frames per second, divide those 12.147 TFLOPS by the fps; then, within the frame time available (~16.7 ms), you have multiple things to do: load shaders, read input, cull objects, and so on. Many of those require FLOPS, and that is where you will spend them, but since 12.147 is the maximum possible during a second, you only have a fraction of it available during the frame time, and the sum of operations made across your 60 frames is not going to be 12.147 TFLOPS, because you had to do other things during that second. What it has to do depends on the game that is running, its API, and what they do under the hood. You can make optimizations here and there that may result in more (or fewer) floating-point operations per second available, but you don't get the theoretical maximum, because a game is not just a series of floating-point operations. The number is good for measuring against other GPUs, but it is not what you can actually use during a game.
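To put rough numbers on the frame-budget point, here is a back-of-the-envelope sketch. The 60 fps split comes straight from the argument above; the 30% overhead figure is purely illustrative, since the real non-FLOP share varies by game and by frame:

```python
# Back-of-the-envelope frame budget for a 12.147 TFLOPS GPU at 60 fps.
PEAK_TFLOPS = 12.147          # theoretical maximum, in trillions of FLOPs/second
FPS = 60

frame_time_ms = 1000.0 / FPS                  # ~16.7 ms per frame
flops_per_frame = PEAK_TFLOPS * 1e12 / FPS    # FLOP ceiling per frame (~202 GFLOP)

# Suppose (illustrative number, not measured) 30% of each frame goes to
# shader loads, draw-call submission, input, culling, CPU waits, etc.
# Only the remainder can be pure floating-point math:
non_flop_fraction = 0.30
achievable_tflops = PEAK_TFLOPS * (1 - non_flop_fraction)

print(f"frame time: {frame_time_ms:.1f} ms")
print(f"FLOP ceiling per frame: {flops_per_frame / 1e9:.0f} GFLOP")
print(f"achievable with 30% overhead: {achievable_tflops:.2f} TFLOPS")
```

Even this toy split shows why summed per-frame FLOPs over a second land well below the headline number.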
Only in the case of the PS5 do we have to re-lens the world to match their PR and try to rethink everyone else's known historical and accurate view about available resources. Why?
No, you only have to account for the theoretical maximum, just like with the XBSX. The clock adjusts based on workload, so the theoretical maximum works the same way: if you ran nothing but a series of floating-point operations on the PS5, it would run them at maximum clock and hit the theoretical maximum of 10.3.
So what is the output of the PS5 at 2.23 GHz?
2304*2*2.23 = 10.275TF
2304*2*2.20 = 10.137TF
2304*2*2.18 = 10.05TF
2304*2*2.17 = 10.00TF < any speed below ~2.17 GHz takes the GPU into the sub-10TF range.
So a roughly 60 MHz drop in speed changes the narrative. Put another way, about a 3% drop in speed puts the GPU just below 10TF. That is not a story anyone in that camp wants to tell, but we can calculate it very easily.
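The figures above all follow the standard shaders × 2 ops per cycle (fused multiply-add) × clock formula, which is easy to sketch and check:

```python
# TFLOPS = shader cores * 2 ops per cycle (FMA) * clock in GHz / 1000
def tflops(shaders: int, clock_ghz: float) -> float:
    return shaders * 2 * clock_ghz / 1000.0

PS5_SHADERS = 2304  # 36 CUs * 64 shaders per CU

for clock in (2.23, 2.20, 2.17):
    print(f"{clock} GHz -> {tflops(PS5_SHADERS, clock):.3f} TF")
```

At 2.23 GHz this gives ~10.276 TF, and the result dips under 10 TF once the clock falls below ~2.17 GHz, matching the table above.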
The output will be whatever is required to run the game. It's really very simple.
The idea of a console is to run a game. If running nothing but floating-point numbers entertains you, that's fine by me, but I prefer to run a game, so I am interested in the console running a complex game with the performance required to do so.
The curious thing is that not all parts of a game require the same amount of work to maintain a framerate. If you pause your game, go to a menu, or sit in a hub area, the console can maintain the same framerate using less power and a lower clock, then clock up when the area is more complex and still maintain the framerate. The console will cool itself better since it is using less power in certain areas. It will still have idle time here and there like other systems do, but it will achieve its performance while also saving some power when it can. That is really good in my opinion.
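That last idea can be sketched as a toy model: pick the lowest clock that still finishes the frame's work inside the frame budget, instead of always running flat out. The clock steps and workload numbers here are made up for illustration; the real SmartShift/boost logic is AMD's and far more involved:

```python
# Toy model: choose the lowest clock that meets the frame deadline.
MAX_CLOCK_GHZ = 2.23
CLOCK_STEPS = [1.8, 1.9, 2.0, 2.1, 2.23]   # hypothetical clock states
FRAME_BUDGET_MS = 1000.0 / 60               # 60 fps target (~16.7 ms)

def frame_time_ms(work_gflop: float, clock_ghz: float, shaders: int = 2304) -> float:
    # time = work / throughput, where throughput = shaders * 2 * clock (GFLOP/s)
    gflops_per_s = shaders * 2 * clock_ghz
    return work_gflop / gflops_per_s * 1000.0

def choose_clock(work_gflop: float) -> float:
    for clock in CLOCK_STEPS:               # try the lowest clock first
        if frame_time_ms(work_gflop, clock) <= FRAME_BUDGET_MS:
            return clock                    # menus/hub areas settle on a low clock
    return MAX_CLOCK_GHZ                    # complex scenes run at full speed

print(choose_clock(50.0))    # light frame, e.g. a pause menu
print(choose_clock(160.0))   # heavier frame
print(choose_clock(1000.0))  # frame that can't fit: run flat out anyway
```

In this sketch a light frame settles on the lowest step, a heavier one steps up just enough, and an impossible workload simply runs at maximum clock, which is the behavior described above.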