I found this interesting and I wonder whether the XSX can do the same thing. I wouldn't know how it affects games, however.
This is mumbo-jumbo. Both consoles use NAND, which is a page- and block-level technology: data is read and programmed (written) at the page level, and erased at the block level, which is why rewriting anything in place means erasing a whole block first. That's just the nature of the technology. You cannot access NAND data at the byte (let alone bit) level unless you first copy it into a different type of memory with that capability, such as SRAM, DRAM, MRAM or NOR flash (the latter has very slow write speeds, btw, but is AFAIK the only one with bit-level read accessibility, which is why it's often used for XIP). At that point it isn't the NAND providing that level of granularity, but the other memory the data was copied into.
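To make that concrete, here's a toy sketch of the constraint (the page/block sizes and all the names are made up, and a real controller/FTL is far more complex): "reading a byte" still means pulling an entire page off the NAND into RAM first, and erasing only ever happens per block.

```python
# Toy model of NAND access granularity. Sizes are illustrative only.
PAGE_SIZE = 4096        # bytes per page (typical order of magnitude)
PAGES_PER_BLOCK = 64    # pages per erase block

class ToyNAND:
    def __init__(self, num_blocks):
        self.blocks = [bytearray(PAGE_SIZE * PAGES_PER_BLOCK)
                       for _ in range(num_blocks)]

    def read_page(self, block, page):
        # Smallest readable unit: one full page, copied out to RAM.
        start = page * PAGE_SIZE
        return bytes(self.blocks[block][start:start + PAGE_SIZE])

    def erase_block(self, block):
        # Smallest erasable unit: the whole block.
        self.blocks[block] = bytearray(PAGE_SIZE * PAGES_PER_BLOCK)

def read_byte(nand, block, page, offset):
    # Byte-level access only exists AFTER the page lands in another memory.
    page_buf = nand.read_page(block, page)  # page-granular NAND read
    return page_buf[offset]                 # byte-granular RAM access

nand = ToyNAND(num_blocks=8)
print(read_byte(nand, block=0, page=3, offset=17))
```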
So basically, when it comes to explaining the basics of how SSDs (particularly the NAND they use) work, that giant continuous paragraph gets it wrong; they're describing something that isn't possible. The closest you could get would be something like ReRAM (which AFAIK doesn't exist in any notable capacities and is still at a very early stage), MRAM (capacities of only a few MB, and very expensive even at that), or 3D XPoint (AFAIK quite expensive per GB, though it exists in fairly large capacities and mass availability, mainly for the data-center market). But even those "only" offer byte-level accessibility at the lowest granularity.
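For contrast, this is roughly what byte-level access looks like when the medium itself is byte-addressable, e.g. a 3D XPoint/Optane namespace exposed as a DAX-mounted file. The path below is an invented example, and you'd need the file to already exist and be large enough; the point is just that the application can load and store individual bytes via mmap without staging whole pages itself.

```python
# Sketch: byte-level loads/stores against a byte-addressable medium.
# "/mnt/pmem0/example.bin" is a hypothetical DAX-mounted file for illustration.
import mmap

with open("/mnt/pmem0/example.bin", "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as m:
        b = m[12345]      # load a single byte
        m[12345] = 0xFF   # store a single byte in place
```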
The CPU is really ~3.2 GHz for the PS5; that's the mode developers are coding around to keep the GPU up to speed. When games push harder, that will drop further as the GPU needs to steal more and more power. Thanks to its fixed clocks, the Series X has roughly a 10% CPU cushion off the bat, minus about 10% of a core reserved for I/O.
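For what it's worth, a quick back-of-envelope using the public clocks (PS5 CPU up to 3.5 GHz variable; XSX CPU 3.8 GHz fixed, or 3.6 GHz with SMT) shows where a "roughly 10%" figure could come from. Note the 3.2 GHz number is the claim above, not a confirmed spec.

```python
# Back-of-envelope check of the "~10% cushion" claim.
def headroom(fixed_ghz, variable_ghz):
    return (fixed_ghz - variable_ghz) / variable_ghz

print(f"{headroom(3.8, 3.5):.1%}")  # no-SMT XSX vs PS5 peak       -> ~8.6%
print(f"{headroom(3.6, 3.2):.1%}")  # SMT XSX vs the claimed 3.2   -> 12.5%
```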
I've heard about the profiles, but IIRC didn't Cerny clarify with DF that the final retail system will handle profiles autonomously? I'd be very curious whether that could even be done, since doesn't the game code itself determine how much power goes to the CPU and the GPU?
They would need some extremely good detection logic and the ability to shift power load between the components almost instantly if they're handling things automatically on their end. At least if the devs decided on profiles, they could query the hardware to adjust power loads ahead of time for specific instances where they expect power usage to be higher than usual. That's just what I'm thinking off the top of my head, though.
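Purely as a thought experiment, the kind of arbiter I'm imagining might look like this. Everything here (the total budget, the floors, the "activity" inputs, the function names) is invented for illustration; Sony hasn't published how their mechanism works in this detail, beyond Cerny saying it's driven by workload activity rather than temperature.

```python
# Hypothetical power-budget arbiter: a fixed total budget split between
# CPU and GPU based on observed activity. All numbers are made up.
TOTAL_BUDGET_W = 200.0
CPU_MIN_W, GPU_MIN_W = 30.0, 80.0

def split_budget(cpu_activity, gpu_activity):
    """activity values in [0, 1]; returns (cpu_watts, gpu_watts)."""
    # Reserve floors so neither unit is starved, then share the
    # remainder proportionally to demand.
    spare = TOTAL_BUDGET_W - CPU_MIN_W - GPU_MIN_W
    demand = max(cpu_activity + gpu_activity, 1e-9)
    cpu_w = CPU_MIN_W + spare * (cpu_activity / demand)
    gpu_w = GPU_MIN_W + spare * (gpu_activity / demand)
    return cpu_w, gpu_w

# e.g. a GPU-heavy frame: CPU fairly idle, GPU saturated
print(split_budget(cpu_activity=0.3, gpu_activity=0.95))
```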