Kids are seriously downplaying those 10TFlops - bear in mind the X1X with its 6TF is already as strong as, if not stronger than, what most people have in their PCs (GTX 1050-1060 / RX 460-570), let alone the PS5. And with how efficient RDNA 2 is, we're talking about the equivalent of a GPU 3-4x more capable than the PS4 Pro's, or over twice the X1X's.
Pure TFlops aren't the issue, it's everything that surrounds them. 36 CUs isn't much, but high clocks can make up for it, and as Cerny said, all the other components run faster as a result too, so at the end of the day everything is OK. That said, the XSX will simply have more of those additional components - ROPs, TMUs etc. (which still run at a not-so-shabby 1.8GHz) - and, the biggest game changer, more RT engines, since each CU gets its own. Cerny basically already suggested that RT performance will be struggling/underwhelming, so the XSX will be able to offload lighting, shadow or reflection calculations onto those RT engines (worth a theoretical 13TF of shader compute), whereas the PS5 will have to spend some of its 10TF on those tasks. Add the extra features the XSX has, like VRS, DirectML etc., that offload CU work even further - on paper there's a 2-3TF difference already, but in practice the gap might be even bigger.
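For anyone wondering where those headline numbers come from, here's a quick sketch of the standard theoretical FP32 formula for RDNA-style GPUs (64 shaders per CU, 2 FLOPs per cycle via FMA), using the publicly announced specs - 36 CUs at up to 2.23GHz for PS5, 52 CUs at a fixed 1.825GHz for Series X:

```python
def fp32_tflops(cus: int, clock_ghz: float) -> float:
    """Theoretical FP32 throughput: CUs * 64 shaders * 2 FLOPs/cycle * clock."""
    return cus * 64 * 2 * clock_ghz / 1000

ps5 = fp32_tflops(36, 2.23)    # PS5: 36 CUs at up to 2.23 GHz
xsx = fp32_tflops(52, 1.825)   # Series X: 52 CUs at 1.825 GHz

print(f"PS5: {ps5:.2f} TF, XSX: {xsx:.2f} TF")  # ~10.28 TF vs ~12.15 TF
```

Which is exactly why the CU count alone doesn't tell the whole story - clocks multiply straight into the result.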
The worst PS5 design decision is the shared power budget between the CPU and GPU - they will never both be able to run at their max clocks at the same time. And what's worse, Cerny said the boost solution doesn't take the generated heat into consideration at all, so expect yet another jet engine.
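To illustrate what a shared budget means in principle, here's a toy model - this is NOT Sony's actual algorithm (which Cerny described as activity-based and deterministic), and every number and name here is made up; it just shows how a fixed total budget forces one side's clock down when the other side's draw rises, independent of temperature:

```python
def gpu_clock_mhz(cpu_power_w: float, total_budget_w: float = 200.0,
                  gpu_max_mhz: float = 2230.0,
                  gpu_max_power_w: float = 180.0) -> float:
    """Toy shared-budget model: whatever the CPU draws, the GPU loses.

    All figures are invented for illustration, not real PS5 numbers.
    """
    gpu_power = min(gpu_max_power_w, total_budget_w - cpu_power_w)
    # Crude linear power->clock mapping; real silicon is non-linear,
    # so in practice small clock drops recover a lot of power.
    return gpu_max_mhz * max(0.0, gpu_power / gpu_max_power_w)

print(gpu_clock_mhz(20.0))  # light CPU load -> GPU can hold max clock
print(gpu_clock_mhz(60.0))  # heavy CPU load -> GPU clock gets pulled down
```

Note the model responds only to power draw, never to temperature - which is the point being argued about above.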