Honestly, TFLOPS are not a completely honest metric. I always give this crucial example: the GTX 1080 is 8.8 TFLOPS and the GTX 1060 is 4.4 TFLOPS, double on paper, yet the average performance gap between them is about 67%. In some very rare cases the GTX 1080 beats the 1060 by 75-80%, and in more frequent cases the difference can be as low as 55-60%. This is not a joke; the benchmarks and theoretical values are all out there. Sometimes it's hard to feed all the compute power you have: it can be memory bottlenecked, it can be bus bottlenecked, it can be anything. Sometimes the game's GPU load simply cannot saturate the compute shaders, and from what I'm observing in the PC space, this is a frequent occurrence. There's no reason it shouldn't happen with the Xbox Series X and PS5.
The average difference between the 6700 XT and the 6600 XT is usually 15-20%. Sometimes it's 30%; sometimes it's as low as 7-9%. On paper the 6700 XT has about 30% more TFLOPS, but that does not seem to correlate with actual performance consistently. Only when a game is loaded very heavily in terms of compute does the 6700 XT truly shine over the 6600 XT.
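To put rough numbers on how much of the paper advantage actually survives, here's a quick back-of-the-envelope sketch. It only uses the figures from the two comparisons above; "scaling efficiency" is just my own shorthand, not a standard metric.

```python
# fraction of the theoretical (TFLOPS-based) advantage that shows up
# as a real-world performance advantage, using the numbers quoted above
def scaling_efficiency(theoretical_uplift, observed_uplift):
    return observed_uplift / theoretical_uplift

# GTX 1080 (8.8 TF) vs GTX 1060 (4.4 TF): +100% on paper, ~+67% measured
print(scaling_efficiency(1.00, 0.67))  # 0.67 -> a third of the paper gap evaporates

# RX 6700 XT vs RX 6600 XT: ~+30% on paper, ~+15-20% typical
print(scaling_efficiency(0.30, 0.17))  # ~0.57 -> over 40% of the paper gap evaporates
```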
At least the 6600 XT and the 6700 XT have one thing in common: high clock speeds.
That's why I honestly think this stuff won't change and will keep happening exactly like this.
It was the same in the case of the Xbox One X: the One X had higher clocks and more compute units, so naturally it destroyed the PS4 Pro in some games.
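For reference, that on-paper gap comes from the standard FP32 formula (2 FLOPs per shader per clock) applied to the public specs, as in this sketch; the consoles' real-world gap still varied game by game, just like the PC pairs above.

```python
# FP32 TFLOPS = shader count * 2 FLOPs per clock * clock speed (GHz) / 1000
def tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

print(tflops(2560, 1.172))  # Xbox One X: ~6.0 TFLOPS (40 CUs, 1172 MHz)
print(tflops(2304, 0.911))  # PS4 Pro:    ~4.2 TFLOPS (36 CUs,  911 MHz)
```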
This is exactly like the case with CPUs. You can have a 4 GHz 8-core/16-thread CPU that scores 10,000 points in a benchmark, and a 5 GHz 6-core/12-thread CPU that scores 8,000 points in the same benchmark. The 5 GHz CPU will consistently beat the 4 GHz CPU by 20% unless the game perfectly manages to scale across all those cores and threads.
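A toy model makes it obvious why. Both CPUs here are the hypothetical ones from the paragraph above, and per-thread speed is assumed to scale linearly with clock, which is a simplification:

```python
# effective performance if a game can only keep `game_threads` cores busy;
# per-thread speed is assumed proportional to clock (a simplification)
def effective_perf(clock_ghz, cores, game_threads):
    return clock_ghz * min(cores, game_threads)

for game_threads in (4, 6, 8):
    wide   = effective_perf(4.0, 8, game_threads)  # 4 GHz, 8 cores / 16 threads
    narrow = effective_perf(5.0, 6, game_threads)  # 5 GHz, 6 cores / 12 threads
    print(game_threads, round(narrow / wide, 2))
# 4 threads: 1.25 -> the 5 GHz chip wins
# 6 threads: 1.25 -> still wins
# 8 threads: 0.94 -> only with perfect 8-wide scaling does the wide chip pull ahead
```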
This is a similar case.