That talk stems from comparing Vega, a compute-focused architecture, with Pascal, a gaming architecture engineered primarily to run games....
So Pascal compressed color data aggressively (delta color compression), with lower color fidelity in many games, but at 1080p and even below you could not see it clearly. Nvidia traded image quality for lower TFLOPS and lower power draw. AMD did not compress anything, so though its Vega and Polaris cards had higher TFLOPS, they drew more power; the upside was better IQ and colors....
Vega was compute focused, yet too many games were not, so in the majority of games Nvidia won, notwithstanding certain features that kinda crippled AMD performance, like tessellation and GameWorks. Now Polaris was AMD's first step down the pure gaming route: at 5.1 and 6.1 TFLOPS, the RX 570 and 580 were great cards against the 1050 Ti and 1060. Yet if AMD had scaled Polaris up, the power draw would have skyrocketed at 14nm...... So they went with Vega and HBM for the high end, but that was not a gaming architecture; much of the time around 50% of Vega's raw compute power sat idle. The cards had lots of power, just never utilized, due to the non-gaming architecture, or rather devs not prioritizing compute-focused games....
Which brings us to RDNA 1: an improvement on Polaris and a partial departure from GCN, on a smaller node. RDNA 1 was built as a gaming GPU and it's ultra fast.... On IPC and performance per watt, RDNA 1 beat Turing easily. In essence, if AMD had developed a larger chip than the 251 mm² 5700XT, it would most certainly have been as fast as, or even faster than, the 2080 Ti. As it stands, a 5700XT at 251 mm² beats a 2070 and is on par with the 545 mm² 2070 Super. That suggests a 5700XT grown to only 400 mm² would be packed enough to beat the 2080 Ti on RDNA 1, and the 2080 Ti is a 754 mm² chip.....
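The die-size argument above can be put into rough numbers. This is a naive estimate that assumes performance scales linearly with die area, ignoring clocks, memory bandwidth, uncore area, and yield, so treat it as an upper bound, not a prediction:

```python
# Naive area-scaling estimate for the die-size argument.
# Assumption (mine, not AMD data): performance scales linearly with
# die area. Real chips hit clock, bandwidth, and power limits first.

die_5700xt_mm2 = 251
die_2080ti_mm2 = 754
hypothetical_mm2 = 400  # the 400 mm2 RDNA 1 chip posited above

scale = hypothetical_mm2 / die_5700xt_mm2
print(f"400 mm2 vs 5700XT area: {scale:.2f}x")

ratio = die_2080ti_mm2 / die_5700xt_mm2
print(f"2080 Ti vs 5700XT area: {ratio:.2f}x")
```

Even under this crude model, the 2080 Ti needs roughly 3x the 5700XT's silicon, which is the gap the post is pointing at.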
In short, RDNA has better IPC and performance per watt than Turing by a large margin. Now the team on RDNA 2 took all of this to a new level: performance per watt is said to be up around 60% from RDNA 1, the IPC gains are about 20% over RDNA 1, the clock speeds are insanely high, the CUs have improved massively in instructions per CU, and the node is super efficient, hence the high clock speeds..... Then people are forgetting AMD has not stopped there; they implemented their own form of HBCC straight on the board with 128 MB of cache. AMD is not effing around; they are trying to push things this gen and go for a complete kill with MGPUs on RDNA 3 in the future....
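Why a big on-board cache matters: effective memory bandwidth is a blend of fast cache hits and slower VRAM accesses. Here is a back-of-envelope model; every number in it is an illustrative placeholder of mine, not an AMD spec:

```python
# Back-of-envelope effective-bandwidth model for a large on-die cache.
# All figures below are illustrative assumptions, not vendor specs.

def effective_bandwidth(hit_rate, cache_bw_gbs, vram_bw_gbs):
    """Blend cache and VRAM bandwidth by the cache hit rate."""
    return hit_rate * cache_bw_gbs + (1.0 - hit_rate) * vram_bw_gbs

vram_bw = 512    # GB/s, hypothetical GDDR6 bus
cache_bw = 1600  # GB/s, hypothetical on-board cache

for hit in (0.0, 0.5, 0.75):
    bw = effective_bandwidth(hit, cache_bw, vram_bw)
    print(f"hit rate {hit:.0%}: {bw:.0f} GB/s effective")
```

The point of the model: even a modest hit rate lets a narrower, cheaper VRAM bus behave like a much wider one.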
Now, people are wary; they can't be patient. AMD is not going to let Sony announce a feature set of their architecture yet, at least not one where both companies benefited from their collaboration, funded in part by Sony for the PS5. Yet that's how collaborations work: I scratch your back, you scratch mine. AMD has to sell GPUs too, and they have always said their GPUs would hit shelves before the console..... After the 28th of October I'm sure all the breakdowns people want to know will be a go........

Sometimes it's good to keep something in the oven longer, keep quiet, and work out your kinks.... Cerny said there were some logic issues with clocking the PS5 over 2.23GHz; I'm almost certain that has been resolved, that's what time gives you...... So you will see AMD GPUs clock pretty high, and over that of the PS5. I'm not sure it would make sense for them to boost the PS5 clock even more, I guess it depends on how formidable their cooling solution is, but it's a possibility. I still think there are some nice perks we have not heard about the PS5, hardware related and software related (the OS especially)........ There is still time to announce all of that, more than enough time.....

I told folk AMD would hit it out of the park; they are coming for both the CPU and GPU market this holiday..... The best kit you will buy will be an AMD CPU + AMD GPU, the most forward-moving pieces of architecture in both realms later this year... People who were wishing for an Intel + Nvidia combo for consoles have no idea how power hungry, expensive, and limited that would have been.....