The problem is a lack of exactness when people talk about this, greatly exacerbated by the age of Twitter communication and its byte-sized quotables.
When you run a given HLSL shader on a console GPU and an equivalent PC GPU, they'll perform the same; if the PC GPU is twice as fast, the shader will run twice as fast. How could it be otherwise? The same shader compiler from the same vendor compiles it, and the same hardware architecture runs it. Now, game developers may spend more time low-level optimizing a shader for a single console GPU than they ever would for the huge variety of PC GPUs, but how much of a difference such hardware-specific optimizations really make depends greatly on the situation, and they're a huge time investment.
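To make "hardware-specific optimization" concrete, here's a minimal HLSL sketch (my own illustration, not something from this thread) of the kind of micro-optimization a console team might hand-apply, assuming a GCN-class GPU and Shader Model 6 wave intrinsics; the buffer name and functions are hypothetical:

```hlsl
// Hypothetical resource name for illustration only.
StructuredBuffer<float4> LightBuffer : register(t0);

float4 ShadePortable(uint lightIndex)
{
    // Portable HLSL: each lane issues its own vector load,
    // even when lightIndex happens to be identical across the wave.
    return LightBuffer[lightIndex];
}

float4 ShadeWaveOptimized(uint lightIndex)
{
    // SM 6.0 wave intrinsic: broadcast lane 0's value so the compiler
    // can prove the load is wave-uniform and emit a cheaper scalar load
    // on GCN-class hardware. Only valid if lightIndex really is the
    // same across the whole wave, which you can know on a fixed console
    // target but can't easily guarantee across every PC GPU.
    uint uniformIndex = WaveReadLaneFirst(lightIndex);
    return LightBuffer[uniformIndex];
}
```

On a PC build you'd typically ship the portable version, since the uniformity assumption and wavefront size vary across vendors, which is exactly why this kind of tuning tends to stay console-side.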
Thanks for the response. I wasn't factoring in the cost of the optimization on the developer side; they may choose not to dive too deep this time around.