When people talk about the extra compute capabilities of the PS4, I'm guessing they're referring to the number of ACEs/queues.
Which is really just one aspect (work submission) of one aspect (scheduling) of compute performance, which itself is one aspect of GPU performance. Let's be brutally honest here: it gets brought up about 50 times more often than it rightly should, because it's one of the few easily quantified differences between the current consoles du jour.
Sorry, my knowledge in this particular area is rather limited.
Let's be more specific then: PC gaming does not exist in isolation. It stands to reason that multiplatform devs (AAA or more modest ones, it doesn't really matter) will build their engines around GCN's strengths, i.e. the GPU compute workloads at which it excels. So the question is: how will Kepler/Maxwell fare then? I assume, perhaps wrongly, that those multiplatform games will make heavy use of GCN-tailored compute workloads and leverage async compute and async shaders.
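For anyone unsure what "leveraging async compute" actually looks like in code: at the API level it just means submitting compute work on a queue separate from the graphics queue, so independent hardware front ends (the ACEs on GCN) can overlap it with rendering. A minimal sketch using D3D12 queue creation, purely as illustration (the consoles expose this through their own SDKs, so treat this as the concept rather than how any shipping engine does it):

```cpp
// Minimal sketch: a graphics queue plus a separate compute-only queue.
// On hardware with independent compute front ends (e.g. GCN's ACEs),
// work on the compute queue can execute concurrently with rendering.
#include <d3d12.h>
#include <wrl/client.h>
#pragma comment(lib, "d3d12.lib")

using Microsoft::WRL::ComPtr;

int main()
{
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device))))
        return 1;

    // The "normal" queue: accepts graphics, compute, and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // A second, compute-only queue. Submitting here instead of on the
    // graphics queue is what "async compute" means at the API level.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // ... record compute command lists, submit them to computeQueue, and
    // synchronize against the graphics queue with an ID3D12Fence ...
    return 0;
}
```

Whether the two queues actually overlap on the GPU is up to the hardware scheduler, which is exactly why the same submission pattern can pay off more on one architecture than another.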
We might have gotten a glimpse of an answer with Ryse, which is a showcase for GCN: it did not run badly at all on Kepler/Maxwell, just not quite as well as on various GCN cards.
I think after years of "to the metal" propaganda, people are too eager to look for complex (and often unprovable, especially with the simplistic performance investigation tools generally employed) answers for performance discrepancies, when in fact the same explanations that have applied for a decade are still good indicators.
Let me give you an example of what I mean. Yes, Ryse runs better than the average game on GCN cards relative to NV cards. It could be that this is indicative of some deeply rooted algorithmic optimization for "asynchronous compute", or of the game being highly tuned to specifics of the GCN architecture. But isn't it far more likely that it's simply a workload less dependent on texturing/sampling or raster ops and more dependent on raw floating-point throughput? Remember that e.g. a 290X has ~5.6 TF of theoretical FP32 performance while a 980 has ~4.6 TF.
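For transparency, here's the back-of-the-envelope math behind those numbers, using the public spec-sheet values (2816 ALUs at ~1.0 GHz for the 290X, 2048 ALUs at ~1.126 GHz base for the 980):

```cpp
// Theoretical peak FP32 throughput = ALU count * clock * 2
// (each ALU can issue one FMA per clock, which counts as 2 flops).
#include <cstdio>

int main()
{
    const double r290x  = 2816 * 1.000e9 * 2; // 2816 ALUs @ ~1.0 GHz
    const double gtx980 = 2048 * 1.126e9 * 2; // 2048 ALUs @ ~1.126 GHz base
    std::printf("R9 290X: %.1f TFLOPS\n", r290x  / 1e12); // ~5.6
    std::printf("GTX 980: %.1f TFLOPS\n", gtx980 / 1e12); // ~4.6
    return 0;
}
```

That's a ~20% gap in raw ALU throughput before any architecture-specific tuning enters the picture, which is plenty to explain a Ryse-sized delta on its own.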
Basically, my point is that different GPU setups performing differently on different workloads has been discussed at least since the original Radeon shipped with 3-texture multitexturing capabilities -- and it's true! But it has also never influenced anything significantly in the long run. If one vendor goes a bit too far (or not far enough) in one direction (TMUs, FLOPs, bandwidth, ROPs, ...) in a given architecture, they'll just correct that in the next.