I bet this has been answered more than once already. Anyway: the PS4 has about 40% more GPU power, plus more memory bandwidth and no hard size limit like the one the Xbox One's ESRAM imposes. 1080p has 44% more pixels than 900p. Even if performance scaled 1:1 with resolution (in reality it usually scales a little less than that), the PS4 would manage 1080p at about the same frame rate. Furthermore, the GPU architectures are identical, so most GPU-side optimizations done for the Xbox One version should translate directly to the PS4 version. My opinion: 1080p should be possible on PS4 without all that much extra effort. Alternatively they could stay at the same resolution but use the extra power for higher-resolution shadow maps or the like. Or maybe the Xbox One doesn't hold a stable 30 fps in GPU-bound situations, but the PS4 version could.
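To make those percentages concrete, here's a quick back-of-the-envelope calculation (plain C++ used purely as a calculator; the 1.84 and 1.31 TFLOPS figures are the commonly quoted peak numbers for the two GPUs, not anything from this thread):

```cpp
// Back-of-the-envelope only: peak GPU power ratio vs pixel count ratio.
// Real performance rarely scales exactly 1:1 with resolution.
#include <cstdio>

int main() {
    double ps4_tflops  = 1.84;            // 18 CUs @ 800 MHz (quoted peak)
    double xb1_tflops  = 1.31;            // 12 CUs @ 853 MHz (quoted peak)
    double pixels_1080 = 1920.0 * 1080.0; // 2,073,600 pixels
    double pixels_900  = 1600.0 * 900.0;  // 1,440,000 pixels

    printf("GPU power ratio:   %.2f\n", ps4_tflops / xb1_tflops);   // ~1.40
    printf("Pixel count ratio: %.2f\n", pixels_1080 / pixels_900);  //  1.44
    return 0;
}
```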
Now 900p at 60 fps would be an entirely different matter. You'd need twice the performance across the board, including CPU power. Only possible on PC.
We don't know (though nothing hints at it so far). We'll have to wait and see.
Umm, that's not true, because the optimizations you'd make for the Xbone would be to take advantage of the ESRAM (unless you're talking about optimizing on the CPU, in which case there's little to no difference in performance). On the PS4 you don't have to account for or utilise the speed of the ESRAM to make up for a lack of bandwidth; getting your data from point A to point B isn't an issue in the slightest. Nor do you need to think about splitting your data into chunks, or about what exactly you're going to pass through the ESRAM.
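To illustrate the kind of bookkeeping I mean (purely a made-up sketch, no real console API, and the buffer layout is just an example): on Xbox One you're constantly asking whether your render targets even fit in the 32 MB of ESRAM, and tiling, shrinking, or spilling them when they don't; on PS4 everything just lives in the one GDDR5 pool.

```cpp
// Made-up sketch (no real console API): the 32 MB ESRAM budget check an
// Xbox One renderer has to think about, which simply doesn't exist on the
// PS4's unified GDDR5 pool. Buffer layout and sizes are illustrative only.
#include <cstdio>

constexpr double ESRAM_MB = 32.0;

double targetMB(int width, int height, int bytesPerPixel) {
    return width * (double)height * bytesPerPixel / (1024.0 * 1024.0);
}

int main() {
    // A hypothetical 1080p deferred setup: four colour targets plus depth,
    // 4 bytes per pixel each.
    double total = 5 * targetMB(1920, 1080, 4);

    printf("Render targets: %.1f MB, ESRAM: %.1f MB -> %s\n",
           total, ESRAM_MB,
           total <= ESRAM_MB
               ? "fits as-is"
               : "split into chunks, drop resolution, or spill to DDR3");
    return 0;
}
```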
Different use cases, nothing funny about that. Crowd AI versus cloth, i.e. gameplay versus non-gameplay code. GPGPU's great strength is its parallelism, but it comes with a big limitation: the CPU and GPU act as separate systems, and getting results back from the GPU to the CPU can cost a lot of performance and is really tricky to optimize.
That's why the stuff that's good to offload to GPGPU is visuals and fluff. Particles, cloth, illumination: all of those are fine because you can keep them in a black box, computing and rendering them entirely on the GPU, since they don't factor into gameplay. But anything the CPU is gonna need an active look at is gonna cost you just to get the information back. So in many of those cases you'll find you get better performance keeping that work on the CPU instead of waiting for the GPU to deliver.
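A minimal sketch of what that readback stall looks like (CUDA used here purely as a stand-in, since the consoles have their own GPU APIs, and the "cloth" kernel is obviously a toy):

```cuda
// Minimal sketch of the GPGPU readback problem: the kernel launch is
// asynchronous, but the moment the CPU needs the results, the copy back
// forces it to sit and wait for the GPU. Build as a .cu file with nvcc.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void simulateCloth(float* positions, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) positions[i] += 0.016f;   // stand-in for a real cloth step
}

int main() {
    const int n = 1 << 20;
    float* dPositions = nullptr;
    cudaMalloc((void**)&dPositions, n * sizeof(float));
    cudaMemset(dPositions, 0, n * sizeof(float));

    float* hPositions = new float[n];

    // Asynchronous: the CPU is free to run gameplay code after this line.
    simulateCloth<<<(n + 255) / 256, 256>>>(dPositions, n);

    // ... CPU-side gameplay work could overlap here ...

    // The moment gameplay logic needs the results, this copy blocks until
    // the kernel has finished: the "getting results back" cost.
    cudaMemcpy(hPositions, dPositions, n * sizeof(float),
               cudaMemcpyDeviceToHost);

    printf("first vertex: %f\n", hPositions[0]);
    cudaFree(dPositions);
    delete[] hPositions;
    return 0;
}
```

Purely visual stuff never needs that last copy, which is why it's the ideal candidate for offloading.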
Yeah, I remember the Infamous guys talking about this, about how you spend so much time/resources syncing the data... may I ask: if you're able to read from the same memory locations, why the f*ck do you need to spend so much time syncing the data? What's the point of even reading from the same location?
Someone in this thread posted that this gen's specs aren't capable of running open world games at 1080p/60fps. I would be willing to bet all of my mortgages that you could on the PS4. I don't think people understand how big of a deal GPU compute is, or what it is and isn't good at. While I'm not saying the Ubisoft dev is wrong for saying the game is CPU bound (how the hell anyone could disagree with his statement without seeing the game code is beyond me), I put it down to time constraints, efficiency and parity.