I am not making a case for "good" or "bad." Eve Valkyrie is probably my most anticipated VR game.
I am discussing the reality of VR development, something many people don't want to hear. I see a lot of DBZ power-level style talk about hardware in these threads, and it's grating. It's not as simple as just "turning down the graphics." This is going to be a big change in game design, because many developers will be going from an era with virtually unlimited resources to do whatever they could dream up, back to an era where your creativity and design are again constrained by the hardware.
And, again, I'm not just talking about Morpheus. I'm talking about all VR for the near future. But specifically with regard to Morpheus: there will be experiences a very high-end PC can pull off that the PS4 cannot.
I said earlier in this thread that what will usher that era of unbridled design back into VR is foveated rendering and rolling asynchronous time-warp displays. We are already starting to move toward the kinds of rendering pipelines that will enable foveated rendering in the future. For those unfamiliar with foveated rendering: it's a technique that lets us mimic more accurately how our vision actually works. We don't see with clarity except in a very tiny area in the center of our vision. This area - about the size of a pinhead - is where our fovea is centered. Extending outward from that point to the edges of our vision, things get progressively blurrier. Most of what we "see" is our brain filling in the gaps from the limited, extremely blurry visual data our eyes are actually delivering.
By contrast, the viewports we use in VR maintain clarity across their entire area. We render the edges of the viewport at the same fidelity as the center, because we have no way to tell which part of the viewport the eye is actually looking at. Once we get extremely low-latency eye tracking working, we can track the eyes inside the headset and figure out which region of the viewport needs to be sharp. We render that region at full resolution, then render the rest of the scene in additional passes at, say, reduced resolution and lower levels of detail. This would massively speed up our ability to render VR scenes.
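To make the savings concrete, here's a minimal sketch of that idea in C++. None of this is a real engine's or headset SDK's API; every name and threshold in it is a hypothetical assumption. It just tallies how many pixels get shaded under a three-ring foveated scheme versus brute-force full resolution:

```cpp
#include <cstdio>
#include <cmath>

struct GazePoint { float x, y; };  // normalized [0,1], from the eye tracker

// Hypothetical policy: pick a resolution scale from the screen-space
// distance to the gaze point (a stand-in for angular eccentricity).
float scaleForRadius(float r) {
    if (r < 0.10f) return 1.0f;   // foveal region: full resolution
    if (r < 0.30f) return 0.5f;   // near periphery: half resolution
    return 0.25f;                 // far periphery: quarter resolution
}

int main() {
    const int width = 1920, height = 1080;
    const GazePoint gaze = {0.5f, 0.5f};  // tracker says: looking dead center

    const double fullCost = (double)width * height;  // pixels shaded today
    double fovCost = 0.0;

    // Tally shading cost under the foveated scheme. A real renderer would
    // cover whole regions in a few reduced-resolution passes rather than
    // deciding per pixel, but the arithmetic works out the same.
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            float dx = (float)x / width  - gaze.x;
            float dy = (float)y / height - gaze.y;
            float s  = scaleForRadius(std::sqrt(dx * dx + dy * dy));
            fovCost += s * s;  // a pass at scale s shades s*s as many pixels
        }
    }

    std::printf("full-res pixels shaded:  %.0f\n", fullCost);
    std::printf("foveated pixels shaded: ~%.0f (%.0f%% of full)\n",
                fovCost, 100.0 * fovCost / fullCost);
    return 0;
}
```

With those made-up thresholds the periphery covers most of the screen, so the shaded-pixel count collapses to a small fraction of full resolution. That's the whole speedup, and it's why the eye tracking has to be so low-latency: point the full-resolution ring at the wrong spot and the trick falls apart.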
Beyond that, rolling asynchronous displays will turn our screens from progressively updated, whole-frame displays into something more closely resembling the raster-scan displays of the past, where vertical bands of the panel update independently and continuously. In essence, we would stop updating the display in frames and start updating it in strips of up-to-date visual data many times a second. Again, this more closely resembles how our eyes actually operate, and, most importantly, it would cut motion-to-photon latency considerably.
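The latency win falls out of simple arithmetic. Below is a toy model under assumed numbers (a 60 Hz panel split into 16 bands, both figures made up for illustration): if each band is rendered just-in-time for its slice of the scanout, the worst-case staleness of what the panel shows shrinks roughly in proportion to the number of bands.

```cpp
#include <cstdio>

int main() {
    const double refreshMs = 1000.0 / 60.0;  // one full refresh at 60 Hz
    const int bands = 16;                    // hypothetical independent bands

    // Whole-frame model: the frame is finished, then scanned out, so a
    // pixel's data can be nearly a full refresh old by the time it's lit.
    const double worstFrameMs = refreshMs;

    // Rolling model: each band is rendered just before the scan reaches
    // it, so staleness is bounded by one band's slice of the refresh.
    const double worstBandMs = refreshMs / bands;

    std::printf("whole-frame worst-case staleness: %5.2f ms\n", worstFrameMs);
    std::printf("rolling (%d bands) worst-case:    %5.2f ms\n",
                bands, worstBandMs);
    return 0;
}
```

The catch, of course, is that the renderer now has to hit sixteen small deadlines per refresh instead of one big one, which is part of why this is still a long way off.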
Both of those advancements are still many years away, however. For the time being, we just have to live with the hardware limitations.