Urgh, I know you're probably joking at this point, but just in case nobody has explained how timewarp works (or "reprojection", as Sony calls it): it's not the same as frame interpolation on your TV.
On your TV, you have two finished frames and interpolate an approximate middle frame between them, which roughly doubles your framerate but incurs a heavy latency hit.
On the Oculus/Morpheus, you have your finished frame and you have your depth buffer (whose values tell you how far away each pixel is). With both of those, you can calculate where each pixel sits in 3D space. You can then treat the pixels like any other geometry in 3D programming and re-render the scene with new camera data.
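As a rough sketch of that idea (this is not Oculus's or Sony's actual implementation; the camera intrinsics `fx`, `fy`, `cx`, `cy` and the function names are made up for illustration), reprojecting a single pixel with its depth looks something like:

```python
import numpy as np

def rotation_y(deg):
    """Rotation matrix about the vertical (yaw) axis."""
    r = np.radians(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[ c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def reproject(px, py, depth, fx, fy, cx, cy, delta_rot):
    # Unproject the pixel into a 3D view-space point using its depth value
    x = (px - cx) / fx * depth
    y = (py - cy) / fy * depth
    p = np.array([x, y, depth])
    # Apply the change in head rotation (position is left untouched,
    # which is exactly the limitation listed below)
    p = delta_rot @ p
    # Project the point back onto the image plane at its new location
    return (p[0] / p[2] * fx + cx,
            p[1] / p[2] * fy + cy)
```

With an identity rotation the pixel maps back to itself; with a small yaw it slides sideways by an amount that depends on the focal length, which is the whole warp in miniature.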
This can be used for two things: lowering latency and increasing the effective framerate. Essentially, after you've rendered your scene, you wait until just before the frame is presented to the screen, grab the newest rotation data from the HMD, and re-render the scene with those adjustments, which lowers the latency between head movement and screen update. You can then do the same thing again 8.3 milliseconds later using the same rendered image, which gives you an effective framerate of roughly 120fps.
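In pseudo-Python, the loop described above looks roughly like this (every function here is a stand-in, not a real SDK call: `latest_hmd_rotation` fakes an IMU read, `timewarp` fakes the GPU warp pass, `present` fakes scan-out):

```python
import time

DISPLAY_HZ = 120            # display refreshes every ~8.3ms
REFRESHES_PER_RENDER = 2    # the game itself only renders at 60fps

presented = []

def latest_hmd_rotation():
    # Stand-in for sampling the newest IMU orientation.
    return 0.0

def timewarp(frame, rotation):
    # Stand-in for the GPU warp pass: re-orient the finished image.
    return (frame, rotation)

def present(warped):
    # Stand-in for handing the warped image to the display.
    presented.append(warped)

frame = "scene rendered with ~16.7ms of GPU work"
for _ in range(REFRESHES_PER_RENDER):
    # Wait until just before scan-out, then warp with the freshest rotation.
    time.sleep(1.0 / DISPLAY_HZ)
    present(timewarp(frame, latest_hmd_rotation()))
```

The key point is that one expensive render feeds two cheap warps, and each warp samples head rotation as late as possible.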
The drawbacks are:
- Positional movement of the camera is ignored (the parallax it should cause would produce artifacts).
- The only thing changing in your "free" frame is the camera orientation.
- If you make a big, fast head movement, you can catch a glimpse of the area outside the rendered field of view.
Luckily, because a frame at 60fps only lasts about 16.7 milliseconds, the drawbacks generally won't be that noticeable.
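To put a rough number on that (the 300°/s head-turn speed is my assumption for a fast head turn, not a figure from any spec):

```python
# Back-of-the-envelope: how far can your head turn during one 60fps frame?
FRAME_MS = 1000 / 60              # ~16.7ms between rendered frames
HEAD_TURN_DEG_PER_S = 300         # assumed speed of a fast head turn
degrees_per_frame = HEAD_TURN_DEG_PER_S * FRAME_MS / 1000
print(round(degrees_per_frame, 1))  # → 5.0
```

A few degrees of rotation per frame is small relative to a ~100° field of view, which is why the missing parallax and the unrendered edges usually stay out of sight.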