Because the Rift uses an external camera for positional tracking, it loses 3D position whenever the camera's view of the headset or controllers is occluded. For example, you turn around 180 degrees and your body blocks the camera's view of your hands and controllers, or you step outside the camera's FOV. It tries to compensate with accelerometer and gyroscope data, but dead reckoning from an IMU is prone to drift (small errors accumulating over time).
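To see why that drift gets bad quickly, here's a minimal sketch (with made-up but plausible numbers) of dead reckoning during an occlusion: a tiny constant accelerometer bias, double-integrated to recover position, grows quadratically with time.

```python
# Hypothetical IMU dead reckoning with a small constant bias.
dt = 0.001    # 1 kHz IMU sample rate (assumed)
bias = 0.01   # 0.01 m/s^2 accelerometer bias (assumed, small)

velocity = 0.0
position = 0.0
for _ in range(2000):          # 2 seconds of camera occlusion
    velocity += bias * dt      # integrate acceleration -> velocity
    position += velocity * dt  # integrate velocity -> position

print(round(position, 3))      # ~0.02 m of position error after only 2 s
```

The error follows roughly 0.5 * bias * t^2, so a few seconds of lost camera tracking is enough for the controller's reported position to visibly wander.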
The Vive works the other way around: the controllers carry the sensors (photodiodes) and track sweeps from the two Lighthouse base stations, so it should be harder for them to lose sight of the positioning signal.
Isn't it mostly about where you place the cameras/Lighthouses, though? You get less occlusion with the Lighthouses because you have two by default, and you place them at opposite corners of a bounding rectangle. The standard Rift ships with a single camera pointing at you, and with Oculus Touch they mention adding a second camera for improved tracking. If you had USB cables long enough to place those two cameras at opposite corners like the Vive's Lighthouses, shouldn't you get similar occlusion resistance?