I'm really, really liking this game so far, and performance isn't much of an issue for me.
People are saying it "chugs" because the whole overworld supposedly gets loaded the moment you exit a building, but I'm not sure that's what actually happens.
Only the area closest to Link is loaded at its highest quality; everything else, the blurred areas, is probably being rendered from the lower-resolution end of the relevant mipmap chains. UE4 has pretty robust texture streaming capabilities.
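The basic idea behind that kind of streaming: distant objects cover fewer pixels, so each doubling of distance lets you drop to the next-smaller mip without visible loss. Here's a toy sketch of that heuristic (my own simplification, not UE4's actual streamer, which works from projected screen-space texel size):

```python
import math

def mip_level_for_distance(distance: float, base_resolution: int) -> int:
    """Toy heuristic: drop one mip level per doubling of distance.

    Mip 0 is the full-resolution texture; the highest mip is a
    1x1 texel, so a base_resolution of 1024 gives mips 0..10.
    """
    max_mip = int(math.log2(base_resolution))
    # log2(distance) doublings away from the camera -> that many mips down
    level = int(math.log2(max(distance, 1.0)))
    return min(max_mip, max(0, level))

# Up close you stream the full 1024x1024 texture...
print(mip_level_for_distance(1.0, 1024))    # mip 0
# ...8 units away, a texture a quarter the size per axis suffices...
print(mip_level_for_distance(8.0, 1024))    # mip 3
# ...and very distant objects clamp to the smallest mip.
print(mip_level_for_distance(4096.0, 1024)) # mip 10
```

Only the mips actually needed get resident in memory, which is why the far scenery looks blurry rather than missing: the low mips are cheap enough to keep loaded everywhere.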
That being said, I'm curious why double-buffered vsync was chosen instead of a locked 30FPS or even Adaptive Vsync (this is running on NVidia silicon, after all). I don't think it's the slowdown per se that people are complaining about, but rather the abrupt change as the double buffer snaps the framerate from 60 to 30 the moment it starts dropping frames. Even if, during a graphically expensive section, the Tegra could handle the scene at, say, 50FPS, double-buffered vsync still pushes it down to 30 until it can hold a steady 60FPS again.
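That snapping behavior falls straight out of the math: with double buffering, a frame that misses its vsync has to wait for the next one, so display time gets quantized to whole multiples of the refresh interval. A quick sketch of the effect (idealized, ignoring frame-to-frame variance):

```python
import math

REFRESH_HZ = 60.0
VSYNC_INTERVAL_MS = 1000.0 / REFRESH_HZ  # ~16.67 ms per refresh

def effective_fps(frame_time_ms: float) -> float:
    """With double-buffered vsync, a frame can only be shown on a
    vsync boundary, so the displayed rate is 60 / ceil(render time
    in vsync intervals): 60, 30, 20, 15... with nothing in between.
    """
    intervals = max(1, math.ceil(frame_time_ms / VSYNC_INTERVAL_MS))
    return REFRESH_HZ / intervals

# A scene the GPU renders in 20 ms (i.e. it *could* do 50 FPS)
# still displays at 30 FPS, because 20 ms > one vsync interval:
print(effective_fps(20.0))  # 30.0
# Shave it down to 16 ms and you jump straight back to 60:
print(effective_fps(16.0))  # 60.0
```

Triple buffering or Adaptive Vsync would let that 20ms frame actually show up as ~50FPS (with some tearing, in the adaptive case), which is why the choice seems odd to me.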
I wonder if this has to do with the engine itself? A lot of Japanese devs seem to struggle with UE for whatever reason.