
PS VR external processing box to be size of Wii

Nothing exciting - I'm just a lowly Event Manager. Next to zero involvement with the hardware/software teams. But yeah, I've set up and taken down hundreds and hundreds of VR kits, and the PU is nowhere near the size of a Wii.



Yeah, exactly. 1xHDMI in, 1xHDMI out, power (with separate power brick), micro USB and then the VR connection cable which is one cable, but two plugs - one for video, one for data. That goes into the front of the PU, the rest go into the back.

Cool, thanks for the info. Sounds like a really cool gig!
 

GobFather

Member
Nothing exciting - I'm just a lowly Event Manager. Next to zero involvement with the hardware/software teams. But yeah, I've set up and taken down hundreds and hundreds of VR kits, and the PU is nowhere near the size of a Wii.



Yeah, exactly. 1xHDMI in, 1xHDMI out, power (with separate power brick), micro USB and then the VR connection cable which is one cable, but two plugs - one for video, one for data. That goes into the front of the PU, the rest go into the back.
Thanks!
 

Begaria

Member
Yeah, exactly. 1xHDMI in, 1xHDMI out, power (with separate power brick), micro USB and then the VR connection cable which is one cable, but two plugs - one for video, one for data. That goes into the front of the PU, the rest go into the back.

So the processing unit has a power brick? Is that the size of a Wii? :p
 
Yes, but we're still talking about interpolation based on initial frame-draw and positional changes (with an "oversized" frame). Due to how it creates the B frame it is labeled as reprojection.
Well, they call it reprojection instead of interpolation, because it isn't really interpolation, so when you call it that, it makes people think it's something other than what it is.

Interpolation creates an interim frame from two sandwich frames. You render frame 1, then you render frame 2, and then you create a frame 1a that estimates what happened in between: (A + C) / 2 = B. This also means you can't even begin creating what will ultimately be your second displayed frame until after you've finished rendering the third.

Reprojection is totally different. It's more like waiting until the last possible moment to set the frame's origin. Most of the time, you start copying your drawing to location (0,0) on the display, but if the user turns their head 30 pixels to the right while you're busy rendering, your image is going to be out of alignment by the time it hits the screen. So you take that very same image you rendered and change its origin by 30 pixels, so when it's drawn to the screen, it's shifted slightly to the right: (A + xyOffset) = properA. So not only are you not creating new frames from old ones, one could argue the "original" frame isn't a frame at all until the offset is applied, as it's never sent to the screen without one. The xyOffset could be +0,+0, though.
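The two formulas above can be contrasted in a toy sketch. This reduces each "frame" to a single horizontal camera offset (all values are made up for illustration, not anything Sony actually computes):

```python
# Toy contrast between interpolation and reprojection, reducing each
# "frame" to one horizontal offset value (hypothetical numbers).

def interpolate(frame_a, frame_c):
    """Interpolation: needs BOTH neighbours, so the in-between frame
    cannot exist until frame_c has finished rendering."""
    return (frame_a + frame_c) / 2  # (A + C) / 2 = B

def reproject(frame_a, xy_offset):
    """Reprojection: shifts the one frame we already have by the
    latest head-tracking delta; the offset may be zero."""
    return frame_a + xy_offset      # (A + xyOffset) = properA

print(interpolate(0, 60))  # synthesised middle frame: 30.0
print(reproject(30, 30))   # rendered at 30, head moved 30px right: 60
print(reproject(30, 0))    # the offset can be +0: frame unchanged, 30
```

Note that `interpolate` takes two frames while `reproject` takes one frame plus a tracking offset; that asymmetry is the whole distinction.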

Are you working on Morpheus devout?
Me? No, I just read a lot. lol

A group of local developers who were working on PC titles has been testing our VR after I initially pointed them toward some AR and VR experiments my production group was doing a few years back. I'd love for them to end up on PSVR rather than limiting themselves to GearVR only.
Sorry, I'm not sure what you're getting at… Your company has a product and people like it, but you're afraid it won't work on PSVR… if the reprojecting isn't actually a form of interpolation? =/

Sorry, I'm not clear on what would prevent these guys from using your stuff with PSVR. Can you say what it is you guys made?
 
Yes, but we're still talking about interpolation based on initial frame-draw and positional changes (with an "oversized" frame).

It is not interpolation.

Interpolation is creating an "average" between two images and placing that average between the frames.

Reprojection is taking the same frame and simply shifting it to a different offset on-screen according to the head motion.

Interpolation also creates more lag, since you need to see the second reference frame before you can create that average.

Reprojection does not add input lag; it creates the illusion of a higher framerate because it only needs the previous image, which it simply reprojects at a different offset on the screen.

So they are two completely different things.
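The lag argument above can be put on a rough timeline. This sketch assumes 33 ms per rendered frame (~30 fps) purely for illustration:

```python
# Hypothetical timeline (ms) showing why interpolation adds lag while
# reprojection does not. RENDER_MS is an illustrative assumption.

RENDER_MS = 33            # one rendered frame at roughly 30 fps

# Interpolation: B is the average of A and C, so the earliest moment
# B can be displayed is after C finishes rendering -- at least one
# full frame time of added latency.
a_done = RENDER_MS        # frame A finished rendering
c_done = 2 * RENDER_MS    # frame C finished rendering
b_ready = c_done          # B cannot exist any sooner
print(b_ready - a_done)   # added latency: 33 ms

# Reprojection: the synthetic frame reuses A plus a fresh tracking
# offset, so it is ready as soon as A is (ignoring the tiny shift cost).
reprojected_ready = a_done
print(reprojected_ready - a_done)  # added latency: 0 ms
```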
 
I can't believe people are genuinely hung up over the size of this thing. Bizarre.

Make room for it, which should take you three seconds, it's going to be awesome you jerks.


An equally fucking silly thing to complain about. I tossed the thing on the ground behind my entertainment center when I hooked it up and I've only seen it once more since launch, when I had to hook it up again after moving into a new house.

No loud fan trying to dissipate heat is a great trade off for having to see it once every few years.

My PS4 doesn't have a brick attached to it, and it's quiet as a ghost. Why can't people just admit that something like this is a minor annoyance at best for some people? Is there really any good reason to defend it to the death? Are you that caught up in the VR hype that someone criticizing your precious PlayStation VR gets you all up in arms?
 

hesido

Member
Well, they call it reprojection instead of interpolation, because it isn't really interpolation, so when you call it that, it makes people think it's something other than what it is.

Interpolation creates an interim frame from two sandwich frames. You render frame 1, then you render frame 2, and then you create a frame 1a that estimates what happened in between: (A + C) / 2 = B. This also means you can't even begin creating what will ultimately be your second displayed frame until after you've finished rendering the third.

Reprojection is totally different. It's more like waiting until the last possible moment to set the frame's origin. Most of the time, you start copying your drawing to location (0,0) on the display, but if the user turns their head 30 pixels to the right while you're busy rendering, your image is going to be out of alignment by the time it hits the screen. So you take that very same image you rendered and change its origin by 30 pixels, so when it's drawn to the screen, it's shifted slightly to the right: (A + xyOffset) = properA. So not only are you not creating new frames from old ones, one could argue the "original" frame isn't a frame at all until the offset is applied, as it's never sent to the screen without one. The xyOffset could be +0,+0, though.


Me? No, I just read a lot. lol


Sorry, I'm not sure what you're getting at… Your company has a product and people like it, but you're afraid it won't work on PSVR… if the reprojecting isn't actually a form of interpolation? =/

Sorry, I'm not clear on what would prevent these guys from using your stuff with PSVR. Can you say what it is you guys made?

You are on point. Although I previously thought the reprojection also involved remapping the scene using z-index values, it turns out it's a simpler 2D operation, probably involving rotation + translation + scaling, as you say.
 
You are on point. Although I previously thought the reprojection also involved remapping the scene using z-index values, it turns out it's a simpler 2D operation, probably involving rotation + translation + scaling, as you say.
Oh, really? I knew there were techniques for handling parallax (dis)occlusion, but they're less than perfect, and simply ignoring translation and its resulting effects can be an acceptable solution as well. So you're saying that Sony are ignoring translation and just handling rotation? For sure? I'd imagine that would be considerably cheaper, and produce results nearly as nice, especially at higher frame rates. Is there no option to handle parallax at all then?

I wasn't really sure how Sony were handling parallax, but I didn't bring it up because I didn't want to confuse the issue any more than necessary. lol Regardless, even if they were handling it, I'd still consider that to simply be the final step in frame preparation rather than any form of interpolation.
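If the reprojection really is just a 2D affine pass (rotation + translation + scaling), it could be sketched per pixel coordinate like this. The angle, shift, and scale values here are hypothetical, not anything from Sony's actual pipeline:

```python
# Sketch of reprojection as a cheap 2D affine transform (rotation +
# translation + scaling). All parameter values are illustrative.
import math

def reproject_point(x, y, angle_rad, dx, dy, scale=1.0):
    """Apply scale, then rotation, then translation to one pixel
    coordinate; warping the whole rendered image applies this to
    every pixel."""
    x, y = x * scale, y * scale
    xr = x * math.cos(angle_rad) - y * math.sin(angle_rad)
    yr = x * math.sin(angle_rad) + y * math.cos(angle_rad)
    return xr + dx, yr + dy

# No head roll, a 30px turn to the right, no zoom: a pure 2D shift,
# exactly the (A + xyOffset) case discussed above.
print(reproject_point(100, 50, 0.0, 30, 0))  # (130.0, 50.0)
```

Because there is no per-pixel depth lookup, this stays cheap, which is presumably the appeal over full depth-aware reprojection.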
 
The Processing Unit sitting on top of a PS4 is much smaller than the Wii:
[image: Processing Unit on top of a PS4]

Width-wise, from the front, it's about as wide as a Wii. Depth-wise it's probably about 2.5-3 inches shorter. It looks about as tall, though.
[image: size comparison with a Wii]
 

hesido

Member
Oh, really? I knew there were techniques for handling parallax (dis)occlusion, but they're less than perfect, and simply ignoring translation and its resulting effects can be an acceptable solution as well. So you're saying that Sony are ignoring translation and just handling rotation? For sure? I'd imagine that would be considerably cheaper, and produce results nearly as nice, especially at higher frame rates. Is there no option to handle parallax at all then?

I wasn't really sure how Sony were handling parallax, but I didn't bring it up because I didn't want to confuse the issue any more than necessary. lol Regardless, even if they were handling it, I'd still consider that to simply be the final step in frame preparation rather than any form of interpolation.

I think translation is there, but just as a 2D offset; I do remember reading/hearing(?) that the reprojection was a 2D affair. Have to dig that one out. Edit: I probably heard it in an interview of some sort, hard to track down.

The PSVR breakout box is two Gamecubes duct taped together.
:D
 
I think translation is there, but just as a 2D offset; I do remember reading/hearing(?) that the reprojection was a 2D affair. Have to dig that one out. Edit: I probably heard it in an interview of some sort, hard to track down.
Ah, right on. Yeah, not that I didn't believe you. I just wanted to read up on the specifics. lol
 