
Sony London details PSVR optimization & in-house VR engine

I've posted my slides up here now, as you guys seem interested. :)
Good stuff! Thanks for sharing. <3 Mind if I ask you a bunch of dumb questions? lol

Speaking of sharing… I wasn't really clear whether this was "check out our toy," or something you were actually making available to other teams, internally or externally.

What do you mean by "fully multithreaded"? Do you just run a bunch of job-agnostic worker threads like ND are doing? If not, why not?

In Slide 7, what exactly do you mean by, "… when you are editing on PS4"? That's a thing?

Can you explain the difference between the single-pixel technique and using quads, and how that will improve the results?

Any chance of this stuff running on OS X? :p

———

What even is a normal? lol Is there a good rendering walkthrough somewhere that gives an overview of the process and explains the jargon?


This is just for re-projection purposes, I presume. Or is there some other reason?
In short, because of the distortion being applied, you want to start with an image a bit larger than the final output, basically to give the distortion filter more information to work with. Basically, this pushes more detail towards the middle of your field of view.
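To put rough numbers on that (purely an illustrative sketch, not code from the talk; the 1.25 per-axis factor, the 1920x1080 output and the EyeBufferSize helper are placeholders based on the 1.2x-1.3x figure quoted further down the thread):

```cpp
// Illustrative only: size the eye buffer larger than the display so the
// barrel-distortion pass has extra source detail to sample from.
#include <cmath>
#include <cstdio>

struct Extent { int width, height; };

// Hypothetical helper: scale the output resolution per axis and round up
// to a convenient alignment for the GPU.
Extent EyeBufferSize(Extent display, float perAxisScale, int align = 8)
{
    auto roundUp = [align](float v) {
        return static_cast<int>(std::ceil(v / align)) * align;
    };
    return { roundUp(display.width  * perAxisScale),
             roundUp(display.height * perAxisScale) };
}

int main()
{
    const Extent display = { 1920, 1080 };             // final output
    const Extent eyeBuf  = EyeBufferSize(display, 1.25f);
    std::printf("render %dx%d, then distort down to %dx%d\n",
                eyeBuf.width, eyeBuf.height, display.width, display.height);
}
```

Because the lens magnifies the centre of the image, that's where the extra source pixels do the most good once the distortion pass resamples the buffer down to display resolution.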
 

catweazle

Neo Member
Good stuff! Thanks for sharing. <3 Mind if I ask you a bunch of dumb questions? lol
Sure!
Speaking of sharing… I wasn't really clear whether this was "check out our toy," or something you were actually making available to other teams, internally or externally.
We don't really have the resources currently to make the engine available to other teams, as we have to focus on making games. However, what we can do is talk about our ideas so everyone can improve their technology.
What do you mean by "fully multithreaded"? Do you just run a bunch of job-agnostic worker threads like ND are doing? If not, why not?
Pretty much. We do our game logic and render logic in the same frame to reduce latency though.
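For anyone not familiar with the "job-agnostic worker" model being referred to, here is a bare-bones sketch (generic, not Sony London's actual scheduler; the JobSystem class and its layout are invented for illustration):

```cpp
// Generic job-system sketch: any subsystem pushes closures onto one queue,
// and a pool of workers drains it without caring what the work is.
#include <atomic>
#include <condition_variable>
#include <cstdio>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class JobSystem {
public:
    explicit JobSystem(unsigned workers = std::thread::hardware_concurrency()) {
        for (unsigned i = 0; i < workers; ++i)
            pool_.emplace_back([this] { WorkerLoop(); });
    }
    ~JobSystem() {
        { std::lock_guard<std::mutex> lk(m_); quit_ = true; }
        cv_.notify_all();
        for (auto& t : pool_) t.join();
    }
    void Push(std::function<void()> job) {
        { std::lock_guard<std::mutex> lk(m_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }
private:
    void WorkerLoop() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m_);
                cv_.wait(lk, [this] { return quit_ || !jobs_.empty(); });
                if (quit_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();   // the worker doesn't know or care whose job this is
        }
    }
    std::vector<std::thread> pool_;
    std::queue<std::function<void()>> jobs_;
    std::mutex m_;
    std::condition_variable cv_;
    bool quit_ = false;
};

int main()
{
    JobSystem jobs(4);
    std::atomic<int> done{0};
    for (int i = 0; i < 8; ++i)
        jobs.Push([&done] { ++done; });   // animation, physics, draw submission, ...
    while (done < 8) std::this_thread::yield();
    std::printf("ran %d jobs\n", done.load());
}
```

In a setup like this, a frame would push its update work and its command-buffer-building work onto the same queue and wait for all of it before presenting, rather than pipelining rendering a frame behind the simulation; that is where the latency saving mentioned above comes from.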
In Slide 7, what exactly do you mean by, "… when you are editing on PS4"? That's a thing?
Yes, the editor sends all user input (mouse, keyboard, etc) from the PC across to the PS4, as well as any changes to data. This is pretty much essential for us as we need to check how things look in the headset constantly as it's such a different experience.
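As a rough idea of what such a bridge involves (the message format, names and values below are entirely hypothetical; the slides don't describe the real tool protocol):

```cpp
// Hypothetical editor-to-devkit bridge: each user action or data change is
// serialised into a small packet and streamed to the console, which applies it live.
#include <cstdint>
#include <cstdio>
#include <cstring>
#include <string>
#include <vector>

enum class MsgType : uint8_t { MouseMove, KeyPress, PropertyEdit };

struct MsgHeader {
    MsgType  type;
    uint32_t payloadBytes;   // size of the blob that follows
};

std::vector<uint8_t> PackPropertyEdit(const std::string& objectId,
                                      const std::string& property,
                                      float value)
{
    std::string body = objectId + '\0' + property + '\0' + std::to_string(value);
    MsgHeader header{ MsgType::PropertyEdit, static_cast<uint32_t>(body.size()) };

    std::vector<uint8_t> packet(sizeof(header) + body.size());
    std::memcpy(packet.data(), &header, sizeof(header));
    std::memcpy(packet.data() + sizeof(header), body.data(), body.size());
    return packet;           // in practice this would go out over a socket
}

int main()
{
    auto pkt = PackPropertyEdit("Lamp_01", "intensity", 2.5f);
    std::printf("property-edit packet: %zu bytes\n", pkt.size());
}
```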
Can you explain the difference between the single-pixel technique and using quads, and how that will improve the results?
Masking out quads gives you an uneven distribution of rendered pixels, which is what causes this "stair stepping" effect. With single pixels it's basically the same as just having a lower resolution.
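A toy way to visualise the difference (ASCII only, nothing to do with the real GPU masking path, and the actual pattern used isn't specified, so a checkerboard stands in): with a 2x2-quad mask the shaded pixels come in clumps separated by two-pixel gaps, while a per-pixel mask spaces them evenly, which is why it just reads as a lower resolution rather than as stair-stepping.

```cpp
// Toy illustration: '#' = shaded pixel, '.' = masked-out pixel.
#include <cstdio>

int main()
{
    const int W = 8, H = 8;

    std::puts("2x2 quad checkerboard (uneven clumps):");
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x)
            std::putchar((((x / 2) + (y / 2)) & 1) ? '.' : '#');  // whole quads dropped
        std::putchar('\n');
    }

    std::puts("per-pixel checkerboard (even spacing):");
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x)
            std::putchar(((x + y) & 1) ? '.' : '#');              // every other pixel
        std::putchar('\n');
    }
}
```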
Any chance of this stuff running on OS X? :p
As it's internal, I'm going to say fairly slim. :)
What even is a normal? lol Is there a good rendering walkthrough somewhere that gives an overview of the process and explains the jargon?
No idea actually. I've been doing this for so long I've forgotten how to talk like a normal person. Try the Wikipedia Rendering article?
 
Yay!

We don't really have the resources currently to make the engine available to other teams, as we have to focus on making games. However, what we can do is talk about our ideas so everyone can improve their technology.
Ah, right on. I just sort of assumed this was meant to be the VR version of Phyre Engine, but towards the end you started talking about what you may do differently in a "real" engine, so then I was less sure. lol

Pretty much. We do our game logic and render logic in the same frame to reduce latency though.
Okay, dumber questions: Why isn't all of that stuff asynchronous? Can't the renderer just blithely draw frames at 60 fps, using "the current state of the world" as determined by the game logic, whatever that state may be, even if it's a bit stale? If it's time to start building your render, why wait for anything? Is the user likely to notice if the physics or AI routines miss a beat? Wouldn't a late update on that single aspect of the simulation be far less noticeable than a late or missing frame? Why not let Physics do its thing and update as often as it can, while Rendering basically just keeps tape rolling no matter what?
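To make that proposal concrete, here is a minimal sketch of the decoupled arrangement being described (the questioner's idea, not how the engine discussed above actually works; the names and timings are invented):

```cpp
// Sketch: the simulation runs at its own pace and publishes completed steps;
// the renderer just grabs the most recent snapshot every frame, stale or not.
#include <atomic>
#include <chrono>
#include <cstdio>
#include <mutex>
#include <thread>

struct WorldState { double simTime = 0.0; /* transforms, poses, ... */ };

std::mutex        stateLock;
WorldState        published;              // last *completed* simulation step
std::atomic<bool> running{true};

void GameThread() {                       // free-running simulation
    WorldState local;
    while (running) {
        local.simTime += 1.0 / 60.0;      // pretend physics/AI did a step
        { std::lock_guard<std::mutex> lk(stateLock); published = local; }
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}

void RenderThread() {                     // keeps tape rolling no matter what
    while (running) {
        WorldState snapshot;
        { std::lock_guard<std::mutex> lk(stateLock); snapshot = published; }
        // ...build and submit a frame from `snapshot`, even if it's a step stale...
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}

int main() {
    std::thread game(GameThread), render(RenderThread);
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    running = false;
    game.join();
    render.join();
    std::printf("simulated %.3f s of game time\n", published.simTime);
}
```

The likely trade-off, and presumably the reason for the same-frame approach described above, is that rendering from the last completed snapshot adds latency between input and display, which VR is far less forgiving about than a flat game.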

In that same vein, do all of the various subsystems really need to operate at the same frequency as each other, much less Rendering? For example, wouldn't 10 Hz be plenty fast enough for AI to update? Human reaction times are 150-250 ms depending on the stimulus, so would it be problematic for AI to take as long as 100 ms to make a decision? AI can get pretty expensive, right? Wouldn't running it at 10 Hz instead of 60 Hz reduce that load by about 83%?
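Back-of-envelope version of that last claim (hypothetical loop, not anything from the slides): running AI on every sixth 60 Hz frame gives it a 10 Hz decision rate and cuts its update count by about 83%.

```cpp
// Tick AI on every sixth frame of a 60 Hz loop (i.e. at 10 Hz).
#include <cstdio>

int main()
{
    const int frames    = 600;             // ten seconds at 60 Hz
    const int aiDivider = 6;               // 60 Hz / 6 = 10 Hz
    int aiTicks = 0;

    for (int frame = 0; frame < frames; ++frame) {
        // physics and rendering would still run every frame here
        if (frame % aiDivider == 0)        // AI only gets a slice every ~100 ms
            ++aiTicks;                     // UpdateAI() would go here
    }
    std::printf("%d frames, %d AI ticks -> %.0f%% fewer AI updates\n",
                frames, aiTicks, 100.0 * (1.0 - double(aiTicks) / frames));
}
```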

Yes, the editor sends all user input (mouse, keyboard, etc) from the PC across to the PS4, as well as any changes to data. This is pretty much essential for us as we need to check how things look in the headset constantly as it's such a different experience.
Oh, I see. So you're still doing the editing on the PC, but the edits are then reflected in real time on the PS4, which then sends the video output back to the PC via Remote Play? Do you ever have a buddy wear the headset while you do this and say, "Oooh, that's the spot!"

Masking out quads gives you an uneven distribution of rendered pixels, which is what causes this "stair stepping" effect. With single pixels it's basically the same as just having a lower resolution.
Hmm, I'm not really sure I follow. I thought the stair-stepping was just the result of your "pixels" being four times larger, just as if you'd rendered them natively at 540p, but then after your quick-and-dirty render, you came back and cleaned it up with some 1080p AA. But what you're doing isn't effectively the same as rendering at the lower res? On Slide 40, you say, "We are working on a technique using hardware antialiasing to only mask out single pixels rather than 2x2 quads." Can you explain the difference between the two masks? I'm not sure I understand what a single-pixel mask would be… Like, are you moving to a simple if statement to see if any given pixel should actually be rendered? What do you test for? How is the end result any different from the rotating pattern of undrawn pixels you currently have? Just that it allows you to make a pattern that covers an arbitrary number of pixels, larger than 2x2?

Oh, is it possible to rotate the mask every quad rather than every frame? Would that reduce your stair-stepping at all?
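Purely as speculation about what per-quad rotation might look like (the engine's actual pattern isn't described anywhere in the slides), the difference could be as simple as folding the quad coordinates into the rotation index:

```cpp
// Speculative: which of the four positions in each 2x2 quad is dropped,
// rotated per frame only versus staggered per quad as well.
#include <cstdio>

int main()
{
    const int frame = 1;                            // any frame index

    std::puts("rotated per frame only (same slot in every quad):");
    for (int qy = 0; qy < 4; ++qy) {
        for (int qx = 0; qx < 4; ++qx)
            std::printf("%d ", frame & 3);
        std::putchar('\n');
    }

    std::puts("rotated per quad as well (neighbouring quads staggered):");
    for (int qy = 0; qy < 4; ++qy) {
        for (int qx = 0; qx < 4; ++qx)
            std::printf("%d ", (qx + qy + frame) & 3);
        std::putchar('\n');
    }
}
```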

As it's internal, I'm going to say fairly slim. :)
lol Yeah, fair enough.

No idea actually. I've been doing this for so long I've forgotten how to talk like a normal person. Try the Wikipedia Rendering article?
heh Right on. Now that you mention it, I think I started reading it a few weeks ago while I was researching something else, but dropped it when I'd found my answer. I guess I should sit down and read the entire thing. :p
 

spectator

Member
In short, because of the distortion being applied, you want to start with an image a bit larger than the final output, basically to give the distortion filter more information to work with. Basically, this pushes more detail towards the middle of your field of view.

Thanks for the details!
 

Shin-Ra

Junior Member
Is that 1.2-1.3x the x*y resolution (total pixel count), or the x and y resolution (each axis)?
I've posted my slides up here now, as you guys seem interested. :)
[Slide image: Answer_James_Fast_And_Flexible-17.png]
Thanks, that clarifies a lot: so it's 1.2x to 1.3x 1080p on each axis, which is 1.44x to 1.69x the number of pixels. I wonder if different game modes are fixed at a single resolution or if it's dynamic frame by frame depending on load.

I'm interested to see the results combined with TAA tuned for VR.
 