Yay!
We don't really have the resources currently to make the engine available to other teams, as we have to focus on making games. However, what we can do is talk about our ideas so everyone can improve their technology.
Ah, right on. I just sort of assumed this was meant to be the VR version of Phyre Engine, but towards the end you started talking about what you may do differently in a "real" engine, so then I was less sure. lol
Pretty much. We do our game logic and render logic in the same frame to reduce latency though.
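(A back-of-the-envelope illustration of why same-frame logic+render helps latency; the numbers and function names here are my own, not from the talk. The idea: pipelining game logic and rendering across two frames raises throughput but adds a full frame of input-to-display latency, which matters a lot in VR.)

```python
# Toy comparison: game logic and rendering in the SAME 60 Hz frame vs
# pipelined across two frames. These are illustrative numbers only.

FRAME_MS = 1000 / 60  # one frame at 60 fps, ~16.7 ms

def same_frame_latency():
    # Input sampled at frame start; the image for that input is scanned
    # out at the end of the SAME frame.
    return FRAME_MS

def pipelined_latency():
    # Frame N runs game logic, frame N+1 renders its output, so input
    # from frame N reaches the display at the end of frame N+1.
    return 2 * FRAME_MS

print(round(same_frame_latency(), 1))   # 16.7
print(round(pipelined_latency(), 1))    # 33.3
```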
Okay, dumber questions: Why isn't all of that stuff asynchronous? Can't the renderer just blithely draw frames at 60 fps, using "the current state of the world" as determined by the game logic, whatever that state may be, even if it's a bit stale? If it's time to start building your render, why wait for anything? Is the user likely to notice if the physics or AI routines miss a beat? Wouldn't a late update on that single aspect of the simulation be far less noticeable than a late or missing frame? Why not let Physics do its thing and update as often as it can, while Rendering basically just keeps tape rolling no matter what?
In that same vein, do all of the various subsystems really need to operate at the same frequency as each other, much less Rendering? For example, wouldn't 10 Hz be plenty fast enough for AI to update? Human reaction times are 150-250 ms depending on the stimulus, so would it be problematic for AI to take as long as 100 ms to make a decision? AI can get pretty expensive, right? Wouldn't running it at 10 Hz instead of 60 Hz reduce that load by 83%?
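(The multi-rate idea can be sketched with a simple accumulator-style fixed-timestep loop. This is my own illustration, not the engine's actual scheduler; the subsystem names and rates are hypothetical.)

```python
# Each subsystem banks elapsed time and ticks only when it has a full
# step saved up, so AI at 10 Hz simply skips most frames while
# rendering keeps "tape rolling" at 60 Hz.

RENDER_HZ, PHYSICS_HZ, AI_HZ = 60, 120, 10

class MultiRateLoop:
    def __init__(self):
        self.steps = {"render": 1 / RENDER_HZ,
                      "physics": 1 / PHYSICS_HZ,
                      "ai": 1 / AI_HZ}
        self.accumulators = {name: 0.0 for name in self.steps}
        self.counts = {name: 0 for name in self.steps}

    def advance(self, dt):
        # Feed the same wall-clock delta to every subsystem.
        for name, step in self.steps.items():
            self.accumulators[name] += dt
            while self.accumulators[name] >= step:
                self.accumulators[name] -= step
                self.counts[name] += 1  # stand-in for the real update call

loop = MultiRateLoop()
for _ in range(60):        # one simulated second at 60 fps
    loop.advance(1 / 60)
print(loop.counts)          # AI ticks ~10 times while render ticks 60
```

Note the loop still renders every frame; only the expensive subsystems drop to lower rates, which is exactly the ~83% saving for AI (10 Hz instead of 60 Hz).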
Yes, the editor sends all user input (mouse, keyboard, etc.) from the PC across to the PS4, as well as any changes to data. This is pretty much essential for us, as we need to constantly check how things look in the headset; it's such a different experience.
Oh, I see. So you're still doing the editing on the PC, but the edits are then reflected in real time on the PS4, which then sends the video output back to the PC via Remote Play? Do you ever have a buddy wear the headset while you do this and say, "Oooh, that's the spot!"
Masking out quads gives you an uneven distribution of rendered pixels, which is what gives you this "stair stepping" effect. With single pixels it's basically the same as just having a lower resolution.
Hmm, I'm not really sure I follow. I thought the stair-stepping was just the result of your "pixels" being four times larger, just as if you'd rendered them natively at 540p, but then after your quick-and-dirty render, you came back and cleaned it up with some 1080p AA. But what you're doing isn't effectively the same as rendering at the lower res? On Slide 40, you say, "We are working on a technique using hardware antialiasing to only mask out single pixels rather than 2x2 quads." Can you explain the difference between the two masks? I'm not sure I understand what a single-pixel mask would be.
Like, are you moving to a simple if statement to see if any given pixel should actually be rendered? What do you test for? How is the end result any different from the rotating pattern of undrawn pixels you currently have? Just that it allows you to make a pattern that covers an arbitrary number of pixels, larger than 2x2?
Oh, is it possible to rotate the mask every quad rather than every frame? Would that reduce your stair-stepping at all?
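(To make the distribution point concrete, here's my own toy illustration, not the engine's actual mask logic: a checkerboard over 2x2 quads vs a checkerboard over individual pixels. Both skip 50% of the pixels, but the quad mask clumps the skipped pixels into 2x2 blocks, which is where the chunkier stair-step artifacts come from.)

```python
# Compare how "clumpy" the un-rendered pixels are under each mask by
# measuring the longest horizontal run of skipped pixels.

W, H = 8, 8

def quad_mask(x, y):
    # Checkerboard over 2x2 quads: whole quads rendered or skipped together.
    return ((x // 2) + (y // 2)) % 2 == 0

def pixel_mask(x, y):
    # Checkerboard over single pixels: no skipped pixel is horizontally or
    # vertically adjacent to another skipped pixel.
    return (x + y) % 2 == 0

def longest_skipped_run(mask):
    # Longer runs of missing pixels mean chunkier holes to fill in later.
    best = 0
    for y in range(H):
        run = 0
        for x in range(W):
            run = 0 if mask(x, y) else run + 1
            best = max(best, run)
    return best

print(longest_skipped_run(quad_mask))   # 2: gaps are two pixels wide
print(longest_skipped_run(pixel_mask))  # 1: gaps are a single pixel wide
```

Same pixel budget either way; the single-pixel mask just spreads the missing samples evenly, so reconstruction has only one-pixel gaps to fill instead of two-pixel steps.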
As it's internal, I'm going to say fairly slim.
lol Yeah, fair enough.
No idea actually. I've been doing this for so long I've forgotten how to talk like a normal person. Try the Wikipedia Rendering article?
heh Right on. Now that you mention it, I think I started reading it a few weeks ago while I was researching something else, but dropped it when I'd found my answer. I guess I should sit down and read the entire thing.