
Eye tracking for VR: A game changer

quetz67

Banned
I think the idea of foveated rendering and the win for performance is that you can reduce per pixel quality in the parts of the screen the eye is not focussed on, and thus either bank the framerate gain or increase per-pixel complexity where the eye is focussed. Or selectively anti-alias only parts of the image, or decrease resolution away from the eye's target and increase it where it is targeted. Etc. etc.

However I don't know how close we are to that being viable in consumer devices. The latency on the tracking would need to be very low indeed for that application.

It may not need to be so low for control input, though - and that might be worthy enough of an application to pursue first, on its own.

That's what I understood, but that makes the "game changer" in the title a little too sensationalistic. It will not help gameplay; it's just that the Rift will probably get eye tracking as soon as possible so Facebook can track which ads the user looks at.
 
Sounds terrible and distracting. Messing with text that people are reading is rarely a good idea. It's actually a really good way to break immersion and take someone out of the moment. You see something like that, you stop reading and pay more attention to the color of the words than the prose.

Besides, are people going to read a book with a HMD on for some colored text and sound gimmicks?

I meant in game.

Like if you're reading a book in Elder Scrolls.

And obviously the effects would need to be thought out; anything can be gimmicky if you don't do it right. The word "house" being printed in blue in House of Leaves grabbed your attention and was unsettling as a result.
 

JNT

Member
Eye tracking is a game changer, the only question is when. I think we'll see it in a consumer HMD product at the very earliest in 2016.

And even more so than the switch to regular VR from screens, you'll need to get devkits out very early -- changing your engine to effectively perform FOVeated rendering can potentially be a lot more work than "just" integrating rendering for "normal" VR.

I haven't seen any of this tech first hand, so I've been cautious commenting on it so far. What I worry about is the latency between the eye movement and the result on screen. Moving your eyes only to focus on a blurry mess could potentially do more harm than good, even if it only lasts for a fraction of a second. I wonder how far technology has come regarding that issue.

Getting proper eye tracking working at a very low latency is indeed going to be a game changer.
 

tensuke

Member
[animated gif: 6ezNDf.gif]

I'm in.

What is this from?
 

ZehDon

Member
I can only say one thing with certainty in regards to VR and the path we take towards it: I'm going to need a lot of new words to understand half of this stuff.
 

gofreak

GAF's Bob Woodward
That's what I understood, but that makes the "game changer" in the title a little too sensationalistic. It will not help gameplay; it's just that the Rift will probably get eye tracking as soon as possible so Facebook can track which ads the user looks at.

Haha :p But I think it can introduce new and more intuitive ways to interact. Response to the Infamous demo at GDC, which used it for aiming, seemed very positive.
 
The development for this type of stuff seems far too expensive. Maybe years down the road, when we already have a pretty good grasp on VR and what kinds of software work for it.
 

quetz67

Banned
Haha :p But I think it can introduce new and more intuitive ways to interact. Response to the Infamous demo at GDC, which used it for aiming, seemed very positive.
That wasn't VR, and actually it makes no sense anyway, unless I am missing something. The camera centers on whatever you are looking at, which means you have to keep returning your gaze to the center. That will get pretty eye-straining, especially once it interferes with moving around. I can't see any serious use besides shooting galleries.
 
No, there's no point to demo that if it won't show up till PS5. They're clearly evaluating it for Morpheus. A Sony engineer already said they could miniaturize the tech.
At the GDC reveal, Rick said they were working on eye tracking "for general gaming," which makes it sound like it's not intended for Morpheus. However, then Anton jumps in and says, "It's a technology we're aware of," and gives Rick a knowing glance, which sounded more like, "We're not playing that card yet."

Do you have a link to this engineer saying it's something they could get into the headset?


Needs depth modulation: Accommodation
What are accommodation and depth modulation?


Yes. I've often thought about it but I struggle to think how a rendering engine could be constructed to take advantage of this by only rendering a sharp image at the center and then progressively reducing complexity and rendering cost towards the edges in a seamless and smooth way.

A very tough nut to crack.
Is it a smooth drop off, or is it more like a "window of goodness"?

Not really having any idea what I'm talking about, here's what I was thinking, lol. Matt was saying your vision is only hi-res in about 2º of your FoV. Let's assume our display is 100º both vertical and horizontal, meaning our "good vision" only accounts for about 0.04% of the visible area of the display. So if we have a display that's 8000x8000 for each eye, you can only clearly see a 160x160 pixel area of the display.
Pretty sure that's how it works. I'm sure Matt will correct me if I'm wrong on the specifics; 160x160 seems kinda low to me.

Anyway, so why not just render the entire 64MP view with some quick and dirty method, and then have another render pass which just does a nice, detailed render of whatever section of the screen the eye is focused on? Then you composite the two renders and send them off to the display, similar to how you'd do a HUD overlay. Since you're only following comparatively slow eye movements in "real time," it's pretty easy to predict which portion of the screen needs a hi-res pass, especially if you give yourself a nice buffer and do a detailed render for like a 4º FoV.
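In code, the compositing step I'm imagining might look something like this; render_scene, the 1/8-res cheap pass, and the 4º inset size are all placeholder choices on my part, not anyone's actual engine:
Code:
# Sketch of the "cheap full frame + hi-res inset" composite described above.
# render_scene and all the numbers are placeholders, not a real engine API.
import numpy as np

FOV_DEG  = 100                          # assumed per-eye field of view
FULL_RES = 8000                         # assumed 8000x8000 eye buffer
INSET_PX = FULL_RES * 4 // FOV_DEG      # 4 degree hi-res window = 320 px here

def render_scene(width, height, region=None):
    # stand-in for the real renderer: returns an RGB image of the requested size
    return np.zeros((height, width, 3), dtype=np.uint8)

def composite_frame(gaze_x, gaze_y):
    # 1) quick-and-dirty pass: render at 1/8 resolution and upscale it
    cheap = render_scene(FULL_RES // 8, FULL_RES // 8)
    frame = cheap.repeat(8, axis=0).repeat(8, axis=1)

    # 2) detailed pass: full-resolution render of the small window around the gaze
    x0 = min(max(gaze_x - INSET_PX // 2, 0), FULL_RES - INSET_PX)
    y0 = min(max(gaze_y - INSET_PX // 2, 0), FULL_RES - INSET_PX)
    inset = render_scene(INSET_PX, INSET_PX, region=(x0, y0, INSET_PX, INSET_PX))

    # 3) composite the detailed window over the cheap frame, HUD-overlay style
    frame[y0:y0 + INSET_PX, x0:x0 + INSET_PX] = inset
    return frame
In practice you'd probably blend the border of the inset rather than hard-cutting it, but that's the gist of what I mean.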


That wasn't VR, and actually it makes no sense anyway, unless I am missing something. The camera centers on whatever you are looking at, which means you have to keep returning your gaze to the center. That will get pretty eye-straining, especially once it interferes with moving around. I can't see any serious use besides shooting galleries.
I assumed you'd still control the camera with the right analog, and the eye tracking would just control the crosshair. Is that not how they have it set up?

I saw the brief clip of Keighley trying it at the launch event, but they didn't really show much. Was something more detailed shown at GDC? Anyone have a link?
 

mrklaw

MrArseFace
It is something interesting to put on the list after the initial launch. For the start, 1080/1440p panels will be OK, driven traditionally. But as the desire for increased resolution starts to overtake available screen density (pretty soon, I'd expect), then I think people will need to look at interesting solutions.

however, simply having foveated rendering doesn't do away with the need for crazy high screen display resolutions - even if you're only rendering a small portion with good detail, you're still looking at 8-16k screens, and I'm not sure even the smartphone arms race will go that high. You'd need VR to be big enough to drive display advances itself.
 
It is something interesting to put on the list after the initial launch. For the start, 1080/1440p panels will be OK, driven traditionally. But as the desire for increased resolution starts to overtake available screen density (pretty soon, I'd expect), then I think people will need to look at interesting solutions.
I would think it would still be beneficial even with lower resolution displays. Imagine if Sony could go from 1080p to 1440p, nearly doubling the pixel count, and instead of nearly doubling the load on the GPU, they could cut the load by 75%? That would make it comparatively easy to get really high frame rates while still keeping image quality high where it matters.

however, simply having foveated rendering doesn't do away with the need for crazy high screen display resolutions - even if you're only rendering a small portion with good detail, you're still looking at 8-16k screens, and I'm not sure even the smartphone arms race will go that high. You'd need VR to be big enough to drive display advances itself.
The microdisplay in the HMZ is a 0.7" diagonal with 2000+ PPI. Is there anything stopping them from simply scaling that up to a 2"x2", 4000x4000 display? Would the yields be horrific or something? What about an array of smaller panels?
 

Branduil

Member
Sounds like features for the next generation of VR displays. I imagine we're at least several years away from all of that being possible for consumer devices.
 
Is that not how they have it set up?

I saw the brief clip of Keighley trying it at the launch event, but they didn't really show much. Was something more detailed shown at GDC? Anyone have a link?

The demo in the video I saw showed camera control and aiming both with your head/eyes. It looked like large-bounding-box Wii FPS controls with the controller strapped to your head. Seems really unwieldy, but I've not tried it; the aiming part should be OK, I guess, if it works well.
 
Huh, a thread bump. Time to post relevant missing info:

Update 1: An actually working VR eye-tracking solution
Uses a virtual retinal display (like the avegant glyph) with a beamsplitter.
It can be very accurate, as it is in essence a real time retinal scan.

Light emitted from a virtual retinal display (10) light source (12) passes through a beamsplitter (42) to a scanning subsystem (16) and on to an eyepiece (20) and the viewer's eye. Some of the light is reflected from the viewer's eye passing back along the same path. Such light however is deflected at the beamsplitter toward a photodetector (44). The reflected light is detected and correlated to the display scanner's position. The content of the reflected light and the scanner position for such sample is used to generate a map of the viewer's retina. Such map includes 'landmarks' such as the viewer's optic nerve, fovea, and blood vessels. The map of the viewer's retina is stored and used for purposes of viewer identification. The viewer's fovea position is monitored to track where the viewer is looking.
edit: [patent figure]

Source: Google patents https://www.google.com/patents/WO1999036826A1?cl=en
US patent office: http://pdfpiw.uspto.gov/.piw?PageNu...1=5982555.PN.%26OS=PN/5982555%26RS=PN/5982555

Answering questions:
I think the idea of foveated rendering and the win for performance is that you can reduce per pixel quality in the parts of the screen the eye is not focussed on, and thus either bank the framerate gain or increase per-pixel complexity where the eye is focussed. Or selectively anti-alias only parts of the image, or decrease resolution away from the eye's target and increase it where it is targeted. Etc. etc.
The eye is only sharp in a roughly 2 degree cone of vision around the fovea. The resolution of the eye decreases rapidly beyond that, so "decrease resolution" is the core of the idea. Also see the explanations below.
[chart: visual acuity of the human eye vs. angle from the fovea]


Depth of Field can be achieved by using light field technology
That's true, but VRDs can create actual depth without the need to render light fields.

What are accommodation and depth modulation?
Depth modulation is the ability to dynamically change the focus depth of the display.
Accommodation is the process by which the lens in our eye changes focus to maintain a clear image as an object's distance varies.
Current HMDs project everything as if it were at infinity, creating a depth conflict (stereopsis depth is different from focus depth).
[diagram: accommodation of the eye]


Matt was saying your vision is only hi-res in about 2º of your FoV. Let's assume our display is 100º both vertical and horizontal, meaning our "good vision" only accounts for about 0.04% of the visible area of the display. So if we have a display that's 8000x8000 for each eye, you can only clearly see a 160x160 pixel area of the display. Pretty sure that's how it works. I'm sure Matt will correct me if I'm wrong on the specifics; 160x160 seems kinda low to me.

The best vision is around 0.8 arcminutes, so it would be 150x150 pixels, yes (2*60/0.8). (diffraction limit is 0.6 arcminutes, but we are also contrast limited)
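For anyone following along, the arithmetic, with the 100 degree FoV being my assumption:
Code:
# Sanity check on the numbers above (0.8 arcmin acuity, 2 degree sharp cone, 100 degree FoV)
pixels_per_degree = 60 / 0.8                    # 75 px/degree needed to match 0.8 arcmin
foveal_window_px  = 2 * pixels_per_degree       # 150 px across for the sharp 2 degree cone
full_display_px   = 100 * pixels_per_degree     # 7500 px across for a 100 degree display
print(pixels_per_degree, foveal_window_px, full_display_px)
Which also lines up with the roughly 8K-per-eye displays being thrown around earlier in the thread.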
Anyway, so why not just render the entire 64MP view with some quick and dirty method, and then have another render pass which just does a nice, detailed render of whatever section of the screen the eye is focused on? Then you composite the two renders and send them off to the display, similar to how you'd do a HUD overlay. Since you're only following comparatively slow eye movements in "real time," it's pretty easy to predict which portion of the screen needs a hi-res pass, especially if you give yourself a nice buffer and do a detailed render for like a 4º FoV.
That's almost exactly the concept behind foveated rendering, but no need to render at 64MP. Upscaling is enough.
[image: foveated rendering illustration]

Yes. I've often thought about it but I struggle to think how a rendering engine could be constructed to take advantage of this by only rendering a sharp image at the center and then progressively reducing complexity and rendering cost towards the edges in a seamless and smooth way.
Here is one way (also see image above):
Microsoft Research said:
Our system exploits foveation on existing graphics hardware by rendering three nested and overlapping render targets or eccentricity layers centered around the current gaze point. Refer to Figure 1. These layers are denoted the inner/foveal layer, middle layer, and outer layer. The inner layer is smallest in angular diameter and rendered at the highest resolution (native display) and the finest LOD. The two peripheral layers cover a progressively larger angular diameter but are rendered at progressively lower resolution and coarser LOD. These outer layers are also updated at half the temporal rate of the inner layer. We then interpolate these layers
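In very rough Python, that nested-layer scheme could be sketched like this; the layer radii, scales, and update rates here are illustrative placeholders, not Microsoft's actual values:
Code:
# Sketch of the three nested "eccentricity layers" from the quote above.
# Radii, scales and update rates are placeholders, not Microsoft's values.
import numpy as np

PX_PER_DEG = 75                    # assumed display density (see calculation earlier)
LAYERS = [                         # (name, angular radius deg, resolution scale, update every N frames)
    ("inner/foveal", 2.5, 1.0, 1),
    ("middle",      10.0, 0.5, 2),
    ("outer",       50.0, 0.25, 2),
]

def render_layer(radius_deg, scale, gaze):
    size = int(2 * radius_deg * PX_PER_DEG * scale)   # square render target for this layer
    return np.zeros((size, size, 3), dtype=np.uint8)  # stand-in for the real render

_layer_cache = {}

def foveated_frame(gaze, frame_index):
    for name, radius, scale, every_n in LAYERS:
        if frame_index % every_n == 0:                # outer layers refresh at half rate
            _layer_cache[name] = render_layer(radius, scale, gaze)
    # The real system then upsamples each layer to display resolution and blends
    # them around the gaze point, widest layer first, foveal layer on top.
    return dict(_layer_cache)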
 

pj

Banned
Yes. I've often thought about it but I struggle to think how a rendering engine could be constructed to take advantage of this by only rendering a sharp image at the center and then progressively reducing complexity and rendering cost towards the edges in a seamless and smooth way.

A very tough nut to crack.

It would just render the whole screen at a low level of detail with some sort of blurring effect, then do a high resolution pass for the small area being focused on. The engine should have access to lower LOD assets that would normally be used when the object is far away.

The actual eye tracking seems like it should be much harder than making the engine work with it
 

Man

Member
FOVeated rendering practically guarantees a 'PS1 to PS2' era jump for VR visuals, and could even make a standalone HMD with a 'mobile' chipset viable. In addition, it opens up heaps of gameplay design opportunities.

Sony may actually release such a product as a standalone platform and not as a successor to the PS4 (something that's neither a console nor a portable/mobile device).
 
Update 1: An actually working VR eye-tracking solution
Uses a virtual retinal display (like the avegant glyph) with a beamsplitter.
It can be very accurate, as it is in essence a real time retinal scan.
Like, working working? As in, Sony could pay UW a license fee today, and start putting this into Morpheus tomorrow? This patent is from 1999. Why does everyone seem to think we're "not close" to head-mounted eye tracking? What's wrong with this one? What's the deal with VRDs anyway? Wiki sez they have resolution "approaching the limit of human vision," and great color reproduction. It also lists a FoV of 120º, but for some reason the Avegant Glyph is only 45º, so I'm not sure what's going on there.

Depth modulation is the ability to dynamically change the focus depth of the display.
Accommodation is the process by which the lens in our eye changes focus to maintain a clear image as an object's distance varies.
Current HMDs project everything as if it were at infinity, creating a depth conflict (stereopsis depth is different from focus depth).
Okay, so when the user is looking at a fly on their nose, we want to defocus everything behind it? And eye tracking gives us that? What about when the eyes refocus without moving, or does that even happen?

The best vision is around 0.8 arcminutes, so it would be 150x150 pixels, yes (2*60/0.8). (diffraction limit is 0.6 arcminutes, but we are also contrast limited)
Wow, that is pretty tiny. So if I'm understanding your other post correctly, this gives us a 100-fold increase in rendering efficiency? So if a PS4 is running a game at 1MP per eye and 60 Hz, then with foveated rendering, we can jump to 16MP per eye and 375 Hz?? So we could get visuals as good or better than Second Son, at 16x the resolution at 120 Hz, or what?

And with a larger field of view, we actually get a larger performance multiplier, since a proportionally smaller section of the display gets a full pass?

That's almost exactly the concept behind foveated rendering, but no need to render at 64MP. Upscaling is enough.
Hey, I figured something out!! Yeah, when I said "quick and dirty," I figured it would be mostly upscaling, but I didn't know if it would also involve stuff like reducing textures and AA, etc.

Here is one way (also see image above):
Microsoft Research said:
Our system exploits foveation on existing graphics hardware by rendering three nested and overlapping render targets or eccentricity layers centered around the current gaze point. Refer to Figure 1. These layers are denoted the inner/foveal layer, middle layer, and outer layer. The inner layer is smallest in angular diameter and rendered at the highest resolution (native display) and the finest LOD. The two peripheral layers cover a progressively larger angular diameter but are rendered at progressively lower resolution and coarser LOD. These outer layers are also updated at half the temporal rate of the inner layer. We then interpolate these layers
I was wondering about that, but I wasn't sure if it would be a good idea or not. So we'd refresh the in-focus area at like 120 Hz while the rest is refreshed at 60 Hz? Or would we need to refresh at like 180/90 or 240/120 Hz to avoid flicker in the periphery?

What are your thoughts on "giant" microdisplays? Can they just re-tool a fabrication line to start producing 2.8" 16MP displays with "current" technology, or would additional research need to be done to make them larger? What about arrays? Or should we be looking at VRDs? Or something else?

Can HDMI even carry this much data, or would the headset need to do its own upscaling and compositing of the component layers?
 

GraveHorizon

poop meter feature creep
That's what I understood, but that makes the "game changer" in the title a little too sensationalistic. It will not help gameplay; it's just that the Rift will probably get eye tracking as soon as possible so Facebook can track which ads the user looks at.

Once they know what people like to look at (boobs, butts, and bright colors), they'll retool their ads in the manner of the Kinect. "Gaze at this ad for more than 0.8 seconds to be taken to the advertiser's virtual store!"
 
Like, working working? As in, Sony could pay UW a license fee today, and start putting this into Morpheus tomorrow?
UW had it working as a prototype in a lab. The UW technology itself was sold to Microvision, which seems to have abandoned the project at the end of the VR craze in favor of pico projectors for smartphones.
I don't think it's a hard thing to do anyway. It's just a beamsplitter + photodetector, so Sony especially should have little trouble adapting their own technology from 3-chip video cameras, apart from cost issues.
[photo: UW HITLab virtual retinal display prototype]

Source: http://www.hitl.washington.edu/research/true3d/ (note: seems like optics have improved by a lot since then, huh)

What's the deal with VRDs anyway? Wiki sez they have resolution "approaching the limit of human vision," and great color reproduction. It also lists a FoV of 120º, but for some reason the Avegant Glyph is only 45º, so I'm not sure what's going on there.
Wikipedia links to the prototype above, which has a poor VGA screen with a 40 degree FoV. Seems like they copied the project's wish list rather than the actual specs.
Source: http://www.hitl.washington.edu/publications/p-95-1/
Modern digital micromirror devices are commercially available from many manufacturers. For example, Avegant uses a 0.3" 720p chip from Texas Instruments. They also have a 1.3" 4K chip, but the price might be prohibitive.
And it doesn't have to be DMD. LCOS would work as well.
Source: http://www.ti.com/lit/ds/symlink/dlp3114.pdf
Beam splitters and photodetectors aren't hard to find either. It's all a question of cost.
Okay, so when the user is looking at a fly on their nose, we want to defocus everything behind it? And eye tracking gives us that?
Something like that. We want at least the focus depth of the object we are looking at to be the same as the stereopsis depth. It doesn't matter if our eyes are moving or not. Stereopsis = depth we get from viewing with two eyes. Ideally, focus and object depth are equal for the whole scene, which would work without eye-tracking; I'm not sure how that would work with VRDs, but it works with light field rendering.
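To put a number on the conflict: focal demand in dioptres (1/distance in metres) for the virtual object versus a display whose optics are focused at infinity. The distances below are just examples:
Code:
# Vergence vs. accommodation demand for a fixed-focus-at-infinity display.
def dioptres(distance_m):
    return 1.0 / distance_m

for d in (0.3, 0.5, 1.0, 3.0, 10.0):       # example virtual object distances (metres)
    vergence_demand = dioptres(d)          # where stereopsis puts the object
    focus_demand    = 0.0                  # display optics focused at infinity
    print(d, "m ->", round(vergence_demand - focus_demand, 2), "D of conflict")
# Near objects produce conflicts of one to several dioptres, which is why "near"
# content is what strains the eyes; distant objects are essentially conflict-free.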

Wow, that is pretty tiny. So if I'm understanding your other post correctly, this gives us a 100-fold increase in rendering efficiency? So if a PS4 is running a game at 1MP per eye and 60 Hz, then with foveated rendering, we can jump to 16MP per eye and 375 Hz?? So we could get visuals as good or better than Second Son, at 16x the resolution at 120 Hz, or what?
Not quite; the benefits get smaller at lower resolutions and once you account for tracking latencies. Also, rasterization is only one part of the rendering cost. But otherwise yeah, that's the gist of the idea.
And with a larger field of view, we actually get a larger performance multiplier, since a proportionally smaller section of the display gets a full pass?
yep
I was wondering about that, but I wasn't sure if it would be a good idea or not. So we'd refresh the in-focus area at like 120 Hz while the rest is refreshed at 60 Hz? Or would we need to refresh at like 180/90 or 240/120 Hz to avoid flicker in the periphery?
The periphery is less important, but to avoid low-persistence flicker, the refresh rate would actually have to be higher in the periphery.
The latest model for flicker fusion is (See my other neogaf post: http://www.neogaf.com/forum/showthread.php?t=773531 )
Code:
CFF = (0.24*E + 10.5) * (log L + log p + 1.39*log d - 0.0426*E + 1.09)   (Hz)
E = angle from the center of your gaze (degrees)
L = brightness (eye luminance; a 215 cd/m2 full-white screen gives log L = 3.45)
d = field of view (degrees)
p = eye adaptation (pupil area in mm2; typical log p = 0.5-1.3)
So it's something like 80/90/100 Hz for the three layers (might calculate exact values for it later)
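For what it's worth, plugging assumed values into that formula (log L, log p, and d are assumptions from the ranges above, and the layer eccentricities are just examples):
Code:
# Flicker-fusion threshold per layer, using the formula above with assumed inputs.
import math

LOG_L = 3.45     # ~215 cd/m2 full-white screen, per the note above
LOG_P = 1.0      # eye adaptation, picked from the 0.5-1.3 range above
D     = 100.0    # assumed field of view in degrees

def cff(eccentricity_deg):
    e = eccentricity_deg
    return (0.24 * e + 10.5) * (LOG_L + LOG_P + 1.39 * math.log10(D) - 0.0426 * e + 1.09)

for name, e in (("inner layer", 0.0), ("middle layer", 10.0), ("outer layer", 30.0)):
    print(name, round(cff(e)), "Hz")    # about 87 / 102 / 125 Hz with these inputs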
What are your thoughts on "giant" microdisplays? Can they just re-tool a fabrication line to start producing 2.8" 16MP displays with "current" technology, or would additional research need to be done to make them larger? What about arrays? Or should we be looking at VRDs? Or something else?
As I said above, 4K DMDs are already available. It's just a matter of price. A 1080p chip costs $50 on Amazon.
See: http://www.amazon.com/s/ref=nb_sb_n...s&field-keywords=dlp+chip&rh=i:aps,k:dlp+chip
Can HDMI even carry this much data, or would the headset need to do its own upscaling and compositing of the component layers?
Not HDMI, but DisplayPort 1.3 is enough for 8K 60 Hz or 4K 120 Hz, and should be available in the second half of this year.
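Rough uncompressed-bandwidth math behind that, assuming 24-bit RGB and ignoring blanking overhead; the ~25.9 Gbit/s payload figure for DisplayPort 1.3 is from memory, so treat it as an assumption:
Code:
# Uncompressed video bandwidth, 24-bit RGB, blanking overhead ignored.
def gbit_per_s(width, height, hz, bits_per_pixel=24):
    return width * height * hz * bits_per_pixel / 1e9

print("4K @ 120 Hz:", round(gbit_per_s(3840, 2160, 120), 1), "Gbit/s")   # ~23.9
print("8K @  60 Hz:", round(gbit_per_s(7680, 4320, 60), 1), "Gbit/s")    # ~47.8
# DisplayPort 1.3 carries roughly 25.9 Gbit/s of payload, so 4K120 RGB fits;
# the 8K60 mode only fits with 4:2:0 chroma subsampling (12 bpp -> ~23.9 Gbit/s).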
 

Man

Member
SensoMotoric Instruments (SMI) have done the expected and implemented eye tracking in a custom Oculus Rift: http://arstechnica.com/gaming/2014/...displays-like-the-oculus-rift-consumer-ready/

Sony showing SMI's technology working with inFamous at the same time as they were unveiling Project Morpheus seems to indicate that this is the path they are going to take (though we will have to see whether they hold off on implementing it until the PS5-era VR glasses).
Oculus is probably working on this heavily behind closed doors as well.
 

spekkeh

Banned
This looks like it would be 20 years off at least, and by that point it may be less important. Human saccades are incredibly fast, and we've only just about eliminated pop-in during normal avatar movement. The rendering engine shifting entire assets in milliseconds?
 
SensoMotoric Instruments (SMI) have done the expected and implemented eye tracking in a custom Oculus Rift: http://arstechnica.com/gaming/2014/...displays-like-the-oculus-rift-consumer-ready/

Sony showing SMI's technology working with inFamous at the same time as they were unveiling Project Morpheus seems to indicate that this is the path they are going to take (though we will have to see whether they hold off on implementing it until the PS5-era VR glasses).
Oculus is probably working on this heavily behind closed doors as well.
"SMI would not comment on whether it was talking to Sony about putting its technology into Sony's recently announced Project Morpheus HMD for PS4."

Seems like if they weren't already in talks with Sony, he'd be free to say more. ;) He does say they're currently shopping the tech to "HMD manufacturers," and that they're close to having a consumer-ready version, however, and we know they're on good terms with Sony.

So are they using this for foveated rendering, or just for IPD calibration? It sounded like they briefly hinted at FR, but they mostly just talked about the IPD stuff, which is an issue I've heard Anton mention a few times, with no specifics on how they were planning to "solve" it.

If they do FR, would they want ultra-hi-res displays, like a "mega microdisplay," or would they just make it really cheap to drive a 1440p display?
 

Man

Member
FOVeated rendering will probably take a few years to figure out anyhow, so maybe Sony is including the current eye-tracking tech for IPD and gameplay opportunities, which are big features in themselves (and a light basis for further FOVeated-rendering research).
 

Zaptruder

Banned
This looks like it would be 20 years off at least, and by that point it may be less important. Human saccades are incredibly fast, and we've only just about eliminated pop-in during normal avatar movement. The rendering engine shifting entire assets in milliseconds?

It doesn't have to do that. It can just shift fill rate rendering.

i.e. things inside the focal area = render at full resolution.

things outside the focal area = render at decreasing resolutions.

If you can make it as responsive as mouse movement, you can make it responsive enough for eye saccades.
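As a sketch, the mapping could be as dumb as this; the angle thresholds and scales are made-up numbers, not anything shipping:
Code:
# Pick a render-resolution scale from the angle between a region and the
# tracked gaze direction. Thresholds and scales here are illustrative only.
def resolution_scale(eccentricity_deg: float) -> float:
    if eccentricity_deg < 5.0:      # focal region: full resolution
        return 1.0
    if eccentricity_deg < 20.0:     # near periphery: half resolution per axis
        return 0.5
    return 0.25                     # far periphery: quarter resolution per axis

# e.g. a tiled renderer would multiply each tile's pixel dimensions by the
# scale returned for that tile's angle from the gaze point before shading it.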

I mean, saccades are fast - but it still takes time for our brains to adjust to what we're looking at.


Anyway, eye-facing IR cameras inside the goggles are a great multi-solution inclusion - eye tracking for auto-IPD adjustment (which is critical for reducing the awkward parts of the adoption barrier - the mass market is generally unwilling/unable to do calibration, meaning if it doesn't work, they'll just whine and complain), but also useful for increasing frame rate for VR rendering... and for VR communication - eye tracking for avatar emotivity. Being able to see where people are looking is such an incredibly important part of communication that people don't really think about it much at all - but it has been important enough for evolutionary processes to force our irises to shrink and our sclera to grow.

And it even has UI benefits - minimize UI when not looking at it; maximize UI when looking at it.

If this is within the realm of possibility, Oculus and Sony would be foolish not to include it as part of their first consumer VR kits. And going by the latest updates, it very much looks like they will indeed include it.
 
You know I kinda think (hope) that both CV1 of Morpheus and the Rift will have this, it's so useful. I hope the technology is there to do it in 2015 or 2016.
 
Eye tracking for aiming in Infamous looked amazing (how come I'm only seeing this clip now?), and I'm sure it could be used for lots of gameplay-related innovations and really make VR awesome. The demo looked like it was actually working: he was running around, shooting guys, HITTING them, and it all looked great. The demos of FOVeated rendering, by contrast, seem way less mature.

Either way, if they integrate eye-tracking into the Morpheus that would be amazing, but I think that if they were going to, they'd have hinted at it already with the dev-kits.
 

hesido

Member
This is the one and probably ONLY solution to remove the convergence/accommodation conflict in 3D content. I myself am affected very badly by "near" objects screwing with my eyes and brain, and it puts a lot of strain on my eye muscles.
 

Zaptruder

Banned
Eye tracking for aiming in Infamous looked amazing (how come I'm only seeing this clip now?), and I'm sure it could be used for lots of gameplay-related innovations and really make VR awesome. The demo looked like it was actually working: he was running around, shooting guys, HITTING them, and it all looked great. The demos of FOVeated rendering, by contrast, seem way less mature.

Either way, if they integrate eye-tracking into the Morpheus that would be amazing, but I think that if they were going to, they'd have hinted at it already with the dev-kits.

http://msrvideo.vo.msecnd.net/rmcvideos/173013/dl/173013.mp4

A link to a Siggraph 2012 video on foveated rendering done by a Microsoft researcher.

Pretty good results... significant speed ups to rendering as well.
 

spekkeh

Banned
It doesn't have to do that. It can just shift fill rate rendering.
I have an MSc in game technology, but I readily admit much of the rendering pipeline stuff goes way over my head, so I'll take your word for it here. It would be amazing tech that, with a sort of adaptive depth of field, could also alleviate some of the discomfort that 3D media currently impose, where your eyes want to focus on something that's blurred out.

I also agree that eyetracking in HMDs (or in any UI really) would be great.

edit:

http://msrvideo.vo.msecnd.net/rmcvideos/173013/dl/173013.mp4

A link to a Siggraph 2012 video on foveated rendering done by a Microsoft researcher.

Pretty good results... significant speed ups to rendering as well.
wow, wizards!
 

Seanspeed

Banned
You know I kinda think (hope) that both CV1 of Morpheus and the Rift will have this, it's so useful. I hope the technology is there to do it in 2015 or 2016.
It sounds like the tech is there; it's just about packaging it into a VR headset and being able to do it affordably, which is what may not be there for the initial headset releases.
 
http://msrvideo.vo.msecnd.net/rmcvideos/173013/dl/173013.mp4

A link to a Siggraph 2012 video on foveated rendering done by a Microsoft researcher.

Pretty good results... significant speed ups to rendering as well.

I watched it before replying. And while the performance gains are substantial, that's pretty darn far away from an actual working in-game concept. The fly-through is slow, the landscape uniform, etc. I realize the test was to benchmark where they'd get the best trade-off and so on, but it's so static that I don't think you can really apply it to gaming just yet. It's more a proof of concept than anything that actually impresses me.

For it to work with a game, you'd have to have eye tracking (what if I want to look at some enemy that shows up in the corner?), a quick-as-hell change of the high-resolution focal point, and so on. Somewhere down the line it may be feasible, but I'm not particularly convinced it's something we'll see in the near future.
 

Zaptruder

Banned
I watched it before replying. And while the performance gains are substantial, that's pretty darn far away from an actual working in-game concept. The fly-through is slow, the landscape uniform, etc. I realize the test was to benchmark where they'd get the best trade-off and so on, but it's so static that I don't think you can really apply it to gaming just yet. It's more a proof of concept than anything that actually impresses me.

For it to work with a game, you'd have to have eye tracking (what if I want to look at some enemy that shows up in the corner?), a quick-as-hell change of the high-resolution focal point, and so on. Somewhere down the line it may be feasible, but I'm not particularly convinced it's something we'll see in the near future.

I don't see why this isn't immediately applicable to game tech. It's a demonstration that's not dissimilar to FXAA: a general-purpose rendering solution that doesn't negatively affect performance and is widely applicable irrespective of art style.

The video shows its application with eye tracking. Literally using a sample group of people that have the freedom to look anywhere on the screen to test between foveated rendering and normal rendering.

The only limitation to its use is the lack of standard eye tracking tech in these platforms.
 

Fafalada

Fafracer forever
cyberheater said:
Yes. I've often thought about it but I struggle to think how a rendering engine could be constructed to take advantage of this by only rendering a sharp image at the center and then progressively reducing complexity and rendering cost towards the edges in a seamless and smooth way.
A 3-4 way resolution cascade with deferred shading, stenciling the focal-area falloff - easily a 90% reduction in shader complexity or so (with the 2% hi-res target), solving the number one concern everyone cites for high-performance VR on platforms that have a hardware cap.
To effectively scale that to geometry complexity and other items is a bit more work - but it's still just leveraging what have effectively been solved problems for decades. Sure, the obvious things won't get you that 98% reduction, but getting over 80% of the way there is low-hanging fruit - hinging on the notion that the tracking sensors can keep up with the high refresh rates, that is.
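Back-of-the-envelope version of that cascade, just to show where a figure in that ballpark comes from; the coverage fractions and per-layer scales below are my own illustrative numbers:
Code:
# Shaded-pixel cost of a 3-way resolution cascade relative to a full-res frame.
# Coverage fractions and per-layer scales are illustrative, not Fafalada's.
LAYERS = [
    (0.02, 1.0),     # ~2% of the frame around the gaze, shaded at full resolution
    (0.18, 0.25),    # next ring at half resolution per axis (1/4 the pixels)
    (0.80, 0.0625),  # remainder at quarter resolution per axis (1/16 the pixels)
]
cost = sum(coverage * scale for coverage, scale in LAYERS)
print("shading cost vs. full-res:", cost)   # ~0.115, i.e. roughly an 88% reduction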
 