Oculus is combating one of the key causes of VR sickness and visual strain, the lack of retinal blur, by implementing focus and depth-of-field corrections on both the software and hardware sides of its technology. According to a research paper published by a trio of Oculus researchers, the problem stems from the eyepiece projecting its image at a single, fixed distance from the user's eyes. The researchers tested a prototype headset that aims to remedy this with a number of hardware tricks, plus software algorithms that dynamically change focus and lighting based on a number of factors. They found that by dynamically refocusing a VR scene, using hardware that can adapt readily to what the user is viewing, they could greatly reduce eye strain and dizziness.
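The idea of refocusing the display to match what the user is looking at can be sketched in a few lines. This is an illustrative toy, not the paper's method: the function names, the depth clamp, and the supported range are all assumptions.

```python
# Hypothetical sketch of dynamic refocusing: convert the depth of the
# gazed-at object into a focal setting in diopters (1/meters), the unit
# used to measure focus. Not taken from the Oculus paper.

def to_diopters(distance_m: float) -> float:
    """Optical power corresponding to a focus distance, in diopters."""
    return 1.0 / distance_m

def choose_focus(gaze_depth_m: float,
                 min_depth_m: float = 0.1,
                 max_depth_m: float = 10.0) -> float:
    """Clamp the gazed-at depth to an assumed supported range and
    return the focal setting in diopters."""
    depth = min(max(gaze_depth_m, min_depth_m), max_depth_m)
    return to_diopters(depth)

# A user looks at an object 2 m away, so the eyepiece should present
# the image at 0.5 D rather than at one fixed distance for everything.
print(choose_focus(2.0))  # 0.5
```

A fixed-focus headset effectively hard-codes the return value of `choose_focus`, which is why every object in the scene, near or far, lands at the same optical distance.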
At a glance, the prototype rig looks much like an off-the-shelf Oculus Rift, but inside it's an entirely different beast. The unit contains a beam splitter and a polarizer, among other elements, which in essence change how the display's image is sent to the eyepiece. To drive this special hardware, the researchers tried several software configurations with varying numbers of adaptive planes and focal surfaces, and compared the results against a single fixed plane, as found in current headsets, as well as multiple fixed planes. They tested with fairly realistic VR scenes, allowing easy comparison to real life and objective measurement of errors in color, depth perception, and focus, the last measured in diopters. They found that a configuration with four focal surfaces produced the truest focus out of the box; layering adaptive technologies and software optimization on top of those focal surfaces produced a high-resolution, true-to-life VR scene.
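Why more focal planes help can be seen with a rough back-of-the-envelope comparison: if each scene depth is shown on whatever plane is optically closest, the worst-case focus error, in diopters, shrinks as planes are added. The plane placements and scene depths below are assumptions for illustration, not values from the paper.

```python
# Illustrative only: worst-case dioptric focus error when scene content
# is assigned to the nearest of N fixed focal planes. Plane positions
# are assumed, not taken from the Oculus researchers' setup.

def worst_case_error(scene_diopters, plane_diopters):
    """Largest |scene focus - nearest plane| across the sampled depths."""
    return max(min(abs(s - p) for p in plane_diopters)
               for s in scene_diopters)

# Sample scene content from 0.25 D (4 m away) up to 4 D (25 cm away).
scene = [0.25 * i for i in range(1, 17)]

single_plane = [1.3]                 # one fixed eyepiece distance (~0.77 m)
four_planes = [0.5, 1.5, 2.5, 3.5]   # four planes, evenly spaced in diopters

print(worst_case_error(scene, single_plane))  # large error for near objects
print(worst_case_error(scene, four_planes))   # much smaller everywhere
```

With one plane, an object held at arm's length can be several diopters out of focus; with four, no sampled depth is more than half a diopter from a plane, which is the kind of gap the adaptive tweaks described above would then work to close further.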
The heavy optimization and specialized hardware this solution requires make it difficult to apply Oculus's new techniques to current VR systems. The research team is working on an in-place optimization framework that would apply the tweaks dynamically to almost any piece of VR software, but it will take time to develop, and it won't do much good without hardware that can take advantage of it. The paper lays out only the concepts and the work the team has done; it mentions no time frame for commercialization.