1 | ||
Editor: DonovanBaarda
Time: 2016/03/18 15:22:48 GMT+11 |
||
Note: |
Real Focus
==========
The new generation of VR goggles is finally starting to get decent low-latency head tracking with a wide field of view. They can even compensate for lens distortions and "rainbow effects" (chromatic aberration) using shaders that adjust the image.
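For illustration, here is a rough Python sketch of how such a corrective pre-warp works: the shader samples the source image at radially scaled coordinates so the lens's own distortion cancels the warp out, with slightly different coefficients per color channel to undo the chromatic fringing. The coefficients below are invented for the example, not taken from any real headset::

    def prewarp_uv(u, v, k1, k2):
        """Barrel-distortion pre-warp for one color channel.

        u, v are texture coordinates centered on the lens axis (-1..1).
        The shader samples the source image at the scaled coordinates
        so that the lens's own distortion cancels the warp back out.
        """
        r2 = u * u + v * v
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        return u * scale, v * scale

    # Slightly different coefficients per color channel undo the
    # chromatic "rainbow" fringing. Values are invented for illustration.
    channels = {"red": (0.22, 0.24), "green": (0.24, 0.26), "blue": (0.26, 0.28)}
    for name, (k1, k2) in channels.items():
        print(name, prewarp_uv(0.5, 0.5, k1, k2))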
The last remaining visual depth cue they don't have yet is focus. The lenses are designed to give the image a fixed focus at a reasonable distance. Most real flight simulators use a `collimated display <http://www.diy-cockpits.org/coll/>`_, which gives focus at infinity. This makes everything on the screen appear to be far away, which is what you want for flight-simulator terrain.
The human eye `adjusts focus <https://en.wikipedia.org/wiki/Accommodation_(eye)>`_ in response to an object's distance to get a clear image. Close objects require more focus adjustment, but beyond a certain distance the setting becomes effectively infinity. This focusing is both a reaction to the distance required for a clear image, and a depth cue used to estimate distance.
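The optics behind this are simple: accommodation demand in diopters is just the reciprocal of the distance in meters, which is why everything beyond a few meters is effectively at infinity::

    def accommodation_demand(distance_m):
        """Accommodation demand in diopters for an object distance_m meters away.

        Diopters are 1/meters, so demand falls off quickly: refocusing
        from 0.25 m to 0.5 m costs 2 D, while everything beyond a few
        meters sits within a fraction of a diopter of optical infinity.
        """
        return 1.0 / distance_m

    for d in (0.25, 0.5, 1.0, 2.0, 6.0, 100.0):
        print(f"{d:6.2f} m -> {accommodation_demand(d):5.2f} D")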
Using a VR display with a fixed focal distance means the focus never changes, regardless of how far away an object is meant to be. You have to learn to stop focusing in response to distance to see a clear image, and to stop using focal distance as a depth cue. The focal distance contradicting the other depth cues probably contributes to nausea and eye strain, and extended use of these VR displays risks making the user's eyes "lazy" about focusing, affecting their normal vision.
The focus effect also changes with the dilation of the pupil. The smaller the pupil, the more pinhole-like it is, which increases the depth of field and requires less focusing. The pupil mostly adjusts with brightness, but it can also adjust to improve focus.
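A small sketch of the geometry, using the standard small-angle approximation that the blur angle of a defocused point equals the pupil diameter times the defocus error in diopters::

    def angular_blur(pupil_mm, object_m, focus_m):
        """Angular blur (radians) of a point at object_m seen by an eye
        focused at focus_m: pupil diameter (m) * defocus error (diopters),
        the standard small-angle geometric approximation.
        """
        defocus = abs(1.0 / object_m - 1.0 / focus_m)  # diopters
        return (pupil_mm / 1000.0) * defocus

    # A 2 mm pupil (bright light) tolerates 3x more defocus than a 6 mm
    # pupil (dim light) before the blur is noticeable: the pinhole effect.
    for pupil_mm in (2.0, 6.0):
        blur = angular_blur(pupil_mm, object_m=1.0, focus_m=2.0)
        print(f"{pupil_mm:.0f} mm pupil -> {blur * 1000:.2f} mrad blur")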
However, it is possible to implement realistic focusing, with varying degrees of sophistication. There are three parts to the problem:

1. Identifying what part of the image/scene the user is looking at, to figure out what distance they are trying to focus on.
2. Adjusting the focal distance so the eye has to actively focus to see the image and gets depth cues.
3. Adjusting the focus of the image to emulate the effects of focusing.
The simplest solution to the first part is to assume the user is focusing on the middle of the image. A fancier solution can use the eye-tracking technology already found in camera viewfinders, which cameras use to identify what part of the image to focus on. Eye tracking requires calibration for the particular user to work best, and its accuracy is fairly rough, but it's better than nothing.
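As a sketch of what that looks like in practice, assuming the renderer can provide a per-pixel depth buffer: take the gaze point (or fall back to the image center) and use a median over a small window to ride out eye-tracker noise::

    import numpy as np

    def focus_distance(depth_buffer, gaze_xy, window=9):
        """Estimate the distance the user is trying to focus on.

        depth_buffer: 2D array of per-pixel scene distances in meters
                      (from the renderer's depth pass).
        gaze_xy:      (x, y) pixel from the eye tracker, or the image
                      center when no tracker is available.
        Eye-tracker output is noisy, so take the median of a small
        window rather than trusting a single pixel.
        """
        x, y = gaze_xy
        h, w = depth_buffer.shape
        half = window // 2
        patch = depth_buffer[max(0, y - half):min(h, y + half + 1),
                             max(0, x - half):min(w, x + half + 1)]
        return float(np.median(patch))

    depth = np.full((1080, 1920), 50.0)  # fake 50 m flight-sim terrain
    depth[500:580, 900:1020] = 2.0       # a nearby cockpit instrument
    print(focus_distance(depth, (960, 540)))  # gaze on the instrument -> 2.0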
The second part requires an active motor adjusting the focus lens, similar to the autofocus mechanism in cameras.
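The control loop could look something like the sketch below, where ``set_lens_diopters`` is a hypothetical motor-driver call (not an API from any real headset), stepping the lens smoothly towards the target focal distance the way a camera autofocus hunts::

    def focus_step(current_diopters, target_diopters, max_step=0.25):
        """One tick of a simple control loop driving the focus lens.

        Lens position is expressed in diopters of image distance.
        Capping the move at max_step per tick keeps the adjustment
        smooth, much like a camera autofocus hunting to its target.
        """
        error = target_diopters - current_diopters
        step = max(-max_step, min(max_step, error))
        return current_diopters + step

    position = 0.0          # lens currently at infinity (0 D)
    target = 1.0 / 2.0      # user looking at something 2 m away
    for _ in range(4):
        position = focus_step(position, target)
        print(f"lens at {position:.3f} D")  # would call set_lens_diopters(position)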
The third part is already being done in games using depth-of-field shaders. These currently assume the center of the image is the focus point, but could be made smarter by using eye-tracking input. Doing it properly would also take brightness into account to adjust the depth of field.
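Below is a minimal sketch of the per-pixel blur calculation such a shader needs, in Python rather than a shading language. The ``aperture`` knob is where brightness would feed in, since a bright scene contracts the pupil and deepens the depth of field::

    import numpy as np

    def circle_of_confusion(depth, focus_m, aperture=0.5):
        """Per-pixel blur radius for a depth-of-field pass.

        depth:    2D array of scene distances in meters.
        focus_m:  focus distance, from eye tracking or the image center.
        aperture: effect strength; deriving it from scene brightness
                  would model the pupil contracting in bright light.
        Blur scales with the defocus error in diopters, matching the
        eye model above; a shader uses this radius to drive the blur.
        """
        defocus = np.abs(1.0 / depth - 1.0 / focus_m)
        return aperture * defocus

    depth = np.array([[0.5, 2.0, 50.0]])
    print(circle_of_confusion(depth, focus_m=2.0))  # in-focus pixel gets 0 blur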