Tracking

Overview

  • The Rift sensors collect information about user yaw, pitch, and roll.
  • An included infrared camera adds 6-degree-of-freedom (6DoF) position tracking to the Rift.
    • Allow users to set the origin point based on a position that is comfortable for them, and provide guidance for initially positioning themselves in front of the camera.
    • Do not disable or modify position tracking, especially while the user is moving in the real world.
    • Warn the user if they are about to leave the camera tracking volume; fade the screen to black before tracking is lost.
  • Implement the “head model” code available in our SDK demos whenever position tracking is unavailable.
  • Optimize your entire engine pipeline to minimize lag and latency.
  • Implement Oculus VR’s predictive tracking code (available in the SDK demos) to further reduce latency.
  • If latency is truly unavoidable, a variable lag is worse than a consistent one.

Orientation Tracking

The Oculus Rift headset contains a gyroscope, accelerometer, and magnetometer. We combine the information from these sensors through a process known as sensor fusion to determine the orientation of the user’s head in the real world and to synchronize the user’s virtual perspective in real time. These sensors provide data to accurately track and portray yaw, pitch, and roll movements.
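
The SDK performs this sensor fusion for you, but the basic idea can be illustrated with a minimal complementary filter that blends integrated gyroscope rates with an accelerometer-based gravity estimate. The sketch below is not SDK code; the axis conventions, blend factor, and update rate are illustrative assumptions.

    // Minimal complementary-filter sketch: gyroscope rates are integrated each
    // step, then pitch and roll are nudged toward the accelerometer's gravity
    // estimate. Yaw would be corrected with the magnetometer in the same way
    // (omitted here). Axis conventions are assumed for illustration.
    #include <cmath>
    #include <cstdio>

    struct Euler { double yaw, pitch, roll; };          // radians

    Euler FuseStep(Euler e,
                   double gx, double gy, double gz,     // gyro rates (rad/s)
                   double ax, double ay, double az,     // accelerometer (m/s^2)
                   double dt, double alpha = 0.98)      // alpha: trust in the gyro
    {
        // 1. Integrate angular velocity (responsive, but drifts over time).
        e.yaw   += gz * dt;
        e.pitch += gx * dt;
        e.roll  += gy * dt;

        // 2. Estimate pitch/roll from the gravity direction (noisy, drift-free).
        double accelPitch = std::atan2(ay, std::sqrt(ax * ax + az * az));
        double accelRoll  = std::atan2(-ax, az);

        // 3. Blend: mostly gyro for responsiveness, a little accel to cancel drift.
        e.pitch = alpha * e.pitch + (1.0 - alpha) * accelPitch;
        e.roll  = alpha * e.roll  + (1.0 - alpha) * accelRoll;
        return e;
    }

    int main()
    {
        Euler pose{0, 0, 0};
        // Simulate one second of a slow head turn at 1000 Hz sensor updates.
        for (int i = 0; i < 1000; ++i)
            pose = FuseStep(pose, 0.0, 0.0, 0.5, 0.0, 0.0, 9.81, 0.001);
        std::printf("yaw after 1 s: %.2f rad\n", pose.yaw);
    }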

We have found a very simple model of the user’s head and neck to be useful in accurately translating sensor information from head movements into camera movements. We refer to this in short as the head model; it reflects the fact that rotating the head about any of the three axes actually pivots it around a point roughly at the base of the neck, near the voice box. This means that a rotation of the head also produces a translation of the eyes, creating motion parallax, a powerful cue for both depth perception and comfort.
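
To see why a pure head rotation produces an eye translation, the sketch below applies a head orientation to a fixed pivot-to-eye offset. The offset values and axis conventions are placeholders for illustration, not calibrated SDK defaults.

    // Head-and-neck model sketch: the eyes sit at a fixed offset from a pivot
    // near the base of the neck, so rotating the head also translates the eye
    // point, producing motion parallax. Offset values are placeholders.
    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };
    struct Quat { double w, x, y, z; };                  // unit quaternion

    Vec3 Rotate(const Quat& q, const Vec3& v)
    {
        // v' = 2(u.v)u + (s^2 - u.u)v + 2s(u x v), with q = (s, u).
        Vec3 u{q.x, q.y, q.z};
        double s = q.w;
        double uv = u.x * v.x + u.y * v.y + u.z * v.z;
        double uu = u.x * u.x + u.y * u.y + u.z * u.z;
        Vec3 c{u.y * v.z - u.z * v.y, u.z * v.x - u.x * v.z, u.x * v.y - u.y * v.x};
        return { 2 * uv * u.x + (s * s - uu) * v.x + 2 * s * c.x,
                 2 * uv * u.y + (s * s - uu) * v.y + 2 * s * c.y,
                 2 * uv * u.z + (s * s - uu) * v.z + 2 * s * c.z };
    }

    int main()
    {
        const double kPi = 3.14159265358979323846;
        // Placeholder pivot-to-center-eye offset: 7.5 cm up, 7.5 cm forward
        // (x right, y up, -z forward).
        Vec3 neckToEye{0.0, 0.075, -0.075};

        // Pitch the head down 30 degrees about the x axis; the pivot stays put.
        double half = (-30.0 * kPi / 180.0) / 2.0;
        Quat pitchDown{std::cos(half), std::sin(half), 0.0, 0.0};

        Vec3 eye = Rotate(pitchDown, neckToEye);
        std::printf("eye offset before: (%.3f, %.3f, %.3f)  after: (%.3f, %.3f, %.3f)\n",
                    neckToEye.x, neckToEye.y, neckToEye.z, eye.x, eye.y, eye.z);
    }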

Position Tracking

The Rift features 6-degree-of-freedom (6DoF) position tracking. Underneath the Rift's fabric cover is an array of infrared micro-LEDs, which are tracked in real space by the included infrared camera. Positional tracking should always correspond 1:1 with the user’s movements as long as they are inside the tracking camera’s volume. Augmenting the response of position tracking to the player’s movements can be discomforting.

The SDK reports a rough model of the user’s head in space based on a set of points and vectors. The model is defined around an origin point, which should be centered approximately at the pivot point of the user’s head and neck when they are sitting up in a comfortable position in front of the camera.

You should give users the ability to reset the head model’s origin point based on where they are sitting and how their Rift is set up. Users may also shift or move during gameplay and therefore should be able to reset the origin at any time. However, your content should also guide users in positioning themselves in front of the camera so that they can move freely during your experience without leaving the tracking volume. Otherwise, users might unknowingly set the origin to a point on the edge of the camera’s tracking range, causing them to lose position tracking when they move. This origin-reset function can take the form of a setup or calibration utility separate from gameplay.
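
One way to expose such a reset control is to capture the tracked head position at the moment the user recenters and subtract it from subsequent samples. The class and names below are hypothetical and for illustration only; they are not SDK calls.

    // Hypothetical origin-reset helper: when the user asks to recenter, store
    // the current tracked head position and report later samples relative to it.
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    class TrackingOrigin {
    public:
        // Call when the user presses "reset origin" while seated comfortably.
        void Recenter(const Vec3& currentHeadPosition) { origin_ = currentHeadPosition; }

        // Apply to every subsequent raw tracking sample before driving the camera.
        Vec3 ToAppSpace(const Vec3& raw) const
        {
            return {raw.x - origin_.x, raw.y - origin_.y, raw.z - origin_.z};
        }

    private:
        Vec3 origin_{0.0, 0.0, 0.0};
    };

    int main()
    {
        TrackingOrigin origin;
        Vec3 seatedPose{0.10, 1.20, -0.50};          // raw camera-space sample (meters)
        origin.Recenter(seatedPose);                 // user chose this as "center"

        Vec3 later{0.15, 1.22, -0.45};               // the user leaned slightly
        Vec3 rel = origin.ToAppSpace(later);
        std::printf("head offset from origin: (%.2f, %.2f, %.2f) m\n", rel.x, rel.y, rel.z);
    }

A complete reset would typically also re-zero yaw; orientation handling is omitted here for brevity.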

The head model is primarily composed of three vectors. One vector roughly maps onto the user’s neck, which begins at the origin of the position tracking space and points to the “center eye,” a point roughly at the user’s nose bridge. Two vectors originate from the center eye, one pointing to the pupil of the left eye, the other to the right. More detailed documentation on user position data can be found in the SDK.
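
As a minimal sketch of how those three vectors combine into per-eye camera positions (the numbers, including the assumed 64 mm IPD, are placeholders; the SDK supplies the real values):

    // Combine the head-model vectors: origin -> center eye, then center eye ->
    // each pupil. In practice all three vectors are first rotated by the tracked
    // head orientation; that step is omitted here to keep the sketch short.
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    Vec3 Add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }

    int main()
    {
        Vec3 origin{0.0, 0.0, 0.0};                 // position-tracking origin
        Vec3 neckToCenterEye{0.0, 0.12, -0.08};     // "neck" vector (placeholder)
        Vec3 centerToLeftEye{-0.032, 0.0, 0.0};     // half of an assumed 64 mm IPD
        Vec3 centerToRightEye{0.032, 0.0, 0.0};

        Vec3 centerEye = Add(origin, neckToCenterEye);
        Vec3 leftEye   = Add(centerEye, centerToLeftEye);
        Vec3 rightEye  = Add(centerEye, centerToRightEye);

        std::printf("left eye:  (%.3f, %.3f, %.3f)\n", leftEye.x, leftEye.y, leftEye.z);
        std::printf("right eye: (%.3f, %.3f, %.3f)\n", rightEye.x, rightEye.y, rightEye.z);
    }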

Room scale opens new possibilities for more comfortable, immersive experiences and gameplay elements. Players can lean in to examine a cockpit console, peer around corners with a subtle shift of the body, dodge projectiles by ducking out of their way, and much more.

Although room scale holds a great deal of potential, it also introduces new challenges. First, users can leave the viewing area of the tracking camera and lose position tracking, which can be a very jarring experience. To maintain a consistent, uninterrupted experience, you should warn users (as the Oculus Guardian system does for play-area boundaries) as they approach the edges of the camera’s tracking volume, before position tracking is lost. They should also receive some form of feedback that helps them better position themselves in front of the camera for tracking.

We recommend fading the scene to black before tracking is lost, which is a much less disorienting and discomforting sight than seeing the environment without position tracking while moving. The SDK defaults to using orientation tracking and the head model when position tracking is lost.
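
The sketch below drives a warning and a fade from the head’s distance to the edge of a simplified, box-shaped tracking volume. The dimensions and thresholds are invented for illustration, and the camera’s real tracking volume is a frustum rather than a box.

    // Fade the scene toward black as the head approaches the edge of a simplified
    // box-shaped tracking volume. Dimensions and thresholds are illustrative only.
    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { double x, y, z; };

    // Distance (meters) from the head to the nearest face of an axis-aligned box
    // centered on the tracking volume; negative means the head is outside.
    double DistanceToVolumeEdge(const Vec3& head, const Vec3& halfExtents)
    {
        double dx = halfExtents.x - std::fabs(head.x);
        double dy = halfExtents.y - std::fabs(head.y);
        double dz = halfExtents.z - std::fabs(head.z);
        return std::min({dx, dy, dz});
    }

    // Returns 0 (fully visible) .. 1 (fully black). Fading begins 20 cm from the
    // edge and completes at the edge, so the screen is dark before tracking drops.
    double FadeAmount(double distanceToEdge, double fadeStart = 0.20)
    {
        return std::clamp(1.0 - distanceToEdge / fadeStart, 0.0, 1.0);
    }

    int main()
    {
        Vec3 halfExtents{0.5, 0.4, 0.6};                       // pretend volume
        Vec3 positions[] = {{0.0, 0.0, 0.0}, {0.38, 0.0, 0.0}, {0.49, 0.0, 0.0}};
        for (const Vec3& p : positions)
            std::printf("head x=%.2f -> fade %.2f\n", p.x,
                        FadeAmount(DistanceToVolumeEdge(p, halfExtents)));
    }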

The second challenge introduced by position tracking is that users can now move the virtual camera into unusual positions that might have been previously impossible. For instance, users can move the camera to look under objects or around barriers to see parts of the environment that would be hidden from them in a conventional video game. On the one hand, this opens up new methods of interaction, like physically moving to peer around cover or examine objects in the environment. On the other hand, users may be able to uncover technical shortcuts you might have taken in designing the environment that would normally be hidden without position tracking. Take care to ensure that art and assets do not break the user’s sense of immersion in the virtual environment.

A related issue is that the user can potentially use position tracking to clip through the virtual environment by leaning through a wall or object. One approach is to design your environment so that it is impossible for the user to clip through an object while still inside the camera’s tracking volume. Following the recommendations above, the scene would fade to black before the user could clip through anything. Similar to preventing users from approaching objects closer than the optical comfort zone of 0.75-3.5 meters, however, this can make the viewer feel distanced from everything, as if surrounded by an invisible barrier. Experimentation and testing will be necessary to find an ideal solution that balances usability and comfort.
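
For the environment-design approach described above, a rough design-time check is to verify that no collision surface can be reached while the head is still inside the tracking volume. The sketch below reduces the geometry to wall planes and uses invented dimensions purely for illustration.

    // Design-time sanity check: confirm that no wall plane can be reached while
    // the head remains inside an axis-aligned tracking box centered at the origin.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Plane { double nx, ny, nz, d; };   // points p with n.p + d = 0, unit n

    // Closest approach between the tracking box (given half-extents) and a plane.
    double BoxToPlaneDistance(const Plane& w, double hx, double hy, double hz)
    {
        double centerDist = std::fabs(w.d);   // box center is the origin
        double reach = std::fabs(w.nx) * hx + std::fabs(w.ny) * hy + std::fabs(w.nz) * hz;
        return centerDist - reach;            // negative: the plane cuts the box
    }

    int main()
    {
        // Tracking-volume half-extents (meters) and nearby walls, both invented.
        double hx = 0.5, hy = 0.4, hz = 0.6;
        std::vector<Plane> walls = {{0, 0, 1, 0.9}, {1, 0, 0, 0.45}};
        double headRadius = 0.10;

        for (size_t i = 0; i < walls.size(); ++i) {
            double gap = BoxToPlaneDistance(walls[i], hx, hy, hz) - headRadius;
            std::printf("wall %zu: %s (clearance %.2f m)\n", i,
                        gap > 0 ? "cannot be clipped from inside the volume" : "too close",
                        gap);
        }
    }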

Although we encourage developers to explore innovative new solutions to these challenges of position tracking, we discourage any method that takes away position tracking from the user or otherwise changes its behavior while the virtual environment is in view. Seeing the virtual environment stop responding (or responding differently) to position tracking, particularly while moving in the real world, can be discomforting to the user. Any method for combating these issues should provide the user with adequate feedback for what is happening and how to resume normal interaction.

Latency

We define latency as the total time between movement of the user’s head and the updated image being displayed on the screen (“motion-to-photon”), and it includes the times for sensor response, fusion, rendering, image transmission, and display response.
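
As a worked example of that definition, motion-to-photon latency is simply the sum of the stage times listed above. The numbers below are invented for illustration; real values depend on the sensors, engine, GPU, and display.

    // Illustrative motion-to-photon budget: the total is the sum of the stage
    // times in the definition above. All numbers are made up for the example.
    #include <cstdio>

    int main()
    {
        struct Stage { const char* name; double ms; };
        Stage stages[] = {
            {"sensor response",    1.0},
            {"sensor fusion",      0.5},
            {"rendering",         13.3},   // one frame at an assumed 75 Hz refresh
            {"image transmission", 2.0},
            {"display response",   4.0},
        };

        double total = 0.0;
        for (const Stage& s : stages) {
            std::printf("%-18s %5.1f ms\n", s.name, s.ms);
            total += s.ms;
        }
        std::printf("%-18s %5.1f ms\n", "motion-to-photon", total);
    }

With these invented numbers the total already exceeds the 20 ms target discussed below, which is part of why pipeline optimization and predictive tracking matter.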

Minimizing latency is crucial to immersive and comfortable VR. One of the features of the Rift is its low latency head tracking technology. The more you can minimize motion-to-photon latency in your game, the more immersive and comfortable the experience will be for the user.

One approach to combating the effects of latency is our predictive tracking technology. Although it does not actually reduce the length of the motion-to-photon pipeline, it uses information currently in the pipeline to predict where the user will be looking in the future. This compensates for the delay associated with the process of reading the sensors and then rendering to the screen by anticipating where the user will be looking at the time of rendering and drawing that part of the environment to the screen instead of where the user was looking at the time of sensor reading. We encourage developers to implement the predictive tracking code provided in the SDK. For details on how this works, see Steve LaValle’s The Latent Power of Prediction blog post as well as the relevant SDK documentation.
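
The sketch below illustrates the principle with the simplest possible predictor: extrapolate the current orientation forward by the measured angular velocity over the expected motion-to-photon delay. It is a stand-in for the SDK’s predictor, not a reimplementation of it; the 20 ms horizon and the quaternion conventions are assumptions.

    // Simplified predictor: extrapolate head orientation by the current angular
    // velocity over the expected motion-to-photon delay. The SDK's predictor is
    // more sophisticated; this only illustrates the principle.
    #include <cmath>
    #include <cstdio>

    struct Quat { double w, x, y, z; };

    Quat Multiply(const Quat& a, const Quat& b)
    {
        return { a.w * b.w - a.x * b.x - a.y * b.y - a.z * b.z,
                 a.w * b.x + a.x * b.w + a.y * b.z - a.z * b.y,
                 a.w * b.y - a.x * b.z + a.y * b.w + a.z * b.x,
                 a.w * b.z + a.x * b.y - a.y * b.x + a.z * b.w };
    }

    // Rotation that the angular velocity (rad/s, body axes) produces over dt.
    Quat FromAngularVelocity(double wx, double wy, double wz, double dt)
    {
        double angle = std::sqrt(wx * wx + wy * wy + wz * wz) * dt;
        if (angle < 1e-9) return {1, 0, 0, 0};
        double s = std::sin(angle / 2) / (angle / dt);   // sin(half) / |omega|
        return { std::cos(angle / 2), wx * s, wy * s, wz * s };
    }

    Quat Predict(const Quat& current, double wx, double wy, double wz, double latencySeconds)
    {
        return Multiply(current, FromAngularVelocity(wx, wy, wz, latencySeconds));
    }

    int main()
    {
        Quat now{1, 0, 0, 0};                       // looking straight ahead
        // Head turning at ~90 deg/s about the vertical axis; predict 20 ms ahead.
        Quat predicted = Predict(now, 0.0, 1.57, 0.0, 0.020);
        std::printf("predicted orientation: (%.4f, %.4f, %.4f, %.4f)\n",
                    predicted.w, predicted.x, predicted.y, predicted.z);
    }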

At Oculus we believe the threshold for compelling VR to be at or below 20 ms of latency. Above this range, users tend to feel less immersed and comfortable in the environment. When latency exceeds 60 ms, head movements and the virtual world’s response start to feel noticeably out of sync, causing discomfort and disorientation. Large latencies are believed to be one of the primary causes of simulator sickness.[1] Independent of comfort issues, latency can be disruptive to user interactions and presence. Obviously, in an ideal world, the closer we are to 0 ms, the better. If latency is unavoidable, it will be more uncomfortable the more variable it is. You should therefore aim for the lowest and least variable latency possible.
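
Because both the average and the variability of latency matter, it can help during development to log per-frame motion-to-photon samples and report their mean and standard deviation. The sample values below are invented.

    // Report mean and standard deviation of logged motion-to-photon samples so
    // both the average latency and its variability can be tracked over time.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    int main()
    {
        std::vector<double> latencyMs = {18.2, 19.1, 18.7, 24.5, 18.9, 19.0, 23.8, 18.6};

        double mean = 0.0;
        for (double v : latencyMs) mean += v;
        mean /= latencyMs.size();

        double variance = 0.0;
        for (double v : latencyMs) variance += (v - mean) * (v - mean);
        variance /= latencyMs.size();

        // A low mean with a high standard deviation still indicates occasional
        // long frames, which users experience as lag spikes.
        std::printf("mean latency: %.1f ms, std dev: %.1f ms\n", mean, std::sqrt(variance));
    }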

[1] Kolasinski, E.M. (1995). Simulator sickness in virtual environments (ARTI-TR-1027). Alexandria, VA: Army Research Institute for the Behavioral and Social Sciences. Retrieved from http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA295861