Oculus Go Development

On 6/23/20 Oculus announced plans to sunset Oculus Go. Information about dates and alternatives can be found in the Oculus Go introduction.

Oculus Quest Development

All Oculus Quest developers must pass concept review before gaining publishing access to the Quest Store and additional resources. Submit a concept document for review as early in your Quest application's development cycle as possible. For additional information and context, see Submitting Your App to the Oculus Quest Store.


The rendering section introduces techniques you should apply, and pitfalls you should avoid, when rendering your scene.

Use text in UI and scene elements that can be easily read. There are several ways to ensure text legibility in VR. For rendering purposes, we recommend using a signed distance field font in your app; this ensures smooth rendering of the font even when zoomed or shrunk. You should also consider the languages your app supports. The complexity of letter combinations may influence legibility; for instance, your app may want to use a font that supports East Asian languages well. Localization may also affect text layout, as some languages employ more letters than others for the same copy. Font size and placement in your scene are important as well. For Gear VR, choosing a font size larger than 30-pt will generally give you minimal legibility at the fixed z-depth of 4.5m (in Unity), and larger than 48-pt will generally ensure a comfortable reading experience. For Rift, a font size larger than 25-pt gives minimal legibility at the fixed z-depth of 4.5m (in Unity), and larger than 42-pt generally ensures a comfortable reading experience.
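As a sanity check on these size guidelines, you can estimate the visual angle a glyph subtends at a given z-depth. The sketch below is illustrative only; the conversion from font points to world-space meters depends on your engine's canvas scaling, so `glyph_height_m` is an assumed input rather than something derived from a point size.

```python
import math

def visual_angle_deg(glyph_height_m: float, distance_m: float) -> float:
    """Visual angle (in degrees) subtended by a glyph of the given
    world-space height, viewed from the given distance."""
    return math.degrees(2.0 * math.atan(glyph_height_m / (2.0 * distance_m)))
```

Larger angles are easier to read; comparing the angle your glyph height produces at 4.5m against text you have already verified in-headset is a practical way to carry these numbers across platforms.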

Flicker plays a significant role in the oculomotor component of simulator sickness, and is generally perceived as a rapid “pulsing” of lightness and darkness on part or all of a screen. The degree to which a user will perceive flicker is a function of several factors, including: the rate at which the display cycles between “on” and “off” modes, the amount of light emitted during the “on” phase, which parts of the retina are stimulated and how extensively, and even the time of day and the individual's fatigue level. Although flicker can become less consciously noticeable over time, it can still lead to headaches and eyestrain. Some people are extremely sensitive to flicker and experience eyestrain, fatigue, or headaches as a result; others never notice it or suffer any adverse symptoms. Still, certain factors can increase or decrease the likelihood that any given person will perceive display flicker.

  • First, people are more sensitive to flicker in the periphery than in the center of vision.
  • Second, brighter screen images produce more flicker. Bright imagery, particularly in the periphery (e.g., standing in a bright, white room), can create noticeable display flicker. Use darker colors whenever possible, particularly for areas outside the center of the player’s viewpoint.
  • Third, the higher the display’s refresh rate, the less perceptible flicker is.

Do not create purposely flickering content. High-contrast, flashing (or rapidly alternating) stimuli can trigger photosensitive seizures in some people. Relatedly, high-spatial-frequency textures (such as fine black-and-white stripes) can also trigger photosensitive seizures. The International Organization for Standardization has published ISO 9241-391:2016, a standard for image content intended to reduce the risk of photosensitive seizures. The standard addresses potentially harmful flashes and patterns. You must ensure that your content conforms to standards and best practices on image safety.

Use parallax mapping instead of normal mapping. Normal mapping provides realistic lighting cues to convey depth and texture without adding to the vertex detail of a given 3D model. Although widely used in modern games, it is much less compelling when viewed in stereoscopic 3D. Because normal mapping does not account for binocular disparity or motion parallax, it produces an image akin to a flat texture painted onto the object model. Parallax mapping builds on the idea of normal mapping, but accounts for depth cues that normal mapping does not. Parallax mapping shifts the texture coordinates of the sampled surface texture by using an additional height map provided by the content creator. The texture coordinate shift is applied using the per-pixel or per-vertex view direction calculated at the shader level. Parallax mapping is best used on surfaces with fine detail that would not affect the collision surface, such as brick walls or cobblestone pathways.
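The coordinate shift at the heart of parallax mapping can be sketched outside a shader. This is a simplified, hypothetical CPU-side version of the standard offset formula (texture coordinates shifted along the tangent-space view direction, scaled by the sampled height); real implementations run per-pixel in the fragment shader, and production techniques often add refinements such as steep parallax or parallax occlusion mapping.

```python
def parallax_offset(uv, view_dir_ts, height, scale=0.04):
    """Shift texture coordinates (u, v) along the tangent-space view
    direction, proportional to the height sampled from the height map.

    view_dir_ts: normalized view direction in tangent space (x, y, z).
    height:      value sampled from the height map, in [0, 1].
    scale:       artist-tuned depth scale (value assumed here; tune per asset).
    """
    u, v = uv
    vx, vy, vz = view_dir_ts
    u -= (vx / vz) * height * scale
    v -= (vy / vz) * height * scale
    return (u, v)
```

Viewed head-on (view direction (0, 0, 1)), the offset vanishes; it grows at grazing angles, which is what produces the motion-parallax cue.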

Apply the appropriate distortion correction for the platform you’re developing for. Lenses in VR headsets distort the rendered image; this distortion is corrected by the post-processing steps in the SDKs. It is extremely important that this distortion be done correctly and according to the SDK guidelines. Incorrect distortion can “look” fairly correct, but still feel disorienting and uncomfortable, so attention to the details is critical. All of the distortion correction values must match the physical device; none of them may be user-adjustable.
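For intuition only, lens distortion correction is commonly modeled as a radial polynomial applied to normalized screen coordinates. The sketch below is a generic Brown–Conrady-style radial term with hypothetical coefficients, not the Oculus SDK's actual correction; in practice you should rely entirely on the SDK's post-processing, which carries coefficients matched to the physical device.

```python
def radial_correct(x, y, k1, k2):
    """Apply a radial polynomial to normalized screen coordinates (x, y).

    k1, k2 are lens-specific coefficients (hypothetical here); in a real
    headset the SDK supplies values matched to the physical optics.
    """
    r2 = x * x + y * y
    f = 1.0 + k1 * r2 + k2 * r2 * r2
    return (x * f, y * f)
```

Note that the center of the image (r = 0) is unmoved while points farther from center are displaced more, which is why even a slightly wrong coefficient feels uncomfortable in the periphery.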

Latency and Lag

We’ll spend some time discussing the effects latency and lag have on users in VR. We don’t have specific recommendations for fixing these issues as they can have numerous causes. Please review the Mobile Testing and Troubleshooting and the Rift Optimizing Your Application guides for information about optimizing your game loop.

Although developers have no control over many aspects of system latency (such as display update rate and hardware latencies), it is important to make sure your VR experience does not lag or drop frames. Many games slow down as more numerous or more complex elements are processed and rendered to the screen. While this is a minor annoyance in traditional video games, it can be extremely uncomfortable for users in VR.
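A simple way to reason about frame drops is in terms of the per-frame budget implied by the display's refresh rate. The helper below is a trivial illustration (the function names and sample frame times are invented, not part of any SDK) showing how measured frame times compare against that budget.

```python
def frame_budget_ms(refresh_hz: float) -> float:
    """Time available to render one frame at the given refresh rate."""
    return 1000.0 / refresh_hz

def dropped_frames(frame_times_ms, refresh_hz):
    """Frame times that exceeded the budget and would miss vsync."""
    budget = frame_budget_ms(refresh_hz)
    return [t for t in frame_times_ms if t > budget]
```

At 72 Hz the budget is roughly 13.9 ms; any frame that takes longer misses vsync, and the resulting judder is far more uncomfortable in VR than in a flat-screen game.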

We define latency as the total time between movement of the user’s head and the updated image being displayed on the screen (motion-to-photon), and it includes the times for sensor response, fusion, rendering, image transmission, and display response.

Past research findings on the effects of latency are somewhat mixed. Many experts recommend minimizing latency to reduce discomfort because lag between head movements and corresponding updates on the display can lead to sensory conflicts and errors in the vestibular-ocular reflex. We therefore encourage minimizing latency as much as possible.

It is worth noting that some research with head-mounted displays suggests a fixed latency creates about the same degree of discomfort whether it’s as short as 48 ms or as long as 300 ms; however, variable and unpredictable latencies in cockpit and driving simulators create more discomfort the longer they become on average. This suggests that people can eventually get used to a consistent and predictable bit of lag, but fluctuating, unpredictable lags are increasingly discomforting the longer, on average, they become.

We believe the threshold for compelling VR to be at or below 20ms of latency. Above this threshold, users report feeling less immersed and comfortable in the environment. When latency exceeds 60ms, the disjunction between one’s head motions and the motions of the virtual world starts to feel out of sync, causing discomfort and disorientation. Large latencies are believed to be one of the primary causes of discomfort. Independent of comfort issues, latency can also disrupt user interactions and presence. In an ideal world, the closer you are to 0ms the better. If latency is unavoidable, it is more uncomfortable the more variable it is. Your goal should be the lowest and least variable latency possible.
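The guidance above (low mean, low variability) can be summarized with a hypothetical helper that reports the mean and jitter of measured motion-to-photon samples and compares the mean against the 20ms threshold described here. The function name and structure are invented for illustration; actually measuring motion-to-photon latency requires dedicated hardware or SDK tooling.

```python
import statistics

def latency_report(samples_ms):
    """Return (mean, jitter, comfortable) for motion-to-photon samples.

    jitter is the population standard deviation; "comfortable" applies
    the 20 ms threshold from the guideline above to the mean.
    """
    mean = statistics.fmean(samples_ms)
    jitter = statistics.pstdev(samples_ms)
    return mean, jitter, mean <= 20.0
```

Tracking jitter alongside the mean matters because, per the research discussed above, a steady 19 ms is far less troubling than latency that swings unpredictably around the same average.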