Rendering Techniques

Be mindful of the Rift screen’s resolution, particularly with fine detail. Make sure text is large and clear enough to read, and avoid thin objects and ornate textures in places where users will focus their attention.

Display Resolution

The current Rift has a 2160 x 1200 low-persistence OLED display with a 90 Hz refresh rate. This represents a leap forward in many respects from the original DK1, which featured a 1280 x 720, full-persistence LCD display with a 60 Hz refresh rate. The higher resolution means images are clearer and sharper, while the low persistence and high refresh rate eliminate much of the motion blur (i.e., blurring when moving your head) found in DK1.

The DK1 panel uses a grid pixel structure, which gives rise to a “screen door effect” (named for its resemblance to looking through a screen door) caused by the space between pixels. The Rift, on the other hand, has a PenTile structure that produces more of a honeycomb-shaped effect. Red colors tend to magnify the effect due to the geometry of the display’s sub-pixel separation.

Combined with the effects of lens distortion, some detailed images (such as text or detailed textures) may look different inside the Rift than on your computer monitor. Be sure to view your artwork and assets inside the Rift during the development process and make any adjustments necessary to ensure their visual quality.

"Screen Door" Effect

Understanding and Avoiding Display Flicker

Display flicker is generally perceived as a rapid “pulsing” of lightness and darkness on all or parts of a screen. Some people are extremely sensitive to flicker and experience eyestrain, fatigue, or headaches as a result. Others will never even notice it or have any adverse symptoms. Still, there are certain factors that can increase or decrease the likelihood that any given person will perceive display flicker.

The degree to which a user perceives flicker is a function of several factors, including the rate at which the display cycles between “on” and “off” modes, the amount of light emitted during the “on” phase, which parts of the retina are stimulated and how much of them, and even the time of day and the fatigue level of the individual.

Two pieces of information are important to developers. First, people are more sensitive to flicker in the periphery than in the center of vision. Second, brighter screen images produce more flicker. Bright imagery, particularly in the periphery (e.g., standing in a bright, white room), can potentially create noticeable display flicker. Try to use darker colors whenever possible, particularly for areas outside the center of the player’s viewpoint.

The higher the refresh rate, the less perceptible flicker is. This is one of the reasons it is so critical to render v-synced and unbuffered at the display’s full refresh rate (90 fps on the current Rift). As VR hardware matures over time, refresh rates and frame rates will very likely rise even higher.

Rendering Resolution

The Rift has a display resolution of 2160 x 1200, but the distortion of the lenses means the rendered image on the screen must be transformed to appear normal to the viewer. In order to provide adequate pixel density for the transformation, each eye requires a rendered image that is actually larger than the resolution of its half of the display.

Such large render targets can be a performance problem for some graphics cards, and a dropped frame rate produces a poor VR experience. Lowering the display resolution itself does little to help and can introduce visual artifacts. Lowering the resolution of the eye buffers, however, can improve performance while maintaining perceived visual quality.
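As a rough sketch of the arithmetic involved (the function name and the 1.4 oversampling factor below are illustrative assumptions; the actual per-eye size comes from the SDK’s field-of-view query), scaling the eye buffers might look like this:

```python
def eye_buffer_size(panel_w, panel_h, pixel_density):
    """Per-eye render target size for a given pixel-density multiplier.

    panel_w, panel_h: the portion of the display devoted to one eye.
    pixel_density: 1.0 means roughly one rendered pixel per display
    pixel at the center of view; values above 1.0 oversample to leave
    headroom for the distortion pass. Illustrative only -- in practice
    the SDK reports the recommended size.
    """
    return round(panel_w * pixel_density), round(panel_h * pixel_density)

# Each eye gets half of the 2160 x 1200 panel: 1080 x 1200.
full = eye_buffer_size(1080, 1200, 1.4)     # oversampled for distortion
reduced = eye_buffer_size(1080, 1200, 1.0)  # lowered eye-buffer resolution
savings = 1 - (reduced[0] * reduced[1]) / (full[0] * full[1])
```

Dropping the density multiplier from 1.4 to 1.0 in this sketch cuts the per-eye pixel count roughly in half, which is why eye-buffer scaling is such an effective performance lever compared to changing the display resolution.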

This process is covered in more detail in the SDK.

Dynamically Rendered Impostors/Billboards

Depth perception becomes less sensitive at greater distances from the eyes. Up close, stereopsis might allow you to tell which of two objects on your desk is closer on the scale of millimeters. This becomes more difficult further out. If you look at two trees on the opposite side of a park, they might have to be meters apart before you can confidently tell which is closer or farther away. At even larger scales, you might have trouble telling which of two mountains in a mountain range is closer to you until the difference reaches kilometers.
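The falloff described above can be made concrete with the small-angle disparity approximation δ ≈ IPD · Δd / d². The 64 mm interpupillary distance and the specific distances below are illustrative assumptions, not measured values:

```python
import math

def disparity_arcmin(distance_m, depth_gap_m, ipd_m=0.064):
    """Approximate relative binocular disparity, in arcminutes, between
    two objects at distance_m and distance_m + depth_gap_m.

    Uses the small-angle approximation delta = IPD * gap / (d1 * d2),
    in radians. ipd_m assumes an average 64 mm interpupillary distance.
    """
    d1, d2 = distance_m, distance_m + depth_gap_m
    delta_rad = ipd_m * depth_gap_m / (d1 * d2)
    return math.degrees(delta_rad) * 60  # radians -> arcminutes

# A 1 cm gap between objects on a desk at 0.5 m is easy to resolve...
desk = disparity_arcmin(0.5, 0.01)
# ...a 1 m gap between trees at 50 m gives a far weaker signal...
park = disparity_arcmin(50, 1.0)
# ...and mountains need kilometers of separation for a comparable cue.
mountains = disparity_arcmin(5000, 1000)
```

The desk-scale gap produces a disparity two orders of magnitude larger than the park-scale one, matching the intuition that stereopsis is a near-field sense.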

You can exploit this relative insensitivity to depth at a distance to free up computational power by using “impostor” or “billboard” textures in place of fully 3D scenery. For instance, rather than rendering a distant hill in 3D, you might simply render a flat image of the hill onto a single polygon that appears in the left and right eye images. To the viewer, the hill looks the same in VR as it would in a traditional 3D game.

Note: The effectiveness of these impostors will vary depending on the size of the objects involved, the depth cues inside of and around those objects, and the context in which they appear.[1] You will need to test with your own assets to ensure the impostors look and feel right. Make sure the impostors are sufficiently distant from the camera to blend in inconspicuously, and that transitions between real and impostor scene elements do not break immersion.
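A dynamically rendered impostor only pays off if its cached texture is refreshed infrequently. One common bookkeeping scheme, sketched below with illustrative names and an assumed 2-degree threshold (not taken from any particular engine), re-renders the impostor only when the view direction to the object has drifted far enough from the direction it was captured at:

```python
import math

class Impostor:
    """Minimal bookkeeping for a dynamically rendered impostor (sketch)."""

    def __init__(self, position, refresh_angle_deg=2.0):
        self.position = position                     # object center (x, y, z)
        self.refresh_angle = math.radians(refresh_angle_deg)
        self.captured_dir = None                     # view dir at last capture

    def _view_dir(self, camera_pos):
        # Normalized direction from the camera to the object.
        dx = [p - c for p, c in zip(self.position, camera_pos)]
        norm = math.sqrt(sum(d * d for d in dx))
        return tuple(d / norm for d in dx)

    def needs_refresh(self, camera_pos):
        # Refresh when the angle between the current view direction and
        # the one used for the cached texture exceeds the threshold.
        if self.captured_dir is None:
            return True
        dot = sum(a * b for a, b in zip(self._view_dir(camera_pos),
                                        self.captured_dir))
        return math.acos(max(-1.0, min(1.0, dot))) > self.refresh_angle

    def capture(self, camera_pos):
        # Here the engine would render the object into a texture; this
        # sketch only records the view direction used for that render.
        self.captured_dir = self._view_dir(camera_pos)
```

With a distant hill at 1000 m, small camera movements stay under the threshold and reuse the cached image; only a large lateral move triggers a re-render, which is where the savings come from.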

Normal Mapping vs. Parallax Mapping

The technique known as “normal mapping” provides realistic lighting cues to convey depth and texture without adding to the vertex detail of a given 3D model. Although widely used in modern games, it is much less compelling when viewed in stereoscopic 3D. Because normal mapping does not account for binocular disparity or motion parallax, it produces an image akin to a flat texture painted onto the object model.

“Parallax mapping” builds on the idea of normal mapping, but accounts for depth cues that normal mapping does not. It shifts the texture coordinates used to sample the surface texture, based on an additional height map supplied by the content creator. The shift is computed in the shader from the per-pixel or per-vertex view direction. Parallax mapping is best used on surfaces with fine detail that would not affect the collision surface, such as brick walls or cobblestone pathways.
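The per-fragment math is compact. The sketch below shows the classic single-step parallax offset, written in Python for clarity even though in practice it runs in a fragment shader; the scale value is an assumed artist-tuned parameter, and the sign convention depends on whether the map is authored as height or depth:

```python
def parallax_offset(uv, view_dir_tangent, height, scale=0.05):
    """Single-step parallax-mapped texture coordinate shift (sketch).

    uv: original texture coordinates (u, v).
    view_dir_tangent: normalized view direction in tangent space,
        with the z axis pointing away from the surface.
    height: height-map sample in [0, 1] at uv.
    scale: assumed artist-tuned strength of the effect.

    Implements the classic offset:  uv' = uv + (view.xy / view.z) * h * scale
    """
    vx, vy, vz = view_dir_tangent
    u, v = uv
    shift = height * scale
    return (u + vx / vz * shift, v + vy / vz * shift)
```

Viewed head-on (view direction straight along z), the offset vanishes and the surface samples normally; at grazing angles the division by view.z grows the shift, which is what makes raised detail appear to slide correctly as the head moves.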

[1] Allison, R. S., Gillam, B. J., & Vecellio, E. (2009). Binocular depth discrimination and estimation beyond interaction space. Journal of Vision, 9, 1–14.