Oculus Go Development

On 6/23/20 Oculus announced plans to sunset Oculus Go. Information about dates and alternatives can be found in the Oculus Go introduction.

Oculus Quest Development

All Oculus Quest developers must pass concept review before gaining publishing access to the Quest Store and additional resources. Submit a concept document for review as early in your Quest application development cycle as possible. For additional information and context, see Submitting Your App to the Oculus Quest Store.

Expressive Features for Avatars

Expressive features let Oculus Avatars take advantage of their facial geometry, enabling realistic and nuanced animation of various facial behaviors. Expressive features increase social presence and make interactions seem more natural and dynamic. Expressive features consist of the following:

  • Realistic lip-syncing powered by Oculus Lipsync technology.
  • Natural eye behavior, including gaze dynamics and blinking.
  • Ambient facial micro-expressions when an avatar is not speaking.

Using Oculus Avatars with Expressive Features

For the code to add an avatar with expressive features to an Unreal project, see the “Launching the Avatar Samples Unreal Project” section of the Unreal Getting Started Guide.

To use expressive lip-syncing, you will need to download the OvrLipSync plugin, copy it into your project’s Plugins folder, and update your .uproject file accordingly. See AvatarSamples, where it’s currently included, or download the latest from the Oculus developer downloads page.

To see expressive features in a sample, build and run AvatarSamples. In the Unreal Editor, you will see several new options exposed to configure the avatar. Make sure that Enable Expressive is set.

Keep the following considerations in mind concerning materials:

  • We recommend using masked material for performance and better blending into the scene. The sample uses a masked material instead of the previous translucency effect.
  • Translucent material will use the translucency pass to achieve alpha around the edges of the avatar and the base of the hands. This still uses the depth buffer and does not blend as well with the rest of the scene.
  • Opaque material can be used when maximum performance is needed.

Setting Up Gaze Targets

As an avatar’s gaze shifts, different gaze targets enter and leave its field of view, and its eyes automatically move between and focus on valid gaze targets. See the “Gaze Modeling” section of this topic for more information on gaze targeting.

Any object in a scene can be made a gaze target by attaching a UOvrAvatarGazeTarget component to it. This is exposed through the editor for convenience. There are four types of gaze targets, listed here in descending order of saliency:

  • Avatar Head
  • Avatar Hand
  • Object
  • Static Object

Unless otherwise specified by code using void SetGazeTransform(USceneComponent* sceneComp), the gaze transform points to the root scene component of the object to which it is attached. On larger objects, it may be desirable to tune this transform to refine the point of interest.

Expressive Features

The following sections provide more information on each of the expressive features.


Lip Sync

OVRLipSync uses voice input from the microphone to drive realistic lip-sync animation for the avatar. Machine-learned viseme prediction translates the input into a set of blend shape weights used to animate the avatar’s mouth in real time. Physically based blending is used to produce more natural and dynamic mouth movement, along with subtle facial movements around the mouth.

Gaze Modeling

Gaze modeling enables an avatar’s eyes to look around and exhibit gaze dynamics and patterns developed by studying human behavior. Here are the currently implemented kinds of eye behavior:

  • Fixated – Gaze is focused on a gaze target. This state periodically triggers micro-saccades, small jerk-like movements that occur while looking at an object as the eye adjusts focus and moves its target onto the retina.
  • Saccade - Fast, sweeping eye movements where the gaze quickly moves to refocus on a new gaze target.
  • Smooth Pursuit - A movement where the gaze smoothly tracks a gaze target across the avatar’s field of view.

Gaze targets are specifically tagged objects that represent a visual point of interest for an avatar’s gaze. As an avatar’s gaze shifts, different gaze targets enter and leave its field of view, and its eyes will automatically move between and focus on valid gaze targets using the behaviors listed above. If no gaze targets are present, an avatar will exhibit ambient eye movement behavior.

When presented with multiple gaze targets, the targeting algorithm that determines which gaze target an avatar looks at assigns a saliency score to each target. This score is based on the direction and velocity of head movement, eccentricity from the center of vision, distance, target type, and whether the target is moving.

Blink Modeling

Blink modeling enables an avatar to simulate human blinking behaviors, such as periodic blinks to keep the eyes moist and blinks at the end of spoken sentences. Blinks may also be triggered by fast, sweeping eye movements (saccades).

Expression Modeling

Expression modeling enables slight facial nuances and micro-expressions on an avatar’s face to increase social presence when the avatar isn’t actively speaking. These expressions are deliberately subtle, to avoid implying an actual mood or a response to anything happening around the avatar.

Testing Expressive Features

You can test expressive features by using the following user IDs:

  • 10150030458727564
  • 10150030458738922
  • 10150030458747067
  • 10150030458756715
  • 10150030458762178
  • 10150030458769900
  • 10150030458775732
  • 10150030458785587
  • 10150030458806683
  • 10150030458820129
  • 10150030458827644
  • 10150030458843421