Oculus Lipsync for Unity Development
End-of-Life Notice for the Oculus Lipsync Plugin
This documentation is no longer being updated and is subject to removal.
Oculus Lipsync offers a Unity plugin, for use on Windows or macOS, that syncs avatar lip movements to speech sounds and laughter. Lipsync analyzes an audio stream from a microphone or an audio file and predicts a set of values called visemes, which are gestures or expressions of the lips and face that correspond to particular speech sounds. In lip reading, the viseme is the basic visual unit of intelligibility. In computer animation, visemes can be used to animate avatars so that they appear to be speaking.
Lipsync uses a repertoire of visemes to modify avatars based on a specified audio input stream. Each viseme targets a specific geometry morph target on an avatar and controls how strongly that target is expressed on the model. Lipsync can therefore generate realistic lip movement in sync with what is being spoken or heard, enhancing the visual cues available when populating an application with avatars, whether a character is controlled by the user or is a non-playable character (NPC).
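The following Unity script is a minimal sketch of this idea: it reads the per-viseme weights for the current audio frame and applies them to blend shapes on a SkinnedMeshRenderer. It assumes the OVRLipSyncContext component and its GetCurrentPhonemeFrame() accessor from the imported package, and that the avatar's first 15 blend shapes are ordered to match the visemes; the class and field names here are illustrative only.

```csharp
using UnityEngine;

// Sketch: drive an avatar's blend shapes from Lipsync viseme weights.
// Assumes the OVRLipSyncContext component from the imported package and a
// SkinnedMeshRenderer whose first 15 blend shapes are ordered like the visemes.
[RequireComponent(typeof(OVRLipSyncContext))]
public class VisemeBlendDriver : MonoBehaviour
{
    public SkinnedMeshRenderer avatarMesh;       // mesh with viseme blend shapes
    [Range(0f, 100f)] public float gain = 100f;  // viseme weight (0..1) -> blend shape percent

    private OVRLipSyncContext context;

    void Start()
    {
        context = GetComponent<OVRLipSyncContext>();
    }

    void Update()
    {
        // The frame holds one weight (0..1) per viseme, sil through ou.
        OVRLipSync.Frame frame = context.GetCurrentPhonemeFrame();
        if (frame == null || avatarMesh == null)
            return;

        for (int i = 0; i < frame.Visemes.Length; i++)
        {
            // Blend shape weights in Unity are expressed in percent.
            avatarMesh.SetBlendShapeWeight(i, frame.Visemes[i] * gain);
        }
    }
}
```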
The Lipsync system maps to 15 separate viseme targets: sil, PP, FF, TH, DD, kk, CH, SS, nn, RR, aa, E, ih, oh, and ou. The visemes describe the facial expression produced when uttering the corresponding speech sound. For example, the viseme sil corresponds to a silent/neutral expression, PP corresponds to pronouncing the first syllable in “popcorn”, and FF the first syllable of “fish”. See the Viseme Reference Images for images that represent each viseme.
These 15 visemes have been selected to give the maximum range of lip movement and are language-agnostic. For more information, see the Viseme MPEG-4 Standard.
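If your avatar's blend shapes are named after these visemes, you can resolve blend shape indices by name rather than relying on index order. The snippet below is a small sketch of that mapping; the "viseme_" naming prefix is an assumed convention and should be adjusted to match your model.

```csharp
using UnityEngine;

// Sketch: the 15 viseme names in the order Lipsync reports them, useful for
// looking up matching blend shape indices on an avatar mesh by name.
public static class VisemeNames
{
    public static readonly string[] Ordered =
    {
        "sil", "PP", "FF", "TH", "DD", "kk", "CH", "SS",
        "nn", "RR", "aa", "E", "ih", "oh", "ou"
    };

    // Resolve blend shape indices once, e.g. "viseme_PP" on the shared mesh.
    // The "viseme_" prefix is an assumed naming convention; GetBlendShapeIndex
    // returns -1 for any shape that is missing on the mesh.
    public static int[] ResolveBlendShapeIndices(SkinnedMeshRenderer renderer, string prefix = "viseme_")
    {
        var indices = new int[Ordered.Length];
        for (int i = 0; i < Ordered.Length; i++)
            indices[i] = renderer.sharedMesh.GetBlendShapeIndex(prefix + Ordered[i]);
        return indices;
    }
}
```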
The following animated image shows how you could use Lipsync to say “Welcome to the Oculus Lipsync demo.”

Lipsync version 1.30.0 and newer also supports laughter detection, which can help add more character and emotion to your avatars.
The following animation shows an example of laughter detection.

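As a rough sketch, the laughter score can be read from the same frame data as the visemes and used to drive a dedicated laughter blend shape. The laughterScore field name reflects recent Lipsync releases (1.30.0 and newer); verify it, along with the blend shape index, against the package version you import.

```csharp
using UnityEngine;

// Sketch: read the laughter score (Lipsync 1.30.0+) and drive a single
// "laughter" blend shape. The field name laughterScore and the blend shape
// index are assumptions; check the Frame type in the version you imported.
public class LaughterDriver : MonoBehaviour
{
    public OVRLipSyncContext context;
    public SkinnedMeshRenderer avatarMesh;
    public int laughterBlendShapeIndex = 15;     // index of a laughter shape on your mesh
    [Range(0f, 200f)] public float gain = 100f;  // score (0..1) -> blend shape percent

    void Update()
    {
        if (context == null || avatarMesh == null)
            return;

        OVRLipSync.Frame frame = context.GetCurrentPhonemeFrame();
        if (frame == null)
            return;

        // The laughter score is reported alongside the viseme weights, in 0..1.
        avatarMesh.SetBlendShapeWeight(laughterBlendShapeIndex,
                                       Mathf.Clamp01(frame.laughterScore) * gain);
    }
}
```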
The following sections describe the requirements, download and setup for development with the Lipsync plugin for Unity.
The Lipsync Unity integration requires Unity 5.x (Professional or Personal) or later, targeting Android or Windows platforms, running on Windows 7, 8, or 10. OS X 10.9.5 and later is also supported. See Unity Compatibility and Requirements for details on our recommended versions.
To download the Lipsync Unity integration and import it into a Unity project, complete the following steps.
- Download the Oculus Lipsync Unity package from the Oculus Lipsync Unity page.
- Extract the zip archive.
- Open your project in the Unity Editor, or create a new project.
- In the Unity Editor, select Assets > Import Package > Custom Package.
- Select the OVRLipSync.unitypackage file in the LipSync\UnityPlugin sub-folder of the archive you extracted in the first step. When the Importing Package dialog opens, leave all assets selected and click Import.
Note: We recommend removing any previously-imported versions of the Lipsync Unity integration before importing a new version.
If you wish to use both the OVRVoiceMod and OVRLipSync plugins, install the Unity unified package.
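After the package is imported, the components can also be wired together from a script rather than in the Inspector. The following is a minimal sketch assuming the OVRLipSync, OVRLipSyncContext, and OVRLipSyncContextMorphTarget components and their audioSource and skinnedMeshRenderer fields, as they appear in recent package versions; verify the names against the version you imported.

```csharp
using UnityEngine;

// Sketch: wiring the imported Lipsync components together from a script.
// Component and field names reflect recent package versions; verify them
// against the package you imported.
public class LipSyncSceneSetup : MonoBehaviour
{
    public SkinnedMeshRenderer avatarMesh;  // avatar mesh with viseme blend shapes
    public AudioClip speechClip;            // prerecorded speech to analyze

    void Start()
    {
        // One OVRLipSync instance per scene initializes the native engine.
        if (FindObjectOfType<OVRLipSync>() == null)
            gameObject.AddComponent<OVRLipSync>();

        // The context analyzes audio played by this object's AudioSource.
        var source = gameObject.AddComponent<AudioSource>();
        source.clip = speechClip;
        source.loop = true;
        source.Play();

        var context = gameObject.AddComponent<OVRLipSyncContext>();
        context.audioSource = source;

        // The morph target component maps viseme weights onto blend shapes.
        var morphTarget = gameObject.AddComponent<OVRLipSyncContextMorphTarget>();
        morphTarget.skinnedMeshRenderer = avatarMesh;
    }
}
```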
The following topics provide more information:
- Using Oculus Lipsync
- Use precomputed visemes to improve performance
- Lipsync Sample
- Viseme reference images