Oculus Lipsync Guide

Oculus Lipsync is a set of plugins and APIs for syncing avatar lip movements to speech sounds. It analyzes an audio input stream, from a microphone or an audio file, and predicts a set of values called visemes: gestures or expressions of the lips and face that correspond to particular speech sounds. These visemes can then be used to animate an avatar's lips. With Oculus Lipsync, visemes can be precomputed to save CPU or generated in real time.
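
To make that flow concrete, here is a minimal sketch of the real-time path: an audio frame goes in, and one weight per viseme comes out. All names below (VisemeWeights, AnalyzeAudioFrame, OnAudioFrame) are hypothetical placeholders for illustration, not the actual Oculus Lipsync API.

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t kVisemeCount = 15;
using VisemeWeights = std::array<float, kVisemeCount>;  // each weight in [0, 1]

// Stand-in for the analysis the Lipsync engine performs internally.
VisemeWeights AnalyzeAudioFrame(const float* samples, std::size_t count) {
    (void)samples;
    (void)count;
    VisemeWeights weights{};  // all zeros corresponds to silence ("sil")
    // ... a real implementation would fill this from the audio signal ...
    return weights;
}

// Called once per audio buffer from the microphone or an audio file.
void OnAudioFrame(const float* samples, std::size_t count) {
    VisemeWeights weights = AnalyzeAudioFrame(samples, count);
    // Feed `weights` to the avatar's morph targets each rendered frame.
    (void)weights;
}
```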

Animated Lipsync Example

The following animated image shows an avatar driven by Oculus Lipsync saying “Welcome to the Oculus Lipsync demo.”

Visemes and Oculus Lipsync

A viseme describes a visual gesture or expression of the lips and face that corresponds to a particular speech sound, similar to how a phoneme describes a sound. The term viseme is used when discussing lip reading and is a basic visual unit of intelligibility. In computer animation, visemes may be used to animate avatars so that they look like they are speaking.

Oculus Lipsync uses a repertoire of visemes to modify avatars based on a specified audio input stream. Each viseme maps to a geometry morph target on the avatar, and its weight controls how strongly that target is expressed on the model. Thus, with Oculus Lipsync we can generate realistic lip movement in sync with what is being spoken or heard. This strengthens the visual cues avatars provide, whether the character is controlled by the user or is a non-playable character (NPC).
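
As a rough illustration of that mapping, the following sketch drives an avatar's morph targets from a set of viseme weights. The AvatarMesh type and its SetMorphTargetWeight method are hypothetical stand-ins; the real calls depend on the engine or plugin in use.

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t kVisemeCount = 15;

struct AvatarMesh {
    // Hypothetical stand-in for an engine's blend-shape setter.
    void SetMorphTargetWeight(std::size_t target, float weight) {
        (void)target;
        (void)weight;  // engine-specific in practice
    }
};

// Called once per rendered frame with the latest predicted weights.
void ApplyVisemes(AvatarMesh& mesh,
                  const std::array<float, kVisemeCount>& weights) {
    for (std::size_t i = 0; i < kVisemeCount; ++i) {
        // A weight of 0 leaves the target neutral; 1 expresses it fully.
        mesh.SetMorphTargetWeight(i, weights[i]);
    }
}
```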

The Oculus Lipsync system maps to 15 separate viseme targets: sil, PP, FF, TH, DD, kk, CH, SS, nn, RR, aa, E, ih, oh, and ou. Each viseme describes the facial expression produced when uttering the corresponding speech sound. For example, the viseme sil corresponds to a silent/neutral expression, PP corresponds to pronouncing the first syllable of “popcorn”, and FF to the first syllable of “fish”. See the Viseme Reference Images for images that represent each viseme.
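
For illustration, these 15 targets can be indexed in the order given above. The names come from this guide, but the enum itself is only a sketch, not a type defined by the Oculus Lipsync SDK:

```cpp
// The 15 viseme targets, in the order listed above. The names come from
// this guide; the enum is only an illustrative way to index them.
enum class Viseme : int {
    sil, PP, FF, TH, DD, kk, CH, SS, nn, RR, aa, E, ih, oh, ou
};

// Example: the entry at index Viseme::PP in a weight array would peak
// while pronouncing the start of "popcorn".
constexpr int kVisemeCount = 15;
```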

These 15 visemes were selected to cover the maximum range of lip movement and are language-agnostic. For more information, see the Viseme MPEG-4 Standard.

Topic Guide by Development Platform

As mentioned previously, Oculus Lipsync offers plugins for popular game engines and APIs for native development. The following table links to topics on installing and using Oculus Lipsync for Unity, Unreal, or native C++ development.

| Topic | Unity | Unreal | Native development |
| --- | --- | --- | --- |
| Overview of the tool, requirements, download and setup | Unity Overview | Unreal Overview | Native Overview |
| Using Oculus Lipsync | Using the Unity Lipsync package | Use the Unreal Lipsync package | Using the Native Lipsync Package |
| Precompute visemes to improve performance | Guide to Precomputing Visemes for Unity | Guide to Precomputing Visemes for Unreal | None |
| Sample | Exploring Oculus Lipsync with the Unity Sample | Exploring Oculus Lipsync with the Unreal Sample | Exploring Oculus Lipsync with the Native Sample |

Reference Content

| Topic | Description |
| --- | --- |
| Viseme Reference Images | Provides a visual guide to mouth shapes that represent various phonemes. |
| Lipsync API Reference for Native Development | Guide to the APIs for native Lipsync development. |