Oculus Lipsync is a set of plugins and APIs for syncing avatar lip movements to speech. It analyzes an audio stream, from a microphone or from an audio file, and predicts a set of values called visemes: gestures or expressions of the lips and face that correspond to particular speech sounds. These visemes can then be used to animate the lips of an avatar. With Oculus Lipsync, visemes can be precomputed to save CPU or generated in real time.
The following animated image shows how Oculus Lipsync animates an avatar saying “Welcome to the Oculus Lipsync demo.”
A viseme is a visual gesture or expression of the lips and face that corresponds to a particular speech sound, much as a phoneme describes the sound itself. In lip reading, visemes are basic visual units of intelligibility. In computer animation, visemes can be used to animate avatars so that they appear to be speaking.
Oculus Lipsync uses a repertoire of visemes to modify avatars based on an audio input stream. Each viseme is mapped to a morph target in the avatar’s geometry, and its value controls how strongly that target is expressed on the model. Oculus Lipsync can therefore generate realistic lip movement in sync with what is being spoken or heard. This enhances the visual cues available when populating an application with avatars, whether a character is controlled by the user or is a non-playable character (NPC).
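In real-time use, the flow for each audio frame is: feed a chunk of audio to an analysis context and get back one weight per viseme, which the rendering side then applies to the avatar. The C++ sketch below illustrates the shape of that data flow; every name in it (`kVisemeCount`, `VisemeFrame`, `analyzeChunk`, `onAudioChunk`) is an illustrative stand-in rather than an actual SDK symbol, and the real entry points are covered in the engine-specific guides linked further below.

```cpp
#include <array>
#include <cstddef>
#include <vector>

// All names below are illustrative stand-ins, not actual SDK symbols.
constexpr std::size_t kVisemeCount = 15; // sil, PP, FF, ... (see the list below)

// One analysis result: a weight in [0, 1] per viseme target.
using VisemeFrame = std::array<float, kVisemeCount>;

// Hypothetical analysis step. In the real plugins this is a call into the
// Lipsync runtime; it is stubbed here so the sketch stands alone.
VisemeFrame analyzeChunk(const std::vector<float>& monoPcmChunk) {
    (void)monoPcmChunk;
    VisemeFrame weights{};   // zero-initialize all 15 weights
    weights[0] = 1.0f;       // pretend the chunk is silence: full "sil"
    return weights;
}

// Called for each chunk of microphone (or decoded file) audio. The weights
// are then applied to the avatar's morph targets (sketched further below).
void onAudioChunk(const std::vector<float>& chunk) {
    VisemeFrame visemes = analyzeChunk(chunk);
    (void)visemes; // hand off to the rendering side
}
```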
The Oculus Lipsync system maps to 15 separate viseme targets: sil, PP, FF, TH, DD, kk, CH, SS, nn, RR, aa, E, ih, oh, and ou. Each viseme describes the facial expression produced when uttering the corresponding speech sound. For example, the viseme sil corresponds to a silent/neutral expression, PP corresponds to the first sound in “popcorn”, and FF to the first sound in “fish”. See the Viseme Reference Images for images that represent each viseme.
These 15 visemes were selected to give the maximum range of lip movement and are language-agnostic. For more information, see the Viseme MPEG-4 Standard.
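To make the viseme-to-morph-target mapping concrete, here is a minimal sketch that enumerates the 15 targets in the order listed above and applies one frame of weights to an avatar’s blendshapes. The `Mesh` type, the `setMorphTargetWeight` call, and the `viseme_` naming convention are assumptions for illustration; each engine exposes its own equivalent.

```cpp
#include <array>
#include <cstddef>
#include <string>

// The 15 viseme targets, in the order listed above.
enum class Viseme : int {
    sil, PP, FF, TH, DD, kk, CH, SS, nn, RR, aa, E, ih, oh, ou,
    Count
};

constexpr std::size_t kVisemeCount = static_cast<std::size_t>(Viseme::Count); // 15

// Hypothetical mesh interface; every engine with blendshape/morph-target
// support exposes an equivalent. Stubbed so the sketch stands alone.
struct Mesh {
    void setMorphTargetWeight(const std::string& name, float weight) {
        (void)name; (void)weight; // engine-specific in practice
    }
};

// Morph-target names on the avatar, matched one-to-one with the visemes.
// The "viseme_" prefix is an assumed naming convention, not a requirement.
const std::array<std::string, kVisemeCount> kTargetNames = {
    "viseme_sil", "viseme_PP", "viseme_FF", "viseme_TH", "viseme_DD",
    "viseme_kk",  "viseme_CH", "viseme_SS", "viseme_nn", "viseme_RR",
    "viseme_aa",  "viseme_E",  "viseme_ih", "viseme_oh", "viseme_ou",
};

// Apply one frame of weights: each value in [0, 1] controls how strongly
// the corresponding morph target is expressed on the model.
void applyVisemes(Mesh& mesh, const std::array<float, kVisemeCount>& weights) {
    for (std::size_t i = 0; i < kVisemeCount; ++i) {
        mesh.setMorphTargetWeight(kTargetNames[i], weights[i]);
    }
}
```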
As mentioned previously, Oculus Lipsync offers plugins for popular game engines and APIs for native development. The following table links to topics on installing and using Oculus Lipsync for Unity, Unreal, or native C++ development.
Topic | Unity | Unreal | Native development |
---|---|---|---|
Overview of the tool, requirements, download, and setup | Unity Overview | Unreal Overview | Native Overview |
Using Oculus Lipsync | Using the Unity Lipsync package | Using the Unreal Lipsync package | Using the Native Lipsync Package |
Precompute visemes to improve performance (see the sketch after this table) | Guide to Precomputing Visemes for Unity | Guide to Precomputing Visemes for Unreal | Not available |
Sample | Exploring Oculus Lipsync with the Unity Sample | Exploring Oculus Lipsync with the Unreal Sample | Exploring Oculus Lipsync with the Native Sample |
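The precompute row in the table above trades memory for CPU: the same analysis runs once, offline, over an entire audio asset, and playback simply samples the stored weights. Below is a minimal sketch of such a baked track; the fixed 100 Hz analysis rate and all type names here are illustrative assumptions, not values defined by the SDK.

```cpp
#include <array>
#include <cstddef>
#include <vector>

constexpr std::size_t kVisemeCount = 15;
using VisemeFrame = std::array<float, kVisemeCount>;

// A baked viseme track produced offline: one weight vector per analysis
// frame. The fixed 100 Hz rate is an assumption for this sketch, not a
// value mandated by the SDK.
struct VisemeTrack {
    float framesPerSecond = 100.0f;
    std::vector<VisemeFrame> frames;
};

// Playback: look up the weights for the current audio position. No audio
// analysis runs on this path, which is where the CPU saving comes from.
VisemeFrame sampleTrack(const VisemeTrack& track, double audioTimeSeconds) {
    if (track.frames.empty()) return VisemeFrame{};
    auto index = static_cast<std::size_t>(audioTimeSeconds * track.framesPerSecond);
    if (index >= track.frames.size()) index = track.frames.size() - 1;
    return track.frames[index];
}
```

Nearest-frame lookup keeps the sketch short; a real player would typically interpolate between adjacent frames for smoother motion.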
The following table lists additional reference topics.

Topic | Description |
---|---|
Viseme Reference Images | Provides a visual guide to mouth shapes that represent various phonemes. |
Lipsync API Reference for Native Development | Guide to the APIs for native Lipsync development. |