This documentation is no longer being updated and is subject to removal.
Oculus Lipsync maps human speech to a set of mouth shapes, called “visemes”, which are a visual analog to phonemes. Each viseme depicts the mouth shape for a specific set of phonemes, and over time these visemes are interpolated to simulate natural mouth motion.

Below are the reference images we used to create our own demo shapes. Each row gives the viseme name, example phonemes that map to that viseme, example words, and images showing both mild and emphasized production of that viseme. We hope you will find these useful in creating your own models. For more information on these 15 visemes and how they were selected, see the following documentation: Viseme MPEG-4 Standard.
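To illustrate the interpolation mentioned above, here is a minimal sketch, not the Oculus Lipsync API itself, of how per-frame viseme weights might be linearly blended between two analysis frames to smooth mouth motion. The viseme names follow the 15-viseme set referenced in this document; the function and variable names are illustrative assumptions.

```python
# Hypothetical sketch: blending viseme weight frames over time.
# Not the Oculus Lipsync API; names are illustrative only.

# The 15 visemes referenced by this document (MPEG-4 style naming).
VISEMES = ["sil", "PP", "FF", "TH", "DD", "kk", "CH", "SS",
           "nn", "RR", "aa", "E", "ih", "oh", "ou"]

def interpolate_visemes(prev_frame, curr_frame, t):
    """Linearly interpolate two viseme weight frames.

    prev_frame, curr_frame: lists of 15 weights in [0, 1].
    t: blend factor in [0, 1]; 0 returns prev_frame, 1 returns curr_frame.
    """
    return [(1.0 - t) * p + t * c for p, c in zip(prev_frame, curr_frame)]

# Example: halfway between silence and a fully open "aa" viseme.
silence = [1.0] + [0.0] * 14                # "sil" fully active
aa_open = [0.0] * 10 + [1.0] + [0.0] * 4    # "aa" fully active
blended = interpolate_visemes(silence, aa_open, 0.5)
```

In an engine, each interpolated weight would typically drive the corresponding blend shape on the character's face mesh every frame.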
Animated example
The following animation shows the visemes from the reference image section.
Reference Images
Click an image to view it at a larger size. Only a subset of the phonemes that map to each viseme is shown.