Precompute Visemes to Save CPU Processing in Unity
End-of-Life Notice for Oculus Spatializer Plugin
This documentation is no longer being updated and is subject to removal.
You can save significant processing power by precomputing the visemes for recorded audio instead of generating them in real time. This is especially useful for lip-synced animation on non-player characters, and in mobile apps where processing power is limited.
Oculus Lipsync provides a Unity tool that precomputes visemes for an audio source, along with a context called OVRLipSyncContextCanned. It is similar to OVRLipSyncContext, but reads visemes from a precomputed viseme asset file instead of generating them in real time.
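The canned context is a drop-in replacement because downstream components read viseme frames through a shared base class rather than from a specific context type. The following is a minimal sketch of such a consumer, assuming the OVRLipSyncContextBase class and its GetCurrentPhonemeFrame method from the Lipsync Unity integration; the VisemeLogger class itself is illustrative only.

```csharp
using UnityEngine;

// Illustrative consumer: works with either OVRLipSyncContext or
// OVRLipSyncContextCanned, because both are assumed to derive from
// OVRLipSyncContextBase (as in the Lipsync Unity integration).
public class VisemeLogger : MonoBehaviour
{
    private OVRLipSyncContextBase context;

    void Start()
    {
        // Real-time or canned: the consumer does not need to know which.
        context = GetComponent<OVRLipSyncContextBase>();
    }

    void Update()
    {
        OVRLipSync.Frame frame = context.GetCurrentPhonemeFrame();
        if (frame != null)
        {
            // frame.Visemes holds one weight per viseme; a morph target
            // component would map these weights onto blend shapes.
            Debug.Log("First viseme weight: " + frame.Visemes[0]);
        }
    }
}
```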
Precomputing viseme assets from an audio file
You can generate viseme asset files for audio clips that meet the following requirements (the sketch after this list shows one way to apply them in bulk):
- Load Type is set to Decompress on Load.
- Preload Audio Data checkbox is selected.
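Setting these import options by hand is tedious for many clips. The following editor sketch applies both settings to the selected clips. It assumes the classic AudioImporter API (defaultSampleSettings.loadType plus the preloadAudioData property; newer Unity versions move preload into AudioImporterSampleSettings), and the menu path is hypothetical.

```csharp
using UnityEditor;
using UnityEngine;

public static class LipSyncImportSettings
{
    // Hypothetical menu item; place this file in an Editor folder.
    [MenuItem("Tools/Apply Lip Sync Import Settings")]
    private static void Apply()
    {
        foreach (AudioClip clip in Selection.GetFiltered<AudioClip>(SelectionMode.Assets))
        {
            string path = AssetDatabase.GetAssetPath(clip);
            var importer = (AudioImporter)AssetImporter.GetAtPath(path);

            // Requirement 1: Load Type = Decompress on Load.
            AudioImporterSampleSettings settings = importer.defaultSampleSettings;
            settings.loadType = AudioClipLoadType.DecompressOnLoad;
            importer.defaultSampleSettings = settings;

            // Requirement 2: Preload Audio Data. (Moves into the per-platform
            // sample settings in newer Unity versions.)
            importer.preloadAudioData = true;

            importer.SaveAndReimport();
        }
    }
}
```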
To generate a viseme asset file:
- Select one or more audio clips in the Unity project window.
- Click Tools > Oculus > Generate Lip Sync Assets.
The viseme asset files are saved in the same folder as the audio clips, with the file name audioClipName_lipSync.asset.
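Because generation is exposed as a menu item, you can also trigger it from editor scripts, for example as part of a content pipeline. Below is a sketch using Unity's standard EditorApplication.ExecuteMenuItem with the menu path documented above; the wrapper class and its menu item are hypothetical.

```csharp
using UnityEditor;
using UnityEngine;

public static class LipSyncAssetGeneration
{
    // Hypothetical wrapper; acts on the clips currently selected in the
    // Project window, exactly like the manual workflow above.
    [MenuItem("Tools/Generate Lip Sync Assets For Selection")]
    private static void Generate()
    {
        if (Selection.GetFiltered<AudioClip>(SelectionMode.Assets).Length == 0)
        {
            Debug.LogWarning("Select one or more audio clips first.");
            return;
        }

        // Invokes the same command as Tools > Oculus > Generate Lip Sync Assets.
        EditorApplication.ExecuteMenuItem("Tools/Oculus/Generate Lip Sync Assets");
    }
}
```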
Playing back precomputed visemes
To play back your precomputed visemes, follow these steps; a scripted equivalent is sketched after the list.
- On your Unity object, pair an OVR Lip Sync Context Canned (Script) component with both an Audio Source component and either an OVR Lip Sync Context Texture Flip or an OVR Lip Sync Context Morph Target (Script) component, set up as described in Using Lip Sync Integration for Unity.
- Drag the viseme asset file to the Current Sequence field of the OVR Lip Sync Context Canned component.
- Play the source audio file on the attached Audio Source component.
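The same wiring can be done from script instead of the Inspector. This is a minimal sketch, assuming the Current Sequence field is exposed as currentSequence on OVRLipSyncContextCanned and that the viseme asset is an OVRLipSyncSequence, as in the Lipsync Unity integration; the CannedVisemePlayer class is illustrative only.

```csharp
using UnityEngine;

[RequireComponent(typeof(AudioSource))]
[RequireComponent(typeof(OVRLipSyncContextCanned))]
public class CannedVisemePlayer : MonoBehaviour
{
    // Assign the audioClipName_lipSync.asset generated above.
    public OVRLipSyncSequence visemeSequence;

    // Assign the matching source audio clip.
    public AudioClip audioClip;

    void Start()
    {
        // Point the canned context at the precomputed visemes.
        var context = GetComponent<OVRLipSyncContextCanned>();
        context.currentSequence = visemeSequence;

        // Play the source audio; visemes are read from the asset in step
        // with playback instead of being computed from the signal.
        var source = GetComponent<AudioSource>();
        source.clip = audioClip;
        source.Play();
    }
}
```

A texture flip or morph target component on the same object then consumes the frames, as configured in the first step above.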