
Archived Documentation

This version of the guide is out of date. See the documentation site for the latest version.

Precomputing Visemes to Save CPU Processing

You can save considerable processing power by precomputing the visemes for recorded audio instead of generating them in real time. This is particularly useful for lip-synced animations on non-playable characters, or in mobile apps where less processing power is available.

We provide both a Unity tool for generating precomputed visemes from an audio source and a context called OVRLipSyncContextCanned. It works much the same as OVRLipSyncContext, but reads visemes from a precomputed viseme asset file instead of generating them in real time.

Precomputing viseme assets from an audio file

You can generate viseme asset files for audio clips that meet these requirements:

  • Preload Audio check box is selected.
  • Compression Mode is set to Decompress on Load.

Note: You do not have to ship the audio clips with these settings, but the settings must be applied to generate viseme asset files.
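When you need to apply these import settings to many clips at once, they can also be set from a Unity editor script. The sketch below uses Unity's standard AudioImporter API; the menu path and class name are placeholders, and the script is not part of the Oculus integration:

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch: applies the two import settings required for viseme
// generation to every audio clip selected in the Project window.
public static class LipSyncImportSettings
{
    [MenuItem("Tools/Apply Lip Sync Import Settings")] // placeholder menu path
    private static void Apply()
    {
        foreach (var clip in Selection.GetFiltered<AudioClip>(SelectionMode.Assets))
        {
            string path = AssetDatabase.GetAssetPath(clip);
            var importer = (AudioImporter)AssetImporter.GetAtPath(path);

            // "Preload Audio" check box
            importer.preloadAudioData = true;

            // "Compression Mode" set to Decompress on Load
            var settings = importer.defaultSampleSettings;
            settings.loadType = AudioClipLoadType.DecompressOnLoad;
            importer.defaultSampleSettings = settings;

            importer.SaveAndReimport();
        }
    }
}
```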

To generate a viseme asset file:

  1. Select one or more audio clips in the Unity project window.
  2. Click Tools > Oculus > Generate Lip Sync Assets.

The viseme asset files are saved in the same folder as the audio clips, with the file name audioClipName_lipSync.asset.
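As a quick sanity check, a generated asset can be loaded from editor code. This sketch assumes the asset type is OVRLipSyncSequence, as shipped with the Oculus Lip Sync integration; the asset path is a hypothetical example and should match your own project layout:

```csharp
using UnityEditor;
using UnityEngine;

// Sketch: confirm a generated viseme asset exists next to its source clip.
public static class LipSyncAssetCheck
{
    [MenuItem("Tools/Verify Lip Sync Asset")] // placeholder menu path
    private static void Verify()
    {
        // Hypothetical path: the asset is saved beside its audio clip.
        var sequence = AssetDatabase.LoadAssetAtPath<OVRLipSyncSequence>(
            "Assets/Audio/greeting_lipSync.asset");
        Debug.Log(sequence != null ? "Viseme asset found." : "Asset missing.");
    }
}
```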

Playing back precomputed visemes

  1. On your Unity object, pair an OVRLipSyncContextCanned script component with both an Audio Source component and either an OVRLipSyncContextTextureFlip or an OVRLipSyncContextMorphTarget script component, set up as described in Using Lip Sync Integration.
  2. Drag the viseme asset file to the OVRLipSyncContextCanned component's Current Sequence field.
  3. Play the source audio file on the attached Audio Source component.
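The steps above can also be wired up at runtime. In this sketch, the type and field names (OVRLipSyncContextCanned, currentSequence) follow the Oculus Lip Sync Unity integration, while the component layout and the inspector-assigned references are assumptions:

```csharp
using UnityEngine;

// Sketch of steps 2 and 3 at runtime: assign the precomputed sequence to the
// OVRLipSyncContextCanned component on this object, then play the matching
// clip on the attached Audio Source.
public class CannedVisemePlayer : MonoBehaviour
{
    public OVRLipSyncSequence sequence; // the generated *_lipSync.asset
    public AudioClip clip;              // the audio it was generated from

    void Start()
    {
        // Corresponds to dragging the asset onto "Current Sequence".
        var context = GetComponent<OVRLipSyncContextCanned>();
        context.currentSequence = sequence;

        // Playing the source audio drives the canned visemes.
        var source = GetComponent<AudioSource>();
        source.clip = clip;
        source.Play();
    }
}
```

Because OVRLipSyncContextCanned reads visemes from the asset in step with playback, no microphone or real-time analysis is involved at runtime.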