Precomputing Visemes to Save CPU Processing (Unity)

You can save significant processing power by precomputing the visemes for recorded audio instead of generating them in real time. This is useful for lip-synced animations on non-player characters, or in mobile apps where device processing power is limited.

Oculus Lipsync for Unity includes a tool that precomputes visemes for an audio source, and a context, OVRLipSyncContextCanned, that plays them back. It is similar to OVRLipSyncContext, but reads visemes from a precomputed viseme asset file instead of generating them in real time.
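Conceptually, the canned context replaces runtime audio analysis with a time-based lookup into the baked viseme data. The following is a minimal sketch of that idea, not the shipped implementation; it assumes the precomputed asset type is OVRLipSyncSequence and that it exposes a GetFrameAtTime lookup, as in recent versions of the integration.

```csharp
using UnityEngine;

// Sketch only: shows how a canned context can follow an AudioSource
// without doing any audio analysis at runtime.
public class CannedVisemeLookupSketch : MonoBehaviour
{
    public OVRLipSyncSequence currentSequence; // the precomputed viseme asset
    private AudioSource audioSource;

    void Start()
    {
        audioSource = GetComponent<AudioSource>();
    }

    void Update()
    {
        if (audioSource.isPlaying && currentSequence != null)
        {
            // Sample the baked data at the current playback time.
            OVRLipSync.Frame frame = currentSequence.GetFrameAtTime(audioSource.time);
            // A real context would feed the frame's viseme weights to the
            // morph-target or texture-flip driver described below.
        }
    }
}
```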

Precomputing viseme assets from an audio file

You can generate viseme asset files for audio clips that meet these requirements:

  • Load Type is set to Decompress on Load.
  • Preload Audio Data checkbox is selected.
Note: You do not have to ship the audio clips with these settings, but you do need to set them up this way to generate viseme asset files. (An editor script for applying the settings in bulk is sketched below.)

The following image shows an example.
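If you have many clips to prepare, you can apply these import settings from an editor script instead of clicking through the Inspector. Below is a sketch using Unity's standard AudioImporter API; the menu path and class name are illustrative, and note that newer Unity versions move preloadAudioData from the importer onto the sample settings.

```csharp
using UnityEditor;
using UnityEngine;

// Illustrative helper: applies the import settings required for viseme
// asset generation to every AudioClip selected in the Project window.
public static class LipSyncImportSettings
{
    [MenuItem("Tools/Apply Lip Sync Import Settings")]
    static void Apply()
    {
        foreach (Object obj in Selection.objects)
        {
            string path = AssetDatabase.GetAssetPath(obj);
            var importer = AssetImporter.GetAtPath(path) as AudioImporter;
            if (importer == null)
                continue; // not an audio asset

            // Load Type: Decompress on Load
            AudioImporterSampleSettings settings = importer.defaultSampleSettings;
            settings.loadType = AudioClipLoadType.DecompressOnLoad;
            importer.defaultSampleSettings = settings;

            // Preload Audio Data checkbox (on the importer in older Unity versions)
            importer.preloadAudioData = true;
            importer.SaveAndReimport();
        }
    }
}
```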

To generate a viseme asset file:

  1. Select one or more audio clips in the Unity project window.
  2. Click Tools > Oculus > Generate Lip Sync Assets.

The viseme asset files are saved in the same folder as the audio clips, using the naming convention audioClipName_lipSync.asset.
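Because the naming convention is fixed, editor code can locate a generated sequence from its clip's asset path. The sketch below assumes the generated asset's type is OVRLipSyncSequence (the type the canned context consumes); the helper itself is hypothetical.

```csharp
using UnityEditor;
using UnityEngine;

public static class LipSyncAssetLookup
{
    // Hypothetical helper: given an AudioClip, load the viseme asset that
    // Generate Lip Sync Assets produced alongside it.
    public static OVRLipSyncSequence FindSequenceFor(AudioClip clip)
    {
        string clipPath = AssetDatabase.GetAssetPath(clip); // e.g. Assets/Audio/hello.wav
        string sequencePath =
            clipPath.Substring(0, clipPath.LastIndexOf('.')) + "_lipSync.asset";
        return AssetDatabase.LoadAssetAtPath<OVRLipSyncSequence>(sequencePath);
    }
}
```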

Playing back precomputed visemes

To play back your precomputed visemes, follow these steps. (A script-based equivalent is sketched at the end of this section.)

  1. On your Unity object, pair an OVR Lip Sync Context Canned (Script) component with both an Audio Source component and either an OVR Lip Sync Context Texture Flip (Script) or an OVR Lip Sync Context Morph Target (Script) component, set up as described in Using Lip Sync Integration for Unity.
  2. Drag the viseme asset file to the Current Sequence field of the OVR Lip Sync Context Canned component.
  3. Play the source audio file on the attached Audio Source component.

The following image shows an example.
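If you prefer to wire these steps up from script rather than in the Inspector, the sketch below mirrors them at runtime. It assumes OVRLipSyncContextCanned exposes its sequence as a public currentSequence field, as in recent versions of the integration; the clip and sequence references are placeholders for your own assets.

```csharp
using UnityEngine;

public class PlayCannedLipSync : MonoBehaviour
{
    public AudioClip clip;                    // the recorded audio
    public OVRLipSyncSequence visemeSequence; // the *_lipSync.asset generated earlier

    void Start()
    {
        // Step 1: the object also carries an AudioSource, an
        // OVRLipSyncContextCanned, and a morph-target or texture-flip driver.
        var audioSource = GetComponent<AudioSource>();
        var canned = GetComponent<OVRLipSyncContextCanned>();

        // Step 2: assign the precomputed viseme asset (Current Sequence).
        canned.currentSequence = visemeSequence;

        // Step 3: play the matching audio; the canned context follows along.
        audioSource.clip = clip;
        audioSource.Play();
    }
}
```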