Precomputing Visemes to Save CPU

You can save a lot of processing power by precomputing the visemes for recorded audio instead of generating them in real time. This is particularly useful for lip-synced animations in mobile apps, where less processing power is available than on the Rift.

We provide a tool in Unity for generating precomputed visemes from an audio source, along with a context component called OVRLipSyncContextCanned. It works much the same as OVRLipSyncContext, but reads visemes from a precomputed viseme asset file instead of generating them in real time.

Precomputing Viseme Assets from an Audio File

You can generate viseme asset files for audio clips that meet these requirements:
  • The Preload Audio check box is selected.
  • Compression Mode is set to Decompress on Load.
Note: You do not have to ship the audio clips with these settings, but they must be configured this way while you generate the viseme asset files.
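The import settings above can also be applied from an editor script when you have many clips to prepare. The following is a minimal sketch, assuming a standard Unity AudioImporter; the menu path and class name are illustrative, and the `preloadAudioData` property may live on the importer or on its sample settings depending on your Unity version, so verify against your editor's API.

```csharp
using UnityEditor;
using UnityEngine;

// Hypothetical editor helper that configures the selected audio clips to
// meet the generation requirements (Preload Audio + Decompress on Load).
public static class VisemeClipPrep
{
    [MenuItem("Tools/Examples/Prepare Clips For Viseme Generation")]
    static void Prepare()
    {
        foreach (var clip in Selection.GetFiltered<AudioClip>(SelectionMode.Assets))
        {
            var path = AssetDatabase.GetAssetPath(clip);
            var importer = (AudioImporter)AssetImporter.GetAtPath(path);

            // Compression Mode: Decompress on Load
            var settings = importer.defaultSampleSettings;
            settings.loadType = AudioClipLoadType.DecompressOnLoad;
            importer.defaultSampleSettings = settings;

            // Preload Audio check box
            importer.preloadAudioData = true;

            importer.SaveAndReimport();
        }
    }
}
```

You can revert the settings after generating the viseme assets, since they are not required for shipping.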

To generate a viseme asset file:

  1. Select one or more audio clips in the Unity project window.
  2. Click Tools > Oculus > Generate Lip Sync Assets.

The viseme asset files are saved in the same folder as the audio clips, with the file name audioClipName_lipSync.asset.
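The two steps above can also be driven from your own editor tooling. This is a sketch, not part of the official plugin: it assumes the menu path shown in step 2 and relies on Unity's standard `EditorApplication.ExecuteMenuItem`, which invokes a menu command exactly as if you clicked it, operating on the clips currently selected in the Project window.

```csharp
using UnityEditor;

// Hypothetical wrapper: invokes the Oculus generation command from script.
public static class LipSyncBatchGenerate
{
    [MenuItem("Tools/Examples/Generate Lip Sync For Selected Clips")]
    static void Generate()
    {
        // Requires one or more audio clips to be selected in the Project
        // window, just like invoking the menu item by hand.
        EditorApplication.ExecuteMenuItem("Tools/Oculus/Generate Lip Sync Assets");
    }
}
```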

Playing Back Precomputed Visemes

  1. On your Unity object, pair an OVRLipSyncContextCanned script component with both an Audio Source component and either an OVRLipSyncContextTextureFlip or an OVRLipSyncContextMorphTarget script component.
  2. Drag the viseme asset file to the OVRLipSyncContextCanned component's Current Sequence field.
  3. Play the source audio file on the attached Audio Source component.
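The playback steps above can be sketched as a small runtime component. This is an illustrative example, not the plugin's own code: the class name is hypothetical, and the `currentSequence` field and `OVRLipSyncSequence` asset type follow the plugin's conventions but should be checked against your version of the Oculus Lip Sync package.

```csharp
using UnityEngine;

// Hypothetical wiring sketch: assigns a precomputed viseme asset at runtime
// and plays the matching audio clip. An OVRLipSyncContextMorphTarget or
// OVRLipSyncContextTextureFlip component on the same object consumes the
// visemes produced by the canned context.
[RequireComponent(typeof(AudioSource))]
[RequireComponent(typeof(OVRLipSyncContextCanned))]
public class CannedVisemePlayer : MonoBehaviour
{
    // Assign the generated audioClipName_lipSync.asset in the Inspector
    // (equivalent to dragging it onto the Current Sequence field).
    public OVRLipSyncSequence precomputedSequence;

    void Start()
    {
        var context = GetComponent<OVRLipSyncContextCanned>();
        context.currentSequence = precomputedSequence;

        // The Audio Source's clip must be the same audio the asset was
        // generated from, so visemes stay in sync with playback.
        GetComponent<AudioSource>().Play();
    }
}
```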