You can save a lot of processing power by pre-computing the visemes for recorded audio instead of generating them in real time. This is particularly useful for lip-synced animations on non-playable characters, or in mobile apps where less processing power is available.
We provide both a Unity tool for generating pre-computed visemes from an audio source and a context called OVRLipSyncContextCanned. It works much the same as OVRLipSyncContext, but reads visemes from a pre-computed viseme asset file instead of generating them in real time.
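The canned-playback idea can be sketched outside Unity: instead of analyzing audio every frame, the context simply looks up a pre-computed viseme frame for the current playback time. The function and frame rate below are illustrative assumptions, not the actual OVRLipSync API.

```python
# Sketch of canned viseme playback (illustrative names, not the OVRLipSync API).
# Visemes are pre-computed at a fixed rate offline; playback is a cheap index lookup.

FRAME_RATE = 100  # assumed pre-computed viseme frames per second of audio

def viseme_at(frames, playback_time, frame_rate=FRAME_RATE):
    """Return the pre-computed viseme frame for the current audio playback time.

    Clamps to the last frame so lookups past the end of the clip stay valid.
    """
    index = min(int(playback_time * frame_rate), len(frames) - 1)
    return frames[index]

# Two toy frames: each maps viseme names to blend weights.
frames = [{"sil": 1.0, "aa": 0.0}, {"sil": 0.2, "aa": 0.8}]
viseme_at(frames, 0.0)    # first frame
viseme_at(frames, 0.015)  # 0.015 s * 100 fps -> index 1, second frame
```

The per-frame cost is a single array index, which is why this approach is so much cheaper than running the audio analysis every frame.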
You can generate viseme asset files for audio clips that meet these requirements:
Note: You do not have to ship the audio clips with these settings, but they must be configured this way when you generate the viseme asset files.
To generate a viseme asset file:
The viseme asset files are saved in the same folder as the audio clips, with the file name audioClipName_lipSync.asset.
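As a quick illustration of that naming rule, the output path can be derived from the clip path like this (a hypothetical helper, not part of the tool):

```python
import posixpath  # Unity asset paths use forward slashes on every platform

def lipsync_asset_path(audio_clip_path):
    """Derive the viseme asset path saved next to the audio clip:
    audioClipName_lipSync.asset, in the same folder."""
    folder, filename = posixpath.split(audio_clip_path)
    clip_name, _ext = posixpath.splitext(filename)
    return posixpath.join(folder, clip_name + "_lipSync.asset")

lipsync_asset_path("Assets/Audio/greeting.wav")
# -> "Assets/Audio/greeting_lipSync.asset"
```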