All Oculus Quest developers must pass the concept review before gaining publishing access to the Quest Store and additional resources. Submit a concept document for review as early in your Quest application development cycle as possible. For additional information and context, see Submitting Your App to the Oculus Quest Store.
You can save a lot of processing power by precomputing the visemes for recorded audio instead of generating them in real time. This is particularly useful for lip-synced animations on non-playable characters, or in mobile apps where less processing power is available.
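The idea above can be sketched in plain C++. This is a minimal conceptual sketch, not the Oculus Lipsync plugin API: `analyzeFrame`, `precomputeSequence`, and the frame layout are hypothetical names chosen for illustration, standing in for the real audio-to-viseme analysis that the plugin performs when it generates a sequence asset.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Assumed constants for this sketch: Oculus Lipsync drives 15 viseme
// targets; the 10 ms analysis hop is an arbitrary illustrative value.
constexpr std::size_t kNumVisemes = 15;
constexpr double kFrameSeconds = 0.01;

// One precomputed entry: viseme weights at a given time in the clip.
struct VisemeFrame {
    double timeSeconds;
    std::vector<float> weights;  // one weight per viseme target
};

// Hypothetical stand-in for a real audio-to-viseme analyzer. It only
// distinguishes silent from non-silent windows; a real analyzer would
// return a full weight distribution over all viseme targets.
std::vector<float> analyzeFrame(const std::vector<float>& samples) {
    float energy = 0.0f;
    for (float s : samples) energy += s * s;
    std::vector<float> weights(kNumVisemes, 0.0f);
    weights[energy < 1e-4f ? 0 : 1] = 1.0f;  // index 0 = "silence" viseme
    return weights;
}

// Offline pass: run the expensive analysis once over the whole recorded
// clip and store the results, so playback only looks frames up by time.
std::vector<VisemeFrame> precomputeSequence(const std::vector<float>& clip,
                                            std::size_t samplesPerFrame) {
    std::vector<VisemeFrame> sequence;
    for (std::size_t start = 0; start + samplesPerFrame <= clip.size();
         start += samplesPerFrame) {
        std::vector<float> window(clip.begin() + start,
                                  clip.begin() + start + samplesPerFrame);
        double time = (start / static_cast<double>(samplesPerFrame)) * kFrameSeconds;
        sequence.push_back({time, analyzeFrame(window)});
    }
    return sequence;
}
```

The expensive work happens once, offline; at runtime the stored frames can be played back in sync with the audio, which is what the playback component described below does with a sequence asset.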
To generate a LipSync sequence:

1. Generate a LipSync sequence asset from the recorded audio asset. The following image shows an example:
2. Add an OVRLipSyncPlaybackActor component to your scene. The OVRLipSyncPlaybackActor component works the same as an OVRLipSyncActor component, but reads the visemes from a precomputed sequence asset instead of generating them in real time.
3. Set the OVRLipSyncPlaybackActor component to use the previously precomputed LipSync sequence asset. The following image shows an example:
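The playback side of the steps above can also be sketched conceptually. This is not the OVRLipSyncPlaybackActor implementation; `frameForTime` and the fixed-hop sequence layout are assumptions made for illustration. The point is only that playback reduces to an index lookup into precomputed data, which is far cheaper than real-time analysis.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Each entry holds the viseme weights for one fixed-length analysis frame.
using VisemeWeights = std::vector<float>;

// Fetch the precomputed frame for the current playback position.
// frameSeconds is the analysis hop used when the sequence was precomputed.
const VisemeWeights& frameForTime(const std::vector<VisemeWeights>& sequence,
                                  double playbackSeconds,
                                  double frameSeconds) {
    std::size_t index =
        static_cast<std::size_t>(playbackSeconds / frameSeconds);
    if (index >= sequence.size()) {
        index = sequence.size() - 1;  // clamp at the end of the clip
    }
    return sequence[index];
}
```

Each tick, a playback component would call something like `frameForTime` with the audio component's current position and feed the returned weights to the character's morph targets, keeping the mouth in sync with the recorded audio at negligible CPU cost.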