You can use Wit.ai’s Composer to create a Unity scene with the Voice SDK built into its core. By using Composer to plan and organize the voice interaction, you can build a smoother, more immersive voice-augmented scene.
Composer requires an active Wit.ai account and Voice SDK v46 or later.
To Create a Composer-based Scene
Create a new Wit app.
In Wit.ai, go to Management > Settings and copy the Server Access Token.
In your Unity project, click Meta > Voice SDK > Settings and then paste the Wit.ai Server Access Token into the Wit Configuration box.
Click Link to link your Unity app with your Wit app.
Save the new Wit configuration file with a unique name for your app.
Add Text-to-Speech to your Unity project. For more information, see TTS Setup.
Add the Composer package to your Unity project. You can do this from GitHub, through the Unity Package Manager, or by importing it as a custom package.
In your Wit.ai project, create a Composer graph.
An example Composer graph is shown in Demo Scenes.
In Unity, create a GameObject and add the App Voice Experience component to it.
Create another GameObject and attach the following components:
App Voice Composer Service
Composer Speech Handler
Composer Action Handler
In the App Voice Composer Service, link the Voice Service slot to your App Voice Experience component. The slot is in the Voice Settings section at the top of the component.
In the Composer Speech Handler, add a new speaker and link it to the TTS Speaker you created during TTS Setup.
In the Composer Action Handler, add a handler pointing to the desired function for each event you’ve created in the Composer section of Wit.ai; a sketch of what such a function might look like follows these steps.
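For example, a handler on a scene object might look like the following minimal sketch. The Meta.WitAi.Composer namespace, the ComposerSessionData parameter, and the contextMap.SetData call are assumptions based on the Composer samples, so verify the exact types and signature your SDK version expects; the OnTurnOnLight name and the light-toggling logic are purely illustrative.

```csharp
using UnityEngine;
using Meta.WitAi.Composer; // assumed namespace for ComposerSessionData; confirm in your SDK version

public class LightActionHandler : MonoBehaviour
{
    [SerializeField] private Light roomLight;

    // Hypothetical handler for a "turn_on_light" event defined in the Composer
    // graph on Wit.ai. Point the Composer Action Handler at this method.
    // The ComposerSessionData parameter and its contextMap field are assumptions
    // based on the Composer samples; check the exact signature in your SDK version.
    public void OnTurnOnLight(ComposerSessionData sessionData)
    {
        // Scene-side effect of the voice interaction (illustrative only).
        roomLight.enabled = true;

        // Optionally write a value back into the session's context map so later
        // nodes in the Composer graph can branch on it (assumed API).
        sessionData.contextMap.SetData("light_state", "on");
    }
}
```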
You may also want to try some of the sample projects in your /Assets/Oculus/Voice/Features/Composer/Samples/ folder.
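Finally, something in your scene still needs to start a listening turn at runtime. A common pattern in the Voice SDK samples is to call Activate() on the App Voice Experience from an input event. The sketch below assumes a simple key press and is only illustrative; in an XR app you would typically bind this to a controller button or a UI element instead.

```csharp
using UnityEngine;
using Oculus.Voice; // AppVoiceExperience

public class PushToTalk : MonoBehaviour
{
    // Assign the App Voice Experience created earlier in these steps.
    [SerializeField] private AppVoiceExperience appVoiceExperience;

    private void Update()
    {
        // Start a listening turn on key press (illustrative input only).
        if (Input.GetKeyDown(KeyCode.Space))
        {
            appVoiceExperience.Activate();
        }
    }
}
```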