Attention Systems
Your app informs users of the current state of the system through an attention system. What the system reports can vary; in this case, it refers to the current state of the user's voice interaction with your app.
- MicOff: The voice service is inactive.
- MicOn: The voice service is active.
- Listening: The service is listening for user input. In this state, you can continuously receive the volume level of the user's voice and a live transcription.
- Response: The service returns the NLU result.
- Error: The voice service returns a system error.
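To make these states concrete, a handler for them might look like the following C# sketch. The `VoiceState` enum and `StateLabel` method are hypothetical illustrations for this guide, not types shipped with the toolkit.

```csharp
// Hypothetical enum mirroring the five voice states listed above.
public enum VoiceState
{
    MicOff,    // voice service inactive
    MicOn,     // voice service active
    Listening, // receiving volume level and live transcription
    Response,  // NLU result returned
    Error      // system error
}

public static class VoiceStateText
{
    // Maps a state to user-facing feedback text.
    public static string StateLabel(VoiceState state) =>
        state switch
        {
            VoiceState.MicOff    => "Mic off",
            VoiceState.MicOn     => "Mic on",
            VoiceState.Listening => "Listening...",
            VoiceState.Response  => "Response received",
            VoiceState.Error     => "Something went wrong",
            _                    => ""
        };
}
```

A label-style attention system is essentially this mapping wired to a text element that updates whenever the state changes.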
Creating an effective attention system for your app is essential to providing a good experience. If users can't find information right away, they often become frustrated, angry, or at least disappointed with their app experience. This, in turn, greatly reduces the chance that they'll return to play the game or use the app again.
The toolkit includes a variety of prefabs to help you set up an attention system for your app. You can find these prefabs in the AttentionSystem folder on GitHub or in the downloaded toolkit package (...\[Toolkit Package]\Scripts\Runtime\BuildingBlocks\AttentionSystem\). You can also add them all to your GameObject by adding the VoiceUI script, found in the same folder.
These prefabs are designed to represent the voice state in several common ways. The available prefabs include the following:
- AttenSys-Label.prefab: Shows the voice state to the user as simple text. For example, when you activate the microphone and it enters listening mode, the text could show Listening, then show Response when the data comes back from the voice service.
- AttenSys-UI with VolumeVis.prefab: Represents the voice states and volume visualization using microphone icons, the MicOn and MicOff sound effects available in the Asset Library, and a circular background.
- AttenSys-UI with Dictation.prefab: Represents the voice states and live transcription using icons, the sound effects available in the Asset Library, and text. The live transcription shows what the user says in real time.
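As a rough sketch of what a dictation-style prefab does internally, the component below updates a UI text element from transcription callbacks. The method names and wiring here are assumptions for illustration; connect them to whatever partial- and full-transcription events your voice service actually exposes.

```csharp
using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch: shows live transcription in a Text element.
public class TranscriptionLabel : MonoBehaviour
{
    [SerializeField] private Text label; // assigned in the Inspector

    // Hook this up to the voice service's partial-transcription event.
    public void OnPartialTranscription(string text)
    {
        label.text = text; // updates as the user speaks
    }

    // Hook this up to the voice service's full-transcription event.
    public void OnFullTranscription(string text)
    {
        label.text = text; // final transcription
    }
}
```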
You can see each of these prefabs demonstrated in the ToolkitCollection sample scene, found in the Scenes folder on GitHub or in the downloaded toolkit package (...\[Toolkit Package]\Scenes\).
Instead of adding individual prefabs to a GameObject, you can use the VoiceUI script to implement a complete attention system. The script automatically registers and deregisters voice events and invokes the attention system callbacks. You can find the VoiceUI script in the AttentionSystem folder in the downloaded toolkit package (...\[Toolkit Package]\Scripts\Runtime\BuildingBlocks\AttentionSystem\).
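The auto-registration and deregistration that the VoiceUI script provides typically follows Unity's OnEnable/OnDisable pattern, sketched below. The `onStartListening`/`onStopListening` events and the handler methods are hypothetical placeholders for whatever events your voice service exposes.

```csharp
using UnityEngine;
using UnityEngine.Events;

// Hypothetical sketch of the register/deregister pattern a VoiceUI-style
// script uses so callbacks never outlive the component.
public class VoiceUiSketch : MonoBehaviour
{
    // Placeholder events standing in for the voice service's real events.
    [SerializeField] private UnityEvent onStartListening;
    [SerializeField] private UnityEvent onStopListening;

    private void OnEnable()
    {
        // Auto-registration: hook up callbacks when the component activates.
        onStartListening.AddListener(ShowListeningFeedback);
        onStopListening.AddListener(HideListeningFeedback);
    }

    private void OnDisable()
    {
        // Auto-deregistration: remove callbacks to avoid dangling listeners.
        onStartListening.RemoveListener(ShowListeningFeedback);
        onStopListening.RemoveListener(HideListeningFeedback);
    }

    private void ShowListeningFeedback() { /* e.g., swap mic icon, play MicOn sound */ }
    private void HideListeningFeedback() { /* e.g., restore icon, play MicOff sound */ }
}
```

Pairing AddListener in OnEnable with RemoveListener in OnDisable ensures a disabled or destroyed component stops receiving voice events.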
To add the VoiceUI script to an App Voice Experience GameObject:
- In the Unity editor, select the App Voice Experience GameObject in your scene to which you want to add the VoiceUI script.
- Click Add Component.
- In the Search box, search for VoiceUI.
- Select the VoiceUI script from the search results to add it to the GameObject.
- In the Inspector window, expand VoiceUI (Script). From the Register States list, select the voice state callbacks you want to add.
- Apply your visual and audio feedback for each callback using + or -.
- Now, play your Unity scene to see the result.
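If you prefer to wire things up in code rather than through the editor steps above, the component can also be attached at runtime. This is a minimal sketch assuming the VoiceUI class name matches the script file; callbacks would still be configured in the Inspector or via your own registration code.

```csharp
using UnityEngine;

// Hypothetical sketch: attach VoiceUI from code instead of Add Component.
public class AttachVoiceUi : MonoBehaviour
{
    private void Awake()
    {
        // Adds the VoiceUI component if it isn't already on this GameObject.
        if (GetComponent<VoiceUI>() == null)
        {
            gameObject.AddComponent<VoiceUI>();
        }
    }
}
```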