Oculus Quest Development

All Oculus Quest developers must pass concept review before gaining publishing access to the Quest Store and additional resources. Submit a concept document for review as early as possible in your Quest application development cycle. For more information and context, see Submitting Your App to the Oculus Quest Store.

Unity Audio

This guide describes guidelines and resources for creating a compelling VR audio experience in Unity.

If you’re unfamiliar with Unity’s audio handling, we recommend starting with the Unity Audio guide.

General Audio Best Practices

  • Do not use more than 16 audio sources.
  • Avoid using Decompress on Load for audio clips.
  • Do not use ONSP reflections for Android applications.
  • Disable Preload Audio Data for all individual audio clips.
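The clip-level recommendations above can be applied from an editor script. The following is a minimal sketch using Unity's AudioImporter API; the class name and the idea of batch-applying settings are illustrative, and property availability can vary by Unity version:

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch: apply the recommended import settings to one audio clip.
public static class AudioImportSettingsExample
{
    public static void ApplyRecommendedSettings(string assetPath)
    {
        var importer = AssetImporter.GetAtPath(assetPath) as AudioImporter;
        if (importer == null) return;

        // Disable Preload Audio Data, as recommended above.
        importer.preloadAudioData = false;

        // Avoid Decompress On Load: keep clips compressed in memory
        // (or use Streaming for long music/ambience tracks).
        var settings = importer.defaultSampleSettings;
        settings.loadType = AudioClipLoadType.CompressedInMemory;
        importer.defaultSampleSettings = settings;

        importer.SaveAndReimport();
    }
}
```

Running this over your clips keeps memory usage predictable, since compressed clips are decoded on the fly rather than expanded into uncompressed PCM at load time.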


Audio is crucial for creating a persuasive VR experience. Because audio cues play a key role in our sense of being present in an actual, physical space, any effort that development teams devote to getting audio right will be repaid in the user's sense of immersion.

Oculus provides free, easy-to-use spatializer plugins for game engines such as Unity and Unreal, for audio middleware, and for native development. Our spatialization features support both Rift and Android development.

Our ability to localize audio sources in three-dimensional space is a fundamental part of how we experience sound. Spatialization is the process of modifying sounds to make them localizable, so they seem to originate from distinct locations relative to the listener. It is a key part of creating presence in virtual reality games and applications.

Audio Features

Oculus provides the following audio features:

  • Spatialization and Head Tracking - Transform monophonic sound sources to make them sound as though they originate from a specific desired direction.
  • Audio Propagation - Provides real-time reverb and occlusion simulation based on game geometry. You simply tag the scene meshes that you want included in the simulation and select the acoustic material for each mesh.
  • Volumetric Sources - Sound sources can be given a radius which will make them sound volumetric. This will spread the sound out, so that as the source approaches the listener, and then completely envelops the listener, the sound will be spread out over a volume of space.
  • Near-field Rendering - Sound sources in close proximity to a listener’s head have properties that make some aspects of their spatialization independent of their distance. Our near-field rendering automatically approximates the effects of acoustic diffraction to create a more realistic representation of audio sources closer than 1 meter.
  • Lip Sync - Synchronizes avatar lip movements to speech and laughter sounds with Oculus Lipsync.
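To use these features on an individual sound, the source must be routed through the spatializer. The sketch below uses only Unity's built-in AudioSource API and assumes the Oculus Spatializer has been selected as the project's Spatializer Plugin (under Edit > Project Settings > Audio); any plugin-specific properties (volumetric radius, near-field settings, and so on) live on the plugin's own components and may differ by version:

```csharp
using UnityEngine;

// Minimal sketch: hand a monophonic clip to the selected spatializer plugin.
[RequireComponent(typeof(AudioSource))]
public class SpatializedSource : MonoBehaviour
{
    void Start()
    {
        var src = GetComponent<AudioSource>();
        src.spatialize = true;    // route this source through the spatializer plugin
        src.spatialBlend = 1.0f;  // fully 3D, so head tracking affects localization
        src.Play();
    }
}
```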

For more about these audio features, see the Learn section.

Download and Use Oculus Audio Tools

See the following documents for Oculus integrations with popular development tools.

Unity

Audio input and output automatically use the Rift microphone and headphones, unless the user has configured the Oculus app to use the Windows default audio devices. When the active audio devices change, playback cannot continue without a restart; the OVRManager.AudioOutChanged and AudioInChanged events fire when this happens so your application can detect the change and respond.
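A simple way to handle this is to subscribe to both events and prompt the user. This sketch assumes the OVRManager events named above are plain parameterless events; the restart prompt is illustrative:

```csharp
using UnityEngine;

// Sketch: react when the Rift's audio devices change.
public class AudioDeviceWatcher : MonoBehaviour
{
    void OnEnable()
    {
        OVRManager.AudioOutChanged += OnAudioDeviceChanged;
        OVRManager.AudioInChanged += OnAudioDeviceChanged;
    }

    void OnDisable()
    {
        OVRManager.AudioOutChanged -= OnAudioDeviceChanged;
        OVRManager.AudioInChanged -= OnAudioDeviceChanged;
    }

    void OnAudioDeviceChanged()
    {
        // Playback cannot continue on the new device without a restart,
        // so inform the user and shut down gracefully.
        Debug.LogWarning("Audio device changed; please restart the application.");
    }
}
```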

  • For instructions on using Unity and Wwise with Rift, see Rift Audio in the Native PC Developer Guide.

Audiokinetic Wwise

FMOD Studio

Avid Pro Tools

Various Windows DAWs (VST)

Learn More

Sound design and mixing are art forms, and VR is a new medium in which they are expressed. Whether you’re an aspiring sound designer or a veteran, VR presents many new challenges and inverts some of the common beliefs we’ve come to rely on when creating music and sound cues for games and traditional media.

Watch Brian Hook’s Introduction to VR Audio from Oculus Connect 2014: https://www.youtube.com/watch?v=kBBuuvEP5Z4

Watch Tom Smurdon and Brian Hook’s GDC 2015 talk about VR audio: https://www.youtube.com/watch?v=2RDV6D7jDVs

Learn more about audio propagation simulation in the Facebook Reality Labs blog post.

Read the Introduction to VR Audio white paper for key ideas and how to address them in VR, or see any of the additional topics:

  • Oculus Audio Features - A more in-depth description of some Oculus audio features.
  • Localization and the Human Auditory System - Describes how humans localize sound.
  • 3D Audio Spatialization - Describes spatialization and head-related transfer functions.
  • Listening Devices - Describes different listening devices and their advantages and disadvantages.
  • Environmental Modeling - Describes environmental modeling, including reverberation and reflections.
  • Sound Design for Spatialization - Examines how sound design must change to support spatialization, building on how humans place sounds in the world.
  • Mixing Scenes for Virtual Reality - Recommendations (with caveats) for mixing a VR scene, which is an art as well as a science.
  • VR Audio Glossary - Definitions of technical VR audio terms.

Oculus Audio Community

If you’re interested in learning more about Oculus VR audio or just want to chat with other audio-minded developers, drop by the Audio Developer Forums.