Add Camera Rig Using OVRCameraRig

The Oculus Integration package contains a prefab, OVRCameraRig, which provides the transform object to represent the Oculus tracking space. Under the tracking space, there is a custom VR camera, which replaces Unity’s conventional camera. OVRCameraRig also provides access to OVRManager, which is the main interface to the VR hardware.

The OVRCameraRig prefab contains a tracking space game object to fine-tune the relationship between the head tracking reference frame and your world. Under the tracking space object, you will find a center eye anchor, which is the main Unity camera, two eye anchor game objects, one for each eye, and left and right hand anchors for controllers.
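These anchors are exposed as transform properties on the OVRCameraRig component, so other scripts can read the tracked poses directly. A minimal sketch of reading them (the class name is hypothetical; it assumes the Oculus Integration package is imported and an OVRCameraRig is in the scene):

```csharp
using UnityEngine;

// Illustrative sketch: reads the anchor transforms exposed by OVRCameraRig.
public class AnchorInspector : MonoBehaviour
{
    void Start()
    {
        OVRCameraRig rig = FindObjectOfType<OVRCameraRig>();
        if (rig == null) return;

        // centerEyeAnchor is the transform of the main Unity camera.
        Debug.Log("Center eye: " + rig.centerEyeAnchor.position);
        // Hand anchors follow the tracked controllers automatically.
        Debug.Log("Left hand: " + rig.leftHandAnchor.position);
        Debug.Log("Right hand: " + rig.rightHandAnchor.position);
    }
}
```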

Understand Camera Behaviour

When you enable VR support in Unity, your head-mounted device automatically passes head and positional tracking data to Unity. This lets the camera’s position and orientation closely match the user’s position and orientation in the real world. The head-tracked pose values override the camera’s transform values, which means the camera always moves relative to its parent object.

In a typical first- or third-person setup, instead of having a stationary camera, you may want the camera to follow or track the player object. The player object can be a character in motion, such as an avatar, car, or a gun turret. To move the camera to follow the player object, you can either make the camera a child of the player object or have an object track the player, and the camera, in turn, follows that object. Based on your app design, you may want to create a script that references the player object and attach the script to OVRCameraRig.
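As a sketch of that last approach, a small follow script could be attached to OVRCameraRig and given a reference to the player object. The class name and fields below are hypothetical, not part of the Oculus Integration:

```csharp
using UnityEngine;

// Hypothetical example: attach to OVRCameraRig so the rig follows a
// player object assigned in the Inspector.
public class FollowPlayer : MonoBehaviour
{
    public Transform target;              // the player object to follow
    public Vector3 offset = Vector3.zero; // optional positional offset

    void LateUpdate()
    {
        if (target == null) return;
        // Move the rig, not the camera: head tracking overrides the
        // camera's own transform, so reposition its parent instead.
        transform.position = target.position + offset;
    }
}
```

Using LateUpdate() ensures the rig moves after the player object has finished its own movement for the frame, avoiding a one-frame lag.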

Add OVRCameraRig in the Scene

OVRCameraRig is a replacement for Unity’s main camera, which means you can safely delete Unity’s main camera from the Hierarchy view. The primary benefit of using OVRCameraRig is that it provides access to OVRManager, the main interface to the VR hardware. Before you add OVRCameraRig, make sure you have completed the prerequisites.


Add OVRCameraRig

  1. From the Hierarchy view, right-click Main Camera, and click Delete.
  2. In the Project view, expand the Assets > Oculus > VR > Prefabs folder.
  3. Drag and drop the OVRCameraRig prefab into the scene. You can also drag and drop it in the Hierarchy view.

Configure OVRCameraRig and OVRManager Settings

There are two main scripts attached to the OVRCameraRig prefab: OVRCameraRig.cs and OVRManager.cs. Together, these scripts provide settings for the camera, display, tracking, quality, and performance of your app.

In the Hierarchy view, select the OVRCameraRig prefab, and in the Inspector view, review the following settings under the OVRCameraRig.cs and OVRManager.cs scripts.

OVRCameraRig.cs Settings

OVRCameraRig.cs is a component that controls stereo rendering and head tracking. It maintains three child anchor transforms at the poses of the left and right eyes, as well as a virtual center eye that is halfway between them. It is the main interface between Unity and the cameras, and is attached to a prefab that makes it easy to add comfortable VR support to a scene.

Note: All camera control should be done through this component.

  • Use Per Eye Camera: Select this option to use separate cameras for left and right eyes.
  • Use Fixed Update For Tracking: Select this option to update all the tracked anchors in the FixedUpdate() method instead of the Update() method, to favor physics fidelity. However, if the fixed update rate doesn’t match the rendering frame rate (reported by OVRManager.display.appFramerate), the anchors may visibly judder.
  • Disable Eye Anchor Cameras: Select this option to disable the cameras on the eye anchors. In this case, the main camera of the game is used to provide the VR rendering and the tracking space anchors are updated to provide reference poses.

OVRManager.cs Settings

OVRManager.cs is the main interface to the VR hardware and is added to the OVRCameraRig prefab. It is a singleton that exposes the Oculus SDK to Unity, and includes helper functions that use the stored Oculus variables to help configure the camera behavior. It can be a part of any app object and should only be declared once.

Target Devices:

All apps that target Oculus Quest are automatically compatible with Oculus Quest 2. However, for best compatibility, when you query the headset type that the app is running on, Oculus returns Oculus Quest even if the headset is an Oculus Quest 2. If you want to identify the headset type precisely, select both Oculus Quest and Oculus Quest 2 as target devices. In that case, when you query the headset type, Oculus returns the exact headset that the app is running on. Based on the target headsets, Oculus automatically adds the <meta-data android:name="com.oculus.supportedDevices" android:value="quest" /> element for Oculus Quest, or the <meta-data android:name="com.oculus.supportedDevices" android:value="quest|quest2" /> element for both Oculus Quest and Oculus Quest 2, to the Android Manifest file. There is no need to update the Android Manifest file manually.

  • To select the target device, select the relevant checkbox.

When you create apps that target both Oculus Quest and Oculus Quest 2, you can check the headset type to optimize the app and improve the user experience. Call OVRManager.systemHeadsetType() to get the headset type that the app is running on; for example, it returns Oculus_Quest or Oculus_Quest_2 depending on the headset. The OVRManager.SystemHeadsetType enum lists the headset types that OVRManager.systemHeadsetType() can return.
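A sketch of such a check is below. Note that in recent versions of the Oculus Integration, the headset type is exposed as a static member returning an OVRPlugin.SystemHeadset value; verify the exact member name and signature against your SDK version:

```csharp
using UnityEngine;

// Sketch: branch on the headset type to tune quality per device.
// Assumes the Oculus Integration package is imported.
public class HeadsetCheck : MonoBehaviour
{
    void Start()
    {
        var headset = OVRManager.systemHeadsetType;
        switch (headset)
        {
            case OVRPlugin.SystemHeadset.Oculus_Quest:
                // e.g. choose lower-resolution assets
                break;
            case OVRPlugin.SystemHeadset.Oculus_Quest_2:
                // e.g. enable higher-quality rendering settings
                break;
        }
    }
}
```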

Performance and Quality:

  • Use Recommended MSAA Level: True, by default. Select this option to let OVRManager automatically choose the appropriate MSAA level based on the Oculus device. For example, for Oculus Quest, the MSAA level is set to 4x. Currently supported only for Unity’s built-in render pipeline.

    Note: For the Universal Render Pipeline (URP), you need to set the MSAA level to 4x manually. We are aware that URP does not set the MSAA level automatically, and we will announce the fix on the Release Notes page.

  • Monoscopic: If true, both eyes see the same image, rendered from the center eye pose, saving performance on low-end devices. We do not recommend using this setting as it doesn’t provide the correct experience in VR.

  • Enable Adaptive Resolution: Enable to let the app resolution scale down as GPU utilization exceeds 85%, and scale up as it falls below 85% (range 0.5–2.0; 1 = normal density). To minimize perceived artifacts from changing resolution, there is a minimum two-second delay between resolution changes.
  • Min Render Scale: Sets minimum bound for Adaptive Resolution (default value is 0.7).
  • Max Render Scale (Rift only): Sets maximum bound for Adaptive Resolution (default value is 1.0).
  • Head Pose Relative Offset Rotation: Sets the relative offset rotation of head poses.
  • Head Pose Relative Offset Translation: Sets the relative offset translation of head poses.
  • Profiler TCP Port: The TCP listening port of the Oculus Profiler Service, which is activated in debug or development builds. When the app is running in the editor or on a device, go to Tools > Oculus > Oculus Profiler Panel to view real-time system metrics.


  • Tracking Origin Type: Sets the tracking origin type.

    Eye Level tracks the position and orientation relative to the device’s position.

    Floor Level tracks the position and orientation relative to the floor, based on the user’s standing height as specified in the Oculus Configuration Utility.

  • Use Positional Tracking: When enabled, head tracking affects the position of the virtual cameras.
  • Use IPD in Positional Tracking: When enabled, the distance between the user’s eyes affects the position of each OVRCameraRig’s cameras.
  • Reset Tracker on Load: When enabled, each scene load resets the head pose. When disabled, subsequent scene loads do not reset the tracker, which keeps the tracker orientation consistent from scene to scene and keeps magnetometer settings intact.
  • Allow Recenter: Select this option to reset the pose when the user clicks the Reset View option from the universal menu. You should select this option for apps with a stationary position in the virtual world and allow the Reset View option to place the user back to a predefined location (such as a cockpit seat). Do not select this option if you have a locomotion system because resetting the view effectively teleports the user to potentially invalid locations.
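Several of these tracking settings are also exposed on the OVRManager singleton at runtime. As one sketch (the class name is hypothetical; verify the property names against your SDK version), the tracking origin can be switched from script:

```csharp
using UnityEngine;

// Sketch: set the tracking origin at runtime via the OVRManager
// singleton, which is part of the OVRCameraRig prefab.
public class TrackingOriginSetter : MonoBehaviour
{
    void Start()
    {
        if (OVRManager.instance != null)
        {
            // FloorLevel tracks relative to the floor; EyeLevel tracks
            // relative to the initial head position.
            OVRManager.instance.trackingOriginType =
                OVRManager.TrackingOrigin.FloorLevel;
        }
    }
}
```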

For Oculus Rift, OVRManager.display.RecenterPose() recenters the head pose and the tracked controller pose, if present (see OVRInput for more information on tracking controllers).

If Tracking Origin Type is set to Floor Level, OVRManager.display.RecenterPose() resets the x-, y-, and z-axis positions to origin. If it is set to Eye Level, the x-, y-, and z-axis positions are all reset to origin, with the y-value corresponding to the height calibration performed with the Oculus Configuration Utility. In both cases, the y rotation is reset to 0, but the x and z rotations are unchanged to maintain a consistent ground plane.
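A minimal sketch of triggering a recenter from script is below; the class name and button mapping are illustrative, not prescribed by the SDK:

```csharp
using UnityEngine;

// Illustrative sketch: recenter the head pose when a controller button
// is pressed. OVRInput.Button.One maps to the A button on a Touch
// controller.
public class RecenterOnButton : MonoBehaviour
{
    void Update()
    {
        if (OVRInput.GetDown(OVRInput.Button.One))
        {
            OVRManager.display.RecenterPose();
        }
    }
}
```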

  • Reorient HMD On Controller Recenter: Specifies whether recentering a controller also recenters the headset. When enabled, a controller recenter recenters the HMD as well; when disabled, the HMD is not recentered.
  • Late Controller Update: Select this option to update the pose of the controllers immediately before rendering for lower latency between real-world and virtual controller movement. If controller poses are used for simulation/physics, the position may be slightly behind the position used for rendering (~10ms). Any calculations done at simulation time may not exactly match the controller’s rendered position.


You can remaster your app by setting a specific color space at runtime for your Oculus device, to overcome the color variation that may occur when different color spaces are in use.

  • Select Enable Specific Color Gamut to set the specific color space. For more information about the available color gamut primaries, go to the Set Specific Color Space topic.

Quest Features:

There are certain settings that are applicable to Oculus Quest only.

  • Focus Aware: Select this option to allow users to access system UI without context switching away from the app. For more information about enabling focus awareness, go to the Enable Focus Awareness for System Overlays topic.
  • Hand Tracking Support: From the list, select the type of input affordance for your app. For example, Controllers only, Controllers and Hands, or Hands only. For more information about setting up hand tracking, go to the Set Up Hand Tracking topic.

Android Build Settings:

The shader stripping feature lets you skip unused shaders from compilation to significantly reduce the player build time. Select Skip Unneeded Shaders to enable shader stripping. For more information about understanding different tiers and stripping shaders, go to the Strip Unused Shaders topic.


  • Custom Security XML Path: If you don’t want Oculus to generate a security XML and instead use your own XML, specify the XML file path.
  • Disable Backups: Select this option to ensure private user information is not inadvertently exposed to unauthorized parties or insecure locations. It adds the allowBackup="false" flag in the AndroidManifest.xml file.
  • Enable NSC Configuration: Select this option to prevent the app or any embedded SDK from initiating cleartext HTTP connections and force the app to use HTTPS encryption.
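For reference, the Disable Backups option produces a manifest entry along these lines (a simplified fragment; the other application attributes and child elements are omitted here):

```xml
<!-- Simplified AndroidManifest.xml fragment: Disable Backups adds the
     allowBackup="false" attribute to the application element. -->
<application android:allowBackup="false">
    <!-- activities and other entries -->
</application>
```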

Mixed Reality Capture:

Mixed Reality Capture (MRC) places real-world objects in VR. In other words, it combines images from the real world with the virtual one. To enable mixed reality support, select Show Properties, and then select enableMixedReality. For more information about setting up mixed reality capture, go to the Unity Mixed Reality Capture guide.