The Oculus SDK is designed to be as easy to integrate as possible. This section outlines a basic Oculus integration with a C++ game engine or application.
We’ll discuss initializing LibOVR, HMD device enumeration, head tracking, frame timing, and rendering for the Rift.
Many of the code samples below are taken directly from the OculusRoomTiny demo source code (available in Oculus/LibOVR/Samples/OculusRoomTiny). OculusRoomTiny and OculusWorldDemo are great places to view sample integration code when in doubt about a particular system or feature.
To add Oculus support to a new application, do the following:
- Enumerate Oculus devices, create the ovrHmd object, and start sensor input.
- Integrate head tracking into your application’s view and movement code. This involves:
  - Reading data from the Rift sensors through ovrHmd_GetTrackingState or ovrHmd_GetEyePose.
  - Applying Rift orientation and position to the camera view, while combining it with other application controls.
  - Modifying movement and gameplay to take head orientation into account.
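As a starting point, device creation and per-frame tracking reads can be sketched as follows. This is a minimal, hedged sketch based on the LibOVR 0.4-style C API (OVR_CAPI.h); calls such as ovr_Initialize, ovrHmd_Create, and ovrHmd_ConfigureTracking come from that API, and error handling is trimmed for brevity.

```cpp
// Sketch: create the HMD object and read head orientation.
// Assumes the LibOVR 0.4-style C API; error handling trimmed.
#include "OVR_CAPI.h"

int main()
{
    ovr_Initialize();

    ovrHmd hmd = ovrHmd_Create(0);          // first detected device
    if (!hmd)
        return -1;

    // Request orientation and position tracking with magnetic yaw correction.
    ovrHmd_ConfigureTracking(hmd,
        ovrTrackingCap_Orientation | ovrTrackingCap_MagYawCorrection |
        ovrTrackingCap_Position, 0);

    // Per frame: query the predicted head pose and feed it to the camera.
    ovrTrackingState ts = ovrHmd_GetTrackingState(hmd, ovr_GetTimeInSeconds());
    if (ts.StatusFlags & (ovrStatus_OrientationTracked | ovrStatus_PositionTracked))
    {
        ovrPosef pose = ts.HeadPose.ThePose;
        // Combine pose.Orientation / pose.Position with your own camera
        // transform and application movement controls here.
    }

    ovrHmd_Destroy(hmd);
    ovr_Shutdown();
    return 0;
}
```

OculusRoomTiny follows essentially this flow; see its source for a complete version with error handling.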
- Initialize rendering for the HMD.
  - Select rendering parameters such as resolution and field of view based on HMD capabilities.
  - For SDK rendered distortion, configure rendering based on system rendering API pointers and viewports.
  - For client rendered distortion, create the necessary distortion mesh and shader resources.
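Selecting the render target size from HMD capabilities can be sketched as below. This is a hedged fragment assuming the LibOVR 0.4-style API and a single shared texture holding both eye views side by side (the layout OculusRoomTiny uses); `hmd` is an already-created ovrHmd.

```cpp
// Sketch: size the eye render target from the HMD's recommended FOV.
// Assumes LibOVR 0.4-style API; hmd is an already-created ovrHmd.
ovrSizei recommendedLeft  = ovrHmd_GetFovTextureSize(hmd, ovrEye_Left,
                                hmd->DefaultEyeFov[0], 1.0f);
ovrSizei recommendedRight = ovrHmd_GetFovTextureSize(hmd, ovrEye_Right,
                                hmd->DefaultEyeFov[1], 1.0f);

// Both eyes share one texture, packed side by side.
ovrSizei renderTargetSize;
renderTargetSize.w = recommendedLeft.w + recommendedRight.w;
renderTargetSize.h = (recommendedLeft.h > recommendedRight.h) ?
                      recommendedLeft.h : recommendedRight.h;

// Create your API-specific render target (D3D/GL texture) at this size,
// then describe it to the SDK for distortion rendering.
```

The 1.0f argument is the pixel density scale; lowering it trades visual quality for fill-rate performance.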
- Modify application frame rendering to integrate HMD support and proper frame timing:
  - Make sure your engine supports multiple rendering views.
  - Add frame timing logic into the render loop to ensure that motion prediction and timewarp work correctly.
  - Render each eye’s view to intermediate render targets.
  - Apply distortion correction to the render target views to correct for the optical characteristics of the lenses (only necessary for client rendered distortion).
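For SDK rendered distortion, the per-frame flow above can be sketched as follows. This is a hedged outline assuming the LibOVR 0.4-style API, run after ovrHmd_ConfigureRendering has succeeded; `eyeTexture` (the ovrTexture pair describing the render target) and the scene rendering itself are assumed to be set up elsewhere.

```cpp
// Sketch: per-frame loop for SDK rendered distortion.
// Assumes LibOVR 0.4-style API; eyeTexture setup omitted.
ovrHmd_BeginFrame(hmd, 0);                       // starts SDK frame timing

ovrPosef eyePose[2];
for (int i = 0; i < ovrEye_Count; i++)
{
    ovrEyeType eye = hmd->EyeRenderOrder[i];
    eyePose[eye] = ovrHmd_GetEyePose(hmd, eye);  // predicted pose for this eye

    // Build the view matrix from eyePose[eye] and render the scene into
    // this eye's viewport of the shared render target.
}

// Submits the frame; the SDK applies distortion, timewarp, and presents.
ovrHmd_EndFrame(hmd, eyePose, eyeTexture);
```

Querying each eye pose between BeginFrame and EndFrame is what lets the SDK's motion prediction and timewarp use the freshest timing information.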
- Customize UI screens to work well inside of the headset.