Multi-Threaded Engine Support

Modern applications, particularly video game engines, often distribute processing over multiple threads.

When integrating the Oculus SDK, make sure to call the API functions from the appropriate threads and to manage timing correctly for accurate HMD pose prediction. This section describes two multi-threaded scenarios that you can use. Hopefully the insight provided will enable you to handle these issues correctly, even if your application’s multi-threaded approach differs from those presented. As always, if you require guidance, please visit http://developer.oculus.com.

One of the factors that dictates API policy is our use of the application rendering API inside of the SDK (e.g., Direct3D). Generally, rendering APIs impose their own multi-threading restrictions. For example, core rendering functions typically must be called from the same thread that was used to create the main rendering device. These rendering API limitations, in turn, restrict how the Oculus API can be used.

These rules apply:

  • All tracking interface functions are thread-safe, allowing the tracking state to be sampled from different threads.
  • All rendering functions, including the configure and frame functions, are not thread-safe. You can use ConfigureRendering on one thread and handle frames on another thread, but you must perform explicit synchronization because functions that depend on configured state are not reentrant.
  • All of the following calls must be done on the render thread (the thread used by the application to create the main rendering device):

    • ovrHmd_BeginFrame (or ovrHmd_BeginFrameTiming)
    • ovrHmd_EndFrame
    • ovrHmd_GetEyePose
    • ovrHmd_GetEyeTimewarpMatrices
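
For illustration, a minimal per-frame sequence on the render thread for SDK distortion rendering might look like the following sketch. RenderFrameOnRenderThread is a hypothetical helper, and it assumes that hmd has been created and that eyeTexture[2] has already been configured:

void RenderFrameOnRenderThread(ovrHmd hmd, const ovrTexture eyeTexture[2])
{
    // All of these calls must be made on the thread that created the main
    // rendering device. The returned timing can be used for pose prediction.
    ovrFrameTiming frameTiming = ovrHmd_BeginFrame(hmd, 0);

    ovrPosef renderPose[2];
    for (int i = 0; i < ovrEye_Count; i++)
    {
        ovrEyeType eye = hmd->EyeRenderOrder[i];

        // Sample the predicted pose for this eye and render the scene with it.
        renderPose[eye] = ovrHmd_GetEyePose(hmd, eye);
        // ... render the scene into the texture for this eye ...
    }

    // Pass back the poses that were actually used for rendering so that
    // Timewarp can correct for any late changes in head pose.
    ovrHmd_EndFrame(hmd, renderPose, eyeTexture);
}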

Update and Render on Different Threads

It is common for video game engines to separate the actions of updating the state of the world and rendering a view of it.

In addition, executing these on separate threads (mapped onto different cores) allows them to execute concurrently and use more of the available CPU resources. Typically, the update operation executes AI logic and player character animation which, in VR, requires the current headset pose. The rendering operation needs to determine the left and right eye view transforms, which also require the head pose. The main difference between the two is the level of accuracy required: the AI logic only needs a moderately accurate head pose, but for rendering it is critical that the head pose is very accurate and that the image displayed on the screen matches the actual head pose at display time as closely as possible. The SDK employs two techniques to ensure this. The first technique is prediction, where the application requests the predicted head pose at a future point in time; the ovrFrameTiming struct provides accurate timing information for this purpose. The second technique is Timewarp, where we wait until shortly before the next frame is presented to the display, perform another head pose reading, and re-project the rendered image to account for any changes in predicted head pose since the pose was read during rendering.

Generally, the closer we are to the time that the frame is displayed, the better the prediction of head pose at that time will be. It is perfectly fine to read the head pose several times during the render operation, each time passing in the same future display time for the frame (obtained from ovrHmd_GetFrameTiming), and each time receiving a more accurate estimate of the future head pose. However, for Timewarp to function correctly, you must pass the actual head pose that was used to determine the view matrices when you call ovrHmd_EndFrame (for SDK distortion rendering) or ovrHmd_GetEyeTimewarpMatrices (for client distortion rendering).
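
As a minimal sketch of the prediction technique, using the same ovrFrameTiming and ovrTrackingState fields that appear in the code later in this section (passing 0 as the frame index is an assumption for an application that does not maintain its own index):

// Ask the SDK when the frame currently being rendered will be displayed,
// then sample the head pose predicted for that time. Repeating this later in
// the frame with the same target time yields a more accurate estimate.
ovrFrameTiming frameTiming = ovrHmd_GetFrameTiming(hmd, 0);
ovrTrackingState ts = ovrHmd_GetTrackingState(hmd, frameTiming.ScanoutMidpointSeconds);
ovrPosef predictedPose = ts.HeadPose.ThePose;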

When obtaining the head pose for the update operation, it is usually sufficient to get the current head pose (rather than the predicted one). This can be obtained with:

ovrTrackingState ts  = ovrHmd_GetTrackingState(hmd, ovr_GetTimeInSeconds());

The next section describes a scenario that uses the final head pose to render from a non-render thread, which requires prediction.

Render on Different Threads

In some engines, render processing is distributed across more than one thread.

For example, one thread may perform culling and render setup for each object in the scene (we'll call this the “main” thread), while a second thread makes the actual D3D or OpenGL API calls (we'll call this the “render” thread). The difference between this and the former scenario is that the non-render thread needs to obtain accurate predictions of head pose.

To do this, it needs an accurate estimate of the time until the frame being processed appears on the screen. Furthermore, due to the asynchronous nature of this approach, while the render thread is rendering a frame, the main thread might be processing the next frame. As a result, the application must associate the head poses that were obtained in the main thread with the frame, such that when that frame is being rendered by the render thread, the application is able to pass the correct head pose transforms into ovrHmd_EndFrame or ovrHmd_GetEyeTimewarpMatrices. For this purpose, we introduce the concept of a frameIndex which is created by the application, incremented each frame, and passed into several of the API functions.

Essentially, there are three additional things to consider:

  1. The main thread needs to assign a frame index to the current frame being processed for rendering. This index is passed to ovrHmd_GetFrameTiming to obtain the correct timing information for pose prediction.
  2. The main thread should call the thread-safe function ovrHmd_GetTrackingState with the predicted display time for that frame.
  3. When the rendering commands generated on the main thread are executed on the render thread, pass in the corresponding value of frameIndex when calling ovrHmd_BeginFrame. Similarly, when calling ovrHmd_EndFrame, pass in the actual pose transform used when that frame was processed on the main thread (from the call to ovrHmd_GetTrackingState).

The following code illustrates this in more detail:

void MainThreadProcessing()
{
    frameIndex++;
        
    // Ask the API for the times when this frame is expected to be displayed. 
    ovrFrameTiming frameTiming = ovrHmd_GetFrameTiming(hmd, frameIndex);

    // Get the corresponding predicted pose state.  
    ovrTrackingState state = ovrHmd_GetTrackingState(hmd, frameTiming.ScanoutMidpointSeconds);

    ovrPosef pose = state.HeadPose.ThePose;

    SetFrameHMDData(frameIndex, pose);

    // Do render pre-processing for this frame. 

    ...
        
}

void RenderThreadProcessing()
{

    int frameIndex;
    ovrPosef pose;
    
    GetFrameHMDData(&frameIndex, &pose);
    
    // Call begin frame and pass in frameIndex.
    ovrFrameTiming hmdFrameTiming = ovrHmd_BeginFrame(hmd, frameIndex);

    // Execute actual rendering to eye textures.
    ovrTexture eyeTexture[2];

    ...
    
    // Both eyes use the head pose that the main thread sampled for this frame.
    ovrPosef renderPose[2] = {pose, pose};

    ovrHmd_EndFrame(hmd, renderPose, eyeTexture);
}
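
SetFrameHMDData and GetFrameHMDData are application-defined helpers rather than SDK functions. A minimal sketch of one possible implementation, assuming only one frame of pose data is in flight at a time and that std::mutex is an acceptable synchronization primitive for the engine:

#include <mutex>

struct FrameHMDData
{
    int      frameIndex;
    ovrPosef pose;
};

static FrameHMDData s_frameData;
static std::mutex   s_frameDataMutex;

// Called from the main thread after the pose for the frame has been sampled.
void SetFrameHMDData(int frameIndex, const ovrPosef& pose)
{
    std::lock_guard<std::mutex> lock(s_frameDataMutex);
    s_frameData.frameIndex = frameIndex;
    s_frameData.pose       = pose;
}

// Called from the render thread before ovrHmd_BeginFrame.
void GetFrameHMDData(int* frameIndex, ovrPosef* pose)
{
    std::lock_guard<std::mutex> lock(s_frameDataMutex);
    *frameIndex = s_frameData.frameIndex;
    *pose       = s_frameData.pose;
}

An engine that allows more than one frame to be in flight at once would typically replace this single slot with a small queue or ring buffer keyed by frameIndex.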