A number of changes were made to the API since the 0.5 release.
This Oculus SDK 0.6.0.1 release introduces queue ahead. Queue ahead improves CPU and GPU parallelism and increases the amount of time that the GPU has to process frames. For more information, see Adaptive Queue Ahead.
The Oculus SDK 0.6 release introduces the compositor, a separate process for applying distortion and displaying scenes, along with other major changes.
There are three major changes to Oculus SDK 0.6:
The compositor service moves distortion rendering from the application process to the OVRServer process using texture sets that are shared between the two processes. A texture set is basically a swap chain, with buffers rotated to allow game rendering to proceed while the current frame is distorted and displayed.
Layer support allows multiple independent application render targets to be sent separately to the HMD. For example, you might render a heads-up display, background, and game space each in its own render target. Each render target is a layer, and the layers are combined by the compositor (rather than the application) right before distortion and display. Each layer may have a different size, resolution, and update rate.
The API simplification is a move towards the final API, which primarily removes support for application-based distortion rendering. For more information on each of these, see the Developer Guide for this SDK release. API changes are discussed briefly below.
The following are major new features for the Oculus SDK and runtime:
Added two samples:
The following are major new features for Unity:
This release represents a major revision of the API. These changes significantly simplify the API while retaining essential functionality. Changes to the API include:
The following bugs were fixed since 0.5:
The following are known issues:
Prior to Oculus SDK 0.6, the Oculus SDK relied on the game engine to create system textures for eye rendering. To use the SDK, developers stored the API-specific texture pointers in the ovrTexture structure and passed them into ovr_EndFrame for distortion and display on the Rift. After EndFrame returned, a new frame was rendered into the texture, repeating the process. Oculus SDK 0.6 changes this in two major ways.
The first is by introducing the concept of ovrSwapTextureSet, a collection of textures that are used in round-robin fashion for rendering. A texture set is basically a swap chain for rendering to the Rift, with buffers rotated to allow the game rendering to proceed while the current frame is distorted and displayed. Unlike textures in earlier SDKs, ovrSwapTextureSet and its internal textures must be created by calling ovr_CreateSwapTextureSetD3D11 or ovr_CreateSwapTextureSetGL. Implementing these functions in the SDK allows us to support synchronization and properly share texture memory with the compositor process. For more details on texture sets, we advise reading the “New Features” section on them.
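For reference, the following is a minimal sketch of the OpenGL path, assuming the 0.6 signature of ovr_CreateSwapTextureSetGL; the session variable, buffer size, and sRGB format choice here are illustrative assumptions:
// Sketch: ask the SDK to create a texture set it can share with the compositor.
ovrSwapTextureSet* textureSet = nullptr;
ovrResult result = ovr_CreateSwapTextureSetGL(session, GL_SRGB8_ALPHA8, size.w, size.h, &textureSet);
if (OVR_SUCCESS(result))
{
    // Each frame, advance CurrentIndex and render into
    // textureSet->Textures[textureSet->CurrentIndex], as in the D3D11 path below.
}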
The second is with the introduction of layers. Instead of a single pair of eye-buffers holding all the visual data in the scene, the application can have multiple layers of different types overlaid on each other. Layers are a large change to the API, and we advise reading the “New Features” section on them for more details. This part of the guide gives only the bare minimum instructions to port an existing single-layer app to the new API.
With the introduction of texture sets and layers, you need to make several changes to how your application handles eye buffer textures in the game engine.
Previously, the app would have used the graphics API's standard texture creation calls to make render targets for the eye buffers - either one render target for each eye, or a single shared render target with the eyes side by side on it. Fundamentally, the same process still happens, but it uses the ovr_CreateSwapTextureSet function for your API instead. So the code might previously have been similar to the following:
D3D11_TEXTURE2D_DESC dsDesc;
dsDesc.Width = size.w;
dsDesc.Height = size.h;
dsDesc.MipLevels = 1;
dsDesc.ArraySize = 1;
dsDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
dsDesc.SampleDesc.Count = 1;
dsDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
DIRECTX.Device->CreateTexture2D(&dsDesc, NULL, &(eye->Tex));
DIRECTX.Device->CreateShaderResourceView(eye->Tex, NULL, &(eye->TexSv));
DIRECTX.Device->CreateRenderTargetView(eye->Tex, NULL, &(eye->TexRtv));
Instead, the replacement code should be similar to the following:
D3D11_TEXTURE2D_DESC dsDesc;
dsDesc.Width = size.w;
dsDesc.Height = size.h;
dsDesc.MipLevels = 1;
dsDesc.ArraySize = 1;
dsDesc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
dsDesc.SampleDesc.Count = 1;
dsDesc.BindFlags = D3D11_BIND_SHADER_RESOURCE | D3D11_BIND_RENDER_TARGET;
// The SDK creates the texture set so that it can share the memory with the compositor process.
ovr_CreateSwapTextureSetD3D11(session, DIRECTX.Device, &dsDesc, &(eyeBuf->TextureSet));
// The application still creates and tracks a render target view for each texture in the set.
for (int i = 0; i < eyeBuf->TextureSet->TextureCount; ++i)
{
    ovrD3D11Texture* tex = (ovrD3D11Texture*)&(eyeBuf->TextureSet->Textures[i]);
    DIRECTX.Device->CreateRenderTargetView(tex->D3D11.pTexture, NULL, &(eyeBuf->TexRtv[i]));
}
The application must still create and track the RenderTargetViews for the textures inside the texture sets - the SDK does not do this automatically (not all texture sets need to be render targets). The SDK does create ShaderResourceViews for its own use.
Texture sets cannot be multisampled - this is an unfortunate restriction of the way the OS treats these textures. If you wish to use MSAA eyebuffers, you must create the MSAA eyebuffers yourself as before, then create matching non-MSAA texture sets, and have each frame resolve the MSAA eyebuffer target into the respective texture set. See the OculusRoomTiny (MSAA) sample app for more information.
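As a rough sketch, the per-frame resolve might look like the following; msaaEyeTexture and DIRECTX.Context are assumed application-side names for the multisampled render target and the D3D11 device context:
// Resolve the app-owned MSAA eye buffer into the current (non-MSAA)
// texture set buffer before the frame is submitted.
ovrSwapTextureSet* sts = eyeBuf->TextureSet;
sts->CurrentIndex = (sts->CurrentIndex + 1) % sts->TextureCount;
ID3D11Texture2D* dst = ((ovrD3D11Texture&)sts->Textures[sts->CurrentIndex]).D3D11.pTexture;
DIRECTX.Context->ResolveSubresource(dst, 0, msaaEyeTexture, 0, DXGI_FORMAT_B8G8R8A8_UNORM);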
Before shutting down the HMD using ovr_Destroy() and ovr_Shutdown(), make sure to destroy the texture sets using ovr_DestroySwapTextureSet.
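A minimal teardown sketch, assuming the session and eyeBuf names from the creation example above:
// Destroy texture sets while the session is still valid, then tear down
// the session and the SDK.
ovr_DestroySwapTextureSet(session, eyeBuf->TextureSet);
ovr_Destroy(session);
ovr_Shutdown();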
Scene rendering would previously just render to the eyebuffers created above. A texture set is a series of textures, effectively in a swap chain, so a little more work is now required: each frame, scene rendering must advance the set's CurrentIndex and then render to the texture at that index, as shown below.
So previously, for each eye:
DIRECTX.SetAndClearRenderTarget(pEyeRenderTexture[eye]->TexRtv, pEyeDepthBuffer[eye]);
DIRECTX.SetViewport(Recti(eyeRenderViewport[eye]));
The new code looks more like:
ovrSwapTextureSet* sts = pEyeRenderTexture[eye]->TextureSet;
sts->CurrentIndex = (sts->CurrentIndex + 1) % sts->TextureCount;
int texIndex = sts->CurrentIndex;
DIRECTX.SetAndClearRenderTarget(pEyeRenderTexture[eye]->TexRtv[texIndex], pEyeDepthBuffer[eye]);
DIRECTX.SetViewport(Recti(eyeRenderViewport[eye]));
The game then submits the frame by calling ovr_SubmitFrame and passing in the texture set inside a layer; this replaces the older ovr_EndFrame function, which took two raw ovr*Texture structures. The layer type that matches the previous eye-buffer behavior is the “EyeFov” layer type - that is, an eyebuffer with a supplied FOV, viewport, and pose. Additionally, ovr_SubmitFrame requires a few more pieces of information from the app that are now explicit instead of implicit. Making them explicit allows them to be dynamically adjusted and supplied separately for each layer. The new state required - the color texture set, viewport, FOV, and render pose for each eye - appears in the code below.
So previously the code read:
ovrD3D11Texture eyeTexture[2];
for (int eye = 0; eye < 2; eye++)
{
    eyeTexture[eye].D3D11.Header.API = ovrRenderAPI_D3D11;
    eyeTexture[eye].D3D11.Header.TextureSize = pEyeRenderTexture[eye]->Size;
    eyeTexture[eye].D3D11.Header.RenderViewport = eyeRenderViewport[eye];
    eyeTexture[eye].D3D11.pTexture = pEyeRenderTexture[eye]->Tex;
    eyeTexture[eye].D3D11.pSRView = pEyeRenderTexture[eye]->TexSv;
}
ovr_EndFrame(HMD, EyeRenderPose, &eyeTexture[0].Texture);
This is replaced with the following.
ovrLayerEyeFov ld;
ld.Header.Type = ovrLayerType_EyeFov;
ld.Header.Flags = 0;
for (int eye = 0; eye < 2; eye++)
{
    ld.ColorTexture[eye] = pEyeRenderTexture[eye]->TextureSet;
    ld.Viewport[eye] = eyeRenderViewport[eye];
    ld.Fov[eye] = HMD->DefaultEyeFov[eye];
    ld.RenderPose[eye] = EyeRenderPose[eye];
}
ovrLayerHeader* layers = &ld.Header;
ovrResult result = ovr_SubmitFrame(HMD, 0, nullptr, &layers, 1);
The slightly odd-looking indirection through the variable “layers” is because this argument to ovr_SubmitFrame would normally be an array of pointers to each of the visible layers. Since there is only one layer in this case, it's not an array of pointers, just a pointer.
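For illustration, here is a minimal sketch of what a two-layer submission might look like; hudLayer is a hypothetical second layer (an assumed ovrLayerQuad for a HUD) prepared elsewhere:
// Hypothetical two-layer submission: the argument is now a real array of
// pointers to each visible layer's header.
ovrLayerHeader* layerList[2] = { &ld.Header, &hudLayer.Header };
ovrResult result = ovr_SubmitFrame(HMD, 0, nullptr, layerList, 2);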
Before you begin migration, make sure to do the following:
In this release, there are significant changes to the game loop. For example, the ovr_BeginFrame function is removed and ovr_EndFrame is replaced by ovr_SubmitFrame. To update your game loop:
Replace calls to ovr_GetEyePoses(..) with ovr_CalcEyePoses(..):
ovrTrackingState state;
ovr_GetEyePoses(m_hmd, frameIndex, m_offsets, m_poses, &state);
becomes:
ovrFrameTiming timing = ovr_GetFrameTiming(m_hmd, frameIndex);
ovrTrackingState state = ovr_GetTrackingState(m_hmd, timing.DisplayMidpointSeconds);
ovr_CalcEyePoses(state.HeadPose.ThePose, m_offsets, m_poses);
Replace calls to ovr_ConfigureRendering(..) with ovr_GetRenderDesc(..) as described above:
ovrBool success = ovr_ConfigureRendering(m_hmd, &apiConfig, distortionCaps, m_fov, desc);
becomes:
for (int i = 0; i < ovrEye_Count; ++i)
    desc[i] = ovr_GetRenderDesc(m_hmd, (ovrEyeType)i, m_fov[i]);
Swap the target texture each frame. Instead of rendering to the same texture or pair of textures each frame, you need to advance to the next texture in the ovrSwapTextureSet:
sts->CurrentIndex = (sts->CurrentIndex + 1) % sts->TextureCount;
camera->SetRenderTarget(((ovrD3D11Texture&)sts->Textures[sts->CurrentIndex]).D3D11.pTexture);
Replace calls to ovr_EndFrame(..) with ovr_SubmitFrame(..):
ovr_EndFrame(m_hmd, poses, textures);
becomes:
ovrViewScaleDesc viewScaleDesc;
viewScaleDesc.HmdSpaceToWorldScaleInMeters = 1.0f;
ovrLayerEyeFov ld;
ld.Header.Type = ovrLayerType_EyeFov;
ld.Header.Flags = 0;
for (int eye = 0; eye < 2; eye++)
{
    viewScaleDesc.HmdToEyeViewOffset[eye] = m_offsets[eye];
    ld.ColorTexture[eye] = m_texture[eye];
    ld.Viewport[eye] = m_viewport[eye];
    ld.Fov[eye] = m_fov[eye];
    ld.RenderPose[eye] = m_poses[eye];
}
ovrLayerHeader* layers = &ld.Header;
ovr_SubmitFrame(m_hmd, frameIndex, &viewScaleDesc, &layers, 1);
Please refer to the OculusRoomTiny source code for an example of how ovrSwapTextureSet can be used to submit frames in the updated game loop.
On success, ovr_SubmitFrame can return a couple of different values. ovrSuccess means distortion completed successfully and was displayed to the HMD. ovrSuccess_NotVisible means the frame submission succeeded, but what was rendered was not visible on the HMD because another VR app has focus. In this case, the application should skip rendering and resubmit the same frame until ovr_SubmitFrame returns ovrSuccess rather than ovrSuccess_NotVisible.
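A sketch of that loop, reusing the layer setup from the previous step:
ovrResult result = ovr_SubmitFrame(m_hmd, frameIndex, &viewScaleDesc, &layers, 1);
if (result == ovrSuccess_NotVisible)
{
    // Another VR app has focus: skip scene rendering this iteration and
    // resubmit the same layers until ovrSuccess is returned.
}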
The 0.6 release simplifies the PC SDK, so you can remove many functions that are no longer needed. To remove functions:
Now that you have finished updating your code, you are ready to test the results. To test the results: