ovr_WaitToBeginFrame, ovr_BeginFrame, and ovr_EndFrame. We cover this process in detail in this section.

The HmdToEyePose member of the ovrEyeRenderDesc struct is an ovrPosef, a full 6-DOF transform. This means that in HMDs with canted displays (where displays are not parallel to one another), the rendering cameras may be rotated and translated away from the HMD pose. As such, the forward-facing direction of the user is dictated by the Z-axis of the HMD pose, while the eye poses might have Z-axes that point in different directions.

The SDK reports each eye's pose relative to the HMD pose in ovrEyeRenderDesc::HmdToEyePose, and such that the distance between them is the same as the distance between the eyes, or the interpupillary distance (IPD).
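As a minimal sketch (not taken from the SDK samples; it assumes an initialized ovrSession named session), an application can inspect the full eye transform like this:

ovrHmdDesc hmdDesc = ovr_GetHmdDesc(session);
ovrEyeRenderDesc eyeDesc = ovr_GetRenderDesc(session, ovrEye_Left, hmdDesc.DefaultEyeFov[0]);
ovrPosef hmdToEye = eyeDesc.HmdToEyePose;
// On HMDs with canted displays, hmdToEye.Orientation may not be identity,
// so treat HmdToEyePose as a full rotation + translation rather than a simple offset.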
Initialization and setup:

- Call ovr_Create to create an ovrSession object for the headset, as was described earlier.
- Retrieve rendering parameters, such as resolution and field of view, from the ovrHmdDesc data returned by ovr_GetHmdDesc.
- Allocate ovrTextureSwapChain objects, used to represent eye buffers, in an API-specific way: call ovr_CreateTextureSwapChainDX for either Direct3D 11 or 12, ovr_CreateTextureSwapChainGL for OpenGL, or ovr_CreateTextureSwapChainVk for Vulkan. Note: For Vulkan, when creating the swapchain that presents the mirror window, use VK_PRESENT_MODE_IMMEDIATE_KHR for AMD and VK_PRESENT_MODE_MAILBOX_KHR for NVIDIA. This prevents an issue where the loop waits for vsync on the main monitor before rendering the next frame, which introduces latency and degrades performance.

Rendering loop, for each frame:

- Use ovr_GetTrackingState and ovr_CalcEyePoses to compute eye poses needed for view rendering based on frame timing information.
- Render each eye view into the current texture of its texture swap chain. The current texture is retrieved with ovr_GetTextureSwapChainCurrentIndex and ovr_GetTextureSwapChainBufferDX, ovr_GetTextureSwapChainBufferGL, or ovr_GetTextureSwapChainBufferVk. After rendering to the texture is complete, the application must call ovr_CommitTextureSwapChain.
- Call ovr_WaitToBeginFrame, and then when your application is ready to begin rendering the frame, call ovr_BeginFrame. When your application is ready to submit the frame, call ovr_EndFrame, passing the swap texture set(s) from the previous step within an ovrLayerEyeFov structure. Although a single layer is required to submit a frame, you can use multiple layers and layer types for advanced rendering. ovr_EndFrame passes layer textures to the compositor, which handles distortion, timewarp, and GPU synchronization before presenting the frame to the headset. Note that the combination of ovr_WaitToBeginFrame, ovr_BeginFrame, and ovr_EndFrame enables you to implement performance optimization techniques in multi-threaded environments, for example by splitting apart and overlapping the processing of multiple frames at the same time. In previous releases, these three functions were combined into ovr_SubmitFrame, but that call has now been deprecated; please use ovr_WaitToBeginFrame, ovr_BeginFrame, and ovr_EndFrame instead.

Shutdown:

- Call ovr_DestroyTextureSwapChain to destroy swap texture buffers. Call ovr_DestroyMirrorTexture to destroy a mirror texture. To destroy the ovrSession object, call ovr_Destroy.

Initially, you determine the rendering FOV and allocate the required ovrTextureSwapChain. The following code shows how the required texture size can be computed:

// Configure Stereo settings.
ovrHmdDesc hmdDesc = ovr_GetHmdDesc(session);
Sizei recommendedTex0Size = ovr_GetFovTextureSize(session, ovrEye_Left,
                                                  hmdDesc.DefaultEyeFov[0], 1.0f);
Sizei recommendedTex1Size = ovr_GetFovTextureSize(session, ovrEye_Right,
                                                  hmdDesc.DefaultEyeFov[1], 1.0f);
Sizei bufferSize;
bufferSize.w = recommendedTex0Size.w + recommendedTex1Size.w;
bufferSize.h = max(recommendedTex0Size.h, recommendedTex1Size.h);
This example uses the recommended FOV for each eye (obtained from hmdDesc.DefaultEyeFov). The function ovr_GetFovTextureSize computes the desired texture size for each eye based on these parameters.

For Vulkan, call ovr_GetSessionPhysicalDeviceVk to get the current physical device matching the LUID. Then, create a VkDevice associated with the returned physical device.

For an example, see Win32_VulkanAppUtil.h, which comes with the OculusRoomTiny_Advanced sample app that ships with the Oculus SDK. The device extensions to enable are selected as follows, with a reduced list for AMD GPUs:

static const uint32_t AMDVendorId = 0x1002;
isAMD = (gpuProps.vendorID == AMDVendorId);
static const char* deviceExtensions[] =
{
VK_KHR_SWAPCHAIN_EXTENSION_NAME,
VK_KHX_EXTERNAL_MEMORY_EXTENSION_NAME,
#if defined(VK_USE_PLATFORM_WIN32_KHR)
VK_KHX_EXTERNAL_MEMORY_WIN32_EXTENSION_NAME,
#endif
};
static const char* deviceExtensionsAMD[] =
{
VK_KHR_SWAPCHAIN_EXTENSION_NAME
};
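As a rough sketch of the device-selection step described above (assuming a VkInstance named instance has already been created, luid is the ovrGraphicsLuid returned by ovr_Create, queueInfo is a VkDeviceQueueCreateInfo set up elsewhere, and isAMD is computed as in the snippet above):

VkPhysicalDevice physicalDevice = VK_NULL_HANDLE;
if (!OVR_SUCCESS(ovr_GetSessionPhysicalDeviceVk(session, luid, instance, &physicalDevice)))
    return false;

// gpuProps used for the isAMD check above can be queried from the returned device.
VkPhysicalDeviceProperties gpuProps;
vkGetPhysicalDeviceProperties(physicalDevice, &gpuProps);

VkDeviceCreateInfo deviceInfo = {};
deviceInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
deviceInfo.queueCreateInfoCount = 1;
deviceInfo.pQueueCreateInfos = &queueInfo;
deviceInfo.enabledExtensionCount = isAMD
    ? (uint32_t)(sizeof(deviceExtensionsAMD) / sizeof(deviceExtensionsAMD[0]))
    : (uint32_t)(sizeof(deviceExtensions) / sizeof(deviceExtensions[0]));
deviceInfo.ppEnabledExtensionNames = isAMD ? deviceExtensionsAMD : deviceExtensions;

VkDevice device = VK_NULL_HANDLE;
if (vkCreateDevice(physicalDevice, &deviceInfo, nullptr, &device) != VK_SUCCESS)
    return false;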
Then call ovr_SetSynchronizationQueueVk to identify the queue that the application uses to submit its rendering work.

Call ovr_CreateTextureSwapChainGL, ovr_CreateTextureSwapChainDX, or ovr_CreateTextureSwapChainVk to allocate the texture swap chains in an API-specific way. The following example shows texture swap chain creation and access using OpenGL:

ovrTextureSwapChain textureSwapChain = 0;
ovrTextureSwapChainDesc desc = {};
desc.Type = ovrTexture_2D;
desc.ArraySize = 1;
desc.Format = OVR_FORMAT_R8G8B8A8_UNORM_SRGB;
desc.Width = bufferSize.w;
desc.Height = bufferSize.h;
desc.MipLevels = 1;
desc.SampleCount = 1;
desc.StaticImage = ovrFalse;
if (ovr_CreateTextureSwapChainGL(session, &desc, &textureSwapChain) == ovrSuccess)
{
// Sample texture access:
int texId;
ovr_GetTextureSwapChainBufferGL(session, textureSwapChain, 0, &texId);
glBindTexture(GL_TEXTURE_2D, texId);
...
}
The following example shows texture swap chain creation and access using Direct3D 11:

ovrTextureSwapChain textureSwapChain = 0;
std::vector<ID3D11RenderTargetView*> texRtv;
ovrTextureSwapChainDesc desc = {};
desc.Type = ovrTexture_2D;
desc.Format = OVR_FORMAT_R8G8B8A8_UNORM_SRGB;
desc.ArraySize = 1;
desc.Width = bufferSize.w;
desc.Height = bufferSize.h;
desc.MipLevels = 1;
desc.SampleCount = 1;
desc.StaticImage = ovrFalse;
desc.MiscFlags = ovrTextureMisc_None;
desc.BindFlags = ovrTextureBind_DX_RenderTarget;
if (ovr_CreateTextureSwapChainDX(session, DIRECTX.Device, &desc, &textureSwapChain) == ovrSuccess)
{
int count = 0;
ovr_GetTextureSwapChainLength(session, textureSwapChain, &count);
texRtv.resize(count);
for (int i = 0; i < count; ++i)
{
ID3D11Texture2D* texture = nullptr;
ovr_GetTextureSwapChainBufferDX(session, textureSwapChain, i, IID_PPV_ARGS(&texture));
DIRECTX.Device->CreateRenderTargetView(texture, nullptr, &texRtv[i]);
texture->Release();
}
}
The following example shows texture swap chain creation using Direct3D 12:

ovrTextureSwapChain TexChain;
std::vector<D3D12_CPU_DESCRIPTOR_HANDLE> texRtv;
std::vector<ID3D12Resource*> TexResource;
ovrTextureSwapChainDesc desc = {};
desc.Type = ovrTexture_2D;
desc.ArraySize = 1;
desc.Format = OVR_FORMAT_R8G8B8A8_UNORM_SRGB;
desc.Width = sizeW;
desc.Height = sizeH;
desc.MipLevels = 1;
desc.SampleCount = 1;
desc.MiscFlags = ovrTextureMisc_DX_Typeless;
desc.StaticImage = ovrFalse;
desc.BindFlags = ovrTextureBind_DX_RenderTarget;
// DIRECTX.CommandQueue is the ID3D12CommandQueue used to render the eye textures by the app
ovrResult result = ovr_CreateTextureSwapChainDX(session, DIRECTX.CommandQueue, &desc, &TexChain);
if (!OVR_SUCCESS(result))
return false;
int textureCount = 0;
ovr_GetTextureSwapChainLength(session, TexChain, &textureCount);
texRtv.resize(textureCount);
TexResource.resize(textureCount);
for (int i = 0; i < textureCount; ++i)
{
result = ovr_GetTextureSwapChainBufferDX(session, TexChain, i, IID_PPV_ARGS(&TexResource[i]));
if (!OVR_SUCCESS(result))
return false;
D3D12_RENDER_TARGET_VIEW_DESC rtvd = {};
rtvd.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
rtvd.ViewDimension = D3D12_RTV_DIMENSION_TEXTURE2D;
texRtv[i] = DIRECTX.RtvHandleProvider.AllocCpuHandle(); // Gives new D3D12_CPU_DESCRIPTOR_HANDLE
DIRECTX.Device->CreateRenderTargetView(TexResource[i], &rtvd, texRtv[i]);
}
Note: For Direct3D 12, when calling ovr_CreateTextureSwapChainDX, the caller provides an ID3D12CommandQueue instead of an ID3D12Device to the SDK. It is the caller’s responsibility to make sure that this ID3D12CommandQueue instance is where all VR eye-texture rendering is executed. Or, it can be used as a “join-node” fence to wait for the command lists executed by other command queues rendering the VR eye textures.
The following example shows texture swap chain creation using Vulkan:

bool Create(ovrSession aSession, VkExtent2D aSize, RenderPass& renderPass, DepthBuffer& depthBuffer)
{
session = aSession;
size = aSize;
ovrTextureSwapChainDesc desc = {};
desc.Type = ovrTexture_2D;
desc.ArraySize = 1;
desc.Format = OVR_FORMAT_R8G8B8A8_UNORM_SRGB;
desc.Width = (int)size.width;
desc.Height = (int)size.height;
desc.MipLevels = 1;
desc.SampleCount = 1;
desc.MiscFlags = ovrTextureMisc_DX_Typeless;
desc.BindFlags = ovrTextureBind_DX_RenderTarget;
desc.StaticImage = ovrFalse;
ovrResult result = ovr_CreateTextureSwapChainVk(session, Platform.device, &desc, &textureChain);
if (!OVR_SUCCESS(result))
return false;
int textureCount = 0;
ovr_GetTextureSwapChainLength(session, textureChain, &textureCount);
texElements.reserve(textureCount);
for (int i = 0; i < textureCount; ++i)
{
VkImage image;
result = ovr_GetTextureSwapChainBufferVk(session, textureChain, i, &image);
texElements.emplace_back(RenderTexture());
CHECK(texElements.back().Create(image, VK_FORMAT_R8G8B8A8_SRGB, size, renderPass, depthBuffer.view));
}
return true;
}
For the compositor to read the eye textures correctly, the texture format should be in sRGB color space, for example OVR_FORMAT_R8G8B8A8_UNORM_SRGB. This is also recommended for applications that provide static textures as quad-layer textures to the compositor. Failure to do so will cause the texture to look much brighter than expected.

The texture format provided in the desc for ovr_CreateTextureSwapChainDX is used by the distortion compositor for the ShaderResourceView when reading the contents of the texture. As a result, the application should request texture swap chain formats that are in sRGB-space (e.g. OVR_FORMAT_R8G8B8A8_UNORM_SRGB).

If your application is configured to treat the texture as a linear format (e.g. OVR_FORMAT_R8G8B8A8_UNORM) and handles the linear-to-gamma conversion using HLSL code, or does not care about any gamma-correction, then:

- Request an sRGB format (e.g. OVR_FORMAT_R8G8B8A8_UNORM_SRGB) texture swap chain.
- Specify the ovrTextureMisc_DX_Typeless flag in the desc.
- Create a linear-format RenderTargetView (e.g. DXGI_FORMAT_R8G8B8A8_UNORM).

Note: The ovrTextureMisc_DX_Typeless flag for depth buffer formats (e.g. OVR_FORMAT_D32_FLOAT) is ignored, as they are always converted to be typeless.

The following is an example of how to use the ovrTextureMisc_DX_Typeless flag in D3D11:

ovrTextureSwapChainDesc desc = {};
desc.Type = ovrTexture_2D;
desc.ArraySize = 1;
desc.Format = OVR_FORMAT_R8G8B8A8_UNORM_SRGB;
desc.Width = sizeW;
desc.Height = sizeH;
desc.MipLevels = 1;
desc.SampleCount = 1;
desc.MiscFlags = ovrTextureMisc_DX_Typeless;
desc.BindFlags = ovrTextureBind_DX_RenderTarget;
desc.StaticImage = ovrFalse;
ovrResult result = ovr_CreateTextureSwapChainDX(session, DIRECTX.Device, &desc, &textureSwapChain);
if(!OVR_SUCCESS(result))
return;
int count = 0;
ovr_GetTextureSwapChainLength(session, textureSwapChain, &count);
for (int i = 0; i < count; ++i)
{
ID3D11Texture2D* texture = nullptr;
ovr_GetTextureSwapChainBufferDX(session, textureSwapChain, i, IID_PPV_ARGS(&texture));
D3D11_RENDER_TARGET_VIEW_DESC rtvd = {};
rtvd.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
rtvd.ViewDimension = D3D11_RTV_DIMENSION_TEXTURE2D;
DIRECTX.Device->CreateRenderTargetView(texture, &rtvd, &texRtv[i]);
texture->Release();
}
Similarly, the texture format provided in the desc for ovr_CreateTextureSwapChainGL is used by the distortion compositor when reading the contents of the texture. As a result, the application should request texture swap chain formats preferably in sRGB-space (e.g. OVR_FORMAT_R8G8B8A8_UNORM_SRGB). Furthermore, your application should call glEnable(GL_FRAMEBUFFER_SRGB); before rendering into these textures.

If your application is configured to treat the texture as a linear format and handles the linear-to-gamma conversion in its GLSL code, or does not care about any gamma-correction, then:

- Request an sRGB format (e.g. OVR_FORMAT_R8G8B8A8_UNORM_SRGB) texture swap chain.
- Do not call glEnable(GL_FRAMEBUFFER_SRGB); when rendering into the texture.
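For illustration (assuming fbo is a framebuffer object and texId was obtained from ovr_GetTextureSwapChainBufferGL, as in the earlier OpenGL example), enabling sRGB writes when rendering into the swap chain texture might look like this:

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, texId, 0);
glEnable(GL_FRAMEBUFFER_SRGB);  // linear shader output is converted to sRGB on write
// ... render the eye view ...
glDisable(GL_FRAMEBUFFER_SRGB);
glBindFramebuffer(GL_FRAMEBUFFER, 0);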
The texture format provided in the desc for ovr_CreateTextureSwapChainVk is used by the distortion compositor when reading the contents of the texture. Your application should request texture swap chain formats in the sRGB-space (e.g. OVR_FORMAT_R8G8B8A8_UNORM_SRGB) as the compositor does sRGB-correct rendering. The compositor will rely on the GPU’s hardware sampler to perform the sRGB-to-linear conversion.

If your application prefers to treat the texture as a linear format (e.g. OVR_FORMAT_R8G8B8A8_UNORM) while handling the linear-to-gamma conversion via SPIR-V code, the application must still request the corresponding sRGB format and also use the ovrTextureMisc_DX_Typeless flag in the MiscFlags field of ovrTextureSwapChainDesc. This allows the application to create a RenderTargetView in linear format, while allowing the compositor to treat it as sRGB. Failure to do this will result in unexpected gamma-curve artifacts. The ovrTextureMisc_DX_Typeless flag for depth buffer formats (e.g. OVR_FORMAT_D32_FLOAT) is ignored, as they are always converted to be typeless.
To mirror the VR scene to a desktop window, create a mirror texture with ovr_CreateMirrorTextureDX, ovr_CreateMirrorTextureGL, or ovr_CreateMirrorTextureWithOptionsVk for D3D, OpenGL, and Vulkan, respectively.
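As a hedged sketch (D3D11; windowWidth, windowHeight, and backBuffer are assumed to come from the application's window and its DXGI swap chain, and DIRECTX.Context is assumed to be the app's immediate context), mirror texture creation and per-frame use might look like this:

ovrMirrorTextureDesc mirrorDesc = {};
mirrorDesc.Format = OVR_FORMAT_R8G8B8A8_UNORM_SRGB;
mirrorDesc.Width  = windowWidth;
mirrorDesc.Height = windowHeight;
ovrMirrorTexture mirrorTexture = nullptr;
if (OVR_SUCCESS(ovr_CreateMirrorTextureDX(session, DIRECTX.Device, &mirrorDesc, &mirrorTexture)))
{
    // Each frame, after ovr_EndFrame, copy the mirror texture into the window's back buffer.
    ID3D11Texture2D* mirrorTex = nullptr;
    ovr_GetMirrorTextureBufferDX(session, mirrorTexture, IID_PPV_ARGS(&mirrorTex));
    DIRECTX.Context->CopyResource(backBuffer, mirrorTex);
    mirrorTex->Release();
}
// Call ovr_DestroyMirrorTexture(session, mirrorTexture) during shutdown.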
Frame rendering typically involves obtaining predicted eye poses, rendering the view for each eye, and submitting the eye textures to the compositor through ovr_WaitToBeginFrame, ovr_BeginFrame, and ovr_EndFrame. After the frame is submitted, the compositor handles distortion, timewarp, and GPU synchronization before presenting it to the headset.

// Initialize VR structures, filling out description.
ovrEyeRenderDesc eyeRenderDesc[2];
ovrPosef hmdToEyeViewPose[2];
ovrHmdDesc hmdDesc = ovr_GetHmdDesc(session);
eyeRenderDesc[0] = ovr_GetRenderDesc(session, ovrEye_Left, hmdDesc.DefaultEyeFov[0]);
eyeRenderDesc[1] = ovr_GetRenderDesc(session, ovrEye_Right, hmdDesc.DefaultEyeFov[1]);
hmdToEyeViewPose[0] = eyeRenderDesc[0].HmdToEyePose;
hmdToEyeViewPose[1] = eyeRenderDesc[1].HmdToEyePose;
// Initialize our single full screen Fov layer.
ovrLayerEyeFov layer;
layer.Header.Type = ovrLayerType_EyeFov;
layer.Header.Flags = 0;
layer.ColorTexture[0] = textureSwapChain;
layer.ColorTexture[1] = textureSwapChain;
layer.Fov[0] = eyeRenderDesc[0].Fov;
layer.Fov[1] = eyeRenderDesc[1].Fov;
layer.Viewport[0] = Recti(0, 0, bufferSize.w / 2, bufferSize.h);
layer.Viewport[1] = Recti(bufferSize.w / 2, 0, bufferSize.w / 2, bufferSize.h);
// ld.RenderPose and ld.SensorSampleTime are updated later per frame.
The ovrEyeRenderDesc structure contains useful values for rendering, including the HmdToEyePose for each eye. Eye view offsets are used later to adjust for eye separation.

The code also initializes the ovrLayerEyeFov structure for a full screen layer. Starting with Oculus SDK 0.6, frame submission uses layers to composite multiple view images or texture quads on top of each other. This example uses a single layer to present a VR scene. For this purpose, we use ovrLayerEyeFov, which describes a dual-eye layer that covers the entire eye field of view. Since we are using the same texture set for both eyes, we initialize both eye color textures to textureSwapChain and configure viewports to draw to the left and right sides of this shared texture, respectively.

// Get both eye poses simultaneously, with IPD offset already included.
double displayMidpointSeconds = ovr_GetPredictedDisplayTime(session, 0);
ovrTrackingState hmdState = ovr_GetTrackingState(session, displayMidpointSeconds, ovrTrue);
ovr_CalcEyePoses(hmdState.HeadPose.ThePose, hmdToEyeViewPose, layer.RenderPose);
Predicted head poses are reported by ovr_GetTrackingState. To do accurate prediction, ovr_GetTrackingState needs to know when the current frame will actually be displayed. The code above calls ovr_GetPredictedDisplayTime to obtain displayMidpointSeconds for the current frame, using it to compute the best predicted tracking state. The head pose from the tracking state is then passed to ovr_CalcEyePoses to calculate correct view poses for each eye. These poses are stored directly into the layer.RenderPose[2] array. With eye poses ready, we can proceed onto the actual frame rendering.

ovrResult result;
if (isVisible)
{
// Get next available index of the texture swap chain
int currentIndex = 0;
ovr_GetTextureSwapChainCurrentIndex(session, textureSwapChain, &currentIndex);
++frameIndex;
result = ovr_WaitToBeginFrame(session, frameIndex);
// Clear and set up render-target.
DIRECTX.SetAndClearRenderTarget(pTexRtv[currentIndex], pEyeDepthBuffer);
// Render Scene to Eye Buffers
result = ovr_BeginFrame(session, frameIndex);
for (int eye = 0; eye < 2; eye++)
{
// Get view and projection matrices for the Rift camera
Vector3f pos = originPos + originRot.Transform(layer.RenderPose[eye].Position);
Matrix4f rot = originRot * Matrix4f(layer.RenderPose[eye].Orientation);
Vector3f finalUp = rot.Transform(Vector3f(0, 1, 0));
Vector3f finalForward = rot.Transform(Vector3f(0, 0, -1));
Matrix4f view = Matrix4f::LookAtRH(pos, pos + finalForward, finalUp);
Matrix4f proj = ovrMatrix4f_Projection(layer.Fov[eye], 0.2f, 1000.0f, 0);
// Render the scene for this eye.
DIRECTX.SetViewport(layer.Viewport[eye]);
roomScene.Render(proj * view, 1, 1, 1, 1, true);
}
// Commit the changes to the texture swap chain
ovr_CommitTextureSwapChain(session, textureSwapChain);
}
// Submit frame with one layer we have.
ovrLayerHeader* layers = &layer.Header;
result = ovr_EndFrame(session, frameIndex, nullptr, &layers, 1);
isVisible = (result == ovrSuccess);
Immediately before rendering, the code above combines the player’s world pose (the originPos and originRot values) with the new pose computed based on the tracking state and stored in the layer. These original values can be modified by input to move the player within the 3D world.

Once rendering is complete, the application calls ovr_EndFrame to pass frame data to the compositor. From this point, the compositor takes over by accessing texture data through shared memory, distorting it, and presenting it on the Rift.

ovr_EndFrame returns once the submitted frame is queued up and the runtime is available to accept a new frame. When successful, its return value is either ovrSuccess or ovrSuccess_NotVisible.

ovrSuccess_NotVisible is returned if the frame wasn’t actually displayed, which can happen when the VR application loses focus. Our sample code handles this case by updating the isVisible flag, checked by the rendering logic. While frames are not visible, rendering should be paused to eliminate unnecessary GPU load.

If ovr_EndFrame returns ovrError_DisplayLost, the device was removed and the session is invalid. Release the shared resources (ovr_DestroyTextureSwapChain), destroy the session (ovr_Destroy), recreate it (ovr_Create), and create new resources (ovr_CreateTextureSwapChainXXX). The application’s existing private graphics resources do not need to be recreated unless the new ovr_Create call returns a different GraphicsLuid.
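A minimal sketch of that recovery path, continuing from the submission code above (luid is assumed to be the ovrGraphicsLuid from the original ovr_Create; the helpers RecreatePrivateGraphicsResources and CreateEyeTextureSwapChains are hypothetical):

ovrResult result = ovr_EndFrame(session, frameIndex, nullptr, &layers, 1);
if (result == ovrError_DisplayLost)
{
    // Release SDK-shared resources and the invalid session.
    ovr_DestroyTextureSwapChain(session, textureSwapChain);
    ovr_Destroy(session);

    // Recreate the session and the eye buffers.
    ovrGraphicsLuid newLuid;
    if (OVR_SUCCESS(ovr_Create(&session, &newLuid)))
    {
        if (memcmp(&newLuid, &luid, sizeof(newLuid)) != 0)
            RecreatePrivateGraphicsResources(newLuid); // hypothetical: only if the adapter changed
        CreateEyeTextureSwapChains(session);           // hypothetical: rebuild ovrTextureSwapChain objects
    }
}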
Frame timing is reported by the ovr_GetPredictedDisplayTime function, relying on the application-provided frame index to ensure correct timing is reported across different threads.

The current absolute time can be obtained by calling ovr_GetTimeInSeconds. Current time should rarely be used, however, since simulation and motion prediction will produce better results when relying on the timing values returned by ovr_GetPredictedDisplayTime. This function has the following signature:

double ovr_GetPredictedDisplayTime(ovrSession session, long long frameIndex);
The frameIndex argument specifies which application frame is being processed; the same index must later be passed to ovr_WaitToBeginFrame, ovr_BeginFrame, and ovr_EndFrame. The details of multi-threaded timing are covered in the next section, Rendering on Different Threads.

Without an explicit frame index, the result of ovr_GetPredictedDisplayTime could either be off by one frame depending on which thread the function is called from, or, worse, could be randomly incorrect depending on how threads are scheduled. To address this issue, the previous section introduced the concept of a frameIndex that is tracked by the application and passed across threads along with frame data.

For multi-threaded rendering to produce correct results, the eye poses that were actually used for rendering must be passed to ovr_EndFrame, along with the frame index. (This must occur after calling ovr_WaitToBeginFrame and ovr_BeginFrame.) In summary:

- The main thread increments the frame index and passes it to ovr_GetPredictedDisplayTime to obtain the correct timing for pose prediction.
- The main thread then calls the thread-safe function ovr_GetTrackingState with the predicted time value. It can also call ovr_CalcEyePoses if necessary for rendering setup.
- The main thread passes the frame index and eye poses to the render thread, which uses the same index when calling ovr_BeginFrame and ovr_EndFrame.

For example:

void MainThreadProcessing()
{
frameIndex++;
ovrResult result = ovr_WaitToBeginFrame(session, frameIndex);
// Ask the API for the times when this frame is expected to be displayed.
double frameTiming = ovr_GetPredictedDisplayTime(session, frameIndex);
// Get the corresponding predicted pose state.
ovrTrackingState state = ovr_GetTrackingState(session, frameTiming, ovrTrue);
ovrPosef eyePoses[2];
ovr_CalcEyePoses(state.HeadPose.ThePose, hmdToEyeViewOffset, eyePoses);
SetFrameHMDData(frameIndex, eyePoses);
// Do render pre-processing for this frame.
...
}
void RenderThreadProcessing()
{
long long frameIndex = 0;
ovrPosef eyePoses[2];
GetFrameHMDData(&frameIndex, eyePoses);
ovrResult result = ovr_BeginFrame(session, frameIndex);
layer.RenderPose[0] = eyePoses[0];
layer.RenderPose[1] = eyePoses[1];
// Execute actual rendering to eye textures.
...
// Submit frame with the one layer we have.
ovrLayerHeader* layers = &layer.Header;
result = ovr_EndFrame(session, frameIndex, nullptr, &layers, 1);
}
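The SetFrameHMDData and GetFrameHMDData helpers above are not part of the SDK; one possible implementation is a mutex-protected slot holding the latest frame's index and eye poses:

#include <mutex>

struct FrameHMDData
{
    std::mutex mutex;
    long long  frameIndex = 0;
    ovrPosef   eyePoses[2];
};
static FrameHMDData g_frameData;

void SetFrameHMDData(long long frameIndex, const ovrPosef eyePoses[2])
{
    std::lock_guard<std::mutex> lock(g_frameData.mutex);
    g_frameData.frameIndex = frameIndex;
    g_frameData.eyePoses[0] = eyePoses[0];
    g_frameData.eyePoses[1] = eyePoses[1];
}

void GetFrameHMDData(long long* frameIndex, ovrPosef eyePoses[2])
{
    std::lock_guard<std::mutex> lock(g_frameData.mutex);
    *frameIndex = g_frameData.frameIndex;
    eyePoses[0] = g_frameData.eyePoses[0];
    eyePoses[1] = g_frameData.eyePoses[1];
}

A real engine would typically queue per-frame data rather than overwrite a single slot; this sketch only illustrates the hand-off between the two threads.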
For Direct3D 12, when the application calls ovr_CreateTextureSwapChainDX, the Oculus SDK caches off the ID3D12CommandQueue provided by the caller for future usage. As the application calls ovr_EndFrame, the SDK drops a fence on the cached ID3D12CommandQueue to know exactly when a given set of eye textures is ready for the SDK compositor.

Executing all rendering on a single ID3D12CommandQueue on a single thread is the easiest approach. But the application might also split the CPU rendering workload for each eye-texture pair, or push non-eye-texture rendering work, such as shadows, reflection maps, and so on, onto different command queues. If the application populates and executes command lists from multiple threads, it will also have to make sure that the ID3D12CommandQueue provided to the SDK is the single join-node for the eye-texture rendering work executed through different command queues.
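A hedged sketch of that join-node pattern (queue and fence names are illustrative; Signal and Wait are GPU-side operations on ID3D12CommandQueue and do not stall the CPU):

// Worker queues signal their fences when their share of the eye-texture work is submitted.
shadowQueue->Signal(shadowFence, frameIndex);
eyeQueue->Signal(eyeFence, frameIndex);

// sdkQueue is the ID3D12CommandQueue that was passed to ovr_CreateTextureSwapChainDX.
sdkQueue->Wait(shadowFence, frameIndex);
sdkQueue->Wait(eyeFence, frameIndex);

// The fence the SDK drops on sdkQueue during ovr_EndFrame is now ordered
// after all of the eye-texture work above.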
The following table describes the available layer types:

Layer type | Description |
---|---|
EyeFov | The standard “eye buffer” familiar from previous SDKs, which is typically a stereo view of a virtual scene rendered from the position of the user’s eyes. Although eye buffers can be mono, this can cause discomfort. Previous SDKs had an implicit field of view (FOV) and viewport; these are now supplied explicitly and the application can change them every frame, if desired. |
Quad | A monoscopic image that is displayed as a rectangle at a given pose and size in the virtual world. This is useful for heads-up-displays, text information, object labels and so on. By default the pose is specified relative to the user’s real-world space and the quad will remain fixed in space rather than moving with the user’s head or body motion. For head-locked quads, use the ovrLayerFlag_HeadLocked flag as described below. |
Cubemap | A cubemap consists of six rectangles. These rectangles are placed around the user, as if the user is sitting inside of a room that is cube shaped. Each wall is a texture that your application submits. The cubemap appears to be at an infinite distance, and essentially is the background behind all other objects that your application renders. The cubemap does not look like a cube to the user. Rather, it simply appears to be the background, at an infinite distance. For example, you can use cubemaps to create the sky that will appear behind all the buildings in your VR experience. You don’t need to handle occlusion by objects in the foreground. You can simply set up the cubemap, and it will appear in the background everywhere in your scene. |
EyeMatrix | The EyeMatrix layer type is similar to the EyeFov layer type and is provided to assist compatibility with legacy Samsung Gear VR applications. |
Cylindrical | You can use Cylindrical layers to create curved quads, instead of flat quads. ovrLayerCylinder describes a layer of type ovrLayerType_Cylinder which is a single cylinder that is positioned relative to a re-centered origin (represented by C in the below illustration). See Cylindrical Layer Parameters below. |
Depth Buffers | This layer specifies a monoscopic or stereoscopic view, with depth textures in addition to color textures. This layer is implemented by the ovrLayerEyeFovDepth struct. It is equivalent to ovrLayerEyeFov, but with the addition of DepthTexture and ProjectionDesc. Depth buffers are typically used to support positional time warp. |
Disabled | Ignored by the compositor, disabled layers do not cost performance. We recommend that applications perform basic frustum-culling and disable layers that are out of view. However, there is no need for the application to repack the list of active layers tightly together when turning one layer off; disabling it and leaving it in the list is sufficient. Equivalently, the pointer to the layer in the list can be set to null. |
For more information about cylindrical layer parameters, see ovrLayerCylinder in the reference documentation.

Each layer style has a corresponding member of the ovrLayerType enum, and an associated structure holding the data required to display that layer. For example, the EyeFov layer is type number ovrLayerType_EyeFov and is described by the data in the structure ovrLayerEyeFov. These structures share a similar set of parameters, though not all layer types require all parameters:

Parameter | Type | Description |
---|---|---|
Header.Type | enum ovrLayerType | Must be set by all layers to specify what type they are. |
Header.Flags | A bitfield of ovrLayerFlags | See below for more information. |
ColorTexture | TextureSwapChain | Provides color and translucency data for the layer. Layers are blended over one another using premultiplied alpha. This allows them to express either lerp-style blending, additive blending, or a combination of the two. Layer textures must be RGBA or BGRA formats and might have mipmaps, but cannot be arrays, cubes, or have MSAA. If the application desires to do MSAA rendering, then it must resolve the intermediate MSAA color texture into the layer’s non-MSAA ColorTexture. |
Viewport | ovrRecti | The rectangle of the texture that is actually used, specified as an ovrRecti structure that supplies the size and position of the rectangle as two 2D vectors, containing integers for width/height and x/y, respectively. The x/y values supplied for the position indicate the lower-left corner of the rectangle. In theory, texture data outside this region is not visible in the layer. However, the usual caveats about texture sampling apply, especially with mipmapped textures. It is good practice to leave a border of RGBA(0,0,0,0) pixels around the displayed region to avoid “bleeding,” especially between two eye buffers packed side by side into the same texture. The size of the border depends on the exact usage case, but around 8 pixels seems to work well in most cases. |
Fov | ovrFovPort | The field of view used to render the scene in an Eye layer type. Note this does not control the HMD’s display, it simply tells the compositor what FOV was used to render the texture data in the layer - the compositor will then adjust appropriately to whatever the actual user’s FOV is. Applications may change FOV dynamically for special effects. Reducing FOV may also help with performance on slower machines, though typically it is more effective to reduce resolution before reducing FOV. |
RenderPose | ovrPosef | The camera pose the application used to render the scene in an Eye layer type. This is typically predicted by the SDK and application using the ovr_GetTrackingState and ovr_CalcEyePoses functions. The difference between this pose and the actual pose of the eye at display time is used by the compositor to apply timewarp to the layer. |
SensorSampleTime | double | The absolute time when the application sampled the tracking state. The typical way to acquire this value is to have an ovr_GetTimeInSeconds call right next to the ovr_GetTrackingState call. The SDK uses this value to report the application’s motion-to-photon latency in the Performance HUD. If the application has more than one ovrLayerType_EyeFov layer submitted at any given frame, the SDK scrubs through those layers and selects the timing with the lowest latency. In a given frame, if no ovrLayerType_EyeFov layers are submitted, the SDK will use the point in time when ovr_GetTrackingState was called with the latencyMarker set to ovrTrue as the substitute application motion-to-photon latency time. |
QuadPoseCenter | ovrPosef | Specifies the orientation and position of the center point of a Quad layer type. The supplied direction is the vector perpendicular to the quad. The position is in real-world meters (not the application’s virtual world, the actual world the user is in) and is relative to the “zero” position set by ovr_RecenterTrackingOrigin or ovr_SpecifyTrackingOrigin unless the ovrLayerFlag_HeadLocked flag is used. |
QuadSize | ovrVector2f | Specifies the width and height of a Quad layer type. As with position, this is in real-world meters. |
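For example, SensorSampleTime is typically recorded right next to the tracking-state query, as described in the table above (variable names follow the earlier examples in this section):

double sensorSampleTime = ovr_GetTimeInSeconds();
ovrTrackingState hmdState = ovr_GetTrackingState(session, displayMidpointSeconds, ovrTrue);
ovr_CalcEyePoses(hmdState.HeadPose.ThePose, hmdToEyeViewPose, layer.RenderPose);
layer.SensorSampleTime = sensorSampleTime;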
Layer types that take stereo information take two sets of most of these parameters, which can be used in three different ways:

- Stereo data, separate textures: the application supplies a different ovrTextureSwapChain for the left and right eyes, and a viewport for each.
- Stereo data, shared texture: the application supplies the same ovrTextureSwapChain for both left and right eyes, but a different viewport for each. This allows the application to render both left and right views to the same texture buffer. Remember to add a small buffer between the two views to prevent “bleeding”, as discussed above.
- Mono data: the application supplies the same ovrTextureSwapChain for both left and right eyes, and the same viewport for each.

The Header.Flags field is a logical OR of zero or more of the following:

- ovrLayerFlag_HighQuality — enables 4x anisotropic sampling in the compositor for this layer. This can provide a significant increase in legibility, especially when used with a texture containing mipmaps; this is recommended for high-frequency images such as text or diagrams and when used with the Quad layer types. For Eye layer types, it will also increase visual fidelity towards the periphery, or when feeding in textures that have more than the 1:1 recommended pixel density. For best results, when creating mipmaps for the textures associated to the particular layer, make sure the texture sizes are a power of 2. However, the application does not need to render to the whole texture; a viewport that renders to the recommended size in the texture will provide the best performance-to-quality ratios.
- ovrLayerFlag_TextureOriginAtBottomLeft — the origin of a layer’s texture is assumed to be at the top-left corner. However, some engines (particularly those using OpenGL) prefer to use the bottom-left corner as the origin, and they should use this flag.
- ovrLayerFlag_HeadLocked — most layer types have their pose orientation and position specified relative to the “zero position” defined by calling ovr_RecenterTrackingOrigin. However, the app may wish to specify a layer’s pose relative to the user’s face. When the user moves their head, the layer follows. This is useful for reticles used in gaze-based aiming or selection. This flag may be used for all layer types, though it has no effect when used on the Direct type.
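As a small illustration of ovrLayerFlag_HeadLocked (the texture swap chain and viewport are assumed to be set up as in the HUD example later in this section), a head-locked reticle quad might be configured like this:

ovrLayerQuad reticleLayer = {};
reticleLayer.Header.Type  = ovrLayerType_Quad;
reticleLayer.Header.Flags = ovrLayerFlag_HeadLocked | ovrLayerFlag_HighQuality;
reticleLayer.ColorTexture = reticleTextureSwapChain;   // assumed, created earlier
// Because of ovrLayerFlag_HeadLocked, this pose is relative to the user's head:
// 1m straight ahead of the eyes, facing the viewer.
reticleLayer.QuadPoseCenter.Position.x = 0.0f;
reticleLayer.QuadPoseCenter.Position.y = 0.0f;
reticleLayer.QuadPoseCenter.Position.z = -1.0f;
reticleLayer.QuadPoseCenter.Orientation.x = 0;
reticleLayer.QuadPoseCenter.Orientation.y = 0;
reticleLayer.QuadPoseCenter.Orientation.z = 0;
reticleLayer.QuadPoseCenter.Orientation.w = 1;
reticleLayer.QuadSize.x = 0.05f;   // 5cm wide
reticleLayer.QuadSize.y = 0.05f;   // 5cm tall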
At the end of each frame, after rendering to whichever ovrTextureSwapChain the application wants to update and calling ovr_CommitTextureSwapChain, the data for each layer is put into the relevant ovrLayerEyeFov / ovrLayerQuad / ovrLayerDirect structure. The application then creates a list of pointers to those layer structures, specifically to the Header field, which is guaranteed to be the first member of each structure. Then the application builds an ovrViewScaleDesc struct with the required data, and calls the ovr_WaitToBeginFrame, ovr_BeginFrame, and ovr_EndFrame functions. For example:

ovrResult result = ovr_WaitToBeginFrame(Session, frameIndex);
result = ovr_BeginFrame(Session, frameIndex);
// Create eye layer.
ovrLayerEyeFov eyeLayer;
eyeLayer.Header.Type = ovrLayerType_EyeFov;
eyeLayer.Header.Flags = 0;
for ( int eye = 0; eye < 2; eye++ )
{
eyeLayer.ColorTexture[eye] = EyeBufferSet[eye];
eyeLayer.Viewport[eye] = EyeViewport[eye];
eyeLayer.Fov[eye] = EyeFov[eye];
eyeLayer.RenderPose[eye] = EyePose[eye];
}
// Create HUD layer, fixed to the player's torso
ovrLayerQuad hudLayer;
hudLayer.Header.Type = ovrLayerType_Quad;
hudLayer.Header.Flags = ovrLayerFlag_HighQuality;
hudLayer.ColorTexture = TheHudTextureSwapChain;
// 50cm in front and 20cm down from the player's nose,
// fixed relative to their torso.
hudLayer.QuadPoseCenter.Position.x = 0.00f;
hudLayer.QuadPoseCenter.Position.y = -0.20f;
hudLayer.QuadPoseCenter.Position.z = -0.50f;
hudLayer.QuadPoseCenter.Orientation.x = 0;
hudLayer.QuadPoseCenter.Orientation.y = 0;
hudLayer.QuadPoseCenter.Orientation.z = 0;
hudLayer.QuadPoseCenter.Orientation.w = 1;
// HUD is 50cm wide, 30cm tall.
hudLayer.QuadSize.x = 0.50f;
hudLayer.QuadSize.y = 0.30f;
ID3D11Texture2D* tex = nullptr;
ovr_GetTextureSwapChainBufferDX(Session, TheHudTextureSwapChain, 0, IID_PPV_ARGS(&tex));
D3D11_TEXTURE2D_DESC desc;
tex->GetDesc(&desc);
// Display all of the HUD texture.
hudLayer.Viewport.Pos.x = 0;
hudLayer.Viewport.Pos.y = 0;
hudLayer.Viewport.Size.w = desc.Width;
hudLayer.Viewport.Size.h = desc.Height;
// The list of layers.
ovrLayerHeader *layerList[2];
layerList[0] = &eyeLayer.Header;
layerList[1] = &hudLayer.Header;
// Set up positional data.
ovrViewScaleDesc viewScaleDesc;
viewScaleDesc.HmdSpaceToWorldScaleInMeters = 1.0f;
viewScaleDesc.HmdToEyePose[0] = HmdToEyePose[0];
viewScaleDesc.HmdToEyePose[1] = HmdToEyePose[1];
result = ovr_EndFrame(Session, frameIndex, &viewScaleDesc, layerList, 2);
Note that the HUD layer in this example uses the ovrLayerFlag_HighQuality flag.

The call to ovr_EndFrame queues the layers for display, and transfers control of the committed textures inside the ovrTextureSwapChains to the compositor. It is important to understand that these textures are being shared (rather than copied) between the application and the compositor threads, and that composition does not necessarily happen at the time ovr_EndFrame is called, so care must be taken. To continue rendering into a texture swap chain, the application should always get the next available index with ovr_GetTextureSwapChainCurrentIndex before rendering into it. For example:

ovrResult result = ovr_WaitToBeginFrame(Hmd, frameIndex);
result = ovr_BeginFrame(Hmd, frameIndex);
// Create two TextureSwapChains to illustrate.
ovrTextureSwapChain eyeTextureSwapChain;
ovr_CreateTextureSwapChainDX ( ... &eyeTextureSwapChain );
ovrTextureSwapChain hudTextureSwapChain;
ovr_CreateTextureSwapChainDX ( ... &hudTextureSwapChain );
// Set up two layers.
ovrLayerEyeFov eyeLayer;
ovrLayerEyeFov hudLayer;
eyeLayer.Header.Type = ovrLayerType_EyeFov;
eyeLayer...etc... // set up the rest of the data.
hudLayer.Header.Type = ovrLayerType_Quad;
hudLayer...etc... // set up the rest of the data.
// the list of layers
ovrLayerHeader *layerList[2];
layerList[0] = &eyeLayer.Header;
layerList[1] = &hudLayer.Header;
// Each frame...
int currentIndex = 0;
ovr_GetTextureSwapChainCurrentIndex(... eyeTextureSwapChain, &currentIndex);
// Render into it. It is recommended the app use ovr_GetTextureSwapChainBufferDX for each index on texture chain creation to cache
// textures or create matching render target views. Each frame, the currentIndex value returned can be used to index directly into that.
ovr_CommitTextureSwapChain(... eyeTextureSwapChain);
ovr_GetTextureSwapChainCurrentIndex(... hudTextureSwapChain, &currentIndex);
// Render into it. It is recommended the app use ovr_GetTextureSwapChainBufferDX for each index on texture chain creation to cache
// textures or create matching render target views. Each frame, the currentIndex value returned can be used to index directly into that.
ovr_CommitTextureSwapChain(... hudTextureSwapChain);
eyeLayer.ColorTexture[0] = eyeTextureSwapChain;
eyeLayer.ColorTexture[1] = eyeTextureSwapChain;
hudLayer.ColorTexture = hudTextureSwapChain;
result = ovr_EndFrame(Hmd, frameIndex, nullptr, layerList, 2);
Prior to SDK version 1.17, eye poses were reported relative to the HMD pose only as the HmdToEyeOffset vector provided by the ovr_GetRenderDesc function. Starting with version 1.17, HmdToEyeOffset has been renamed to HmdToEyePose using the type ovrPosef, which contains a Position and Orientation, effectively giving eye poses six degrees-of-freedom. This means that each eye’s render frustum can now be rotated away from the HMD’s orientation, in addition to being translated by the SDK. Because of this, the eye frustums’ axes are no longer guaranteed to be parallel to each other or to the HMD’s orientation axes. This generalization provides greater freedom to the SDK in defining the HMD geometry. But it also means that, as a VR app developer, you need to be more careful about your previous assumptions, especially when it comes to rendering.

Keep the following in mind when working with HmdToEyePose:

- If your code previously relied on HmdToEyeOffset, you can use HmdToEyePose.Position instead. However, unless you are absolutely sure about what you are doing, there is a good chance you actually want to treat HmdToEyePose as a whole transform, rather than separate out Orientation from Position.
- For head-locked content that was previously rendered directly into the eye buffers, consider submitting it as a separate ovrLayerQuad instead.
- Some applications combine the ovrFovPort structures of both eyes, in order to take advantage of various rendering optimizations. This is normally done by using an ovrFovPort which takes the maximum of the ovrFovPort values for both eyes, on all four sides of the frustum. Before generating the monoscopic frustum this way, however, be sure to remove any potential rotation from the ovrFovPort values by calling FovPort::Uncant, which is located in the ovr_math.h header. See the OculusWorldDemo sample code to see how FovPort::Uncant is used.
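As a sketch of the combination step (leftFov and rightFov are assumed to be the two eyes' ovrFovPort values after any canting has been removed with FovPort::Uncant):

#include <algorithm>

ovrFovPort combinedFov;
combinedFov.UpTan    = std::max(leftFov.UpTan,    rightFov.UpTan);
combinedFov.DownTan  = std::max(leftFov.DownTan,  rightFov.DownTan);
combinedFov.LeftTan  = std::max(leftFov.LeftTan,  rightFov.LeftTan);
combinedFov.RightTan = std::max(leftFov.RightTan, rightFov.RightTan);
// combinedFov can then be used to build a single monoscopic projection/frustum.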