Application SpaceWarp Developer Guide
This guide covers how to implement AppSW in your native OpenXR application. To learn how AppSW works, and how to debug it, go here. Application SpaceWarp (AppSW) is a feature that delivers a step-function improvement in both performance and latency. It's one of the most substantial optimizations shipped to Quest developers. In our initial testing, it gave apps up to 70 percent additional compute, potentially with little to no perceptible artifacts.
However, enabling AppSW is a serious technical commitment. It requires modifying your app’s materials and render pipeline; any materials that have not been modified to support AppSW will produce artifacts when running with AppSW.
To help you use AppSW optimally, this guide discusses the technical considerations and tradeoffs involved in implementing it appropriately.
API and integration considerations
AppSW can be enabled in Quest apps through the OpenXR extension XR_FB_space_warp. Reading the extension specification can help developers better understand how the basic API works; we provide additional details in the Native engine section.
There are some key considerations worth knowing for both engine and native app developers prior to starting development:
- AppSW is only supported on one layer in your app, and that layer must have a PROJECTION layer type.
- The motion vector texture and depth texture do not need to be full resolution. In fact, using a higher resolution than recommended for the motion vector/depth textures will not give you additional quality benefits. Use the OpenXR extension to query the recommended resolutions for the motion vector and depth textures.
- Developers can toggle between AppSW mode and full-FPS mode on any frame by disconnecting or reconnecting XrCompositionLayerSpaceWarpInfoFB from the "next" pointer.

XrCompositionLayerSpaceWarpInfoFB::appSpaceDeltaPose may still be a bit hard to understand even after reading the OpenXR extension description, so let's dive deeper. Any OpenXR app needs to establish the relationship between the HMD device reference frame and the app's world reference frame: determine where the HMD's reference frame should sit in the app's world and represent it by a pose in app world space (e.g. appSpacePose). This pose can translate an HMD-space position to an app-world-space position and vice versa. appSpacePose is usually unchanged if you are just wearing the HMD and walking around, but many apps also use this concept to implement artificial locomotion such as scripted movement, teleportation, etc. Therefore, appSpacePose changes based on the app's locomotion logic. appSpaceDeltaPose captures the difference in appSpacePose between the previous frame and the current frame. Typically, you can calculate it as Inv(appSpacePoseInPrevFrame) * appSpacePoseInCurrFrame, assuming the multiplication order C = B * A means that transforming by C is equivalent to transforming by A and then by B (a short C sketch of this calculation appears at the end of this section). appSpaceDeltaPose is used for two purposes in the runtime:
- Filling in the background motion vector. If a pixel on screen isn't touched by any draw call, it keeps the clear color, which can't be used as a correct motion vector. In that case, the XR runtime tries to generate correct data automatically, but it also needs to know about any motion driven by the app's artificial locomotion, and appSpaceDeltaPose provides that information.
- Turning off frame extrapolation in extreme cases. If an app triggers a large camera movement, the runtime can't generate a valid new frame for the new camera position from the limited previous-frame data. In this scenario, it may want to disable frame extrapolation for that frame, and appSpaceDeltaPose can be used to detect situations such as teleportation, camera cuts, and so on.
To help you further understand this in context, we have also created a code example in the SDK package. Find more explanations with context in XrSpaceWarp\Src\XrSpaceWarp.c.
- We recommend a signed 16-bit float pixel format for the motion vector swap chain, as it has enough precision for motion vectors.
- AppSW is an OpenXR-only feature and is available on both Quest headsets.
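Here is the appSpaceDeltaPose calculation as a minimal C sketch, assuming poses are stored as XrPosef. The quaternion/pose helpers are illustrative stand-ins: the core OpenXR headers define the types but not the math, and SDK samples ship their own equivalents.

```c
#include <openxr/openxr.h>

// Illustrative helpers (not part of the OpenXR API).
static XrQuaternionf QuatConjugate(XrQuaternionf q) {
    XrQuaternionf r = {-q.x, -q.y, -q.z, q.w};
    return r;
}

static XrQuaternionf QuatMultiply(XrQuaternionf b, XrQuaternionf a) {
    // Rotating by the result equals rotating by a, then by b.
    XrQuaternionf r;
    r.w = b.w * a.w - b.x * a.x - b.y * a.y - b.z * a.z;
    r.x = b.w * a.x + b.x * a.w + b.y * a.z - b.z * a.y;
    r.y = b.w * a.y - b.x * a.z + b.y * a.w + b.z * a.x;
    r.z = b.w * a.z + b.x * a.y - b.y * a.x + b.z * a.w;
    return r;
}

static XrVector3f QuatRotate(XrQuaternionf q, XrVector3f v) {
    // v' = v + q.w * t + cross(q.xyz, t), where t = 2 * cross(q.xyz, v).
    XrVector3f t = {2.0f * (q.y * v.z - q.z * v.y),
                    2.0f * (q.z * v.x - q.x * v.z),
                    2.0f * (q.x * v.y - q.y * v.x)};
    XrVector3f r = {v.x + q.w * t.x + (q.y * t.z - q.z * t.y),
                    v.y + q.w * t.y + (q.z * t.x - q.x * t.z),
                    v.z + q.w * t.z + (q.x * t.y - q.y * t.x)};
    return r;
}

static XrPosef PoseInvert(XrPosef p) {
    XrPosef r;
    r.orientation = QuatConjugate(p.orientation);
    XrVector3f rotated = QuatRotate(r.orientation, p.position);
    r.position.x = -rotated.x;
    r.position.y = -rotated.y;
    r.position.z = -rotated.z;
    return r;
}

static XrPosef PoseMultiply(XrPosef b, XrPosef a) {
    // C = B * A: transforming by C equals transforming by A, then by B.
    XrPosef r;
    r.orientation = QuatMultiply(b.orientation, a.orientation);
    XrVector3f rotated = QuatRotate(b.orientation, a.position);
    r.position.x = b.position.x + rotated.x;
    r.position.y = b.position.y + rotated.y;
    r.position.z = b.position.z + rotated.z;
    return r;
}

// appSpaceDeltaPose = Inv(appSpacePoseInPrevFrame) * appSpacePoseInCurrFrame
XrPosef ComputeAppSpaceDeltaPose(XrPosef appSpacePoseInPrevFrame,
                                 XrPosef appSpacePoseInCurrFrame) {
    return PoseMultiply(PoseInvert(appSpacePoseInPrevFrame), appSpacePoseInCurrFrame);
}
```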
How to Enable AppSW in Your App
Native apps can integrate AppSpaceWarp through the OpenXR extension XR_FB_space_warp. Follow the steps below to enable the feature; you can use the native sample XrSpaceWarp as an example to follow along with this guide.
1. Enable the extension with XR_FB_SPACE_WARP_EXTENSION_NAME.
2. Query the recommended motion vector resolution with xrGetSystemProperties and XrSystemSpaceWarpPropertiesFB.
3. Using the resolution from step 2, allocate the motion vector swap chain and the corresponding depth swap chain (steps 1-3 are condensed in the setup sketch after this list).
   - We recommend a 16-bit signed float format for the motion vector swap chain, e.g. GL_RGBA16F for GLES or VK_FORMAT_R16G16B16A16_SFLOAT for Vulkan. XrSwapchainCreateInfo::usageFlags can be XR_SWAPCHAIN_USAGE_SAMPLED_BIT | XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT.
   - Allocate the depth swap chain with an available depth format, e.g. GL_DEPTH24_STENCIL8 for GLES or VK_FORMAT_D24_UNORM_S8_UINT for Vulkan. Since this buffer is rendered to by the app and read by the compositor, don't forget to add XR_SWAPCHAIN_USAGE_SAMPLED_BIT | XR_SWAPCHAIN_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT to XrSwapchainCreateInfo::usageFlags.
4. Bind the motion vector swap chain into the motion vector render pass. For more details on how to render motion vectors, see the motion vector generation section. Normally, only opaque objects should be rendered into the motion vector pass. The motion vector shader can be much simpler than an object's main-pass shader, since the previous and current NDC positions are the only inputs required; you can normally skip all lighting calculations and texture sampling unless your fragment shader uses discard. It is common to share a single motion vector shader across many different materials (a minimal shader sketch follows this list). For the depth swap chain, you can simply resolve the depth buffer into it. We have not found benefits from enabling MSAA in the motion vector pass, so we recommend turning it off to reduce the pass's overhead.
5. Once the motion vector and depth swap chains are filled with correct content, submit them in xrEndFrame through XrCompositionLayerSpaceWarpInfoFB (see the submission sketch after this list). How to fill the struct is described in the OpenXR extension XR_FB_space_warp, and parameters such as appSpaceDeltaPose are discussed earlier in this guide.
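As a condensed sketch of steps 1-3, assuming Vulkan formats and omitting error handling: instance, systemId, and session come from the app's usual OpenXR setup, and the arraySize of 2 assumes a stereo texture-array swap chain. The function name is a placeholder, not SDK API.

```c
#include <openxr/openxr.h>
#include <vulkan/vulkan.h>

// Step 1: request the extension when creating the XrInstance, e.g.
//   const char* enabledExtensions[] = {XR_FB_SPACE_WARP_EXTENSION_NAME, /* ... */};
//   instanceCreateInfo.enabledExtensionNames = enabledExtensions;

void CreateSpaceWarpSwapchains(XrInstance instance, XrSystemId systemId,
                               XrSession session,
                               XrSwapchain* motionVectorSwapchain,
                               XrSwapchain* motionVectorDepthSwapchain) {
    // Step 2: query the recommended motion vector resolution by chaining
    // XrSystemSpaceWarpPropertiesFB into xrGetSystemProperties.
    XrSystemSpaceWarpPropertiesFB spaceWarpProps = {XR_TYPE_SYSTEM_SPACE_WARP_PROPERTIES_FB};
    XrSystemProperties systemProps = {XR_TYPE_SYSTEM_PROPERTIES};
    systemProps.next = &spaceWarpProps;
    xrGetSystemProperties(instance, systemId, &systemProps);

    // Step 3a: motion vector swap chain -- 16-bit signed float, color usage.
    XrSwapchainCreateInfo mvInfo = {XR_TYPE_SWAPCHAIN_CREATE_INFO};
    mvInfo.usageFlags = XR_SWAPCHAIN_USAGE_SAMPLED_BIT | XR_SWAPCHAIN_USAGE_COLOR_ATTACHMENT_BIT;
    mvInfo.format = VK_FORMAT_R16G16B16A16_SFLOAT;
    mvInfo.width = spaceWarpProps.recommendedMotionVectorImageRectWidth;
    mvInfo.height = spaceWarpProps.recommendedMotionVectorImageRectHeight;
    mvInfo.sampleCount = 1;  // no MSAA in the motion vector pass
    mvInfo.faceCount = 1;
    mvInfo.arraySize = 2;    // assumption: one array slice per eye
    mvInfo.mipCount = 1;
    xrCreateSwapchain(session, &mvInfo, motionVectorSwapchain);

    // Step 3b: matching depth swap chain -- rendered by the app and sampled by
    // the compositor, hence both SAMPLED and DEPTH_STENCIL_ATTACHMENT usage.
    XrSwapchainCreateInfo depthInfo = mvInfo;
    depthInfo.usageFlags = XR_SWAPCHAIN_USAGE_SAMPLED_BIT | XR_SWAPCHAIN_USAGE_DEPTH_STENCIL_ATTACHMENT_BIT;
    depthInfo.format = VK_FORMAT_D24_UNORM_S8_UINT;
    xrCreateSwapchain(session, &depthInfo, motionVectorDepthSwapchain);
}
```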
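For step 4, a shared motion vector shader can be quite small. The following GLSL ES sketch (embedded as C strings, as native GLES samples commonly do) writes the per-pixel NDC delta; the uniform and attribute names and the exact output convention are illustrative assumptions, so consult the XrSpaceWarp sample for the authoritative shader.

```c
// Illustrative motion vector shader pair; names and conventions are assumptions.
static const char* kMotionVectorVert =
    "#version 300 es\n"
    "uniform mat4 uCurrMvp;\n"  // current-frame model-view-projection
    "uniform mat4 uPrevMvp;\n"  // previous-frame model-view-projection
    "in vec3 aPosition;\n"
    "out vec4 vCurrClipPos;\n"
    "out vec4 vPrevClipPos;\n"
    "void main() {\n"
    "  vCurrClipPos = uCurrMvp * vec4(aPosition, 1.0);\n"
    "  vPrevClipPos = uPrevMvp * vec4(aPosition, 1.0);\n"
    "  gl_Position = vCurrClipPos;\n"
    "}\n";

static const char* kMotionVectorFrag =
    "#version 300 es\n"
    "precision highp float;\n"
    "in vec4 vCurrClipPos;\n"
    "in vec4 vPrevClipPos;\n"
    "out vec4 oMotionVector;\n"
    "void main() {\n"
    "  // Perspective divide to NDC, then store the NDC delta per pixel.\n"
    "  vec3 curr = vCurrClipPos.xyz / vCurrClipPos.w;\n"
    "  vec3 prev = vPrevClipPos.xyz / vPrevClipPos.w;\n"
    "  oMotionVector = vec4(curr - prev, 0.0);\n"
    "}\n";
```

Note that no lighting or material texture sampling is needed, which is why a single shader pair like this can serve many materials.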
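And for step 5, a sketch of chaining XrCompositionLayerSpaceWarpInfoFB into each projection view before xrEndFrame. The function and variable names are placeholders carried over from the sketches above, and nearZ/farZ must match the projection used to render the depth buffer.

```c
#include <openxr/openxr.h>
#include <string.h>

// One XrCompositionLayerSpaceWarpInfoFB is chained into each
// XrCompositionLayerProjectionView; leaving it out of the `next` chain on a
// given frame runs that frame without AppSW.
void ChainSpaceWarpInfo(XrCompositionLayerProjectionView* projectionViews,
                        XrCompositionLayerSpaceWarpInfoFB* spaceWarpInfos,  // one per eye
                        XrSwapchain motionVectorSwapchain,
                        XrSwapchain motionVectorDepthSwapchain,
                        int32_t mvWidth, int32_t mvHeight,
                        XrPosef appSpaceDeltaPose,
                        float nearZ, float farZ) {
    for (uint32_t eye = 0; eye < 2; ++eye) {
        XrCompositionLayerSpaceWarpInfoFB* sw = &spaceWarpInfos[eye];
        memset(sw, 0, sizeof(*sw));
        sw->type = XR_TYPE_COMPOSITION_LAYER_SPACE_WARP_INFO_FB;
        sw->motionVectorSubImage.swapchain = motionVectorSwapchain;
        sw->motionVectorSubImage.imageRect.extent.width = mvWidth;
        sw->motionVectorSubImage.imageRect.extent.height = mvHeight;
        sw->motionVectorSubImage.imageArrayIndex = eye;
        sw->depthSubImage.swapchain = motionVectorDepthSwapchain;
        sw->depthSubImage.imageRect.extent.width = mvWidth;
        sw->depthSubImage.imageRect.extent.height = mvHeight;
        sw->depthSubImage.imageArrayIndex = eye;
        sw->appSpaceDeltaPose = appSpaceDeltaPose;  // see ComputeAppSpaceDeltaPose above
        sw->minDepth = 0.0f;
        sw->maxDepth = 1.0f;
        sw->nearZ = nearZ;  // must match the app's projection
        sw->farZ = farZ;
        projectionViews[eye].next = sw;  // picked up by xrEndFrame
    }
}
```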
These are the basic steps for using the extension. We strongly encourage developers to check out the XrSpaceWarp sample in our SDK package (search for "AppSpaceWarp" in the comments of the code).