Translucent vs Masked Rendering in Real-Time Applications
Ricky Rajani
Rendering translucent materials is a difficult problem in real-time applications, especially as many existing methods are too costly for virtual reality. Oculus Avatars in Home 2.0 for Rift and Rift S are meant to look translucent, which introduces rendering challenges when using conventional translucency methods. In this post, we provide solutions for some of these challenges and show how masked rendering can be used as an alternative, covering implementation details and properties of both translucent and masked rendering.
Home 2.0, implemented with Unreal Engine 4, uses clustered forward rendering instead of deferred rendering for materials and geometry, enabling multisample anti-aliasing (MSAA) along with lower bandwidth costs at higher resolutions. Previously, Avatars in Home were rendered translucent using a two-pass rendering technique. This provided a good-quality solution as long as the Avatars were not affected by lighting. For higher visual quality, Avatars in Home are now lit, which leads to performance and quality challenges when using translucent rendering. As a result, we switched to rendering masked Avatars, which controls the visibility of objects in a binary manner. This method works well with MSAA, but without MSAA, artifacts from masked rendering are apparent.
Note: Examples of how Home uses translucent and masked materials for Avatars with various anti-aliasing levels. Left to right: masked 4X MSAA, masked no MSAA, translucent no MSAA, translucent 4X MSAA. Artifacts seen in these examples will be further discussed in later sections.
Translucent vs Transparent
Throughout this post, the term “translucent” refers to the rendering method for transparent objects, consistent with Unreal Engine terminology: Unreal Engine uses “translucent” for the materials of transparent objects. For example, when choosing the “Blend Mode” and “Lighting Mode” for the material of a transparent object, we must choose the “Translucent” and “Surface Translucency Volume” options, respectively.
Due to potential confusion between the two terms, it is necessary to define “translucent” and to explain why we chose one option over the other. The main difference between transparent and translucent materials is the physical properties that determine how much light passes through them. Transparent materials allow the full passage of light, so objects on the other side of a transparent object are clearly visible with no distortion. Translucent materials allow light to enter and then scatter it based upon the material's physical properties: some light passes through, but the light paths, and the objects located behind, are distorted and diffused.
Translucent Rendering
When implementing translucent rendering for Avatars in Home, we prioritized performance and minimized visual artifacts. Outlined below are specific challenges related to depth, lighting and MSAA that arose from our implementation.
Depth Issues
Avatars in Home with translucent rendering are implemented with alpha blending. The image above shows an example of the depth issues that occur with this implementation: the hand on the left shows incorrect depth sorting and the hand on the right shows correct depth sorting. For correct results, surfaces must be blended from back to front, because blending is not commutative; triangles must therefore be sorted, which is neither cheap nor sufficient for certain edge cases. The order in which surfaces are blended affects the total visibility of each surface. We could use order-independent transparency, which allows for per-pixel geometry sorting, but it is a fairly costly technique that is not suitable for VR.
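To see why order matters, consider the standard “over” operator used for alpha blending; swapping the operands changes the result. A minimal sketch in HLSL (straight, non-premultiplied alpha assumed):

```hlsl
// The "over" blend used for back-to-front compositing. Because the result
// depends on which surface is nearer, the operation is not commutative,
// which is why translucent surfaces must be sorted.
float4 BlendOver(float4 Src, float4 Dst)
{
    // Src is the nearer surface; swapping Src and Dst changes the result.
    float3 Color = Src.rgb * Src.a + Dst.rgb * (1.0 - Src.a);
    float  Alpha = Src.a + Dst.a * (1.0 - Src.a);
    return float4(Color, Alpha);
}
```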
We were able to combat the sorting issue by leveraging Unreal Engine's Custom Depth, providing occlusion similar to that from a Z-Buffer, while achieving correct translucent sorting order. Custom Depth is a feature in Unreal Engine that uses a second depth buffer which can be used for effects like custom culling for translucency. For Avatars, we use Custom Depth for per-pixel depth occlusion.
When rendering the mesh into this depth buffer, pixels that are behind the outer shell of the mesh can be culled. We run two passes, as follows:
1. Render Custom Depth with DepthTest to get the nearest depth.
2. Read Custom Depth in the Translucency pass. Render translucency masked by the result of a depth comparison between pixel depth and custom depth at the pixel location.
This two-pass technique is set up within Unreal Engine by enabling the “Write to Custom Depth” checkbox on translucent materials and the “Render Custom Depth” checkbox on Static and Skeletal meshes. The pixel shader then reads from the Custom Depth buffer, without any added cost for the pixel shader.
The image above depicts what would be rendered to the Custom Depth buffer in that scene on the left and the resulting scene after the Translucency pass on the right. The Avatar is the only object seen on the left since it is the only mesh in the scene rendering to Custom Depth. The Custom Depth buffer used for Avatars in Home also has the same resolution and MSAA sample count as the Scene Depth buffer and its resulting texture.
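As a minimal sketch of the depth comparison in step 2, a hard test against Custom Depth might look like the following HLSL; the resource name is illustrative, not Unreal Engine's actual shader binding:

```hlsl
// Sketch of step 2 with a hard depth comparison. CustomDepthTex holds the
// nearest depth of the mesh written during the Custom Depth pass.
Texture2D<float> CustomDepthTex;

float HardDepthMask(float PixelDepth, int2 PixelPos)
{
    float CustomDepth = CustomDepthTex.Load(int3(PixelPos, 0));
    // Cull (opacity 0) any translucent fragment behind the outer shell.
    return (PixelDepth > CustomDepth) ? 0.0 : 1.0;
}
```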
Instead of doing a hard comparison between Scene Depth and Custom Depth, a small depth bias should be added to prevent too many pixels, including those on the outer shell, from being culled. The images below illustrate the effect before and after adding a depth bias. This is discussed in depth under the “MSAA Artifacts” section.
Lighting
The current Avatars in Home are affected by lighting; it is therefore important to address shadow receiving, that is, materials being shadowed by their environment. For quality reasons, we wanted Avatars in Home to have this property; however, shadow receiving for translucent materials is not supported for forward rendering in Unreal Engine. For performance reasons, Unreal Engine approximates shadows for translucent materials, which does not work with our deferred shadows method (explained in the “Masked Rendering” section).
Translucent materials also cannot receive dynamic shadows from stationary lights through Unreal Engine's translucency lighting volume feature; even if they could, using this feature would add extra GPU performance cost due to the extra passes.
Another method of receiving shadows is volumetric lightmaps, which provide dynamic objects with higher-quality baked shadows when used with stationary lights. We managed to use this feature to add single-sample per-object shadows. However, the baked information fades over time, and it only works with static geometry and requires stationary lights.
To add dynamic shadows for translucent materials, we could employ one of the following methods, each of which adds extra performance cost:
Cast a single ray to light sources to estimate a single in-shadow value. This can be further improved with multiple ray casts for soft shadows.
Add more CPU ray casts to create a detailed shadow texture mask. This will still only produce low resolution shadows.
Enable the translucency lighting volume to work with forward rendering which would provide higher quality shadows.
MSAA Artifacts
Since Oculus Home uses clustered forward rendering, we are able to use MSAA to improve visual quality around edges; however, with translucent rendering, some object edges are not rendered correctly when using MSAA. This causes nearby and background content to “bleed” into the scene: due to this depth error, the opaque scene bleeds through around steep angles of translucent materials, typically at edges.
Our solution to minimize “bleeding” artifacts is to add multiple depth biases, ensuring depth errors are minimized, especially with MSAA. This happens during depth rejection, when determining the opacity of a pixel; with no depth bias added, many fragments are incorrectly culled.
The images below portray varying depth biases added during depth rejection. The first image depicts a depth comparison with a small, constant depth bias, which does not account for the varying steepness around edges. The second image depicts a larger, constant depth bias. The third image depicts both a constant and a gradient depth bias, which provides the best solution. (Note: this is considered a soft comparison.)
The following steps outline the soft comparison process between Scene Depth and Custom Depth; a shader sketch follows the list.
1. Using the Scene Depth (PixelDepth in the image), compute the local depth slope by taking its ddx and ddy and selecting the maximum of the two.
2. Multiply this value by a pre-determined “DepthBias_Gradient” value, which accounts for the varying steepness of angles around edges.
3. Clamp the multiplied value and compare it to a pre-determined “DepthBias_Constant” value; the maximum of the two is added to the Scene Depth.
4. Compare the new Scene Depth with the minimum value in the Custom Depth buffer. Without MSAA, the minimum is simply what is stored in the buffer; with MSAA, we compare all MSAA samples of Custom Depth and take the minimum.
5. If the new Scene Depth is greater, set opacity to 0.
Note: numbers in the above image are mapped 1 unit to 1 cm
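The sketch below shows one way this soft comparison could be written in HLSL. It is a minimal sketch under assumptions: linear depth where larger values are farther from the camera, and illustrative names such as CustomDepthTexMS. The bias parameters are positive magnitudes subtracted from the Scene Depth, which is equivalent to adding the negative bias values quoted later in this post.

```hlsl
// Minimal sketch of the soft depth comparison (steps 1-5 above).
Texture2DMS<float> CustomDepthTexMS; // MSAA Custom Depth buffer (illustrative)

float SoftDepthMask(float PixelDepth, int2 PixelPos, int NumSamples,
                    float DepthBias_Constant,  // flat-region tolerance (cm)
                    float DepthBias_Gradient,  // scales with edge steepness
                    float MaxGradientBias)     // clamp for the gradient term
{
    // Steps 1-2: estimate the local depth slope and scale it, so steep,
    // glancing-angle edges receive a larger tolerance.
    float DepthSlope   = max(abs(ddx(PixelDepth)), abs(ddy(PixelDepth)));
    float GradientBias = clamp(DepthSlope * DepthBias_Gradient, 0.0, MaxGradientBias);

    // Step 3: take whichever bias is larger and bias the Scene Depth by it.
    float BiasedDepth = PixelDepth - max(GradientBias, DepthBias_Constant);

    // Step 4: take the nearest (minimum) Custom Depth across all MSAA samples.
    float MinCustomDepth = CustomDepthTexMS.Load(PixelPos, 0);
    for (int s = 1; s < NumSamples; ++s)
        MinCustomDepth = min(MinCustomDepth, CustomDepthTexMS.Load(PixelPos, s));

    // Step 5: if the biased Scene Depth is still farther than the nearest
    // Custom Depth, the fragment is behind the outer shell: opacity 0.
    return (BiasedDepth > MinCustomDepth) ? 0.0 : 1.0;
}
```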
Unfortunately, this method does not fix all of the “bleeding” artifacts caused by per-object sorting, and in certain worst-case scenarios these artifacts are even more apparent. One such example is a translucent Avatar in front of a High Dynamic Range (HDR) light source. HDR light sources have high color values that are not clamped before the translucency pass runs, so where “bleeding” artifacts appear, those values “bleed” through to the translucent material before the color is remapped, making the artifacts more conspicuous.
One method to reduce “bleeding” artifacts is to increase the two depth biases mentioned above; however, this can cause other depth-related issues. Depth biases behave much like shadow biases: shadow biases fix shadow “acne,” which is analogous to “bleeding” artifacts, but a shadow bias that is too large makes shadows appear disconnected. Similarly, a depth bias that is too large causes translucent materials to exhibit intersection-related artifacts.
The images below depict how intersection-related artifacts become worse as the depth bias increases.
Note: the top left image shows masked rendering without any intersection-related artifacts. The top right image shows translucent rendering with a constant depth bias of -0.2. The bottom left image shows translucent rendering with a constant depth bias of -0.5, and the bottom right image shows translucent rendering with a constant depth bias of -1.0. As the magnitude of the constant depth bias increases, the intersection-related depth issues become more prominent.
It is necessary to find a good balance between “bleeding” artifacts and intersection-related artifacts in order to get the best possible results, and we've found that adding depth biases is a great way to do so.
Masked Rendering
Home currently uses masked rendering for Avatars. The visual quality of masked materials depends on the MSAA level combined with a dithered pattern. We use a blue noise dithering pattern to avoid an ordered, dithered look; this is discussed in more detail in the Oculus blog post on Shader Snippets for Efficient 2D Dithering.
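As a rough illustration (not the snippet from that post), a masked material's opacity can be thresholded against a tiled blue-noise texture in screen space; the texture name and its 64x64 size here are assumptions:

```hlsl
// Sketch: dithering a masked material's opacity with tiled blue noise
// before the alpha test, to avoid an ordered, dithered look.
Texture2D<float> BlueNoiseTex;    // small tiling blue-noise texture (64x64 assumed)
SamplerState PointRepeatSampler;  // point filtering, wrap addressing

float DitheredOpacityMask(float Opacity, float2 SvPosition)
{
    // Sample blue noise in screen space so the pattern is stable per pixel.
    float Threshold = BlueNoiseTex.Sample(PointRepeatSampler, SvPosition / 64.0);
    // Fragments whose opacity falls below the per-pixel threshold are masked
    // out; with MSAA, alpha-to-coverage averages the surviving samples.
    return (Opacity > Threshold) ? 1.0 : 0.0;
}
```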
While Home is implemented with clustered forward rendering for materials and geometry, deferred shadows are still used for opaque and masked rendering. Deferred shadows are implemented by running an Early Z pass to compute depth, which is then used to create a shadow mask read within the Base pass. In other words, deferred shadows are computed in screen space and read by all non-translucent objects in the Base pass.
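Reading such a screen-space shadow mask in a Base pass pixel shader might look like the following sketch; the resource names and the simple diffuse term are assumptions for illustration:

```hlsl
// Sketch: reading a screen-space shadow mask (derived from Early Z depth)
// in the Base pass. The same mask is shared by all opaque/masked geometry.
Texture2D<float> ScreenShadowMask;
SamplerState LinearClampSampler;

float3 ShadeDirectional(float3 Albedo, float3 Normal, float3 LightDir,
                        float3 LightColor, float2 ScreenUV)
{
    float NoL    = saturate(dot(Normal, -LightDir));
    float Shadow = ScreenShadowMask.Sample(LinearClampSampler, ScreenUV);
    return Albedo * LightColor * NoL * Shadow;
}
```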
The image below depicts what is rendered in the Early Z pass with masked rendering for Avatars and what is later rendered in the Base pass.
Ultimately, shadow receiving is guaranteed for masked rendering because the opaque portions of a masked material respect lighting. Object sorting is also correct, so we are not required to use a second depth buffer.
Masked rendering does not encounter the same MSAA artifacts as translucent rendering because it uses alpha-to-coverage, which “maps the alpha value output by the pixel shader to the coverage mask of MSAA.” With MSAA and alpha-to-coverage combined, masked rendering has plenty of sub-samples to average, leading to higher visual quality. There is also no depth bias needed during the depth comparison for masked rendering. The images below compare masked and translucent rendering with MSAA; the image on the right shows the remaining “bleeding” artifacts from MSAA for translucent rendering that were not eliminated by the depth biases.
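For illustration, the idea behind alpha-to-coverage can be expressed as a mapping from opacity to MSAA sample bits; in practice the hardware feature is enabled in the blend state rather than written by hand. A sketch assuming 4x MSAA:

```hlsl
// Sketch: mapping opacity to a 4x MSAA coverage mask (the idea behind
// alpha-to-coverage). Real hardware also dithers the resulting pattern.
uint OpacityToCoverage4x(float Opacity)
{
    // Light up a number of sample bits proportional to opacity:
    // 0.0 -> 0b0000, 0.5 -> 0b0011, 1.0 -> 0b1111.
    uint NumSamples = (uint)round(saturate(Opacity) * 4.0);
    return (1u << NumSamples) - 1u;
}
```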
Masked rendering solves many of the issues that come up with translucent materials, including deterministic occlusion and support for deferred shadows in Unreal Engine.
Performance
Performance numbers are based on the following scene of an Avatar looking into the Avatar Editor mirror. The scene was set up manually, which introduces some human error; for example, the Avatar's distance from the mirror was not calculated during setup, so there could be more or fewer translucent/masked pixels to render in each scene. Profiling was done with the Medium Graphics setting, which corresponds to Unreal Engine's High Scalability setting and uses 4X MSAA.
CPU performance is similar for both rendering paths: translucent rendering requires approximately 750 draw calls per frame, while masked rendering requires approximately 720. GPU performance is evaluated by measuring the passes relevant to rendering masked and translucent Avatars. The numbers in the table indicate that translucent rendering adds almost 80% more GPU time per frame compared to masked rendering.
Masked rendering runs an Early Z pass to obtain the correct depth, then renders in the Base pass. Translucent rendering skips the Early Z pass and instead runs a Custom Depth pass after the Base pass has run; it then renders in the Translucency pass. The Resolve Scene Depth pass is needed for translucent rendering in order to resolve Custom Depth to the final Z when it is submitted to OVR to be used as a texture.
In Summary
The table below highlights the key differences between translucent and masked rendering. Due to the visual artifacts from MSAA, the added performance cost, and the lack of shadow receiving that occur with translucent rendering, Home on Rift uses only masked rendering for Oculus Avatars.
The items listed in the table above, along with the performance impacts discussed in this article, should help you evaluate whether to use translucent or masked rendering in real-time applications, especially when using forward rendering. Based on these findings, we updated the Avatar docs for Unity and Unreal and the Unity and UE4 integrations to include an option for masked rendering.