Medium Under the Hood: Part 2 - Move Tool Implementation

Oculus Developer Blog
Posted by David Farrell
October 3, 2017

Oculus Medium is an immersive VR experience, designed for Touch, that lets you sculpt, model, paint, and create tangible objects in a VR environment. Earlier this summer, we released an update that included the Move Tool. The Move Tool works by converting the signed distance field data to a triangle mesh, deforming that mesh using a displacement field, and converting the triangle mesh back to a signed distance field.

In our first post we discussed how the Move Tool was developed and some directions that did and didn’t work out. Our second installment describes how both the signed distance field and triangle mesh are useful surface representations, how Medium converts a triangle mesh to a signed distance field, and some ways we’ve found this applicable to other sculpting operations.


In Medium, sculpting operations work with a signed distance field (SDF). Since early versions of Medium, its renderer has used triangle meshes, generated using an algorithm called Transvoxel. Transvoxel is a Marching Cubes-like technique that creates several level-of-detail triangle meshes that can be stitched together seamlessly. The GPU renders these triangle meshes instead of rendering the implicit surface of the SDF.

When developing the Move Tool, we explored converting the triangle mesh back to the signed distance field representation. We were delighted to find that the triangle mesh to SDF conversion was higher quality than we expected. While it is a lossy conversion, in practice, the loss of data isn’t perceptible with the kinds of sculpts created in Medium.

We realized that being able to choose either the SDF or mesh representation for sculpting operations is a powerful technique. Each representation has strengths and weaknesses that complement the other:

Signed distance fields:

Strengths:

  • Boolean CSG operations are straightforward and robust
  • Surfaces are always watertight and never self-intersecting
  • No ambiguity about whether a point is inside or outside the surface
  • Topology modification is easy

Weaknesses:

  • Hard to move a surface in a general way
Triangle meshes:

Strengths:

  • The most common representation in computer graphics
  • Vertices of the mesh are easily moved about

Weaknesses:

  • Boolean CSG operations are hard to make robust
  • Self-intersecting surfaces are hard to detect and repair

After finishing the Move Tool, we discovered that we could start from either surface representation and convert to the other. This is a powerful technique that we’ve used not just for the Move Tool but also in other sculpting operations.

On the left is a sculpt represented as a wireframe triangle mesh; on the right is one slice of the signed distance field representing the same sculpt.


To convert the triangle mesh to an SDF, Medium makes an important assumption: it assumes that the input triangle mesh is two-manifold. This is true for all meshes generated by Transvoxel because surfaces represented by signed distance fields are always watertight. This assumption makes it easy to determine whether a given point is inside or outside the mesh.

The conversion is done with the following steps:

1) Rasterize the mesh into per-pixel fragment lists

For any watertight, two-manifold mesh, you can determine if a point is inside or outside the surface by casting a ray in any direction from that point and counting how many surfaces the ray crosses. If an even number of surfaces were crossed, then the point is outside the surface, and if an odd number were crossed, the point is inside. Since the ray can be cast in any direction, we rasterize along the X axis onto the YZ plane instead of raytracing to find the surfaces. Rasterization and raytracing have a lot in common, but in this case, rasterization means we can avoid building an acceleration structure like raytracing would require.
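The parity rule can be sketched with a ray cast instead of rasterization (Medium rasterizes, but the counting logic is the same). This is a minimal illustration, not Medium’s implementation; the helper names are ours, and the triangle test is the standard Möller–Trumbore intersection:

```python
def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def ray_hits_triangle(orig, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: does a ray starting at orig cross this triangle?"""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:                 # ray parallel to the triangle's plane
        return False
    f = 1.0 / a
    s = sub(orig, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:
        return False
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return False
    return f * dot(e2, q) > eps      # only count hits in front of the origin

def is_inside(triangles, point):
    """Odd number of crossings along +X means the point is interior."""
    crossings = sum(ray_hits_triangle(point, (1.0, 0.0, 0.0), *tri)
                    for tri in triangles)
    return crossings % 2 == 1
```

For a watertight mesh the answer is independent of the ray direction, which is exactly why the axis-aligned rasterization described above is a valid substitute.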

While rasterizing, we create per-pixel lists of fragments, and record whether the fragment comes from a front- or back-facing triangle. Every rasterized fragment is recorded in the per-pixel lists; there’s no depth buffer test to reject fragments. These lists are used in the next step.

Here are the rasterized fragments. The light green fragments are front facing and the dark green fragments are back facing:

2) Generate spans of interior regions by pairing front/back facing fragments

After all triangles are rasterized, we iterate through each pixel’s list of fragments. We sort the fragments in each list by their depth coordinate, and find pairs of front/back facing fragments. These pairs represent the solid, interior regions of the object, and are stored as spans, which are the starting and ending X coordinates of the pair of fragments. With this information, we know the sign of any point in the output signed distance field.
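The span-building pass can be sketched as follows. This is an illustrative simplification under our own naming, assuming each fragment is recorded as an `(x_depth, is_front_facing)` pair, with front-facing fragments entering the solid and back-facing fragments leaving it:

```python
def build_spans(fragments):
    """fragments: list of (x_depth, is_front_facing) recorded for one pixel
    of the YZ plane. Returns [(x_start, x_end)] solid interior spans."""
    frags = sorted(fragments)                 # order fragments along the X axis
    spans = []
    i = 0
    while i + 1 < len(frags):
        (x0, entering), (x1, entering_next) = frags[i], frags[i + 1]
        if entering and not entering_next:    # front face in, back face out
            spans.append((x0, x1))
            i += 2
        else:
            i += 1  # stray fragment; a clean two-manifold mesh shouldn't produce these
    return spans

def sign_at(spans, x):
    """Negative (interior) if any span covers x, else positive (exterior)."""
    return -1 if any(x0 <= x <= x1 for x0, x1 in spans) else +1
```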

Here are the spans rendered as magenta lines:

3) Generate unsigned distance in bounding box around each triangle

Medium stores its data in a narrow band level set, where the narrow band has a width of 2 voxels in each direction. Since we only care about data in the narrow band, the mesh to SDF conversion code forms a bounding box around each triangle, and inflates that bounding box to include the narrow band. At each grid point in that box, we find the unsigned distance from that grid point to that triangle. We update the grid point’s distance value if this triangle is the closest one we’ve found to the point (if the distance is less than the current value).
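The per-triangle scan can be sketched like this. It is a sketch under our own assumptions (a dictionary standing in for the voxel grid, and a straightforward point-to-triangle distance), not Medium’s optimized implementation:

```python
import math

def sub(a, b): return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
def cross(a, b):
    return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])

def seg_dist(p, a, b):
    """Distance from p to segment ab."""
    ab = sub(b, a)
    t = max(0.0, min(1.0, dot(sub(p, a), ab) / dot(ab, ab)))
    closest = (a[0] + t*ab[0], a[1] + t*ab[1], a[2] + t*ab[2])
    return math.dist(p, closest)

def tri_dist(p, v0, v1, v2):
    """Unsigned distance from p to triangle (v0, v1, v2)."""
    n = cross(sub(v1, v0), sub(v2, v0))
    nn = dot(n, n)
    if nn > 0.0:
        # If p projects inside the triangle, the plane distance is the answer
        inside = all(dot(cross(sub(b, a), sub(p, a)), n) >= 0.0
                     for a, b in ((v0, v1), (v1, v2), (v2, v0)))
        if inside:
            return abs(dot(sub(p, v0), n)) / math.sqrt(nn)
    return min(seg_dist(p, v0, v1), seg_dist(p, v1, v2), seg_dist(p, v2, v0))

NARROW_BAND = 2  # voxels on each side of the surface, as in the post

def scan_triangle(dist_grid, tri):
    """Update dist_grid (dict keyed by integer voxel coords) with the unsigned
    distance to tri, over tri's bounding box inflated by the narrow band."""
    lo = [math.floor(min(v[i] for v in tri)) - NARROW_BAND for i in range(3)]
    hi = [math.ceil(max(v[i] for v in tri)) + NARROW_BAND for i in range(3)]
    for x in range(lo[0], hi[0] + 1):
        for y in range(lo[1], hi[1] + 1):
            for z in range(lo[2], hi[2] + 1):
                d = tri_dist((x, y, z), *tri)
                if d < dist_grid.get((x, y, z), math.inf):
                    dist_grid[(x, y, z)] = d  # this triangle is the closest so far
```

Running `scan_triangle` over every triangle leaves each narrow-band voxel holding the distance to its nearest triangle.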

These distance values are stored in blocks of 8x8x8 voxels. Distance values outside of the narrow band are not needed, so these blocks are stored sparsely to save memory. With this information, we know the unsigned distance of any point in the output signed distance field.
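A sparse block store of this shape can be sketched as a dictionary of 8×8×8 tiles; this is our own minimal illustration of the idea, not Medium’s data structure:

```python
import math

BLOCK = 8  # voxels per block edge, as in the post

class SparseDistanceGrid:
    """Distance values in sparsely allocated 8x8x8 blocks; only blocks
    touched by the narrow band are ever allocated."""
    def __init__(self):
        self.blocks = {}  # (bx, by, bz) -> flat list of 512 distances

    def _locate(self, x, y, z):
        key = (x // BLOCK, y // BLOCK, z // BLOCK)
        idx = (x % BLOCK) * BLOCK * BLOCK + (y % BLOCK) * BLOCK + (z % BLOCK)
        return key, idx

    def set_min(self, x, y, z, d):
        key, idx = self._locate(x, y, z)
        block = self.blocks.setdefault(key, [math.inf] * BLOCK**3)
        block[idx] = min(block[idx], d)  # keep the closest triangle's distance

    def get(self, x, y, z):
        key, idx = self._locate(x, y, z)
        block = self.blocks.get(key)
        return block[idx] if block else math.inf
```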

Below, you can see these blocks of 8x8x8 voxels:

Putting It All Together

Once these steps are complete, we have all the information we need for the signed distance field. For any point in the 3D grid, we use the results of step 2 to see if a span covers that point, and thus if it should have a positive or negative sign. From the results of step 3, we know the magnitude of the distance of that point. Together, we know the signed distance of that point.
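Combining the two steps can be sketched in a few lines. The container shapes and the sign convention (negative inside, positive outside) are our assumptions for illustration:

```python
def signed_distance(point, spans_yz, unsigned):
    """Combine step 2 (spans_yz: (y, z) -> [(x0, x1)] interior spans) with
    step 3 (unsigned: (x, y, z) -> narrow-band distance magnitude)."""
    x, y, z = point
    d = unsigned.get(point)
    if d is None:
        return None  # outside the narrow band; no distance stored
    inside = any(x0 <= x <= x1 for x0, x1 in spans_yz.get((y, z), ()))
    return -d if inside else d  # negative inside, positive outside
```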

This shows the flow from signed distance field → triangle mesh → deformed triangle mesh → back to signed distance field


Using the two complementary surface representations is useful in other sculpting operations besides the Move Tool.

Changing Resolution

Medium 1.0 had a way to increase the resolution of a layer in a sculpt, but no way to decrease the resolution. That’s because we only store distance values up to a narrow band distance away. When increasing resolution, points are scaled such that they are closer together, and so they stay within the narrow band. But when decreasing resolution, the points are scaled to be farther away from each other. If they are scaled to be outside the narrow band, then their distance value is clamped to the narrow band maximum, which is essentially meaningless.

We realized that we could support a Decrease Resolution operation by taking the SDF’s triangle mesh, scaling that mesh, and then converting that back to SDF. That works so well that we re-implemented our old Increase Resolution to do the same thing, just scaling by a different value. This can even be used for a resolution change that contains non-uniform scale.
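The mesh-side half of that operation amounts to scaling vertices before feeding them back through the mesh-to-SDF conversion. A minimal sketch, with names of our own choosing:

```python
def rescale_vertices(vertices, scale):
    """Scale mesh vertices before converting back to an SDF at the new
    resolution. scale may be a scalar or a per-axis (sx, sy, sz) triple,
    which is how a non-uniform resolution change falls out for free."""
    if isinstance(scale, (int, float)):
        scale = (scale, scale, scale)
    sx, sy, sz = scale
    return [(x * sx, y * sy, z * sz) for (x, y, z) in vertices]
```

A scale below 1.0 implements Decrease Resolution, a scale above 1.0 implements Increase Resolution, and a triple like `(2.0, 1.0, 1.0)` is a non-uniform change.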

Three Decrease Resolution operations followed by three Increase Resolution operations

Imported Mesh to SDF (Copy To Clay)

Another useful sculpting operation is what we call Copy To Clay. That operation imports a 3D triangle mesh created externally from Medium and converts it to sculptable SDF data. This is straightforward with the triangle mesh to SDF code, with the caveat that the mesh to SDF technique described above assumes that the triangle mesh is two-manifold. This is not always true of meshes created outside of Medium, so to support arbitrary meshes, the Copy To Clay code rasterizes the mesh three times, creating spans in three different directions: down the X axis, down the Y axis, and down the Z axis. For each point, we take a vote with the spans in the three directions. The two (or, ideally, three) spans that agree are used to decide whether that point is interior to the object. This isn’t a perfect solution, but it does fix many problematic meshes that we’ve seen.
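The vote itself is a simple majority over the three per-axis classifications; a sketch, assuming each axis’s span test has already been reduced to a boolean:

```python
def interior_by_vote(inside_x, inside_y, inside_z):
    """Majority vote across the three per-axis span tests. For a clean
    two-manifold mesh all three agree; for problematic meshes, the two
    agreeing axes decide."""
    return inside_x + inside_y + inside_z >= 2
```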

On the left is a wireframe view of a mesh created outside of Medium; on the right is the same mesh imported into Medium and copied to clay

Level of Detail

When we implemented Decrease Resolution, the first thing we noticed was how flipping between Increase Resolution and Decrease Resolution made really nice level-of-detail meshes. The detail faded away in each smaller version of the sculpt. We realized it’s easy and very quick to generate level-of-detail meshes by rescanning a sculpt’s triangle mesh into a lower resolution SDF. We’re investigating this to quickly generate small preview meshes for thumbnail viewing in the Medium home screen and web page. Medium’s been able to decimate and export meshes since its 1.1 update, but generating level of detail through this method is faster than Medium’s edge collapse decimator.

The original high resolution

Lower resolution LOD mesh

However, using this technique to generate LOD meshes has two problematic issues. First, this LOD scheme leads to a uniform loss of detail, whereas a more traditional edge collapse decimation maintains finer details. Medium’s decimator uses a quadric error metric to guide edge collapses, which preserves features in the mesh but takes more time to compute. Second, downsampling the mesh can cause aliasing, leading to LOD meshes with missing geometry. A simple example is when a part of a mesh falls in between sample points on the grid. Although the distance values of those points will be correct, they will all have a positive sign (meaning they are outside the sculpt’s surface), and isosurface extraction algorithms won’t place triangles in between those samples.

On the left, the original surface on top of a high-resolution grid; in the middle, the same surface on top of a low resolution grid; on the right, the surface after isosurface extraction using the low resolution grid. Notice how the rightmost image loses the parts of the surface that fall in between grid points.


In summary, it is very useful to be able to take advantage of both representations of our data set and to quickly convert between them. This was first implemented for the Move Tool, and you can see how it has already helped us solve other problems. We’re continuing to work on new and interesting ways to take advantage of this to speed up existing operations and add new ways to manipulate your sculpts.

In the next part of this series, we’ll cover how Medium performs the Move Tool and other sculpting operations asynchronously so that the application runs at VR framerates.

Using Move Tool with Dominic Qwek

Using Move Tool with Glen Southern