There are a few things that make hands a unique and unprecedented input modality. They’re less complicated than controllers, whose buttons and analog sticks can require a learning curve for newer users. They don’t need to be paired with a headset. And by virtue of being attached to your arms, hands are constantly present in a way that other input devices can’t be.
But they also come with some challenges: Hands don’t provide haptic feedback like controllers do, and there’s no inherent way to turn them on or off. Here are the principles that helped us come up with solutions to the unique design challenges that hand tracking presents.

Hands don’t come with buttons or switches the way other input modalities do. This means there’s nothing hinting at how to interact with the system (the way buttons hint that they’re meant to be pushed), and no tactile feedback to confirm user actions. To solve this, we strongly recommend communicating affordances through clear signifiers and continuous feedback on all interactions.
For example, our system pointer component affords the ability to make selections. Its squishable design signifies that you’re meant to pinch to interact with it. Then, the pointer itself provides feedback as soon as the user begins to pinch, by changing shape and color. Once the fingers touch, the pointer provides confirmation with a distinct visual and audible pop.
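The pinch interaction described above can be thought of as a small state machine driven by how far the fingers have closed. Below is a minimal, hypothetical sketch of that idea in Python; it assumes a hand-tracking runtime that reports a normalized pinch strength in [0.0, 1.0], and the names (`PinchState`, `pinch_state`, the thresholds) are illustrative, not part of any real Oculus API.

```python
from enum import Enum

class PinchState(Enum):
    OPEN = "open"          # fingers apart; pointer at rest
    PINCHING = "pinching"  # fingers closing; pointer squishes and changes color
    SELECTED = "selected"  # fingers touching; fire the visual/audible pop

def pinch_state(pinch_strength: float, start: float = 0.1,
                select: float = 0.9) -> PinchState:
    """Map a normalized pinch strength to a feedback state.

    Feedback begins as soon as the pinch starts (continuous feedback),
    and a distinct confirmation fires only once the pinch completes.
    """
    if pinch_strength >= select:
        return PinchState.SELECTED
    if pinch_strength >= start:
        return PinchState.PINCHING
    return PinchState.OPEN
```

The key design point the sketch captures is that feedback is continuous rather than binary: the pointer reacts the moment the user begins to pinch, not only at the final selection.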
For more on signifiers and feedback, see our Best Practices.

Hands are practically unlimited in terms of how they move and the poses they can form. This presents a world of opportunities, but too much possibility can lead to a noisy system.
Throughout these guidelines, you’ll find places where we recommend limitations. We limit which of your hand’s motions the system interprets, making interactions more accurate. We snap objects to two-dimensional surfaces when rotating and scaling, limiting their degrees of freedom and providing more control. And we created components like pull-and-release sliders, which limit movement to a single axis for more precise selection.
These limitations help increase accuracy, and actually make it easier to navigate the system or complete an interaction.
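One way to picture this kind of limitation is the single-axis slider mentioned above: the hand moves freely in 3D, but only the component of its movement along the slider’s axis is used. Here is a hedged sketch in plain Python vector math; nothing here is a real SDK call, and the function name and coordinate conventions are assumptions for illustration.

```python
def constrain_to_axis(hand_pos, grab_pos, axis):
    """Project the hand's displacement since grab onto a single axis,
    discarding the other two degrees of freedom for more precise control.

    hand_pos, grab_pos, axis: 3D points/vectors as (x, y, z) sequences.
    Returns the constrained position on the line through grab_pos along axis.
    """
    # Displacement of the hand from where the slider was grabbed.
    dx = [h - g for h, g in zip(hand_pos, grab_pos)]
    # Normalize the axis, then keep only the component along it.
    length = sum(a * a for a in axis) ** 0.5
    unit = [a / length for a in axis]
    t = sum(d * u for d, u in zip(dx, unit))
    return [g + t * u for g, u in zip(grab_pos, unit)]
```

For example, a hand that has drifted to (0.2, 0.5, 0.1) since grabbing a slider at the origin that runs along the x-axis would be treated as if it were at (0.2, 0, 0): the sideways and vertical drift is simply ignored.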

We envision a future where people can use the right input modality for the right job in the right context. Hands not only offer unique new interaction possibilities; they’re also an important step toward that future.
With other input devices like controllers and computer mice, batteries can die or you may need to put them down to pick something else up. In those scenarios, you can use your hands for an uninterrupted experience. So while you’re designing experiences for hands, consider the connective capabilities that this input modality can make room for.
It’s very tempting to simply adapt existing interactions from input devices like Touch controllers and apply them to hand tracking. But that approach limits you to already-charted territory, and can produce interactions that would feel better with controllers while missing out on the benefits of hands.
Instead, focus on the unique strengths of hands as an input and be aware of the specific limitations of the current technology to find new hands-native interactions. For example, one question we asked was how to provide feedback in the absence of tactility. The answer led to a new selection method, which then opened up the capability for all-new 3D components.
It’s still early days, and there’s still so much to figure out. We hope the solutions you find guide all of us toward incredible new possibilities.