Hands are a promising new input method, but computer vision and tracking constraints limit what we can implement today. The following design guidelines enable you to create content that works within these limitations.
In these guidelines, you’ll find interactions, components, and best practices we’ve validated through researching, testing, and designing with hands. We’ve also included the principles that guided our process. This information is by no means exhaustive, but it should provide a good starting point so you can build on what we’ve learned so far. We hope this helps you design experiences that push the boundaries of what hands can do in virtual reality.
People have been looking forward to hand tracking for a long time, and for good reason. A number of qualities make hands an appealing input modality for end users.
Some complications come up when designing experiences for hands. Thanks to sci-fi movies and TV shows, people have exaggerated expectations of what hands can do in VR. But even expecting your virtual hands to work the same way your real hands do is currently unrealistic, for a few reasons.
You can find our solutions to some of these challenges in the Best Practices section.
To be an effective input modality, hands need to allow for the following interaction primitives, or basic tasks:
These interactions can be performed directly, using your hands as you might in real life to poke and pinch items, or indirectly through raycasting, which projects a ray from the hand toward objects or two-dimensional panels.
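To make the distinction concrete, here is a minimal sketch of how a runtime might tell a direct pinch apart from an indirect raycast selection. It assumes a hand-tracking SDK that reports per-frame joint positions as (x, y, z) tuples in meters; the joint names, the 2 cm pinch threshold, and the wrist-to-knuckle ray are illustrative assumptions, not part of these guidelines or any particular SDK.

```python
# Illustrative sketch only: joint names, threshold, and ray construction are
# assumptions, not taken from any specific hand-tracking runtime.
import math
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

PINCH_THRESHOLD_M = 0.02  # ~2 cm gap between thumb tip and index tip


def is_pinching(joints: Dict[str, Vec3]) -> bool:
    """Direct interaction: treat a small thumb-to-index distance as a pinch/select."""
    return math.dist(joints["thumb_tip"], joints["index_tip"]) < PINCH_THRESHOLD_M


def selection_ray(joints: Dict[str, Vec3]) -> Tuple[Vec3, Vec3]:
    """Indirect interaction: build a ray from the wrist through the index knuckle.

    Returns (origin, normalized direction). A real system would stabilize the
    ray with filtering and may anchor it at the shoulder or eye instead.
    """
    origin = joints["wrist"]
    target = joints["index_knuckle"]
    d = tuple(t - o for t, o in zip(target, origin))
    length = math.sqrt(sum(c * c for c in d)) or 1.0
    return origin, tuple(c / length for c in d)


if __name__ == "__main__":
    # Hypothetical single frame of joint data for one hand.
    frame = {
        "wrist": (0.0, 1.0, 0.0),
        "index_knuckle": (0.02, 1.02, -0.08),
        "index_tip": (0.03, 1.03, -0.15),
        "thumb_tip": (0.035, 1.025, -0.14),
    }
    print("pinching:", is_pinching(frame))
    print("ray:", selection_ray(frame))
```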
You can find more of our thinking in our Interactions section.
Today, human ergonomics, technological constraints, and disproportionate user expectations all make for challenging design problems. But hand tracking has the potential to fundamentally change the way people interact with the virtual world around them. We can’t wait to see the solutions you come up with.