There are several factors we’ve experimented with when it comes to designing interactions. The Interaction Options section breaks down some of the different considerations you might face, depending on the kind of experience you’re designing. Then, the Interaction Primitives section breaks down the options that work best for specific tasks, based on what we’ve experimented with.
The best experiences incorporate multiple interaction methods, to provide the right method for any given task or activity.
Here we outline the different options to consider when designing your experience.
Near-Field Components
The components are within arm’s reach. When using direct interactions, this space should be reserved for the most important components, or the ones you interact with most frequently.
Far-Field Components
The components are beyond arm’s reach. To interact with them, you either use a raycast or locomote closer to the component to bring it into the near-field.
Note: A mix of near-field and far-field works for many experiences.
Direct
With direct interactions, your hands interact with components, so you’d reach a finger out to “poke” at a button, or reach out a hand to “grasp” an object by pinching it. This method is easy to learn, but it limits you to interactions that are within arm’s reach.
Raycasting
Raycasting is similar to the interaction method you may be familiar with from the Touch controllers. This method can be used for both near- and far-field components, since it keeps users in a neutral body position regardless of target distance.
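To make the raycasting idea concrete, here is a minimal hit-test sketch. The function name and the sphere-shaped target are illustrative assumptions, not part of any particular runtime:

```python
def ray_hits_sphere(origin, direction, center, radius):
    """Does a ray cast from the hand pass within `radius` of the target?

    `direction` is assumed to be a normalized 3D vector.
    """
    oc = [c - o for o, c in zip(origin, center)]
    # Project the origin-to-center vector onto the ray direction.
    t = sum(a * b for a, b in zip(oc, direction))
    if t < 0:
        return False  # the target is behind the ray origin
    closest = [o + t * d for o, d in zip(origin, direction)]
    dist_sq = sum((p - c) ** 2 for p, c in zip(closest, center))
    return dist_sq <= radius ** 2
```

Because the test depends only on the ray, it works identically for near- and far-field targets, which is what keeps the user’s body position neutral.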
Poking
Poking is a direct interaction where you extend and move your finger toward an object or component until you “collide” with it in space. It can only be used on near-field objects, and the lack of tactile feedback means you may need to rely on signifiers and other forms of feedback to compensate.
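A poke can be modeled as a simple collision test between the fingertip and the button’s volume. This is a minimal sketch; the axis-aligned box shape and all names are illustrative assumptions:

```python
def is_poking(fingertip, button_center, button_half_extents):
    """True when the fingertip is inside the button's axis-aligned volume."""
    return all(
        abs(f - c) <= h
        for f, c, h in zip(fingertip, button_center, button_half_extents)
    )

# A fingertip 2 cm in front of a thin 4x4 cm button has not collided yet:
assert is_poking((0.0, 0.0, 0.02), (0.0, 0.0, 0.0), (0.02, 0.02, 0.005)) is False
```

Since there is no physical resistance, an implementation would typically pair a test like this with the signifiers and feedback mentioned above.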
Pinching
Pinching can be used with both the direct and raycasting methods. Aim your raycast or move your hand toward your target, then pinch to select or grasp it. Feeling your thumb and index finger touch can help compensate for the lack of tactile feedback from the object or component.
Note: Using this method also makes for a more seamless transition between near- and far-field components, so the user can pinch to interact with targets at any distance.
Absolute
With absolute movements, there’s a 1:1 relationship between your hand and the output. The output can be a cursor, an object, or an interface element. For example, for every 1° your hand moves, the cursor moves 1° in that direction. This feels intuitive and mirrors the way physical objects behave, but it can be tiring and limits the range of possible interactions.
Relative
With relative movements, you can adjust the ratio between how far your hand moves and how far the output moves. For example, for every 1° your hand moves, the cursor moves 3° in that direction. You can make the ratio smaller for more precision, or larger for more efficiency when moving objects across broad distances.
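The absolute and relative mappings differ only in their control-display gain. A minimal sketch, using the 1:1 and 3:1 ratios from the examples above:

```python
def apply_gain(hand_delta_deg, gain):
    """Map a hand movement to an output movement by a fixed gain ratio."""
    return hand_delta_deg * gain

assert apply_gain(1.0, 1.0) == 1.0  # absolute: 1 deg of hand -> 1 deg of cursor
assert apply_gain(1.0, 3.0) == 3.0  # relative: 1 deg of hand -> 3 deg of cursor
assert apply_gain(1.0, 0.5) == 0.5  # gain below 1 trades reach for precision
```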
Note: For even more efficiency, you can use a variable ratio, where the output moves exponentially faster the more quickly you move your hand. Another option is an acceleration-based ratio, which is similar to using a joystick. If a user keeps their hand in a far-left position and holds it there, the object will continue moving in that direction. However, this makes it more difficult to place an object where you want it, so it’s not recommended for experiences where precise placement is the goal.
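The two variants in the note can be sketched as simple mapping functions. The constants and names are illustrative, and the variable ratio is shown here with a linear speed-dependent gain as a simplification:

```python
def variable_gain(hand_speed, base_gain=1.0, k=2.0):
    # Gain grows with hand speed: slow movements stay precise,
    # fast movements cover more ground per degree of hand motion.
    return base_gain + k * hand_speed

def rate_control(hand_offset, rate=0.5, dt=1 / 72):
    # Joystick-style acceleration: a held offset produces continuous
    # velocity, so the object keeps moving while the hand stays displaced.
    return hand_offset * rate * dt

assert variable_gain(0.0) == 1.0  # at rest, behaves like a 1:1 mapping
assert rate_control(0.0) == 0.0   # centered hand -> no drift
assert rate_control(-0.1) < 0     # far-left hand keeps moving the object left
```

The per-frame displacement in `rate_control` never goes to zero while the hand is offset, which is exactly why precise placement is hard with this mapping.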
Abstract Gestures
Abstract gestures are specific gestural movements used to navigate the system and manipulate objects. You may have seen futuristic versions of these in sci-fi movies and TV shows. While they can be useful for triggering abstract functions, in practice abstract gestures come with a few drawbacks:
We recommend using abstract gestures sparingly, relying instead on analog control to manipulate and interact with most virtual interfaces and objects.
The System Gesture is the only abstract gesture implemented in our system.
Analog Control
When using analog control, your hand’s motion has an observable effect on the output, which makes interactions feel more responsive. It’s also easy to understand: when you move your hand to the right, the cursor, object, or element moves to the right. Raycasting and direct interactions are both examples of analog control.
These interaction options work together in different ways, depending on the circumstances. For example, if your target objects are in the far-field, your only available interaction method is raycasting (unless you bring the object into the near-field). For direct poking interactions with near-field objects, the only available hand-output relationship is absolute.
This chart helps lay out the available options for different circumstances.
As we said in the Introduction, hands need to support three interaction primitives, or basic tasks, to be an effective input modality.
As you’ll see, some of the above interaction options may work better than others for your specific experience.
There are two kinds of things you might select: 2D panel elements and 3D objects. Poking works well for buttons or panel selections. But if you’re trying to pick up a virtual object, we’ve found that the thumb-index pinch works well, since it helps compensate for the fact that virtual objects don’t provide tactile feedback. This can be performed both directly and with raycasting.
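A pinch is commonly detected by thresholding the distance between the thumb and index fingertips. A minimal sketch; the 1.5 cm threshold is an illustrative assumption, not a system value:

```python
import math

def is_pinching(thumb_tip, index_tip, threshold_m=0.015):
    """True when the thumb and index fingertips are nearly touching."""
    return math.dist(thumb_tip, index_tip) < threshold_m

assert is_pinching((0.0, 0.0, 0.0), (0.01, 0.0, 0.0)) is True   # 1 cm apart
assert is_pinching((0.0, 0.0, 0.0), (0.05, 0.0, 0.0)) is False  # 5 cm apart
```

The same detector serves both direct grasping and raycast selection, which is part of what makes the pinch transition smoothly between near- and far-field targets.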
If the target is within arm’s reach, you can move it with a direct interaction. Otherwise, raycasting can help maintain a neutral body position throughout the movement.
Absolute movements can feel more natural and easy, since this is similar to how you move and place items in real life. For more efficiency, you can use relative movements to move objects easily across long distances or to place them in precise locations.
If you’re looking for an intuitive rotation method and aren’t too worried about precision, you can make objects follow the rotation of a user’s hand when grasped.
A more precise method of rotation is to snap the object to a 2D surface, like a table or a wall. This can limit the object’s degrees of freedom so that it can only rotate on one axis, which makes it easier to manipulate. If it’s a 2D object, you can similarly limit its degrees of freedom by having the object automatically rotate to face the user.
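Constraining degrees of freedom can be sketched as a filter on the hand’s rotation, where only the allowed axes pass through to the object. All names here are illustrative:

```python
def constrained_rotation(hand_yaw, hand_pitch, hand_roll, allow=("yaw",)):
    """Pass through only the rotation axes permitted by the surface snap."""
    return {
        "yaw": hand_yaw if "yaw" in allow else 0.0,
        "pitch": hand_pitch if "pitch" in allow else 0.0,
        "roll": hand_roll if "roll" in allow else 0.0,
    }

# Snapped to a table: the object can only spin about the vertical axis.
assert constrained_rotation(30.0, 10.0, 5.0) == {"yaw": 30.0, "pitch": 0.0, "roll": 0.0}
```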
Uniform scaling, which resizes an object while preserving its proportions, is easier than stretching it vertically or horizontally.
Similar to rotation, an easy method for resizing is to snap an object to a 2D surface and allow it to align and scale itself. However, this limits user freedom, since the size of the object is then automatically determined by the size of the surface.
To define specific sizes, you can also resize objects using both hands. While your primary hand pinches to grasp the object, the second hand pulls on another corner to stretch or shrink it. We found this to be problematic for accessibility reasons, as people may have difficulty with hand dexterity, or their second hand might be occupied. Plus, this method increases the likelihood of your hands crossing over each other, which leads to occlusion.
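The two-handed technique reduces to a ratio of hand separations: the object’s scale factor is the current distance between the pinch points divided by the distance when the grab started. A minimal sketch with illustrative names:

```python
import math

def two_hand_scale(initial_a, initial_b, current_a, current_b):
    """Scale factor from the change in distance between two pinch points."""
    return math.dist(current_a, current_b) / math.dist(initial_a, initial_b)

# Pulling the hands twice as far apart doubles the object's size:
assert abs(two_hand_scale((0, 0, 0), (0.1, 0, 0), (0, 0, 0), (0.2, 0, 0)) - 2.0) < 1e-9
```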
Handles
Another method for manipulation is to attach handles to your object: separate handles control movement, rotation, and resizing. Users pinch (either directly or with a raycast) to select the control they want, then perform the interaction.
This allows users to manipulate objects easily regardless of the object’s size or distance. Separating movement, rotation, and scale also enables precise control over each aspect, and allows users to perfect the object’s positioning and change its rotation without the object moving around in space. However, having to perform each manipulation task separately can become tedious, particularly in contexts where this level of precision is not necessary.