Object Interaction Part 4: Constrained Interactions
Oculus Developer Blog
Posted by Eric Cosky
June 2, 2017

This is the fourth and final article in a series that reviews common usage patterns, technical issues, trade-offs, and pitfalls that are important to consider when implementing a VR interaction system.

The fundamental goal of any interaction system is to be as easy to learn as possible, with consistent rules so users can anticipate how to interact with anything in the virtual world, regardless of any special behavior an object might have. When an accurate physical simulation is combined with logic that coordinates input, animation, and scripted behavior, the result is an experience where objects consistently behave as expected during player interactions.

Improving the Feel of Constrained Interactions

Interactable objects with constrained motion, such as levers, can be problematic if the user is required to maintain an exact hand position while interacting with them. Because there is no physical contact with the virtual object, the user can easily drift away from the constrained object while using it, so it is worth deciding how to respond when that happens. If the object moves more slowly than the user's hand, the problem is even more obvious. It often feels better to allow a wide margin of error when tracking hands on interactable objects, making it easier for users to communicate their intentions instead of requiring precise accuracy. For example, a lever might continue tracking even after the hand moves outside the original activation distance. Approximations like these respond to what the user is clearly trying to do without demanding precision in their movement. Depending on the situation, it might be useful to stop rendering the hand so the user will focus on the motion and be less concerned that the hand is not exactly on the object. Another option is to render a ghost version of the hands at the motion controller positions, while the solid, normal version of the hands remains on the interactable object.
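As a concrete illustration, this kind of forgiving tracking can be sketched as a small state machine: the lever starts tracking when the hand enters a tight grab radius, keeps tracking out to a wider release radius, and always clamps its motion to the constraint. The following is engine-agnostic Python pseudocode; the class, field names, and distances are illustrative assumptions, not from any particular SDK.

```python
from dataclasses import dataclass

@dataclass
class Lever:
    angle: float = 0.0           # current lever angle, in degrees
    min_angle: float = 0.0       # constraint limits
    max_angle: float = 90.0
    grab_radius: float = 0.1     # metres: hand must be this close to begin tracking
    release_radius: float = 0.4  # metres: wider margin once tracking has started
    tracking: bool = False

    def update(self, hand_distance: float, target_angle: float) -> None:
        """Advance one frame given the hand's distance from the lever
        and the angle the hand's position maps to."""
        if not self.tracking:
            if hand_distance <= self.grab_radius:
                self.tracking = True
        elif hand_distance > self.release_radius:
            # The hand drifted well away: stop tracking, keep current angle.
            self.tracking = False
        if self.tracking:
            # Follow the user's intent, but clamp to the constraint
            # rather than following the hand exactly.
            self.angle = max(self.min_angle, min(self.max_angle, target_angle))
```

The key point is the two radii: the wide release radius tolerates hand drift once interaction has begun, while the clamp keeps the lever on its constrained path no matter where the hand goes.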

Object Placement and Sockets

Some designs require placing objects into carefully authored positions. These usually include placing objects on other objects in the environment, holstering weapons, or attaching other equipment to the player. The user moves the object into the approximate position and releases it, and the object then snaps into place. The snap might occur instantly, or an animation may align the object while moving it into its final position.
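The animated variant of the snap can be as simple as interpolating from the drop pose to the authored socket pose over a short duration. Here is a minimal Python sketch, assuming positions are plain 3-tuples; a real engine would interpolate rotation as well and use its own math types.

```python
def lerp(a, b, t):
    """Linear interpolation between two 3-tuples."""
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def snap_positions(start, target, duration, dt):
    """Yield intermediate positions moving start toward target,
    ending exactly on the target pose."""
    t = 0.0
    while t < duration:
        t = min(t + dt, duration)  # clamp so the last step lands on target
        yield lerp(start, target, t / duration)

# A 0.2 s snap sampled at 0.05 s steps:
positions = list(snap_positions((0.0, 1.0, 0.0), (0.0, 1.5, 0.5), 0.2, 0.05))
```

Clamping the final step guarantees the object finishes exactly on the socket rather than approximately near it, which matters if gameplay logic checks the attached pose.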

If the application has a variety of objects with different types of attachments, there needs to be some way to ensure objects can only attach to appropriate targets. One way to do this is to define socket receiver volumes that mark the attach points, and a socket trigger volume on each attachable object. Each of these volumes carries a socket type identifier so that triggers only detect matching receivers. When a trigger volume overlaps a matching receiver volume, the socket's attachment logic activates. The simplest attachment logic transfers control of the held object to the socket when the user presses a button. More advanced logic might have a pre-release state that provides feedback indicating what the object will attach to if the user releases it. For instance, the controller might vibrate while the screen shows a ghosted version of the object in its attached position.
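The type-matching rule itself is straightforward. A hedged sketch in Python, assuming the overlap test has already fired and using made-up names for the trigger, receiver, and type identifiers:

```python
from dataclasses import dataclass

@dataclass
class SocketReceiver:
    socket_type: str          # e.g. "pistol_holster"
    occupied_by: object = None  # currently attached object, if any

@dataclass
class SocketTrigger:
    socket_type: str          # type carried by the attachable object

def try_attach(trigger: SocketTrigger, receiver: SocketReceiver, obj) -> bool:
    """Attach obj only if the socket types match and the receiver is free."""
    if receiver.occupied_by is None and trigger.socket_type == receiver.socket_type:
        receiver.occupied_by = obj
        return True
    return False
```

The same predicate (types match and receiver free) can drive the pre-release feedback described above: run it every frame during overlap to decide whether to show the ghosted preview, and again on release to commit the attachment.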

Interaction With Distant Objects

While the best experience is likely to occur when the player has enough room to fully explore the virtual space, with motion controllers that can reach everything and sufficient personal mobility to actually move about within the space, not everyone has these things. If a longer play session is a goal, it may also be desirable to minimize player fatigue by reducing the need for full-body movement. Supporting interaction with distant objects helps reach these goals and is worth considering unless it is fundamentally incompatible with the design goals.

Where a game might have had strict requirements on play areas or adaptive object placement in order to ensure users can reach everything, adding the ability for users to interact with objects at a distance can be useful because the size of the play area does not have to affect the visual design or introduce the need for locomotion. Players can continue to move about in the play area that fits their room and interactions can still occur even if an interactable object is out of reach. This can be especially useful for users with mobility constraints.
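A common way to decide which out-of-reach object the user is pointing at is to cast a ray from the controller and test it against the interactables. As a self-contained illustration, here is a ray-versus-sphere test in plain Python; a real project would use its engine's physics raycast instead, and the sphere is standing in for an object's interaction bounds.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Return True if a ray (normalized direction) hits a sphere in
    front of the origin. Spheres behind the origin count as misses."""
    oc = tuple(o - c for o, c in zip(origin, center))
    # Quadratic coefficients for |origin + t*direction - center|^2 = r^2,
    # with a = 1 because direction is normalized.
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return False               # ray line misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t >= 0.0                # nearest hit must be in front of the origin
```

In practice it often feels better to widen this test into a cone or to pick the interactable nearest the ray, for the same forgiving-margin reasons discussed earlier for levers.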

Orientation Remotes, Gamepads, And Gaze-Based Interfaces

Motion controllers provide interaction options that extend the functionality of traditional gamepad controllers in obvious ways. The capacitive touch inputs on the Oculus Touch controllers provide even more options for the interaction system. With all the great features that motion controllers provide, it can be easy to assume that a project that relies on them will never be playable on systems where an orientation remote, gamepad or trackpad is the only option. It turns out that many projects can support non-tracked input devices without breaking the design goals by including features that allow the user to interact with distant objects. Doing this does require some compromises on the behavior of held objects but, depending on the project, these may be acceptable.

Taking the time to plan a flexible interaction model that supports the design across the broadest set of input devices can affect the success of a project because, put simply, more people can play the game. If you want to reach the largest audience, you should consider a solution that supports a pointer mechanism where the HMD or orientation remote aims a cursor and the trackpad or gamepad buttons trigger interaction.
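One way to structure this is an input-abstraction layer: interaction code consumes a generic pointer direction that comes from the motion controller when one is tracked, or from the HMD gaze otherwise, and then selects whichever interactable lies closest to the pointer within a small cone. The sketch below is illustrative Python; the function names, the tuple-based vectors, and the 5-degree cone are all assumptions for the example.

```python
import math

def pointer_forward(controller_forward, hmd_forward):
    """Prefer the tracked controller's aim; fall back to head gaze."""
    return controller_forward if controller_forward is not None else hmd_forward

def select_with_pointer(forward, objects, max_angle_deg=5.0):
    """Return the object whose direction from the user is closest to the
    pointer ray, within a small cone. All vectors assumed normalized."""
    best, best_angle = None, max_angle_deg
    for name, direction in objects.items():
        dot = sum(a * b for a, b in zip(forward, direction))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
        if angle <= best_angle:
            best, best_angle = name, angle
    return best
```

Because the selection logic never asks which device produced the direction, the same code path serves Touch controllers, an orientation remote, or pure gaze with a gamepad button for activation.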

Wrapping Up

You may have noticed these articles are devoid of specific recommendations for libraries or other resources. There are some fantastic libraries that support most or all of the features described above. One major factor in choosing a library, aside from engine compatibility, is that each comes with its own limitations and, in some cases, functionality that extends beyond the scope of interaction. Not every project is suitable for integrating systems that redefine the behavior of locomotion, for instance. If you do need to create your own interaction system to meet specific needs, taking the time to study how previous efforts have approached these problems will almost certainly improve your results.