One of the most common pieces of feedback I give to developers is to be more 'generous' with interactions. Studios coming from traditional PC, console, or even mobile environments are used to working with interaction platforms that support discrete interactions, high levels of precision, and full visibility of the interface.
In VR applications, users have significantly more space to keep track of and a much wider range of possible positions from which to interact. Accuracy may be lower in general, especially for newer users, and interactions may occur at the edge of, or even outside of, the visual field.
The concept of interaction generosity in real-time applications isn't a new one. The goal is to provide appropriate ranges and buffers around some action so that users can be 'close enough,' and in more complex implementations, predict player intent so that they're less likely to perform an action they don't expect.
A common example: video game jumping. Anyone who's played the original Prince of Persia is familiar with the punishing precision required to successfully jump across a gap in the 1989 classic. Time the jump too early, and you'll fall short. Time it too late, and you'll be falling before you can jump! Modern platformers add a 'tolerance' to platform jumps, where players have a few extra frames after their character stops being 'grounded' in which to execute their jump. This helps compensate for display refresh times, allows players to 'anticipate' the jump, and even offsets the time it literally takes nerve signals to travel. While the implementation is less precise, it feels more accurate to the player.
Some games will take this even further and factor in player intent based on context. If a player starts their jump a bit early, but it's clear they're intending to cross to the other platform, you can 'nudge' the trajectory a bit to ensure a smooth landing. This kind of behavior is especially important in games where the focus isn't heavy on the platforming: the less of a core pillar a 'skill-based' mechanic is, the more frustration a lack of generosity can introduce.
In VR, users can interact in a 360° space within a distance of at least their arm reach, or farther for room-scale experiences. If most of your interactions occur far apart from each other, the first step you can take to increase interaction generosity is simply increasing the area in which a user's interaction can occur. This is especially important for objects a user may not be looking directly at when they try to interact. Have a belt-mounted tool on the user's left side? Don't ignore interactions just because their hand isn't actually touching it! If their arm is down and a grab input is detected, just let them have it! In some cases, this type of interaction can be purely directional. If the object is in 3D space, particularly if it's not anchored to the player, increasing the range of the trigger volume for an object being 'selected for interaction' achieves the same effect.
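As a minimal sketch of this idea, the check below accepts a grab when the hand is within an expanded trigger volume around an object, rather than requiring actual contact. The function names and the 1.5× scale factor are illustrative assumptions, not from any particular engine:

```python
import math

def can_grab(hand_pos, tool_pos, base_radius, generosity=1.5):
    """Return True if the hand is inside the tool's *expanded* grab volume.

    base_radius is the tool's visual size; generosity scales the trigger
    volume so a grab input near (but not touching) the tool still succeeds.
    Positions are (x, y, z) tuples in meters.
    """
    dist = math.sqrt(sum((h - t) ** 2 for h, t in zip(hand_pos, tool_pos)))
    return dist <= base_radius * generosity

# A hand 0.18 m from a belt tool with a 0.15 m radius fails a strict
# contact check, but succeeds with the generous volume.
```

Tuning the scale factor per object (larger for things at the edge of vision, tighter for dense clusters) keeps the generosity from causing accidental grabs.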
Note that while this can also work well for distance-based interactions, if you add a discrete fixed interaction visual, like a 'laser beam' for a pointer, you'll want to keep volumes tight, as users will have increased precision. However, if you allow for a 'flexible' selection at distance, it can have the same benefits as near field interactions.
Let's introduce some more density to your experience. Assume you have several objects, either in the near field or at a distance, that your user might want to interact with, which are all relatively close together.
It's important to make sure that a user doesn't accidentally move off of their intended selection. To accomplish this, you can give more ‘weight’ to the interaction volume (or angle band for directional selection) a user is currently in, until they are clearly selecting the neighboring object. That object becomes the new selected one, and is weighted more strongly against any other interaction volumes. By only switching when it’s very clear a user is trying to interact with a different object, the interaction will “stick” to the last one the user moved to.
For additional polish, you can also trigger a change based on the movement delta of the user's controller or hand. If an object is selected and the deltas seem small, stay in the selected object. See a big delta? That's probably the user trying to select something else, and you can reduce the amount of additional weight applied.
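The 'stickiness' and movement-delta ideas above can be sketched together as a weighted selection pass. This is an assumption-laden illustration (the score values, stickiness factor, and speed normalization are all placeholders you would tune for your experience):

```python
def select_object(scores, current, stickiness=1.5, hand_speed=0.0, fast_speed=1.0):
    """Pick the object with the highest selection score, weighting the
    currently selected object so the selection 'sticks' until a neighbor
    is clearly the intent.

    scores: dict of object id -> raw score (e.g. proximity or angular overlap).
    hand_speed: magnitude of the controller/hand movement delta; as it
    approaches fast_speed, the extra weight fades out, since a big motion
    usually means the user is reaching for something else.
    """
    t = min(hand_speed / fast_speed, 1.0)
    weight = stickiness + (1.0 - stickiness) * t  # fades from stickiness to 1.0
    best, best_score = None, float("-inf")
    for obj, score in scores.items():
        if obj == current:
            score *= weight
        if score > best_score:
            best, best_score = obj, score
    return best
```

With a slow hand, a neighbor needs a clearly higher raw score to steal the selection; a fast sweep drops the bonus so the selection can move freely.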
We can take this design philosophy even further by trying to predict what the user is trying to do, for which we can rely on game state. Imagine you have a puzzle game where the user needs to enter a passcode into a realistic number pad. The numbers are all spaced closely together, so the 'stickiness' described above can be used to make it easier to go between each one. Before the player knows the right code, there's no way to tell what button they're trying to press, so factoring for intent isn't important.
Later, we know the player looked at the code scrawled on a chalkboard or on a piece of paper they found in a desk drawer. They return to the keypad to enter it! Since we know they saw the code, we can give additional weight to buttons in its sequence. After the first button is pressed, we can increase the weight on the second, then the third and so on until the sequence is completed.
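A sketch of this sequence-aware weighting might look like the following, where each keypad button gets a weight multiplier that feeds into whatever selection logic you already use. The function name and the 1.4× boost are hypothetical values for illustration:

```python
def keypad_weights(code, entered, code_known=False, boost=1.4):
    """Per-button weight multipliers for a 0-9 number pad.

    Before the player has seen the code, every button gets equal weight.
    Once the game state says they've read it, boost the *next* button in
    the sequence, advancing as each correct digit is entered.
    """
    weights = {str(d): 1.0 for d in range(10)}
    if code_known and len(entered) < len(code):
        weights[code[len(entered)]] = boost
    return weights
```

These multipliers would then bias the sticky selection between neighboring buttons, without ever pressing a button the player didn't reach for.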
Let's imagine we have a first-person shooter with immersive interactions. Inventory is managed physically, and weapons need to be reloaded and readied manually. Next, let's break down player interaction attempts and their outcomes, with and without generosity applied, over a two-second 'fire and reload' sequence. Without generosity, this common sequence can be difficult and frustrating for a player, especially in the middle of intense sections where failure is costly. With generosity, the player can execute the routine 'fire and reload' sequence successfully over and over, and focus their attention on the incoming targets.
The player needs to aim and fire the gun. Without generosity on their hand transforms, if their hands are in the wrong position or orientation for the gun, their aim could be very tricky, or in some implementations, the leading or trailing hand could detach from the weapon. By using the vector between their two hands as your aim vector, aligning the gun between them appropriately, and by allowing tolerance on hand distance and rotation, you respect the player's intent to hold the weapon. If the trailing hand is out of alignment and the trigger is pulled, the gun can still be fired with confidence in the player's intended action.
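A minimal sketch of the two-handed aim described above, with a tolerant hold check (the distance bounds are illustrative assumptions, not recommended values):

```python
import math

def aim_vector(lead_hand, trail_hand):
    """Aim along the normalized vector from the trailing to the leading
    hand, so the weapon tracks the player's intent rather than exact
    wrist rotations or grip offsets."""
    v = tuple(l - t for l, t in zip(lead_hand, trail_hand))
    mag = math.sqrt(sum(c * c for c in v))
    return tuple(c / mag for c in v)

def hands_hold_weapon(lead_hand, trail_hand, min_d=0.15, max_d=0.9):
    """Keep both hands attached as long as their separation stays within
    a tolerant range, instead of detaching on small misalignments."""
    d = math.sqrt(sum((l - t) ** 2 for l, t in zip(lead_hand, trail_hand)))
    return min_d <= d <= max_d
```

Because the aim is derived from the hand pair rather than either hand's raw orientation, a slightly rotated trailing hand no longer throws off the shot.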
The player needs to eject the spent casing from the chamber by lowering and pulling back the bolt lever, and they need to do it quickly. Without tolerances around the handle's grab volume and the bolt's travel distance, they are likely to fail by missing the handle, not lowering it fully, or not pulling the bolt all the way back. For such a rapid action, if the grab is 'close enough' and they roughly lower and pull back their hand, we can compensate and complete the action for them. Extra weight can be assigned to the bolt handle here so the player doesn't accidentally grab the rifle itself, or any other controls on its surface.
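One simple way to express "complete the action for them" is a snap-to-complete threshold on the bolt's travel, sketched below with an assumed 70% cutoff:

```python
def bolt_progress(pull_fraction, complete_threshold=0.7):
    """Treat a bolt pull as finished once the hand has covered most of
    the travel distance, snapping the remainder so a slightly short pull
    still ejects the round during a frantic reload.

    pull_fraction: 0.0 (bolt closed) .. 1.0 (fully pulled back).
    """
    return 1.0 if pull_fraction >= complete_threshold else pull_fraction
```

The threshold is the tuning knob: too low and the bolt feels like it operates itself, too high and fast pulls still fail.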
The player needs to grab ammunition. They reach down towards their right waist to grab some, but their hand is actually a bit closer to a flashlight. By default, they would grab the flashlight at a bad time. Since we know their chamber is spent, and they aren't looking towards their waist, we can be generous by putting more weight on the ammunition interaction volume so they can successfully grab a fresh round.
To reload the gun, they bring the round up to the chamber and release. Without tolerance here, if the bullet is in the wrong orientation, or not properly positioned, it could fall to the floor. As long as the bullet is close enough, we can compensate for facing and position offset, properly placing the round in the chamber.
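A sketch of that tolerance check, deciding whether a released round should snap into the chamber or fall. The distance and angle limits are illustrative assumptions:

```python
import math

def try_chamber_round(bullet_pos, bullet_facing, chamber_pos, chamber_facing,
                      max_dist=0.12, max_angle_deg=45.0):
    """Return True if a released round is roughly positioned and oriented
    for the chamber, in which case the game snaps it into place instead
    of dropping it. Positions are (x, y, z) tuples; facings are unit vectors.
    """
    dist = math.sqrt(sum((b - c) ** 2 for b, c in zip(bullet_pos, chamber_pos)))
    dot = sum(b * c for b, c in zip(bullet_facing, chamber_facing))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return dist <= max_dist and angle <= max_angle_deg
```

On success, the game would animate the round into its exact chambered transform, so the generosity is invisible to the player.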
Finally, they go to push the bolt forward and lock it to ready the weapon. To do this, they grab the handle, move their hand forward and up, and release to grab the trigger. Just as in the ejection step, we can be generous here around the grab and the motion, so that the player doesn't fall just short of properly readying the weapon.
Reviewing the example above: without generous interactions, there are lots of ways this straightforward sequence can fail. The weapon may be difficult to aim or may fall out of the player's hands, the chamber may not properly eject the spent round, ammunition may not be grabbed or loaded, and the weapon may not be readied for firing, despite the player feeling like they've done everything right. By being generous at each of the steps involved, players are much more likely to accomplish the intended action, and avoid frustration with the experience.
Regardless of your application type, designing with generosity in mind can significantly reduce player frustration, perception that an interaction is broken, and the need to repeat attempts at performing an action. In experiences that demand the execution of a sequence of actions in a short time frame, failure to be generous will result in the perception that the experience is flawed, whereas allowing some more imprecision will actually feel more precise.
There can be cases where being strict about an interaction can strengthen your experience or mechanic. If precision is a key pillar of your experience, players come into it expecting to need significant coordination and accuracy. And if a particular action is detrimental, such as a spaceship's self-destruct button or a UI element that ends the session, lowering its weight can help prevent accidents. Use your best judgement, and remember that any interaction generosity can be tuned to what makes sense for your experience.
Make it optional! Especially in cases where the required interaction precision doesn't massively change the quality of the experience, allowing users to select it themselves can be valuable. This is especially true in skill-based experiences, where demanding better performance from the user can add to the challenge or immersive feel. By letting users choose in their first session, you also self-select users who prefer precision-demanding interactions, and who will therefore be less likely to blame the experience for any actions they fail. Keep in mind, however, that for multiplayer experiences it's important to make sure different tolerances don't provide a competitive advantage to a subset of players, or to clearly display each user's setting for transparency to other participants. A traditional example of this can be seen in balancing for games that support both controller and keyboard input, using features such as aim assist and target stickiness.
Don't go overboard. Making your interactions too tolerant can leave things feeling floaty or mushy, or introduce more unintended interactions. Set aside time to iterate on any dynamic weighting logic and to tune tolerances in a way that makes sense for your experience.
When using any sort of interaction assistance, there's always the chance for a false positive. Remember to think holistically about what interactions a user needs to perform and when, and make sure to lay out interactive volumes accordingly. No attempt at predicting player intent will ever be perfect, but with testing and tuning, you can significantly reduce the chance that a user will perform an unintended action.
An additional step you can take when implementing generous interactions is to support multiple interaction methodologies. For example, you could have a fully immersive mode with manual reloading and inventory management, while also providing a button-based mode following PC or console approaches, allowing people to choose what they're comfortable with in their first session and to switch at any time. Bonus: this approach also increases accessibility for users who have limited mobility or small play spaces, or who are new to VR.
We're looking forward to seeing how you implement generous interaction design into your app to avoid player frustration and create more accessible and satisfying experiences. If you have any questions, feel free to leave a comment or join the conversation in our forum.