Interactions are a crucial aspect of making immersive VR experiences. These interactions must be natural and intuitive to improve retention and enhance people’s overall experience. To make it easy for developers to build immersive experiences that create a realistic sense of presence, we released
Presence Platform, which provides a variety of capabilities, including Passthrough, Spatial Anchors, Scene understanding, Interaction, Voice, and more.
Interaction SDK lets developers create high-quality hand and controller interactions by providing modular and flexible components to implement a large range of interactions. In this blog post, we’ll discuss the capabilities, samples, and other resources available to you to help you get started with Presence Platform’s
Interaction SDK. It’s important to us that the metaverse continues to be built in the open, so we created an open source sample featuring the Interaction SDK to inspire you to innovate in VR along with us.
Introduction to Interaction SDK
What is Interaction SDK?
Interaction SDK is a library of components for adding controller and hand interactions to your experiences. It enables you to incorporate best practices for user interactions and includes interaction models for Ray, Poke, and Grab, as well as hand-centric interaction models for HandGrab, HandGrabUse, and Pose Detection. If you’re looking to learn more about what these models and interactions entail, see the “What Are the Various Interactions Available?” section below.
To get started with Interaction SDK, you’ll need to have a supported Unity version and the Oculus Integration package installed. To learn more about the prerequisites, supported devices, Unity versions, package layout, and dependencies,
check out our documentation.
We want to make your experience using Interaction SDK as smooth as possible, and new features are regularly added along with bug fixes and improvements. To keep up to date, check out our documentation on Upgrade Notes, where you can find the latest information on new features, deprecations, and improvements.
How Does Interaction SDK Work?
All interaction models in the Interaction SDK are defined by an Interactor-Interactable pair of components. For example, a ray interaction is defined by the RayInteractor-RayInteractable pair. An Interactor is the component that acts on (hovers over, selects) Interactables. An Interactable is the component that gets acted on (can be hovered over or selected) by Interactors. Together, the Interactor and Interactable components make up interactions.
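As a concrete illustration, here is a minimal sketch of a component that watches an Interactor’s state transitions. It assumes the Oculus.Interaction namespace’s IInteractorView interface, the InteractorState lifecycle, and the WhenStateChanged event described in the Interactor documentation; double-check the exact member names against your SDK version.

```
using Oculus.Interaction;
using UnityEngine;

// Logs hover/select transitions of any Interactor (e.g. a RayInteractor)
// assigned in the Inspector. IInteractorView, InteractorState, and
// WhenStateChanged are assumed from the Interactor documentation.
public class InteractorStateLogger : MonoBehaviour
{
    // Assign a component that implements IInteractorView (RayInteractor, PokeInteractor, ...).
    [SerializeField] private UnityEngine.Object _interactorObject;

    private IInteractorView _interactor;

    private void Awake()
    {
        _interactor = _interactorObject as IInteractorView;
    }

    private void OnEnable()
    {
        if (_interactor != null) _interactor.WhenStateChanged += HandleStateChanged;
    }

    private void OnDisable()
    {
        if (_interactor != null) _interactor.WhenStateChanged -= HandleStateChanged;
    }

    private void HandleStateChanged(InteractorStateChangeArgs args)
    {
        // States follow the Normal -> Hover -> Select lifecycle described in the docs.
        Debug.Log($"Interactor moved from {args.PreviousState} to {args.NewState}");
    }
}
```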
To learn more about the Interactor-Interactable lifecycle, check out the documentation on Interactables. You can also choose to coordinate multiple Interactors based on their state by using InteractorGroups. Our documentation on InteractorGroups goes over what an InteractorGroup is, how to use one, and how InteractorGroups (including the InteractorGroupMulti variant) let you coordinate multiple Interactors.
If you’re looking to make one interaction dependent on another ongoing interaction, you can link interactions with each other in a primary-secondary relationship. For example, if you’d first like to grab an object and then use it, the grab interaction would be the primary one and the use interaction would be secondary to the grab. Check out our documentation to learn more about
Secondary Interactions and how to use them.
What Are the Various Interactions Available?
Interaction SDK allows you to implement a range of robust and standardized interactions such as grab, poke, raycast, and many more. Let’s dive into some of these interactions and how they work:
Hand Grab
Hand Grab interactions provide a means of grabbing objects and snapping them to pre-authored hand grab poses. This interaction uses the HandGrabInteractor and HandGrabInteractable components.
Hand Grab interactions use per-finger information to determine when a selection should begin or end, via the HandGrabAPI, which indicates when grabbing starts and stops as well as the strength of the grabbing pose. The HandGrabInteractor searches for the best interactable candidate and provides the necessary snapping information, while the HandGrabInteractable indicates whether an object can be grabbed, how it moves, which fingers perform the grab, handedness information, finger constraints, how the hand should align, and more. Check out the documentation on
Hand Grab Interaction, where we discuss how this interaction works and how to customize the interactable movements and alignments.
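To react to a grab in gameplay code without touching the SDK internals, one common pattern is to expose plain methods and wire them up in the Inspector to the UnityEvents on the SDK’s InteractableUnityEventWrapper component (its WhenSelect/WhenUnselect events, as used throughout the sample scenes). A minimal sketch, with those event names assumed from the samples:

```
using UnityEngine;

// Attach next to a HandGrabInteractable and hook these methods up to the
// InteractableUnityEventWrapper's WhenSelect / WhenUnselect UnityEvents
// in the Inspector (event names assumed from the SDK samples).
public class GrabFeedback : MonoBehaviour
{
    [SerializeField] private AudioSource _grabSound;   // optional pickup sound
    [SerializeField] private Renderer _highlight;      // optional highlight mesh

    public void OnGrabbed()
    {
        if (_grabSound != null) _grabSound.Play();
        if (_highlight != null) _highlight.enabled = true;
    }

    public void OnReleased()
    {
        if (_highlight != null) _highlight.enabled = false;
    }
}
```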
Poke
Poke Interactions allow users to interact with surfaces via direct touch, such as pressing or hovering over buttons and interacting with curved and flat UI.
PokeInteractables can also be combined with a PointableCanvas to enable direct-touch interaction with Unity UI. Read about how to integrate Interaction SDK with Unity Canvas in our documentation on
Unity Canvas Integration.
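Because the canvas integration routes pokes into Unity’s regular event system, your UI code stays ordinary uGUI code. For example, a button handler like the sketch below responds the same whether the button is clicked in the Editor or poked with a finger in the headset (assuming the canvas is set up per the Unity Canvas Integration documentation):

```
using UnityEngine;
using UnityEngine.UI;

// Standard uGUI button handler; poke input reaches it through the SDK's
// canvas integration, so no poke-specific code is needed here.
public class PokeButtonHandler : MonoBehaviour
{
    [SerializeField] private Button _button;
    [SerializeField] private Light _lamp;   // example object to toggle

    private void OnEnable() => _button.onClick.AddListener(ToggleLamp);
    private void OnDisable() => _button.onClick.RemoveListener(ToggleLamp);

    private void ToggleLamp() => _lamp.enabled = !_lamp.enabled;
}
```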
Hand Pose
Hand Pose Detection provides a way to recognize when tracked hands match specific shapes and wrist orientations.
The SDK provides six example poses as prefabs that show how pose detection works:
- RockPose
- PaperPose
- ScissorsPose
- ThumbsUpPose
- ThumbsDownPose
- StopPose
Based on the patterns defined in these poses, you can define your own custom poses. To learn more about what the Hand Pose prefabs contain, check out our documentation on
Pose Prefabs.
This interaction uses ShapeRecognition, TransformRecognition, VelocityRecognition, and other methods to detect shapes, poses, and orientations. The ShapeRecognizer lets you specify the finger features that define a shape on a hand, while TransformRecognition provides information about the hand’s orientation and position. Check out our documentation on Hand Pose Detection, where we discuss how Shape, Transform, and Velocity Recognition work.
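The pose components above ultimately expose a simple boolean: they implement the SDK’s IActiveState interface, whose Active flag is true while the pose is being held. Below is a minimal sketch of consuming that flag to drive an effect while a pose is detected; IActiveState and its Active property are assumed from the Hand Pose Detection documentation.

```
using Oculus.Interaction;
using UnityEngine;

// Polls an IActiveState (for example, a ShapeRecognizerActiveState configured
// for a thumbs-up) and plays a particle effect while the pose is held, similar
// to the Poses sample. IActiveState and Active are assumed from the docs.
public class PoseReaction : MonoBehaviour
{
    [SerializeField] private UnityEngine.Object _poseActiveState; // must implement IActiveState
    [SerializeField] private ParticleSystem _effect;

    private IActiveState _activeState;
    private bool _wasActive;

    private void Awake()
    {
        _activeState = _poseActiveState as IActiveState;
    }

    private void Update()
    {
        bool isActive = _activeState != null && _activeState.Active;
        if (isActive && !_wasActive) _effect.Play();   // pose just started
        if (!isActive && _wasActive) _effect.Stop();   // pose just ended
        _wasActive = isActive;
    }
}
```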
Gesture
Interaction SDK combines Sequences and Active State to enable you to create gestures. Active states can be chained together using Sequences to build gestures, such as a swipe.
A state can become active when specified criteria, such as shape, transform features, or velocity, are met. For example, one or more ShapeRecognizerActiveState components become active when all of the finger feature states match those in the listed shapes, in other words, when the criteria of a specified shape are met.
Since Sequences can recognize a series of active states over time, they can be used to create complex gestures. These gestures can be used to interact with objects based on which direction the hand swipes in. To learn more about how Sequences work, check out our
documentation.
Distance Grab
The Distance Grab interaction allows a user to select and move objects that are usually beyond arm’s reach. For example, this can mean attracting an object toward the hand and then grabbing it, though Distance Grab supports other movements as well.
Touch Hand Grab
Touch Hand Grab lets hands grab objects based on their collider configuration, dynamically conforming the fingers to their surface. This means users can grab interactables by simply touching the surface between their fingers, without needing to fully enclose the object in a grab.
This interaction uses the TouchHandGrabInteractor and the TouchHandGrabInteractable. The TouchHandGrabInteractor defines the touch interaction properties, while the TouchHandGrabInteractable defines the colliders that are used by the interactor to test overlap against during selection. To learn more about how these work, check out the documentation about
Touch Hand Grab interaction.
Transformers
Transformers let you define different ways to manipulate Grabbable objects in 3D spaces, including how to translate, rotate, or scale things with optional constraints.
For example, you could use the Grab interaction, Transformers, and Constraints on objects to achieve interactions such as moving an object on a plane, using two hands to change its scale, rotating an object, and much more.
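To make the idea concrete, the sketch below hand-rolls the kind of constraint a Transformer applies, pinning an object to a horizontal plane however it gets moved. It’s an illustration of the concept only, not the SDK’s Transformer or Constraint API; in practice you would use the grab transformer components and their constraint settings instead.

```
using UnityEngine;

// Illustration of a planar constraint: whatever moves this object (a grab,
// physics, etc.), LateUpdate pins it back to a fixed height and bounds.
// Not the SDK's Transformer API; shown only to explain what Transformers do.
public class PlaneConstraintExample : MonoBehaviour
{
    [SerializeField] private float _planeHeight = 1.0f;
    [SerializeField] private Vector2 _minXZ = new Vector2(-0.5f, -0.5f);
    [SerializeField] private Vector2 _maxXZ = new Vector2(0.5f, 0.5f);

    private void LateUpdate()
    {
        Vector3 p = transform.position;
        p.x = Mathf.Clamp(p.x, _minXZ.x, _maxXZ.x);
        p.y = _planeHeight;
        p.z = Mathf.Clamp(p.z, _minXZ.y, _maxXZ.y);
        transform.position = p;
    }
}
```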
Ray Interactions
Ray interactions allow users to select objects via raycast; with hands, the user aims using the pointer pose and pinches to select. The RayInteractor and RayInteractable are used for this interaction. The RayInteractor defines the origin and direction of raycasts and a maximum distance for the interaction, whereas the RayInteractable defines the surface of the object being raycasted against. When using hands, the HandPointerPose component can be used to specify the origin and direction of the RayInteractor. Check out our documentation on
Ray Interaction, where we discuss how RayInteractor and RayInteractable work and how Ray Interaction works with hands.
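Conceptually, each frame a ray interactor does something like the plain-Unity raycast below: cast from an origin along a direction, up to a maximum distance, and ask what was hit. The real RayInteractor raycasts against the SDK’s own surface components rather than Unity physics colliders, so treat this purely as an illustration of origin, direction, and max distance.

```
using UnityEngine;

// Concept-only illustration of what a ray interactor evaluates each frame.
// The SDK's RayInteractor uses its own surfaces, not Physics.Raycast.
public class RayConceptExample : MonoBehaviour
{
    [SerializeField] private Transform _rayOrigin;   // e.g. a controller or pointer pose
    [SerializeField] private float _maxDistance = 5f;

    private void Update()
    {
        if (Physics.Raycast(_rayOrigin.position, _rayOrigin.forward,
                            out RaycastHit hit, _maxDistance))
        {
            Debug.DrawLine(_rayOrigin.position, hit.point, Color.green);
        }
    }
}
```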
If you’d like to read more about the features mentioned above, check out our documentation on
Interactions. There you’ll find information on Interactors, Interactables, Debugging interactions, Grabbables, the different types of interactions available, and more.
Trying Out the Interaction SDK Samples
To make it easy for you to try out these interactions, our team has created the
Interaction SDK Samples app that showcases each of the features discussed above. The Samples App can be found on App Lab, and it highlights each feature as a separate example to help you better understand the interactions.
Before you try the samples on your headset, make sure that you have hand tracking turned on. To do that, go to Settings on your headset, click on “Movement Tracking,” and turn on Hand Tracking. You can leave the Auto Switch Between Hands And Controllers option selected so that you can use your hands if you put the controllers down.
You’ll begin the sample in the welcome scene that shows you all the sample interactions available for you to try. This sample contains examples for interactions such as Hand Grab, Touch Grab, Distance Grab, Transformers, Hand Grab Use, Poke, Ray, Poses, and Gesture interactions.
Let’s take a look at each of these samples:
The Hand Grab example: This example showcases the HandGrabInteractor. The Virtual Object demonstrates a simple pinch selection with no hand grab pose. The key demonstrates pinch-based grab with a single hand grab pose. The torch demonstrates curl-based grab with a single hand grab pose. The cup demonstrates multiple pinch and palm-capable grab interactables with associated hand grab poses.
The Touch Grab example: This example shows how you can grab an object by just touching it and not actually grabbing it. Touch Grab uses the physics shape of the object to create poses for the hand dynamically. Here you’ll find an example of kinematic as well as non-kinematic pieces to try with this interaction. Kinematic demonstrates the interaction working with non-physics objects, whereas physical demonstrates the interaction of a non-kinematic object with gravity, as well as the interplay between grabbing and non-grabbing with physical collisions.
The Distance Grab example: This example showcases multiple ways of signaling, attracting, and grabbing distant objects. For example, Anchor at Hand anchors the item at the hand without attracting it, so you can move it without actually grabbing it. Interactable to Hand shows an interaction where the item moves toward the hand in a rapid motion and then stays attached to it. Lastly, Hand to Interactable shows an interaction where you can move the object as if the hand were really there.
The Transformers example: This example showcases the GrabInteractor and HandGrabInteractable with the addition of Physics, Transformers, and Constraints on objects. The map showcases translate-only grabbable objects with planar constraints. The stone gems demonstrate physics objects that can be picked up, thrown, transformed, and scaled with two hands. The box demonstrates a one-handed rotational transformation with constraints. The figurine demonstrates hide-on-grab functionality for hands.
The Hand Grab Use example: This example demonstrates how a Use interaction can be performed on top of a HandGrab interaction. For this example, you can grab the water spray bottle and hold it with a finger on the trigger. You can then aim at the plant and press the bottle trigger to moisturize the leaves in an intuitive and natural motion.
The Poke example: This example showcases the PokeInteractor on various surfaces with touch limiting. It demonstrates poke interactions and pressable buttons with standalone buttons or Unity canvas. Multiple visual affordances are demonstrated, including various button depths as well as Hover on Touch and Hover Above variants for button hover states. These affordances also include Big Button with multiple poke interactors like the side of the hand or palm, and Unity Canvases that showcase Scrolling and pressing Buttons.
The Ray example: This example showcases the RayInteractor interacting with a curved Unity canvas using the CanvasCylinder component. It demonstrates ray interactions with a Unity canvas and hand interactions that use the system pointer pose and pinch for selection. Ray interactions use the controller pose for ray direction and use the trigger for selection. It also showcases curved surfaces that allow for various material types: Alpha Cutout, Alpha Blended, and Underlay.
The Poses example: This example showcases six different hand poses, with visual signaling of pose recognition. It has detection for Thumbs Up, Thumbs Down, Rock, Paper, Scissors, and Stop. Poses can be triggered by either the left or right hand. It also triggers a particle effect when a pose starts, and hides the effect when a pose ends. In the past, hand poses needed to be manually authored for every item in a game, and it could take several iterations before the pose felt natural. Interaction SDK provides a Hand Pose Authoring Tool, which lets you launch the editor, reach out and hold the item in a way that feels natural, record the pose, and use the pose immediately.
The Gestures example: This example demonstrates the use of the Sequence component combined with ActiveState logic to create simple swipe gestures. For example, the stone shows a random color change triggered by either hand. The picture frame cycles through pictures in a carousel view, with directional change depending on which hand is swiping. Also note that the gestures are only active when hovering over objects.
To learn more about how these samples work and find reference materials to help you get started with them, check out our documentation on
Example Scenes and
Feature Scenes.
These examples showcase just a small subset of the library of interactions made possible with Interaction SDK. You can play around and build your own versions of these interactions by downloading the SDK in Unity. We’ll discuss how to do that later in this blog.
Trying Out First Hand
Our team has also worked on a showcase experience called
First Hand, which aims to demonstrate the capabilities and variety of interaction models possible with Hands. This is an official demo for Hand Tracking built with Interaction SDK.
First Hand can be found on App Lab.
The experience starts with you in a clock tower, facing a table that presents various objects you can interact with. You can see your hands being tracked, and you’ll notice a blinking light on your left, indicating that it can be interacted with. It tells you that a delivery package is waiting for you to accept. You can grab the lift control and push the accept button. This demonstrates the HandGrab and Poke interactions.
After the package is delivered, you’ll be prompted to pull the handle. Pulling the handle provides power, and the package opens up. This demonstrates the HandGrab interaction with a one-handed rotate transform for rotating the lever. Next, you’ll also have to type the code on the number pad to unlock the box. This uses the Poke interaction.
Next, a wheel is presented in front of you so you can use your hands to HandGrab and Turn. This will open up the box, and you’ll be presented with an interactable UI that you can interact with to create your own robotic gloves.
You can use your hands to scroll through the UI and poke to select which section of your hand you want to create first. This demonstrates the Poke interaction. Select the first section and it will present the section of the glove in front of you. You are presented with three color options. Swipe to choose the color you’d like your glove section to be. This demonstrates the Swipe gesture detection feature. You can pick the piece up, scale it, rotate it, or move it around. This interaction uses the Touch Grab interaction and the Two Grab Free Transformer. Once you’ve selected a color, you can push the button to build it for you. You can move on to the next section and create the remaining parts of your glove in a similar manner.
Once your glove is ready, it presents you with three rocks that you can choose between and add to your glove. You can choose one of them by grabbing a rock, and then you’re able to crush the rock in your hand by squeezing it hard to reveal the crystal–a Use Grab interaction. Once the crystal is ready, you can place it on your glove to activate the crystal. You now have super power gloves!
An object starts flying around you and tries to attack you with lasers! You can use your super powers to shoot at the targets and save yourself from getting hit. To shoot, open your palms and aim. To create a shield, fold your fingers and bring your hands together. This shows how Pose detection can be used even for two-handed poses! Based on the detected pose, the game performs the appropriate action.
Finally, you’ll be presented with a blue button on your glove that you can Poke to select. It then teleports you to a world where you can see the clock tower you were in, along with several objects you could potentially interact with, leaving players to imagine what they could unlock with these interactions at their disposal.
Building great Hands-driven experiences requires optimizing across multiple constraints: technical constraints, like tracking capabilities; physiological constraints, like comfort and fatigue; and perceptual constraints, like how hand-to-virtual-object interactions are represented. This demo shows how Interaction SDK unlocks several ways to make your VR experiences more intuitive and immersive, without you needing to set everything up from scratch. It showcases some of the Hands interactions that we’ve found to be the most magical, robust, and easy to learn, and that are also applicable to many categories of content.
To make it really easy for you to follow along and use these interactions in your own VR experiences, we’ve created an
open source project that demonstrates the use of Interaction SDK to create the various interactions seen in the
First Hand showcase app. In the next section, you’ll learn about the Interaction SDK package in Unity and how to set up a local copy of the
First Hand sample.
Setting Up a Local Copy of the First Hand Sample
Setup
To have a local copy of the sample, you’ll need to have Unity installed on a PC or Mac and install the Oculus Integration package. We’ll be using Unity 2021.3.6f1 for setup in this blog, but you can use any of the supported Unity versions. You can find more information on supported devices, Unity versions, and other prerequisites in our
Interaction SDK documentation. Oculus Integration SDK can be downloaded from the
Unity Asset Store or from the
Developer Center.
You should also make sure that you have turned on Developer Mode on your headset. You can do that from your headset as well as from the Meta Quest mobile app. To do it from the headset, go to Settings → System → Developer, and then turn ON the USB Connection Dialog option. Alternatively, if using the Meta Quest mobile app, you can do it by going to Menu → Devices → Select the headset from the list, and then turn ON the Developer Mode option.
Once you have Unity installed, create a new 3D project. To install Interaction SDK, go to Window → Asset Store, look for the Oculus Integration package, and click “Add to My Assets.” To install the package, open the Package Manager, click “Import,” and import the files into your project.
If Unity detects that a newer version of OVRPlugin is available, we recommend using the newest version, so go ahead and enable it.
If asked to use OpenXR for OVRPlugin, click on “Use OpenXR.” New features such as the Passthrough API are only supported through the OpenXR backend. You can switch between legacy and OpenXR backends at any time from Oculus → Tools → OVR Utilities Plugin.
It will confirm the selection and may provide an option to upgrade the Interaction SDK and perform a cleanup to remove the obsolete and deprecated assets. Allow it to do so. You can also choose to do this at any time by going to Oculus → Interaction → Remove Deprecated Assets.
Once installed, Interaction SDK components can be found under the Interaction folder under Oculus.
Since you will be building for Meta Quest, make sure to update your build platform to Android. To do that, go to File → Build Settings → Select Android and choose “Switch Platform.”
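If you prefer to script that step, for example for repeatable project setup, a small editor menu item can switch the target for you using standard Unity Editor APIs:

```
#if UNITY_EDITOR
using UnityEditor;

// Editor-only helper: adds a menu item that switches the active build
// target to Android, the same as File -> Build Settings -> Switch Platform.
public static class SwitchToAndroid
{
    [MenuItem("Tools/Switch Build Target to Android")]
    public static void Switch()
    {
        EditorUserBuildSettings.SwitchActiveBuildTarget(
            BuildTargetGroup.Android, BuildTarget.Android);
    }
}
#endif
```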
What Does the Interaction SDK Package Contain?
The SDK contains three folders: Editor, Runtime, and Samples. Editor contains all the editor code for the SDK, Runtime contains the core runtime components of Interaction SDK, and Samples contains the scenes and assets used in the Interaction SDK Samples demo discussed earlier in this blog.
Under Samples, click on “Scenes.” Here you will find the Example Scenes, Feature Scenes, and Tools. The Example Scenes directory includes all the scenes that you saw earlier in the Interaction SDK Samples app from App Lab. The Feature Scenes directory includes scenes dedicated to the basics of a single feature, and the Tools directory includes helpers like the Hand Pose Authoring Tool. Let’s open one of these samples, HandGrabExamples, and look at how the scene is set up, the scripts and other components it uses, and how these come together to create the interactions you saw in the samples.
When opening any of the sample scenes, you may be asked to import TMP essentials. Go ahead and do that, and your scene will open.
Now, let’s look at some game objects and scripts and understand how they work:
- OVRCameraRig: This prefab provides the transform object to represent the Oculus tracking space. It contains a TrackingSpace game object and the OVRInteraction game object under it.
- TrackingSpace: This game object allows you to fine-tune the relationship between the head tracking reference frame and your world. Under TrackingSpace, you’ll find a center eye anchor, which is the main Unity camera, two anchor game objects, one for each eye, and left and right hand anchors for controllers.
- OVRInteraction: This prefab provides the base for attaching sources for Hands, Controllers, or HMD components that source data from OVRPlugin via OVRCameraRig. It contains the OVRControllerHands with left and right controller hands under it and the OVRHands, with left and right hands under it.
To learn more about the OVRInteraction prefab and how it works with OVRHands, check out our documentation on
Inputs.
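At runtime, other scripts typically reach the rig through the anchor transforms mentioned above. Here is a minimal sketch that reads OVRCameraRig’s centerEyeAnchor to keep a HUD in front of the user; it relies only on the rig’s public anchor fields:

```
using UnityEngine;

// Reads the head anchor exposed by OVRCameraRig to keep a HUD one meter in
// front of the user's eyes. Uses the rig's public centerEyeAnchor field.
public class RigAnchorsExample : MonoBehaviour
{
    [SerializeField] private OVRCameraRig _rig;
    [SerializeField] private Transform _hud;

    private void LateUpdate()
    {
        Transform head = _rig.centerEyeAnchor;
        _hud.position = head.position + head.forward * 1.0f;
        _hud.rotation = Quaternion.LookRotation(head.forward, Vector3.up);
    }
}
```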
There are two main scripts attached to the OVRCameraRig prefab: OVRManager and OVRCameraRig.
OVRManager is the main interface to the VR hardware and exposes the Oculus SDK to Unity. It should only be added to a scene once. It contains settings for your target devices, performance, tracking, and color gamut. Apart from these settings, OVRManager also contains Meta Quest-specific settings:
- Hand Tracking Support: This lets you choose the type of input you’d like for your app. You can choose Controllers Only, Controllers and Hands (to switch between the two), or Hands Only. A runtime check for hand tracking is sketched after this list.
- Hand Tracking Frequency: You can select the hand tracking frequency from the list. A higher frequency allows for better gesture detection and lower latency, but reserves some performance headroom out of your application’s budget.
- Hand Tracking Version list: This setting lets you choose the version of hand tracking you’d like to use. Select v2 to use the latest version of hand tracking. The latest update brings you closer to building immersive and natural interactions in VR without the use of controllers and delivers key improvements on Meta Quest 2.
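Because hand tracking availability also depends on what the user is doing at runtime (for example, picking the controllers back up), it’s common to check the OVRHand component before driving hand-only logic. A minimal sketch using OVRHand’s IsTracked and pinch queries:

```
using UnityEngine;

// Checks whether a hand is currently tracked and whether the index finger
// is pinching, using the OVRHand component on the hand anchor.
public class HandStatusExample : MonoBehaviour
{
    [SerializeField] private OVRHand _hand;   // e.g. the OVRHandPrefab on a hand anchor

    private void Update()
    {
        if (!_hand.IsTracked)
        {
            return; // hand not visible to the sensors, or controllers are in use
        }

        if (_hand.GetFingerIsPinching(OVRHand.HandFinger.Index))
        {
            float strength = _hand.GetFingerPinchStrength(OVRHand.HandFinger.Index);
            Debug.Log($"Index pinch strength: {strength:F2}");
        }
    }
}
```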
As discussed earlier, each interaction consists of an Interactor-Interactable pair. Interactables are rigid body objects that the hands or controllers will interact with, and they should always have a Grabbable component attached to them. To learn more about Grabbables, check out our
documentation.
This sample demonstrates the HandGrabInteractor and HandGrabInteractable pair. The HandGrabInteractor script can be found under OVRInteraction → OVRHands → LeftHand or RightHand → HandInteractorsLeft/Right → HandGrabInteractor.
The corresponding interactable can be found on the rigid body objects under Interactables → choose one of the SimpleGrabs → HandGrabInteractable, which contains the HandGrabInteractable script. HandGrabInteractables indicate that these objects can be interacted with via Hand Grab using either hands or controllers. They require both a Rigidbody component for the interactable and a Grabbable component for 3D manipulation.
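As a sketch of that composition (component and namespace names are assumed from the SDK’s HandGrabExamples scene; in a real scene you would add and wire these components in the Inspector rather than rely on a script):

```
using Oculus.Interaction;
using Oculus.Interaction.HandGrab;
using UnityEngine;

// Documents/enforces the composition of a hand-grabbable object: a Rigidbody
// for the physical shape, a Grabbable for 3D manipulation, and a
// HandGrabInteractable so HandGrabInteractors can act on it. The components'
// references (Rigidbody, pointable element, etc.) are wired in the Inspector.
[RequireComponent(typeof(Rigidbody))]
[RequireComponent(typeof(Grabbable))]
[RequireComponent(typeof(HandGrabInteractable))]
public class HandGrabbableObject : MonoBehaviour
{
    // No behaviour needed here; this component only expresses the required setup.
}
```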
Now that you have a basic understanding of how a scene is set up using the OVRCameraRig, OVRManager, OVRInteraction, and how to add Interactor-Interactable pairs, let’s learn how you can set up a local copy of the First Hand sample.
Cloning from GitHub and Setting Up the First Hand Sample in Unity
The First Hand sample GitHub repo contains the Unity project that demonstrates the interactions presented in the First Hand showcase app. It’s designed to be used with hand tracking. The first step to setting up the sample locally is to clone the repo.
First, you should ensure that you have Git LFS installed by running:
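git lfs install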
Next, clone the repo from GitHub by running:
git clone https://github.com/oculus-samples/Unity-FirstHand.git
Once the project has successfully cloned, open it in Unity. You may receive a warning if you’re using a different version of Unity—accept the warning, and it will resolve the packages and open the project with your Unity version. It might throw another warning that states Unity needs to update the URP materials for the project to open. Accept it, and the project will load.
The main scene is called the Clocktower scene. All of the actual project files are in Assets/Project. This folder includes all the scripts and assets needed to run the sample, excluding those that are part of the Interaction SDK. The project already includes v41 of the Oculus SDK, including the Interaction SDK. You can find the main scene under Assets/Project/Scenes/Clocktower. Just like before, import TMP Essentials if you haven’t already, and the scene should load successfully.

Before you build your scene, make sure that Hand Tracking Support is set to Controllers and Hands. To do this, go to Player → OVRCameraRig → OVRManager and, under Hand Tracking Support, choose “Controllers and Hands” (or “Hands Only” if you don’t need controllers). Set the Hand Tracking Version to “v2” to use the latest version, and set the Tracking Origin Type to “Floor Level.” Make sure the platform you’re building for is Android. To build, go to File → Build Settings and click on “Build.” You can click on “Build and Run” to load and run it directly on your headset, or you can install and run it from Meta Quest Developer Hub (MQDH), a standalone companion development tool that integrates Meta Quest, Meta Quest 2, and Meta Quest Pro headsets into the development workflow. To learn more about MQDH, visit our
documentation.
If you choose to use MQDH, you can drag the apk into the application and click “Install” to install and run it on the headset.
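Alternatively, if you’re comfortable with the command line, you can install the same apk with adb while the headset is connected over USB (the path below is just a placeholder for wherever Unity wrote your build):
adb install -r path/to/YourBuild.apk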
When the sample loads, you’ll see the objects that you interacted with in the First Hand showcase app. You’ll also see an option to enable Blast & Shield Gestures, and the option to enable Distance Grab. You’ll be able to interact with the UI in front of you using the Poke interaction. You can also interact with the pieces from the glove like you saw in the First Hand showcase app. Note that this sample only showcases the interactions and not the complete gameplay of the First Hand demo.
Below you can find a list of the objects you can interact with in this sample and the kind of interaction they demonstrate:
This was a quick walkthrough of the
First Hand sample GitHub project that demonstrates the various interactions created using the Interaction SDK that you saw in the
First Hand showcase app. The SDK provides you with these interactions to make it even easier for you to integrate them into your own apps. Apart from this, our team has created documentation, tutorials, and many other resources to help you get started using Interaction SDK. Let’s discuss some resources and best practices that you can keep in mind when using the SDK to add interactions in your VR experiences.
Best Practices and Resources
Best Practices
For hand tracking to be an effective means of interaction in VR, it needs to feel intuitive, natural, and effortless. However, hands don’t come with buttons or switches the way other input modalities do. This means there’s nothing hinting at how to interact with the system and no tactile feedback to confirm user actions.
To solve this, we recommend communicating affordances through clear signifiers and continuous feedback on all interactions. You can think of this in two ways: signifiers, which communicate what a user can do with a given object, and feedback, which confirms the user’s state throughout the interaction.
For example, in the First Hand sample, visual and auditory feedback plays an important role in prompting the user to interact with certain objects in the environment; the lift control beeps and glows to let the player know they should press the button, the glowing object helps the player identify when objects can be grabbed from a distance, and color changes for UI buttons when selected confirm the user’s selection. Apart from this, creating prompts to guide the player through First Hand also worked especially well.

Hands are practically unlimited in terms of how they move and the poses they can form. This opens up a world of opportunities, but it can also cause confusion. By limiting which hand motions the system interprets, we can achieve more accurate interactions. For example, in the First Hand sample, the player is allowed to grab certain objects, but not all of them. They can also grab, move, and scale some of the objects, but only some: they can grab and scale the parts of the glove, but they can’t scale them along a single axis, which prevents deforming the object.
Snap Interactors are a great option for drop-zone interactions and fixing items in place. You’ll see them used extensively in First Hand, especially during the glove-building sequence, where Snap Interactors trigger its progression.
Interaction SDK’s Touch Hand Grab interaction can come in handy for small objects, especially objects that don’t have a natural orientation or grab point, without restricting the player with pre-authored poses. You’ll see this being used in First Hand for the glove parts, which the player can pick up in whatever way feels natural to them.
For objects out of a user’s reach, raycasting comes in very handy when selecting objects. Distance Grab is also another great way to make it easy for users to interact with the object, without needing to walk up to it or reach out to it. This makes your experience more intuitive—and more accessible.
Distance Grab is easy to configure and is implemented by having a cone extend out from the hand that selects the best candidate item. You can save time when setting up an item for Distance Grab by reusing the same hand poses that were set up for regular grab. In the First Hand sample, you can see how Distance Grab was used to grab objects from a distance and how the visual ray from the hand confirms which object will be grabbed.
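As a concept-only sketch (not the SDK’s selector), picking the best candidate inside such a cone boils down to an angle test against the hand’s forward direction:

```
using UnityEngine;

// Concept-only sketch of cone-based candidate selection for distance grab:
// among candidate transforms, pick the one closest to the centre of a cone
// extending from the hand. Not the SDK's implementation.
public static class ConeSelectionExample
{
    public static Transform BestCandidate(
        Transform hand, Transform[] candidates, float coneAngleDegrees)
    {
        Transform best = null;
        float bestAngle = coneAngleDegrees;

        foreach (Transform candidate in candidates)
        {
            Vector3 toCandidate = candidate.position - hand.position;
            float angle = Vector3.Angle(hand.forward, toCandidate);
            if (angle < bestAngle)
            {
                bestAngle = angle;
                best = candidate;
            }
        }
        return best; // null if nothing falls inside the cone
    }
}
```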
Tracking and ergonomics also play a huge role when designing your experiences with hand tracking. The more of your hands the headset’s sensors can see, the more stable tracking will be. Only objects that are within the field of view of the headset can be detected, so as a rule of thumb you should avoid forcing users to reach out to objects outside of the tracking volume.
It's also important to make sure the user can remain in a neutral body position as much as possible. This allows for a more comfortable ergonomic experience, while keeping the hand in an ideal position for the tracking sensors. For example, in the First Hand sample, the entire game can be experienced sitting or standing, with hands almost always in front of the headset. This makes the hand tracking more reliable while improving accessibility.
These are some of the best practices that we recommend keeping in mind when designing interactions using hands in VR. Interaction SDK provides you with modular components that can make implementing these interactions easy, so you don’t have to start from scratch when building your experiences.
Resources
To get started with Interaction SDK in Unity, check out our documentation on the
Interaction SDK overview, where we go over the setup, prerequisites, and other settings. We’ve added several sample scenes in the SDK to help you get started with developing interactions in your apps. To learn more about the examples, check out our documentation that goes over the
example scenes and the
feature scenes and the interactions they showcase. Check out
our blog to learn from the team who built
First Hand and see their tips to get started with Interaction SDK.
To learn more about the OVRInteraction prefab, Hands and Controllers, and how to set them up, check out our
documentation for Inputs. To learn more about how Interactors and Interactables work and the various kinds of Interactor-Interactable pairs available, check out our documentation on Interactions, where we go over Interactors and their methods, Interactables and the lifecycle between the two, Grabbable components, and the various kinds of interactions and how they work.
About Presence Platform
This blog supports the video
“Building Intuitive Interactions in VR: Interaction SDK, the First Hand Sample, and Other Resources.” In this blog, we discuss Presence Platform’s Interaction SDK and how it can drastically reduce development time if you’re looking to add interactions to your VR experiences. To help you create virtual environments that feel authentic and natural, we’ve created
Presence Platform, which consists of a broad range of machine perception and AI capabilities, including Passthrough, Spatial Anchors, Scene understanding, Interaction SDK, Voice SDK, Tracked keyboard, and more.