With the launch of Meta Quest Pro, we unlocked full-color mixed reality and introduced a new pillar to Presence Platform’s core stack with social presence. The developers of ShapesXR have already begun seeing the benefits of these new capabilities. We sat down with Head of Business Gabriele Romagnoli to learn more about ShapesXR and how other VR developers have started using the application for their work.
Tell us a little about yourself and how you started working on ShapesXR.
Gabriele Romagnoli: I jumped into VR back in 2016, and I was immediately hooked by the incredible creative opportunities. I always had a creative heart, but 3D tools felt very overwhelming. I was under the impression that I needed a huge time commitment in order to get some initial results. VR, on the other hand, really gives everyone creative superpowers. Everything feels more intuitive, and you’re fully immersed in your creation. While exploring various creative tools, I stumbled upon ShapesXR. I gave the team some feedback, they answered my questions, and before I knew it, I became part of the XR revolution.
What is ShapesXR?
GR: ShapesXR is a collaborative design and prototyping tool for XR. It has a powerful creative toolset that lets you assemble complex scenes in VR without prior 3D experience. Unlike other 3D creation tools, it has a “storyboard” system that allows you to prototype the flow of a whole immersive experience or more detailed user interactions. Once you have some 3D sketches, you can invite anybody into VR to truly align on the design and vision, share what you’ve created via the web, and even bring it straight into Unity with a handy plugin. Being together in VR truly adds a new level of understanding to the design process and cuts back significantly on iterations and unnecessary rework.
Several VR developers are using ShapesXR as part of their development process, right? What does that look like? Is it different for every studio?
GR: Absolutely! FitXR, TriggerXR, and TRIPP are some of the studios already using ShapesXR as part of their design process. Another example is the Nanome team. They used ShapesXR to accelerate their ideation and design process in order to meet strict deadlines for their launch on the new Meta Quest Pro. The design team could work autonomously and produce better designs much faster, with very limited support from the dev team. They would sketch various options in MR and share them early so that the whole team could align on a design decision based on something everybody truly understood. Options that weren’t technically feasible were identified early and discarded without committing any development time to testing the idea. As a result, the time needed to go from an idea to a design prototype that could be shared with users and teammates in VR was cut from 10 days down to one.
With the launch of Meta Quest Pro, you’ve integrated stereoscopic color Passthrough, as well as Meta Avatars with eye and face tracking. How has that changed the application since you first set out?
GR: The addition of color Passthrough dramatically broadened the use cases for ShapesXR, making it possible to design and prototype mixed reality applications directly on Meta Quest 2 and Meta Quest Pro. And it’s much more than just adding a “passthrough” switch. Creatives can use a special “Passthrough material” to recreate portals, bring in their room setup as a reference, switch seamlessly between VR and MR, and much more. The addition of avatars and face tracking made being together remotely in VR or MR a much more fun and engaging experience. Being able to see what other people are looking at, even in silence, or to see their lips move realistically helps tremendously to make communication feel more real and more effective.
Did you face any challenges while integrating those features?
GR: Unlocking Passthrough was rather easy. The majority of the challenges were related to the fact that ShapesXR is a creative tool and we wanted our users to use “Passthrough as a material” for objects and assets created in the scene. The off-the-shelf shaders didn’t immediately work, and we had to find our own solution.

We also encountered some cases in which materials “conflict” with each other. It would be great to have access to Passthrough textures from any shader, which would solve several of these problems and open up opportunities for more experimentation.

When we looked at avatars, we had to deal with one main challenge: performance. ShapesXR supports up to 12 concurrent users co-creating in real time, and we wanted the experience to remain smooth even with the extra “weight” avatars add to the scene. For this reason, we implemented logic that switches between the Meta Avatars and our more minimalist avatar based on users’ relative size and the number of Meta Avatars in the scene. To illustrate this concept, we’ve created a space in ShapesXR that can be visited on the web or in VR.
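To make that switching logic concrete, here is a minimal, hypothetical sketch of how such a heuristic could look as a Unity C# component. The component name, fields, and thresholds are illustrative assumptions, not ShapesXR’s actual implementation or the Meta Avatars SDK API: it simply toggles each remote user between a full avatar and a lightweight fallback based on apparent size and a cap on how many full avatars are active at once.

```csharp
using UnityEngine;

// Hypothetical sketch only: toggles a remote user between a full-fidelity avatar and a
// lightweight fallback, roughly following the heuristic described above (relative size
// plus a cap on concurrent full avatars). Not ShapesXR code or the Meta Avatars SDK API.
public class AvatarDetailSwitcher : MonoBehaviour
{
    [SerializeField] GameObject fullAvatar;          // high-fidelity avatar root
    [SerializeField] GameObject minimalistAvatar;    // lightweight fallback (e.g. head + hands)

    [SerializeField] int maxFullAvatars = 6;         // cap on concurrent full avatars
    [SerializeField] float minApparentScale = 0.3f;  // below this relative size, use the fallback

    static int activeFullAvatars;                    // shared across all remote users in the scene
    bool usingFullAvatar;

    void Update()
    {
        // "Relative size" is approximated here by the avatar's world scale; a real
        // implementation might also weigh distance to the local camera.
        float apparentScale = transform.lossyScale.y;

        bool wantFull = apparentScale >= minApparentScale
                        && (usingFullAvatar || activeFullAvatars < maxFullAvatars);

        if (wantFull != usingFullAvatar)
        {
            activeFullAvatars += wantFull ? 1 : -1;
            usingFullAvatar = wantFull;
            fullAvatar.SetActive(wantFull);
            minimalistAvatar.SetActive(!wantFull);
        }
    }

    void OnDisable()
    {
        // Free up the slot if this user leaves while showing a full avatar.
        if (usingFullAvatar)
        {
            activeFullAvatars--;
            usingFullAvatar = false;
        }
    }
}
```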
How do you think Presence Platform’s mixed reality and social presence features will change the way we work together?
GR: The ability to have others “join your physical space” virtually is truly unique. Uniqueness turns into magic when there is a common element both participants share, like a table or a wall. Suddenly you can start adding 3D assets, UI elements, or any form of digital content to your space, and you feel truly connected. The other aspect I’m personally excited about is the ability to share a representation of your space with others. This would allow you to bring the creative process anywhere and have anyone join you at any time. If you’re designing an MR app to assist in a manufacturing process, you could put on the headset from your home office and join a subject-matter expert onsite who’s also wearing a headset. You’d be able to see how the machine looks, decide which steps you want to be MR-assisted, and determine where the UI should be, all while the person onsite validates the decisions in the actual physical space. The possibilities are truly endless.
What advice would you give to developers who want to start creating their own game or application?
GR: Iterate fast. Starting in Unity right after some 2D sketches or Figma prototypes is a big leap and leads to a lot of back and forth. We’ve experienced that ourselves, and right now all our early designs are done spatially in ShapesXR. This has several advantages:
- Designers feel empowered to ideate and experiment autonomously. Even if they’re not skilled in 3D tools or game engines, they can ideate spatially in ShapesXR within a few hours of practice (and helped by our in-app video tutorial series).
- There’s a lot of time freed up for developers who can now focus on a few designs that have already been validated in 3D.
- The final product feels much more spatial because the ideation itself was done in VR. You’d be surprised how well our brains can think through and troubleshoot spatial problems when in VR.
What does 2023 hold for ShapesXR? Any future plans you can share?
GR: We have a lot of stuff cooking. Our priority for the first half of 2023 is to make ShapesXR more powerful and more integrated. After listening closely to our users, we’ve decided to work on an interactivity system that will allow creatives to add triggers and branch the storyline, as you would with 2D design tools like Figma. This will open up so many possibilities and make ShapesXR the go-to tool for any interactive spatial design prototype. With regard to integration, we’re well aware that some things are just better done in 2D, like designing UI panels or 2D graphics. We want to facilitate the process of visualizing and validating those design ideas in VR with a simple workflow that integrates Figma into the process. We’re currently showing our concept to our users and inviting them for interview sessions. Guess how: directly in ShapesXR 😋
You can get a sneak peek at the various functionalities we’re evaluating for the Figma integration here.