All Oculus Quest developers MUST PASS the concept review prior to gaining publishing access to the Quest Store and additional resources. Submit a concept document for review as early in your Quest application development cycle as possible. For additional information and context, please see Submitting Your App to the Oculus Quest Store.
Most VR professionals can facilitate a playtest, but it takes a great deal of practice to truly specialize in this role and ensure you maximize the value of your results. Here are a number of recommendations for how best to facilitate + moderate a VR playtest, followed by best practices for analyzing and reporting on your test results.
The individual assigned to facilitate your playtest (sometimes called the moderator) is in charge of guiding the tester through the playtest from start to finish. The environment, whether the test is remote or in-person, and the goals of the test will all have a great deal of impact on how best to execute your playtest. Communication is key, and there are a number of other tactics that will ensure your tester provides the feedback you need.
While the information provided in the research planning phase will help a great deal, it’s highly recommended that you test your test. Do a few practice sessions with your script, and if possible, do so with a teammate or peer. This will enable you to deliver the script and manage the conversation with confidence.
As noted in the script writing section, be sure that the tester reviews + approves your NDA, or any other legal consent forms, prior to kicking off your test. The importance of this step cannot be overstated.
In order to acquire valuable feedback, you want your tester to be generally at ease throughout the test. Start the test by noting that there is no such thing as bad feedback, that the game is a work in progress, and that they should speak freely throughout.
While you should practice your script, do your best not to read from it verbatim at any time. Be calm, casual, and never critical.
Along with your introduction, it may help to provide them an overview of the Oculus hardware. Guide them through each piece of hardware, how they work, and how to hold/wear each of them.
If the test is in-person, we recommend the following steps to help introduce new VR users to the hardware:
If your app has the flexibility, it is recommended to start a newer VR player in a calm environment rather than one that has high intensity in the first 2-3 minutes. This will give them a chance to get a feel for the overall visual design, locomotion style, and how they are embodied in the virtual environment.
Even if your app is generally linear, enable the tester to experience your app however they believe it should be experienced. If the user asks how they should be navigating your app, do your best to let them think for themselves. Avoid corralling them into the path you had intended as you may discover important learnings about what they consider fun, their expectations for interactivity, and numerous other insights that can only be discovered with free play.
While encouraging your tester to play with a certain amount of freedom, you should encourage testers to think aloud as they play your app. The team at MIRAGESOFT made a valuable discovery through an offhand comment when testing their game, Reel VR Fishing:
During the playtest, a user remarked that if the UI system could be turned off, it would add to the immersion. Throughout development our team had never thought of it this way, and we were in awe of the different points of view that helped us to further improve the game.
Enable the tester to speak freely whenever possible, avoid interrupting, and get comfortable with long pauses. Collectively, these tactics will help your tester speak openly and honestly.
If your tester is simply quiet and does not provide the volume of feedback you’re looking for, ask follow-up questions about what they’ve already shared with you.
Finally, internalize the practice of answering questions with a question. For example:
Tester: What am I supposed to do here?
Researcher (You): What do you think you are supposed to do there?
This will help your tester more openly think through their experience, their challenges, and assumptions.
During your playtest, always be looking for visual and audible cues that might help you learn even more about the tester’s experience. These include, but are not limited to, body language, facial expressions, and tone of voice.
Following the playtest, you will want to discuss the tester’s high-level thoughts on the experience; this is where your survey/script will come in handy. Be sure to remember the recommendations above for efficient communication throughout user research.
While talking through your questionnaire, do your best not to take notes on every response. You should be recording this process, and avoiding detailed note-taking assures the tester that you’re giving them your undivided attention. Once the full interview is complete and they have left the premises, feel free to write down any other notes you had in mind.
If you are doing multiple tests, be sure to wipe down the headset and controllers with hypoallergenic wipes. It’s also good to change out the headset padding if you have a spare available.
You’ve completed your tests; you have notes, survey answers, recorded audio/video, and potentially some testing automation data. It’s time to dive in, then find, record, and share your insights. This phase is not to be taken lightly: there is plenty of work to be done post-test, and while this phase (like the others) will be shaped by your goals + available resources, here are a few best practices that will help streamline your process.
Be prepared to prioritize: You will have bugs, UX and gameplay feedback, and a finite amount of resources. Be sure that you and your team are prepared to analyze your data and prioritize where to focus your time and effort going forward.
Review your results as a team: If possible, get the full team together to analyze the results and key takeaways. This is a great opportunity to share personal notes, review specific recordings, and ensure that your full team understands why you will be spending the time to make certain adjustments to your app going forward.
A single comment does not have any less impact: While hearing the same feedback more than once adds to its validity, do not discount a comment that comes up only once. You put in the effort to recruit your target audience, so all comments should be taken into consideration, especially in the early phases of your testing process.
With the example data set visualized below, our XR User Research Team has provided the following points to help guide your playtesting data analysis + visualization:
Tips for when your data set is <25 testers: Don’t use averages if you have a sample size below 20. You may still want some sort of score to track, but for usability you need to think in terms of people succeeding or failing. Use terms like “some” and “most.” Remember, one failure out of five is significant: if you recruited correctly, that could mean thousands or millions of people failing.
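As a sketch of this principle, the snippet below (the task and its results are hypothetical) reports a small-sample usability result as success/fail counts with qualitative terms rather than an average:

```python
# Hypothetical results for one usability task: True = succeeded, False = failed.
results = [True, True, False, True, True]

successes = sum(results)
failures = len(results) - successes

# With small samples, report counts and qualitative terms, not averages.
if failures == 0:
    summary = "all testers succeeded"
elif successes > failures:
    summary = f"most testers succeeded ({successes} of {len(results)})"
else:
    summary = f"some testers succeeded ({successes} of {len(results)})"

print(summary)  # → most testers succeeded (4 of 5)
```

A readout like “4 of 5 testers succeeded” keeps the one significant failure visible in a way an average score would hide.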
Tactics to effectively record and analyze open-ended questions: Open-ended questions should be qualified and coded. A simple way to do this is to review each piece of feedback and assign a numerical value to each comment in a new document. As an example:
User feedback: He ran really fast, I was able to beat enemies really well.
This would be recorded as two pieces of feedback: #1: Running felt fast. #2: Fast running helps players be successful.
Keep a tally as more playtesters mention these points, and sort them by their assigned codes. If you notice certain feedback trending throughout, be sure to save specific quotations so you can include them in the final report.
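A minimal sketch of this tallying process, using the two hypothetical codes from the example above (the coded comments are invented for illustration):

```python
from collections import Counter

# Hypothetical codes assigned while reviewing open-ended feedback.
codes = {
    1: "Running felt fast",
    2: "Fast running helps players be successful",
}

# One entry per coded comment, accumulated across all playtesters.
coded_comments = [1, 2, 1, 1, 2, 1]

tally = Counter(coded_comments)

# List codes by how often testers raised them, most frequent first.
for code, count in tally.most_common():
    print(f"#{code} {codes[code]}: {count} mention(s)")
```

Sorting by frequency makes trending feedback obvious at a glance, which is also a natural point to attach the saved quotations for the final report.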
Layout should be constructed based on your goals/objectives: The visualization of your results should be laid out based on the different aspects you wanted to test. Go through all of your notes and transfer them into sections in a separate document. From there, you can organize and pull out your key findings and action items.
You may want to visualize your data in a flow from round to round: In the example below, notice how the data is visualized on the horizontal axis. The testers’ comments were used to qualify how they felt about the experience after every two rounds of playing; we then assigned the attributes Fun, Challenge, and Frustration based on their open-ended responses.
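One way to tabulate such a round-to-round flow before charting it might look like the sketch below; the checkpoints, testers, and coded responses here are all hypothetical:

```python
from collections import Counter

# Hypothetical coded responses: the attribute each tester's comments
# mapped to after every two rounds of play.
responses_by_checkpoint = {
    "Rounds 1-2": ["Fun", "Fun", "Challenge"],
    "Rounds 3-4": ["Challenge", "Frustration", "Fun"],
    "Rounds 5-6": ["Frustration", "Frustration", "Challenge"],
}

attributes = ["Fun", "Challenge", "Frustration"]

# Count each attribute per checkpoint so the flow reads left to right.
counts_by_checkpoint = {
    checkpoint: Counter(responses)
    for checkpoint, responses in responses_by_checkpoint.items()
}

for checkpoint, counts in counts_by_checkpoint.items():
    row = "  ".join(f"{a}: {counts.get(a, 0)}" for a in attributes)
    print(f"{checkpoint}  {row}")
```

From a table like this, a shift from Fun toward Frustration across checkpoints becomes easy to spot and to chart along the horizontal axis.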
Visualization is essential to communicating your results to your cross functional teams, ensuring your research is actionable. If you are new to data visualization, you may want to do some independent research before reporting your findings.
If you haven’t already, be sure to review the first guide in this series on MRC, which covers app design best practices including locomotion, environmental design, and more.