Navigating the maze of testing virtual reality – Part 1

Robin Angland, QA Analyst

Let's start at the beginning

We are currently working on a virtual reality project for the Oculus Rift. The scope is to build a set of mazes in which the user is tasked with successfully navigating various routes whilst finding objects within the maze. Results are recorded for time, accuracy and route recall.

There will also be a desktop app running in parallel that displays facilitator screens, from which a controller can coordinate the levels for the user.

Sounds like fun and something new to get our teeth into, but the main question was: how do we go about testing it when current Oculus Rift testing experience within our whole QA team = 0?
We did some research (googled it) and found a lack of information and experience on the internet and in our usual sources. We then decided to write this blog to hopefully provide some guidance for other testers in similar situations.

We decided to use Agile for this project, with several iterations. There was a high-priority focus on the user experience as well as functionality during QA. The requirements were based around user stories, with QA test scripting driven by the acceptance criteria from those stories.

Let's see what we've got to work with

We were able to use our existing tool set on this project:

  • Requirements and defect tracking in Jira
  • Test scripting, planning and execution in SmartBear QAComplete
  • A Windows 10 machine – as the Oculus Rift is restricted to development and use on Windows, both the development and testing of the project were done on it
  • Oculus Rift headset with an Xbox One controller

Before providing any estimates we did our homework. We were given wireframes which were extremely detailed. We were very lucky that our stakeholder was heavily involved with the project from the start, which gave us the advantage of even more detailed requirements and fewer assumptions. As virtual reality is very new to us, it was important to take on board as much information as possible in the planning phase, as we didn't have previous experience to rely on to predict defect patterns.

Another bonus was that the developer was able to demonstrate what had been built so far before any test scripting was undertaken. This not only gave us direction for our test scripting, but also a general idea of potential execution timings. We also had access to some of the mini demos on the Oculus, including ‘Showdown’ and ‘Invasion!’, which gave us a feel for what it was like to set up the Oculus and wear the headset.

Planning and estimation

Now that we were aware of the hardware and software we were to use, and had read all of the available documentation, we were ready to start the estimation process. We used story points with a bottom-up approach. Points of consideration for estimating:

  • Figuring out how to design the tests
  • Use of a headset and Xbox controller (Rather than keyboard and monitors)
  • Visual walk-through of each maze level – We deemed this high priority due to the focus on user experience
  • Opinions of experts
  • Average run times

As part of the planning phase we designed the acceptance criteria for each user story, which helped us make initial estimates for the testing. We evolved these as the project developed and were able to refine our estimates of test size and potential execution times.

Potential risks and obstacles during planning, and our solutions to mitigate them

How do you ensure physical safety of the user?

It was obvious that we required a separate area for testing. It just so happened that we had a small office away from the hustle and bustle of the main open-plan working area, which was perfect.

Operating system

As the program was developed on a Windows PC, the setup was already covered in our new ‘office’. The real obstacle was us testers getting used to Windows again: we use Linux (Ubuntu) as standard, so we needed to re-familiarise ourselves with Windows.

Where do you start when writing the test scripts?

In the end this was no more than a concern. As with other projects, test scripting evolved around what was to be delivered and was then broken up by maze level. We chose to split it by level rather than by role because we felt it would flow better. This is exactly how we would have started had the project been 2D.

How do you even run a test script whilst wearing a headset?

We decided the best approach was to make the test steps short and easy to remember, so that we could reduce the number of times we had to take the headset off to refer to the script. Recording the results of each step would have to be done retrospectively.

Planning releases

As there were many ‘unknowns’ in the early stages, releases were based heavily on when each development phase was completed rather than on working towards specific dates. This worked well for us, as functionality was delivered in one piece rather than leaving bits behind for another sprint. In future we would plan the release dates, especially as we now have the experience to know how long each phase will take us. The Agile approach gave us the flexibility to realign what was being delivered per release.

Test Scripting

Even though the user experience was a high-priority focus for QA, functionality was still our main priority.

Did each level do exactly what it was required to do? 

Lydia wearing a VR headset

The user experience could be tested from our own personal perspective, but you must remember that each person is different and it would be impossible to test all aspects for every user. We decided it would be advisable to stick to testing the most obvious and basic user actions whilst running through the mazes.

Test coverage was led by the acceptance criteria, which were derived from the user stories. However, we did need to consider a few extra factors (a rough sketch of how one of these could be checked objectively follows the list):

  • Audio instructions
  • Background audio
  • Speed/smoothness of movement
  • Visual – Is it realistic?
  • Visual – Has the entire viewable space been covered (i.e. no missed areas)?
  • How the controller works for movement and button presses – Could a novice use this without any prior training?
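
For the ‘speed/smoothness of movement’ factor in particular, a subjective run-through can be backed up with a rough objective check against the Rift's 90 fps target. The sketch below is purely illustrative: the file name, log format and pass threshold are our own assumptions, not part of the project.

    # Hypothetical sketch: flag frames that miss the Oculus Rift's 90 fps budget (~11.1 ms).
    # Assumes a plain-text log with one frame time in milliseconds per line;
    # the file name and log format are illustrative only.
    FRAME_BUDGET_MS = 1000.0 / 90.0  # ~11.1 ms per frame at 90 fps
    def missed_frame_ratio(path, budget_ms=FRAME_BUDGET_MS):
        """Return the proportion of logged frames that exceeded the frame budget."""
        with open(path) as log:
            times = [float(line) for line in log if line.strip()]
        missed = [t for t in times if t > budget_ms]
        return len(missed) / len(times) if times else 0.0
    ratio = missed_frame_ratio("maze_level1_frametimes.log")
    print(f"{ratio:.1%} of frames missed the 90 fps budget")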

Whilst test scripting we used the following testing techniques (a small, hypothetical example of the partition and boundary techniques follows the list):

  • Exploratory testing
  • Use case and scenario-based testing
  • Equivalence partitioning
  • Boundary value analysis
  • Negative testing
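
To show what equivalence partitioning and boundary value analysis look like in this context, here is a small hypothetical example. The 120-second limit and the classify_run function are invented purely for illustration; they are not the project's actual rules.

    # Hypothetical example of equivalence partitioning and boundary value analysis.
    # Assume, for illustration only, a maze level where a completion time of up to
    # 120 seconds counts as a pass and anything longer counts as a fail.
    TIME_LIMIT_S = 120
    def classify_run(completion_time_s):
        """Partition a run into 'pass' or 'fail' based on the assumed time limit."""
        if completion_time_s < 0:
            raise ValueError("completion time cannot be negative")  # negative-testing case
        return "pass" if completion_time_s <= TIME_LIMIT_S else "fail"
    # Boundary values sit just inside and just outside each partition.
    for time_s, expected in [(0, "pass"), (119, "pass"), (120, "pass"), (121, "fail")]:
        assert classify_run(time_s) == expected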

Development

As we were using Agile, there was a lot of developer/tester interaction. A virtual reality project just wouldn't work with a waterfall methodology, as the developer and tester relationship needs to be far more collaborative. The project was released to QA iteratively, and defects were resolved and released back to QA for retest.

One piece of advice is to have more than one headset: in our case we had a single headset between us, which meant development and QA could not happen at the same time.

Coming up in Part 2…

In Navigating the maze of testing virtual reality – Part 2, I look at test execution, lessons learnt during testing, and unexpected issues we encountered during the testing phase.
