Navigating the maze of testing virtual reality – Part 2

This is the second and final post about QA testing in virtual reality. If you didn’t read the first post, Navigating the maze of testing virtual reality – Part 1, then you should read that first.

Test Execution

It is hard to describe what it is like wearing the headset for the first time. It's a mixture of disorientation, fascination and feeling a little overwhelmed: trying to remember the test script you were following, using an Xbox controller for movement, and still testing the actual functionality, all at the same time. It took a good day of testing to get into the routine of following the process below:

  • Read test script
  • Put headset on
  • Trigger level as facilitator
  • Navigate through the maze
  • Remove headset
  • Mark test steps as pass/fail

Highlights and lessons learnt during the testing phase

High level test scripts
Initially these were written in a lot of detail, but we soon realised they needed to be more high level. This saved time compared with writing detailed step-by-step test scripts. The key was to keep things general within the scripts: the tester still needs to raise issues that are cosmetically incorrect, but the script does not have to spell out every specific detail to check. Keeping the scripts general also reduces the amount of maintenance they require.
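
As a purely hypothetical illustration (not a step from our real scripts), a high-level step might read 'Navigate from the start point to the end of the level, collecting every object on the way, and raise anything that looks visually wrong', rather than listing each turn, object and visual check as its own step.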

2D Map
The addition of a 2D map view showing exactly what the Oculus user was seeing was a real advantage during the testing phase. This comes with the Oculus; we did not have to build it ourselves. It was the only view that allowed us to take screenshots, as we couldn't do this within the Oculus itself. We couldn't capture everything, but it could be used for at least half of the issues we found. It also allowed some tests that didn't need to be run in 3D to be run without the headset, which gave us a break from it.

Facilitator map view
This was a project requirement which we found really helpful during testing, and we would recommend building one for future projects. It gave us an overview of each level as a whole, including the following:

  • Maze starting point
  • Maze end point
  • Location of objects within the maze
  • Location of the subject as they move around the maze in real time 

Two testers are better than one
During the testing cycle we were able to test in parallel on the project. Obviously this means higher costs than a traditional project; however, a lot of time is saved because the first tester does not have to run through the test scripts beforehand, or repeatedly take the headset off and lose their current position in the maze. We found that the second tester worked well supporting the first, with advantages that should be seriously considered during estimation and scoping on any virtual reality project. The second tester can:

  • Read through and mark pass/fail on the test scripts while the first tester is wearing the headset
  • Guide the first tester through the maze using the facilitator map view
  • Act as a second pair of eyes on the maze using the 2D map view
  • Take screenshots of defects using the 2D map view
  • Write detailed defect reports as the first tester describes them during execution, which keeps defects accurate because no information has to be remembered later

Glasses vs Contact Lenses
The Oculus can accommodate people wearing glasses, although it makes taking the headset on and off really fiddly. We still found glasses preferable, as using the Oculus really dries your eyes out, which is a lot more noticeable and uncomfortable when wearing contact lenses.

Rift Face
We found that after extensive use, 'Rift Face' markings would appear on your face, outlining exactly where the Oculus had been sitting.

Unexpected issues we came across during the testing phase

Time
Test runs took a lot longer than estimated. Being unable to skip to a specific part of a maze meant that tests relating to the end of a maze, for example, required working through the earlier parts of the maze as a prerequisite. Testing each level wasn't just functional testing of the maze; it included several other items:

  • Recording of the results (timings, turns taken and so on)
  • Audio (instructions, background)
  • Controls
  • Visual effects

Scheduled breaks also needed to be planned in, as the recommendation when using an Oculus is 10-15 minutes of break time after every 30 minutes of use. That means for each hour of testing you perform, you need to allow around 1 hour 20 minutes!
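
To sanity-check an estimate, that overhead is easy to compute. Here is a minimal sketch in Python, assuming the 30-minute sessions and 10-minute breaks from the guideline above (the function is ours, purely for illustration):

    from datetime import timedelta

    def wall_clock_time(testing_minutes, session=30, rest=10):
        """Rough wall-clock estimate for headset testing: a rest break
        is taken after every full session of headset use."""
        breaks = testing_minutes // session  # one break per full session
        return timedelta(minutes=testing_minutes + breaks * rest)

    print(wall_clock_time(60))   # 1:20:00 - matches the figure above
    print(wall_clock_time(180))  # a three-hour test pack becomes 4:00:00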

Adverse Effects 
We found that virtual reality affects each person differently. One of us had a few minutes of nausea which improved with each use and disappeared after a couple of hours of using the Oculus. The other of us had much worse nausea, feeling ill for around an hour after their first half hour of testing on the Oculus; this only eased slightly over the course of the entire testing cycle. Other side effects experienced were dizziness and fatigue. All of this was despite guidance around motion sickness having been taken into account during development.

Monotony
After the initial excitement of testing virtual reality wears off, the repetitiveness soon becomes challenging. Having to re-run one specific level ten times in a row can get extremely dull. We found we had to stay vigilant so as not to subconsciously rush through the level, as this could easily result in missed defects.

Recreating Defects
Although we had the use of the 2D map, it was extremely difficult to recreate defects for the developer. When working alone, the tester would have to lift their headset to take a screenshot, which often meant the view had changed and screenshots were not as accurate as we would have liked. To overcome this we had to put as much detail into our defect tickets as possible.

User Acceptance Testing

As mentioned previously, our client was heavily involved in the whole process. They have their own Oculus Rift, which meant that specific QA/UAT times did not have to be scheduled, reducing time pressures. It also meant that whilst the client was testing one phase we were able to progress to the next phase of development without affecting the UAT environment. Each time we released to the client, they worked on an ad hoc basis, testing each individual ticket rather than doing it all at once. This allowed the client to test based on priorities.

The Agile methodology worked really well, as we were constantly getting feedback on the user experience. This meant we could address problem areas quite early on, rather than having to rip out chunks at a later stage.

How do we regression test … ?

This is not yet tried and tested for us. In an ideal world there would be an automated test suite to handle it. As the project is still in flight we are not in a position to automate yet, and will review this at a later date.

Conclusion

Working with the Oculus Rift has been a fun yet challenging project, and we have really only dipped a toe in the water of virtual reality testing. Overall it has been a positive experience from which we have learnt a great deal.

Until we gain more experience and develop our approach, virtual reality testing will be more time-consuming and cost more than testing non-virtual-reality software. We adapted our usual practices throughout the test phase, and even though we are pleased with our new process for this type of project, we are not out of the maze yet; it is still very much a work in progress…

Written by Test Analysts Lydia Hewitt and Robin Angland.

Navigating the maze of testing virtual reality – Part 1

Let's start at the beginning

We are currently working on a virtual reality project for the Oculus Rift. The scope is a set of mazes in which the user is given the task of successfully navigating various routes whilst finding objects within the maze. Results are recorded for time, accuracy and route recall.

There will also be a desktop app displaying facilitator screens, running in parallel, from which a facilitator can coordinate the levels for the user.

Sounds like fun and something new to get our teeth into, but the main question is: how do we go about testing it when current Oculus Rift testing experience within our whole QA team = 0?
We did some research (googled it) and found a lack of information and experience on the internet and in our usual sources, so we decided to write this blog to hopefully provide some guidance for other testers in similar situations.

We decided that we would use Agile for this project, with several iterations. There was a high-priority focus on the user experience as well as functionality during QA. The requirements would be based around user stories, with QA test scripting driven by the acceptance criteria from those stories.

Let's see what we've got to work with

We were able to use our existing tool set on this project: requirements and defect tracking are stored in Jira, while test scripting, planning and execution are done within Smartbear QA Complete.

As the Oculus is restricted to development and use on Windows machines, both the development and the testing of the project were completed on a machine running Windows 10.

Before providing any estimates we did our homework. We were given wireframes which were extremely detailed. We were very lucky in that our stakeholder was heavily involved with the project from the start, which gave us the advantage of even more detailed requirements and fewer assumptions. As virtual reality was very new to us, it was important to take on board as much information as possible in the planning phase, since we didn't have previous experience to rely on to predict defect patterns.

Another bonus was that the developer was able to demonstrate what had been built so far before any test scripting was undertaken. This not only gave us direction for our test scripting, but also a general idea of potential execution timings. We also had access to some of the mini demos on the Oculus, including 'Showdown' and 'Invasion!', which gave us a feel for setting up the Oculus and wearing the headset.

Planning and estimation

Now that we were aware of the hardware and software we were to use, and had read all of the documentation available, we were ready to start the estimation process. We used story points in a bottom-up approach. Points of consideration when estimating:

  • Figuring out how to design the tests
  • Use of a headset and Xbox controller (rather than a keyboard and monitor)
  • Visual walk-through of each maze level – we deemed this high priority due to the focus on user experience
  • Opinions of experts
  • Average run times

As part of the planning phase we designed the acceptance criteria for each user story, which helped us produce initial estimates for the testing. We evolved these as the project developed and were able to refine our estimates of test size and potential execution times.

Potential risks and obstacles during planning and our solutions to mitigate them

How do you ensure physical safety of the user?
It was obvious that we required a separate area for testing. It just so happened that we had a small office away from the hustle and bustle of the main open plan working area which was perfect.

Operating system
As the program was developed on a Windows PC, the setup was already covered in our new 'office'. The obstacle was actually us testers getting used to Windows again: as standard we use Ubuntu Linux, so we needed to refamiliarise ourselves with it.

Where do you start when writing the test scripts?
This turned out to be just a worry. As on other projects, test scripting evolved around what was to be delivered, broken up by maze level. We chose to split the scripts by level rather than by role because we thought it would flow better; this is exactly how we would have started had the project been 2D.

How do you even run a test script whilst wearing a headset?
We decided that the best approach was to make the test steps short and easy to remember, so as to reduce the number of times we would have to take the headset off to refer to the script. Marking off the actual run of the steps would have to be done retrospectively.

Planning releases

As there were many 'unknowns' in the early stages, releases were based heavily on when each development phase was completed, rather than working towards specific dates. This worked well for us, as it meant functionality was delivered in one piece rather than leaving bits behind in another sprint. In future we would plan the release dates, especially as we now have the experience to know how long each phase will take us. The Agile approach allowed the flexibility to realign what was being delivered per release.

Test Scripting

Even though the user experience was a high priority focus for QA testing, the functionality was still our main priority.

Did each level do exactly what it was required to do? 

The user experience could be tested from our own personal perspective, but you must remember that each person is different and it would be impossible to test all aspects for every user. We decided it would be advisable to stick to testing the most obvious and basic user actions whilst running through the mazes.

Test coverage was led by the acceptance criteria, which were driven by the user stories. However, we did need to consider a few extra factors:

  • Audio instructions
  • Background audio
  • Speed/smoothness of movement
  • Visual – Is it realistic?
  • Visual – Has the entire viewable space been covered (i.e. no missed areas)?
  • How the controller works for movement and button presses – Could a novice use this without any prior training?

Whilst test scripting we used the following testing techniques:

  • Exploratory testing
  • Use case and scenario-based testing
  • Equivalence partitioning
  • Boundary value analysis
  • Negative testing
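
To make the last two techniques concrete, here is a hedged sketch of how boundary value analysis and negative testing might look in Python. The 30-minute completion-time cap and the validate_completion_time function are hypothetical, invented purely for illustration; they are not part of the real project:

    # Hypothetical example: boundary value analysis on a maze level's
    # completion-time limit. The 30-minute cap and the validator are
    # made up for this sketch.
    MAX_SECONDS = 30 * 60

    def validate_completion_time(seconds):
        """A run is valid if it completes within the (made-up) time cap."""
        if seconds < 0:
            raise ValueError("completion time cannot be negative")
        return seconds <= MAX_SECONDS

    # Boundary values: just inside, on, and just outside each boundary.
    assert validate_completion_time(0)                    # lower boundary
    assert validate_completion_time(MAX_SECONDS - 1)      # just inside the cap
    assert validate_completion_time(MAX_SECONDS)          # on the cap
    assert not validate_completion_time(MAX_SECONDS + 1)  # just outside the cap

    # Negative testing: invalid input should be rejected outright.
    try:
        validate_completion_time(-1)
    except ValueError:
        pass  # expected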

Development

As we were using Agile, there was a lot of developer/tester interaction. A virtual reality project just wouldn't work with a waterfall methodology, as the developer and tester relationship needs to be far more collaborative. The project was released to QA iteratively; defects were resolved and released back to QA for retest.

One piece of advice: have more than one headset. In our specific scenario we had one between us, which meant development and QA could not happen at the same time.

Coming up in Part 2…

In Navigating the maze of testing virtual reality – Part 2 we look at test execution, lessons learnt during testing, and unexpected issues we encountered during the testing phase.

Written by Test Analysts Lydia Hewitt and Robin Angland.

Generating sample unicode values for testing

When writing tests, there's one thing we're now sure to do because forgetting it has caught us out so many times before: use unicode everywhere, particularly in Python.

Often, people will just go to Wikipedia and paste bits of unicode chosen at random into sample values, but that doesn’t always make for good readability when the tests you forgot to comment break two months later and you have to revisit them.

A simple way I've found to generate unicode test values that make sense is to use an upside-down text generator, or some other l33t text transformer that produces unicode. If the text describes whatever the sample value is supposed to represent, it's still pretty legible at a glance, and you'll hopefully flag up those pesky UnicodeDecodeErrors quicker.
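
As a minimal sketch of the idea, a tiny upside-down transformer in Python might look like this (the mapping only covers lowercase ASCII letters; anything else passes through unchanged):

    # Minimal upside-down text transformer for generating readable
    # unicode test values. Lowercase ASCII letters only.
    FLIP = dict(zip(
        "abcdefghijklmnopqrstuvwxyz",
        "ɐqɔpǝɟƃɥıɾʞlɯuodbɹsʇnʌʍxʎz",
    ))

    def upside_down(text):
        """Flip text upside down, reversing it so it reads naturally when rotated."""
        return "".join(FLIP.get(c, c) for c in reversed(text))

    first_name = upside_down("first name")
    print(first_name)  # ǝɯɐu ʇsɹıɟ - still legible, definitely not ASCII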

There’s a handy list of different text transformations and websites that will perform them for you on Wikipedia.