Remote Usability Testing
“What Happened to Remote Usability Testing? An Empirical Study of Three Methods” by Morten Sieker Andreasen, Henrik Villemann Nielsen, Simon Ormholt Schrøder, & Jan Stage
Andreasen et al.'s (2007) research evaluated three methods for remote usability testing and systematically and empirically compared them with traditional lab-based usability testing. The three remote methods comprised one remote synchronous testing method and two remote asynchronous testing methods. Their study included a total of 24 participants, made up of usability experts and general users, distributed across the four usability testing conditions. Participants completed nine separate tasks in the Mozilla Thunderbird email client. The major result was that synchronous remote testing differed very little from traditional lab testing in either task completion time or the number of usability problems identified. Both asynchronous methods, by contrast, were more time consuming and identified fewer usability problems than the lab and synchronous remote methods. Nevertheless, Andreasen et al. (2007) argue that asynchronous methods deserve further evaluation because they demand little effort from evaluators and make it possible to gather data from a larger pool of participants.
Andreasen et al.'s (2007) research and literature review effectively demonstrated the benefits that remote usability testing methods offer both the researchers and the participants of usability tests. My main critique of the study, however, concerns the data collection methods used in the two asynchronous conditions relative to the remote synchronous and lab conditions. The questionnaire method may not have been the most effective way to compare results across these different testing scenarios. I am curious how the results of the asynchronous sessions would have compared to the lab and moderated remote sessions if the evaluators had instead asked each participant to record their individual session while using the same think-aloud technique applied in the other conditions.