ux research and bad data – part one: recruitment


One of the main roles of the researcher is to provide quality, reliable data to the product team. However, this is not quite as easy as it may seem. Researchers, for lack of appropriate information or skill, may be at risk of generating data of poor quality. Multiple strategies can be used to ensure the highest quality of the data being collected, including careful review of the research protocol and a generally skeptical approach to one’s research methodology. In this post, I’ll introduce the underlying reason why a good researcher should always cast a critical eye on the data they are being fed, and apply the same scrutiny to their own data. I will then discuss some common threats to good data collection I have encountered in recent weeks, and in particular the threat of bad recruitment.

i’m a researcher, not a believer

One of the paramount principles of academic research is skepticism. In short, the underlying philosophical principle of modern research is that nothing is ever really true (theoretically); it has just not yet been disproved. This concept is called falsifiability. As a consequence, when doubt creeps in, the tendency of a researcher is to want to challenge the data they are exposed to. What could be perceived as a pessimistic perspective on the world is in fact a healthy way to ensure reasonably good quality of research insight. In fact, if you read the discussion on the lack of validity and errors in medical research (here, thanks Jonathan!), what else can you conclude but that you, and everybody else, could do a better job ensuring collected data and derived insights are accurate? Most of the flaws in research are called “threats to validity”, that is, threats that affect the validity of the data and associated insights. We as researchers must be wary of threats to validity in other people’s data and insights, but also in our own research.

In recent weeks, I have been faced with various problems, in my research and in that of others outside the company, that have (or could have) compromised the data being collected. They include issues with recruiting the right people for lab studies, applying the right methodology correctly, and the push for quantitative methods for the wrong research questions. To start this discussion, I will look at the impact of recruitment issues on data collection. It may be obvious to you (it was to me), but as you read on, you may realize that obviousness does not necessarily prevent you from running into these issues.

threats by recruitment

One of the threats to the data you collect is the recruitment of the study’s participants, where bringing in certain profiles (accidentally or not) can turn the insights one way or the other. Consider, for example, a hypothetical study exploring the usability of a game controller targeted at casual gamers with limited to no prior experience manipulating such a controller. Imagine now that the researcher forgets to recruit a balanced mix of right- and left-handed participants, and that the pool of participants accidentally ends up being exclusively left-handed. The researcher runs the study and gets an overwhelmingly positive response: the user experience rocks. Only when the product is put to market do right-handed people turn out to be bothered by the placement of the controls, and the experience falls short. All of this because the researcher overlooked the participants’ dominant hand.
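To make the effect concrete, here is a minimal simulation sketch of that hypothetical study. All the numbers are invented for illustration (the assumed satisfaction means, an assumed 10% left-handed share of the user base); the point is only how far an all-left-handed panel can drift from a representative one.

```python
import random
import statistics

random.seed(42)

# Hypothetical numbers for illustration only: assume left-handed users
# rate the controller 4.5/5 on average, right-handed users 2.5/5.
def satisfaction(left_handed: bool) -> float:
    mean = 4.5 if left_handed else 2.5
    return min(5.0, max(1.0, random.gauss(mean, 0.5)))

def sample_scores(n: int, p_left: float) -> list[float]:
    """Draw n participants, each left-handed with probability p_left."""
    return [satisfaction(random.random() < p_left) for _ in range(n)]

# A lab study that accidentally recruits only left-handed participants...
biased_study = sample_scores(12, p_left=1.0)
# ...versus one whose recruitment mirrors the actual population (~10% left-handed).
representative_study = sample_scores(12, p_left=0.1)

print(f"all left-handed panel: {statistics.mean(biased_study):.2f}/5")
print(f"representative panel:  {statistics.mean(representative_study):.2f}/5")
```

Rerun it with different seeds and the biased panel consistently reports a far better experience than the population at large would.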

While this example seems trivial and easily avoidable, I ran into another case of inappropriate recruitment not long ago when conducting a study of one of my product team’s features. In that case, the initial discussion with the team led me to recruit participants with a general profile, on the assumption that they would be equally valid for the general scenario being used. The study eventually confirmed the direction of the feature, revealed a couple of minor usability issues, and outlined some directions for future releases. However, after a closed pre-beta release, the product team started receiving aggressive feedback from users. In teasing out the unexpected reaction, I came to understand that the product team was dealing with two main scenarios (not one, as initially outlined), potentially at odds with one another. As a result, I conducted a second study, bringing the right audience into the lab and gaining new insights into what would and would not work in the feature for this population. Had I not teased out the particular population the team intended for this tool, I would not have been able to collect reliable insights to help improve the feature’s appropriateness.

Last but not least, you can never fully control for the personal biases that participants bring with them into the room. When we bring people into the lab for a usability study at Microsoft, chances are strong that they already have at least a slight bias in favor of Microsoft technologies. While what you want as a researcher is open, critical feedback, you can end up with a participant who has a tendency to please you, which means they are more often than not trying to infer what kind of feedback you are looking for. The contrary is also true (you can find yourself with an anti-Microsoft participant), but it is significantly rarer.

In general, understanding who you are designing for can only improve your ability to ensure appropriate data is collected in usability studies. Being on the lookout for biases introduced by who you end up studying should be a reflex for any researcher. You cannot always avoid those biases, but you can limit them, and be aware of their existence when reporting a study. To prevent these biases, a couple of precautions can be taken. One is to make sure there are no conflicts between the various audiences of the tool, and, if more than one is considered, to ask for the team’s priorities. Teasing out who the audience is, and why they are more of a target for this feature than other potential audiences, will hopefully clarify which specific characteristics of the population are most useful to bring into the lab for more accurate data collection. Additionally, it does not hurt to ask a colleague to do a critical review of the study protocol, looking for how the recruitment criteria could bias the study results; a simple quota check like the sketch below can support that review.
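If your recruitment criteria live in a screener script or spreadsheet, even a trivial automated check can flag an unbalanced panel before anyone sits down in the lab. The attributes, quota targets, and tolerance below are all made up for illustration:

```python
from collections import Counter

# Hypothetical quota targets for a screener, as proportions of the panel.
QUOTAS = {
    "handedness": {"right": 0.5, "left": 0.5},
    "experience": {"none": 0.7, "casual": 0.3},
}
TOLERANCE = 0.15  # allow each cell to drift by up to 15 points

def check_panel(panel: list[dict]) -> list[str]:
    """Return a warning for each quota cell the recruited panel drifts from."""
    warnings = []
    for attribute, targets in QUOTAS.items():
        counts = Counter(p[attribute] for p in panel)
        for value, target in targets.items():
            actual = counts.get(value, 0) / len(panel)
            if abs(actual - target) > TOLERANCE:
                warnings.append(
                    f"{attribute}={value}: {actual:.0%} recruited vs {target:.0%} target"
                )
    return warnings

# A small, deliberately skewed panel to show the check firing.
panel = [
    {"handedness": "left", "experience": "none"},
    {"handedness": "left", "experience": "none"},
    {"handedness": "left", "experience": "casual"},
    {"handedness": "right", "experience": "none"},
]
for warning in check_panel(panel):
    print("quota drift:", warning)
```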

next

In my next blog post, I will talk about threats by data collection, or how the way you design or run your study, design your survey, and so on can lead to poor or even wrong data being collected.
