One of the main roles of a researcher is to provide quality, reliable
data to the product team. However, this is not as easy as it may seem:
for lack of appropriate information or skill, researchers risk
generating data of poor quality. There are multiple strategies for
ensuring the quality of the data being collected, including careful
review of the research protocol and a generally skeptical approach to
one's own methodology. In this post, I'll explain why a good researcher
should always cast a critical eye on the data they are fed, and apply
the same scrutiny to their own data. I will then discuss some common
threats to good data collection I have encountered in recent weeks, in
particular the threat of bad recruitment.
UX methods are abundant, and they don't always work the way they are
outlined. In reality, methods are guidelines for collecting data on
users at a point in the development cycle; they are rarely applied in
exactly the same way, at the same time. This leaves a lot of leeway in
when and how to apply them.
Recently I faced a challenge.
I had to support 9 sprint teams covering 6 scenarios and many, many
user stories. I am one researcher. My big tenet for this release was to
ensure the teams were able to test their designs early and often.
Testing early and often lets designs expand and converge toward a
product that is closer to what will be useful and usable for the
developers who work every day with Visual Studio. Making their jobs
easier and more fun helps make Visual Studio a better tool.
Back to my challenge. Supporting multiple sprint teams is difficult
because they work incredibly fast, shipping pieces of working software
every three weeks. I had to find a way to jump in early and get user
feedback to the teams before they built software that I could only
validate and tweak. I wanted them to have design direction from users
early. I also wanted maximum participation from the teams, and I needed
a way to conduct these studies quickly, without much overhead in
analysis.
I knew of several methods that could accomplish this. For testing early
and often, any usability study of early concepts or low-fidelity
prototypes will do, so prototype or concept testing would work. For
maximum coverage of the teams, I chose to test concepts and prototypes
at the scenario level rather than the user-story level, to get at the
end-to-end experiences that many user stories make up. For
participation, I drew on the philosophy of participatory design, where
stakeholders are empowered to give input into the design. For speed, I
borrowed from the RITE (Rapid Iterative Testing and Evaluation) method,
where decisions about what to change are made right after 1-3
participants have used an early build of a product. Because I was
working with early design concepts and low-fidelity prototypes, I
applied the core ideas of RITE: uncover issues and find their
solutions; have key decision makers present; and make resources
available so changes can be made and tested early.
I've run 3 of these sessions, and each was a little different
depending on the fidelity of the prototypes, the team members involved,
what the teams wanted to know, and where they were in their sprint. I
call the new method Fast Iteration Studies (FIS); at this point it
needs a better name, so if you can think of one I am open to ideas,
especially catchy, marketable ones. A FIS has the following procedure:
With this process you get maximum participation from the team and two iterations on the design, all within a sprint.
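The RITE-style loop underlying these sessions can be sketched in code. This is a minimal illustration, not anything the team actually used; all names (`Prototype`, `run_participant`, `decide_and_fix`, the example issues) are hypothetical, made up for this sketch. Run a small batch of 1-3 participants, have the decision makers pick which observed issues to fix, revise the prototype, and repeat:

```python
# Hypothetical sketch of a RITE-style iteration loop: observe issues with a
# few participants, decide on fixes with decision makers present, revise,
# and repeat. All names here are illustrative, not from any real tool.
from dataclasses import dataclass, field

@dataclass
class Prototype:
    version: int = 1
    open_issues: set = field(default_factory=set)

def run_participant(prototype, observed_issues):
    """One participant session: record the usability issues observed."""
    prototype.open_issues |= set(observed_issues)

def decide_and_fix(prototype, fixable):
    """Decision makers pick which open issues to fix now; apply the fixes."""
    fixed = prototype.open_issues & set(fixable)
    prototype.open_issues -= fixed
    if fixed:
        prototype.version += 1  # the design moves to a new iteration
    return fixed

# Example: two small batches of participants within one sprint.
proto = Prototype()
run_participant(proto, {"confusing label", "hidden button"})
run_participant(proto, {"hidden button", "slow flow"})
decide_and_fix(proto, {"confusing label", "hidden button"})  # iteration 2
run_participant(proto, {"slow flow"})
decide_and_fix(proto, {"slow flow"})  # iteration 3
```

The point of the sketch is the cadence: changes are decided right after each small batch rather than after the whole study, which is what makes two design iterations fit inside a single sprint.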
I can't go into much more detail at this point. Once the product is
released next year, I'll post another blog that reflects on how early
decisions made in these studies helped shape the final product.