One of the main roles of a researcher is to provide quality, reliable
data to the product team. However, this is not as easy as it may seem.
Researchers, for lack of appropriate information or skill, risk
generating data of poor quality. There are multiple strategies for
ensuring the highest quality of the data being collected, including
careful review of the research protocol and a generally skeptical
approach to one's research methodology. In this post, I'll introduce
the underlying reason why good researchers should always cast a
critical eye on the data they are fed, and apply the same scrutiny to
their own data. I will then discuss some common threats to good data
collection I have encountered in recent weeks, in particular the
threat of bad recruitment.
In my last post,
I discussed how using the right level of fidelity could ease design
reviews and help the design process, especially in multi-disciplinary
environments. That topic is somewhat related to the one I want to
discuss today: how the features your design provides can mislead
users.
CAD puts too much emphasis on the pixels in the early stages of design
Let me walk you through one occurrence of this phenomenon.
Back in 2002, I was hired to conduct an ethnographic field study of
collocated design practice in two architecture firms. The goal was to
identify key physical interactions that contributed to the sound
collaborative practice of architects. One insight we identified was the
shift in practice from pen-and-paper drawing to CAD-based drawing. I
remember a fascinating conversation with one architect who explained
that he felt CAD was getting in the way of his real job, problem
solving and design, by putting too much emphasis on the pixel right
from the beginning. Because the software allowed users to create
straight lines and to-scale drawings, he felt pressured at all times
to deliver on such promises when using the tool. Yet the early stages
of design did not require that level of precision, and he consequently
often found himself spending too much time prettifying things that
would be thoroughly iterated on and changed anyway.
This ties back to the concept of affordances,
coined by ecological psychologist James J. Gibson. Real-world
affordances are ways in which an object in the real world provides
cues as to how it can be used in our lives (e.g., I could use a hammer
to hit something, because it has a flat, hard surface, a long handle,
and a nice weight to it). (For those of you who are design savvy, you
may be familiar with Don Norman's book, which discusses, amongst other
things, the role of affordances in design: The Design of Everyday
Things.) In 1991, Bill Gaver extended this concept to the digital
world by talking about technology affordances, or how widgets can tell
users what they can do, or help them learn it. In this post, I want to
talk about a slightly different kind of affordance, a design
affordance if we must label it. It is about how, by fashioning an
object and placing it somewhere, we tempt people to use it, with or
without an understanding of what it is good for, or how to use it. The
mere fact that it is there says two things: by being there, it must be
useful for something; and by not using it, you make a conscious
decision not to.
"Because the ruler allows you to get very precise, you feel like you have to."
To go back to the hammer analogy, imagine being in a carpenter's
shop. You see the hammer, and you already have a physical
understanding of how you can use it, plus a cultural understanding of
other, more complex things it may be able to do (pull nails out, serve
as a lever, etc.). Nothing new here. Now imagine you find a nice big
ruler that provides precision down to the 64th of an inch (yay!). If
you are a visitor passing by, you will be impressed by the minutiae of
the carpenter's work. If you are a carpenter's apprentice on your
first day, chances are you will feel you need to refine your technique
to reach 64th-of-an-inch precision. The fact that the tool allows it
somewhat implies that you should. I would bet that many senior
carpenters have seen overeager beginners start to measure everything
with the precision the tool provides, and have made it their quest to
teach their pupils that, in many cases, it is about getting the right
level of detail, regardless of what the tool allows you to achieve.
This sometimes means using an already-cut piece of lumber to measure
where to cut subsequent ones, even if it is precise only to the 8th of
an inch. In other words, because the ruler allows you to get very
precise, you feel like you have to.
How does this translate to the software world? In a paper
last year, I started discussing how social media websites like Twitter
and Facebook provide mechanisms that set people's expectations of how
they can communicate within a relationship. On Twitter, you will never
expect a long letter about the latest events in the other person's
life, because of the character limit. On Facebook, you are given a
text box for entering comments which affords typing many characters,
so this is what people do. I sometimes feel pressured to use this box
to its full extent. What if I only post a smiley where I could have
written a more elaborate message? I feel pressured: the choice of not
using this feature to its full capability can be perceived as a lack
of engagement ("I could not be bothered giving you more than that, even though I could").
I strongly believe this is why, not so long ago, Facebook
introduced the "like" button. Now you can express yourself easily,
without having to engage in a conversation. This is accepted, since it
is a mechanism supported by the interface.
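To make the character-limit point concrete, here is a minimal sketch of how an interface encodes such a constraint in code. The names below (MAX_POST_LENGTH, validate_post) are mine, invented for illustration, not any real API; the point is that the declared limit itself tells users what kind of message is expected:

```python
# Illustrative sketch: the limit an interface declares is itself an affordance.
# MAX_POST_LENGTH and validate_post are hypothetical names, not a real API.

MAX_POST_LENGTH = 140  # Twitter's original character limit

def validate_post(text: str, limit: int = MAX_POST_LENGTH) -> bool:
    """Accept a post only if it fits the limit the interface advertises."""
    return len(text) <= limit

# The limit shapes what users even attempt to write:
validate_post("Quick status update")   # fits the affordance
validate_post("A long letter " * 20)   # ruled out by the interface
```

A short status slips through while a letter-length message is rejected outright, so users never bother composing one: the constraint does the communicating before any human does.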
I am sure that by now you can extend this concept to many interfaces.
Please do not hesitate to share examples with me, as I work my way
toward understanding and better defining this phenomenon.