Peer-Testing: The Quickest Way to Insights

It’s the critical moment Lab teams work towards every session. Intense energy builds as facilitators call out the 15-minute warning. Teams work feverishly to cut out another button or sketch one more paper screen. “Okaayyyy! Time’s up!” Everyone gathers around a large table to begin our peer-testing process.

Peer-testing is an essential element of our process and our iterative ethos. Often we find that some of our ideas don’t translate clearly to the people who will be using them. We overlook a step in the process, misplace a button, or forget to include a form of feedback or a prompt. Internal peer-testing makes these gaps visible. “Eating our own dog food” lets teams get quick feedback on what is and isn’t working in the prototype during initial development. These data points help teams reflect on and refine their ideas as they create iterative versions of their apps.

When we peer-test a prototype, we focus on three criteria: the interface, the interaction, and the learning. Here are just a few of the questions we ask ourselves to evaluate them:

  • Can users navigate the interface to complete a task?
  • Is the mechanic, or main action, effective and engaging?
  • How does the interface/interaction support and align with the learning goals?
  • What types of content can live in the app?
  • How will learners enter and exit the experience?
  • Do learners understand why they are using it?

Reflecting on all of these questions prepares us for the second stage of learner field testing – but that’s a whole other blog post!

A simultaneous evaluation of the interface and the learning is one of the unique challenges of our process. We have found that an explicit focus on both is essential from the beginning. During the concepting phase, we speak mainly about the learning experience and goal. In initial paper prototyping, we test for basic interactions and interface. Towards the end of paper prototyping, we revisit the learning and ask whether it is working. Once we go digital, teachers and technologists focus on their own realms of expertise but solicit feedback from each other often. By the time we reach the second stage of learner field testing, teams have a solid prototype.

Here are some guiding principles to get you started:

Assign a human computer if you are working on a lower-fidelity prototype. This person acts as the “wizard behind the screen,” responding to the tester’s actions. If you are playing the human computer, remember to stay in character. It can be tempting to guide your tester when you see them doing something wrong; chances are, however, you will learn more by letting them continue to the end.

Think aloud. An important part of peer-testing is making invisible cognitive processes visible so the prototype’s creators can refine what isn’t working. Testers should verbally walk through each part of the process, reading any content aloud and reasoning aloud about specific choices. The more detail you share about what’s running through your mind, the better!

Reflect and discuss as a group. Teachers will see things that technologists won’t, and vice versa. Everyone should have a chance to share insights and offer constructive feedback. Be sure to record the session on video and/or have one team member serve as a scribe!
