How Our Product Design Team Conducts Usability Tests Every 2 Weeks
User research is a core component of a successful product design process. It gives you an understanding of user behavior and the problems users face, and it lets you test your solutions or validate a hypothesis.
The goal of user testing is to identify any usability problems, collect qualitative and quantitative data, and determine the participant’s satisfaction with the product. Usability testing lets the design and development teams identify problems before they are coded. The earlier issues are identified and fixed, the less expensive the fixes will be — Usability.gov
I think we can all agree that usability testing is a highly valuable practice. However, many of us tend not to do it. Why is that? Some of the reasons I hear:
- Time: The time it takes to recruit, schedule, and run the tests.
- Money: Paying incentives to participants or paying third parties to manage and run tests for you.
- Fear: Fear of being proved wrong or fear of introducing an influx of changes late in the development cycle.
No doubt it takes up a lot of time. What deters me most from usability testing is the time and effort it takes to recruit and schedule participants.
I joined Mesosphere in October and worked with the team over the first couple of months to implement a system that would help automate and encourage (or force) us to run usability tests every 2 weeks. This is how we set it up.
tl;dr
- Heavily inspired by Steve Krug’s Rocket Surgery Made Easy
- Set up an opt-in email list and form, and start recruiting today
- Use a tool like Intercom, Mailchimp, or Pardot to help manage and email your contacts
- Use a tool like PowWow to help schedule sessions in your calendar
- Offer incentives in the form of gift vouchers ($50+)
- Be flexible and run both in-person and remote tests
- Keep your reports to the point and call out the top flaws so they’ll be fixed
Recruiting Customers and Users
Mesosphere is B2B. We have an enterprise product (DC/OS) with several large customers, and we have a few open source products with several thousand users, so those customers and users are a good place to start recruiting.
First, we set up a Google Form enabling people to “opt in” to UX Research. The idea here is that if they opt in, we have permission to email them about every upcoming user testing session. We can also reach out to them for other user research needs like surveys, prototype feedback, interviews, and field visits.
The form captured people who were interested, but we had to let them know about it somehow. We set up a CNAME for uxresearch.mesosphere.com that pointed to the form. Then we set up a series of inbound links from the following sources:
- GitHub readme
- Social media (Twitter, Facebook, LinkedIn)
- Our website
- Meetups
- Public Slack channels
- Zendesk/Support follow up emails
We worked with other teams at Mesosphere (Sales, Support, Marketing, PMs) to refer contacts and fill this pipeline.
The most effective trigger we set up was with Intercom. Intercom is a service for communicating with users of your website or product. For every user who signs up and uses the product at least three times, we automatically send an email inviting them to opt in.
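If you want to wire up something similar, here's a minimal sketch of the product side of that trigger in Python. It assumes you record a hypothetical “product-session” event through Intercom's Events API; the “used it at least three times” rule and the email itself would be configured as an auto-message inside Intercom, not in code.

```python
# Sketch: report each product session to Intercom so an auto-message can
# fire once a user hits the session threshold. The event name, token, and
# user ID are placeholders, not Mesosphere's actual setup.
import time
import requests

INTERCOM_TOKEN = "YOUR_ACCESS_TOKEN"  # placeholder

def track_product_session(user_id: str) -> None:
    """Record one product session against a user via Intercom's Events API."""
    resp = requests.post(
        "https://api.intercom.io/events",
        headers={
            "Authorization": f"Bearer {INTERCOM_TOKEN}",
            "Accept": "application/json",
        },
        json={
            "event_name": "product-session",  # hypothetical event name
            "created_at": int(time.time()),   # Unix timestamp, required
            "user_id": user_id,
        },
    )
    resp.raise_for_status()

# Call this on each sign-in; an Intercom auto-message with the condition
# "product-session occurred at least 3 times" then sends the invite.
track_product_session("user-42")
```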
If you have an established user base, this works great. What if you don’t? Some recommendations:
- Social media is still your friend
- Ask weekly newsletters in your niche to feature your form
- Ask friends
- Craigslist/TaskRabbit
- Relevant Slack groups
- Facebook groups
- UserTesting.com
Evolving our Recruiting Form
The Google Form and spreadsheet worked well for the first couple of hundred entries, but after a while it got hard to maintain: we needed a better way to manage emails and lists.
Our Sales and Marketing teams are big users of Salesforce, so we worked with them to integrate with it. We now use Pardot to keep track of all our “UX Research Leads” via segment lists. This is a really powerful tool, as it enables us to tap into all the information Salesforce stores about our customers.
Switching from Google Forms to Pardot:
- Created a new “UX Research” list in Pardot
- Imported all prospects from the spreadsheet
- Created a new capture form that submits to Pardot
- Changed the CNAME to point to the Pardot form
Now we can use the data already available to us in Salesforce and create new segments, e.g. filter where city = “San Francisco”. Then we can use Pardot to send emails to these lists to let them know about upcoming sessions.
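As a rough illustration of what that gives you, here's how a city-based segment could be pulled straight out of Salesforce with the simple_salesforce Python library. The credentials are placeholders, and HasOptedIn__c stands in for whatever custom opt-in field your capture form sets; this is a sketch, not our exact setup.

```python
# Sketch: query Salesforce for a city-based UX research segment.
# MailingCity is a standard Contact field; HasOptedIn__c is a
# hypothetical custom field your opt-in form might populate.
from simple_salesforce import Salesforce

sf = Salesforce(
    username="you@example.com",  # placeholder credentials
    password="password",
    security_token="token",
)

results = sf.query(
    "SELECT Name, Email FROM Contact "
    "WHERE MailingCity = 'San Francisco' AND HasOptedIn__c = true"
)

for record in results["records"]:
    print(record["Name"], record["Email"])
```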
Using Salesforce makes life a lot easier for us as it is a service our team was already using. Here are some alternatives:
- Keep using Google Forms and Google Sheets
- Use Intercom to manage your users, segments, and emails
- Use other ESPs like Mailchimp to email users
The full recruiting flow now runs from the inbound links, through the capture form, into Pardot segment lists we can email.
Scheduling the Tests
The design team has agreed to set aside every other Thursday to run user tests. We have 5 slots, each lasting one hour: 10am, 11:30am, 1pm, 2:30pm, 4pm.
These times are fixed and give us a 30-minute buffer between each test to reset and prepare for the next. We believe in Nielsen’s law of diminishing returns: five tests should be enough to highlight the main issues.
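That belief comes from Nielsen and Landauer’s model of problem discovery, in which each extra tester uncovers a shrinking share of new issues. A quick sketch of the math, using their published average per-user discovery rate of about 31%:

```python
# Nielsen & Landauer: the proportion of usability problems found by n
# testers is 1 - (1 - L)^n, where L ~= 0.31 is the average chance that
# a single tester exposes any given problem.
L = 0.31

for n in range(1, 9):
    found = 1 - (1 - L) ** n
    print(f"{n} testers -> {found:.0%} of problems found")

# Five testers surface roughly 84% of problems; each additional tester
# adds less, which is why five one-hour slots per round is a sane cap.
```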
We use PowWow to help us schedule the tests. PowWow is great because it lets users choose a time that suits them, cuts out the back-and-forth email to figure out a time, adds the slots to your calendar, and follows up with email reminders.
We’ve found that there is still a considerable drop-off rate, and it’s good practice to follow up with a personal confirmation two days before the session.
Our goal is typically to have five users on site. If we can’t do that, we then target remote customers. Given that there are always no-shows or last-minute cancellations (at least one in five), we keep an internal backup list of interested teammates. So if someone cancels at the last minute, we’ll put the word out on Slack so that the time doesn’t completely go to waste.
We then document the schedule in our wiki and maintain a #user-testing Slack channel for full transparency.
Using a Co-ordinator
Recruiting and scheduling takes up a lot of time. After our design team did this a few times, we asked for help from one of our Executive Assistants. Thankfully they were able to allocate some of their time each week to running this for us, and even optimized the process further.
I can’t recommend this enough as it is less for us to think about and we get to spend more time on the designs and script. They take care of:
- New recruiting channels and working with other teams to fill the pipeline
- Scheduling each of the available slots every 2 weeks
- Confirming users are still going to show up
- Welcoming them on the day
- Following up after with gift cards
- Making sure our user testing lab is reserved (a fancy name for a meeting room)
Preparing for the Tests
Each user testing day has a “Test Lead”. It is their responsibility to run the tests, but they can (and are encouraged to) have others help, either by having another designer run a session or by having a developer or PM sit in.
The test lead usually spends some time with each of the designers or PMs in the build-up to the test to outline what they want tested and the major questions they need help validating:
- Test work that has been built (dev & production)
- Test work in progress (prototypes)
- Test hypotheses and value (interviews)
The product designers help put together a script for each of the features they want tested, and the test lead makes sure they understand it so they can run it. The script should not be treated as word-for-word tasks; think of it as more of a guide. It is definitely ok to jump around and skip questions. We never get through everything in a test.
Note: Preparing a prototype for user testing often requires a bit of additional work to help with the story and tasks you are setting. So it helps to think about the story and flow when you begin the project.
Running the Tests
On the day, we greet the user, have them sign a waiver, and bring them into the testing lab.
In the lab, we’ve already set up a MacBook with tabs open for everything we plan to test. We use ScreenFlow to record the sessions and hook the MacBook up to a big screen so we can watch. We have no more than two of us in the room: the test lead and either another designer, PM, or developer. Personally, I love when developers join, as they get to see problems firsthand and are then sold on fixing them.
We start off by making the participant feel comfortable. Some things we’ll say:
- We’re not testing you, we’re testing our designs, so nothing you do is wrong
- Think out loud as much as possible so we can understand your thought process
- What websites do you like to visit daily? What news did you read this morning?
Remember, most people haven’t done this before and don’t know what to expect. Focus on behaviors, not opinions, and answer questions with questions.
Finish by asking them if they have any questions, and if they could wave a magic wand and have any feature today, what would it be? This helps us build a list of number one feature requests, and makes the user feel good that we’re listening to their feedback, without going down a rabbit hole of feature request after feature request.
Post-test, remember to take a photo of the user, say thanks, and send a gift card (we typically offer $50-$100 Amazon gift cards, depending on whether the session was remote or on-site). This is also a good time to ask if any of their teammates would be interested in coming next time, or to schedule a field visit at their office.
A Note on Remote Tests
Remote tests are also valuable. We prefer in-person tests, as there’s less reliance on technology and it’s easier to reach the comfortable tipping point of the interview.
Some things we’ll do differently for remote tests:
- Send a waiver to sign via RightSignature
- Conduct the test via Google Hangouts
Analysis and Report
Take your raw notes, create a report, try to keep it brief, and highlight the main issues. Our reports outline the top issues, and we assign a priority to each:
- Critical bug that needs to be fixed now (e.g. form doesn’t submit)
- High priority (e.g. no one understands how to do something)
- Mid priority (e.g. took more effort than expected to do something)
- Low priority (e.g. you notice a hover style missing for a button)
Then we share this report in our wiki. It includes:
- Who was tested (photos, names, titles, companies)
- What was tested (links to prototypes/designs)
- Topic, description of issue, recommendation on how to fix, priority
- Other notes
- Links to videos (stored in Google Drive)
We don’t create a highlight reel, but I’ve seen this work well before. Realistically, no one has time to watch all the tests. So if there’s something in particular you want to make sure is communicated and fixed, it helps to go back over the videos and pull out small clips of the user struggling or the bug you noticed.
Share the report with those who are interested. Product team. Engineering team. Email. Slack. Record everything in the wiki so that discussion happens through wiki comments between designers, developers, and PMs. We also add tags to our reports so that they show up in current and future project requirement docs.
Follow Through
The most important thing is to take action on what you discover. There is no point doing all this if you don’t actually make the changes to make the product better.
If you can show a clip of someone struggling, it’s an easier sell to developers, stakeholders, or whoever needs convincing. Also, as I mentioned before, having developers and PMs sit in on a test helps them build empathy.
Conclusion
Testing certainly isn’t straightforward. It takes time and effort. But every session we run brings so much value. As your company and design team grow, you will likely have dedicated User Research roles to help handle this, but it is always beneficial for all Product Designers to be involved in the process.
—
Lee Munroe is Product Design Manager at Mesosphere. You can follow @leemunroe on Twitter. Mesosphere is hiring UX Researchers and Designers in San Francisco.