What it is
Usability is how easily someone can use something. That ‘something’ can be a digital product or service, like a website or an online application form.
It can also be a tangible thing like a book or a tool.
Usability is about:
- Effectiveness. Can users complete tasks, achieve goals with the product, and do what they want to do?
- Efficiency. How much effort do users require to do this? This is often measured in time or cognitive load (confusion).
- Satisfaction. What do users think about the product’s ease of use? How satisfied are they with the experience?
These elements are affected by:
- The users. Who’s using the product? Are they highly trained, experienced users, or novices?
- Their goals. What are users trying to achieve with the product? Does it support what they want to do with it?
- The context of use. Where and how is the product being used?
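The three measures above can be quantified during testing. As a purely illustrative sketch (the data structure, field names and the 1–5 satisfaction scale below are assumptions, not part of this guide), session results might be summarised like this:

```python
# Hypothetical sketch: summarising effectiveness, efficiency and
# satisfaction from session data. Field names and values are illustrative.

sessions = [
    # each dict is one participant's attempt at a task
    {"completed": True,  "seconds": 95,  "satisfaction": 4},  # 1-5 scale
    {"completed": True,  "seconds": 140, "satisfaction": 3},
    {"completed": False, "seconds": 300, "satisfaction": 2},
]

# Effectiveness: share of participants who completed the task
effectiveness = sum(s["completed"] for s in sessions) / len(sessions)

# Efficiency: mean time on task for successful attempts
successful = [s["seconds"] for s in sessions if s["completed"]]
efficiency = sum(successful) / len(successful)

# Satisfaction: mean self-reported rating
satisfaction = sum(s["satisfaction"] for s in sessions) / len(sessions)

print(f"Effectiveness: {effectiveness:.0%}")               # 67%
print(f"Efficiency: {efficiency:.0f}s mean time on task")  # 118s
print(f"Satisfaction: {satisfaction:.1f}/5")               # 3.0/5
```

Even a rough summary like this makes it easier to compare rounds of testing, though the numbers only make sense alongside the qualitative notes.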
Why it's useful
Understanding and testing usability, and actively working to make services easier to use, makes a difference to our users and to the service’s success.
For the user
- Good usability makes life easier for the user, so they can get things done.
- Good usability can increase confidence and trust in a service. This is important as we know it’s sometimes scary for users to interact with government.
For the service
- Good usability improves the service overall.
- It improves the likelihood that users will complete the service or task without abandoning it.
- It drives uptake through positive reviews and word of mouth.
For the agency
- Usability testing helps us make informed decisions for service improvements.
- Lowers operating and (re)development costs. If a product or service isn’t helpful or intuitive, users won’t adopt it and the intended channel shift won’t happen.
- Good usability builds loyalty, trust and a good reputation.
- Usability helps us to put customers at the centre of a service – it builds empathy and understanding of customer needs.
When to do it
- Current website or application test. This gives an understanding of the current state and identifies where to focus design improvements.
- Paper-based prototype test. This catches usability issues early in the design process and tests your assumptions.
- Clickable mock-up test. This gives you a clearer understanding of how the product will function for the user.
- Finalised functional product test. This catches last-minute usability issues to refine before deployment.
Text description of the 4 types of user test for each stage of a project
Continuum from start of a project to the deployment of a project, indicating different types of user tests for each stage.
- A computer icon indicates that testing the current website or application is often most useful at the start of a project.
- A paper icon indicates that paper-based prototyping is next on the continuum and is useful early in the project.
- A mouse icon indicates that clickable mock-up tests are next on the continuum, useful in the later stages of the project.
- A computer icon indicates that finalised functional product tests are best prior to deployment to catch any final issues.
How to do it
A usability test has many elements. For a large in-depth test, allow approximately six to seven weeks for planning, execution, analysis and reporting back. For smaller iterative tests you can achieve all of these things in an afternoon. It will depend on the objectives and scope.
- Overview and scope. Outline your scope and explain why the usability test is needed. Briefly describe the product or service being reviewed.
- Planning and setup. Set up the testing environment and recruit participants. Get all the paperwork and tools ready to go. Test your test on colleagues to prepare.
- Do the tests. Make sure there’s a rest period between each test, and debrief after each session to avoid inconsistencies.
- Analysis, conclusions and recommendations. As a group (facilitator and note-taker), analyse the results and draw conclusions about the product or service and how to improve it.
Each member of the team is important to running the tests smoothly and getting useful results. Below is a description of each role.
The facilitator leads the tests. It’s up to you to oversee all aspects of the test: make sure everyone understands their role, all of the tools and equipment are ready, and participants are coming. The facilitator puts the user at ease and guides them through the test. It’s important to be patient and quiet. Remember to smile and use your participant’s name.
Make sure you:
- Set the right context. Make the participant comfortable and confident enough to engage in the session by welcoming them into the space and giving them a friendly introduction to what is going to happen during the test.
- Prompt the participant to ‘think out loud’ - verbalise their thought process, questions and concerns as they’re interacting with the service. Some participants will naturally find this easy. Others will find it hard to verbalise anything at all. Don’t force them to comment, as it might not be in their nature and can lead to tension and distorted behaviour. From time to time it helps to ask: ‘What are you thinking now?’ which will likely lead to a few great insights.
- Manage the flow of the session without influencing the participant’s views. Participants may take a long time, or try to speed through tasks. Allow the participants to drive the pace of the test as much as possible. If it’s really important and time is running short, the facilitator can prompt: “I’m just being mindful of the time, we may have to move on”. Remember to note where the participant got stuck, as this will likely point to a recommendation. In most cases it’s not important to finish all tasks. Being gentle and conversational works best when ‘steering’ participants and the session.
- Wrap the session up with a closing interview and ensure the participant leaves with a positive feeling.
Typically, there are three key lines for a facilitator in a usability test:
- “What are you thinking now?”
- “What would you do next?”
- “How come?” (avoid “why?”, which can sound aggressive).
The less the facilitator says during the test, the better. Be engaged, but be careful not to lead the participant to think they are doing ‘the right thing’ by nodding or making encouraging sounds.
The note-taker’s role is to record behaviour, quotes and actions as fully as possible, in a neutral fashion. The quality of your notes determines how easy it is to analyse and report on your findings, and how unbiased your analysis can be. Don’t try to write a full transcript, as you’ll also be recording the session in other ways. The facilitator and note-taker should agree on what’s required before the test.
For example, is it more important to record:
- direct quotes
- action details, such as ‘first clicked on x, then clicked on y’
- general reactions and expressions?
- Let the facilitator introduce you, but do say “hi”.
- Don’t speak during the test.
- If you have questions, note these down. At the end of the test, pass the facilitator your questions, or talk about them after the test.
- Take detailed notes on the significant actions, emotions and reactions.
- Sit in the participant’s peripheral view or out of view.
- Make sure that you’re not in front of the participant.
- If it’s a small meeting room and you have the technology to do so, it’s ideal to have the note-taker in a separate observation room.
Observers are optional. The facilitator needs to decide if having observers is valuable or not. It can be powerful to have business owners, developers and designers observe usability testing – especially if they’re not typically in touch with their customer base. It can help them build empathy for the customer, see the value in the work and be more willing to make changes when necessary. However, it might not be appropriate to have observers if they’ll be in the same room as the participant.
If the observers need to be in the room with the participant:
- Invite only one observer into the room with a participant at a time
- Get the participant to confirm they are comfortable with the observer’s presence
- Be clear with them what the rules of engagement are for the test.
Observer rule: It’s not appropriate for observers to speak or ask questions during the test. Give them a note-taking sheet and ask them to note down their thoughts and ideas to pass to the facilitator later on.
If you have a separate observation room, you’ll need an observation facilitator. They are just as important as the session facilitator.
The role of the observation facilitator is to support the observers to understand the full context and develop empathy for the participants.
To set the background for observers before the test:
- Introduce usability testing. What is usability, why you’re doing a usability test, and what you hope to uncover.
- Identify the scope and objectives.
- Share the value of observing unbiased behaviour.
- Verbally run through rules again with the observers on their arrival, before the participant has arrived.
Usability tests are traditionally ‘lab-based’, which means dedicated rooms are set up for running test sessions. You’ll need at least one room for the participant, facilitator and note-taker. The ‘lab’ should be a comfortable and private space, as participants will behave differently if they feel exposed. Make sure there’s enough space to set up the equipment and comfortable chairs. Most meeting rooms can be turned into a usability ‘lab’ for a few days.
Venue rule: Make sure observers and note-takers understand their role clearly before the participant arrives to do the test. If you have the appropriate space and technology, another room can be set up for observers and more note-takers. This room should have a projector or monitor so you can stream the session for the group of observers. Make sure the rooms are far enough apart that no sound from the observation room can be heard in the participant’s room.
For some tests it’s okay not to set up a lab. Sometimes it can be better, with permission, to enter into your participant’s environment. This could be their home, work, a cafe, or on the street. This depends on the type of test and the length of the test. This will enable you to note and understand other constraints the participant might be facing. You’ll also get a more useful picture of the environment in which the participants and other users are likely to be using your services.
If you run a web-based usability test, on an existing website or a clickable prototype, it’s best to make the testing environment as similar as possible to the user’s likely situation at home or at work. Check available statistics on the most commonly used devices, screen sizes and browsers.
It pays to record your session, either with:
- a dictaphone, a camera on a tripod and a note-taker, or
- recording software, a webcam on your computer and a note-taker.
What you’re trying to capture with your recording is:
- Screen: what your participant clicks on or types into (do).
- Speech: what your participant says (think).
- Actions: how your participant behaves, or emotions they experience (feel).
There are many different recording software packages, so check what’s available with your technology department.
Check your agency guidelines for keeping raw recordings and data. Recordings should be kept secure and are usually deleted once the project is finalised. Make sure you have your participant’s consent to record.
Whatever the setup, a good usability test means:
- Each test has clear and achievable outcomes.
- Participants must be representative of real users.
- Tests are conducted with one participant, one facilitator and one note-taker.
- Note takers and observers are silent.
- Participants do real tasks.
- Participants are recorded or detailed notes are taken.
- The data is collected and analysed to produce actionable recommendations.
- Recommendations are implemented, or the design is revised.
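To make the analysis step concrete, one lightweight approach (purely illustrative; the tags below are assumptions, and real analysis is done as a group) is to tag each observed problem and count how often it recurs across sessions, so the most frequent issues rise to the top of the recommendations:

```python
# Hypothetical sketch: turning tagged note-taker observations into a
# ranked issue list. The tags are illustrative examples only.
from collections import Counter

# one tag per observed problem, as agreed by facilitator and note-taker
observations = [
    "unclear error message", "button hard to find", "unclear error message",
    "jargon in form labels", "button hard to find", "unclear error message",
]

# count how often each issue occurred, most frequent first
issue_counts = Counter(observations).most_common()
for issue, count in issue_counts:
    print(f"{issue}: {count}")
```

Frequency alone doesn’t decide priority, but a ranked list like this is a useful starting point for the group discussion of conclusions and recommendations.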
- Nielsen Norman Group
- Don Norman [Book]: The Design of Everyday Things, 2002 (revised and expanded 2013)
- Donald A. Norman and Stephen W. Draper (eds) [Book]: User Centered System Design: New Perspectives on Human-Computer Interaction, 1986
- Steve Krug [Book]: Don’t Make Me Think, 2006
- Steve Krug [Book]: Rocket Surgery Made Easy: The Do-It-Yourself Guide to Finding and Fixing Usability Problems, 2009