Usability checklist: Blending the designer’s and the tester’s view

Usability will not be solved by technology. On the contrary, advances in technology continuously create new usability challenges: new technical possibilities, a wider range of applications, a fast-growing group of developers.

Not so long ago, the usability of computer applications was constrained to actions performed on devices that had a keyboard. The user group was small: typically highly skilled, technical, special-interest groups. Like the old western movies, where the guy with the white hat is the hero. Simple. Then they introduced color. With computers it was the same: the device with the mouse. Nowadays computing devices offer a vast range of input methods. Not only keyboards and mice, but also pens, touch pads, touch screens and so forth, and soon, with the advance of wearable technology and garment-integrated sensors, even your clothes become (an extension of) a computer. The user group also exploded, from a select few computer fanatics to everybody. Most people have at least one computer, and with over 90% of adults carrying a mobile phone and the huge success of the smartphone and the tablet, a large number of people no doubt have several. Computing is ubiquitous.

Usability remains problematic. New developers need to be trained, new technologies need to be understood, new domain knowledge needs to be obtained.

Luckily we have guidelines for designers and checklists for those who evaluate. Yet despite all the guidelines and checklists available, usability is still perceived as subjective. Checklists and surveys often seem to be a goal in themselves rather than a means to an end: better understanding. Personally, I prefer task-performance tests and Wizard of Oz studies; they are relatively simple to apply and produce a verifiable good/better/best/worst outcome. But, OK. Checklists and surveys, like any tool, provide valuable results in the hands of those who know how to work with them.

The basis for this dispute may be the apparent disconnect between design/development and test/evaluation. Although both have the same objective (the most usable product for the end user), their reference, their gold standard, differs. Product design is driven by guidelines, preferably as few as possible and not exceeding 10: for example, the 5 timeless usability principles, Donald Norman's design principles, Shneiderman's 8 golden rules, or Nielsen's 10 usability heuristics. Test and evaluation, on the other hand, is driven by checklists, preferably with as many items as possible: for example, a checklist with 247 usability guidelines. Impressive. But useful?

If the objective is the same, why then is there no clear relation between guidelines and checklists?

If usability guidelines (top-down) were organized in a way that highlights their relation to checklist items (bottom-up), the designer and the tester/evaluator would have a shared reference of what is good, and could concentrate on making the product's usability better. Most guidelines and checklists do cover all aspects; what is missing is, IMHO, a structure that supports our top-down, goal-oriented thinking. There is some irony in the fact that checklists fall short in this respect, since top-down and goal-oriented thinking are themselves found among the usability guidelines.
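To make that concrete, here is a minimal sketch of what such a shared reference could look like. The questions are paraphrased from this post, the guideline labels refer to Nielsen's heuristics and Shneiderman's golden rules, and the structure itself is only an illustration, not an existing checklist.

```python
# Sketch: each bottom-up checklist item points back to the top-down
# guideline(s) it verifies, so designer and evaluator share one reference.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    question: str           # what the evaluator checks (bottom-up)
    guidelines: list[str]   # which design guidelines it verifies (top-down)

checklist = [
    ChecklistItem("Does every screen show the user where he or she is?",
                  ["Nielsen: visibility of system status"]),
    ChecklistItem("Can actions be undone and errors be recovered from?",
                  ["Shneiderman: permit easy reversal of actions",
                   "Nielsen: help users recognize, diagnose, and recover from errors"]),
    ChecklistItem("Are options visible rather than left to the user's memory?",
                  ["Nielsen: recognition rather than recall"]),
]

def checks_for(guideline_fragment: str) -> list[str]:
    """Top-down view: which checklist items cover a given guideline?"""
    return [item.question for item in checklist
            if any(guideline_fragment.lower() in g.lower() for g in item.guidelines)]

print(checks_for("recognition"))   # -> the third question above
```

With such a cross-reference in place, a failed check immediately names the guideline that is violated, and a guideline immediately lists the checks that verify it.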

Check out the checklist below. I will briefly describe the individual areas.

Users

Users – One of the first things to do is to identify who the user actually is. Are we talking about a novice or an expert? Are we targeting a business environment or a more casual one? How old is the user or target group? How important is gender? Differences between customers make a difference in how questions are evaluated.

Device

Device – Often carefully overlooked is the device the software is actually for. Quite trivial and difficult to overlook, right? I mean, how can you mistake software developed for a smartphone for a website? You are absolutely correct: that is never overlooked. But you would be surprised how often software is created for one device and then carelessly assumed to be an equally perfect fit for another. Designed for a tablet? Well, then it will run equally great on a smartphone, right? No.

Analyze

Analyze – Put a device in someone's hands and he or she will go through three distinct phases. No, although similar, I am not referring to the phases a civilization passes through, the 'How, Why and Where' phases (see Douglas Adams' Hitchhiker's Guide to the Galaxy): "For instance, the first phase is characterized by the question How can we eat?, the second by the question Why do we eat?, and the third by the question Where shall we have lunch?" The phases a person goes through are similar: where am I, why am I here (or what can I do?), and where to next? The first step is not about what button to click or where to swipe. No, it starts with the basics: defining where you are, what you can do here and how to do it. The user needs to know where am I now, where can I go, and how do I proceed. For example, if I want to go to the center of Amsterdam, it is quite mandatory to first know where I am now, the options I have and how to use those options. I do not want to know the price of a train ticket from Utrecht to Amsterdam; I first need to know that I actually am in Utrecht (and not, for example, in Zürich). You get the drift.

Choose

Choose – After the first analysis the user has to choose an action, ranging from a specific action to move forward to, worst case, hitting the home button and getting out of there. To motivate the user to take the next step, the application has to be clear about its purpose, transparent about what you can do, and pleasant to look at. 'Call to action' is often mentioned. To frame the action and to make results predictable, the application has to be clear about the current status. Users find it easier if actions are framed in clear goals and related tasks; it helps to split tasks into sequences of actions. Knowing users will make mistakes, it is imperative for a system to be robust and error tolerant. Last, people do not like to read, and certainly do not like to rely on their memory; make sure to make full use of their ability to recognize.

Transition – The next level is transition: executing the actions that fulfill the software's purpose, and arriving at the actions it is calling out for so loudly.

The next few images show how the checklist covers the various guidelines (Norman, Shneiderman and Nielsen). Interestingly, the topics 'clear purpose' and 'goal oriented/task driven' seem to receive less attention from the design guidelines. Also interesting: the different guidelines appear to have their own focus, from more holistic (Norman: where am I?) to very practical (Nielsen: what next?).

Donald Norman – Shneiderman – Nielsen

The 'enabling users to ACT' checklist aims to support both a top-down approach (aimed at the design process, from left to right) and a bottom-up approach (aimed at the verification process, from right to left). In addition, the user and device areas help to keep both the target user and the environment in which he or she will operate in focus. Try it. Let me know.

ACT
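For those who want something concrete to experiment with, here is a minimal sketch of the ACT checklist as a data structure. The area names come from this post, the item wordings are my paraphrases rather than an official checklist, and the two lookup functions show the top-down and bottom-up views side by side.

```python
# Illustrative sketch of the 'enabling users to ACT' checklist; the item
# wordings are paraphrased from the post, not taken from a published checklist.
ACT_CHECKLIST = {
    "Users": ["novice or expert?", "business or casual?", "age group?", "relevance of gender?"],
    "Device": ["which device is the software actually for?",
               "does the design still fit when moved to another device?"],
    "Analyze": ["where am I?", "what can I do here?", "how do I proceed?"],
    "Choose": ["clear purpose", "clear current status", "goals split into tasks and actions",
               "robust and error tolerant", "recognition rather than recall"],
    "Transition": ["can the called-for actions be executed to fulfil the purpose?"],
}

def top_down(area):
    """Design view (left to right): which checks follow from a checklist area?"""
    return ACT_CHECKLIST.get(area, [])

def bottom_up(check):
    """Verification view (right to left): which area does a given check belong to?"""
    return next((area for area, items in ACT_CHECKLIST.items() if check in items), None)

print(top_down("Analyze"))                          # ['where am I?', 'what can I do here?', 'how do I proceed?']
print(bottom_up("recognition rather than recall"))  # 'Choose'
```

The point of the exercise is not the code itself but the shape: one structure, readable from the left by the designer and from the right by the evaluator.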

Product Ratings are great…. Right?

Ratings are great. They help to highlight a product's individual performance on specific criteria compared to its peers. Unless the rating is based on user voting, the underlying mechanism is 'best in class'. Ignoring this makes the rating counterproductive as a tool for supporting customer decision making.

A nice example of using fact sheets with ratings in a less-than-perfect way is provided by a local Swiss (mobile) phone provider. All phones on display have a small fact sheet. This seems to be part of the new branding, without doubt aimed at making the shop (and company) hip and modern. There is a coffee machine and a bowl of sweets to make waiting more pleasurable. Overall, the shop offers a 'green', good-for-the-ecology look. The tables are rustic wood. Offers are drawn on the wall like menus in restaurants, illustrated with comic versions of app icons. Employees do not stand behind a counter but walk around 'free', looking for customers to serve. If needed, there are discussion points scattered around the shop. They also walk around with tablet computers tied to their left hand. Personnel with a tablet fixed to their left hand, which invariably indicates modernity and hipness, right? Regretfully, the tablets still run made-for-desktop software, which on a tablet is usable only for simple tasks. That is even more regrettable, since simple tasks are those for which you permit yourself to call the hotline and probably do not come into the shop at all. So you find yourself waiting in line for one of the employees with a desktop PC to become available, while the tablet boys stand around chatting about soccer, waiting for the simpler tasks to arrive. Yes, I'll have a coffee, please.

While waiting, you are confronted with the phones on display. Each phone is accompanied by a small (A8-sized) fact sheet, printed on 'green' paper and highlighting its main characteristics. The idea is that you can tear off the fact sheet and take it with you. Great idea, but clearly never tried out; it takes a certain amount of practice to tear off a fact sheet without tearing it apart.

<img class="aligncenter size-full wp-image-565" alt="Factsheey_Offer" src="http://www.expressiveproductdesign.com/wp-content/uploads/2013/08/Factsheey_Offer.png" width="500" height="705" />

The fact sheet describes the phone's qualities textually, followed by a quantified list of specific qualities and an environmental-impact indication. Important to note is that these are two channels. Some customers will read the text, other customers will glance over the ratings, and a few will do both. There is a very basic rule to adhere to when disseminating the same information through different channels: independent of the channel, the information should be the same. It is highly recommended to check this.
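As a rough sketch of such a check, and nothing more than that, the snippet below flags qualities that the text channel praises while the rating channel scores them as mediocre. The quality names, praise words and scores are invented for illustration, and the keyword matching is deliberately crude.

```python
# Crude consistency check between the two channels of a fact sheet:
# the free-text description and the quantified ratings (1-5 scale).
def channel_mismatches(text, ratings, praise_words=("exceptional", "excellent", "uncommonly simple")):
    """Return qualities mentioned in a praising text but rated 3 or lower."""
    text_lower = text.lower()
    praised = any(word in text_lower for word in praise_words)
    return [quality for quality, score in ratings.items()
            if praised and quality in text_lower and score <= 3]

fact_sheet_text = ("Exceptionally user friendly, with an uncommonly simple "
                   "user interface and excellent sound quality for calling.")
ratings = {"calling": 3, "camera": 2, "screen": 3, "music": 2}

print(channel_mismatches(fact_sheet_text, ratings))   # ['calling']
```

A check along these lines would have caught the example that follows.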

A very basic phone is described as (translated from German) 'Exceptionally user friendly, with an uncommonly simple user interface, excellent sound quality and utilities particularly useful for the needs of older users, such as alarm functions, reminders, memory teasers, etc.' So far so good.

<img class="aligncenter size-full wp-image-566" alt="phone_Emphoria" src="http://www.expressiveproductdesign.com/wp-content/uploads/2013/08/phone_Emphoria.png" width="500" height="320" />

The four qualities that are highlighted and quantified are ‘calling’, ‘camera’, ‘screen’ and ‘music’, each rated on a five-point scale.

Calling is listed with a 3; an average.

This completely baffles me. Here we have a phone that is clearly and without any doubt made to make phone calls. That’s it. You can make phone calls. Nothing more. Sure, it has a few add-on features, but these are without doubt simply thrown into the mix to fill up the menu structure with some content, nothing more than placeholders. And according to the fact sheet, this phone performs ‘average’ when it comes to making a phone call.

If the phone truly has an average calling performance, it has no business being offered. It should not be here. They should not have wasted the recycled paper the fact sheets are printed on.

Now I am curious: is there a phone that received a 5 for calling? Interestingly enough, phones that are clearly created for the single purpose of calling all receive an average. Phones created to do much more than just calling, i.e. smartphones, do not even have calling mentioned as a feature.

<img class="aligncenter size-full wp-image-567" alt="phone_Samson" src="http://www.expressiveproductdesign.com/wp-content/uploads/2013/08/phone_Samson.png" width="500" height="320" />

From the fact sheets I cannot but conclude that none of the phones offered is excellent if the only thing I want to do is make a call. Maybe there is no market for that? Not true. A study shows that 56 percent of all US adults now have a smartphone. With more than 90% of US adults having a mobile phone, this means that smartphone penetration among phone owners is now about 60 percent (56/90 ≈ 62%). It also means that roughly 40 percent of them have a 'normal' phone, for which making a phone call is still in the top 3 of most-used functions (after checking the time and sending SMS).

Smartphones are recommended based on surfing, camera, navigation & road maps, and the ability to text. They are not rated on their calling quality, which probably has become a nice-to-have feature.

Now I am poking a bit at this specific situation, but to me it illustrates two issues: you need to know whom you are selling to, and you need to understand the purpose of rating scales. The example suggests a lack of both.

Using the product design canvas, let's compare the conventional phone with a smartphone.

The canvas for a mobile phone may look like this:
<img class="aligncenter size-full wp-image-569" alt="ProductDesignCanvas_phone" src="http://www.expressiveproductdesign.com/wp-content/uploads/2013/08/ProductDesignCanvas_phone.gif" width="500" height="333" />

The canvas for a smart phone may look like this:
<img class="aligncenter size-full wp-image-570" alt="ProductDesignCanvas_smartphone" src="http://www.expressiveproductdesign.com/wp-content/uploads/2013/08/ProductDesignCanvas_smartphone.gif" width="500" height="333" />


So, what are the main differences between the phones that you need to highlight? This mainly depends on the clients you address and the features you expect they will find important. What is interesting about this case is that there are distinct groups within one large target segment: a more sophisticated one in search of a personal digital communicator and a more conservative one looking for a mobile phone. However, there will also be cross-overs: the conservative but progressive customer interested in trying a personal communicator, or the more sophisticated user settling for a simple mobile phone.

The information and ratings have to be presented such that they are comparable. There are situations where the distinct groups within the target segment recognize themselves as belonging to one of the groups. Take clothes, for example. Being more business-oriented, you may not frequent the jeans section, and as a male you are less likely to look for something in the women's section (the reverse is more likely to occur). But we openly accept that we are male, or that we are casual rather than formal. As a shop employee you can ask whether the customer is looking for something more formal or more sporty. In the case of the mobile phone, you cannot approach customers and ask how old or how hip they are. This means that the information presented on the fact sheets must address the full customer segment. Concretely, this means that the smartphone, too, should be rated on its ability to make a phone call.

Ratings used on fact sheets are often not absolute but relative to the portfolio presented; they indicate 'best in class'. The purpose of the rating is to support selecting from among the products you are offering. Among those products, for each of the criteria used, one of the products performs best. For example, even if all phones basically offer a lousy calling experience, one of them has the best, or if you will, the least lousy, performance. That one should receive the highest mark. More generally, for each of the criteria, at least one of the products in the portfolio must be rated as good (i.e. best in class within the portfolio). Must. Naturally, based on your customer understanding, you have selected criteria on which all products perform excellently and avoided criteria on which all perform weakly (even though for both kinds of criteria there will still be one 'best in class').
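A minimal sketch of that 'best in class' mechanism, assuming you start from absolute scores per criterion on some internal scale (the products and numbers below are invented): rescale each criterion so that the best performer in the portfolio gets the full five marks.

```python
# Sketch: turn absolute per-criterion scores into relative 'best in class'
# ratings on a 1-5 scale, so that within the portfolio at least one product
# always receives the top mark for each criterion. Data is invented.
def best_in_class(portfolio, top=5):
    criteria = {c for scores in portfolio.values() for c in scores}
    rated = {name: {} for name in portfolio}
    for criterion in criteria:
        best = max(scores.get(criterion, 0.0) for scores in portfolio.values())
        for name, scores in portfolio.items():
            if criterion in scores:
                # scale relative to the best performer in this portfolio
                rated[name][criterion] = max(1, round(top * scores[criterion] / best)) if best else 1
    return rated

portfolio = {
    "BasicPhone":  {"calling": 6.0, "camera": 1.0, "screen": 2.0, "music": 1.0},
    "SmartPhoneA": {"calling": 4.5, "camera": 8.0, "screen": 9.0, "music": 7.0},
    "SmartPhoneB": {"calling": 4.0, "camera": 9.0, "screen": 8.0, "music": 9.0},
}

for name, marks in best_in_class(portfolio).items():
    print(name, marks)
```

Rated this way, the basic phone gets the full five marks for calling, because it is the best caller on offer, even if no phone in the portfolio calls brilliantly in absolute terms.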

The environmental factor is a special case, because 'environmental' is not well defined. What is an 'environmental' factor? What is being measured? What does it mean to have '3 leaves'? Does it indicate the result of a total life-cycle assessment, or merely show the expected battery life in a fancy way? The fact sheets count 'leaves' but do not tell you how they are counted, which makes them more decoration than information. Also, the counted numbers appear counterintuitive.


<img class="aligncenter size-full wp-image-571" alt="Nokia_LCA_500" src="http://www.expressiveproductdesign.com/wp-content/uploads/2013/08/Nokia_LCA_500.png" width="499" height="133" />


<a href="http://www.nokia.com/global/about-nokia/people-and-planet/sustainable-devices/products/products/" title="Nokia – LCA on Mobile phone" target="_blank">Nokia</a> nicely illustrates the relative impact of the various steps in the total life cycle of a mobile phone. It shows that the majority of the sustainability impact is incurred during production, and that the 'use' phase contributes only about 10%. Based on this breakdown, you would expect a simpler phone (i.e. one that is easier to produce) to be greener than a smartphone, which is more complex to produce. Or, in short, you would expect a traditional handy to be greener than a smartphone. The comparison is not completely fair, as the smartphone offers functions typically performed on a computer; the environmental impact therefore cannot be attributed in full to the phone functionality, which makes the comparison invalid. But that is the point: what does 3.5 leaves mean?

Fact sheets are good. They give a quick overview of a product's properties and are instrumental in guiding the customer in selecting from among the offering. If you want your customers to take the fact sheets (and you) seriously, make sure to take the definition and creation of the fact sheets seriously.

From the above you can deduce four guidelines:
<ol>
<li>Understand the customers for whom you are creating the fact sheets. Understanding your customers helps you to address the facts that they find important.</li>
<li>If you communicate your facts through different channels, e.g. describe them in text as well as highlight them on a scale, make sure that all channels transmit the same information.</li>
<li>If 'facts' refer to an external source, make sure to be transparent about which source they refer to (e.g. the environmental factor).</li>
<li>If the facts merely highlight the differences within the portfolio on display (i.e. within the class), make sure to take the 'best in class' approach.</li>
</ol>