Design Thinking, what is core?

the-flower-of-design-thinking

Core to design thinking is prototyping and testing. If you browse around the web you find many aspects linked to design thinking, one more fancy than the other. But central to them all: prototyping and testing.

The reason is clear; designers think differently than the rest of us (let’s say the engineer). What differentiates a designer from an engineer is that an engineer creates a solution to a carefully analysed and understood problem, whereas a designer creates many solutions as a means to understand the problem. For a designer, a prototype is a hypothesis. Well, that is how I see it.

The image shown aims to illustrate my view on Design Thinking.

It tries to bring across 3 main points:

  1. Core to design thinking is prototyping and testing, serving the design process, of which evaluation (learning) is a core element. It says nothing about time to market (the speed to get it out there), nor about the quality of the prototype. This is a skill in itself. Talk to a designer to get a feel for this.
  2. Contextual to design thinking are the many tools and methods that you can apply to get a better understanding of the problem space and to help you identify the solution space. These tools are not core, and often distract from what design thinking aims to achieve, which is to make ideas tangible to find out why they do not work (which is a different way of saying “do I understand the problem”). Some even go as far as to say that design thinking by itself may not be enough anymore. For example, “In the era of Living Services, Fjord have created their own design system – Design Rule of 3 – which consists of design thinking, design doing and design culture.” Yes, I agree. The larger the (scope of the) project, the more difficult it is to understand the problem space, so you need to scale up your exploration, and manage expectations concerning time to market.
  3. The basis of any design project, at least in my experience, is a business opportunity and/or a business model. Only in rare situations is a design project performed completely outside any business context. For that reason, I place the business model canvas as a leaf at the root of the flower.

OK, I admit. I just wanted to create a nice illustration and immediately came up with a flower, so I tried to fit Design Thinking to the idea that I had and make it work. I think it does, more or less. What do you think?

 

Street-value: the (perceived) value of products

Pricing – or the street-value – is often still cost based, Apple being one of the rare examples of a company that understands that the cost of production is not related to the amount of money people are willing to spend. Cost is what you invest to create value. Sales price is the value created in the eye of your customers. (Production) cost and (sales) price are not related.

A few years back I had the privilege to work for a major fashion company at one of the product development sites, driving innovation and coordinating a small innovation team as an internal service provider. The director at the time recognized that employees working in operational capacities had little time to look beyond the immediate product and production needs. He initiated the innovation team, with the task of bridging operations with the academies and institutions where new ideas are born and developed. Remember, this was the early 2000s, way ahead of the innovation curve.

Naturally, I could not help but do a few small experiments. Just to check if I was wrong. As it turned out, I was not. Well, not in this case.

knitwear_icon

Observing some of the product review meetings, what surprised me was the process of determining the price tag: a multiple of the production cost. Pricing a product appeared to be based mainly on the raw material used and the production costs; cost of goods, transport etc. Although these are important to consider, since you do not want to sell at a loss, I wondered what they had to do with the sales price.

When Apple introduced a new gadget pro, I was often the first to spend whatever they were asking to have it. Production cost never entered the process; I did not care. The new gadget pro was beautiful, innovative and appealing, and I just had to have it.

With some of the fashion products I had the same (with some shame I have to admit that I discovered a more than average interest in shoes). Knitwear was one of the products. Some sweaters just feel great, nice, comfortable. And some of these sweaters were priced very low, which I thought was totally wrong.

The value of a product is in the eye of the customer. The customer judges the quality of a product, and – probably related – the value of the product. But they do this based on what they see and experience. This is very direct in the case of sweaters and other garments; customers judge value based on what they see and feel. Apple understands this in a good way; it is luxury – no dependency – and they manage to price their products at a level people are willing to pay. The pharmaceutical industry has also understood this (see e.g. here), but maybe not in a good way, as there may be life-depending needs underlying the purchase intent. Nevertheless, they seem to target a pricing level they can get away with.

So, the only thing that seems to count is the perceived value of products.

To evaluate this, I performed a small experiment. I would give a person a sweater and ask them to answer 3 simple questions (on a scale from one to five): do you like it, do you think it has quality, and do you think it is expensive? For this experiment, I used twelve sweaters from the then current collection, randomly selected to cover the full range of price offerings. Participants were members of the company, but not working in the knitwear area, just to make sure they did not know and could not estimate the production costs. 7 persons participated.

perceivedvalue_01

A simple regression showed the relation between the 3 parameters (like, quality, cost). See images below. It suggests that people do not differentiate well between quality and expense, meaning that, in the eye of the consumer, (perceived) quality and (expected) cost are the same. Most variation was found with like, suggesting subjects do know what they like, but that each person likes something different – as expected. More interestingly, like did not have a high correlation with quality. So, people seem able to recognize quality, even if they do not like the product personally.
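For those who want to try the same kind of analysis, here is a minimal sketch in Python of the pairwise correlations between the three judgements. The numbers are placeholder data for illustration, not the original measurements, and statistics.correlation requires Python 3.10 or newer.

```python
from statistics import correlation, stdev

# Hypothetical mean ratings (1-5) per sweater; placeholder data, not the original experiment.
like    = [3.1, 2.4, 4.0, 3.6, 2.0, 4.4, 3.0, 2.7, 3.9, 2.2, 4.1, 3.3]
quality = [3.4, 2.9, 3.8, 3.5, 2.6, 4.2, 3.2, 3.0, 3.7, 2.8, 4.0, 3.4]
expense = [3.5, 3.0, 3.9, 3.4, 2.5, 4.3, 3.1, 2.9, 3.8, 2.7, 4.1, 3.5]

# Pairwise Pearson correlations between the three judgements.
print("quality vs expense:", round(correlation(quality, expense), 2))  # expected: high
print("like    vs quality:", round(correlation(like, quality), 2))     # expected: lower
print("like    vs expense:", round(correlation(like, expense), 2))

# Spread of each judgement across the sweaters.
for name, values in [("like", like), ("quality", quality), ("expense", expense)]:
    print(name, "stdev:", round(stdev(values), 2))
```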

perceivedvalue_02
expected price
The conclusion from this is that perceived quality is the main driver for the expected cost, and that both quality and expected expense appear independent of personal preference.

What does that mean for pricing? Yes, you should check cost of goods, but this basically just helps you to decide whether you should consider having the item in your catalogue/portfolio. It does not help to decide the price. To decide, just borrow the eyes of your customers and evaluate the (perceived) quality.

 

7. Standardise, If everything else fails?

Don’t you hate it when old problems find new ways to sneak up on you? Donald Norman solved it decades ago, with his 7 design guidelines, and by now you would expect it to be absorbed in the fibre of interaction design. I doubt it is. The invention of new technology seems to invariably re-invent old problems. But maybe this problem was never really solved.

The problem is ‘standardise’. It first hit me when discussing part of an interface of an iPad App, specifically how to close a pop-up window that only partly covers the screen. In an attempt to avoid a ‘Windows-like’ interface, a cross to close the window was discarded and instead the user simply had to tap anywhere (except on the pop-up window). The close button is the absence of a button, which is not yet a standard; therefore nobody recognises it, and you see every user searching for a way to close the window.

Futjisi camera_OnOff
Image Source

 

I recognised the standardisation problem when, soon after discussing the iPad App interface, I had the pleasure of playing with the Fujifilm X camera. The discussion had been about whether or not a pop-up window should have an x to close, i.e. adhere to a standard. In this context, the Fujifilm camera is relevant because it does not have an on-off switch; instead, powering on the camera is elegantly integrated in the act of usage. Elegantly beautiful. I love it. The Fuji X camera is switched on by simply opening/extending the lens, i.e. setting the zoom and focus. Turning the lens out switches the camera on; returning the lens in switches the camera off. An example of an innovative approach that did not match up to commodity: the approach did not survive, and new models of the Fujifilm X camera again have a clear and recognisable on-off button, while zooming the lens has no impact on the status of the camera.

The_Design_of_Everyday_Things_(cover_1988)

I feel this is the result of how we indoctrinate designers by treating standardisation as something ugly, something bad, to be considered only as a ‘last resort’. We can trace this back to Donald Norman, who in his famous book, abbreviated as POET, offers a list to guide interaction design. Number seven reads “when everything else fails, standardise” [POET, page 189]. This has always bothered me, and I have often referred to it in a mocking manner.

No, it does not merit a 5th postulate kind of debate, but I do like to challenge the ‘when everything else fails’ part of it.

Standing in the wasteland of interaction design, heuristics and formal or informal standards are not ‘a last resort’; they are a fundamental starting point. Referring to them as ‘a last resort’ means they are rendered invisible, ignored right from the start. Already at the start of the design process, standardisation is a fundamental constraint. John Flach formulated it for me like this: “the key is to make the constraints/opportunities/affordances/consequences visible to the user. This requires understanding both the functions of the device and the expectations of the user.” True. Absolutely. But this is a generic definition of what constitutes a good design process. I agree. But now what? A guideline must be like a road map; it must tell you how to get there.

Furthermore, standardisation is often not something you as a designer can realise, although that is what is suggested by formulating the guideline as “when everything else fails, standardise”, with ‘standardise’ as an act, as something you as a designer should be able to establish. That is seldom true. Standardisation happens in only a few situations, namely 1) by first movers, i.e. setting the standard, 2) by commodity; a grown heuristic that elevates itself to a commonly accepted standard, or 3) by those in power, who can drive standardisation in a top-down manner.

  • An example of a standard set by first movers is the side of the tracks most trains still ride. The first trains were developed in Great Britain, and in Britain it is common to drive on the left side of the street, which tracks back to narrow London streets and most men being right handed. When trains were first introduced on the mainland, British engineers were asked for the implementation, which naturally resulted in the first trains running on the left track. As a consequence, even today, many trains drive on the left side of the tracks. The power of the knowledgeable set the standard.
  • An example of a standard grown from commodity is the 12-volt plug in a car. Arbitrary at best [see for example ‘Little dongly things’ from Douglas Adams – The Salmon of Doubt, page 142]. At the time, when more and more mobile electrical devices such as coolers and large radio players were looking for power, the cigarette lighter revealed itself as an opportune solution. Nowadays, you even find cigarette lighter sockets in the rear of a car to offer power to appliances if needed, unless of course the smokers are not only banned to the outdoors, but also to the boots of the cars (…I seriously doubt that). Microsoft-based PCs also developed from the ground up. Due to the relatively low cost and the modularity of components, PCs running Microsoft software became available in abundance; the power of the masses. Examples like Apple and Acorn, where hardware and software are much closer if not tightly linked, had a much more difficult time surviving. Eventually Apple did, but only after Bill Gates committed to continuing MS Office for Mac development.
  • Of standards set by those in power, there are a few examples. Of those who are (or were) in a position to define and set a standard, Steve Jobs is one of the best known, not only with the introduction of the iPhone, but also with Pixar, revolutionising the animation industry. With every innovation, it is a balance between the added value compared to the status quo versus the firmness with which the common way of working (the commodity standard) is established. If there is no commodity standard, we are in the situation of the first mover; whoever grasps the largest market share in the shortest possible time frame sets the standard. The more firmly a commodity standard has been established, the more radical the innovation must be to become the new standard. The iPhone’s touch interface wiped away 20 years of telephone industry, almost overnight. USB ports are finding their way into automobiles, but although many devices are powered over USB, not all are. The cigarette lighter may survive a bit longer as a standard.

If you want to be successful as a product designer, leaving the decision to adhere to a standard to ‘only if everything else fails’ is not good advice. Although following standards does not guarantee a good product, standards in themselves are not bad: they form a common language between users, and they offer recognition points for your device to gain acceptance. Standards (implicit or explicit) must be considered a primary constraint, right from the start. The problem with implicit or explicit standards is that they have become part of our behaviour. And behaviour is difficult to change.

I propose to change the 7th guideline into something like: “To avoid that everything fails, improve standards, or at least stick to them.” Do you have a better suggestion?

 

References

Norman, D. A. (2002). The design of everyday things. New York: Basic Books. (The re-issue, with a new preface, of The psychology of everyday things.)

A day in our digital life

In our digital life, we are constantly connected, but not all the time in the same way. To reach clients we need to be omnipresent in the client’s digital life, whenever or however they connect.

Channel is a concept often used when talking about reaching clients. A channel is a means to transmit data and information. Channels typically are connected to a platform of some sort. The newspaper is a channel. The television is a channel. The tablet is a channel. But are they really? How can the tablet be considered a channel if the newspaper is also a channel, since the newspaper is made available on paper, on the desktop computer and on the tablet? Or is the paper version of the newspaper a different channel compared to the digital version? As readers, this is not how we expect them to work. We expect that if we start reading an article in the printed version, we can continue reading the same article on a smartphone, should the situation call for it.

The concept of a channel, as an information channel towards a (potential) client, may shift further away from the actual tool or technology used to establish the data and information connection, and become a specific aggregation of tools and technologies through which the target segment can be characterised.

At the moment, a channel’s definition is linked to the technology used. We refer to a channel as a single technological infrastructure, such as for example a tablet computer or a smart phone. Wikipedia explains: historically, communicating data from one location to another requires some form of pathway or medium. These pathways, called communication channels, use two types of media: cable (twisted-pair wire, cable, and fiber-optic cable) and broadcast (microwave, satellite, radio, and infrared). Cable or wireline media use physical wires or cables to transmit data and information. Twisted-pair wire and coaxial cables are made of copper, and fiber-optic cable is made of glass. Nowadays, in information theory, a channel refers to a theoretical channel model with certain error characteristics. In this more general view, a storage device is also a kind of channel, which can be sent to (written) and received from (read); a smart phone, for example.

The way we use technology, this traditional definition of a channel as a technological platform does not suffice. In former times, people were either watching the television, or reading a book, or listening to the radio. Those were the simple times. To communicate to your target segment you had to select a (technology) platform, and maybe a time frame (e.g. run the advertisement in the morning or in the early afternoon? On Sunday maybe?). Reaching your target segment meant pushing the information or message through the selected channel, i.e. platform. Placing an ad on the television. Running an interview on the radio, etc.

In current times, we have the i-family of products that we use interchangeably, and worse, sometimes all at the same time. At the moment I am writing this on a laptop while following Twitter on my smartphone. In half an hour I will be on the train home, and will probably first browse my mail on my tablet. Reaching your target segment in current times means sending the same message via the different platforms that your target customers are using. A channel used to be a single platform, but it tends to grow into a specific constellation of platforms that are used by the target segment in combination to perform basically the same activities.

OurDigitalDay

This is illustrated with the image above. Nowadays, we are connected throughout the day with the same source of information, but depending on the time of day and on the kind of activity, the platform and tool used to access this information differs, even if our activities are the same or guided by the same interest. Take Jack for example. Jack lives in the city and commutes to work by tram. When he wakes up he first glances over his smartphone to check the latest messages and news snippets. During breakfast he reads the news on a tablet. Commuting to work he again browses his smartphone, while at work it is his (desktop) PC that rules. Commuting back home he plays a game on his smartphone, and in the evening he watches a movie while chatting with friends and liking Facebook posts on his tablet. Then, just before going to sleep, he glances one last time over his walls on his smartphone.

The point is that in order to reach Jack, in order to get his attention, it is not sufficient to address a single platform. To reach Jack, we need to send small notes or condensed information to his smartphone, maybe full texts with multimedia to his tablet, and again small notes to his desktop (since this goes in parallel to ‘working’). The channel to Jack in fact consists of a smartphone, a tablet and a desktop PC. Now, maybe you are not Jack, maybe you are more like Peter. Peter has a smartphone and spends his commuting time reading the newspaper he subscribed to, on his tablet. The channel to reach Peter is different, not only in the platforms it consists of, but also in the manner in which Peter interacts with them, and probably also the amount of time spent on them.
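As a sketch of this shift, the snippet below models a channel not as a single platform but as an aggregation of platforms, each with the message format suited to how the persona uses it. The Persona class and the platform/format pairs for Jack and Peter are illustrative assumptions, not a prescribed model.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A target-segment persona and the platforms that together form the channel."""
    name: str
    # platform -> message format best suited to how the persona uses that platform
    platforms: dict[str, str] = field(default_factory=dict)

    def channel(self) -> list[tuple[str, str]]:
        """The channel is the whole aggregation, not any single platform."""
        return list(self.platforms.items())

jack = Persona("Jack", {
    "smartphone": "short notes / notifications",
    "tablet": "full text with multimedia",
    "desktop": "short notes (consumed in parallel to work)",
})

peter = Persona("Peter", {
    "smartphone": "short notes / notifications",
    "tablet": "long-form articles (commute reading)",
})

for persona in (jack, peter):
    print(persona.name, "->", persona.channel())
```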

‘Channels’, as connections to transmit information and data, have grown into aggregations of platforms that function in full symbiosis, based on the behaviour of the target segment. In reaching this target segment we need to identify the platforms the channel is constructed from.

Redefining what channels are forces us to re-think how we reach our target segment, and invites us to move away from pushing data over individual platforms towards orchestrating information over various platforms, with the purpose of becoming more effective in informing and reaching the target segment throughout our digital and ’always connected’ day.

Usability checklist: Blending the designer’s and the tester’s view

Usability will not be resolved with technology. On the contrary, with the advance of technology, new usability challenges are continuously being created: new technical possibilities, a wider range of applications, a fast-growing group of developers.

In the distant past, the usability of computer applications was constrained to actions performed on devices that had a keyboard. The user group was small: typically technically highly skilled, special interest groups. Like the ancient western movies, where the guy with the white hat is the hero. Simple. Then they introduced color. With computers it was the same; the device with the mouse. Nowadays computer devices have a vast number of possible input devices. Not only keyboards and mice, but also pens, touch pads and touch screens and so forth, and soon, with the advance of wearable technology and garment-integrated sensors, even your clothes become (an extension of) a computer. The user group also exploded, from a selected few computer fanatics to everybody. Most people have at least one computer, but with over 90% of phone users having a smartphone and the huge success of the tablet computer, a large number of people no doubt have several. Computing is ubiquitous.

Usability remains problematic. New developers need to be trained, new technologies need to be understood, new domain knowledge needs to be obtained.

Luckily we have guidelines for the designers and checklists for those who evaluate. Despite all the guidelines and checklists available, usability is still perceived as subjective. Checklists and surveys seem to be a goal in themselves, not a means to an end leading to better understanding. Typically, I prefer task performance tests and Wizard of Oz studies; relatively simple to apply and with a verifiable good/better/best/worst outcome. But, OK. Checklists and surveys, like any tool, provide valuable results in the hands of those who know how to work with them.

The basis for this dispute may be the apparent disconnection between design/development and test/evaluation. Although both have the same objective (the best usable product for the end user), their reference, their gold standard, differs. Product design is driven by guidelines, preferably as few as possible, not exceeding 10. For example 5 timeless usability principles, Donald Norman’s design principles (or see here), Shneiderman’s 8 golden rules (or check here) or Nielsen’s 10 usability heuristics (or check here). On the other hand, test and evaluation is driven by checklists, preferably with as many items as possible. For example here, a checklist with 247 usability guidelines. Impressive. But useful?

If the objective is the same, why then is there no clear relation between guidelines and checklist?

If usability guidelines (top-down) were organized in a way that highlights their relation with checklists (bottom-up), the designer and the tester/evaluator would have a shared reference of what is good, and could concentrate on making the product’s usability better. Most guidelines and checklists do cover all aspects, but what is missing is – IMHO – a structure that supports our top-down, goal-oriented thinking. There is some irony in the fact that checklists lack in this respect, since you will find both top-down and goal-oriented thinking listed as usability guidelines.
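As a minimal sketch of what such a shared structure could look like, the snippet below tags bottom-up checklist items with the top-down guideline they operationalise, so designer and tester can roll findings up to the same reference. The guideline labels and checklist items are illustrative assumptions, not an official mapping.

```python
# Illustrative mapping of checklist items (tester's view) to guidelines (designer's view).
guidelines = {
    "visibility_of_status": "Nielsen #1 / Norman: feedback",
    "recognition_over_recall": "Nielsen #6 / Shneiderman #8",
    "error_tolerance": "Nielsen #5 / Shneiderman #5",
}

checklist = [
    {"item": "Current screen shows where the user is in the flow", "guideline": "visibility_of_status"},
    {"item": "Available actions are visible without scrolling",    "guideline": "recognition_over_recall"},
    {"item": "Destructive actions ask for confirmation",           "guideline": "error_tolerance"},
]

# A tester's findings can be reported at the guideline level the designer works with.
for entry in checklist:
    print(f"{entry['item']}  ->  {guidelines[entry['guideline']]}")
```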

Check out the check-list below. I will briefly describe the individual areas.

users

Users – One of the first things to realize is to identify who the user actually is. Are we talking about a novice or an expert? Are we targeting a business environment or a more casual one? How old is the user or target group? How important is gender? Differences between customers make a difference in how questions are evaluated.

device

Device – Often carefully overlooked is the device the software actually is for. Quite trivial and difficult to overlook, right? I mean, how can you mistake software developed for a smartphone with a website? You are absolutely correct, that is never overlooked. But you would be surprised how often meetings occur where software is created for one device and then carelessly assumed to be an equally perfect fit for another device. Designed for a tablet computer? Well, then it will run equally great on a smartphone, right? No.

analyze

Analyze – Put a device in someone’s hands and he or she will go through three distinct phases. No, although similar, I am not referring to the phases a civilization passes through; the ‘How, Why and Where phases’ (see Douglas Adams’s Hitchhiker’s Guide to the Galaxy): “For instance, the first phase is characterized by the question How can we eat? the second by the question Why do we eat? and the third by the question Where shall we have lunch?” The phases a person will go through are similar: where am I, why am I here (or what can I do?), and where to next? The first step is not about what button to click or where to swipe. No, it starts with the basics: defining where you are, what you can do here and how to do that. The user needs to know: where am I now, where can I go, and how do I proceed? For example, if I want to go to the center of Amsterdam, it is quite mandatory to first know where I am now, the options I have and how to use these options. I do not want to know the price of the train ticket from Utrecht to Amsterdam. I need to know first that I actually am in Utrecht (and not, for example, in Zürich). You get the drift.

Choose

Choose – After a first analysis the user has to choose an action, ranging from a specific action to move forward or – worst case – to hit the home button and get out of there. To motivate the user to take the next step, the application has to be clear about its purpose, transparent in what you can do, and pleasant to look at. Call to action is often mentioned. To frame the action and to make results predictable, the application has to be clear about the current status. Users find it easier if actions are framed in clear goals and related tasks. It helps to split tasks into sequences of actions. Knowing users will make mistakes, it is imperative for a system to be robust and error tolerant. Last, people do not like to read, and certainly do not like to rely on their memory; make sure to make full use of their ability to recognize.

Transition – The next level is transition; executing the actions to fulfil the software’s purpose, and to arrive at the actions it is calling out for so loudly.

The next few images show how the checklist covers the various guidelines (Norman, Shneiderman and Nielsen). Interestingly, the topics ‘clear purpose’ and ‘goal oriented/task driven’ seem to receive less focus from the design guidelines. Also interestingly, the different guidelines appear to have their own focus, from more holistic (Norman: where am I?) to very practical (Nielsen: what next?).

Donald Norman, Shneiderman, Nielsen

The ‘enabling users to ACT’ checklist aims to support both a top-down approach (aiming at the design process, from left to right) and a bottom-up approach (aiming at the verification process, from right to left). In addition, the user and device areas help to keep in focus both the target user and the environment in which he or she will operate. Try it. Let me know.

ACT

Fashion Shoes – Developing a questionnaire to validate fitting

A questionnaire is a ‘tool’, a structured approach, for collecting data about a particular issue of interest.

The questionnaire was invented by Sir Francis Galton. Questionnaires have advantages over some other types of surveys in that they are cheap, do not require as much effort from the questioner as verbal or telephone surveys, and often have standardized answers that make it simple to compile useful data. However, such standardized answers may frustrate users. Questionnaires are also limited by the fact that respondents must be able to read the questions and respond to them, which means you have to take special care about construction and wording.

Personally, I take a sceptical view of questionnaires, mainly because they deal with what people say, and not with what people do, which, for all kinds of reasons, may be different.

If you do find yourself resorting to questionnaires, remember the most fundamental rule for developing questionnaires; don’t.

Do not develop a new questionnaire unless you absolutely have to. Developing one takes expertise, time and effort to make sure that it is reliable and valid. You are better off using an existing one that has been used before than developing a new one from scratch. If you do end up developing a questionnaire, make sure to read the many hints and guidelines available on the web on what to consider when developing and conducting questionnaires. Also, make sure to test it. A good questionnaire will include some questions that permit a validation of the questionnaire itself.

Two aspects to test a questionnaire for are reliability and validity. Both concepts are illustrated in Fig. 33. ‘Validity’ has to do with how well you are able to measure what you set out to measure. For example, if your questionnaire evaluates the public opinion of JUMBO, do you measure the opinion about the toy maker, or will the results say something about the newly born elephant in the local Zoo? ‘Reliability’ has to do with the consistency of the results. For example, when you sample the same group multiple times using the same questionnaire, under conditions where the opinion has had no reason to change, to what extent will you arrive at the same results?
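A minimal sketch of a test-retest reliability check: the same group answers the same questionnaire twice under unchanged conditions, and the two runs are correlated. The scores are placeholder data, and a single correlation is of course only one of several ways to assess reliability.

```python
from statistics import correlation

# Hypothetical test-retest check: the same respondents answer the same questionnaire twice,
# under conditions where their opinions should not have changed (placeholder data).
first_run  = [4, 3, 5, 2, 4, 3, 4]   # per-respondent scores, run 1
second_run = [4, 3, 4, 2, 5, 3, 4]   # per-respondent scores, run 2

r = correlation(first_run, second_run)
print(f"test-retest correlation: {r:.2f}")
# A high correlation suggests the questionnaire yields consistent (reliable) results;
# validity still has to be argued separately: are we measuring the right thing at all?
```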

Most importantly, testing means checking that the questionnaire delivers actionable results. Can you do something with the data collected? The purpose of a questionnaire is to collect data on the basis of which you can draw conclusions and define actions. This is what you have to test. Therefore, testing means using the questionnaire to collect information, trying to draw conclusions, and checking if the questionnaire gives you the answers you are looking for. The next example is based on my experience when Rucky Zambrano asked me to validate the impact of the insole on the performance of a golf shoe. At the time Rucky was developing a new golf shoe and, based on feedback on fitting problems, was considering different insoles. He had a few lined up, had tried them, but had no empirical basis on which to base his choice. Did it matter? What difference does an insole make? Would you even notice the differences while swinging a golf club?

We decided to check whether a user (golfer) could perceive the difference between the insoles considered. What we needed were test subjects (able to play golf) and metrics to compare the fitting of shoes. Following the previously stated guidelines, and knowing that regular fitting tests were being performed within the shoe department, we first checked whether the company already had a checklist or questionnaire to support fitting tests. Inspection revealed that, yes, there was a checklist, but no, it did not appear to be appropriate. It looked like the checklist was used to document the result of fitting tests, but the collected data were not further analysed. As a first step, we revised the checklist. This involved updating the list based on input from shoe experts on what parameters to survey to judge the quality of fitting. Next, we used the updated checklist in a fitting test: a number of persons trying and evaluating a large set of prototype shoes. The collected data was analysed for consistency and reliability, as well as for detecting differences in fitting and comfort. Analysing the data collected – thus, testing the checklist – revealed a number of issues and weaknesses, and it took two more fitting tests before we were confident the checklist was reliable and accurate, at least reliable and accurate enough.

Armed with the improved form, a set of prototype golf shoes and a set of differently dimensioned insoles, we headed to the green, performed a small experiment, and collected the information needed to select the most appropriate insole. We learned that during swinging, a difference of a few millimetres in insole height was noticeable. We also learned that we were performing the wrong test; swinging and hitting the ball may not be the most important part of playing golf, as it covers only a minor proportion of the time spent on the course. Most time is spent walking from one hole to the next. Therefore, primarily, the shoe has to be comfortable while walking. Based on the data collected, both via the checklist as well as by talking to the subjects, the most comfortable insole was selected and later used in the final shoe.
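A minimal sketch of the kind of summary such fitting data lends itself to: comfort ratings grouped per insole variant, compared by mean and spread. The insole names, heights and scores below are placeholder values for illustration, not the original test data.

```python
from statistics import mean, stdev

# Hypothetical comfort ratings (1-5) collected while walking, grouped by insole variant.
ratings = {
    "insole_A (2 mm)": [3, 4, 3, 4, 3],
    "insole_B (4 mm)": [4, 5, 4, 4, 5],
    "insole_C (6 mm)": [2, 3, 3, 2, 3],
}

for insole, scores in ratings.items():
    print(f"{insole}: mean={mean(scores):.1f}, stdev={stdev(scores):.2f}")
# The insole with the highest mean comfort (and acceptable spread) would be the candidate,
# backed up by what the subjects say in person.
```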

You can find a large number of guidelines on the internet helping you to develop a good questionnaire. They hint at the structure of the questionnaire, at the wording, the type of questions and even the layout. Without doubt, all important. They also hint at the importance of testing the questionnaire. I came across one illustrated guideline which I immediately fell in love with. No, not because of the nice first image, but because of the very last remark: ‘After you test, do a trial number crunch to ensure you can properly collect the data you need; if not, make more adjustments to the questionnaire.‘ Reading it a second time, my initial love turned into a short-lasting crush, because it was added too casually, like an afterthought. ‘Oh, yes, and you should also….’ Understanding whether the questionnaire allows you to collect the data on which you can base your conclusions is the single most important aspect of designing questionnaires. Testing and number crunching must be at the basis of every iteration. Like we did testing the golf shoes.

This experience with the golf shoe showed the value of properly testing the checklist/questionnaire before using it, and also the importance of talking to the end users in person, and not only via the checklist or questionnaire. At least, until you know the limitations of the checklist you are using.

Safe-cracker proof tea

safe-cracker tea

I encountered this example when visiting friends who offered me a cup of tea. As always, I was delighted. Even more so because of the teapot they used (see Fig.). No doubt with the purpose of preventing the top from falling off when pouring tea, the top had two additional features. First, as a common solution to this problem, the top had a small lip on one part of the inside. In addition, as a second feature, the rim was a bit oval to make sure that, unless it was in the absolutely correct orientation, it would not fall out. Although this prevents the top from accidentally falling out, it also makes it virtually impossible to take the top off intentionally, especially as the outside was perfectly circular. Like a professional safe-cracker, you have to slowly turn the top and feel whether it comes off. The teapot does not present its action possibilities in a way that assists the user in the simple task of removing the top.

This example is almost as bad as the round Apple mouse, delivered with the first iMac series. Not quite as bad, but almost.

See workshop handout chapter 2.

Heuristic evaluation

Eat your Pizza

Heuristic evaluation is a (usability) inspection method originally developed for computer software. The idea is to have experts discuss the software and try to identify problems in the user interface based on their knowledge and experience, i.e. “heuristics” (see Nielsen & Molich, 1990). Research indicated that a group of 5 experts reveals about 75% of the issues (Nielsen & Landauer, 1993).

The advantage of Heuristic evaluation is that you can apply it to the initial stages of the development process, i.e. having the experts discussing early prototypes or product sketches. All it takes are a few experienced users, a room, some good pizza and drinks.
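The 75% figure follows from the model in Nielsen & Landauer (1993), where the share of problems found by i evaluators is 1 − (1 − λ)^i, with λ the probability that a single evaluator spots a given problem. A small sketch (the λ values below are assumptions; Nielsen & Landauer report roughly 0.3 on average, and 75% with five evaluators corresponds to a somewhat lower λ):

```python
def proportion_found(evaluators: int, lam: float) -> float:
    """Nielsen & Landauer (1993): share of problems found by a panel of evaluators."""
    return 1 - (1 - lam) ** evaluators

# lam is the chance a single evaluator spots a given problem (assumed values here).
for lam in (0.24, 0.31):
    print(f"lambda={lam}: " + ", ".join(
        f"{i} experts -> {proportion_found(i, lam):.0%}" for i in (1, 3, 5, 10)))
```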

Product Ratings are great…. Right?

Ratings are great. They help to highlight a product’s individual performance on specific criteria compared to its peers. Unless the rating is based on user voting, the underlying mechanism is ‘best in class’. Ignoring this makes the rating counterproductive as a tool for supporting customer decision making.

A nice example of using fact sheets with ratings in a less than perfect way is demonstrated by a local Swiss (mobile) phone provider. All phones on display have a small fact sheet. This seems part of the new branding, without doubt aimed at making the shop (and company) hip and modern. There is a coffee machine and there is a bowl with sweets to make waiting more pleasurable. Overall, the shop offers a ‘green’ – good for the ecology – look. Tables are rustic wood. Offers are drawn on the wall like menus in restaurants, illustrated with comic versions of app icons. Employees do not stand behind a counter but walk around ‘free’, looking for customers to serve. If needed, there are discussion points scattered around the shop. They also walk around with tablet computers tied to their left hand. Personnel with a tablet fixed to their left hand invariably indicates modernity and hipness, right? Regretfully, the tablets still run made-for-desktop software, which is barely usable on a tablet except for simple tasks. This is even more regrettable since simple tasks are exactly those for which you permit yourself to call the hotline and probably do not come into the shop at all. So you find yourself waiting in line for one of the employees with a desktop PC to become available, while the tablet boys stand around chatting about soccer, waiting for the simpler tasks to arrive. Yes, I’ll have a coffee, please.

While waiting, you are confronted with the phones on display. The phones are accompanied by small – A8 size – fact sheets, printed on ‘green’ paper, highlighting the main characteristics. The idea is that you can tear off the fact sheet and take it with you. A great idea, but clearly not tried out; it takes a certain practice before you can tear off a fact sheet without ripping it.

Factsheey_Offer

The fact sheet describes the phone’s qualities textually, followed by a quantified list of specific qualities and an environmental impact indication. Important to note is that these are two channels. Some customers will read the text, other customers will glance over the ratings, and a few will do both. There is a very basic rule to adhere to when disseminating the same information through different channels: independent of the channel, the information should be the same. It is highly recommended to check.

A very basic phone is described as (translated from German) ‘Exceptionally user friendly, with an uncommonly simple user interface, excellent sound quality and utilities particularly useful to fulfill the needs of older users, such as alarm functions, reminders, memory teasers etc.’ So far, so good.

phone_Emphoria

The four qualities that are highlighted and quantified are ‘calling’, ‘camera’, ‘screen’ and ‘music’, each rated on a five-point scale.

Calling is listed with a 3; an average.

This completely baffles me. Here we have a phone that is clearly and without any doubt made to make phone calls. That’s it. You can make phone calls. Nothing more. Sure, it has a few add-on features, but these are without doubt simply thrown into the mix to fill up the menu structure with some content, nothing more than placeholders. And according to the fact sheet, this phone performs ‘average’ when it comes to making a phone call.

If the phone truly has an average calling performance, it has no business being offered. It should not be here. They should not have wasted the recycled paper the fact sheets are printed on.

Now I am curious. Is there a phone that received a 5 for calling? Interestingly enough, phones that clearly are created for the single purpose of calling all receive an average. Phones that have been created to do much more than just calling, i.e. smart phones, do not even have calling mentioned as a feature.

phone_Samson

From the fact sheets I cannot but conclude that none of the phones offered are excellent if the only thing I want to do is make a call. Maybe there is no market for this? Not true. A study shows that 56 percent of all US adults now have smartphones. With more than 90% of US adults having a mobile phone, this means that smartphone penetration among phone owners is now about 60 percent. It also means that about 40 percent have a ‘normal’ phone, for which making a phone call is still in the top 3 of most used functions (after checking the time and sending SMS).
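The arithmetic behind those percentages, as a quick sanity check (the 56% and 90% inputs are the figures cited above; the outputs are approximate):

```python
smartphone_share_of_adults = 0.56   # cited study: 56% of US adults own a smartphone
mobile_share_of_adults     = 0.90   # more than 90% of US adults own some mobile phone

smartphone_share_of_phone_owners = smartphone_share_of_adults / mobile_share_of_adults
print(f"{smartphone_share_of_phone_owners:.0%} of phone owners have a smartphone")  # ~62%
print(f"{1 - smartphone_share_of_phone_owners:.0%} still have a 'normal' phone")    # ~38%
```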

Smart phones are recommended based on surfing, camera, navigation & roadmaps, and the ability to text. They are not rated by their quality in calling, which apparently has become a nice-to-have feature.

Now I am poking a bit at this specific situation, but to me it illustrates two issues: you need to know whom you are selling to, and you need to understand the purpose of rating scales. The example suggests a lack of both.

Using the product design canvas, let’s compare the conventional phone with a smart phone.

The canvas for a mobile phone may look like this:
ProductDesignCanvas_phone

The canvas for a smart phone may look like this:
ProductDesignCanvas_smartphone


So, what are the main differences between the phones that you need to highlight? This mainly depends on the clients you address and the features you expect they will find important. What is interesting about this case is that there are distinct groups within one large target segment: a more sophisticated one in search of a personal digital communicator, and a more conservative one looking for a mobile phone. However, there will also be cross-overs: the more conservative but progressive user interested in trying a personal communicator, or a more sophisticated user settling for a simple mobile phone.

The information and ratings have to be presented such that they are comparable. There are situations where the distinct groups within the target segment recognize themselves as belonging to one of the groups. For example with clothes. Being more business-oriented you may not frequent the jeans section, or as a male you are less likely to look for something in the female section (the reverse is more likely to occur). But we openly accept that we are male, or that we are casual rather than formal. As a shop employee you can ask whether the customer is looking for something more formal or more sporty. In the case of the mobile phone, you cannot approach the customer asking how old he or she is, or how hip. This means that the information presented on the fact sheets must address the full customer segment. Concretely, this means that the smart phone should also be rated on its ability to make a phone call.

Ratings used on the fact sheets often are not absolute, but relative to the portfolio presented. They indicate ‘best in class’. The purpose of the rating is to support selecting from amongst the products you are offering. Among the products you are offering, for each of the criteria used, one of the products performs best. For example, even if all phones basically offer a lousy calling experience, one of the phones has the best performance, or if you will, the least lousy performance. This one should receive the highest mark. More generally, for each of the criteria, at least one of the products of the portfolio must be rated as good (i.e. best in class within the portfolio). Must. Naturally, based on your customer understanding, you have selected criteria on which all products perform excellently and avoided including criteria on which all perform weakly (even though for both kinds of criteria there will be one ‘best in class’).
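A minimal sketch of how such ‘best in class’ ratings could be derived: raw scores per criterion are rescaled relative to the best product in the portfolio, so that for every criterion at least one product gets the top mark. Product names and raw scores are made up for illustration.

```python
# 'Best in class' ratings: scores are relative to the portfolio on display,
# so for each criterion the best product in the shop gets the top mark (5).
portfolio = {
    "BasicPhone": {"calling": 0.70, "camera": 0.10, "screen": 0.20, "music": 0.10},
    "MidPhone":   {"calling": 0.65, "camera": 0.40, "screen": 0.50, "music": 0.45},
    "SmartPhone": {"calling": 0.60, "camera": 0.90, "screen": 0.95, "music": 0.90},
}

criteria = {c for scores in portfolio.values() for c in scores}

ratings = {}
for criterion in criteria:
    best = max(scores[criterion] for scores in portfolio.values())
    for product, scores in portfolio.items():
        # Scale to a 1-5 rating relative to the best product in the portfolio.
        ratings.setdefault(product, {})[criterion] = round(1 + 4 * scores[criterion] / best)

for product, r in ratings.items():
    print(product, r)
```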

The environmental factor is a special case, because ‘environmental’ is not well defined. What is an ‘environmental’ factor? What is being measured? What does it mean to have ‘3 leaves’? Does it indicate the result of a total life-cycle assessment, or merely show in a fancy way the expected battery life? The fact sheet counts ‘leaves’ but does not tell you how they are counted, which makes it more decoration than information. Also, the counted numbers appear counterintuitive.


Nokia_LCA_500


Nokia (http://www.nokia.com/global/about-nokia/people-and-planet/sustainable-devices/products/products/) nicely illustrates the relative impact of the various steps in the total life-cycle of a mobile phone. It shows that the majority of the sustainability impact is contributed during production, and that the ‘use’ part contributes only 10%. Based on this breakdown, the expectation is that a simpler phone (i.e. one that is easier to produce) is greener compared to a smart phone, which is more complex to produce. Or, in short, a traditional mobile phone is expected to be greener than a smart phone. This comparison is not completely fair, as the smart phone offers functions typically performed on a computer. Therefore, the environmental impact cannot be attributed in full to the phone functionality, which makes the comparison invalid. But that is the point. What does 3.5 leaves mean?

Fact sheets are good. They give a quick overview of the product’s properties and are instrumental in guiding the customer in selecting from the offering. If you want your customers to take the fact sheets (and you) seriously, make sure to take the definition and creation of the fact sheets seriously.

From the above you can deduce four guidelines:

  1. Understand the customers for whom you are creating the fact sheets. Understanding your customers helps you to address the facts that they find important.
  2. If you communicate your facts through different channels, e.g. describe in text as well as highlight through a scale, make sure that all channels transmit the same information.
  3. If ‘facts’ refer to an external source, make sure to be transparent about what source they refer to (e.g. the environmental factor).
  4. If the facts merely highlight the differences within the portfolio on display (i.e. within the class), make sure to take the ‘best in class’ approach.

Making toast

ge_115t17_small

An example of ‘elegant interaction’ is GE’s Hotpoint toaster (see Fig., source http://www.toaster.org). The problem this mechanism solved was toasting a slice of bread on both sides. Modern devices have a simpler but less elegant solution: you simply slide the slice of bread down between two heating elements, and a spring- and bi-metal-based mechanism releases it again when the slice has turned into toast. The ‘flopper’ mechanism was an earlier and, I think, more elegant solution, at least from an interaction point of view. The ‘flopper’ toaster had only one heating element, placed at the center of the toaster. In order to toast the slice of bread on both sides, you would need to turn it.