Simply visiting the users to observe them work is an extremely important usability method with applications both for task analysis and for information about the true field usability of installed systems. I am surprised how hard it is to do this in the “real world”. Although management is usually quite happy to let developers develop, getting the funding to go to a user’s site can be phenomenally difficult. Often the customer isn’t that interested in letting developers interact with users until they know what’s going to happen. And then there’s the whole issue of arranging travel and so forth. At least in the case of the work that I’ve been doing, it’s usually much easier to have users brought to the development facilities. I’m not really sure why, but this has been true for virtually every development job I’ve had.
Toward the end of the visit, it may be reasonable for the observer to step out of the role and help the users, both to pay them back for participating in the study and to learn more about the things the users want done and why they could not do them themselves. This is a good point, though it seems to happen in a more iterative way, at least in my experience. Users, particularly beta users, have lots of issues, and seem to prefer a problem/solution iteration. It may not be the best from an ethnographic perspective, but it makes for a happier user, who feels less like the subject of an experiment.
One cannot always take user statements at face value. Data about people’s actual behavior should have precedence over people’s claims of what they think they do. In a classic study, Root and Draper [1983] asked users whether they knew various commands. Gospel. Amen!
Also, interviews can be more free-form than questionnaires, with the interviewer opportunistically asking follow-up questions that were not in the script. Since 805, I have become a huge fan of unstructured or loosely structured interviews. Much more information emerges from these discussions, and often seems to coalesce about certain points. It may be harder to code and analyze, but it’s much richer.
Questionnaires are probably the only usability method that makes such extensive coverage feasible. I disagree; automated data gathering should reach even more users. That being said, I recently did a Google search where a single “did this search get what you want?” dialog popped up. With software that’s used by a large user base over long time spans, it might be more effective to ask a single question occasionally, since the effort to answer is very low.
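The occasional-single-question idea above can be sketched as a throttling policy. This is a minimal, hypothetical sketch (the function name, gap, and probability are my assumptions, not from any particular product): never re-ask a user within some minimum number of sessions, then ask with a small random probability so any one user is rarely interrupted while a large user base still yields plenty of answers.

```python
import random

def should_ask_survey(sessions_since_last_ask: int,
                      min_gap: int = 50,
                      ask_probability: float = 0.02) -> bool:
    """Decide whether to show a one-question survey this session.

    Hypothetical policy: enforce a quiet period of `min_gap` sessions
    after any ask, then ask with a small random probability. Low cost
    per user, steady trickle of data across the whole user base.
    """
    if sessions_since_last_ask < min_gap:
        return False
    return random.random() < ask_probability
```

The interesting design choice is that coverage comes from the size of the user base, not from burdening individual users with long questionnaires.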
Only ask a question if you want to know the answer (that is, if the replies will make any difference to your project). In a tactical or strategic sense? I think the questions narrow as the project matures, but there’s always the next version…
To prepare for a focus group, the moderator needs to prepare a list of the issues to be discussed and set goals for the kinds of information that are to be gathered. Is there such a thing as a pilot focus group, or is that easier with interviews? I wonder what the difference would be?
As with all methods that are based on asking users what they want instead of measuring or observing how they actually use things, focus groups involve the risk that the users may think they want one thing even though they in fact need another. Again, gospel. You’ve got to wonder if Microsoft is listening to too many users who say they want “bigger screens in more places.”
In addition to statistical use of logging data, it is also possible to log complete transcripts of user sessions for use in later playback. Yep, and I wonder if there are standard patterns in those transcripts that could automate the finding of a significant proportion of these issues.
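As a sketch of what automated pattern-finding over logged transcripts might look like: count repeated n-event subsequences, on the assumption that frequent loops (open a dialog, cancel, open it again) flag spots where users are stuck. The event names and the function itself are hypothetical illustrations, not from the text.

```python
from collections import Counter

def find_repeated_patterns(transcript, n=3):
    """Count every n-event subsequence in a logged session transcript.

    A crude pattern-finder: frequent n-grams such as
    ('open-dialog', 'cancel', 'open-dialog') can flag places where
    users loop without making progress.
    """
    grams = Counter()
    for i in range(len(transcript) - n + 1):
        grams[tuple(transcript[i:i + n])] += 1
    return grams

# Hypothetical session: the user opens and cancels the same dialog twice.
session = ["open-dialog", "cancel", "open-dialog", "cancel",
           "open-dialog", "save"]
repeats = find_repeated_patterns(session, n=2)
```

A real system would of course need smarter heuristics than raw n-gram counts, but even this much would triage transcripts for human review.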
Mainframe or tightly networked systems can directly include a “useless” or “gripe” command that will allow users to vent their frustration by sending a complaint to the development team immediately after they encounter a part of the system that does not address their needs. I have put this in software on the request of users. It never gets used. Users prefer to complain by email.
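For what it's worth, the "gripe" command I built looked roughly like the sketch below (field names are from memory and should be treated as hypothetical): the one thing an in-app command can do that email can't is capture context the user never bothers to type, like the current screen and a timestamp.

```python
import datetime

def gripe(user, message, context):
    """Package an in-app complaint as a report for the development team.

    Attaches context the user would never include in an email:
    a timestamp and whatever state the caller passes in (current
    screen, recent actions, etc.). A real system would queue this
    report somewhere the developers actually read.
    """
    report = {
        "user": user,
        "message": message,
        "timestamp": datetime.datetime.now(
            datetime.timezone.utc).isoformat(),
    }
    report.update(context)
    return report

# Hypothetical use, right after the user hits a dead end:
r = gripe("alice", "can't export to CSV", {"screen": "report-view"})
```

Given that users preferred email anyway, a variant worth trying is having the command pre-fill an email with this context rather than sending a report directly.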
If possible, the acknowledgment should not be a form letter but should explicitly address the concerns raised by the user… Does bug tracking address this? It makes me feel better when a bug gets a tracking number.
Iterative design of such a system will be a combination of a few, longer-lasting “outer iterations” with field testing and a larger number of more rapid “inner iterations” that are used to polish the interface before it is released to the field users. Pretty much classic “Agile” programming practice from 1993, eight years before the Agile Manifesto.