Research Methods for Information Research
2. Asking questions (and getting research answers)
2.10 Questionnaires: why they don’t work
Even some of the most common uses to which questionnaires are put are more problematic than they appear to be. What do we expect questionnaire respondents to do? Three of the most common tasks that we set respondents are:
Fitting themselves into categories:
This generally works well for the majority of respondents but, almost by definition, fails completely for the minority who don’t fit. Since much processing of results depends upon analysing responses from different occupational, educational, activity or other groups, guiding people into the ‘wrong box’ can be damaging. It is all too easy to do, because there are never enough categories to cover every case, and there is usually no hope of collecting enough additional information afterwards to sort out the mistakes.
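The scale of the damage is easy to see with a toy example. The Python sketch below is illustrative only (the categories, respondents and answers are invented): one person forced into the nearest available occupational box shifts a small group’s satisfaction figure by a third, and nothing in the returned forms tells the analyst which response to question.

```python
# Illustrative sketch only: the categories and answers below are invented.
from collections import Counter

# (occupational category ticked, satisfied with the service?)
# The last respondent is really a freelance researcher, but the form
# offered no suitable category, so they ticked 'Administrator'.
responses = [
    ("Librarian", True), ("Librarian", True), ("Librarian", False),
    ("Administrator", True), ("Administrator", True),
    ("Administrator", False),   # the misclassified freelance researcher
]

def satisfaction_by_group(rows):
    """Percentage of satisfied respondents within each ticked category."""
    totals, satisfied = Counter(), Counter()
    for category, is_satisfied in rows:
        totals[category] += 1
        satisfied[category] += is_satisfied
    return {c: round(100 * satisfied[c] / totals[c]) for c in totals}

print(satisfaction_by_group(responses))
# {'Librarian': 67, 'Administrator': 67} -- without the wrong box the
# 'Administrator' figure would have been 100.
```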
Expressing levels of satisfaction by awarding scores:
We often ask respondents to tick boxes to show how much they like a service, product or activity. This can create an instant mess if people fail to read the instructions (by, for example, rating a service on a ten-point scale but taking 1 as the highest score instead of 10) – and someone will! Further, an unwritten law of analysing satisfaction ratings is that the respondent whose evaluation is way out of line with every other user’s will not have said why, even when asked to provide comments.
The real difficulty with this type of exercise is that we have no idea how critical respondents normally are when they score things. If you are a chocolate lover you may have encountered a tasting club that asks you to complete and return a ‘tasting scorecard’. In awarding your marks, do you judge against the finest conceivable chocolate, or do you start from ‘compared with the stuff sold at the supermarket’? A trainer colleague recently received a small accolade along with a disappointingly low set of scores in the reactionnaire completed by a participant at the end of a workshop. After ticking the boxes the person wrote, “I never give more than 7 marks out of 10 for anything I attend – so 7.”
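A few lines of arithmetic show how little it takes. In the sketch below (the scores are invented for illustration), one respondent who takes 1 as the top of a ten-point scale drags the mean well below what the other responses suggest; a crude out-of-line check can flag the score, but it cannot say whether the scale was reversed or the experience really was dreadful.

```python
# Illustrative sketch only: invented scores on a ten-point scale where 10
# is meant to be the best.  One respondent read the scale the other way
# round and ticked 1 to mean 'excellent'.
from statistics import mean, median

scores = [9, 8, 9, 10, 8, 9, 1]   # the final 1 is the reversed response

print(round(mean(scores), 1))     # 7.7 -- one misread drags the mean down
print(median(scores))             # 9   -- the median is more robust

# A crude screen for responses that are 'way out of line' with the rest.
# It can flag the score, but only a follow-up question could explain it.
centre = median(scores)
flagged = [s for s in scores if abs(s - centre) >= 4]
print(flagged)                    # [1]
```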
Estimating how frequently they do things:
As a general rule, senior managers are quite good at judging how much time they spend on activities, because they need to do this as part of their work. Most other people struggle with this type of question, whether it concerns time spent searching the Internet or how often they visit a library. Too often, the question compiler assumes that the respondent actually does the activity (whether they do it at all should be the prior question). The easy way out is to pretend that you do, especially if you feel that you should. The most famous example of frequency distortion is probably the pair of Kinsey reports on the sexual behaviour of the human male and female respectively. Why was it so surprising at the time (and so unsurprising now) that, according to the surveys, men reported engaging in heterosexual activity more frequently than women?
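One practical answer to the ‘prior question’ problem is explicit skip logic: establish whether the respondent does the activity at all before asking how often. The sketch below is a minimal illustration only; the wording, the answer options and the function names are invented for the example, not a recommended instrument.

```python
# Illustrative sketch only: a minimal filter-plus-frequency routine.
# All wording, options and names here are invented for the example.

FREQUENCY_OPTIONS = ["daily", "weekly", "monthly", "less often"]

def ask(prompt: str) -> str:
    return input(prompt + " ").strip().lower()

def library_use_questions() -> dict:
    # Filter question first, so non-users are never nudged into inventing
    # a visiting frequency because they feel they ought to have one.
    if ask("Do you ever visit a public library? (yes/no)") != "yes":
        return {"visits_library": False, "frequency": None}
    choice = ask(f"Roughly how often? {FREQUENCY_OPTIONS}")
    return {"visits_library": True, "frequency": choice}

if __name__ == "__main__":
    print(library_use_questions())
```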
And now for some real complications:
Having struggled through variations on these types of questions, the questionnaire compiler becomes ambitious and asks a series of open-ended questions, and the fundamental weakness of the questionnaire becomes manifest. Has not everyone who has ever analysed survey results found themselves thinking, “If only we could ask a follow-up question to seek clarification or get more detail here, and here and …”?