Research Methods for Information Research

2. Asking questions (and getting research answers)

2.5 Research interviews: choosing who to talk to

I recently spent a few days interviewing strategic managers to try to build up a picture of how the influx of mobile technologies is likely to affect the organisations deploying them. Several of the interviews were rewarding, notably one with a manager who saw an immediate flow of information from front-line operational staff back to headquarters as the answer to his main business concern. The ‘interview’ took the form of a free-flowing, stream-of-consciousness oration in which the manager covered all the ground mapped in the interview schedule, and more besides, without my needing to ask a single question.

The next interview was basically a waste of time: someone had assumed that this person’s role in future planning would make him a valuable respondent, but he had not encountered mobile technologies and appeared to know nothing about their potential. And so it went on: some respondents had a well-developed strategic view of organisational information flows and how information technology might change the picture, others did not, and others again were cautious and apparently defensive in their responses.

One intriguing aspect of exploring organisational change is that there are usually few obvious signs to the outsider of what phase in the change process people have reached. If they are in denial about inevitable change, or confused about its likely effects, they are unlikely to be willing or able to talk constructively about possible benefits – and why should they? On the other hand, if people are strong proponents of change and have invested emotionally in the perceived benefits, you are unlikely to get a ‘warts and all’ portrait of the issues. Again, if management has committed strongly to a particular strategy, they are not likely to encourage much attention to unforeseen consequences, especially if some of these are perceived as negative.

All of which raises the question: how do you select the right respondents or informants to engage with in your research? The first part of the answer is ‘it depends on what you are trying to do in your research.’ This may seem self-evident, but it runs counter to one of the powerful forces shaping much research. People engaging in quantitative research necessarily spend a great deal of time ensuring that they have appropriate samples of potential respondents, taking due account of the full range of people likely to be able to contribute usefully. This process can turn into a ritual dance, as when market researchers in the high street spend time selecting or rejecting their respondents on the basis of apparently irrelevant criteria such as gender or age (do your views on consumer affairs automatically change when you reach the age of 18 or 65?). If you spend time watching these foot soldiers of fieldwork, it is hard to avoid the conclusion that all else is sacrificed in the quest to achieve the magic quotas of different respondent categories.

The concern to draw research evidence from a representative sample of respondents may deflect attention from the need to get a representative range of evidence from the target groups, an intrinsically more difficult task which probably deserves a column in its own right. But there may also be big difficulties in establishing the lists of names on which sampling frames depend, especially in areas like LIS research, which often rely on relatively poor core records. For example, how many organisations dealing with enquiries from service users keep meticulous records of all the enquiries logged? Or, taking a step back, how many libraries and information services have accurate and up-to-date records of their registered users (always assuming that they have such a register or log-on procedure)?

Libraries are often keen to capture enrolment information from new users, but how many spend time and energy systematically weeding these records to remove people who have moved elsewhere or died (assuming that such information is available, which it may be for academic libraries but may not be for public libraries)? Why should they bother? Our recent research, reported in an earlier column, showed that academic libraries find it hard enough to find out potentially vital information about postgraduate and postdoctoral researchers (Who are they? What are they studying? Do they need help?) without setting themselves meaningless administrative tasks.

The same concerns about representativeness may be important in qualitative research, but there are other factors to take into account.

One of the reasons why the fieldwork I described above was so hit and miss was that potential respondents were selected for variety of strategic views and for representation of all the main operational areas of the organisation, rather than for the likelihood that they would have useful experience to impart. Since the purpose of the project was to build up a strategic picture of change, the two main selection criteria should have been strategic grasp and involvement in the change being studied. Sharing these criteria with respondents would not have guaranteed high-quality interview responses, but it would have made them more likely, and it would also have created the possibility of other appropriate people being suggested for interview. Going a step further and publishing the selection criteria in any reports of the research would enable readers to assess the likely reliability of the evidence reported.

There are of course circumstances in which qualitative research is driven by the need to get a full range of views from different categories of respondents, but even then it is unlikely that you will want to cover each category in the same depth (if numbers of respondents can reflect depth of response, which is debatable). Where there is likely to be more intrinsic interest in the views of one group than another, the obvious way forward is stratified sampling, with greater numbers of respondents drawn from some categories, but again this process depends upon having reliable lists of potential victims.
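For anyone who wants to see the mechanics, the sketch below shows one way stratified sampling with deliberately unequal allocation might look in practice. It is a minimal illustration in Python; the sampling frame, category names and quotas are all hypothetical rather than drawn from the project described above.

    import random

    # Hypothetical sampling frame: (name, category) pairs. In practice this
    # would come from whatever registration or enquiry records are held.
    frame = [
        ("Respondent 1", "strategic manager"),
        ("Respondent 2", "operational staff"),
        ("Respondent 3", "operational staff"),
        ("Respondent 4", "service user"),
        # ... and so on
    ]

    # Unequal allocation: more respondents from the group whose views are of
    # greater intrinsic interest to the study. Quotas are illustrative only.
    quota = {
        "strategic manager": 6,
        "operational staff": 3,
        "service user": 2,
    }

    def stratified_sample(frame, quota, seed=42):
        """Draw a simple random sample within each stratum."""
        rng = random.Random(seed)
        strata = {}
        for name, category in frame:
            strata.setdefault(category, []).append(name)
        sample = {}
        for category, n in quota.items():
            members = strata.get(category, [])
            # Take everyone if the stratum is smaller than its quota.
            sample[category] = rng.sample(members, min(n, len(members)))
        return sample

    print(stratified_sample(frame, quota))

The guard against over-small strata (taking everyone when a category has fewer members than its quota) reflects the practical point made above: the process only works as intended if the underlying lists are reliable and reasonably complete.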

Has anyone seen any good victims recently?