Research Methods for Information Research
7. Beyond research methods
7.2 Interpreting research findings
Some years ago a short report in one of the ‘quality’ newspapers offered a timely reminder that taking research reports at face value can be dangerous. It claimed that “White people are more likely to be victims of racially motivated crime than those from ethnic minorities in Bradford …according to a police survey. It found 52 per cent of race victims in the city were white, 29 per cent Pakistani, 9 per cent black …” This lazy piece of reporting was based on someone’s interpretation of the survey and overlooked the small matter of the proportions of white and black people in Bradford! Here are some hints on digging below the surface, in the form of questions that should be asked when reading any research report.
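First, though, a back-of-the-envelope check shows what the Bradford report missed. The population and victim figures below are invented purely for illustration (they are not Bradford’s actual numbers), but the per-capita calculation is the one the reporter should have done:

    # Invented figures, for illustration only - not Bradford's real data.
    population = {"white": 400_000, "pakistani": 70_000, "black": 15_000}
    victim_share = {"white": 0.52, "pakistani": 0.29, "black": 0.09}
    total_victims = 1_000  # assumed number of recorded race-crime victims

    for group, share in victim_share.items():
        victims = share * total_victims
        per_100k = victims / population[group] * 100_000
        print(f"{group}: {victims:.0f} victims, {per_100k:.1f} per 100,000")

On these made-up figures the white group supplies the most victims in absolute terms, yet its rate per 100,000 residents is far lower than that of the minority groups – the opposite of the newspaper’s conclusion.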
Who is saying it?
Although good quality research can pop up in the most unlikely places, there is a ‘pecking order’ that can be detected in most research areas, including information research. It is reasonable to expect that work which has gone through the competitive funding and peer-refereeing processes operated by the major funders should at least be competently conducted, especially if it is published in a research series offered by the funding body.
Again, we can take some reassurance if the work is picked up by a reputable journal or if it emanates from a good academic department with a research reputation to protect. After that, we have to work on a case-by-case basis, and this is not easy given the pressure to publish placed on UK academic staff by the Research Assessment Exercise.
What are they really saying?
It should be possible to tell what a research project is trying to do and whether the chosen approach is likely to deliver. Care is needed in describing the research ground covered: for instance, to return to my opening topic, we are regularly informed that the crime rate has gone up or down, but what is being measured here? Crime statistics are usually based on recorded crime rather than on the crimes themselves, and may be distorted by many factors, such as the reluctance of some people to talk to the police, how the police record what is reported, what the police have decided (or have been told) to target, the demands of insurance companies in processing claims, or the ability of criminals to cover their tracks.
One way into this tangled web is to ask ‘What questions would you expect to be addressed in the research?’ and see whether this ground has been covered. I was once asked to referee an article submitted to Educational Research which set out to assess the productivity of US University Departments in a particular area of health education. The chosen method was to rank all the Departments working in this area by the frequency of publication by their academic staff in two journals. This of course raised a few questions – why these two journals? What were their editorial policies? What were their rejection rates, and how long was the backlog of items awaiting publication? What was the likelihood of health education specialists opting to publish in other, more general or more specialist, publications? And so on. None of these questions was answered, but there was an even more telling gap. In all the multitude of tables there was nothing that took account of the number of members of staff in each Department, although this information was given. When I carried out a ‘productivity per member of staff’ calculation the reason for this ‘lapse’ became obvious – there was an elegant inverse relationship between the size of staff and frequency of publication. In other words, the bigger the Department the lower the ‘productivity’ – or, presented as a conclusion: ‘to increase productivity, all you have to do is get rid of staff!’
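The calculation itself is trivial. The publication and staff counts below are invented, but they reproduce the kind of pattern described above:

    # Invented counts for four hypothetical departments: (publications, staff).
    departments = {
        "Dept A": (40, 60),
        "Dept B": (30, 25),
        "Dept C": (18, 10),
        "Dept D": (9, 4),
    }

    # Rank by raw publication count, then show publications per member of staff.
    for name, (pubs, staff) in sorted(departments.items(),
                                      key=lambda kv: kv[1][0], reverse=True):
        print(f"{name}: {pubs} publications, {staff} staff, "
              f"{pubs / staff:.2f} per head")

Ranked by raw counts, the biggest department comes first; ranked per head, it comes last – the inverse relationship that the submitted article’s tables quietly omitted.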
What evidence are they basing their conclusions on?
The research report should make it very clear how the work was conducted, with what intensity, and when. This is not a simple case of ‘more is better’. Adequate representation of a ‘population’ (in the statistical sense of the number of relevant potential respondents or cases) is not just about total numbers surveyed. On the other hand, a national questionnaire survey that achieves a response rate of 90% looks pretty reliable – if the questions (and the answers) are judged to be sensible by the report’s readers.
Unfortunately, seemingly portentous conclusions are regularly drawn from surveys with a low return rate. All that can safely be said when the response rate is below 50% is that we know nothing at all about the majority of potential respondents. Too often no response rate is given (especially in market surveys!), and sometimes no information is given about the basis on which potential respondents were targeted.
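The arithmetic behind that warning is worth making explicit. With hypothetical figures – 1,000 questionnaires sent, 400 returned, 240 of them answering ‘yes’ to some question – the sketch below bounds what the full population could look like if the non-respondents differ systematically from the respondents:

    # Hypothetical survey: 1,000 sent, 400 returned, 240 'yes' answers.
    sent, returned, yes = 1_000, 400, 240

    observed = yes / returned              # share of respondents saying yes
    low = yes / sent                       # if every non-respondent would say no
    high = (yes + sent - returned) / sent  # if every non-respondent would say yes

    print(f"response rate: {returned / sent:.0%}")
    print(f"'yes' among respondents: {observed:.0%}")
    print(f"true figure could lie anywhere between {low:.0%} and {high:.0%}")

A headline figure of 60% could, in the worst case, correspond to anything from 24% to 84% of the whole population – which is what ‘knowing nothing about the majority’ means in practice.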
In qualitative research, challenging conclusions can be drawn from a small number of case studies if these are conducted in sufficiently rigorous fashion. More than thirty years ago Henry Mintzberg changed the whole direction of management thinking when his ground-breaking investigation of The nature of managerial work was published.[19] His research was based on observation of five Chief Executives at work – and on his brilliant powers of synthesis.
However, even with the best-conducted small-scale project, it is important to allow for the possibility that the people studied may not be typical of their peers. There are obvious dangers in trying to generalise from a small number of cases without first trying out tentative findings on larger numbers of people in similar roles. Two approaches (amongst several possibilities) are to translate the main findings into a questionnaire and test them in a larger survey, or to try out emerging findings in workshops involving other practitioners in the same field. The research report should tell you what (if anything) has been done to strengthen the conclusions on offer. Enjoy the reading!
19. MINTZBERG, H. The nature of managerial work. London: Harper and Row, 1973.