Research Methods for Information Research
5. Other Methods
5.4 Collecting Stories
There are a number of ways of showing that an innovation is bringing about change, ranging from measuring increased service activity to taking ‘before’ and ‘after’ photographs. But if you want to find out about real changes in what people are doing and how they are being affected, collecting stories systematically offers a valuable form of evidence and should also be useful in advocacy.
But whose stories are of interest and how should we go about collecting them? If you are undertaking impact evaluation of a service, you may want to collect stories from the people providing that service. The advantages are twofold: the local service providers should have some understanding of why you want to collect the stories and what sort of stories to collect, and they are relatively easy to contact. Stories representing the views of the providers can be valuable if you can persuade them to tell the ‘warts and all’ version of the story (not just the good news) and if you also try to tap into the views of the people who are using the service. It may be difficult to get a sufficiently wide range of stories from service users unless you think carefully about when to try to make contact. There are no hard and fast rules here, but two potentially useful approaches are to set up story collection sessions when the people you are interested in come together for another purpose (standing a round of drinks at the end of a meeting just might pay dividends!) or when they have just finished a particular session using the service, in which case that encounter might be the critical incident that you get them to tell you about (see section 3.7). Critical incident interviewing is a form of structured, question-led storytelling.
An issue for story collectors is how much structure to provide. You may want to adopt the ethnographic researcher’s standpoint of trying to get people to tell their own stories in their own way (‘capturing authentic voices’ in the jargon), in which case you will have to listen actively to a good deal of conversation that might seem tangential to the topic that interests you. Alternatively, you may prefer to use triggers, such as photographs of people using the service, as a stimulus to and focus for storytelling, or to go equipped with a set of open-ended questions to get people started.
Even if we opt for the ‘naturalistic’ story-gathering approach, we should not fool ourselves that we are collecting the unvarnished truth. As David Silverman points out12, “telling someone about our experiences is not just emptying out the contents of our head but organising a tale told to a proper recipient by an authorised teller”. In our role as ‘proper recipients’ we have to watch what we are doing, as well as what the storyteller is trying to bring about.
The next issue is about editing. Who decides which stories to use as evidence, and in how much detail? You may want to get members of a community to choose their own representative stories, rely on service managers to select a range of views, or even hire in someone to take on the task more independently. Some years ago I helped to set up an experimental information service aimed at all local education authorities in England and Wales. At evaluation time we hired an education journalist to prepare a set of prompt questions, distribute these to the link-people nominated by each authority and ask them to gather stories from their colleagues about using the service. The journalist then edited the results into a formal evaluation report on the service, which was submitted directly to the project funder. This was a rare example of delegating the entire evaluation process to the users and their orchestrator.
There are a number of more systematic approaches to collecting stories as part of impact evaluation13. The essence of most of these approaches is that you should be thorough in collecting stories of success and failure and should then use them as a focus for gathering more evidence, to see whether the stories describe one-off effects or are representative of the bigger picture. The question of how representative a picture is being presented through stories is a difficult one – how can any unique story represent a common experience? However, some attempt has to be made to indicate whether what is chosen as evidence is broadly representative or whether it is an interesting (and hence potentially illuminating) exception. This issue of contextualising the evidence as carefully as possible is probably what distinguishes the use of stories as evidence from the presentation of stories for advocacy. Both are legitimate pursuits: the dangers start when these activities (and the use of stories) are conflated.
5.5 Organising the Stories
A Chinese philosopher described the fulfilled life as ‘Doing something new every day’, so I probably shouldn’t have been surprised to find myself watching a video on tomato-selling. In the space of five minutes, the tomato producers from Syn’kiv Village in Ukraine put the most powerful case for public libraries that I have come across in years. The villagers specialize in growing early vegetables, particularly tomatoes, and took advantage of the Library Electronic Access Program (LEAP) funded by the US Embassy, which equipped the village library with Internet access. This led the villagers to seek out high-yield tomato varieties and to monitor the weather conditions closely (to avoid over-watering their crops), not to mention searching for horses and agricultural machinery. If you would like to see ‘Librarian + Internet = better tomatoes’, see:
Managers involved in evaluating the impact of library and information services are getting increasingly interested in gathering stories from people illustrating how they use services and how their lives are changed as a result. But what do you do when you have collected the stories? Do you relay the stories in the narrators’ words, edit their grammar, select what you think are telling incidents or phrases, merge them into ‘typical tales’ or subject them to statistical analysis? It all depends, as ever, on what you are trying to achieve. However, what is clear is that if you are trying to build up an evidence base, it is important to try to collect ‘the good, the bad and the ugly’, not just evidence of success. As the tomato video illustrates, picking the right success stories is important as propaganda (especially if the narrators can be persuaded to retell their stories to emphasise the desired messages), but if you want the full picture in order, for example, to decide where to focus in developing your service, the ‘warts and all’ portrait is needed.
An interesting approach to organising and making sense of people’s narratives is the Kaleidoscope of Data, which picks up on the Constant Comparison method pioneered by Glaser and Strauss14. The Constant Comparison approach involves the researcher in four distinct stages: comparing incidents applicable to each category, integrating categories and their properties, delimiting the theory, and writing the theory. When applied rigorously, the Glaser and Strauss approach can produce fresh insights: Goetz and Le Compte15 reported that “as events are constantly compared with previous events, new typological dimensions, as well as new relationships may be discovered.” The act of categorizing should help the researcher gain an understanding of their evidence, but the difficulty lies in ‘keeping all the categorical options open’ throughout the early stages of the process. This is not easy, especially for new researchers, but one aid to this process has been offered by Jane Dye and her colleagues. They proposed using the kaleidoscope metaphor “to learn the importance of allowing categories to fit the data” by visually representing data and categories. In this rendering of the metaphor, “the loose bits of coloured glass represented our data bits, the two plain mirrors represented our categories, and the two flat plates represented the overarching category that informed our analysis.”16
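To make the mechanics a little more concrete, the sketch below shows one deliberately crude way of mimicking constant comparison in code, assuming that each ‘data bit’ is a short story excerpt and that similarity is judged by nothing more sophisticated than word overlap. The excerpts, the overlap threshold and the automatic labels are all invented for illustration; none of this comes from Glaser and Strauss or from Dye and her colleagues, and in real analysis the comparing, merging and naming of categories is the researcher’s interpretive work rather than a calculation.

```python
# Illustrative sketch only: a crude, word-overlap stand-in for constant
# comparison, assuming each 'data bit' is a short excerpt from a story.
# Categories emerge from the data rather than being fixed in advance.

def words(text):
    """Reduce an excerpt to a set of lower-case content words."""
    stopwords = {"the", "a", "an", "and", "of", "to", "in", "it", "was", "i"}
    return {w.strip(".,!?") for w in text.lower().split()} - stopwords

def constant_comparison(excerpts, threshold=0.2):
    """Compare each new excerpt with the categories built up so far;
    start a new category when nothing fits well enough."""
    categories = []   # each category: {"label", "members", "vocab"}
    for excerpt in excerpts:
        bits = words(excerpt)
        best, best_score = None, 0.0
        for cat in categories:
            overlap = len(bits & cat["vocab"]) / len(bits | cat["vocab"])
            if overlap > best_score:
                best, best_score = cat, overlap
        if best is not None and best_score >= threshold:
            best["members"].append(excerpt)
            best["vocab"] |= bits        # the category's 'properties' grow
        else:
            categories.append({"label": f"category {len(categories) + 1}",
                               "members": [excerpt],
                               "vocab": bits})
    return categories

# Hypothetical story excerpts (invented, not real evaluation data):
stories = [
    "I used the internet at the library to find better tomato seed.",
    "I used the internet at the library to check the weather forecast.",
    "Staff were too busy to help me when I visited.",
]
for cat in constant_comparison(stories):
    print(cat["label"], "->", cat["members"])
```

Even this toy version shows where the real difficulty lies: the quality of the analysis depends entirely on how ‘fit’ is judged and on the researcher’s willingness to keep the categorical options open rather than accepting the first plausible grouping.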
Whatever aids are employed, the problem remains: how can the researcher reduce mountains of data to manageable proportions without seriously distorting the pictures that are buried within the mountain? You could fall back on the hypothesis-testing approach of whittling the evidence down into two categories – that which supports the hypothesis and that which undermines it, but this presupposes that you have a good inkling of what you will find out before you start the research. Reaction against applying what is essentially a scientific approach to social science research led to the grounded theory approach being advanced by Glaser and Strauss, complete with the idea of constant comparison. But how can researchers ensure that they are teasing their theories out of the evidence collected, especially when it is in the relatively free form of other people’s stories?
Another interesting way to organise narratives (and other forms of evidence) is the SenseMaker™ software offered by Cognitive Edge (www.cognitive-edge.com). In their words, “The software and linked methods allow the collection and tagging of multiple sense-making items which can be anecdotes, pictures, web sites, blogs and other forms of unstructured data. These items can be also linked to more traditional systems such as content management. The tagging provides sophisticated metadata which can be used to provide quantitative research material, as well as measurement systems and impact analysis. Visualisation tools, linked to methods and models, permit users to sense complex patterns and anomalies that would not be visible to conventional analysis.” [My emphasis.]
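To show, in the most general terms, what tagging that yields quantitative research material might look like in practice, here is a minimal sketch. It is emphatically not Cognitive Edge’s software or its data model: the item structure, the tags and the example stories below are all invented for illustration.

```python
# Illustrative sketch only: a generic way of tagging story items with
# metadata and then treating the tags as simple quantitative material.
# This is not SenseMaker; the structure and tags below are invented.
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class StoryItem:
    text: str
    narrator: str                            # who told it (or 'anonymous')
    tags: set = field(default_factory=set)   # e.g. {'success', 'staff help'}

# Hypothetical collection (invented examples, not real evaluation data):
items = [
    StoryItem("Found a buyer for our early tomatoes online.",
              "grower", {"success", "internet access"}),
    StoryItem("The computer was always booked when I needed it.",
              "student", {"failure", "access problems"}),
    StoryItem("The librarian showed me how to search for weather forecasts.",
              "grower", {"success", "staff help"}),
]

# Simple quantitative summaries derived from the tags:
tag_counts = Counter(tag for item in items for tag in item.tags)
success_share = tag_counts["success"] / len(items)

print(tag_counts.most_common())
print(f"Stories tagged as successes: {success_share:.0%}")
```

Even at this toy scale, the tag counts begin to show where the successes and the access problems cluster, which is the kind of pattern-spotting that the visualisation tools mentioned above are designed to support at much larger volumes.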
One possibility created by using these types of software is for an element of self-referencing by the narrators. Instead of concentrating on the researcher making sense of a collection of narratives, why not focus on helping the narrators to fit their own accounts into the broader picture — transforming them from passive ‘research subjects’ into genuine contributors of evidence? (No, this is not an original idea; David Snowden of Cognitive Edge is actively working on this approach.) And why not involve the narrators in deciding how to edit and use their narratives to advance the research — in which case, whose version of the research will emerge?
12. Silverman, D. A very short, fairly interesting and reasonably cheap book about qualitative research. London: Sage, 2007. ISBN 978-1-4129-4595-0.
13. See, for example, Brinkerhoff, R.O. The Success Case method: find out quickly what’s working and what’s not. San Francisco: Berrett-Koehler Publishers, 2003.
14. Glaser, B. and Strauss, A.L. The discovery of grounded theory: strategies for qualitative research. New York: Aldine de Gruyter, 1967.
15. Goetz, J.P. and Le Compte, M.D. ‘Ethnographic research and the problem of data reduction.’ Anthropology and Education Quarterly, 12 (1981), 51-70.
16. Dye, J.F., Schatz, I.M., Rosenberg, B.A. and Coleman, S.T. ‘Constant comparison method: a kaleidoscope of data.’ The Qualitative Report, 4 (1/2), January 2000. www.nova.edu/ssss/QR/QR4-1/dye.html