When carrying out research, we can become so involved in our topic, and so concerned to show that our methods work and our conclusions are significant, that it is easy to forget to question some of the research’s basic assumptions. This can happen even when we include a discussion of its ‘limitations’ or engage with concepts of ‘validity’.
This was brought home to me recently when I received feedback on the first (solo) journal article I submitted for publication. The article, ‘Capturing learning: Using visual elicitation to investigate the workplace learning of “newly qualified” in-service teachers in further education’, addressed an aspect of the methodology of my doctoral research, namely the use of a visual tool to explore the learning of recently qualified in-service FE teachers in the workplace.
As part of the research, I carried out visual elicitation interviews using the ‘Pictor technique’ (King et al., 2013), which involves the participant using sticky arrows to label aspects of their experience and then placing them in relation to each other on a large piece of paper. This ‘chart’ is then used as an interview prompt, eliciting from the participant their interpretation of the items they have included and the relationship between them.
My reasons for choosing this method were multiple: like other visual elicitation methods, it offered a way of positioning the participant as ‘expert’ in relation to their experience, and of introducing a ‘neutral third party’ (Banks, 2007) to the interview situation. The chart stimulates extended talk without the researcher needing to prompt it. I was also keen to avoid participants providing only the ‘officially sanctioned’ (King et al., 2013) version of events, the kind of response I might have generated if I’d asked, ‘what have you learnt this week?’
Instead, I posed the question, ‘who or what influences what you do in your job role?’ My conceptualisation of learning as ‘situated practice’ (Lave & Wenger, 1991) meant that I was seeking to understand the everyday practices in which the recently qualified teachers engaged, the people with whom they interacted, and the negotiation of meaning (and hence learning) necessitated by these interactions. This relational understanding of learning led me to view Pictor, a relational diagramming technique (Bravington & King, 2018), as an appropriate tool.
However, I had been so busy justifying my choice of method that I lost sight of a genuine consideration of its limitations.
What if, the peer-reviewer asked, the Pictor chart were merely an artefact of the research tool itself? Perhaps the relationships it appeared to reveal were a product of its focus on relationships?
This prompted a renewed consideration of my data and of the expectations (written and otherwise) I had conveyed to my interviewees. I believe I was ultimately able to show, through an analysis of the spoken data rather than of the visual chart in isolation, that there was evidence of interaction and participation beyond the production of the chart itself.
But this process reminded me of one of Howard Becker’s Tricks of the Trade (1998), that of applying a ‘null hypothesis’ to your object of study. You start from the premise that there is no connection between ‘variables’: that the apparently shared characteristics of your participants are coincidental. In this case, it means assuming that the labelling and placement of items in the Pictor chart is a random scattering, which has nothing to say about the participant’s actual participation in everyday practices. It is then the job of the researcher to show, through careful analysis and communication of the data, that these connections do exist.
This approach can be expressed through the concept of ‘rich rigor’ [sic], one of Sarah Tracy’s ‘big-tent’ criteria for qualitative research. It involves the detailed and transparent analysis of data that is ‘sufficient, abundant, appropriate, and complex’ (2010, p. 840). It would not shy away from data that is in some way problematic, ‘the mongrel stone’ that Dr Ian Rushton referred to in his recent ’Ed Space blog post.
In a sense, there is nothing here that goes beyond the expectation that researchers will work with integrity, exposing gaps where they may exist and being explicit about choices made. However, I find it a useful reminder of the need to look not just for things that may confirm my analysis, but for those that may weaken it. Although uncomfortable, and somehow counter-intuitive when I am trying to persuade others of my argument, ‘rich rigor’ highlights the value of seriously considering that I may be wrong.