What is evaluation? Who is it for? Why do it? What’s the difference between evaluation and research? Does it make as much difference as it purports to?
A recent report from the UK, Evaluating evaluation: Increasing the Impact of Summative Evaluation in Museums and Galleries by Maurice Davies and Christian Heath, makes for interesting reading in this area [1]. They observe that summative evaluation hasn’t informed practice as much as it might have, and look at some of the reasons why that might be the case. Their overall conclusion is one of “disappointment”:
Disappointment that all the energy and effort that has been put into summative evaluation appears to have had so little overall impact, and disappointment that so many evaluations say little that is useful, as opposed to merely interesting. . . With some notable exceptions, summative evaluation is not often taken seriously enough or well enough understood by museums, policy makers and funders. The visibility of summative evaluation is low. Too often, it is not used as an opportunity for reflection and learning but is seen as a necessary chore, part of accountability but marginal to the work of museums. (Davies and Heath 2013a, p.3)
I won’t go into their findings in detail as the full report is available online and I recommend you read the whole thing, or at least the executive summary (see references below). But I will tease out a couple of issues that are of particular relevance to me:
Conflicting and Competing Agendas
Davies and Heath describe scenarios that are all too familiar to me from my time working in exhibition development: exhibition teams being disbanded at the conclusion of a project with no opportunity for reflection; summative reports not being shared with all team members (particularly designers and other outside consultants); insufficient funds or practical difficulties in implementing recommended changes once an exhibition is open; evaluation results that are too exhibition-specific and idiosyncratic to be readily applied to future exhibition projects.
They also give an insightful analysis of how the multiple potential purposes of evaluation can interfere with one another, and make a convincing argument for separating out different kinds of evaluation recommendations, or at least for being more explicit about which purpose a given evaluation is meant to serve:
- Project-specific reflection: evaluation as a way of reflecting on a particular project and as an opportunity for the learning and development of exhibition team members
- Generalisable findings: the capacity of evaluation results to build the overall knowledge base of the sector
- Monitoring and accountability: evaluation reports are usually an important aspect of reporting to a project funder or the institution as a whole
- Advocacy and impact: using evaluation results to create an evidence base for the value of museums for potential funders and society at large
As we move down this list, the pressure on evaluation results to tell “good news” stories increases – evaluation becomes less a way of learning and improving and more a platform to prove or demonstrate “success”. Museums may be reluctant to share critical self-appraisal for fear that exposing “failure” will make it more difficult to get support for future projects. Such findings may not be shared with other museums or even other departments within the museum – let alone potential funders or other stakeholders. Furthermore, generalisability is often limited by methodological inconsistencies between different institutions and the reporting requirements of different funding bodies.
Comparing Evaluation with Research
On the subject of methodology, I’ll make a couple more observations, in particular about the difference between evaluation and research (at least in visitor studies). The two terms are often used interchangeably, and the line is admittedly blurry, particularly since research and evaluation use essentially the same tools, approaches and methods.
The way I see it, visitor research seeks to understand “how things are”. It tries to advance knowledge and develop theory about what visitor experiences are and what they mean: to individuals, to institutions, to society at large. Visitor research is usually positioned within a broader academic discourse such as psychology or sociology. Research findings can be valid and useful even if they don’t directly lead to changes in practice [2].
In contrast, evaluation is more interested in “how things could be improved”. To quote Ben Gammon, who was one of my first mentors in this field:
Evaluation is not the same as academic research. Its purpose is not to increase the sum of human knowledge and understanding but rather to provide practical guidance. If at the end of an evaluation process nothing is changed there was no point in conducting the evaluation. This needs to be the guiding principle in the planning and execution of all evaluation projects. (quoted in Davies and Heath 2013a, p.14)
Evaluation is therefore more pragmatic and applied than visitor research. The validity of evaluation lies less in its methodological rigour than in the extent to which its results are useful – and actually used.
Notes
[1] At the outset of their research, Davies and Heath wrote an opinion piece for the Museums Journal outlining some of the issues they had identified with summative evaluation. I wrote a response to it at the time, which, interestingly, was itself cited in their report. Besides being somewhat startled (and delighted!) to see one of my blog posts cited in a more academic type of publication, I think it serves as an interesting example of how the lines are blurring between formal and informal academic writing and commentary.
[2] When I was doing data collection for my PhD, many people assumed the purpose of my research was to “make improvements” to the galleries I was studying. It’s a reasonable inference to make, and I do hope my results will eventually influence exhibition design. However, my PhD is research, not evaluation – and as such is more interested in understanding fundamental phenomena than in the particular galleries I happened to use in my study.
References:
Davies, M., & Heath, C. (2013a). Evaluating Evaluation: Increasing the Impact of Summative Evaluation in Museums and Galleries. Retrieved from http://visitors.org.uk/files/Evaluating Evaluation Maurice Davies.pdf
Davies, M., & Heath, C. (2013b). “Good” organisational reasons for “ineffectual” research: Evaluating summative evaluation of museums and galleries. Cultural Trends, (in press). doi:10.1080/09548963.2014.862002