Building Evaluation Capacity

I recently attended the 27th Annual Visitor Studies Association conference in Albuquerque, NM. Given that the theme was *Building Capacity for Evaluation: Individuals, Institutions, the Field*, it’s not surprising that “capacity building” was a common topic of discussion throughout the week. What do we mean by building capacity? Whose capacity are we building, and why? Pulling together threads from throughout the conference, here are some of my thoughts:

Individual capacity building:

Any conference offers a chance to hear about developments in the field and to build your professional networks, which is a form of personal capacity building. VSA in particular runs professional development workshops before and after the conference as an opportunity to sharpen your skills, be exposed to different approaches, and learn new techniques. These are useful both for newcomers to the field and for more experienced researchers who might be interested in new ways of thinking, or in new types of data collection and analysis.

A common thread I noticed concerned both the opportunities and the challenges presented by technology – video and tracking software allow you to collect much more detailed data, and you can integrate different data types (audio, tracking data) into a single file. But technology’s no panacea, and good evaluation still boils down to having a well thought-through question you’re looking to investigate and the capacity to act on your findings.

[Photo: Panel session at VSA 2014]

Institutional capacity building:

There was a lot of discussion around how to increase the profile of Evaluation and Visitor Research within institutions. There seemed to be a general feeling that “buy-in” from other departments was often lacking: evaluation is poorly understood, and therefore not valued, by curators and others whose roles do not bring them into regular, direct contact with visitors. Some curators apparently come away with the impression that evaluators only ask visitors “what they don’t like”, or otherwise have a vested interest in exposing problems rather than celebrating successes[1]. Others believe they “already know” what happens on the exhibition floor, but without systematic observation they may only be seeing what they want to see, or drawing conclusions about what works and what doesn’t based on their own assumptions rather than evidence.

For many, the “aha!” moment comes when they become involved in the data collection process themselves. When people have an opportunity to observe and interview visitors, they start to appreciate where evaluation findings come from, and are subsequently more interested in the results. Several delegates described Damascene conversions of reluctant curators once they had participated in an evaluation. But others expressed reservations about this approach – does it give colleagues an oversimplified view of evaluation? Does it create the impression that “anyone can do evaluation”, thereby undermining our skills, knowledge and expertise? And what about the impact on other functions of the museum: if curators, designers and others are spending time doing evaluation, what parts of their usual work will need to be sacrificed?

A counter to these reservations is that visitors are arguably the common denominator of *all* activities that take place in informal learning institutions, even if this isn’t obvious on a day-to-day basis in many roles. Participating in data collection acts as a reminder of this. Also, at its best, evaluation helps foster a more reflective practice generally. Nonetheless, the concerns are valid.

Capacity building across the Field:

I found this part of the discussion harder to engage with, as it was (understandably) focused on the US experience and was difficult to extrapolate to the Australian context due to massive differences in scale. One obvious difference is the impact the National Science Foundation has had on the American museum landscape. NSF is a major funder of the production and evaluation of informal science learning[2]. NSF-supported websites like informalscience.org host literally hundreds of evaluation reports (which actually extend beyond the “science” remit the site’s name implies – it’s a resource worth checking out).

There are a considerable number of science centres and science museums across the US, and because of these institutions’ history of prototyping interactive exhibits, they tend to have a larger focus on evaluation and visitor research than (say) history museums. Indeed, most of the delegates at VSA seemed to represent science centres, zoos and aquariums, or were consultant evaluators for whom such institutions are the principal clients. There was also a reasonable art museum presence, and while there were a few representatives of historical sites, on the whole I got the impression that history museums were under-represented.

In any case, I came away with the impression that exhibition evaluation is more entrenched in museological practice in the US than it is here in Australia. It seems that front-end and formative research is commonly done as part of the exhibition development process, and conducting or commissioning summative evaluations of exhibitions is routine. In contrast, beyond a handful of larger institutions, I don’t see a huge amount of evidence that exhibition evaluation is routinely happening in Australia. Perhaps this is just the availability heuristic at play – the US is much bigger, so it’s easier to bring specific examples to mind. Or it could be that evaluation is happening in Australian museums, but as an internal process that is not being shared? Or something else?

 

[1] A lesson from this is that evaluation reports may read too much like troubleshooting documents and not give enough attention to what *is* working well.

[2] The Wellcome Trust plays a similar role in the UK, but as far as I’m aware there is nothing comparable (at least in scale) in Australia.