Evaluation: it’s a culture, not a report

The UK Museums Journal website recently published the opinion piece Why evaluation doesn’t measure up by Christian Heath and Maurice Davies, who are currently conducting a meta-analysis of evaluation in the UK.

(Image caption: Is this the fate of many carefully prepared evaluation reports?)

The piece posits that: “[n]o one seems to have done the sums, but UK museums probably spend millions on evaluation each year. Given that, it’s disappointing how little impact evaluation appears to have, even within the institution that commissioned it.”

If this is the case, I’d argue it’s because evaluation is being done as part of reporting requirements and is being ringfenced as such. Essentially, the evaluation report is prepared to tick somebody else’s boxes – usually a funder’s – and the opportunity to use it to reflect upon and learn from experience is lost. Instead, it gets quietly filed with all the other reports, never to be seen again.

So even when evaluation is being conducted (something that cannot be taken as a given in the first place), there are structural barriers that prevent evaluation findings from filtering through the institution’s operations. One of these is that exhibition and program teams are brought together with the opening date in mind, and often disperse once the ribbon is cut (their point about external consultants rarely seeing summative reports resonated with my own experience as a former exhibition design consultant). Also, if the evaluation report is produced for the funder and not the institution, there is a strong tendency to promote ‘success’ and gloss over anything that didn’t quite go to plan. After all, we’ve got the next grant round to think of and we want to present ourselves in the best possible light, right?

In short, Heath and Davies describe a situation where evaluation has become all about producing the report so we can call the job done and finish off our grant acquittal forms. And the report is all about marching to someone else’s tune. We may be doing evaluation, but is it part of our culture as an organisation?

It might even be the case that funder-instigated evaluation is having a perverse effect on promoting an evaluation culture. After all, it is set up to answer someone else’s questions, not our own. As a result, the findings may not be as useful for improving future practice as they could be. So evaluation after evaluation goes nowhere, making people wonder why we’re bothering at all. Evaluation becomes a chore, not a key aspect of what we do.

NB: This piece was originally written for the EVRNN blog, the blog of the Evaluation and Visitor Research National Network of Museums Australia.
