Before and After: Ediacaran Fossils

The SA Museum has recently opened its refurbished Ediacaran Fossils gallery, a small permanent exhibition showing the fossilised remnants of some of the earliest multicellular animals on Earth.

I did a few accompanied visits in this gallery during the first phase of my PhD research. In this earlier iteration, the dominant colour scheme was a strong red, presumably intended to evoke the red earth of the Flinders Ranges, the outback location where the Ediacaran fossils were discovered. That’s how my participants tended to see it:

“in retrospect that red colour kind of seems to connect to the area itself of the Flinders . . .”

“Er the fossil room was very red. Was very red. But then again so’s the area where they all came from”

A view of the original Ediacaran Fossils gallery. The mural at the back is a large photograph of Wilpena Pound (a well-known site in the Flinders Ranges). The vertical display in the foreground is a section of what was once sea bed – about 600 million years ago.
A view along the back wall of the original Ediacaran Fossils gallery.

In my study, participants had different opinions on the red colour:

“I think it’s good that it’s a really strong colour because it’s very vibrant and it and it um, it makes it a really warm rich colour, and then the sense maybe that you’re actually on a cliff wall, that is like a cliff wall of where you might find things or . . .”

“. . . you sort of wonder whether it would be better off with a neutral, with neutral walls, to draw more attention to the exhibits . . . .I mean to have a red fossil wall that looks great, but then to have it in a room, I think that room was red, it sort of detracts from it a bit.”

The refurbished gallery has retained the same basic layout, but has changed the colour palette to a deep green-blue:

The refurbished fossils gallery. The Wilpena Pound image is still there, but to me felt somehow less dominant now it’s in a mostly green backdrop rather than surrounded by red.

I believe the rationale[1] behind the colour change was to be more evocative of the environment as it was when these creatures were alive (i.e. the sea bed), rather than the outback setting the area is today. This sense of being “under the sea” is enhanced by the line drawings of Dickinsonia et al. up at high level. It also seems to increase the sense of height in the space.

The back wall in the refurbished gallery

I don’t know if it is the increased sense of height or that the back wall has been smoothed out and simplified a little, but it somehow seems more spacious in this new gallery (at least to me). It could also be that the size of the gallery, while not changing physically, has been enlarged conceptually by making what previously felt like a hallway become part of the exhibition proper.

Unfortunately I don’t have a shot of the original gallery from this angle, but you can see where the lift comes out (silver doors) and the doorway to the stairs is at the far left. In the old gallery, the bit between the pylon and the lift/stairs felt more like a corridor as there was a window in the far corner (now blocked off and turned into more display space). There were also some display plinths around this area that seemed to “block off” the corridor from the rest of the exhibition space.

So now, as soon as you come out of the lift/stairs, you feel like you’re in the gallery straight away rather than some ante-chamber or holding space. Blocking off the window has also dropped the light levels in this area, perhaps adding to that sense of “under the sea” immersion.

Overall I found this a calmer space to be in than the earlier iteration – they do say red is a highly arousing colour after all, and perhaps this colour scheme is a little gentler on the senses.

The new gallery has also made use of technology to help interpret the fossils, many of which can look like amorphous smudges to the untrained eye. iPad-based labels highlight the outline of the fossil imprints on the corresponding rock sections, making it easier to see what you’re looking at.

[1] Disclaimer – I had no involvement in the gallery refurbishment although I know the design team through being based at the SA Museum (also the senior designer, Brett Chandler, is a former colleague of mine and we’ve collaborated on exhibitions in the past). My commentary on the design is based on my own interpretations alone.

Evaluating Evaluation

What is evaluation? Who is it for? Why do it? What’s the difference between evaluation and research? Does it make as much difference as it purports to?

A recent report from the UK, Evaluating evaluation: Increasing the Impact of Summative Evaluation in Museums and Galleries by Maurice Davies and Christian Heath, makes for interesting reading in this area [1]. They observe that summative evaluation hasn’t informed practice as much as it might have, and look at some of the reasons why that might be the case. Their overall conclusion is one of “disappointment”:

Disappointment that all the energy and effort that has been put into summative evaluation appears to have had so little overall impact, and disappointment that so many evaluations say little that is useful, as opposed to merely interesting. . . With some notable exceptions, summative evaluation is not often taken seriously enough or well enough understood by museums, policy makers and funders. The visibility of summative evaluation is low. Too often, it is not used as an opportunity for reflection and learning but is seen as a necessary chore, part of accountability but marginal to the work of museums. (Davies and Heath 2013a, p.3)

I won’t go into their findings in detail as the full report is available online and I recommend you read the whole thing, or at least the executive summary (see references below). But I will tease out a couple of issues that are of particular relevance to me:

Conflicting and Competing Agendas

Davies and Heath describe scenarios that are all too familiar to me from my time working in exhibition development: exhibition teams being disbanded at the conclusion of a project with no opportunity for reflection; summative reports not being shared with all team members (particularly designers and other outside consultants); insufficient funds or practical difficulties in implementing recommended changes once an exhibition is open; evaluation results that are too exhibition-specific and idiosyncratic to be readily applied to future exhibition projects.

They also give an insightful analysis of how the multiple potential purposes of evaluation can interfere with one another. They provide a convincing argument for separating out different kinds of evaluation recommendations or at least being more explicit about what purpose a given evaluation is meant to serve:

  1. Project-specific reflection: evaluation as a way of reflecting on a particular project and as an opportunity for the learning and development of exhibition team members
  2. Generalisable findings: the capacity of evaluation results to build the overall knowledge base of the sector
  3. Monitoring and accountability: evaluation reports are usually an important aspect of reporting to a project funder or the institution as a whole
  4. Advocacy and impact: using evaluation results to create an evidence base for the value of museums for potential funders and society at large

As we move down this list, the pressure on evaluation results to tell “good news” stories increases – evaluation is less a way of learning and improvement and more a platform to prove or demonstrate “success”. Museums may be reluctant to share critical self-appraisal for fear that exposing “failure” may make it more difficult to get support for future projects. Such findings may not be shared with other museums or even other departments within the museum – let alone potential funders or other stakeholders. Furthermore, generalisability is often limited by methodological inconsistencies between different institutions and the reporting requirements of different funding bodies.

Comparing Evaluation with Research

On the subject of methodology, I’ll make a couple more observations, in particular the difference between Evaluation and Research (at least in visitor studies). The two terms are often used interchangeably and the line is admittedly blurry, particularly since research and evaluation use essentially the same tools, approaches and methods.

The way I see it, visitor research seeks to understand “how things are”. It tries to advance knowledge and develop theory about what visitor experiences are and what they mean: to individuals, to institutions, to society at large. Visitor research is usually positioned within a broader academic discourse such as psychology or sociology. Research findings can be valid and useful even if they don’t directly lead to changes in practice [2].

In contrast, evaluation is more interested in “how things could be improved”. To quote Ben Gammon, who was one of my first mentors in this field:

Evaluation is not the same as academic research. Its purpose is not to increase the sum of human knowledge and understanding but rather to provide practical guidance. If at the end of an evaluation process nothing is changed there was no point in conducting the evaluation. This needs to be the guiding principle in the planning and execution of all evaluation projects. (quoted in Davies and Heath 2013a, p.14)

Evaluation is therefore more pragmatic and applied than visitor research. The validity of evaluation is less in its methodological rigour than the extent to which the results are useful and are used.

Notes

[1] At the outset of their research, Davies and Heath wrote an opinion piece for the Museums Journal outlining some of the issues they had identified with summative evaluation. I wrote a response to it at the time, which, interestingly, was itself cited in their report. Besides being somewhat startled (and delighted!) to see one of my blog posts cited in a more academic type of publication, it serves as an interesting example of how the lines are blurring between formal and informal academic writing and commentary.

[2] When I was doing data collection for my PhD, many people assumed the purpose of my research was to “make improvements” to the galleries I was studying. It’s a reasonable inference to make, and I do hope my results will eventually influence exhibition design. However, my PhD is research, not evaluation – and as such is more interested in understanding fundamental phenomena than in the particular galleries I happened to use in my study.

References:

Davies, M., & Heath, C. (2013a). Evaluating Evaluation: Increasing the Impact of Summative Evaluation in Museums and Galleries. Retrieved from http://visitors.org.uk/files/Evaluating Evaluation Maurice Davies.pdf

Davies, M., & Heath, C. (2013b). “Good” organisational reasons for “ineffectual” research: Evaluating summative evaluation of museums and galleries. Cultural Trends, (in press). doi:10.1080/09548963.2014.862002