Experience Design

Late last year there was an article on The Conversation about “Experience Design”. I found it interesting and tweeted a link to it; soon afterwards the author, Faye Miller, got in touch. One thing led to another, and culminated in me writing a piece with Toni Roberts for the inaugural XD: Experience Design magazine, which has just come out.

Our piece is on Interpretive Design, and we group our thoughts around the interlinking concepts of Think, Feel, Do. Toni and I have known each other for a few years and have both been working on PhDs on exhibition design – me from the visitor perspective, Toni from the perspective of the design process (her PhD is done; mine is in the final stages). Coincidentally, we had both independently come up with a Venn diagram comprising Thinking, Feeling and Acting – something we came to realise when I posted a link to this presentation I gave last November. We’d discussed that it would be good for us to flesh out the overlaps between our ideas in a publication of some sort, and when the opportunity to write for XD came about it seemed like the right place to do it.

XD is intended to bring together people and disciplines that don’t normally overlap: industry, academia, management; theory, practice and user groups/audiences. I encourage you to subscribe to the XD newsletter, or better yet pick up a copy!

Exit through the gift shop

These days it’s more or less a given that a museum will have a gift shop of some description. There’s a body of literature around museum retail (here is a good example). Museum shops vary greatly in quality and tone. Some clearly put a lot of effort into their retail offer, and the larger museums tend to have excellent shops that are great for souvenir shopping (you can even shop online). Others appear to be doing it as a tick-the-boxes exercise or as an afterthought.

Generally speaking, debates about the museum shop revolve around:

  • Location: should visitor flow be routed through the gift shop such that avoiding it is difficult, if not impossible?
  • Integration: research suggests that visitors see the shop as part of the museum experience as a whole, not as a separate entity. Should this be embraced to make retail a more holistic part of the visitor experience, and if so, how?
  • Merchandise: how closely should the items stocked represent the museum’s “brand” in terms of quality, content and provenance? It’s easy to stock piles of generic souvenir fodder, and it probably moves quickly. But does it enhance or detract from the rest of the museum experience?

However, the very idea that a museum should have a shop is seldom brought into question. That is, until a few days ago, when the 9/11 Museum opened (complete with shop) at Ground Zero in New York. The New York Post called it “absurd”, and families are reportedly infuriated by the “crass commercialism” such a shop embodies. Of course, the shop is not the only controversy surrounding the museum, and it’s not surprising that Ground Zero is such a contested site. But that’s a bigger subject; one for another day and another post.

I’m interested in exploring reactions to the shop in particular. The juxtaposition of a site of great and recent tragedy with a place you can pick up commemorative trinkets does trigger a bit of a visceral “yuck” factor. But then again, other sites with gift/souvenir shops include the USS Arizona Memorial at Pearl Harbor, Arlington National Cemetery, and the Holocaust Museum in DC (just to cite a few US examples). The main difference seems to be the recency of events being commemorated at Ground Zero (and as I’ve argued before, recent events can be ‘too hot to handle’).

The museum itself argues that merchandise has been “carefully selected”, and that proceeds help support this non-profit organisation (and presumably there’s market demand for these souvenirs and keepsakes). Others have said that this just underscores how commercialism has permeated every aspect of American society.

I’m still working through what I think of this, and trying not to reach judgement one way or another too quickly. As a foreigner, I’m aware that it’s not really for me to judge what is or isn’t an appropriate way for Americans to remember and commemorate their own heritage. I’d be interested to hear what others think.

UPDATE: This piece elegantly and powerfully describes the difficult, sometimes darkly comical experience of having a private tragedy turned into public memorial, complete with souvenirs. There are so many bits I could pull out and quote. Better yet just read it.

Acknowledgement: In case it’s not obvious, the title of this post is a reference to the 2010 Banksy movie of the same name.

On “challenging” your audience

. . .but we should be challenging visitors, not just giving them what they want. . .

Work in evaluation and visitor research for long enough and you’re bound to hear someone say this. And from the point of view of an evaluator, it’s frustrating for a few reasons:

  • It betrays an assumption that conducting evaluation somehow means you’re going to ‘dumb down’ or otherwise pander to the masses. Evaluation shouldn’t fundamentally alter your mission; it should just give you clues as to where your stepping-off point should be.
  • It can be used as an excuse for maintaining the status quo and not thinking critically about how well current practices are working for audiences. Are we genuinely challenging audiences. . . or just confusing them?
  • It tends to conflate knowledge with intelligence. If you (and many people you work with) are an expert on a given topic, it’s easy to overestimate how much “everybody” knows about that subject. If there is a big gap between how much you assume visitors know and what they actually know, no amount of intelligence on the visitors’ part will be able to bridge that gap.
  • A challenge is only a challenge when someone accepts it. In a free-choice setting like a museum, who is accepting the challenge and on whose terms? If the ‘challenge’ we set our audiences is rejected, does that leave us worse off than where we started?

This post on the Uncatalogued Museum neatly sums up how visitors can be up for a challenge – often more so than we think – if we find the right balance between meeting visitors where they are and extending them to new horizons. But finding this balance depends on actually getting out there and talking to people, not resting solely on assumptions and expert knowledge.

If the goal is genuinely to challenge visitors, then visitors need to be part of the conversation. If we’re not asking them, what are we afraid of?

 

What do museum visitors think ‘science’ is?

The word “science” has its roots in the Latin for ‘knowledge’, and historically it has been used to describe any systematic body of knowledge. In common parlance, however, it tends to pertain to a particular approach to studying physical and natural phenomena, based on testable hypotheses, systematic gathering of evidence and conducting experiments.

So what do visitors to Natural History museums think “science” is? How do these beliefs influence how relevant they see science to their everyday lives? Do they see the connection between science and the work that Natural History museums do?

Museum visitors agree: this is definitely a scientist.

These were the guiding questions for a qualitative study conducted by Jennifer DeWitt and Emma Pegram at the Natural History Museum in London, as reported in the most recent issue of Visitor Studies. They interviewed 20 family groups in different parts of the museum, asking them questions about what they found interesting in the museum, whether they thought the museum was a ‘sciencey’ place or not, and whether they participated in science activities in their daily lives.

Visitors were split as to whether they thought the museum staff they interacted with were ‘sciencey’ or not. Staff were considered ‘sciencey’ when they demonstrated subject-specific knowledge, but facilitating enquiry in others was not necessarily a ‘sciencey’ thing for staff to do (visitors drew a distinction between ‘science’ and ‘education’ in this sense). Families more commonly described the activities they took part in at the museum as ‘sciencey’ – hallmarks of ‘sciencey’ activities were the use of technical equipment such as microscopes, detailed observation and specialist terminology. However, there was also evidence that activities that were accessible or friendly were considered not ‘sciencey’ for that reason.

Are these people scientists? Natural history museum visitors are not sure.

When it came to the Museum itself, visitors were equivocal as to whether it was a ‘science place’, having different views regarding whether particular types of content, exhibits or activities constituted ‘science’. Again a perceived conflict between ‘science’ and ‘education’ came up. And interestingly, some visitors did not consider natural history to constitute science*.

Perceptions of whether the museum was a science place or not were informed by each family’s prior conceptions of science. While 19 of the 20 families had at least one member who claimed to be interested in science, only a minority of families considered themselves ‘sciencey’. Further probing revealed that families often did participate in science-related activities (e.g. rock collecting), but such activities did not fall within the relatively narrow conception of ‘science’ that most participants had. “Science” conjured up the notion of “facts” or expert knowledge that was not particularly accessible. It was more readily associated with the physical sciences and technology than with nature.

Admittedly this study is based on a small sample, but it points to some interesting preconceptions about what science is, as well as a potential disconnect between how Natural History museums see themselves, and how they are viewed by their audiences.


*The authors concede that in their particular case, being adjacent to the Science Museum may reinforce the perception of the Natural History Museum being something other than science.

Beyond “warm impulses”

I’ve been catching up on the Museopunks podcast series, and a section of March’s installment, the Economics of Free, particularly caught my attention. In an interview, director of the Dallas Museum of Art, Maxwell L. Anderson compares the data that shopping malls collect about their customers to the relative paucity of data that is collected about visitors to the typical art museum. I think it’s worth repeating (from about 18min into the podcast):

[Malls] know all this basic information about their visitors. Then you go to an art museum. What do we know? How many warm impulses cross a threshold? That’s what we count! And then we’re done! And we have no idea what people are doing, once they come inside, what they’re experiencing, what they’re learning, what they’re leaving with, who they are, where they live, what interests and motivates them . . . so apart from that we’re doing great, you know. We’re like that mall that has no idea of sales per square foot, sales per customer. . . so we’re really not doing anything in respect to knowing our visitors. And learning about our visitors seems to me the most basic thing we can do after hanging the art. You know, you hang the art, and then you open the doors and all we have been doing is “hey look there are more people in the doors”.  And the Art Newspaper dedicates an annual ‘statistical porn’ edition of how many bodies crossed thresholds. Nobody’s asking how important the shows were, or what scholarly advances were realised as a function of them, or what people learned, how they affected grades in school. Nobody knows any of that. Nobody knows who the visitors were. So I consider it a baseline. We’re just at the primordial ooze of starting to understand what museums should be doing with this other part of our mission which is not the collection but the public.

I’d argue that we’re a little bit beyond the ‘primordial ooze’ stage of understanding*, although Anderson’s right in that many museums don’t go much beyond counting ‘warm impulses’ (those infra-red people counters). He goes on to describe how the DMA’s Friends program is giving the museum more data about what their visitors do while inside the museum, and how this can inform their engagement strategies (22:45):

This is just another form of research, you know . . . we do research on our collections without blinking an eye, we think nothing of it. We spend copious amounts of time sending curators overseas to look at archives to study works of art but we’ve never studied our visitors. The only time museums typically study their visitors is when they have a big show, and they’re outperforming their last three years, everybody’s excited, and there’s a fever, and you measure that moment, which is measuring a fever. The fever subsides, the data’s no longer relevant but that’s what you hold on to and point to as economic impact. And largely, it’s an illusion.

I find it interesting that Anderson puts visitor research on a par with collection-based research. Often, I get the sense that collection research is seen as ‘core’ museological business, but visitor research is only a ‘nice to have’ if there is the budget. But perhaps this is a sign of shifting priorities?

 

*Historically, most visitor experience research has taken place in science centres, children’s museums, zoos and aquariums rather than museums of fine art. Although there are of course exceptions.

IPOP Model of Visitor Preference

Most typologies of museum visitors tend to categorise visitors by demographics, motivation, or a mixture of both. The IPOP model, developed by Andrew Pekarik and colleagues at the Smithsonian Institution (Pekarik et al, 2014), is a little different in that it categorises visitors according to their preferred interests. Developed through years of research with visitors across the Smithsonian sites, the IPOP model is based on four key experience preferences:

  • Ideas – a liking for abstract concepts and facts
  • People – attraction to stories, emotional connections and social interaction
  • Objects – appreciation for objects, aesthetics and craftsmanship
  • Physical – attraction to sensory experiences, movement and physicality (this P was a later addition to the model as it evolved).

These are indicative of overall preferences rather than absolute, mutually exclusive categories. Scores are based on responses to a self-administered questionnaire, with visitors rating their agreement with statements such as: I like to know how things are made, or I like to bring people together. The full version comprises 38 items, with shorter 20- and 8-item versions also used. Using responses to these statements, 79% of visitors show a clear preference for one of the IPOP dimensions: 18% Idea, 18% People, 19% Object, 23% Physical. The remaining 21% tend to show a combination of two dimensions (rarely three) rather than a single clear preference*.
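To make the idea of “clear” versus “mixed” preference concrete, here is a minimal sketch of how a classification like this might work. To be clear, this is not the published Smithsonian scoring method: the item-to-dimension mapping, the averaging, and the “clear preference” margin are all assumptions invented for illustration.

```python
# Illustrative sketch only (not the IPOP authors' actual scoring procedure):
# classify a visitor's dominant preference from Likert-style item responses.

from statistics import mean

def ipop_preference(responses, margin=0.5):
    """responses: dict mapping dimension ('I', 'P', 'O', 'Ph') to a list of
    agreement ratings (e.g. 1-7). Returns the dominant dimension, or a tuple
    of the top two when no single dimension leads by at least `margin`."""
    scores = {dim: mean(items) for dim, items in responses.items()}
    ranked = sorted(scores, key=scores.get, reverse=True)
    top, second = ranked[0], ranked[1]
    if scores[top] - scores[second] >= margin:
        return top                # clear single preference
    return (top, second)          # mixed preference across two dimensions

# Hypothetical visitor with three items per dimension:
visitor = {
    "I":  [6, 5, 6],   # Ideas items
    "P":  [3, 2, 4],   # People items
    "O":  [4, 4, 5],   # Objects items
    "Ph": [6, 7, 6],   # Physical items
}
print(ipop_preference(visitor))  # → Ph
```

Under this toy scheme, a visitor whose top two dimension scores sit within the margin of each other would fall into the “combination of two dimensions” group described above.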

By combining self-report IPOP preferences with tracking and timing data, Pekarik and his team have shown that it is possible to predict what exhibits a given visitor will attend to (or indeed, which exhibits they will avoid) based on their IPOP preference. People tend to seek out experiences that suit their preferences and match their expectations. When people see what they expect, they report being satisfied with their experience. However, sometimes visitors are engaged by something unexpected and different from their usual preferences. This phenomenon, described by the authors as “flipping”, can lead to more memorable and meaningful experiences.

The exhibition Pekarik et al (2014) use to illustrate the predictive value of IPOP is Against All Odds, an exhibition at the National Museum of Natural History about the rescue of trapped Chilean miners in 2010. I happened to see this exhibition on my 2012 study tour of Washington DC, and while I recall seeing it, I don’t have any specific memories of it (a consequence of breezing through dozens of exhibitions for days on end). Although I was amused to observe that the two photos I took of the exhibition are very similar to those in the Curator article!

SI-NMNH Chilean miners exhibit
My photo of the entrance / introductory graphic
SI-NMNH Chilean miners exhibit 2
The rescue capsule. The image in the Curator article takes a wider view which encompasses a tactile drill bit on the left and a video on the right. The rescue capsule was the largest and most distinctive object in the display. Whether that is why I photographed it as a way of recording the exhibition, or whether this says something about my IPOP preference I’m not sure.

I’m not sure what to make of this. Either I intuitively grasped which views best encapsulated the exhibition, or I have the same IPOP preference as the person who selected the images. . .

UPDATE 2/5/2014: I’ve just found out that the Pekarik et al article is available online for free. Happy reading!

*Interestingly, the research team categorised themselves according to the IPOP typology and found they had preferences in three of the four dimensions (none of the team was a People person). It strikes me as an interesting exercise for exhibition development teams to conduct at the outset of the project, giving individuals an insight into their own preferences as well as an appreciation of those of their differently-preferenced colleagues – there is more on this point in Pekarik and Mogel (2010).

References

Pekarik, A., & Mogel, B. (2010). Ideas, Objects, or People? A Smithsonian Exhibition Team Views Visitors Anew. Curator: The Museum Journal, 53(4), 465–482. doi:10.1111/j.2151-6952.2010.00047.x

Pekarik, A., Schreiber, J. B., Hanemann, N., Richmond, K., & Mogel, B. (2014). IPOP: A Theory of Experience Preference. Curator: The Museum Journal, 57(1), 5–27. doi:10.1111/cura.12048

PhD – three years down . . .

Although this blog has only made passing reference to my PhD journey on a personal level, now that I’m three years in it’s interesting to look back at those yearly updates/reflections and see how my thinking and outlook have changed.

One year in and I was filled with optimism and a sense of achievement about my first milestone. Another year on and that milestone felt like a long way in the past. Self-doubt was creeping in and it felt like any tangible progress was painfully slow. I feared falling behind and not getting any worthwhile results. Fast forward another 12 months and I’ve passed the three year mark (in terms of calendar time at least – “officially” the three-year clock doesn’t run out until mid-May due to a couple of candidature breaks) – it’s the home stretch, the finish line is in sight!

Although there is still a lot of work to go, I’ve pulled together about 90% of a full first draft of the thesis. There’s a sense of accomplishment in seeing some 75,000 words* all together in one document. Moreover, they are words that I think tell a story and seem to reach some meaningful conclusions. Recently, when one of my supervisors asked me what my research had found, I was able to give a (fairly) straight and succinct answer. I can look back at what I set out to do at the beginning of my PhD and see I’ve managed to find at least some answer to all the research questions I had at the outset.

Everyone’s PhD journey is different, but for me it felt like I turned a corner once I’d finished my data collection in about June last year. My worries about not asking the right questions were replaced by pragmatism: my data set was what it was, and I had to make the best of it come what may. My confidence and competence in data analysis grew as interesting results started emerging. Diving into the numbers of my quantitative data set satisfied my inner nerd.

Looking back, I think I underestimated what an emotionally draining process data collection can be. All in all I approached some 1200 visitors – roughly half of whom agreed to participate in my research – and discreetly tracked over 200 more. It takes a lot of concentration, an upbeat manner and an acceptance of rejection! In my own case, data collection coincided with a time when I’d spread myself a little thinly due to volunteering, as well as a difficult period in my private life, both of which probably magnified the sense of being emotionally spent. But I’d wager it’s a draining process at the best of times.

Now that I’ve conducted a piece of my own research, I feel more able to critically evaluate the research of others. It’s made me a better reader of the literature. I found it useful going through the peer review process for my first academic publication – the reviewer comments helped sharpen my arguments. And although it’s hard to measure this about yourself, I think the overall quality of my thinking has improved.

Where from here?

Although the finish line is in sight, it’s a fair way off on the horizon. Once I have pulled together a first full draft, it will be a chance for me (and my supervisors) to see how everything hangs together, identify the weaknesses and plug any holes. I don’t want to underestimate the size of that task, but at the moment it feels achievable. There are probably another 2-3 publications that can come out of my research, although for the time being I’m concentrating on the thesis. Some of the results I’ll be presenting at the Visitor Studies Association conference in Albuquerque this July, which is an extension of what I presented at the Visitor Research Forum at UQ last month. So gradually I’m putting the results “out there”; I just don’t want to pre-empt too much of that on this blog.

But stay tuned . . .

*That count includes absolutely *everything* – figures, tables, captions, footnotes, references, appendices. The word limit for PhD theses at UQ is 80,000 words including everything except references.

Get in line!

I have a confession to make – I’m incredibly impatient at times.

I’m often baffled at the number of people who appear willing to join snaking queues that have no apparent motion. It seems there are a lot of people far more willing than me to wait indeterminate periods to have that unique experience or get that special bargain. I’m likely to take one look at the size of the line and decide it just can’t be worth it.

The start of the line of people waiting for Free Fridays at MOMA, New York August 2013. By this stage the queue had stretched around the block, with still about 2 hours to go before opening. People at the front of the line told me they had been there for most of the afternoon.

Perhaps it comes from growing up in a relatively small city – you grow up accustomed to going about your daily business with few problems associated with crowds. In cities like New York, queuing is a fact of life and several times we found ourselves in long lines waiting to get into museums or art installations while visiting there last year. But when we could avoid it, either by planning ahead or purchasing premium-rate tickets, we jumped at the chance. But then again, not everyone is willing or able to spend the extra cash needed to jump the line.

Another factor in my queue-aversion could be my size. At a mere 155cm (5’1″) in height, I’m quickly lost in a crowd. And it’s easy to lose a sense of control over your environment when all you can see up ahead is a sea of backs. And that sense of autonomy and control is what environmental psychology tells us we need in order to feel comfortable in a given setting.

However, one thing that visitor experience research has taught me is that you should never just take what you think about a given scenario and extrapolate from there, assuming everyone else thinks the same. While I’m prepared to wager there are few people who actually enjoy spending hours on end waiting in line, there are clearly many people doing the same cost-benefit calculation as I am and coming up with a different result.

While it’s not something I relate to personally, I can see how the line could be considered part of the experience itself, the journey being just as important as the destination. A queue could help build anticipation about an event, even add to the “buzz” – if the line’s so long, it must be good! A queue can be a sign of success for event planners.

There could be a fantastic experience ahead, then again it could just be the line for the portaloos.

As far as queues go, there is such a thing as “good” and “bad” ones. A “good” queue has a clear beginning and end. If it’s a long queue there are enough barriers to keep it orderly and good signage to direct people appropriately. The people in charge are organised and look like they know what they’re doing. The queue moves: if not quickly, then at least at a predictable rate. A timed ticket gets you in at the time it says it will.

A “bad” queue appears chaotic – it’s not clear where it starts or ends, if there are multiple queues it’s not obvious which one you’re meant to be in, and the chaos seems to let people ‘jump ahead’ of those who have been waiting patiently. No-one seems to know what’s going on and the staff look underprepared and overwhelmed.

Giving estimated wait times reduces uncertainty associated with queuing and allows visitors to make an informed judgement about how willing they are to wait that length of time.

A queue will always be associated with some uncertainty: how long will I have to wait? Am I in the right place? Will it be worth the wait? People differ in their tolerance of uncertainty, and it may be lessened if they are uncomfortable with crowds in the first place. But there are ways of reducing uncertainty (e.g. signposting queues with waiting times, offering timed tickets) so that even those who are usually disinclined to wait will be happy to be (at least a little) patient.
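One practical way to provide those signposted waiting times is to estimate them from the queue’s current length and its observed admission rate – an application of Little’s law (W = L / λ). The sketch below is mine, not from the post, and the numbers are purely illustrative.

```python
# Rough sketch: estimating expected wait from queue length and admission rate,
# via Little's law (W = L / lambda). Assumes a roughly steady admission rate.

def estimated_wait_minutes(people_in_line, admissions_per_minute):
    """Expected wait in minutes for the person joining the back of the queue."""
    if admissions_per_minute <= 0:
        return float("inf")  # the line isn't moving: wait is unbounded
    return people_in_line / admissions_per_minute

# e.g. 240 people in line, venue admitting roughly 8 people per minute
print(estimated_wait_minutes(240, 8))  # → 30.0
```

Even a rough figure like this, displayed at intervals along the line, reduces the uncertainty described above and lets visitors make an informed stay-or-go decision.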

Before and After: Ediacaran Fossils

The SA Museum has recently opened its refurbished Ediacaran Fossils gallery, a small permanent exhibition showing the fossilised remnants of some of the earliest multicellular animals on Earth.

I did a few accompanied visits in this gallery during the first phase of my PhD research. In this earlier iteration, the dominant colour scheme was a strong red, presumably intended to evoke the red earth of the Flinders Ranges, the outback location where the Ediacaran fossils were discovered. That’s how my participants tended to see it:

“in retrospect that red colour kind of seems to connect to the area itself of the Flinders.  . .”

“Er the fossil room was very red. Was very red. But then again so’s the area where they all came from”

A view of the original Ediacaran Fossils Gallery. The mural at the back is a large photograph of Wilpena Pound (a well-known site in the Flinders Ranges). The vertical display in the foreground is a section of what was once sea bed – about 600 million years ago.
A view along the back wall of the original Ediacaran Fossils gallery.

In my study, participants had different opinions on the red colour:

“I think it’s good that it’s a really strong colour because it’s very vibrant and it and it um, it makes it a really warm rich colour, and then the sense maybe that you’re actually on a cliff wall, that is like a cliff wall of where you might find things or . . .”

“. . . you sort of wonder whether it would be better off with a neutral, with neutral walls, to draw more attention to the exhibits . . . .I mean to have a red fossil wall that looks great, but then to have it in a room, I think that room was red, it sort of detracts from it a bit.”

The refurbished gallery has retained the same basic layout, but has changed the colour palette to a deep green-blue:

The refurbished fossils gallery. The Wilpena Pound image is still there, but to me felt somehow less dominant now it’s in a mostly green backdrop rather than surrounded by red.

I believe the rationale[1] behind the colour change was to be more evocative of what the environment would have been like when these creatures were alive (i.e. the sea bed) rather than the outback setting the area is now. This sense of being “under the sea” is enhanced by the line drawings of Dickinsonia et al. high up on the walls. It also seems to increase the sense of height in the space.

The back wall in the refurbished gallery

I don’t know if it is the increased sense of height or that the back wall has been smoothed out and simplified a little, but it somehow seems more spacious in this new gallery (at least to me). It could also be that the size of the gallery, while not changing physically, has been enlarged conceptually by making what previously felt like a hallway become part of the exhibition proper.

Unfortunately I don’t have a shot of the original gallery from this angle, but you can see where the lift comes out (silver doors) and the doorway to the stairs is at the far left. In the old gallery, the bit between the pylon and the lift/stairs felt more like a corridor as there was a window in the far corner (now blocked off and turned into more display space). There were also some display plinths around this area that seemed to “block off” the corridor from the rest of the exhibition space.

So now, as soon as you come out of the lift/stairs, you feel like you’re in the gallery straight away rather than some ante-chamber or holding space. Blocking off the window has also dropped the light levels in this area, perhaps adding to that sense of “under the sea” immersion.

Overall I found this a calmer space to be in than the earlier iteration – they do say red is a highly arousing colour after all, and perhaps this colour scheme is a little gentler on the senses.

The new gallery has also made use of technology to help interpret the fossils, many of which can look like amorphous smudges to the untrained eye. iPad-based labels highlight the outline of the fossil imprints on the corresponding rock sections, making it easier to see what you’re looking at.

[1] Disclaimer – I had no involvement in the gallery refurbishment although I know the design team through being based at the SA Museum (also the senior designer, Brett Chandler, is a former colleague of mine and we’ve collaborated on exhibitions in the past). My commentary on the design is based on my own interpretations alone.

Evaluating Evaluation

What is evaluation? Who is it for? Why do it? What’s the difference between evaluation and research? Does it make as much difference as it purports to?

A recent report from the UK, Evaluating evaluation: Increasing the Impact of Summative Evaluation in Museums and Galleries by Maurice Davies and Christian Heath, makes for interesting reading in this area [1]. They observe that summative evaluation hasn’t informed practice as much as it might have, and look at some of the reasons why that might be the case. Their overall conclusion is one of “disappointment”:

Disappointment that all the energy and effort that has been put into summative evaluation appears to have had so little overall impact, and disappointment that so many evaluations say little that is useful, as opposed to merely interesting. . . With some notable exceptions, summative evaluation is not often taken seriously enough or well enough understood by museums, policy makers and funders. The visibility of summative evaluation is low. Too often, it is not used as an opportunity for reflection and learning but is seen as a necessary chore, part of accountability but marginal to the work of museums. (Davies and Heath 2013a, p.3)

I won’t go into their findings in detail as the full report is available online and I recommend you read the whole thing, or at least the executive summary (see references below). But I will tease out a couple of issues that are of particular relevance to me:

Conflicting and Competing Agendas

Davies and Heath describe scenarios that are all too familiar to me from my time working in exhibition development: exhibition teams being disbanded at the conclusion of a project with no opportunity for reflection; summative reports not being shared with all team members (particularly designers and other outside consultants); insufficient funds or practical difficulties in implementing recommended changes once an exhibition is open; evaluation results that are too exhibition-specific and idiosyncratic to be readily applied to future exhibition projects.

They also offer an insightful analysis of how the multiple potential purposes of evaluation can interfere with one another, and argue convincingly for separating out different kinds of evaluation recommendations, or at least being more explicit about what purpose a given evaluation is meant to serve:

  1. Project-specific reflection: evaluation as a way of reflecting on a particular project and as an opportunity for the learning and development of exhibition team members
  2. Generalisable findings: the capacity of evaluation results to build the overall knowledge base of the sector
  3. Monitoring and accountability: evaluation reports are usually an important aspect of reporting to a project funder or the institution as a whole
  4. Advocacy and impact: using evaluation results to create an evidence base for the value of museums for potential funders and society at large

As we move down this list, the pressure on evaluation results to tell “good news” stories increases – evaluation becomes less a way of learning and improvement and more a platform to prove or demonstrate “success”. Museums may be reluctant to share critical self-appraisal for fear that exposing “failure” may make it more difficult to get support for future projects. Such findings may not be shared with other museums or even other departments within the museum – let alone potential funders or other stakeholders. Furthermore, generalisability is often limited by methodological inconsistencies between different institutions and the reporting requirements of different funding bodies.

Comparing Evaluation with Research

On the subject of methodology, I’ll make a couple more observations, in particular on the difference between Evaluation and Research (at least in visitor studies). The two terms are often used interchangeably and the line is admittedly blurry, particularly since research and evaluation use essentially the same tools, approaches and methods.

The way I see it, visitor research seeks to understand “how things are”. It tries to advance knowledge and develop theory about what visitor experiences are and what they mean: to individuals, to institutions, to society at large. Visitor research is usually positioned within a broader academic discourse such as psychology or sociology. Research findings can be valid and useful even if they don’t directly lead to changes in practice [2].

In contrast, evaluation is more interested in “how things could be improved”. To quote Ben Gammon, who was one of my first mentors in this field:

Evaluation is not the same as academic research. Its purpose is not to increase the sum of human knowledge and understanding but rather to provide practical guidance. If at the end of an evaluation process nothing is changed there was no point in conducting the evaluation. This needs to be the guiding principle in the planning and execution of all evaluation projects. (quoted in Davies and Heath 2013a, p.14)

Evaluation is therefore more pragmatic and applied than visitor research. The validity of evaluation lies less in its methodological rigour than in the extent to which the results are useful and are used.

Notes

[1] At the outset of their research, Davies and Heath wrote an opinion piece for the Museums Journal outlining some of the issues they had identified with summative evaluation. I wrote a response to it at the time, which, interestingly, was itself cited in their report. Besides being somewhat startled (and delighted!) to see one of my blog posts cited in a more academic type of publication, it serves as an interesting example of how the lines are blurring between formal and informal academic writing and commentary.

[2] When I was doing data collection for my PhD, many people assumed the purpose of my research was to “make improvements” to the galleries I was studying. It’s a reasonable inference to make, and I do hope my results will eventually influence exhibition design. However, my PhD is research, not evaluation – and as such is more interested in understanding fundamental phenomena than in the particular galleries I happened to use in my study.

References:

Davies, M., & Heath, C. (2013a). Evaluating Evaluation: Increasing the Impact of Summative Evaluation in Museums and Galleries. Retrieved from http://visitors.org.uk/files/Evaluating Evaluation Maurice Davies.pdf

Davies, M., & Heath, C. (2013b). “Good” organisational reasons for “ineffectual” research: Evaluating summative evaluation of museums and galleries. Cultural Trends (in press). doi:10.1080/09548963.2014.862002