The What, When and How of Participant Incentives

[Note: This is a modified version of an article that first appeared in Museums Australia’s Evaluation and Visitor Research Network’s Spring 2014 newsletter]


We’ve all seen it; we’ve all done it: Complete our survey and enter the draw to win! Agree to be interviewed and get a free pen! Researchers call these “participant incentives”, which generally speaking are defined as “benefit[s] offered to encourage a person to participate in a research program”.[1] Offering incentives is considered to be good practice in evaluation and visitor research. Visitors agree to give us time out of their visit for the benefit of our research, and it behoves us to value this time and use it ethically[2]. If we consider research as a social contract, incentives are a gesture of reciprocity, acknowledging the value of visitors’ time.

But what kind of incentive is appropriate for a given piece of research? What’s feasible? What’s ethical? What might be some unintended consequences? This article will explore some of the issues surrounding participant incentives.

The Bigger Picture

To understand the role of participant incentives, we first need to consider why people respond to surveys in the first place. There seem to be three main kinds of reasons: altruistic (people who want to help, or who see responding as a civic duty); egotistic (having a specific stake in the results, or simply enjoying doing surveys); and study-specific (interest in the topic or organisation)[3]. Incentives strengthen the “egotistic” reasons for completing a survey. But appealing to respondents’ altruism can also increase response rates, as can the fact that many visitors hold museums in positions of high trust and regard.

Particularly for online surveys, incentives have been shown to increase the response rate, although this also depends on the length of the survey, who you’re trying to target and whether they have a stake in the research outcome[4]. As a general rule of thumb, you should state up-front how long any survey is going to take, and offer an incentive that reflects the time commitment you are requesting. For online surveys, anything taking longer than 20 minutes to complete counts as a “long” survey that warrants an incentive. One of the most popular incentives is to give participants the opportunity to enter a prize draw for something of considerable value (e.g. gift certificates valued at $100 or more, a tablet computer or similar items).

However, a higher response rate isn’t necessarily the ideal – irrespective of the response rate, your survey strategy should aim to minimise systematic differences between people who do respond and those who do not (nonresponse bias). This is distinct from overall response quality, which does not appear to be affected by incentives[5]. Nonetheless, if there is a particular target audience of interest (e.g. teachers, visitors who have participated in a particular programme, visitors from a particular cultural or ethnic group, etc.), you may need to consider ways to increase the response rate among those people in particular.
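To make the nonresponse bias idea concrete, here’s a minimal sketch (in Python, with entirely invented figures and visitor categories) of how you might compare who actually responded against a known visitor profile, such as ticketing data:

```python
# Hypothetical sketch: comparing the make-up of survey respondents against a
# known overall visitor profile (e.g. from ticketing data) to flag possible
# nonresponse bias. All figures and category names here are invented.

visitor_profile = {"local": 0.55, "interstate": 0.25, "international": 0.20}
respondents = {"local": 180, "interstate": 45, "international": 25}

total = sum(respondents.values())
for group, expected in visitor_profile.items():
    observed = respondents[group] / total
    # Flag any group whose share of respondents is more than 5 points
    # away from its share of overall visitation.
    flag = "  <-- over/under-represented" if abs(observed - expected) > 0.05 else ""
    print(f"{group:13s} expected {expected:.0%}, observed {observed:.0%}{flag}")
```

If a group turns out to be substantially under-represented, that’s your cue to consider targeted recruitment (or weighting) for that group.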

Compared to the use of incentives in telephone and online surveys, there is very little published research about the practicalities of conducting onsite visitor interviews in museums and similar sites. Rather, examples of practice are shared through informal networks (more on this later).

Ethical Guidelines

Neither the Australian Market & Social Research Society (AMSRS) Code of Professional Conduct[6] nor the Australasian Evaluation Society’s Guidelines for the Ethical Conduct of Evaluations[7] specifically mentions participant incentives; however, both outline important principles with which any choice of incentive should comply. In particular, the AMSRS code specifies that there must be a clear delineation between market research and “non-research activities” such as promotions or the compilation of databases for marketing purposes. This may have implications for what you can use as incentives, as well as how you use any contact details you collect for the purposes of prize draws.

Care should be taken to ensure that incentives cannot be interpreted as coercion, particularly if the incentive is large enough to cause certain participants (e.g. at-risk groups) to reluctantly participate in order to receive the incentive. In any case, it has been suggested that it may be better to increase intrinsic motivations rather than rely solely on monetary incentives[8].

Is it an Incentive, a Thank You, or Compensation?

The principle that monetary incentives should only be used as a last resort may appear at odds with the idea that visitors’ time is valuable and should be acknowledged as such. However, it’s largely to do with the way incentives are framed: an incentive can be considered an inducement to participate, but it can also be presented as a “thank you gift” that you give to visitors as a token of your appreciation. In this sense, the timing of the incentive may come into play. Giving an incentive in advance may increase participation and there is no evidence that it raises a sense of obligation among potential participants[9].

There is another type of payment that we should briefly mention here: compensation. This is particularly relevant where participation incurs direct costs (e.g. travel to a focus group session). Any such costs incurred by participants should always be compensated.

Some Examples

In September 2014, there was a discussion on the Visitor Studies Association (VSA) listserv about the incentives that different institutions give to visitors who participate in short (5-10 minute) onsite surveys. Among this community of practice, the respective merits and drawbacks of different approaches were discussed[10]. The key points are summarised below:

Vouchers for in-visit added extras (e.g. simulator rides, temporary exhibitions)
  • Features: Adds value to visitors’ experience with little or no direct cost to the museum.
  • Drawbacks / considerations: May lead to unanticipated spikes in demand for additional experiences – e.g. can the simulator accommodate everyone who’s given a voucher?

Small gifts (e.g. pens/pencils, stickers, temporary tattoos, bookmarks, postcards, key-rings)
  • Features: Tangible and popular gifts, especially for children. If you’re surveying adults in a family group, giving children a few items to choose from can keep them usefully occupied while the adults respond to the survey. Cheap if purchased in bulk.
  • Drawbacks / considerations: The gift needs to match the target audience of the survey (e.g. temporary tattoos are great for kids, less so for adult respondents). Children may end up using stickers to decorate your exhibits!

Food / coffee / ice cream vouchers
  • Features: Generally popular and well received.
  • Drawbacks / considerations: Can create a rush in the café if you’re doing large numbers of surveys. May be limited by the contract arrangements in place with caterers.

Prize draws
  • Features: Popular with visitors and practical to implement with online surveys. The cost of a single big-ticket prize may work out cheaper than hundreds of small giveaways.
  • Drawbacks / considerations: Visitor contact details must be recorded for the draw, and must be able to be separated from the survey responses to maintain anonymity. Be aware that offering a free membership as a prize may reduce membership take-up during the survey period[11].

Gift certificates
  • Features: Suitable for longer surveys or detailed interviews that involve a longer time commitment and therefore warrant a higher-value incentive.
  • Drawbacks / considerations: Gift certificates may be treated as equivalent to cash from a tax perspective.

Free return tickets
  • Features: No direct costs. Tickets can be given away to friends and family if participants can’t re-visit.
  • Drawbacks / considerations: Not relevant to free-entry institutions. Could be perceived as marketing.

Discounted museum membership
  • Features: Encourages a longer-term relationship with the visitor.
  • Drawbacks / considerations: Not an attractive incentive for tourists.
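On the anonymity point for prize draws: in practice, the separation can be as simple as writing contact details and survey answers to two separate stores that share no identifier. Here’s a minimal sketch in Python (the file and field names are invented for illustration):

```python
# Hypothetical sketch: keep prize-draw contact details and survey answers in
# two separate CSV files with no shared identifier, so that entries in the
# draw cannot be linked back to anyone's responses. File and field names
# are invented for illustration.
import csv

def record_submission(answers, contact=None,
                      responses_file="responses.csv", draw_file="prize_draw.csv"):
    # Survey answers go to one file...
    with open(responses_file, "a", newline="") as f:
        csv.writer(f).writerow(list(answers.values()))
    # ...and contact details go to another, only if the visitor opted
    # into the draw. No row ID or timestamp links the two files.
    if contact is not None:
        with open(draw_file, "a", newline="") as f:
            csv.writer(f).writerow([contact["name"], contact["email"]])
```

Because the two files share no key, you can run the draw – and later delete the contact details – without ever touching the anonymous response data.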



Incentives are established good practice in evaluation and visitor research, and are generally intended as a token of appreciation for visitors’ time. Although incentives can increase response rates, this is not necessarily the principal reason they are used. Like all aspects of visitor research, decisions regarding the size, nature and timing of visitor incentives must be thought through from operational, financial and ethical perspectives at the outset of the research. Done well, incentives offer the dual benefits of increasing responses and creating a sense of goodwill among visitors.


[1] Arts Victoria. (n.d.) Visitor Research Made Easy, p. 82 (sourced from:

[2] Bicknell, S., and Gammon, B. (1996). Ethics and visitor studies – or not? Retrieved from:

[3] Singer, E., and Ye, C. (2013) The use and effects of incentives in surveys. Annals of the American Academy of Political and Social Science, Vol 645, 112-141

[4] Parsons, C. (2007) Web-based surveys: Best practices based on the research literature. Visitor Studies, Vol 10(1), 13-33.

[5] Singer & Ye (2013).

[6] Australian Market & Social Research Society. Code of Professional Conduct.

[7] Australasian Evaluation Society. Guidelines for the Ethical Conduct of Evaluations.



[8] Singer & Ye (2013).

[9] Singer & Ye (2013).

[10] Contributors to this discussion included (in alphabetical order): Stephen Ashton, Sarah Cohn, Susan Foutz, Ellen Giusti, Joe Heimlich, Karen Howe, Amy Hughes, Elisa Israel, Kathryn Owen, Beverly Serrell, Marley Steele Inama, Carey Tisdal and Nick Visscher (with apologies to any contributors who have been missed). VSA listserv archives can be accessed via

[11] Visitor Research Made Easy, p. 60.

Visitor Observation: Privacy Issues

During my PhD I spent some time tracking and timing visitors to learn more about visitor behaviour in the exhibitions I was studying (more on the history and applications of visitor tracking here). Recently, I was asked about the privacy implications of doing such research. What steps do we need to take to ensure we’re a) staying on the right side of the law and b) respecting visitors’ rights to informed consent and ability to opt out of participating in research?

On the first part (i.e. The Law), I’ll tread carefully since I’m not a lawyer, and the specifics will vary from place to place anyway. In a general sense, however, museums usually count as a “public place”, and people can reasonably expect to be seen in public places. Therefore, if you’re just documenting visitors’ readily observable public behaviour, and nothing that may allow them to be identified as individuals, you’re probably in safe territory. However, it would be wise to check whether your museum is classed (in a legal sense) as a “public place” – for instance, an entry charge may implicitly create an expectation of some level of privacy on the part of paying guests.

So how about different approaches to informed consent?

The first consideration is cuing – do you tell visitors they’re going to be watched and/or listened to at the start of their visit? If so, then you are studying cued visitors – and gaining informed consent is relatively straightforward. When you approach potential participants, you explain the benefits and risks of participating, and they can decide whether they want to be part of it or not. The downside of cuing, of course, is that you’re probably no longer going to be documenting natural visitor behaviour – people tend to do different things when they know they are being watched.

Depending on what you’re studying, this may not be an issue – and, like contestants on Big Brother, visitors tend to forget they’re being watched or listened to after a while, even if they’re rigged up with audio recording equipment (Leinhardt & Knutson, 2004). Also, if you’re going to be tracking the same group of visitors over the course of a whole visit, which could mean following them for 2-3 hours, then you really do need to cue them first – otherwise, frankly, it just ends up getting creepy and weird for all concerned.

If you’re tracking visitors across a whole site, sooner or later they’re bound to notice you. Awkward. You’d be better off telling them first.

In contrast, tracking and timing uncued visitors through a single exhibition gallery can be done discreetly, without visitors becoming aware they are being tracked (assuming you are not trying to hear what they are saying as well, meaning you can observe from a reasonable distance). It still takes a bit of practice, and is easier in some exhibitions than others. Even so, if someone approaches you and asks what you’re up to, the right thing to do is fess up: explain what you’re doing, stop tracking that person, and try again with a different visitor.

If you’re taking this uncued approach to visitor observation, you’re in a far greyer area with respect to informed consent. The usual approach is to post a sign at the entrance to the museum or the gallery informing visitors that observations are taking place, and giving them steps to take if they wish to opt out of being observed. In practice, this might be notices telling visitors which areas to avoid if they don’t want to be watched, or having a mechanism for visitors to opt-out by wearing a lapel sticker or wrist band (although chances are this won’t be necessary – it never came up in my research and my experience tallies with other researchers I’ve spoken to).
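For what it’s worth, the raw data behind tracking and timing is usually quite simple – essentially a list of timed stops per visitor. Here’s a minimal sketch (in Python, with invented exhibit names and times) of how such a record might be summarised:

```python
# Hypothetical sketch: summarising one visitor's tracking-and-timing record.
# Each stop is (exhibit, start, end); exhibit names and times are invented.
from datetime import datetime

stops = [
    ("Intro panel", "14:02:10", "14:02:35"),
    ("Simulator",   "14:03:00", "14:06:40"),
    ("Touch table", "14:07:05", "14:08:50"),
]

FMT = "%H:%M:%S"
# Dwell time in seconds at each exhibit where the visitor stopped.
dwell = {name: (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).seconds
         for name, start, end in stops}

total_dwell = sum(dwell.values())  # total seconds spent stopped at exhibits
print(f"{len(stops)} stops, {total_dwell} s total dwell time")
```

Records like this are the basis for common summary measures, such as average dwell time per exhibit or the proportion of exhibits at which visitors stop.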

What about when you’re recording?

Things can get a little more complicated when you go beyond simple observation and field notes to audio or video recording of visitor behaviour. It’s one thing to watch publicly observable behaviour; it’s another to have that behaviour recorded, replayed, and deconstructed ad infinitum. This doesn’t mean it’s not done – audio recording at individual exhibits dates back to at least the 1980s and Paulette McManus’s landmark study showing that visitors read labels more than it might first appear (McManus, 1989). In that study, specific exhibits were hooked up to a radio microphone linked to a tape (tape!) recorder, and an observer unobtrusively watched the exhibit from a safe distance, making field notes to aid subsequent interpretation (Leinhardt and Knutson also emphasise how important observational data is for backing up audio recordings, which frequently contain snippets that make little sense without additional details about what was happening at the time). As far as I can tell, visitors were uncued in this study.

Audio recording of uncued visitors poses fewer difficulties than video recording, as people can’t (easily) be identified from voice recordings alone. Things get trickier when you get to video, of course. My first exposure to video-based visitor research was seeing Christian Heath speak about his and Dirk vom Lehn’s work in the V&A’s British Galleries in the early 00s (Heath and vom Lehn, 2004). In this case, although they specify that visitors explicitly consented to being part of the research, it’s not obvious whether this was done in advance, or after the fact by approaching visitors once they’d left the exhibit of interest (and then discarding the data of those who refused to participate prior to analysis). This ex post facto approach is one way to ensure both uncued visitor behaviour and informed consent, but as I have no direct experience of it, I don’t know how high the refusal rate is or how complicated it is to ensure data is discarded appropriately as required.

Irrespective of the type of informed consent, there is the issue of data storage. Gone are the days of tapes that could be kept under lock and key. You’ll need to have a data retention policy in place to ensure that anything that could potentially identify participants is kept secure, safe from those who have no need to access it . . . and from accidental syncing to your public Facebook feed.

Disclaimer: This is just general advice based on my own experience and what I can glean from some of the literature. Different parts of the world and different ethics committees may have different views, and the specifics of any given piece of research may make a difference as well.


Heath, C., & vom Lehn, D. (2004). Configuring Reception: (Dis-)Regarding the “Spectator” in Museums and Galleries. Theory, Culture & Society, 21(6), 43–65.

Leinhardt, G., & Knutson, K. (2004). Listening in on museum conversations. Walnut Creek, CA: AltaMira Press.
McManus, P. (1989). Oh, yes they do: How museum visitors read labels and interact with exhibit texts. Curator: The Museum Journal, 32(3), 174–189.


Building Evaluation Capacity

I recently attended the 27th Annual Visitor Studies Association conference in Albuquerque, NM. Given the theme was Building Capacity for Evaluation: Individuals, Institutions, the Field, it’s not surprising that “capacity building” was a common topic of discussion throughout the week. What do we mean by building capacity? Whose capacity are we building and why? Pulling together threads from throughout the conference, here are some of my thoughts:

Individual capacity building:

Any conference offers a chance to hear about developments in the field and to build your professional networks, which is a form of personal capacity-building. VSA in particular runs professional development workshops before and after the conference as an opportunity to sharpen your skills, be exposed to different approaches and learn new techniques. These are useful both for newcomers to the field and for more experienced researchers who might be interested in new ways of thinking, or new types of data collection and analysis.

A common thread I noticed was both the opportunities and challenges presented by technology – video and tracking software allow you to collect much more detailed data, and you can integrate different data types (audio, tracking data) into a single file. But technology’s no panacea, and good evaluation still boils down to having a well thought-through question you’re looking to investigate and the capacity to act on your findings.

Panel session at VSA 2014

Institutional capacity building:

There were a lot of discussions around how to increase the profile of Evaluation and Visitor Research within institutions. There seemed to be a general feeling that “buy-in” from other departments was often lacking: evaluation is poorly understood, and therefore not valued, by curators and others whose roles do not bring them into regular, direct contact with visitors. Some curators apparently come away with the impression that evaluators only ask visitors “what they don’t like”, or otherwise have a vested interest in exposing problems rather than celebrating successes[1]. Others believe they “already know” what happens on the exhibition floor, but without systematic observation they may only be seeing what they want to see, or otherwise drawing conclusions about what works and what doesn’t based on their own assumptions rather than evidence.

For many, the “aha!” moment comes when they become involved in the data collection process themselves. When people have an opportunity to observe and interview visitors, they start to appreciate where evaluation findings come from, and are subsequently more interested in the results. Several delegates described Damascene conversions of reluctant curators once they had participated in an evaluation. But others expressed reservations about this approach – does it give colleagues an oversimplified view of evaluation? Does it create the impression that “anyone can do evaluation”, therefore undermining our skills, knowledge and expertise? What about the impact on other functions of the museum: if curators, designers and others are spending time doing evaluation, what parts of their usual work will need to be sacrificed?

A counter to these reservations is that visitors are arguably the common denominator of *all* activities that take place in informal learning institutions, even if this isn’t obvious on a day to day basis in many roles. Participating in data collection acts as a reminder of this. Also, at its best, evaluation helps foster a more reflective practice more generally. But nonetheless the concerns are valid.

Capacity building across the Field:

I found this part of the discussion harder to be part of, as it was (understandably) focused on the US experience and difficult to extrapolate to the Australian context due to massive differences in scale. One obvious difference is the impact that the National Science Foundation has had on the American museum landscape. NSF is a major funder of the production and evaluation of informal science learning[2]. NSF-supported websites host literally hundreds of evaluation reports (which actually extend beyond the “science” remit their names imply – they’re a resource worth checking out).

There are a considerable number of science centres and science museums across the US, and because of these institutions’ history of prototyping interactive exhibits, they tend to have a larger focus on evaluation and visitor research than (say) history museums. Indeed, most of the delegates at VSA seemed to represent science centres, zoos and aquariums, or were consultant evaluators for whom such institutions are their principal clients. There was also a reasonable art museum presence, and while there were a few representatives of historical sites, on the whole I got the impression that history museums were under-represented.

In any case, I came away with the impression that exhibition evaluation is more entrenched in museological practice in the US than it is here in Australia. It seems that front-end and formative research is commonly done as part of the exhibition development process, and conducting or commissioning summative evaluations of exhibitions is routine. In contrast, besides a handful of larger institutions, I don’t see a huge amount of evidence that exhibition evaluation is routinely happening in Australia. Perhaps this is just the availability heuristic at play – the US is much bigger so it’s easier to bring specific examples to mind. Or it could be that evaluation is happening in Australian museums, but as an internal process that is not being shared? Or something else?


[1] A lesson from this is that evaluation reports may read too much like troubleshooting documents and not give enough attention to what *is* working well.

[2] The Wellcome Trust plays a similar role in the UK, but as far as I’m aware there is nothing comparable (at least in scale) in Australia.

On “challenging” your audience

. . .but we should be challenging visitors, not just giving them what they want. . .

Work in evaluation and visitor research for long enough and you’re bound to hear someone say this. And from the point of view of an evaluator, it’s frustrating for a few reasons:

  • It betrays an assumption that conducting evaluation somehow means you’re going to ‘dumb down’ or otherwise pander to the masses. Evaluation shouldn’t fundamentally alter your mission, it should just give you clues as to where your stepping-off point should be.
  • It can be used as an excuse for maintaining the status quo and not thinking critically about how well current practices are working for audiences. Are we genuinely challenging audiences. . . or just confusing them?
  • It tends to conflate knowledge with intelligence. If you (and many people you work with) are an expert on a given topic, it’s easy to overestimate how much “everybody” knows about that subject. If there is a big gap between how much you assume visitors know and what they actually know, no amount of intelligence on the visitors’ part will be able to bridge that gap.
  • A challenge is only a challenge when someone accepts it. In a free-choice setting like a museum, who is accepting the challenge and on whose terms? If the ‘challenge’ we set our audiences is rejected, does that leave us worse off than where we started?

This post on the Uncatalogued Museum neatly sums up how visitors can be up for a challenge – often more so than we think – if we find the right balance between meeting visitors where they are and extending them to new horizons. But finding this balance depends on actually getting out there and talking to people, not resting solely on assumptions and expert knowledge.

If the goal is genuinely to challenge visitors, then visitors need to be part of the conversation. If we’re not asking them, what are we afraid of?


Beyond “warm impulses”

I’ve been catching up on the Museopunks podcast series, and a section of March’s installment, The Economics of Free, particularly caught my attention. In an interview, the director of the Dallas Museum of Art, Maxwell L. Anderson, compares the data that shopping malls collect about their customers with the relative paucity of data collected about visitors to the typical art museum. I think it’s worth repeating (from about 18 min into the podcast):

[Malls] know all this basic information about their visitors. Then you go to an art museum. What do we know? How many warm impulses cross a threshold? That’s what we count! And then we’re done! And we have no idea what people are doing, once they come inside, what they’re experiencing, what they’re learning, what they’re leaving with, who they are, where they live, what interests and motivates them . . . so apart from that we’re doing great, you know. We’re like that mall that has no idea of sales per square foot, sales per customer. . . so we’re really not doing anything in respect to knowing our visitors. And learning about our visitors seems to me the most basic thing we can do after hanging the art. You know, you hang the art, and then you open the doors and all we have been doing is “hey look there are more people in the doors”.  And the Art Newspaper dedicates an annual ‘statistical porn’ edition of how many bodies crossed thresholds. Nobody’s asking how important the shows were, or what scholarly advances were realised as a function of them, or what people learned, how they affected grades in school. Nobody knows any of that. Nobody knows who the visitors were. So I consider it a baseline. We’re just at the primordial ooze of starting to understand what museums should be doing with this other part of our mission which is not the collection but the public.

I’d argue that we’re a little bit beyond the ‘primordial ooze’ stage of understanding*, although Anderson’s right in that many museums don’t go much beyond counting ‘warm impulses’ (those infra-red people counters). He goes on to describe how the DMA’s Friends program is giving the museum more data about what their visitors do while inside the museum, and how this can inform their engagement strategies (22:45):

This is just another form of research, you know . . . we do research on our collections without blinking an eye, we think nothing of it. We spend copious amounts of time sending curators overseas to look at archives to study works of art but we’ve never studied our visitors. The only time museums typically study their visitors is when they have a big show, and they’re outperforming their last three years, everybody’s excited, and there’s a fever, and you measure that moment, which is measuring a fever. The fever subsides, the data’s no longer relevant but that’s what you hold on to and point to as economic impact. And largely, it’s an illusion.

I find it interesting that Anderson puts visitor research on a par with collection-based research. Often, I get the sense that collection research is seen as ‘core’ museological business, but visitor research is only a ‘nice to have’ if there is the budget. But perhaps this is a sign of shifting priorities?


*Historically, most visitor experience research has taken place in science centres, children’s museums, zoos and aquariums rather than museums of fine art. Although there are of course exceptions.

Evaluating Evaluation

What is evaluation? Who is it for? Why do it? What’s the difference between evaluation and research? Does it make as much difference as it purports to?

A recent report from the UK, Evaluating Evaluation: Increasing the Impact of Summative Evaluation in Museums and Galleries by Maurice Davies and Christian Heath, makes for interesting reading in this area[1]. They observe that summative evaluation hasn’t informed practice as much as it might have, and look at some of the reasons why that might be the case. Their overall conclusion is one of “disappointment”:

Disappointment that all the energy and effort that has been put into summative evaluation appears to have had so little overall impact, and disappointment that so many evaluations say little that is useful, as opposed to merely interesting. . . With some notable exceptions, summative evaluation is not often taken seriously enough or well enough understood by museums, policy makers and funders. The visibility of summative evaluation is low. Too often, it is not used as an opportunity for reflection and learning but is seen as a necessary chore, part of accountability but marginal to the work of museums. (Davies and Heath 2013a, p.3)

I won’t go into their findings in detail as the full report is available online and I recommend you read the whole thing, or at least the executive summary (see references below). But I will tease out a couple of issues that are of particular relevance to me:

Conflicting and Competing Agendas

Davies and Heath describe scenarios that are all too familiar to me from my time working in exhibition development: exhibition teams being disbanded at the conclusion of a project with no opportunity for reflection; summative reports not being shared with all team members (particularly designers and other outside consultants); insufficient funds or practical difficulties in implementing recommended changes once an exhibition is open; evaluation results that are too exhibition-specific and idiosyncratic to be readily applied to future exhibition projects.

They also give an insightful analysis of how the multiple potential purposes of evaluation can interfere with one another. They provide a convincing argument for separating out different kinds of evaluation recommendations or at least being more explicit about what purpose a given evaluation is meant to serve:

  1. Project-specific reflection: evaluation as a way of reflecting on a particular project and as an opportunity for the learning and development of exhibition team members
  2. Generalisable findings: the capacity of evaluation results to build the overall knowledge base of the sector
  3. Monitoring and accountability: evaluation reports are usually an important aspect of reporting to a project funder or the institution as a whole
  4. Advocacy and impact: using evaluation results to create an evidence base for the value of museums for potential funders and society at large

As we move down this list, the pressure on evaluation results to tell “good news” stories increases – evaluation becomes less a means of learning and improvement and more a platform to prove or demonstrate “success”. Museums may be reluctant to share critical self-appraisal for fear that exposing “failure” may make it more difficult to get support for future projects. Such findings may not be shared with other museums or even other departments within the museum – let alone potential funders or other stakeholders. Furthermore, generalisability is often limited by methodological inconsistencies between different institutions and the reporting requirements of different funding bodies.

Comparing Evaluation with Research

On the subject of methodology, I’ll make a couple more observations, in particular about the difference between Evaluation and Research (at least in visitor studies). The two terms are often used interchangeably and the line is admittedly blurry, particularly since research and evaluation use essentially the same tools, approaches and methods.

The way I see it, visitor research seeks to understand “how things are”. It tries to advance knowledge and develop theory about what visitor experiences are and what they mean: to individuals, to institutions, to society at large. Visitor research is usually positioned within a broader academic discourse such as psychology or sociology. Research findings can be valid and useful even if they don’t directly lead to changes in practice [2].

In contrast, evaluation is more interested in “how things could be improved”. To quote Ben Gammon, who was one of my first mentors in this field:

Evaluation is not the same as academic research. Its purpose is not to increase the sum of human knowledge and understanding but rather to provide practical guidance. If at the end of an evaluation process nothing is changed there was no point in conducting the evaluation. This needs to be the guiding principle in the planning and execution of all evaluation projects. (quoted in Davies and Heath 2013a, p.14)

Evaluation is therefore more pragmatic and applied than visitor research. The validity of evaluation lies less in its methodological rigour than in the extent to which the results are useful and are used.


[1] At the outset of their research, Davies and Heath wrote an opinion piece for the Museums Journal outlining some of the issues they had identified with summative evaluation. I wrote a response to it at the time, which interestingly, was itself cited in their report. Besides being somewhat startled (and delighted!) to see one of my blog posts being cited in a more academic type of publication, it serves as an interesting example of how the lines are blurring between more formal and informal academic writing and commentary.

[2] When I was doing data collection for my PhD, many people assumed the purpose of my research was to “make improvements” to the galleries I was studying. It’s a reasonable inference to make, and I do hope my results will eventually influence exhibition design. However, my PhD is research, not evaluation – and as such is more interested in understanding fundamental phenomena than in the particular galleries I happened to use in my study.


Davies, M., & Heath, C. (2013a). Evaluating Evaluation: Increasing the Impact of Summative Evaluation in Museums and Galleries [PDF report].

Davies, M., & Heath, C. (2013b). “Good” organisational reasons for “ineffectual” research: Evaluating summative evaluation of museums and galleries. Cultural Trends, (in press). doi:10.1080/09548963.2014.862002


Young Adults and Museums

It’s always exciting when your research data throws up something counter-intuitive. Or at least something that’s at odds with “conventional wisdom” on the subject.

One such piece of wisdom about museum visitors is that young adults (particularly those aged under 25) tend not to visit museums. Population-level statistical data tends to back this up, with a characteristic dip in the 18-24 age bracket (see this graphic from a previous post):

Heritage visitation in Australia by age: percentage of respondents who visited a heritage site in the previous 12 months (Source: ABS, Table 1.4)

Now, here is the age breakdown of the respondents to my visitor survey conducted at the SA Museum as part of my PhD research:

Age ranges of survey respondents

Not only are visitors aged under 30 not under-represented – they form the biggest age group I surveyed by a considerable margin! This is a surprising (albeit incidental) finding from my research, and it makes me wonder what’s going on here. Based on what I observed at the Museum during my fieldwork, I have come up with the following hypotheses:

  • Proximity to university campuses. The SA Museum is right next door to Adelaide University and not very far from one of the main campuses of the University of South Australia. I got into conversation with a couple of groups of young adults who indicated they were visiting the museum to kill time between lectures.
  • The backpacker factor: The SA Museum is a popular destination with both interstate and international visitors (more than half of my sample indicated they were visiting the Museum for the first time, and I would wager that the majority of these people were tourists). Based on my fieldwork observations, there appeared to be considerable numbers of young “backpacker” tourists among the survey sample. Anecdotally, younger international tourists seemed less likely than older tourists to face the language barriers that would have prevented them from participating in the study (about 7% of the visitors I approached to complete a survey had limited or no English).
  • Free and centrally located: a few people indicated they were in the museum because it was free to enter and a way of escaping the heat or rain. A couple of people were waiting for someone with a hospital appointment (the Royal Adelaide Hospital is just down the road). Of course, they could have also spent this time in the shopping malls just across the road – but for some reason chose not to, so there are clearly other characteristics of the museum that attract them but that were beyond the scope of this survey. Others appear to have been ‘doing’ the precinct, visiting the Art Gallery of South Australia (next door) as well as the museum.
  • Young parents: A fair proportion of those in the 18-29 age group were accompanying young(ish) children. I don’t know if it’s just me, but I sense there has been a demographic shift between Generations X and Y. Most people of my (Gen X) vintage seemed to be well into their thirties before they settled down and started families. I suspect Gen Ys are having children younger, for a whole range of complex reasons which are beyond the scope of this post. This is just a gut feeling though – I haven’t cracked open the data.
  • Young couples: There was a surprising proportion of young (and highly demonstrative!) couples around. The museum as a date venue?
  • Patterns in the smoke: There is of course the possibility that this cluster is just a random quirk of my particular data set. However, the surveys were conducted across weekdays, weekends and public holidays (but not school holidays) to help control for variation in visiting patterns. My fieldwork observations show nothing to indicate that 18-29 year olds were more likely to agree to complete a survey than other age groups.

In retrospect, it would have been good if I’d been able to distinguish between the under and over 25s by splitting the age ranges the way the ABS do (I had a reason why I didn’t but in any case it’s no big deal). However, I went back to a pilot sample from late last year and found the age spread using different categories was broadly similar:

Age ranges of pilot survey respondents

So what does all this mean? I’m not sure yet. Age is not expected to be a significant variable in my own research, and I only collected very basic demographic information so I had a general sense of the survey population. I’d be interested in how this tallies with other museums though, particularly those that are free as opposed to ticketed entry. Ticketed venues tend to collect more comprehensive visitor data, and we tend to extrapolate from that. But perhaps they are not fully representative of museums as a whole?

Survey Responses – Benchmarks and Tips

I’ve now collected a grand total of 444 questionnaires for my PhD research (not including pilot samples) – which is not far off my target sample of 450-500. Just a few more to go! Based on my experiences, I thought I’d share some of the lessons I’ve learned along the way . . .

Paper or Tablet?

My survey was a self-complete questionnaire (as opposed to an interviewer-led survey) that visitors filled out while on the exhibition floor. During piloting I tried both paper surveys and an electronic version on an iPad, but ended up opting for the paper version as I think the pros outweighed the cons for my purposes.

The big upside of tablet-based surveys is that there is no need for manual data entry as a separate step – survey programs like Qualtrics can export directly into an SPSS file for analysis. And yes, manually entering data from paper surveys into a statistics program is time-consuming, tedious and a potential source of error. The other advantage of a tablet-based survey (or any electronic survey, for that matter) is that you can set up rules that prompt people to answer questions they may have inadvertently skipped, automatically randomise the order of questions to control for ordering effects, and so on. So why did I go the other way?

First of all, time is a trade-off: with paper surveys, I could recruit multiple people to complete the survey simultaneously – all I needed was a few more clipboards and pencils and plenty of comfortable seating nearby. With only one tablet, only one person could be completing my survey at a time. Once you account for being able to collect far more paper surveys in a given period than with the tablet, I think I still come out ahead despite the manual data entry. Plus I found that entering the data manually was a useful first pass of analysis, particularly during the piloting stages when you’re looking for survey design flaws.

Secondly, I think many visitors were more comfortable using the old-fashioned paper surveys. They could see at a glance how long the survey was and how much further they had to go, whereas this was less transparent on the iPad (even though I had a progress bar).

This doesn’t mean I would never use a tablet – I think they’d be particularly useful for interviewer-led surveys where you can only survey one participant at a time anyway, or large scale surveys with multiple interviewers and tablets in use.

Refining the recruitment “spiel”

People are understandably wary of enthusiastic-looking clipboard-bearers – after all, they’re usually trying to sell you something or sign you up to something. In my early piloting I think my initial approach may have come across as too “sales-y”, so I refined it so that the first thing I said was that I was a student. My gut feel is that this immediately made people less defensive and more willing to listen to the rest of my “spiel” explaining the study and recruiting participants. Saying I was a student doing research made it clear up front that I was interested in what they had to say, not in sales or spamming.

Response, Refusal and Attrition Rates

Like any good researcher should, I kept a fieldwork journal while I was out doing my surveys. In this I documented everyone I approached, approximately what time I did so, whether they took a participant information sheet or refused, and if they refused, what reason (if any) they gave for doing so. During busy periods, recording all this got a bit chaotic so some pages of notes are more intelligible than others, but over a period of time I evolved a shorthand for noting the most important things. The journal was also a place to document general facts about the day (what the weather was like, whether there was a cruise ship in town that day, times when large numbers of school groups dominated the exhibition floor, etc.). Using this journal, I’ve been able to look at what I call my response, refusal and attrition rates.

  • Response rate: the proportion of visitors (%) I approached who eventually returned a survey
  • Refusal rate: the proportion of visitors (%) approached who refused my invitation to participate when I approached them
  • Attrition rate: this one is a little specific to my particular survey method and wouldn’t always be relevant. I wanted people to complete the survey after they had finished looking around the exhibition, but for practical reasons could not do a traditional “exit survey” method (since there’s only one of me, I couldn’t simultaneously cover all the exhibition exits). So, as an alternative, I approached visitors on the exhibition floor, invited them to participate and gave them a participant information sheet if they accepted my invitation. As part of the briefing I asked them to return to a designated point once they had finished looking around the exhibition, at which point I gave them the questionnaire to fill out [1]. Not everyone who initially accepted a participant information sheet came back to complete the survey. These people I class as the attrition rate.

So my results were as follows: I approached a total of 912 visitors, of whom 339 refused to participate – a refusal rate of 37.2%. This leaves 573 who accepted a participant information sheet. Of these, 444 (77%) came back and completed a questionnaire, giving me an overall response rate of (444/912) 48.7%. The attrition rate as a percentage of those who initially agreed to participate is therefore 23% – or, if you’d rather, 14% of the 912 people initially approached.
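As a minimal sketch, these rates can be computed directly from the three raw counts in my fieldwork journal (the variable names here are mine, purely for illustration):

```python
# Raw counts from the fieldwork journal (as reported above).
approached = 912   # visitors invited to participate
refused = 339      # declined at the point of invitation
completed = 444    # returned a finished questionnaire

accepted = approached - refused                      # took an information sheet
refusal_rate = refused / approached                  # share who declined outright
response_rate = completed / approached               # share of all approached who finished
attrition_rate = (accepted - completed) / accepted   # agreed but never came back

print(f"accepted: {accepted}")                  # 573
print(f"refusal rate: {refusal_rate:.1%}")      # 37.2%
print(f"response rate: {response_rate:.1%}")    # 48.7%
print(f"attrition rate: {attrition_rate:.1%}")  # 22.5% of those who agreed
```

The attrition can equivalently be expressed against everyone approached rather than only those who agreed: (573 − 444) / 912 ≈ 14%.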

So is this good, bad or otherwise? Based on some data helpfully provided by Carolyn Meehan at Museum Victoria, I can say it’s probably at least average. Their average refusal rate is a bit under 50% – although it varies by type of survey, venue (Museum Victoria has three sites) and interviewer (some interviewers have a higher success rate than others).

Reasons for Refusal

While not everyone gave a reason for not being willing to participate (and they were under no obligation to do so), many did, and often apologetically so. Across my sample as a whole, reasons for refusal were as follows:

  • Not enough time: 24%
  • Poor / no English: 19%
  • Child related: 17%
  • Others / No reason given: 39%

Again, these refusal reasons are broadly comparable to those experienced by Museum Victoria, with the possible exception that my refusals included a considerably higher proportion of non-English speakers. It would appear that the South Australian Museum attracts a lot of international tourists or other non-English speakers, at least during the period I was doing surveys.

Improving the Response Rate

As noted above, subtly adjusting the way you approach and invite visitors to participate can have an impact on response rates. But there are some other approaches as well:

  • Keep the kids occupied: while parents with hyperactive toddlers are unlikely to participate under any circumstances, those with slightly older children can be encouraged if you can offer something to keep the kids occupied for 10 minutes or so. I had some storybooks and some crayons/paper which worked well – in some cases the children were still happily drawing after the parents had completed the survey and the parents were dragging the kids away!
  • Offer a large print version: it appears that plenty of people leave their reading glasses at home (or in the bag they’ve checked into the cloakroom). Offering a large print version gives these people the option to participate if they wish. Interestingly, however, some people claimed they couldn’t read even the large print version without their glasses. I wonder how they can see anything at all sans spectacles if this is the case . . . then again, perhaps this is a socially acceptable alibi used by people with poor literacy levels?
  • Comfortable seating: an obvious one. Offer somewhere comfortable to sit down and complete the questionnaire. I think some visitors appreciated the excuse to have a sit and have a break! Depending on your venue, you could also lay out some sweets or glasses of water.
  • Participant incentives: because I was doing questionnaires on the exhibition floor, putting out food or drink was not an option for me. But I did give everyone who returned a survey a voucher for a free hot drink at the Museum cafe. While I don’t think many (or any) did the survey just for the free coffee, it does send a signal that you value and appreciate your participants’ time.

[1] A potential issue with this approach is cuing bias – people may conceivably behave differently if they know they are going to fill out a questionnaire afterwards. I tried to mitigate this with my briefing, in which I asked visitors to “please continue to look around this exhibition as much or as little as you were going to anyway”, so that visitors did not feel pressure to visit the exhibition more diligently than they may have otherwise. Also, visitors did not actually see the questionnaire before they finished visiting the exhibition – if they asked what it was about, I said it was asking them “how you’d describe this exhibition environment and your experience in it”. In some cases I reassured visitors that it was definitely “not a quiz!”. This is not a perfect approach of course, and I can’t completely dismiss cuing bias as a factor, but any cuing bias would be a constant between exhibition spaces as I used comparable methods in each.

My first Evaluation conference

Last week I went to the Australasian Evaluation Society’s conference – mainly because it was happening on my doorstep. Another draw was that Michael Quinn Patton (who needs no introduction to anyone who has done a research methods course) was delivering one of the keynotes. At first I was disappointed when I learned that this keynote was via videolink, but in this instance it worked well, if not quite the same as being in the same room.

I wasn’t really sure what to expect from the conference, because I’m not sure I can really lay claim to the title ‘evaluator’. But since evaluation spans such broad areas – public policy, health, international development, education and indigenous programs to name just a few – I wasn’t obviously out of place. Having said that, there was still some unfamiliar terminology and assumed knowledge among the delegates that had me reaching for Google later on.

I’ve compiled a Storify of conference tweets which serves as a good overview. While most of the sessions were not directly related to my research, it was interesting to get some different perspectives on theories and methodologies, and to see where the commonalities were. I also met some interesting people and gained a different perspective on how my skills could be used in my life post-PhD.




Evaluation: it’s a culture, not a report

The UK Museums Journal website has recently published the opinion piece Why evaluation doesn’t measure up by Christian Heath and Maurice Davies. Heath and Davies are currently conducting a meta-analysis of evaluation in the UK.

Is this the fate of many carefully prepared evaluation reports?

The piece posits that: “[n]o one seems to have done the sums, but UK museums probably spend millions on evaluation each year. Given that, it’s disappointing how little impact evaluation appears to have, even within the institution that commissioned it.”

If this is the case, I’d argue it’s because evaluation is being done as part of reporting requirements and is being ringfenced as such. Essentially, the evaluation report has been prepared to tick somebody else’s boxes – a funder usually – and the opportunity to use it to reflect upon and learn from experience is lost. Instead, it gets quietly filed with all the other reports, never to be seen again.

So even when evaluation is being conducted (something that cannot be taken as a given in the first place), there are structural barriers that prevent evaluation findings filtering through the institution’s operations. One of these is that exhibition and program teams are brought together with the opening date in mind, and often disperse once the ribbon is cut (as a former exhibition design consultant, I found that their point about external consultants rarely seeing summative reports resonated with my experience). Also, if the evaluation report is produced for the funder and not the institution, there is a strong tendency to promote ‘success’ and gloss over anything that didn’t quite go to plan. After all, we’ve got the next grant round to think of and we want to present ourselves in the best possible light, right?

In short, Heath and Davies describe a situation where evaluation has become all about producing the report so we can call the job done and finish off our grant acquittal forms. And the report is all about marching to someone else’s tune. We may be doing evaluation, but is it part of our culture as an organisation?

It might even be the case that funder-instigated evaluation is having a perverse effect on promoting an evaluation culture. After all, it is set up to answer someone else’s questions, not our own. As a result, the findings may not be as useful for improving future practice as they could be. So evaluation after evaluation goes nowhere, making people wonder why we’re bothering at all. Evaluation becomes a chore, not a key aspect of what we do.

NB: This piece was originally written for the EVRNN blog, the blog of the Evaluation and Visitor Research National Network of Museums Australia.