The What, When and How of Participant Incentives

[Note: This is a modified version of an article that first appeared in Museum Australia’s Evaluation and Visitor Research Network’s Spring 2014 newsletter]

Introduction

We’ve all seen it; we’ve all done it: Complete our survey and enter the draw to win! Agree to be interviewed and get a free pen! Researchers call these “participant incentives”, which generally speaking are defined as “benefit[s] offered to encourage a person to participate in a research program”.[1] Offering incentives is considered to be good practice in evaluation and visitor research. Visitors agree to give us time out of their visit for the benefit of our research, and it behoves us to value this time and use it ethically[2]. If we consider research as a social contract, incentives are a gesture of reciprocity, acknowledging the value of visitors’ time.

But what kind of incentive is appropriate for a given piece of research? What’s feasible? What’s ethical? What might be some unintended consequences? This article will explore some of the issues surrounding participant incentives.

The Bigger Picture

To understand the role of participant incentives, we first need to consider why people respond to surveys in the first place. There seem to be three main kinds of reasons: altruistic (wanting to help, or seeing it as a civic duty); egotistic (having a specific stake in the results, or simply enjoying doing surveys); and study-specific (interest in the topic or organisation)[3]. Incentives strengthen the “egotistic” reason for completing a survey. But appealing to respondents’ altruism can also increase response rates, as can the fact that many visitors hold museums in positions of high trust and regard.

Particularly for online surveys, incentives have been shown to increase the response rate, but this also depends on the length of the survey, who you’re trying to target and whether they have a stake in the research outcome[4]. As a general rule of thumb, you should state up-front how long any survey is going to take, and offer an incentive that reflects the time commitment you are requesting. For online surveys, anything taking longer than 20 minutes to complete counts as a “long” survey that warrants an incentive. One of the most popular incentives is to give participants the opportunity to enter a prize draw for something of considerable value (e.g. gift certificates worth at least $100, a tablet computer or similar items).

However, a higher response rate isn’t necessarily the ideal – irrespective of the response rate, your survey strategy should aim to minimise systematic differences between people who do respond and those who do not (nonresponse bias). This is distinct from overall response quality, which does not appear to be affected by incentives[5]. Nonetheless, if there is a particular target audience of interest (e.g. teachers, visitors who have participated in a particular programme, visitors from a particular cultural or ethnic group, etc.), you may need to consider ways to increase the response rate among those people in particular.

Compared to the use of incentives in telephone and online surveys, there is very little published research about the practicalities of conducting onsite visitor interviews in museums and similar sites. Rather, examples of practice are shared through informal networks (more on this later).

Ethical Guidelines

Neither the Australian Market & Social Research Society (AMSRS) Code of Professional Conduct[6] nor the Australasian Evaluation Society’s Guidelines for Ethical Conduct of Evaluations[7] specifically mentions participant incentives; however, both outline important principles with which any choice of incentive should comply. In particular, the AMSRS code specifies that there must be a clear delineation between market research and “non-research activities” such as promotions or compilation of databases for marketing purposes. This may have implications for what you can use as incentives, as well as how you use any contact details you collect for the purposes of prize draws.

Care should be taken to ensure that incentives cannot be interpreted as coercion, particularly if the incentive is large enough to cause certain participants (e.g. at-risk groups) to reluctantly participate in order to receive the incentive. In any case, it has been suggested that it may be better to increase intrinsic motivations rather than rely solely on monetary incentives[8].

Is it an Incentive, a Thank You, or Compensation?

The principle that monetary incentives should only be used as a last resort may appear at odds with the idea that visitors’ time is valuable and should be acknowledged as such. However, it’s largely to do with the way incentives are framed: an incentive can be considered an inducement to participate, but it can also be presented as a “thank you gift” that you give to visitors as a token of your appreciation. In this sense, the timing of the incentive may come into play. Giving an incentive in advance may increase participation and there is no evidence that it raises a sense of obligation among potential participants[9].

There is another type of payment we should briefly mention here: compensation. This is particularly relevant where participation incurs direct costs (e.g. travel to a focus group session). Any such costs must always be compensated.

Some Examples

In September 2014, there was a discussion on the Visitor Studies Association (VSA) listserv about the incentives that different institutions give away to visitors who participate in short (<5-10 minutes) onsite surveys. Among this community of practice, the respective merits and drawbacks of different approaches were discussed[10]. The key points are summarised below:

Incentive: Vouchers for in-visit added extras (e.g. simulator rides, temporary exhibitions, etc.)
Features: Adds value to visitors’ experience with little or no direct cost to the museum.
Drawbacks / Considerations: May lead to unanticipated spikes in demand for additional experiences – e.g. can the simulator accommodate everyone who’s given a voucher?

Incentive: Small gifts (e.g. pens/pencils, stickers, temporary tattoos, bookmarks, postcards, key-rings)
Features: Tangible and popular gifts, especially for children. If you’re surveying adults in a family group, giving children a few items to choose from can keep them usefully occupied while the adults respond to the survey. Cheap if purchased in bulk.
Drawbacks / Considerations: The gift needs to match the target audience of the survey (e.g. temporary tattoos are great for kids, less so for adult respondents). Children may end up using stickers to decorate your exhibits!

Incentive: Food / coffee / ice cream vouchers
Features: Generally popular and well-received.
Drawbacks / Considerations: Can create a rush in the café if you’re doing large numbers of surveys. May be limited by the contract arrangements in place with caterers.

Incentive: Prize draws
Features: Popular with visitors and practical to implement with online surveys. The cost of a single big-ticket prize may work out cheaper than hundreds of small giveaways.
Drawbacks / Considerations: Visitor contact details must be recorded for the prize draw, and these details must be able to be separated from the survey responses to maintain anonymity. Be aware that offering a free membership as a prize may reduce membership take-up during the survey period[11].

Incentive: Gift certificates
Features: Can be used for longer surveys or detailed interviews that involve a longer time commitment and therefore warrant a higher-value incentive.
Drawbacks / Considerations: Gift certificates may be seen as equivalent to cash from a tax perspective.

Incentive: Free return tickets
Features: No direct costs. Tickets can be given away to friends and family if participants can’t re-visit.
Drawbacks / Considerations: Not relevant to free-entry institutions. Could be perceived as marketing.

Incentive: Discounted museum membership
Features: Encourages a longer-term relationship with the visitor.
Drawbacks / Considerations: Not an attractive incentive for tourists.

 

Conclusions

Incentives are established good practice in evaluation and visitor research, and are generally intended as a token of appreciation for visitors’ time. Although incentives can increase response rates, this is not necessarily the principal reason they are used. Like all aspects of visitor research, decisions regarding the size, nature and timing of visitor incentives must be clearly thought through from an operational, financial and ethical perspective at the outset of the research. Done well, incentives offer the dual benefits of increasing responses and creating goodwill among visitors.

References

[1] Arts Victoria. (n.d.) Visitor Research Made Easy, p. 82 (sourced from: http://www.arts.vic.gov.au/Research_Resources/Resources/Visitor_Research_Made_Easy)

[2] Bicknell, S., and Gammon, B. (1996). Ethics and visitor studies – or not? Retrieved from: http://informalscience.org/images/research/VSA-a0a4h9-a_5730.pdf

[3] Singer, E., and Ye, C. (2013) The use and effects of incentives in surveys. Annals of the American Academy of Political and Social Science, Vol 645, 112-141

[4] Parsons, C. (2007) Web-based surveys: Best practices based on the research literature. Visitor Studies, Vol 10(1), 13-33.

[5] Singer & Ye (2013).

[6] http://www.amsrs.com.au/professional-standards/amsrs-code-of-professional-behaviour

[7] http://www.aes.asn.au/images/stories/files/membership/AES_Guidelines_web.pdf

[8] Singer & Ye (2013).

[9] Singer & Ye (2013).

[10] Contributors to this discussion included (in alphabetical order): Stephen Ashton, Sarah Cohn, Susan Foutz, Ellen Giusti, Joe Heimlich, Karen Howe, Amy Hughes, Elisa Israel, Kathryn Owen, Beverly Serrell, Marley Steele Inama, Carey Tisdal and Nick Visscher (with apologies to any contributors who have been missed). VSA listserv archives can be accessed via https://list.pitt.edu/mailman/listinfo/vsa

[11] Visitor Research Made Easy, p. 60.

Building Evaluation Capacity

I recently attended the 27th Annual Visitor Studies Association conference in Albuquerque, NM. Given the theme was Building Capacity for Evaluation: Individuals, Institutions, the Field, it’s not surprising that “capacity building” was a common topic of discussion throughout the week. What do we mean by building capacity? Whose capacity are we building and why? Pulling together threads from throughout the conference, here are some of my thoughts:

Individual capacity building:

Any conference offers a chance to hear about developments in the field and to build your professional networks, which is a form of personal capacity-building. VSA in particular runs professional development workshops before and after the conference as an opportunity to sharpen your skills, be exposed to different approaches and learn new techniques. These are useful both for newcomers to the field and for more experienced researchers who might be interested in new ways of thinking, or new types of data collection and analysis.

A common thread I noticed was both the opportunities and challenges presented by technology – video and tracking software allow you to collect much more detailed data, and you can integrate different data types (audio, tracking data) into a single file. But technology’s no panacea, and good evaluation still boils down to having a well thought-through question you’re looking to investigate and the capacity to act on your findings.

Panel session at VSA 2014

Institutional capacity building:

There was a lot of discussion about how to raise the profile of Evaluation and Visitor Research within institutions. There seemed to be a general feeling that “buy-in” from other departments was often lacking: evaluation is poorly understood and therefore not valued by curators and others whose roles do not bring them into regular, direct contact with visitors. Some curators apparently come away with the impression that evaluators only ask visitors “what they don’t like”, or otherwise have a vested interest in exposing problems rather than celebrating successes[1]. Others believe they “already know” what happens on the exhibition floor, but without systematic observation they may only be seeing what they want to see, or drawing conclusions about what works and what doesn’t based on their own assumptions rather than evidence.

For many, the “aha!” moment comes when they become involved in the data collection process themselves. When people have an opportunity to observe and interview visitors, they start to appreciate where evaluation findings come from, and are subsequently more interested in the results. Several delegates described Damascene conversions of reluctant curators once they had participated in an evaluation. But others expressed reservations about this approach – does it give colleagues an oversimplified view of evaluation? Does it create the impression that “anyone can do evaluation”, therefore undermining our skills, knowledge and expertise? What about the impact on other functions of the museum: if curators, designers and others are spending time doing evaluation, what parts of their usual work will need to be sacrificed?

A counter to these reservations is that visitors are arguably the common denominator of *all* activities that take place in informal learning institutions, even if this isn’t obvious on a day-to-day basis in many roles. Participating in data collection acts as a reminder of this. At its best, evaluation also helps foster a more reflective practice in general. Nonetheless, the concerns are valid.

Capacity building across the Field:

I found this part of the discussion harder to be part of, as it was (understandably) focused on the US experience and was difficult to extrapolate to the Australian context due to massive differences in scale. One obvious difference is the impact that the National Science Foundation has had on the American museum landscape. NSF is a major funder of the production and evaluation of informal science learning [2]. NSF-supported websites like informalscience.org host literally hundreds of evaluation reports (that actually extend beyond the “science” remit that the site’s name implies – it’s a resource worth checking out).

There are a considerable number of science centres and science museums across the US, and because of these institutions’ history of prototyping interactive exhibits, they tend to have a larger focus on evaluation and visitor research than (say) history museums. Indeed, most of the delegates at VSA seem to represent science centres, zoos and aquariums, or are consultant evaluators for whom such institutions are their principal clients. There was also a reasonable art museum presence, and while there were a few representatives of historical sites, on the whole I got the impression that history museums were under-represented.

In any case, I came away with the impression that exhibition evaluation is more entrenched in museological practice in the US than it is here in Australia. It seems that front-end and formative research is commonly done as part of the exhibition development process, and conducting or commissioning summative evaluations of exhibitions is routine. In contrast, besides a handful of larger institutions, I don’t see a huge amount of evidence that exhibition evaluation is routinely happening in Australia. Perhaps this is just the availability heuristic at play – the US is much bigger so it’s easier to bring specific examples to mind. Or it could be that evaluation is happening in Australian museums, but as an internal process that is not being shared? Or something else?

 

[1] A lesson from this is that evaluation reports may read too much like troubleshooting documents and not give enough attention to what *is* working well.

[2] The Wellcome Trust plays a similar role in the UK, but as far as I’m aware there is nothing comparable (at least in scale) in Australia.

Museum Life Interview

I’m currently on my way back from Albuquerque, New Mexico, where I attended the Visitor Studies Association annual conference. It’s been a very thought-provoking conference and a chance for me to present some of the results from my PhD research (more on the conference later, once I’ve had a chance to digest it all).

Sometimes when you’re in a different time zone, interesting opportunities present themselves – this time, while in Albuquerque, I was a guest on Carol Bossert’s online radio program Museum Life. It streamed live, but is also available online:

It’s an in-depth interview: the whole show goes for a little under an hour (so go grab a coffee now if you plan to listen...). I talk a little about how I came to museums and what led me to pursue a PhD, give an overview of some of my research findings, and discuss how I think these might be applied to museum practice. I hope you find it interesting!