The What, When and How of Participant Incentives

[Note: This is a modified version of an article that first appeared in Museum Australia’s Evaluation and Visitor Research Network’s Spring 2014 newsletter]

Introduction

We’ve all seen it; we’ve all done it: Complete our survey and enter the draw to win! Agree to be interviewed and get a free pen! Researchers call these “participant incentives”, which generally speaking are defined as “benefit[s] offered to encourage a person to participate in a research program”.[1] Offering incentives is considered to be good practice in evaluation and visitor research. Visitors agree to give us time out of their visit for the benefit of our research, and it behoves us to value this time and use it ethically[2]. If we consider research as a social contract, incentives are a gesture of reciprocity, acknowledging the value of visitors’ time.

But what kind of incentive is appropriate for a given piece of research? What’s feasible? What’s ethical? What might be some unintended consequences? This article will explore some of the issues surrounding participant incentives.

The Bigger Picture

To understand the role of participant incentives, we first need to consider why people respond to surveys in the first place. There seem to be three main kinds of reasons: altruistic (people who want to help, or see it as their civic duty); egotistic (having a specific stake in the results, or simply enjoying doing surveys); and study-specific (interest in the topic or organisation)[3]. Incentives strengthen the “egotistic” reasons for completing a survey. But appealing to respondents’ altruism can also increase response rates, as can the fact that many visitors hold museums in positions of high trust and regard.

Particularly for online surveys, incentives have been shown to increase the response rate, but this also depends on the length of the survey, who you’re trying to target, and whether they have a stake in the research outcome[4]. As a general rule of thumb, you should state up-front how long the survey will take, and offer an incentive that reflects the time commitment you are requesting. For online surveys, anything taking longer than 20 minutes to complete counts as a “long” survey that warrants an incentive. One of the most popular incentives is to give participants the opportunity to enter a prize draw for something of considerable value (e.g. gift certificates worth at least $100, a tablet computer or similar items).

However, a higher response rate isn’t necessarily the goal in itself – irrespective of the response rate, your survey strategy should aim to minimise systematic differences between people who do respond and those who do not (nonresponse bias). This is distinct from overall response quality, which does not appear to be affected by incentives[5]. Nonetheless, if there is a particular target audience of interest (e.g. teachers, visitors who have participated in a particular programme, visitors from a particular cultural or ethnic group, etc.), you may need to consider ways to increase the response rate among those people in particular.
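
By way of a concrete (and entirely made-up) illustration of checking for nonresponse bias, the sketch below compares the profile of survey respondents against a known visitor profile, such as one derived from admissions or exit-count data. The group names, percentages and threshold are hypothetical, not figures from any real study.

```python
# Hypothetical shares of each group in the overall visitor population
# (e.g. taken from admissions or exit-count data).
population_profile = {
    "families": 0.45,
    "adult groups": 0.30,
    "tourists": 0.15,
    "school groups": 0.10,
}

# Hypothetical shares of each group among completed survey responses.
respondent_profile = {
    "families": 0.60,
    "adult groups": 0.25,
    "tourists": 0.05,
    "school groups": 0.10,
}

# Flag groups that are noticeably over- or under-represented among respondents.
for group, expected in population_profile.items():
    observed = respondent_profile.get(group, 0.0)
    gap = observed - expected
    if abs(gap) >= 0.05:  # arbitrary "worth a closer look" threshold
        print(f"{group}: {observed:.0%} of respondents vs {expected:.0%} of visitors "
              f"({gap:+.0%}) - consider targeted recruitment or weighting")
```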

Compared to telephone and online surveys, there is very little published research about the practicalities of using incentives for onsite visitor interviews in museums and similar sites. Rather, examples of practice are shared through informal networks (more on this later).

Ethical Guidelines

Neither the Australian Market & Social Research Society (AMSRS) Code of Professional Conduct[6] nor the Australasian Evaluation Society’s Guidelines for Ethical Conduct of Evaluations[7] specifically mentions participant incentives; however, both outline important principles with which any choice of incentive should comply. In particular, the AMSRS code specifies that there must be a clear delineation between market research and “non-research activities” such as promotions or the compilation of databases for marketing purposes. This may have implications for what you can use as incentives, as well as for how you use any contact details you collect for the purposes of prize draws.

Care should be taken to ensure that incentives cannot be interpreted as coercion, particularly if an incentive is large enough that certain participants (e.g. at-risk groups) might take part reluctantly just to receive it. In any case, it has been suggested that it may be better to increase intrinsic motivations than to rely solely on monetary incentives[8].

Is it an Incentive, a Thank You, or Compensation?

The principle that monetary incentives should only be used as a last resort may appear at odds with the idea that visitors’ time is valuable and should be acknowledged as such. However, it’s largely to do with the way incentives are framed: an incentive can be considered an inducement to participate, but it can also be presented as a “thank you gift” that you give to visitors as a token of your appreciation. In this sense, the timing of the incentive may come into play. Giving an incentive in advance may increase participation and there is no evidence that it raises a sense of obligation among potential participants[9].

There is another type of payment that we should briefly mention here: compensation. This is particularly relevant where participation incurs direct costs (e.g. travel to a focus group session). Any costs that participants incur in order to take part must always be compensated.

Some Examples

In September 2014, there was a discussion on the Visitor Studies Association (VSA) listserv about the incentives that different institutions give to visitors who participate in short onsite surveys (around 5-10 minutes or less). Among this community of practice, the respective merits and drawbacks of different approaches were discussed[10]. The key points are summarised below:

Incentive: Vouchers for in-visit added extras (e.g. simulator rides, temporary exhibitions, etc.)
Features: Adds value to visitors’ experience with little or no direct cost to the museum.
Drawbacks / considerations: May lead to unanticipated spikes in demand for additional experiences – e.g. can the simulator accommodate everyone who’s given a voucher?

Incentive: Small gifts (e.g. pens/pencils, stickers, temporary tattoos, bookmarks, postcards, key-rings)
Features: Tangible and popular gifts, especially for children. If you’re surveying adults in a family group, giving children a few items to choose from can keep them usefully occupied while the adults respond to the survey. Cheap if purchased in bulk.
Drawbacks / considerations: The gift needs to match the target audience of the survey (e.g. temporary tattoos are great for kids, less so for adult respondents). Children may end up using stickers to decorate your exhibits!

Incentive: Food / coffee / ice cream vouchers
Features: Generally popular and well-received.
Drawbacks / considerations: Can create a rush in the café if you’re doing large numbers of surveys. May be limited by the contract arrangements in place with caterers.

Incentive: Prize draws
Features: Popular with visitors and practical to implement with online surveys. The cost of a single big-ticket prize may work out cheaper than hundreds of small giveaways.
Drawbacks / considerations: Visitor contact details must be recorded for the prize draw, and these details must be kept separate from the survey responses to maintain anonymity (one way of doing this is sketched below). Be aware that offering a free membership as a prize may reduce membership take-up during the survey period[11].

Incentive: Gift certificates
Features: Can be used for longer surveys or detailed interviews that involve a longer time commitment and therefore warrant a higher-value incentive.
Drawbacks / considerations: Gift certificates may be seen as equivalent to cash from a tax perspective.

Incentive: Free return tickets
Features: No direct costs. Tickets can be given away to friends and family if participants can’t re-visit.
Drawbacks / considerations: Not relevant to free-entry institutions. Could be perceived as marketing.

Incentive: Discounted museum membership
Features: Encourages a longer-term relationship with the visitor.
Drawbacks / considerations: Not an attractive incentive for tourists.
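
On the prize-draw point about keeping contact details separate from survey responses: one minimal sketch of how that separation might be handled is below, assuming responses are collected into simple CSV files. The file names, fields and helper function are hypothetical illustrations, not a prescribed method.

```python
import csv

def save_survey_response(answers, email_for_draw=None,
                         responses_file="survey_responses.csv",
                         draw_file="prize_draw_entries.csv"):
    """Append an anonymous response and, separately, an optional prize-draw entry.

    The two files share no respondent ID, so draw entries cannot be linked
    back to individual responses. Names and fields are illustrative only.
    """
    with open(responses_file, "a", newline="") as f:
        csv.writer(f).writerow(answers)           # anonymous survey answers only

    if email_for_draw:                            # contact details go to a separate file
        with open(draw_file, "a", newline="") as f:
            csv.writer(f).writerow([email_for_draw])

# Example: one completed survey with an opt-in prize-draw entry.
save_survey_response(["very satisfied", "first visit", "35-44"],
                     email_for_draw="visitor@example.com")
```

The design choice doing the work here is that the two stores share no respondent ID, so prize-draw entries can be used for the draw and then deleted without touching the survey data.
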
Conclusions

Incentives are established good practice in evaluation and visitor research, and are generally intended as a token of appreciation for visitors’ time. Although incentives can increase response rates, this is not necessarily the principal reason they are used. Like all aspects of visitor research, decisions about the size, nature and timing of incentives must be thought through from an operational, financial and ethical perspective at the outset of the research. Done well, incentives offer the dual benefits of increasing responses and creating goodwill among visitors.

References

[1] Arts Victoria. (n.d.) Visitor Research Made Easy, p. 82 (sourced from: http://www.arts.vic.gov.au/Research_Resources/Resources/Visitor_Research_Made_Easy)

[2] Bicknell, S., and Gammon, B. (1996). Ethics and visitor studies – or not? Retrieved from: http://informalscience.org/images/research/VSA-a0a4h9-a_5730.pdf

[3] Singer, E., and Ye, C. (2013) The use and effects of incentives in surveys. Annals of the American Academy of Political and Social Science, Vol 645, 112-141

[4] Parsons, C. (2007) Web-based surveys: Best practices based on the research literature. Visitor Studies, Vol 10(1), 13-33.

[5] Singer & Ye (2013).

[6] http://www.amsrs.com.au/professional-standards/amsrs-code-of-professional-behaviour

[7] http://www.aes.asn.au/images/stories/files/membership/AES_Guidelines_web.pdf

[8] Singer & Ye (2013).

[9] Singer & Ye (2013).

[10] Contributors to this discussion included (in alphabetical order): Stephen Ashton, Sarah Cohn, Susan Foutz, Ellen Giusti, Joe Heimlich, Karen Howe, Amy Hughes, Elisa Israel, Kathryn Owen, Beverly Serrell, Marley Steele Inama, Carey Tisdal and Nick Visscher (with apologies to any contributors who have been missed). VSA listserv archives can be accessed via https://list.pitt.edu/mailman/listinfo/vsa

[11] Visitor Research Made Easy, p. 60.

Visitor Observation: Privacy Issues

During my PhD I spent some time tracking and timing visitors to learn more about visitor behaviour in the exhibitions I was studying (more on the history and applications of visitor tracking here). Recently, I was asked about the privacy implications of doing such research. What steps do we need to take to ensure we’re a) staying on the right side of the law and b) respecting visitors’ rights to informed consent and ability to opt out of participating in research?

On the first part (i.e. The Law), I’ll tread carefully since I’m not a lawyer and the specifics will vary from place to place anyway. Generally speaking, though, museums count as a “public place”, and people can reasonably expect to be seen in public places. So if you’re only documenting visitors’ readily observable public behaviour, and nothing that might allow them to be identified as individuals, you’re probably in safe territory. However, it would be wise to check whether your museum is classed (in a legal sense) as a “public place” – for instance, an entry charge may implicitly create an expectation of some level of privacy on the part of paying guests.

So how about different approaches to informed consent?

The first consideration is cuing – do you tell visitors they’re going to be watched and/or listened to at the start of their visit? If so, then you are studying cued visitors – and gaining informed consent is relatively straightforward. When you approach potential participants, you explain the benefits and risks of participating, and they can decide whether they want to be part of it or not. The downside of cuing, of course, is that you’re probably no longer going to be documenting natural visitor behaviour – people tend to do different things when they know they are being watched.

Depending on what you’re studying, this may not be an issue – and, like contestants on Big Brother, visitors tend to forget they’re being watched or listened to after a while, even if they’re rigged up with audio recording equipment (Leinhardt & Knutson, 2004). Also, if you’re going to be tracking the same group of visitors over the course of a whole visit, which could mean following them for 2-3 hours, then you really do need to cue them first – otherwise, frankly, it just ends up getting creepy and weird for all concerned.

If you’re tracking visitors across a whole site, sooner or later they’re bound to notice you. Awkward. You’d be better off telling them first.

In contrast, tracking and timing uncued visitors through a single exhibition gallery can be done discreetly without visitors becoming aware they are being tracked (assuming you are not trying to hear what they are saying as well, meaning you can observe from a reasonable distance). It still takes a bit of practice, and is easier in some exhibitions than others. Even so, if someone approaches you and asks what you’re up to, the right thing to do is fess up, explain what you were doing, stop tracking that person and try again with a different visitor.
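
As an aside for anyone setting up this kind of tracking-and-timing study, the raw data is simple to structure: an anonymous visitor code, the total time in the gallery, and the exhibits stopped at with their durations. Below is a minimal sketch of such a record; the exhibit names, timings and field names are hypothetical, not a standard instrument.

```python
from dataclasses import dataclass, field

@dataclass
class TrackingRecord:
    """One uncued visitor tracked through a single gallery (all values hypothetical)."""
    visitor_code: str                   # anonymous code only - no names or identifying details
    group_type: str                     # e.g. "adult pair", "family with children"
    total_seconds: int                  # time from gallery entry to exit
    stops: dict = field(default_factory=dict)   # exhibit label -> seconds stopped

    def proportion_stopped(self, exhibits_in_gallery: int) -> float:
        """Share of the gallery's exhibits at which this visitor stopped."""
        return len(self.stops) / exhibits_in_gallery

# Example record for one tracked visitor in a 20-exhibit gallery.
record = TrackingRecord("V017", "family with children", total_seconds=540,
                        stops={"intro panel": 25, "interactive table": 180, "case 3": 40})
print(f"{record.visitor_code}: {record.total_seconds // 60} min in gallery, "
      f"stopped at {record.proportion_stopped(20):.0%} of exhibits")
```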

If you’re taking this uncued approach to visitor observation, you’re in a far greyer area with respect to informed consent. The usual approach is to post a sign at the entrance to the museum or the gallery informing visitors that observations are taking place, and giving them steps to take if they wish to opt out of being observed. In practice, this might be notices telling visitors which areas to avoid if they don’t want to be watched, or having a mechanism for visitors to opt-out by wearing a lapel sticker or wrist band (although chances are this won’t be necessary – it never came up in my research and my experience tallies with other researchers I’ve spoken to).

What about when you’re recording?

Things can get a little more complicated when you go beyond simple observation and field notes to audio or video recording of visitor behaviour. It’s one thing to watch publicly observable behaviour, another to have that behaviour recorded, replayed, and deconstructed ad infinitum. This doesn’t mean it’s not done – audio recording at individual exhibits dates back at least to the 1980s and Paulette McManus’s landmark study showing that visitors read labels more than it might first appear (McManus, 1989). In that study, specific exhibits were hooked up to a radio microphone linked to a tape (tape!) recorder, and an observer unobtrusively watched the exhibit from a safe distance, making field notes to aid subsequent interpretation. (Leinhardt and Knutson also emphasise how important observational data is for backing up audio recordings, which frequently contain snippets that make little sense without additional details about what was happening at the time.) As far as I can tell, visitors were uncued in this study.

Audio recording of uncued visitors poses fewer difficulties than video recording, as people can’t (easily) be identified from voice recordings alone. Things get trickier when you get to video, of course. My first exposure to video-based visitor research was seeing Christian Heath speak about his and Dirk vom Lehn’s work in the V&A’s British Galleries in the early 00s (Heath and vom Lehn, 2004). In this case, although they specify that visitors explicitly consented to being part of the research, it’s not obvious whether this was done in advance, or after the fact by approaching visitors once they’d left the exhibit of interest (and then discarding the data of those who refused to participate prior to analysis). This ex post facto approach is one way to ensure both uncued visitor behaviour and informed consent, but as I have no direct experience of it, I don’t know how high the refusal rate is or how complicated it is to ensure that data is discarded appropriately.

Irrespective of the type of informed consent, there is the issue of data storage. Gone are the days of tapes that could be kept under lock and key. You’ll need to have a data retention policy in place to ensure that anything that could potentially identify participants is kept secure, safe from those who have no need to access it . . . and from accidental syncing to your public Facebook feed.

Disclaimer: This is just general advice based on my own experience and what I can glean from some of the literature. Different parts of the world and different ethics committees may have different views, and the specifics of any given piece of research may make a difference as well.

References

Heath, C., & vom Lehn, D. (2004). Configuring Reception: (Dis-)Regarding the “Spectator” in Museums and Galleries. Theory, Culture & Society, 21(6), 43–65.

Leinhardt, G., & Knutson, K. (2004). Listening in on museum conversations. Walnut Creek, CA: AltaMira Press.
McManus, P. (1989). Oh, yes they do: How museum visitors read labels and interact with exhibit texts. Curator: The Museum Journal, 32(3), 174–189.