Survey Responses – Benchmarks and Tips

I’ve now collected a grand total of 444 questionnaires for my PhD research (not including pilot samples) – which is not far off my target sample of 450-500. Just a few more to go! Based on my experiences, I thought I’d share some of the lessons I’ve learned along the way . . .

Paper or Tablet?

My survey was a self-complete questionnaire (as opposed to an interviewer-led survey) that visitors filled out while on the exhibition floor. During piloting I tried both a paper version and an electronic version on an iPad, but ended up opting for the paper version, as I think the pros outweighed the cons for my purposes.

The big upside of tablet-based surveys is that there is no need for manual data entry as a separate step – survey programs like Qualtrics can export directly into an SPSS file for analysis. And yes, manually entering data from paper surveys into a statistics program is time-consuming, tedious and a potential source of error. The other advantage of a tablet-based survey (or any electronic survey, for that matter) is that you can set up rules that prompt people to answer questions they may have inadvertently skipped, automatically randomise the order of questions to control for ordering effects, and so on. So why did I go the other way?

First of all, time is a trade-off: with paper surveys, I could recruit multiple people to complete the survey simultaneously – all I needed was a few more clipboards and pencils and plenty of comfortable seating nearby. With only one tablet, by contrast, only one person could be completing my survey at a time. Once you take into account how many more paper surveys I could collect in a given period, I think I still come out ahead even after doing the manual data entry. Plus I found that entering the data manually is a useful first pass of analysis, particularly during the piloting stages, when you’re looking to see where the survey design flaws are.

Secondly, I think many visitors were more comfortable using the old-fashioned paper surveys. They could see at a glance how long the survey was and how much further they had to go, whereas this was less transparent on the iPad (even though I had a progress bar).

This doesn’t mean I would never use a tablet – I think they’d be particularly useful for interviewer-led surveys, where you can only survey one participant at a time anyway, or for large-scale surveys with multiple interviewers and tablets in use.

Refining the recruitment “spiel”

People are understandably wary of enthusiastic-looking clipboard-bearers – after all, they’re usually trying to sell you something or sign you up to something. In my early piloting I think my initial approach may have come across as too “sales-y”, so I refined it so that the first thing I said was that I’m a student. My gut feeling is that this immediately made people less defensive and more willing to listen to the rest of my “spiel” explaining the study and inviting them to participate. Saying I was a student doing some research made it clear up front that I was interested in what they had to say, not in sales or spamming.

Response, Refusal and Attrition Rates

Like any good researcher should, I kept a fieldwork journal while I was out doing my surveys. In it I documented everyone I approached, approximately what time I did so, whether they took a participant information sheet or refused, and, if they refused, what reason (if any) they gave for doing so. During busy periods, recording all this got a bit chaotic, so some pages of notes are more intelligible than others, but over time I developed a shorthand for noting the most important things. The journal was also a place to document general facts about the day (what the weather was like, whether there was a cruise ship in town that day, times when large numbers of school groups dominated the exhibition floor, etc.). Using this journal, I’ve been able to look at what I call my response, refusal and attrition rates.

  • Response rate: the proportion of visitors (%) I approached who eventually returned a survey
  • Refusal rate: the proportion of visitors (%) I approached who refused the invitation to participate on the spot
  • Attrition rate: this one is a little specific to my particular survey method and wouldn’t always be relevant. I wanted people to complete the survey after they had finished looking around the exhibition, but for practical reasons could not do a traditional “exit survey” method (since there’s only one of me, I couldn’t simultaneously cover all the exhibition exits). So, as an alternative, I approached visitors on the exhibition floor, invited them to participate and gave them a participant information sheet if they accepted my invitation. As part of the briefing I asked them to return to a designated point once they had finished looking around the exhibition, at which point I gave them the questionnaire to fill out [1]. Not everyone who initially accepted a participant information sheet came back to complete the survey. These people I class as the attrition rate.

So my results were as follows: I approached a total of 912 visitors, of whom 339 refused to participate, giving an average refusal rate of 37.2%. This leaves 573 who accepted a participant information sheet. Of these, 444 (77.5%) came back and completed a questionnaire, giving me an overall average response rate of (444/912) 48.7%. The attrition rate as a percentage of those who initially agreed to participate is therefore 22.5% (129/573), or, if you’d rather, 14.1% of the 912 people initially approached.
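
For anyone who wants to run the same arithmetic on their own journal tallies, here’s a minimal Python sketch. The counts are the ones quoted above; the variable names and layout are just mine, for illustration only:

    # Raw counts from the fieldwork journal
    approached = 912   # visitors invited to participate
    refused = 339      # declined on the spot
    completed = 444    # returned a completed questionnaire

    accepted = approached - refused      # took a participant information sheet (573)
    dropped_out = accepted - completed   # accepted a sheet but never came back (129)

    print(f"Refusal rate:   {refused / approached:.1%}")     # 37.2%
    print(f"Response rate:  {completed / approached:.1%}")   # 48.7%
    print(f"Return rate:    {completed / accepted:.1%}")     # 77.5%
    print(f"Attrition (of accepters):  {dropped_out / accepted:.1%}")    # 22.5%
    print(f"Attrition (of approached): {dropped_out / approached:.1%}")  # 14.1%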

So is this good, bad or otherwise? Based on some data helpfully provided by Carolyn Meehan at Museum Victoria, I can say it’s probably at least average. Their average refusal rate is a bit under 50% – although it varies by type of survey, venue (Museum Victoria has three sites) and interviewer (some interviewers have a higher success rate than others).

Reasons for Refusal

While not everyone gave a reason for not being willing to participate (and they were under no obligation to do so), many did, and often apologetically so. Across my sample as a whole, reasons for refusal were as follows:

  • Not enough time: 24%
  • Poor / no English: 19%
  • Child related: 17%
  • Others / No reason given: 39%

Again, these refusal reasons are broadly comparable to those experienced by Museum Victoria, with the possible exception that my refusals included a considerably higher proportion of non-English speakers. It would appear that the South Australian Museum attracts a lot of international tourists or other non-English speakers, at least during the period I was doing surveys.

Improving the Response Rate

As noted above, subtly adjusting the way you approach and invite visitors to participate can have an impact on response rates. But there are a few other things that can help as well:

  • Keep the kids occupied: while parents with hyperactive toddlers are unlikely to participate under any circumstances, those with slightly older children can be encouraged if you can offer something to keep the kids occupied for 10 minutes or so. I had some storybooks and some crayons/paper, which worked well – in some cases the children were still happily drawing after the parents had finished the survey and had to be dragged away!
  • Offer a large print version: it appears that plenty of people leave their reading glasses at home (or in the bag they’ve checked into the cloakroom). Offering a large print version gives these people the option to participate if they wish. Interestingly, however, some people claimed they couldn’t read even the large print version without their glasses. I wonder how they can see anything at all sans spectacles if this is the case . . . then again, perhaps this is a socially acceptable alibi used by people with poor literacy levels?
  • Comfortable seating: an obvious one. Offer somewhere comfortable to sit down and complete the questionnaire. I think some visitors appreciated the excuse to sit down and have a break! Depending on your venue, you could also lay out some sweets or glasses of water.
  • Participant incentives: because I was doing questionnaires on the exhibition floor, putting out food or drink was not an option for me. But I did give everyone who returned a survey a voucher for a free hot drink at the Museum cafe. While I don’t think many (or any) did the survey just for the free coffee, it does send a signal that you value and appreciate your participants’ time.

[1] A potential issue with this approach is cuing bias – people may conceivably behave differently if they know they are going to fill out a questionnaire afterwards. I tried to mitigate this with my briefing, in which I asked visitors to “please continue to look around this exhibition as much or as little as you were going to anyway”, so that visitors did not feel pressure to visit the exhibition more diligently than they may have otherwise. Also, visitors did not actually see the questionnaire before they finished visiting the exhibition – if they asked what it was about, I said it was asking them “how you’d describe this exhibition environment and your experience in it”. In some cases I reassured visitors that it was definitely “not a quiz!”. This is not a perfect approach of course, and I can’t completely dismiss cuing bias as a factor, but any cuing bias would be a constant between exhibition spaces as I used comparable methods in each.
