“Will You Take My Survey?”: Data Collection Methods

By Alex Lussenhop

Do you think you know the best way to collect data about museum audiences? When we started our COVES piloting process, I thought I did.

At the Museum of Science, Boston, we’d been collecting audience data since 2009 using an email collection method. We’d approach visitors during their visit, tell them about our survey, and invite them to give the data collector their email address to fill out the survey later. However, some of our Governing Body institutions used different data collection methods, including exit surveys and interviews. Our goal of comparable multi-institutional data relied on a consistent data collection method, so we had to choose one. The piloting process focused on trying the three methods (email collection, exit survey, and full exit interview) at as many different types of sites as possible in order to answer questions about the practicality and data quality of each method.

After six months of piloting different data collection methods at our eight pilot sites, we reached some surprising conclusions (surprising to me, anyway). For one, while visitors are often willing to give out their emails on the museum floor to a data collector who asks, very few actually fill out the survey they are sent. This led to the email collection method having a final response rate far lower than the other two tested methods. Only 19% of visitors approached on the museum floor completed an online survey, compared to 62% of visitors approached for an exit survey and 51% of those approached for an exit interview (Fig. 1).

Fig. 1: Percentage of visitors that completed each data collection method

Likely because of this low response rate, the email collection method was also the least efficient. It required an average of 21 minutes of data collection time per completed survey, compared to 14 minutes for the exit survey method and 17 minutes for the interview method (Fig. 2). While email collection may feel more efficient because it imposes very little on a visitor’s time on the floor, it ultimately takes more staff time per completed survey.

Fig. 2: Average data collection time per completed survey for each method
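
One way to see why a low response rate hurts efficiency is a quick back-of-the-envelope calculation: staff time per completed survey is roughly the data collector’s time per visitor approached divided by the completion rate. The short Python sketch below illustrates this. The per-approach minutes are hypothetical values chosen only for illustration (the pilot reported per-completed-survey times, not per-approach times), while the completion rates are the pilot figures from Fig. 1.

```python
# Back-of-the-envelope sketch: staff minutes per completed survey =
# minutes per visitor approached / completion rate.
# Per-approach minutes are HYPOTHETICAL illustrations; completion rates
# are the pilot figures reported in Fig. 1.

methods = {
    "email collection": {"minutes_per_approach": 4.0, "completion_rate": 0.19},
    "exit survey": {"minutes_per_approach": 8.5, "completion_rate": 0.62},
    "exit interview": {"minutes_per_approach": 8.5, "completion_rate": 0.51},
}

for name, m in methods.items():
    per_completed = m["minutes_per_approach"] / m["completion_rate"]
    print(f"{name}: ~{per_completed:.0f} minutes of staff time per completed survey")
```

Even though an email request takes only a few minutes on the floor, dividing that effort by a 19% completion rate makes it the most expensive method per usable response.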

But What About the Data?

Did different methods result in different kinds of visitor responses? Two of our pilot sites, one small and one medium-sized, split their pilot data collection between all three methods, allowing us to compare data directly across methods at those institutions. We chose to focus on the one rating question on the pilot survey: the overall Net Promoter Score question, “How likely is it that you would recommend [institution name] to a friend or colleague?” rated on a 0-10 scale, where 0 is “Not at all likely” and 10 is “Extremely likely.” Did interviewed, surveyed, or emailed visitors answer this question differently?
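
As background, a full Net Promoter Score is conventionally computed from exactly this 0–10 question: the percentage of “promoters” (ratings of 9 or 10) minus the percentage of “detractors” (ratings of 0 through 6). The minimal Python sketch below shows that standard calculation; note that the comparison in the next paragraph looks only at the share of perfect “10” ratings rather than the full score.

```python
def net_promoter_score(ratings):
    """Standard NPS from the 0-10 'likely to recommend' question:
    % promoters (ratings 9-10) minus % detractors (ratings 0-6)."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

# Example with made-up ratings (not pilot data):
print(net_promoter_score([10, 10, 9, 8, 10, 6, 9]))  # ~57.1
```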

It turned out that at the small pilot site, we saw a statistically significant difference in Net Promoter Scores. In particular, the interview method elicited higher responses than either the exit survey or the email collection method: 93% of interviewed visitors rated this question a perfect “10,” compared to 83% of emailed visitors and just 58% of visitors surveyed at exit!

It’s not that exit survey-takers wouldn’t recommend the museum—most who didn’t give a rating of “10” gave a rating of “9.” However, these data indicate that at least at this museum, there might be some sort of “social desirability effect,” in which visitors gave the museum higher ratings in a face-to-face conversation than they would when filling out the survey more privately.

Things to Remember

All in all, piloting three data collection methods simultaneously was an illuminating process, and it gave us the confidence to choose the exit survey method for COVES data collection. When planning your own data collection efforts, keep in mind that the volume of responses, while important, isn’t the only major factor to consider. As responsible evaluators, we want to think about:

  • how our collection methods will impose on the visitor;
  • how to manage staff and volunteer time efficiently; and
  • how to maximize the response rate while still collecting meaningful data.

In the end, be adaptable; if things aren’t working, try something new! Evaluation is a learning process, but having actionable data your institution can use is worth the trial-and-error.


Questions about COVES? Let us know at info@understandingvisitors.org.