Ample Sampling, Part II: You’re not such a special snowflake after all

By Ryan Auster

As noted in my previous blog, sampling can play a big role in any study involving museum visitors. And when we first began considering the systematic sampling method that COVES institutions would use to collect data, one of the questions we had was, “How important are individual differences?”

Allow me to clarify just a bit: individual differences – the personal characteristics, previous experiences, and museum visit history that visitors bring with them when they come to any of our institutions – are extremely important, and are exactly what COVES is attempting to unpack through our work with visitors. Typically, when we consider the “visitor experience,” we talk about it through the lens of these differences along with important factors such as the composition of the visiting group. (Worth noting: only 2% of visitors surveyed as of this posting attended by themselves, with no “group” of which to speak.) So when we asked ourselves the question above about individual differences, what we were really asking was, “Do responses differ when we sample visiting groups compared to when we sample visiting individuals?”1 The pilot study that we undertook in the winter of 2016 set out to answer this question.

Note: Harrison Ford was ineligible for participation in our pilot study due to his ability to express many emotions simultaneously.

Group sampling: all adults eligible to participate, but only one selected.

To test this, two of our pilot sites took on the task of sampling groups of visitors and individuals within groups using a consistent data collection method (one using an onsite exit survey, one using an email post-visit survey). Sampling groups entailed asking any one adult within a visiting group for their participation.


Individual sampling: only one preselected adult eligible to participate.

Sampling individuals meant identifying a specific adult within the visiting group and asking for their participation.

(How this specific adult was chosen was systematically random – for example, the adult closest to the data collector.) Because both sampling methods were used at each institution, we were able to compare visitor characteristics (e.g., visit history, education level, etc.) as well as perceptions of the visit (e.g., Net Promoter Score®) between visitors sampled using the two methods at both of these institutions.
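For readers unfamiliar with it, Net Promoter Score reduces 0–10 “how likely are you to recommend us?” ratings to a single number: the percentage of promoters (9–10) minus the percentage of detractors (0–6). A minimal sketch of the standard calculation – the ratings below are invented purely for illustration, not COVES data:

```python
def net_promoter_score(ratings):
    """Standard NPS: % promoters (9-10) minus % detractors (0-6).

    `ratings` are answers to the 0-10 "how likely are you to recommend
    us?" question; passives (7-8) count only toward the denominator.
    """
    if not ratings:
        raise ValueError("need at least one rating")
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

# Invented example: 3 promoters, 1 passive, 1 detractor out of 5.
score = net_promoter_score([10, 9, 9, 8, 4])  # (3 - 1) / 5 * 100 = 40.0
```

Comparing mean scores like this one between the two sampling strategies is exactly the kind of check the pilot performed.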

No differences were detected between those visitors who participated using the group sampling strategy and those who participated using the individual sampling strategy.2 Perhaps unsurprisingly, data collector feedback overwhelmingly suggested that using the group sampling strategy was preferred. Visitors who were singled out for participation using the individual sampling strategy were less likely to say yes (30% response rate, as opposed to 33% for group sampling). Furthermore, almost all data collectors who tested both strategies commented on how awkward it felt to isolate an individual for testing rather than ask the group – and how unethical it felt to enforce individual participation if other group members offered to contribute to the response.

As no sampling bias was detected with the group sampling strategy, and because data collectors preferred this method of approaching visitors, COVES employs the systematic random sampling of visiting groups for collecting data.

 

Eager to continue the nerdy conversation? Email me at rauster@mos.org or come to one of my upcoming conference presentations!


1Another way of thinking about this is to pose the question, “Do individuals in the same group, who typically share most, if not all, aspects of the museum experience, feel the same about their visit?” Unfortunately, due to issues of perceived burden, we did NOT invite multiple individuals from the same group to complete separate surveys to test for within-group differences. Although this would have enabled us to answer the question about differences within visiting groups, what we were most interested in learning was if the group sampling methodology was fundamentally flawed by allowing an overly eager visitor to complete the survey, despite the otherwise random sampling employed.

2Non-parametric, inferential analyses such as chi-square and Mann-Whitney tests were performed to statistically compare responses between data collected using the two sampling strategies.
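For a rough sense of what the chi-square part of that comparison looks like, here is a minimal pure-Python sketch for a 2×2 table (sampling strategy × response category). The counts are invented for illustration; a real analysis would use a statistics package such as scipy.stats, which also provides the Mann-Whitney test:

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 contingency table [[a, b], [c, d]]
    (shortcut formula, no continuity correction)."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Invented counts: rows = group vs. individual sampling,
# columns = e.g. promoters vs. everyone else.
stat = chi_square_2x2(40, 60, 42, 58)

# With 1 degree of freedom, the critical value at alpha = 0.05 is 3.841;
# a statistic below it means no detectable difference between strategies.
no_difference = stat < 3.841
```

A finding of “no differences detected,” as in the pilot, corresponds to statistics falling below that critical value across the measures compared.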