By Ryan Auster
The choice of whom to include in a study can have a big impact on what you learn. For example, in studying the average height of students at American universities, you wouldn’t select only players on basketball teams to measure…but you wouldn’t exclude them, either! Selecting participants for a study first involves understanding the population of interest – the entire group of people (or things) being studied – and then choosing from that large group a smaller group to question, measure, or observe. We sample from the population because we don’t have access to everyone (like all university students across the country), or because it would be too costly or time-consuming to survey everyone. This selection process in research or evaluation is called sampling, and there are several possible sampling techniques.
COVES employs simple random sampling: although we don’t have access to every visitor who comes through our doors, we collect data at different (random!) times of day and days of the week throughout the year, so that every visitor has the same probability of being asked to participate in our study. Of course, our data collectors don’t spend all day every day on the floors, so technically this isn’t 100% true, but we don’t limit our potential sample by any knowable factors (like collecting data only on rainy days, or during lunch time, or anything else that could create an obvious bias).
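The idea of randomizing collection sessions can be sketched in a few lines of Python. This is purely illustrative: the day labels, time blocks, and session count are invented for the example, not COVES’s actual schedule.

```python
import random

# Illustrative sketch: build every possible (day, time block) session,
# then draw a random subset to staff with data collectors. Spreading
# sessions randomly across the week is what gives each visitor a
# roughly equal chance of being approached.
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
time_blocks = ["morning", "midday", "afternoon"]
all_sessions = [(d, t) for d in days for t in time_blocks]

random.seed(42)  # fixed seed so the example is reproducible
scheduled = random.sample(all_sessions, k=6)  # 6 distinct random sessions
```

Because `random.sample` draws without replacement, no session is scheduled twice, and over many weeks the schedule covers mornings, weekends, and weekdays alike.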
Once data collectors begin a data collection session, we provide a protocol for them to follow, which we believe helps avoid invisible bias: the tendencies we may have as humans, conscious or not, to speak to other individuals who look like we do, act like we do, or share some other trait. Although this is not “systematic” sampling in the strict sense, since we don’t order visitors by any specific characteristic and select every nth one, it does give data collectors a systematic method to use when approaching visitors to ask if they’d be willing to complete our survey.
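A fixed approach rule is what takes the choice of whom to talk to out of the data collector’s hands. The specific COVES protocol isn’t detailed here, so as a hypothetical stand-in, here is one common rule: approach every nth visitor who passes a set point.

```python
def select_visitors(visitor_stream, n):
    """Hypothetical 'approach every nth visitor' rule.

    visitor_stream is the sequence of visitors passing a fixed point;
    the rule mechanically flags every nth one, so the collector's
    personal tendencies never enter into who gets approached.
    """
    return [v for i, v in enumerate(visitor_stream, start=1) if i % n == 0]

visitors = ["v1", "v2", "v3", "v4", "v5", "v6", "v7", "v8"]
select_visitors(visitors, 3)  # → ["v3", "v6"]
```

Any rule with this shape works; what matters is that it is decided before the session starts and leaves no discretion in the moment.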
While we may not be able to guarantee that our samples are truly generalizable to our entire museum audience (because we don’t have census-level information on this population), the combination of random sampling and a systematic protocol helps ensure that the visitors included in our study are as representative of our overall audience as possible, given the many constraints we operate under: the obvious ones being time and money, and, perhaps less obviously, that unlike a true randomized controlled trial, we can’t require anyone to provide us with data!
Without detailing the other sampling techniques you may have heard of or used before, it is important to acknowledge the critical role of sampling when collecting data from visitors. Any approach in which visitors are asked to participate in a survey, compared to one in which visitors self-select to participate (comment cards, anyone?), offers vast improvements in representativeness and yields drastically different data. More often than not, you will find that visitors who self-select to complete a comment-card-style survey tend to be very opinionated: those who have had either an exceptionally positive experience or a dismally upsetting one. This bimodal response pattern, in which the two extremes are heavily over-represented, does not reflect the typical visit and can lead to misinterpretations of the overall museum experience.
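You can see this bias emerge in a toy simulation. The satisfaction scores and response probabilities below are invented for illustration, but they capture the mechanism: when visitors at the extremes are far more likely to leave a card, the resulting data looks bimodal even though the underlying experience isn’t.

```python
import random

random.seed(0)

# Invented population: 10,000 visits with satisfaction scores 1-5.
population = [random.choice([1, 2, 3, 4, 5]) for _ in range(10_000)]

# Assumed self-selection: extreme experiences (1 or 5) are much more
# likely to produce a comment card than middling ones.
respond_prob = {1: 0.50, 2: 0.10, 3: 0.05, 4: 0.10, 5: 0.50}
comment_cards = [s for s in population if random.random() < respond_prob[s]]

# A random sample of the same visits, for comparison.
random_sample = random.sample(population, 200)

def extreme_share(scores):
    """Fraction of scores at either extreme (1 or 5)."""
    return sum(1 for s in scores if s in (1, 5)) / len(scores)

# The comment cards over-represent the extremes; the random sample
# tracks the true population much more closely.
print(extreme_share(population), extreme_share(comment_cards))
```

Under these assumed probabilities, roughly 40% of the population sits at the extremes, but they make up the large majority of the comment cards, which is exactly the misleading bimodal pattern described above.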
As you seek to better understand your visitors, we strongly encourage you to think about how you collect data so that you can be sure the information you are getting is accurate, and therefore actionable.
Enjoy nerdy methodology talk? Look for my next post on our piloting of individual vs. group sampling techniques.