By Michelle Kenner
In 2013, a group of professionals from the science center and museum field got together to discuss how standardized measures could help science centers and museums understand themselves and each other better. In 2014, COVES received a grant from the Institute of Museum and Library Services (IMLS), and I joined the project shortly thereafter.
As someone fresh out of graduate school, recently back in the United States, and inexperienced in research and evaluation, COVES was not an opportunity to “get my feet wet”; it was like being pushed into the 12-foot end of a pool without ever having had a swimming lesson.
In other words, I had no idea what I was doing.
That being said, I know exactly how it feels to be brand new to evaluation. I’ve spoken with several non-evaluator science center and museum professionals about what evaluation is like at their institutions, and I’m often met with the same sentiment: it’s important, but hiring a data collector, evaluator, or researcher, whether full-time or part-time, isn’t feasible.
Part of COVES’ directive is to build evaluation capacity in science centers and museums so that they collect reliable data that can help them make better, more informed decisions. I can’t say I’m an expert (not anywhere close), but here are a few ways I’ve kept my head above water in the middle of the evaluation community pool:
Take full advantage of colleagues at other organizations who know a thing or two about evaluation. I learned the most in those first few months on calls with COVES’ Governing Body and Research Team (after which I promptly Googled terms and concepts that went flying over my head). In fact, admitting how much I don’t know has opened up opportunities for others to teach me; the great thing about working with evaluators is they understand the importance of robust, reliable data, and they want you to succeed in collecting them.
Practice with survey tools. COVES uses Qualtrics to power its survey, and I took advantage of Qualtrics’s online training materials to practice creating surveys with different types of questions and features. There are myriad ways to design a survey, but that doesn’t mean you can’t enjoy the process. Pilot test your survey, even if it’s just something you created for fun; getting informed opinions will help you craft better questions on your next survey.
Take other people’s surveys. Consider the questions they choose to ask, what information they’re looking for, and what scales they use. Can their methods be adapted to meet the needs at your institution?
Also, read widely. When articles cite studies, how do they frame the results? Where did the data come from? What are the authors trying to communicate, and is there another side to the story? Not only will exposure to various surveys and studies help you craft better surveys yourself, it will make you a more critical consumer of data.
Learning evaluation may seem daunting, but in the end, it’s all about curiosity: every question is an opportunity to discover something new about our visitors, our preconceived notions, and our field. And while I may never become a professional evaluator myself, I have a much greater appreciation for it in all its complexity than I did three years ago. Whatever you do, don’t give up!
Are you a self-taught evaluator? Have any great resources I should check out as I continue to learn? Tell us more at email@example.com.