Ross Markle

Improving survey response rates and motivation: tips, tricks, and science

As with many things in assessment, getting more and better responses isn't magic. It's really about learning from the science, a little bit of best practice, and - perhaps most importantly - making the necessary commitment.


When I began working in higher education research, I quickly came to appreciate one of its conveniences compared with K-12 research: working with adults. I always imagined it was easier working with college students who, generally being over the age of 18, didn't need parental consent before participating in a study. While there is certainly bureaucracy and various contingencies to manage, work in higher education tends to be more streamlined, without the need to navigate factors such as school boards or teachers' unions.


But there is one great practical hurdle: motivation. Those student-adults who can give their own consent can also elect not to participate or, perhaps even worse, elect to participate with validity-threatening levels of motivation. This research consideration also plays an important part in most of the practical/operational work I do. Whether measuring student learning or gathering data to predict student success, I'm often asking students to provide data in what are generally considered "low-stakes" situations. That is, their results do not impact circumstances such as admission to an institution or graduation from one.


It's not just a problem I face, but one common across settings.


"How do we get students to respond?"


"How do we know we can trust these data?"


"Will students take this seriously?"


These are questions commonly asked by many faculty and staff at the colleges and universities with which I work, whether they're talking about a meaningful assessment to inform student success work, or something as simple as a satisfaction survey.


The good news is that there are various approaches not only to improving response rates, but also to improving response validity. Motivation, after all, is a tricky proposition. Too low, and responses can't be trusted because they are essentially random. Too high, and responses can be invalid for other reasons, such as misrepresentation (when dealing with self-report assessments, like surveys) or even cheating.


1. Distinguish between assessments and surveys

Whenever I'm working on a data collection effort, such as a survey, inevitably someone will ask, "How many responses do we need?" It's one of those questions that, while anticipated, always irks me. It suggests a compliance mindset, in which the questioner is usually seeking to do the minimum to get by. It's much like the student in class who asks, "Is this going to be on the test?"


I think of data collection in terms of two paradigms: surveys and assessments. With surveys, we acknowledge that we have essentially no information. How satisfied are students with the intake process? What improvements could we make to advising? These are, more or less, exploratory questions. As such, any information we can get is valuable. We are starting from zero, we will take whatever we can get, and for those who don't respond, we might be safe in assuming that they didn't have anything to say on the matter.


Conversely, with an assessment, we are seeking to make an inference about our students (or perhaps even individual students). In these cases, we would ideally have every student respond, or perhaps identify a random sample of students to assess. When we get fewer responses than that identified group, concerns arise about our ability to generalize whatever inference we make to the students who didn't respond.


Student learning and student success are two particularly salient examples of assessment work. Let's say I wanted to measure the critical thinking of graduating seniors, and I decided to assess all of them. While an 80% response rate sounds good in most cases, I'm very concerned about the other 20%. Are they the students with the least confidence in their critical thinking skills who thus decided not to participate? How might my results look if they were included? Is whatever inference I draw reasonable?
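To put a number on that worry, here is a quick back-of-the-envelope sketch in Python. The 80/20 split comes from the example above; the scores themselves are invented purely for illustration.

# Hypothetical illustration of nonresponse bias. Only the 80/20 split comes
# from the example in the text; the scores below are invented.
respondent_share = 0.80
nonrespondent_share = 0.20

observed_mean = 75.0               # mean score among the students who responded
assumed_nonrespondent_mean = 60.0  # what if the missing 20% would have scored lower?

# The mean we would have seen with full participation
full_participation_mean = (respondent_share * observed_mean
                           + nonrespondent_share * assumed_nonrespondent_mean)

print(f"Respondents only: {observed_mean:.1f}")            # 75.0
print(f"Everyone:         {full_participation_mean:.1f}")  # 72.0

In this scenario, the respondents alone overstate the cohort's performance by three points, which may be enough to change the story you tell about your students.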


Most of my work has been spent in the student success arena. Here, when we're trying to understand student strengths and challenges, it's quite possible that the students who don't respond to a survey/assessment are "disengaged," failing to connect with the programs, resources, and people at the institution who are trying to foster their success. As an administrator once told me, "our best predictor of success isn't anything a student tells us on the assessment, it's whether they complete it or not." Thus, I'm not only concerned that response attrition is affecting any inference I make about students, but also that it is undercutting my attempts to improve student success.


Admittedly, distinguishing between "assessment" and "survey" paradigms doesn't help you drive responses, but it does help to frame your expectations and gauge what level of commitment you and your colleagues need for a successful data collection effort. And this leads to perhaps the most important recommendation...


2. Commit to it

I've never seen a great accomplishment - a Nobel Prize, a Hall of Fame speech, an Oscar acceptance - that was attributed to mediocre effort. Yet this is what we expect from our data. We hope that (sorry to pick on this person again) our "minimal response rate" will provide the information we need.


I'm bordering on a diatribe here, but this is a common hurdle to change and improvement in higher education. Kay McClenney, esteemed scholar and practitioner in the community college world, once said that institutions are "perfectly designed to produce precisely the results they are currently getting" (Reed, 2012). I've often interpreted this as the organizational equivalent of "repeating the same thing over and over again and expecting different results is the definition of insanity." The only way to get better results in anything we do - student success, faculty engagement, or data collection - is to do something differently. In some cases, more of the same old thing might work, sure, but usually (and ESPECIALLY in higher education) doing something different is what's required.


Thus, if you want good data, surveys and assessments must be a priority for your institution/program/class/etc. Sorry to do this two weeks in a row, but again my alma mater, James Madison University, is a shining example. When they decided to commit to student learning, they created an Assessment Day. For incoming students, this is a half-day taken as part of orientation, and for continuing students, an entire day of classes is cancelled. During this time, students' only task is to complete assessments of student learning. Certainly, this model has its strengths and challenges, but the point is, it's a phenomenal demonstration of commitment.


There's another adage that psychometricians and other data scientists use: "garbage in, garbage out." If you put bad data into a model, no matter how complex and intricate the methodology, your results are meaningless. This is my point about commitment. We can't simply tack on a survey just to get it out. We can't expect a low-effort blast e-mail to all students to yield results. If you want to be a data-based decision maker, you have to make sure you have good inputs to your process, and thus data collection should be a priority.


3. Communicate importance...

One of the best things about JMU's Assessment Day is the platform it has provided to study data collection. Meta-research, if you will. In my early days as a graduate student, faculty and more senior graduate students were beginning to observe the variance in student effort on these relatively low-stakes assessments. As A-Day takes place in many classrooms across campus, each overseen by a different proctor, they quickly learned that the way the assessments were presented to students (and, accordingly, the behavior of the proctor) was a key determinant of student motivation.


Numerous studies have come from JMU's Center for Assessment and Research Studies examining the role of motivation in assessment results, as well as ways to motivate students. (I've posted several examples, each marked with an asterisk, in the references section below.) I think the biggest takeaway from their research is that convincing students that an assessment is important - primarily by explaining how results will be used - is a simple and effective way to enhance motivation and thus the validity of responses.


When working with clients, I always want to ensure that communications to students clearly state how results will be used, either for the student as an individual or for the institution as a whole. Any time we can give students individual feedback, that's not only powerful for motivating them to complete the assessment and take it seriously, but it's also just good educational practice. And you'd be surprised by the impact of a message about institutional use and importance.


Sure, putting this in the instructions is a good start, but if students don't know that before they (for example) click a survey link, they'll never be motivated to click in the first place. Thus, effective assessment/survey efforts should think about their outreach as an informational campaign.


4. ...through multiple methods

If we think of our outreach to students as an information campaign, it's worth noting that the campaign should work through multiple media. I'm utterly shocked at how many schools I work with where not all students have or use their campus e-mail. Yet this tends to be the coin of the realm for communication. And sending multiple e-mails doesn't count as multiple methods of communication, by the way.


Yes, go to social media, but don't stop there. Is there a way to have faculty remind students in high enrollment classes? Can we put announcements up on campus or on the institution's website? The point here is not just about reaching multiple audiences, but a student receiving the message in multiple ways and hopefully from multiple people. This is another way that we communicate the importance of assessment efforts.


5. A note about incentives

"Shouldn't we reward students? Maybe not pay each student, but a drawing for a gift card or something?" This is another invariable question I face when planning data collection efforts.


I understand the logic, especially for those who loathe and/or fear assessments. The assessment is something we don't perceive as enjoyable, and we perceive it as an additional burden, so why not compensate students for their time?


That presumption is your first mistake, and it's connected to the aforementioned commitment issue. The beauty of JMU's Assessment Day is that it communicates to students that learning outcomes assessments are part of the educational process - as much an expectation of a student's work as showing up to class.


This not only demonstrates the mindset of the institution, but it also shapes the way students view the assessment. A student who participates in a survey because they have a chance at a gift card is likely to view that experience differently (and thus bring different levels of motivation and effort) than a student who views the assessment as an educational expectation.


Now, I've certainly run into people who won't adopt this mindset and who say, "well our students just won't participate if we don't reward them." I'd argue that's not the students' fault. That's because you haven't communicated to them the importance of and expectations around their participation.


Another reason for avoiding incentives is sustainability. We need student data, and we just can't give students $5 every time they sit down for a survey. In fact, even though research has shown that incentives can significantly improve performance (Duckworth et al., 2011), a recent study on ways to improve student motivation elected to exclude a financial incentive condition solely on the basis of sustainability (Liu, Bridgeman, & Adler, 2012).


6. Funnel points

A "funnel point," as I've come to call them, refers to a step in students' behavior through which every (or almost every) student passes and we can create an opportunity to stop, check and see if they've completed the assessment/survey, and not allow them to proceed until they've done so. There are several opportunities to do this at the beginning of the student life-cycle, such as enrollment, orientation, or course placement testing. For continuing students, course registration is perhaps the most common funnel point (and is what JMU uses as its funnel point for Assessment Day).


Funnel points are perhaps the most effective way to drive response rates, though they do require the highest level of planning and commitment to execute. The first challenge is commitment. As an institution, you have to be willing to say, "we're going to hold up [for example] our orientation process so we can take students who haven't completed the assessment over to the computer lab and make sure they complete it." It's not easy, but again, nothing comes without effort.


The second challenge is coordination. I've called on many orientation offices and testing centers to be a centerpiece of data collection even when they are not the main recipients of those results. It's a big challenge, but if your institution has started with commitment, it's certainly surmountable.


...


As I said at the beginning of this post, it's not magic. That doesn't mean it's easy. Establishing commitment to a data collection effort is one of those recommendations like "the easiest way to lose weight is to cut calories" - it epitomizes "easier said than done." But I also reemphasize the importance of doing something different in order to see success. I, for one, would like to have data to inform what that something different should be, and I'd prefer those data to be robust and meaningful. Without commitment, a survey is something students can push aside - or perhaps never see at all - as just another email.


References

*Barry, C. L., Horst, S. J., Finney, S. J., Brown, A. R., & Kopp, J. P. (2010). Do examinees have similar test-taking effort? A high-stakes question for low-stakes testing. International Journal of Testing, 10(4), 342-363.


Duckworth, A. L., Quinn, P. D., Lynam, D. R., Loeber, R., & Stouthamer-Loeber, M. (2011). Role of test motivation in intelligence testing. Proceedings of the National Academy of Sciences, 108, 7716-7720.


*Lau, A. R., Swerdzewski, P. J., Jones, A. T., Anderson, R. D., & Markle, R. E. (2009). Proctors matter: Strategies for increasing examinee effort on general education program assessments. The Journal of General Education, 196-217.


Liu, O. L., Bridgeman, B., & Adler, R. M. (2012). Measuring learning outcomes in higher education: Motivation matters. Educational Researcher, 41(9), 352-362.


Reed, M. (2012). Confessions of a community college administrator. John Wiley & Sons.


*Swerdzewski, P. J., Harmes, J. C., & Finney, S. J. (2011). Two approaches for identifying low-motivated students in a low-stakes assessment context. Applied Measurement in Education, 24(2), 162-188.


*Wise, S. L., & DeMars, C. E. (2005). Low examinee effort in low-stakes assessment: Problems and potential solutions. Educational Assessment, 10(1), 1-17.
