Updated: May 26
I had a call with a client the other day, a man who is highly experienced, intelligent, and – perhaps most importantly – legitimately concerned about issues relating to student success. We were discussing some data on a group of students with whom he is working, and I was breaking down some of the results. After what was probably a longer and more nuanced explanation than needed on my part, he stopped me to ask, “Ok…. But what’s the best predictor of success?”
This is a question that’s bothered me for as long as I can remember working in this field. Perhaps the first time I recall noting my thoughts on the topic was while listening to a talk by Nathan Kuncel. Nathan’s presentation remains the most entertaining academic talk I’ve ever witnessed, and it inspired me to be a better researcher, presenter, and communicator.
During that talk, he mentioned someone stating, without citation, something to the effect of “we all know that SES [socioeconomic status] is the best predictor of college success.” It’s a comment I’ve heard several times since that day, and one that’s been meta-analytically shown not to be true by a large-scale study done by Steve Robbins and colleagues when he was at ACT in 2004.
Regardless of its factual validity, my initial umbrage at the statement turned out to foreshadow later qualms I’d have with people’s discussions of the “best predictor of student success.” Not only do I not have an answer for what the best predictor is, it’s a question that I find to be entirely off base and irrelevant to, if not counter-productive to, student success efforts. Here are three reasons why.
Reason 1: The “Best”… but out of what?
I’ve probably visited more than a hundred college campuses in my career, during which I’ve been fortunate to meet a lot of people doing great work around student success. Admittedly, I’m usually visiting these campuses talking about the role of noncognitive assessment in helping students and those working with them to improve outcomes like retention, persistence, and graduation. As I’ve stated before in this blog, there’s ample evidence to show that noncognitive factors significantly predict many student success outcomes.
But about once in every 10-12 visits, someone will raise their hand and say something like, “Well, we conducted a study of our students several years ago and found that _____ is the best predictor of retention.” This comment is usually meant to refute my statement about the value of noncognitive data, but rarely am I fazed because I’d just heard this about 10 visits ago.
The first problem with this statement is a normative one. “Best” is a relative term. If you have five bad variables in your model, one of them has to be the “best,” but that doesn’t mean it’s very good. Given how well noncognitive factors have been shown to predict persistence, I’m really skeptical of any variable shown to be the best predictor of retention if good noncognitive data aren’t included in that study.
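That “best of a bad bunch” point is easy to see with a quick simulation. The sketch below is purely illustrative, using synthetic data (the predictor names and effect sizes are my own invented assumptions, not real student data): five uniformly weak predictors of an outcome are generated, and one of them will always come out on top, even though none of them predicts well.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Synthetic outcome and five hypothetical weak predictors,
# each with the same small true effect (true r is about 0.1).
outcome = rng.normal(size=n)
predictors = {
    f"predictor_{i}": 0.1 * outcome + rng.normal(size=n)
    for i in range(1, 6)
}

# Correlation of each predictor with the outcome.
correlations = {
    name: np.corrcoef(x, outcome)[0, 1]
    for name, x in predictors.items()
}

# One variable is always "best" in relative terms...
best_name = max(correlations, key=lambda k: abs(correlations[k]))
best_r = correlations[best_name]

# ...but an r near 0.1 explains only about 1% of the variance.
print(f"'Best' predictor: {best_name}, r = {best_r:.2f}, "
      f"variance explained = {best_r**2:.1%}")
```

Run this and some predictor will win the horse race every time, yet each explains only a sliver of the outcome. Which one wins is mostly sampling noise, which is exactly why a single-campus study crowning a “best predictor” should be read with caution.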
There’s a second, more complex problem that emerges in some of these conversations. The culprit that usually reveals this issue is class attendance. People love to talk about how good a predictor class attendance is. And they’re right: class attendance is highly correlated with a lot of factors, and oftentimes you won’t be able to exceed its predictive efficacy with any other factor.
But here’s the problem: it’s a lagging indicator. My former colleague, Rich Roberts, used to say that attendance isn't a predictor of attrition, it’s part of it. By the time students stop showing up to class, they have one foot out the door. Putting aside scientific arguments about delineating predictors and criteria and a priori hypotheses, one thing to consider in a predictor of success is its malleability. If something is highly correlated with success, but we can’t change it, how helpful is it in our efforts to improve that outcome?
I really don’t mean to make this a pitch for noncognitive factors, but the major value of these skills is that we can measure them early, provide interventions, and have a chance at improving success. It doesn’t hurt that noncognitive skills also have a high correlation with student outcomes, either.
But the point is, when we say “best” there are lots of things that can mean. Sure, there’s predictive validity, but that’s just one part of it, and we’ve got to be careful when we label something the best of a bad bunch.
Reason 2: Predictor of what?
One of the other issues with identifying predictors is that you have to wonder what they’re predicting. A vast majority of research into student success has used GPA as its dependent variable (Nathan Kuncel has an interesting bit about this in his talk, too). But perhaps the greatest contribution of the Robbins et al. meta-analysis in 2004 was that it showed the predictors of GPA and retention are very different. When predicting academic outcomes, factors like high school grades and standardized test scores (despite the criticisms Dr. Kuncel reviews) actually do a very good job.
There’s an old psychological adage: The best predictor of future behavior is past behavior. Thus, when talking about academic outcomes, it makes sense that academic predictors would be relevant. To be clear, noncognitive factors do enhance the predictive efficacy of academic markers, but for the most part, HSGPA and test scores do a rather good job on their own.
But when we change the “future behavior” from academic success to persistence, the “past behavior” of academic preparation becomes less relevant. This is what the Robbins et al. study (as well as later work) showed. Persisting from semester to semester isn’t the same as succeeding in the classroom. It’s not a learning activity as much as a goal-directed behavior. This is why noncognitive skills predict retention and persistence so well: they address the way students approach goals.
But that doesn’t always mean that people who ask “what’s the best predictor?” are focused on retention. We could be talking about a particular group of students, a particular attrition point, or even institutional retention, as opposed to student-centered persistence. We can’t always assume that past studies will consider these factors when answering “what’s the best predictor?” But we, as researchers and practitioners, must.
Reason 3: Best predictor…for whom? And to what end?
The most troublesome aspect of asking “what’s the best predictor?” is actually what is implied by the question itself. Asking the question connotes the search for a simpler answer to student success efforts. When people ask for the best predictor, they’re really looking to solve the mystery of, “What’s the one thing I can do to magically solve my student success problems?”
Personally, I feel that this is one of the fundamental reasons why, as a whole, institutions of higher education struggle with student success. There are basic assumptions that underlie our work, and we're often ignorant of their existence, let alone the ways in which they impact our work. Searching for the best predictor uncovers at least two of these harmful assumptions.
The first is that we assume that our students are the same. Now, we all know this not to be true, but our actions certainly embody it. Think about the major steps a student needs to take from the time they enroll until the time they come back for their second semester. How many times in that process are there systemic means of students receiving supplemental support, or changing their path based on their strengths and challenges? When I say “systemic,” I mean that it is an intentional part of the process, as opposed to “programmatic” solutions that are added on based on issues we’ve seen (e.g., Trio programs).
If you’re struggling to find an answer, don’t fret. Almost all of education is riddled with this problem. After all, throughout centuries of instruction, research, and brilliance, most of the way we teach is still a didactic, one-teacher-to-many-students approach that assumes all students will learn well in that way. For many reasons, we’re not very good at creating systems of education that center on the student rather than the organizational process.
The second major issue uncovered by the question is the aforementioned word “magic.” For many, there is an assumption (either implicit or explicit) about how we can fix student success…. In fact, maybe the assumption is that it can be “fixed” at all. I’ve often said that there are as many reasons that students fail as reasons they succeed. Sometimes it’s a characteristic of the student, whether it be their preparation or their commitment, that’s most attributable. Sometimes it’s the fit they have with their institution. Other times it might be attributed to a lack of sufficient guidance or support. For most students, it’s a complex combination of these things.
So what should we be asking or doing?
When I talk with faculty, staff, and administrators in higher education, almost every one of them has a special story to tell about how they got to where they are.
“I actually dropped out once myself, but came back later after really finding my passion.”
“I wasn’t a very good student, but I connected with one professor who really helped me.”
“I was balancing a family, and had to work extra hard to make sure I even passed all my courses.”
Almost never do I hear, “Yeah, I was just really smart and school came naturally to me, so I thought I’d come back and help other students who are like that.”
So why don’t we strive to build a system of education that accommodates all these stories? Rather than looking for the “one thing” we can do to improve success for all students, why don’t we seek out ways to better understand each individual student and the things we can do for them?
Just think if the student success process worked in the same way as scheduling and advising. If we knew, for example, that students needed to develop their organizational skills, we could "schedule" them to work with an institutional resource that could help. Feeling socially disconnected? Here's a student club or organization that matches your interests. In this way, we could tailor our student supports for each student, rather than trying to offer the one magical program or service that we think will solve the problem.
I know, I know, this sounds like I’m just making an argument to support the work that I do: holistic assessment, advising, and support. But the fact is, I’ve come to that work because I see it as the best answer to the problems we have.
In the end, I understand precisely why people ask for the best predictor of student success. But, as with most things worth asking, it’s a loaded question without a simple answer. The next time you find yourself pondering this query, or working with someone who is, hopefully this will help point you to more fruitful efforts.