
What Matters Most to Student Success?

Updated: Oct 25, 2023


In a previous role, nestled within a large standardized testing organization (like, a really big one… I mean, I’m not going to name any names but you can probably guess), I was often tasked with mediating between our research division, the business unit, and the marketing team. The business unit wanted to put out information to support our tools, the marketing team was given the charge of creating messaging, and research was responsible for deciding whether we could support the claims that marketing wanted to make. This is quite common in just about any industry, and I give the assessment industry credit: research always has the final say. For the most part, reputable organizations in our world stick to claims that can be supported by research.


There were two perennial topics over which the marketing team and I would often butt heads. The first was the use of the Oxford comma. As an ardent user of the mark, I would, with near-religious fervor, never approve anything that didn’t include it. The marketing team, however, bound by AP standards that omit the comma, would always push back. The cat-and-mouse game of edits, track changes, and general pettiness around this topic was equal parts childish and principled. As long as I’m telling the story, I never lost one of these battles.


But the other area of consternation involved a very simple word: “most.” We were constantly putting out pieces referring to noncognitive skills as the factors that were “most predictive of student success.” My issue was always that this was a nebulous statement and, depending on which study one was citing, could be incorrect. Yes, noncognitive factors have strong, meta-analytic evidence to support their value in predicting student outcomes.


According to several large-scale studies (and countless more that I’ve conducted with individual institutions), they are often the best predictors of student retention and persistence (e.g., Markle et al., 2013; Markle, 2015; Robbins et al., 2004). But I could also look at studies designed to predict academic outcomes and find that academic markers do a comparatively good job of predicting academic success (e.g., Robbins et al., 2004; Kuncel & Hezlett, 2010).


I’ve written before about the folly of this question, but I want to take some time here to share a few whopping facts about the empirical value of noncognitive skills.


First: they do predict… whoppingly well.

People often say things like “significant” or “large effect size,” but I doubt that this really captures the gravity of some of the extant research. For example, in the Robbins et al. (2004) study, traditional markers (i.e., HSGPA, admissions/placement test scores, and measures of socioeconomic status) accounted for about 22% of the variance in first-year GPA. The addition of noncognitive factors increased that prediction to 26%. These numbers might not sound like much, but that’s a strong relationship relative to similar studies. In sum, noncognitive factors boosted the ability to predict GPA by roughly 20%.


When predicting retention, there was an 88% increase in predictive efficacy – nearly doubling the ability to predict who would leave and who would stay. The initial model accounted for only 9.1% of the variance, while the holistic model accounted for 17.1%. (What’s perhaps even more interesting: a model that started with noncognitive factors and added traditional markers improved by only 30%.)
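
To make the arithmetic behind those percentages concrete, here is a minimal Python sketch of the relative-gain calculation, using the variance-explained values reported above (the helper function is mine, purely for illustration; the GPA figure works out to roughly 18%, in line with the “about 20%” above):

```python
# The arithmetic behind the relative gains reported above, using the
# R^2 values from Robbins et al. (2004). The helper function is mine,
# purely for illustration.

def relative_gain(r2_base: float, r2_full: float) -> float:
    """Percent increase in variance explained over the baseline model."""
    return (r2_full - r2_base) / r2_base * 100

# First-year GPA: traditional markers alone vs. traditional + noncognitive
print(f"GPA: +{relative_gain(0.22, 0.26):.0f}%")          # ~18%, i.e., "about 20%"

# Retention: same comparison
print(f"Retention: +{relative_gain(0.091, 0.171):.0f}%")  # ~88%
```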


In 2016, I wrote a paper for AIR (Markle, 2016) that looked at the variance in predictive models across institutions. We found, in alignment with Robbins et al.’s research, that the prediction of GPA was rather inconsistent. Academic markers generally did the best job, and noncognitive factors often boosted predictive ability, but these effects varied. In some schools, HSGPA was a powerful predictor; in others, there was no relationship at all.


But what was fascinating was that, in all seven schools for which we had data, noncognitive factors were always the strongest predictor set for retention. In some cases – and we’ve recently validated this with several of our ISSAQ users – there was absolutely zero relationship between measures of academic preparation and retention. This means that, if these schools were providing support to students with low academic preparation, such programs would probably be just as effective if given to a random set of students. For many institutions, treating academic preparation as a “risk factor” for retention is done without any evidence to support its relevance.
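
If you want to check this at your own institution, the comparison is simple to run. Below is a minimal sketch in Python with statsmodels – the variable names and synthetic data are mine, purely for illustration, not from any of the cited studies – of the analysis described above: fit a retention model on academic markers alone, then add a noncognitive composite and compare the variance explained.

```python
# A minimal sketch of the incremental-validity comparison described above:
# a baseline retention model built on academic markers versus a holistic
# model that adds a noncognitive composite. The synthetic data mimic an
# institution where academic preparation is, by construction, unrelated
# to retention.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

hs_gpa = rng.normal(3.0, 0.5, n)      # high school GPA
test_score = rng.normal(0.0, 1.0, n)  # standardized admissions/placement score
noncog = rng.normal(0.0, 1.0, n)      # hypothetical noncognitive composite

# Retention depends only on the noncognitive composite in this example.
p_retain = 1.0 / (1.0 + np.exp(-(0.5 + 0.8 * noncog)))
retained = (rng.random(n) < p_retain).astype(int)

baseline = sm.Logit(retained, sm.add_constant(
    np.column_stack([hs_gpa, test_score]))).fit(disp=0)
holistic = sm.Logit(retained, sm.add_constant(
    np.column_stack([hs_gpa, test_score, noncog]))).fit(disp=0)

print(f"baseline pseudo-R2: {baseline.prsquared:.3f}")  # near zero
print(f"holistic pseudo-R2: {holistic.prsquared:.3f}")  # substantially larger
```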


The second important finding is that these variables can actually change.

One of the major criticisms of “noncognitive factors” revolves around interventions. Can we, or even should we, seek to change what some might construe as personality or values? Well, before getting into that debate, let’s talk about one thing that’s pretty clear from the literature: we’re certainly not very good at changing the academic variables.


I’ll spare you the psychological debates about the malleability of intelligence or the effectiveness of interventions that seek to change it, but here’s what I can say: research into developmental education – which highlights abysmal rates of completion, advancement, and attainment – has shown that the system higher education built to try to “remediate” deficiencies in academic preparation was certainly not effective and was, in all likelihood, detrimental to student success (Bailey et al., 2008; Scott-Clayton, 2011).


But there are two important lines of research I’d like to refer you to, which emphasize the value of noncognitive factors even beyond their predictive efficacy…


First is a paper by David Yeager and Greg Walton (2011) with one of my favorite titles of all time: Social-Psychological Interventions in Education: They’re Not Magic. The paper reviews multiple studies showing the effectiveness of theory-based psychological interventions on discrete student success outcomes. What’s more, the authors use the review to identify and discuss which factors are key to designing meaningful interventions: they must be theory-based, discretely implemented, and engaging.


The second is work from CASEL – the Collaborative for Academic, Social, and Emotional Learning at the University of Illinois at Chicago – which has looked at large-scale interventions in the K-12 space (Catalano et al., 2002; Durlak et al., 2011). Their work shows that interventions focused on social and emotional factors have as much impact on academic outcomes as interventions specifically focused on academics, if not more.


That said, a common misconception about noncognitive factors concerns our ability to change them: “Can we even build effective interventions?” These two lines of work show us not only that we can, but specifically how to do it.


Lastly, noncognitive factors might be the key to finally serving those “traditionally underserved” populations.

I’ll be honest here and say that this is an area where the research still lacks a clear consensus. That’s not to say the claim isn’t defensible…


Go into Google Scholar and search for “noncognitive factors traditionally underserved populations.” I just did, and the list of relevant studies is long. What’s more, pick a group on which you might be focusing – first-generation college students, women in STEM, student veterans, students of color – and use it to replace “traditionally underserved populations” in your search. A mountain of research will appear, I’m sure.


The challenge is that all of these studies are typically limited. They don’t share a common framework, so one might talk about growth mindset while another exclusively considers sense of belonging. Additionally, these are most often single-institution studies, so they lack the empirical weight to drive a cohesive narrative.


But the truth is, indeed, out there. These studies tend to show that, when you consider the needs of underserved populations, noncognitive factors typically help either to articulate a challenge that encapsulates higher education’s failings (e.g., sense of belonging) or to identify an intervention that is particularly effective within a certain population.


So what are we waiting for?

Back to my original issue with the question, “What matters most?” The problem with that question, much like the one about best predictors, is that it focuses only on the empirical aspect. It neglects the issues of intervention and equity that are so important to any understanding of student success. Perhaps most frustrating is that, even if you want to talk only about empirical value, noncognitive factors still make the most sense as the foundation of your predictive model.


So my question stands: what are we waiting for? What is preventing us from using a holistic framework of student characteristics (like the one used in our ISSAQ platform, perhaps?) as the foundational understanding of student success?


References

Bailey, T., Jeong, D. W., & Cho, S.-W. (2008). Referral, enrollment, and completion in developmental education sequences in community colleges (CCRC Working Paper No. 15). Community College Research Center, Columbia University.


Catalano, R. F., Berglund, M. L., Ryan, J. A., Lonczak, H. S., & Hawkins, J. D. (2002). Positive youth development in the United States: Research findings on evaluations of positive youth development programs. Prevention & Treatment, 5(1), 15a.


Durlak, J. A., Weissberg, R. P., Dymnicki, A. B., Taylor, R. D., & Schellinger, K. B. (2011). The impact of enhancing students’ social and emotional learning: A meta-analysis of school-based universal interventions. Child Development, 82(1), 405-432.


Kuncel, N. R., & Hezlett, S. A. (2010). Fact and fiction in cognitive ability testing for admissions and hiring decisions. Current Directions in Psychological Science, 19(6), 339-345.


Markle, R. E. (2016, June). Noncognitive skills and predictive modeling: Integration and implementation. Presentation at the Association for Institutional Research Annual Forum, New Orleans, LA.


Markle, R. E., Olivera-Aguilar, M., Jackson, T., Noeth, R., & Robbins, S. (2013). Examining evidence of reliability, validity, and fairness for SuccessNavigator (ETS RR-13-12). Princeton, NJ: Educational Testing Service.


Poropat, A. E. (2009). A meta-analysis of the five-factor model of personality and academic performance. Psychological Bulletin, 135(2), 322-338.


Richardson, M., Abraham, C., & Bond, R. (2012). Psychological correlates of university students’ academic performance: A systematic review and meta-analysis. Psychological Bulletin, 138(2), 353-387.


Robbins, S. B., Lauver, K., Le, H., Davis, D., Langley, R., & Carlstrom, A. (2004). Do psychosocial and study skill factors predict college outcomes? A meta-analysis. Psychological Bulletin, 130(2), 261-288.


Scott-Clayton, J. (2011). The shapeless river: Does a lack of structure inhibit students’ progress at community colleges? (CCRC Working Paper No. 25). Community College Research Center, Columbia University.
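

Yeager, D. S., & Walton, G. M. (2011). Social-psychological interventions in education: They’re not magic. Review of Educational Research, 81(2), 267-301.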
