In the first part of my series on noncognitive skills, I discuss why I think this domain is such a critical area of educational research and practice.
I vividly remember the precise moment my interest in noncognitive skills was cemented. It was early August 2007, and I’d just begun my work as a doctoral student in the Assessment & Measurement program at James Madison University. Donna Sundre, then Executive Director of JMU’s Center for Assessment and Research Studies (CARS), was addressing the new crop of students at CARS’ Graduate Assistant Institute.
In her opening remarks, Donna emphasized that, in understanding and assessing what students learn in higher education, CARS was interested in “both the cognitive, and the conative”* components of what students learn in the classroom. She noted that, when teaching a chemistry class, we’re as much interested in understanding how we develop students’ love of chemistry as we are in their knowledge and skills in that field.
(*NOTE: In another post, I’ll come back to this point about terminology. This is perhaps the only time I’ve heard the term “conative” used in this setting, and I will, until my next post, be sticking to “noncognitive.”)
You might presume that, entering this program with two degrees in psychology already, I had an existing interest in understanding such constructs. Admittedly, that background may have granted me a certain proclivity for the noncognitive realm. Nevertheless, I was struck by a thought in that moment: most, if not all, of the graduate students in that room were likely interested in assessing knowledge and skills. In fact, most of the field of education had spent the last century assessing “reading, writing, and ‘rithmetic” as the foundation of what students learn.
But I was far more interested in understanding this other “stuff” that was so integral to the educational process. It’s funny to think back on it now, but in that moment - even with only the vague and novel term “conative” to capture my attention - I somehow understood all that it meant: the dispositions, attitudes, behaviors, interests, and many other factors that weren’t traditionally captured by our measures of learning and achievement. THIS was where my studies, and in fact my career, would focus. Indeed, my research and practice at JMU would center on student affairs rather than on academic programs and interventions.
My first job after JMU was as Director of Student Affairs Assessment at Northern Kentucky University, where I worked to help colleagues measure cocurricular student learning outcomes and to use noncognitive data to support our student success efforts. The remainder of my career to this point, in both research and operational roles at Educational Testing Service, has been inextricably linked to the noncognitive domain.
When I talk about noncognitive skills, a series of questions inevitably emerges. In fact, my first blog posts for DIA will be a series (of which this is the first), each addressing a particular question, such as “Why are they called noncognitive?,” “But can you even measure that?,” and “Can we change them, and if so, how?” These questions are important and can rarely be answered simply, but it seemed worth first tackling a rather central one: “Do noncognitive skills even matter?”
One way to answer this question is, “because people say so.” Under the umbrella of what I call “espoused importance,” there are two particular studies that use stated importance as a source of evidence. One of my personal favorites is a study conducted by Neil Schmitt and colleagues (2011). In considering innovative admissions approaches, one of the first questions was which constructs might be measured as part of the admissions process. To identify the characteristics colleges and universities might value in incoming students, Schmitt and colleagues examined a rather logical source of input: mission statements. In a qualitative review of the mission statements of 35 colleges and universities across the country, Schmitt et al. identified 12 constructs:
Knowledge and mastery of general principles
Continuous learning, intellectual interest and curiosity
Artistic and cultural appreciation
Multicultural appreciation
Leadership
Interpersonal skills
Physical and psychological health
Social responsibility
Career orientation
Adaptability
Perseverance
Ethics
You will notice that most of these domains (I would argue all, with the exception of “knowledge and mastery of general principles”) could be described as noncognitive. I’ve always interpreted this finding as follows: when we, as institutions of higher education, try to tell the world what it is we do, we describe our impact on students in terms of noncognitive skills. I find that quite powerful.
The second source of espoused importance is employer surveys. When I was studying industrial/organizational psychology, I often heard that employers commonly hired on “soft skills.” When it came to technical skills or field-specific knowledge, one of two things was true of most jobs: either the technical aspects would quickly become outdated, emphasizing the need for (and commitment to) continuous learning, or employers expected to train new employees on industry-specific matters.
Indeed, every survey of employers I’ve seen shows noncognitive skills, not technical skills, to be in highest demand. One such example is a survey put forth by the Conference Board and several other organizations (Casner-Lotto & Barrington, 2006), in which - after communication skills - teamwork, work ethic, and social responsibility were ranked most important. At the bottom of the list, by the way, were English, mathematics, and science.
What’s interesting about these two studies is that they deal with admissions standards and workforce skills: the inputs and outputs of higher education. The remainder of the evidence I’ll cover deals with the in-between. Put differently, whenever I’ve tried to answer questions about the importance of noncognitive skills, I’ve focused not only on “important to higher education” broadly, but more specifically on what matters to student success: how noncognitive skills relate to student learning, persistence, and completion.
Beyond espoused importance, several large-scale studies provide empirical evidence relating noncognitive factors to student success. There are some basic meta-analyses that correlate various noncognitive factors with student success, primarily in the form of college GPA (e.g., O’Connor & Paunonen, 2007; Poropat, 2009; Richardson, Abraham, & Bond, 2012). Certainly, these studies, among others, show notable correlational evidence. But two studies warrant particular mention, in my opinion.
First is the meta-analysis by Arthur Poropat (2009). What distinguishes this study is that it examined the relationship between noncognitive factors (using the Big Five personality model) and student achievement across the student lifespan. For primary school outcomes, Poropat found that academic factors (i.e., measures of intelligence) had the strongest relationship with student achievement. While several personality factors showed significant predictive validity, their coefficients were roughly half those of intelligence.
However, in the studies of higher education, two factors stood out. Intelligence and conscientiousness - the behaviors associated with organization, reliability, and perseverance - were the strongest predictors, and of equal strength to boot. Thus, while academic preparation was important for success in college, it was no more important than effort, organization, and the other behaviors associated with conscientiousness. My interpretation has always been that, had we known this a century ago, the SAT might be as much a personality assessment as it is a measure of academic achievement.
One major limitation of all three meta-analyses mentioned thus far is their dependent variable. In each case, the dominant measure of student success is student grades/GPA, which represents only part of the student success picture. These studies ignore enrollment-related outcomes such as retention and persistence, which - for many institutions - are likely the more important variables.
A meta-analysis by Robbins et al. (2004) improves upon these studies in several ways. First, the authors examined incremental validity. They were able to show that noncognitive factors not only correlated with student outcomes, but that this relationship held even after controlling for admissions test scores (e.g., ACT/SAT) and high school grades. This is critical because, practically speaking, it shows that measures of noncognitive skills can give you insights into students’ potential that simply aren’t captured by academic measures.
Second, Robbins et al. looked at both academic and retention outcomes. When predicting academic success, they found what most might expect: academic preparation was the most important predictor, with noncognitive factors providing moderate incremental predictive validity. When they looked at retention outcomes, however, the opposite picture emerged. Academic factors (particularly test scores) were relatively weak predictors of retention, whereas noncognitive factors added the most predictive validity, even above and beyond socioeconomic status.
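For readers who like to see this logic concretely, here is a minimal sketch of an incremental validity check - a generic illustration, not Robbins et al.’s actual procedure - using Python’s statsmodels, with entirely hypothetical file and column names:

```python
# A minimal sketch of an incremental-validity check, in the spirit of
# (but not reproducing) Robbins et al. (2004). The file and column
# names below are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_records.csv")  # hypothetical dataset

# Step 1: academic predictors only (admissions test score + HS grades).
baseline = smf.ols("college_gpa ~ act_score + hs_gpa", data=df).fit()

# Step 2: add a noncognitive measure (here, a conscientiousness scale).
augmented = smf.ols(
    "college_gpa ~ act_score + hs_gpa + conscientiousness", data=df
).fit()

# Incremental validity: the variance explained by the noncognitive
# measure over and above the academic predictors.
print(f"Baseline R^2:  {baseline.rsquared:.3f}")
print(f"Augmented R^2: {augmented.rsquared:.3f}")
print(f"Delta R^2:     {augmented.rsquared - baseline.rsquared:.3f}")
```

When the outcome is retention (a yes/no enrollment flag) rather than GPA, the same comparison can be run with a logistic model (smf.logit), using a pseudo-R² or a classification metric in place of R².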
There’s a third source of evidence that, in my opinion, is simultaneously the most important and the least explored: compensatory evidence, or studies showing that the presence of noncognitive factors can help students who arrive at college with certain challenges (e.g., academic, socioeconomic) succeed despite those hurdles. A handful of studies have looked at students with low academic preparation, first-generation students, and other traditionally underserved populations (e.g., Li et al., 2013; Tracey & Sedlacek, 1987; Dennis, Phinney, & Chuateco, 2005; Ting, 2003). While these studies have shown promising results, they are often limited by small sample sizes and/or single-institution samples. Moreover, we have yet to see the meta-analytic studies that are often the coin of the realm for evidence in this field. Thus, I’d say the evidence in this compensatory area is promising, but less conclusive.
This is, however, the key question. When I talk with institutions about noncognitive skills, invariably someone mentions a student group for whom they think “this will work.” It could be returning veterans, minority students, or any traditionally underserved population. This issue really ties back to the popular concept of grit: can students who have these “other” skills (i.e., noncognitive factors like perseverance, self-efficacy, etc.) succeed, even when other challenges present themselves?
As you can see, discussing this body of research only raises more questions. Personally, the more I learn about noncognitive factors, the more research questions I generate. Perhaps that’s why my passion for this area persists (pun intended). However, there are a few things that are important to take away from this particular topic…
1. These factors are important. For me, the evidence is in. We know that these factors are at least as related to student outcomes as academic factors, and in many cases more so. There are several reasons why we don’t discuss, assess, and address these factors more in higher education (e.g., faculty cultures that focus on academic learning, limitations in assessment and data), but a lack of evidence is not one of them.
2. It’s a broad set of skills. When talking about noncognitive factors, people often ask, “but which ones are the MOST important?” As if to say, “I see you’re measuring 10 different skills here, but could we just measure motivation and do just as well?” To this I would ask, “I see that students are taking a range of different courses, but is there one area of learning that’s most important?” There are many frameworks that define the noncognitive space, but I would encourage you not to be fooled into thinking that one construct, theory, or TED talk explains the one factor students need. It’s far more complex than that.
3. Determining importance depends on… I’ve spent lots of time running various predictive models with noncognitive factors, looking at how models perform across institutions, within student groups, etc. Many times, the results differ. Assuming that a given factor - whether it’s students’ study skills or their ACT scores - will have the same relationship to success at every institution or for every group of students is far too simple a picture. This is why data fluency and predictive modeling efforts are so important. Even when you measure a wide array of skills, you need to be able to understand how those skills matter at your institution and for your various groups of students. How to do that in full is a topic for another post, but the sketch below offers a small taste.
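As a quick illustration of what that can look like in practice (again, the file and column names, including the 0/1 first_generation flag, are hypothetical placeholders), the simplest version is fitting the same model separately within each student group and comparing the weights:

```python
# A small taste of group-level model comparison: fit the same model
# within each student group and watch how the predictors' weights
# shift from group to group. All names here are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_records.csv")  # hypothetical dataset

for group, subset in df.groupby("first_generation"):
    model = smf.ols(
        "college_gpa ~ act_score + study_skills", data=subset
    ).fit()
    print(
        f"first_generation={group}: "
        f"ACT beta={model.params['act_score']:.3f}, "
        f"study skills beta={model.params['study_skills']:.3f}"
    )
```

A real analysis would add interaction terms, cross-validation, and attention to within-group sample sizes, but even this simple loop makes the point: one model rarely tells the whole story.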
Thank you for taking the time to read through this, my inaugural DIA blog post. I’m proud to discuss an issue that’s been so central to my career, and look forward to continuing to do so.
References
Casner-Lotto, J., & Barrington, L. (2006). Are they really ready to work? United States: The Conference Board, Corporate Voices for Working Families, Partnership for 21st Century Skills, and Society for Human Resource Management. (http://www.p21.org/storage/documents/FINAL_REPORT_PDF09-29-06.pdf)
Dennis, J. M., Phinney, J. S., & Chuateco, L. I. (2005). The role of motivation, parental support, and peer support in the academic success of ethnic minority first-generation college students. Journal of College Student Development, 46(3), 223-236.
Li, K., Zelenka, R., Buonaguidi, L., Beckman, R., Casillas, A., Crouse, J., ... & Robbins, S. (2013). Readiness, behavior, and foundational mathematics course success. Journal of Developmental Education, 37(1), 14.
O’Connor, M. C., & Paunonen, S. V. (2007). Big Five personality predictors of post-secondary academic performance. Personality and Individual Differences, 43(5), 971-990.
Poropat, A. E. (2009). A meta-analysis of the five-factor model of personality and academic performance. Psychological Bulletin, 135(2), 322-338.
Richardson, M., Abraham, C., & Bond, R. (2012). Psychological correlates of university students’ academic performance: A systematic review and meta-analysis. Psychological Bulletin, 138(2), 353-387.
Robbins, S. B., Lauver, K., Le, H., Davis, D., Langley, R., & Carlstrom, A. (2004). Do psychosocial and study skill factors predict college outcomes? A meta-analysis. Psychological Bulletin, 130(2), 261-288.
Schmitt, N., Billington, A., Keeney, J., Reeder, M., Pleskac, T.J., Sinha, R., & Zorzie, M. (2011). Development and validation of measures of noncognitive student potential (College Board Research Report 2011-1). New York: The College Board. (http://professionals.collegeboard.com/profdownload/pdf/10b_1555_Dvlpmnt_and_Validation_WEB_110315.pdf)
Ting, S. M. (2003). A longitudinal study of non-cognitive variables in predicting academic success of first-generation college students. College and University, 78(4), 27.
Tracey, T. J., & Sedlacek, W. E. (1987). Prediction of college graduation using noncognitive variables by race. Measurement and Evaluation in Counseling and Development, 19(4), 177-184.