Academia: Here's What I'm Against: The Ruinous Addiction to Undermotivated Change
Thursday's Child Has Far to Go
The former president of Macalester College, Brian Rosenberg, has a book coming out that repeats a familiar, seemingly unchanging refrain: that higher education (meaning, the faculty) is stubbornly attached to the status quo, unwilling to explore any changes at all. There’s an excerpt of the book available at the Chronicle of Higher Education.
Rosenberg puts an incident involving a collaboration between him and a respected faculty member at the center of his critique. As he recounts it, about four years into his tenure as president, he and the faculty member had a discussion about research that suggested that there was very little reason to think that the production of scholarly knowledge had a proportional relationship to the quality of teaching. They decided that this was an issue worth bringing in front of the faculty for a conversation. “Neither of us expected rapid or dramatic change, but the first step toward improvement of any kind seemed to be consideration of the evidence,” he writes. He wrote to his faculty:
Have the expectations for scholarly productivity at Macalester grown to the point where they are distorting the professional development of our untenured and some of our tenured faculty and are working against other interests — such as teaching, advising, and course development — that we recognize as critically important? Have we moved closer to a university model and further from a liberal-arts college model of the teaching/scholarship/service balance than is healthy?
He then complains of the sharply negative reaction from the faculty, which was even worse “over email and in hallways” than in the open meeting. He and his faculty partner concluded it wasn’t worth the hassle of continuing to push forward on the question and gave it up.
What annoys me about this story as he lays it out is the lack of reflection about what happened. At a really basic level, when you start a conversation about research findings in a way that presupposes they’ve identified a problem that both needs solving and can be solved, that change is needed (even if not rapid or dramatic), you’ve foreclosed any opportunity to reflect openly, in the spirit of critical thinking, on what those research findings actually mean.
Every once in a while, the data coming in from research is so dramatic, or the situation so urgent, that there isn’t time for that kind of reflection. When data from HIV studies in progress in the early 2000s showed that male circumcision produced a major reduction in infection risk, the researchers involved decided to publish the data immediately because so many lives were at stake.
But this isn’t that kind of data and it wasn’t that kind of situation. If faculty tend to react negatively when administrative leaders want to “start a conversation” in this mode, it is because the leader who proposes to do so fundamentally assumes that the conversation must end in what Rosenberg calls deep, transformative change, that this is the goal of all such conversation. Whereas when liberal-arts faculty imagine convening a discussion about interesting findings or data, we often do so with a sense that “what is to be done?” is the next discussion, and not an inevitable one: whether anything should be done, could be done, and what direction the doing should take are all highly contingent on what we think about the research findings. Discussing whether we think those findings are sound and how we interpret their meaning is contaminated if we begin that discussion by saying, “Let’s figure out how to make some deep changes based on these findings.”
Take the research that Rosenberg is addressing. He’s right that there is research that suggests that it’s not a given that the production of scholarship has a direct and proportional relationship to the quality of classroom instruction. But read the three articles he cites in his essay1 and you at least discover that this is a complex question to research and the conclusions are equally complex, as one of the articles notes:
The (inter)relation between research quality and teaching quality is complex and multidimensional. Based on the literature, two main sets of mechanisms can be distinguished that can be behind this relationship. Depending on which of the mechanisms dominates, the relationship can be positive, negative or null. (Palali et al., p. 40)
The mechanisms in question are on one hand the view that research produces skills and knowledge that feed into teaching (that’s positive) and on the other hand that the time required to do research is rivalrous with the time devoted to developing and sustaining high-quality teaching (that’s negative).
It turns out that there’s a body of scholarship that supports the complementarity of research and teaching in terms of skills development and keeping the content of teaching up to date, and a body of scholarship that details how incentives that drive faculty toward investing time in research do seem to erode the quality of instruction. That complexity, first off, would have to form a branch in any discussion of whether there’s something that needs to change and, if so, what that might be. But higher education leadership often comes into the room only with the finding that drives an agenda, not with the full picture, and Rosenberg certainly sounds like he’s in that company in this essay (and his forthcoming book). The incuriousness that comes with the call to change is from the start at odds with basic ideas about the mission of academia in both teaching and research. (Macalester’s current slogan on its website? “Where ideas and intellectual curiosity matter.”)
You’d also have to question, on reading the articles he cites, whether they’re fully applicable to Macalester, a small liberal-arts college. Palali et al. study bachelor’s and master’s students at Maastricht University’s School of Business and Economics in the Netherlands. Hattie and Marsh is a meta-analysis of the research on the relationship between research and teaching that finds “The overall relationship between quality of teaching and research was slightly positive,” but also that colleges and research universities are distinctly different as groups and that colleges show a lot of internal variability in these studies. (Rosenberg might want to read his citations more closely.) Figlio and Schapiro (the most recent of the three studies) examine Northwestern University and note that whether their findings are applicable to other institutions is an open question.
All three studies also point out how profoundly difficult it is to measure teaching quality and effectiveness, and that there are substantial difficulties in deciding what the metric for scholarly impact ought to be (which is why many institutions just fall back on quantity as the metric, even though almost everybody understands that to be a bad proxy for quality, with a lot of perverse incentives packed inside). Notably, the studies disagree fundamentally with one another about strategies for measuring and evaluating quality and about the relationship between the two domains. Figlio and Schapiro are attentive to the difference between charisma and deep learning in teaching and to the problem with approaches that may only be measuring charisma, but their strategy for assessing deep learning works well only with a highly sequential curriculum in which you can tangibly track the retention of particular skills or information. They also point out that the value of research prestige to the students at a given institution may not be confined entirely to pedagogy: there are other important signals being sent and social networks being formed.
Suggest that any conversation must first dwell on these kinds of issues, and most higher education leaders are at best impatient, more likely derisive or dismissive. There isn’t time; these are trivial concerns. But they’re not: they’re at the heart of any claim that decisions need to be evidence-driven. And if you have a faculty of Ph.D.s, one thing most of them are good at is paying attention to the specifics. Wanting to start a conversation where all the specifics are banished before it even starts, in favor of a determination to make changes, runs counter to their training and to their preferences.
To start with the thought that the research on the link between research output and teaching quality must drive changes also raises the question: why? To argue for change, you have to argue that what you’re doing is in fact a problem. That in some important way you are failing right now, that you are less than you should be. And yet when higher ed leaders come to faculties urging that they embrace change, they do not say anything of the sort. I cannot imagine Rosenberg in 2007 walking into that faculty meeting and saying, “We are much worse at teaching than we should be,” and having that conclusion be a public statement. If I looked at Macalester’s webpage on the Wayback Machine from that fall, I feel confident I would not see “Macalester College: much worse at teaching than we should be.”
This is the surreality of the constant call for change from leaders and their favored consultancies: at the same time, they extol their institutional excellence, celebrate the quality of their teaching, take pride in the scholarship produced under their aegis, and note their fiscal health if they are in fact healthy. And yet they insist, urgently, that change must come. Adaptability! Nimbleness! Flexibility! Without accounting for how, mysteriously, immaculate excellence has blessed them despite the alleged resistance to change, the stubborn inadaptability of the faculty. Excellence in the present came from nowhere and apparently has nothing to do with existing practices and mindsets. Excellence in the future can only come from being comprehensively different from the present.
Take a moment to breathe and imagine that Rosenberg had had the patience to stick it out and insist on his conversation. (It is hard to reconcile the evident importance he places on this research finding with the fact that he decided, after a very short time, that it wasn’t worth the trouble to stick with it.) Suppose the conversation had unfolded as I suggest it might, with a patient evaluation of the research and the issues, and a conclusion had at last been reached with, in the immortal words of Teresa Sullivan, the former president of the University of Virginia, “incremental buy-in” from most faculty. Suppose that conclusion was that the pursuit of scholarship does not affect the quality of teaching, even at a small liberal-arts college.
Does that mean a change is gonna come? Consider the further frontiers of discussion that open at that juncture:
Are there reasons to value scholarship for itself rather than for its effects on teaching? Is subsidizing the production of scholarship a commitment to the public good?
What happens when you tell professionals whose work you valued yesterday that tomorrow you won’t value it in the same way? When that hits at the strong intrinsic motivations that drive their labor, at treasured ideas they have about themselves and their community? What do you get for forcing that shift?
Where are you going to recruit professionals who teach well but do not have training as scholars? Aren’t you going to end up recruiting people who were trained as scholars and telling them to drop that training once you hire them? Are you going to compensate them for the loss of their general value in their labor market if they follow your lead, or are you going to wait for this change to come everywhere at once? If it doesn’t come, are you going to end up sitting on a conclusion that you can’t meaningfully implement? Or are you just going to hire people, ask them to change their professional identity, and never pay that off, perhaps via the adjunctification of your faculty? (The Figlio and Schapiro article comes pretty close to endorsing that last strategy.)
How much better can teaching get if you remove scholarship from the professional obligations of faculty? None of the research Rosenberg is referencing claims a large effect size; all of it struggles to identify evidence for any relationship at all. And suppose teaching did get that much better: if better teaching is the singular goal, what else needs to be studied? Do sports really help, or are they a distraction? How about residential life? Do academic-success staff who are not faculty measurably improve teaching outcomes? Do the possible improvements in teaching justify the huge disruption of changing the entire incentive system and value system of the institution?
How exactly did small liberal-arts colleges end up embracing the idea that scholarship and teaching are positively related? Was it the result of a research-driven decision-making process that led to clear calls for change and new policies enforcing that change? Any historian of higher education knows that’s not the way it happened. It happened slowly. It happened because of competition for a dwindling number of positions. It happened because presidents of places like Macalester allowed their sense of “quality” to be driven by comparison to R1 universities. It happened because faculty became more ambitious in their scholarly goals and because scholarly knowledge in general grew. It happened because funding bodies demanded more and better work. It happened because scholarly publishing became a multi-billion-dollar industry. It was a cultural and social change, much of it accidental in the fashion described by David Labaree in his book A Perfect Mess. How do you unravel that kind of change? Not by making a big decision after a careful discussion. And not without saying what problem it is that you actually want to solve.
That in the end is why faculties tense up when presidents, provosts, vice-presidents and board members assail them with demands for change. Because they know that all of those people have a prior belief in change for change’s sake, an incentive structure that doubles as a religious conviction—and yet few of them will say with any degree of confidence or specificity what exactly it is that they think needs to be changed and what the consequences are of failing to do so. That is what they want committees and meetings and consultants to do for them, so that they do not have the burden of advocacy. So that they can fix something that they never had to concretely diagnose as broken.
Of course we tense up in that circumstance. Ask yourself how you’d feel if you went to see the doctor and the doctor said, in fact always says, “We need to do some surgery on you”. When you ask why, they reply, “I don’t know, but I’m pretty sure we’ll find something if we open you up.”
1. Ali Palali, Roel van Elk, Jonneke Bolhaar, and Iryna Rud, “Are Good Researchers Also Good Teachers?,” Economics of Education Review 64 (2018); John Hattie and H. W. Marsh, “The Relationship between Research and Teaching: A Meta-Analysis,” Review of Educational Research 66, no. 4 (1996); David Figlio and Morton Schapiro, “Staffing the Higher Education Classroom,” Journal of Economic Perspectives 35, no. 1 (2021).
Thanks Tim. I’m thinking also that when someone gets herself into a Ph.D. program, a process begins in which this prospective academic enters a guild, a craft, a field in which there may be a paymaster like Macalester, but there is also the beacon or hand of colleagues beyond the college or university with whom she/you/we engage in research. I don’t think this broader commitment is rendered void by signing an appointment letter. It seems presumptuous for a college leader or president to assume that all the “work” of the faculty member is under the direction of the college leadership. That “conversation” should never have happened without acknowledging that commitments to research are not simply pieces of college time.