Academia: No, I Won't Email My Students + Peer Review Redux + Teaching Study Skills
Thursday's Child Has Far to Go
A grab-bag of thoughts for this week’s entry.
There is nothing that irritates me more, in a small irrationally intense way, than unsolicited emails that ask me to please forward or announce some opportunity from the sending organization to my students.
This is partly because 99% of these emails are scams in one respect or another. There used to be an expensive and likely ineffective coding bootcamp whose for-profit conveners would ask faculty to please tell students to apply for the fellowship they were offering, when in fact they were offering no such thing—they were offering a small discount on the price for three or four applicants, that’s all. (Woefully, a few of my colleagues fell for this and forwarded news of this very much not-a-fellowship to students.) More conventionally, the scams are something like PIRGs, with their exploitative labor model that hasn’t changed much for four decades.
Even when it’s an organization that seems basically ok, I’m not going to use my students as a captive marketing list. That’s not why they’re taking my class. I’ve literally never had someone contact me about an opportunity that is highly specific to the course that I’m teaching—at best, organizations are reading off the faculty listed as members of departments and programs and mass-sending these emails to everyone so listed, regardless of what they’re teaching this semester. I grant you that if in a semester when I’m teaching a course on the first decade of African independence a fellow professor who is completely legit sends me a note saying he’s looking for summer research assistants to work on a new anthology of writings by postcolonial African intellectuals and political leaders, I will enthusiastically pass news of that opportunity on to the class.
Otherwise, don’t send me that shit.
On reading at digressionimpression’s Substack about a controversy among philosophers over peer review, I’m struck once again at a really basic problem in academic life: nobody teaches anybody how to peer review.
You get your doctorate, you get a job, either contingent or tenure-track, you start to get requests to do peer review. Thirty years ago, those requests generally included some minimal instructions about what the journal or press wanted to know; today, the instructions are often more specific and detailed.
Whatever the instructions, anybody who has received peer review knows that the variations in what you get are in no way predicted by or produced by those instructions. Some reviewers stick narrowly to assessing the factual content and a minimalist evaluation of probable “impact” (e.g., is this worth publishing?). Others will focus extensively on whether the author is citing all the scholarship that the reviewer deems relevant (sometimes including the reviewer’s own work). Some reviewers will workshop the prose really hard. Others will take issue with the argument or analysis, not because they think it’s factually wrong but because they don’t agree with it. Some reviewers will gatekeep against an author they think is out of their lane; others will encourage and welcome someone who is extending their work into a new field.
Whatever the peer reviewers do, sometimes it’s completely valid and useful—there is a problem with the argument or the methods that can be addressed with revisions, the prose is a problem, the manuscript isn’t acknowledging an important body of scholarship, the evidence is kind of sketchy. Sometimes it’s a nuisance—not quite wrong, not quite right, not really something that has to be fixed but the editors insist on it anyway, and it’s hard to tell whether that’s because they want to maintain a relationship with the reviewer, because they have a model that inflexibly requires that they follow a reviewer’s lead, or because they agree with the reviewer. Sometimes it’s bizarre or malevolent—which editors may or may not agree is the case. (I’ve been asked on occasion to be a third reviewer to mediate a sharp disagreement; when I’ve found one of the reviewers to be just crazy wrong, I’ve asked editors why they didn’t just throw it out, and heard that they just can’t because of policies, despite agreeing that the review is nuts.)
Every single time the issue of peer review as a practice floats up as a controversy in public culture, whether in social media or in higher ed journalism, the lack of profession-wide norms around peer review gets noted. Moreover, I think we discover that collectively we don’t know very much in any way about peer review: who does it? who doesn’t? does anyone at all remember who does it? do editors share knowledge about who does it well with anybody else? is there a list of “never ever ask this person to peer review” that is quietly passed around between publishers?
Reading Daniel Willingham’s interesting essay on how to teach better study skills (and how students’ assumptions about studying often lead them to privilege less effective approaches), I was struck once again by the thought that in higher ed, many of us don’t really teach directly about some of the skills that our students are expected to use proficiently. Study skills are one of those; reading is another.
Part of the problem is that even when you’re convinced that you should teach more explicitly to those embedded, essential skills, even when you read an account like Willingham’s that helps you think about the most effective approaches to those skills, it can be hard to teach if your own use of those skills has become masterfully second nature.
I think this is pretty well understood as a dimension of teaching by scholars who study pedagogy. Imagine a graph where the x-axis is “degree of mastery” and the y-axis is “clarity of instruction”.
When you’ve just started to learn something, trying to teach it to others is straight-up Dunning-Krugerville: you may feel confident enough about what you’ve just learned to communicate it, but you’re likely to be wrong about what you’ve learned and to communicate it poorly because you’re trying to cover gaps in your knowledge. I remember finally grasping what bokeh was while teaching myself photography and then trying to teach another person what it was. Only later did I realize that I didn’t fully understand it and that my description was technically unhelpful.
You hit a point where you’re quite expert but you can still remember not being expert and there are still things you’re working to understand fully. It’s not second nature to you. That’s the point where your teaching is maximally clear: you can use your memory of the journey to this point to communicate to students how to take the next step.
Good teachers often do work to continuously hold themselves at this point—they defamiliarize the subject matter, they unsettle their expertise. (I think this is part of why many of us get tormented by imposter syndrome later in life—our habit of productively unsettling our expertise for the sake of teaching gets out of hand and starts to undercut our confidence.)
But sometimes a skill becomes so second nature, so embedded in your work, that you stop being able to explain it clearly. You start to just say “do these five steps in order” without explaining why—and often forgetting that there’s a hidden sixth or seventh step in the process that you don’t even think about any longer.
It is not just that some practices get embedded; when they are, they also become more rapid. We may know intellectually that we’d understand a subject better if we had the time to study it together with colleagues, or to externalize our design for a particular procedure, but we also know that we just don’t have the time for that work very often. A STEM faculty member or lab instructor might do that for a new lab, for example, but if there’s an established lab that has developed a problem or is creating confusion among students, it’s hard to call a halt and painstakingly review the lab bit by bit to see what’s causing the issue.
Reading is like that for a lot of faculty, and the work of formal studying in preparation for some sort of public presentation or demonstration of knowledge even more so. It seems to me this is where we’re likely to suggest or validate some of the practices that Willingham suggests are less effective, like simply re-reading a textbook or other material prior to a test. There’s also the sense sometimes that temporarily disembedding a skill that you practice as second nature creates a vulnerability—you worry suddenly that you’ve unilaterally surrendered a claim to mastery in front of people who expect you to be masterful—which brings us back to the roots of late-career insecurity.
Image credit: "In Peer Review We Trust" by Sarahmirk is licensed under CC BY-SA 4.0.