I apologize for the sparse content in the last month. This has easily been the single most stressful semester of my professional career. Some of the reasons are obvious and in the news almost every day. Some might be obvious if you know my own institution—or can infer from the stresses that are similarly unfolding inside many other universities and colleges. Some of it is maybe just self-inflicted, a consequence of caring too much about too many issues and sticking my nose into too many of them.
Also, I’m just getting older.
It’s not all about Trump, activism about Israel/Palestine, concerns about faculty governance and the like. There are other meteors cratering into the landforms of academia right now as well, and at least some of them might be extinction events in their own right. One of those is generative AI. I am hoping to spend the next month or so of columns answering back to some of the voices inside academia who I think are uncritically or naively embracing AI, while doing enough homework on the status quo to respond intelligently.
Yesterday’s essay in New York that features unrepentant, almost gleeful boosters of generative AI as a way to cheat, evade, avoid and reconfigure work in college has lit a fire under a lot of its academic readers. It pissed me off, too. But I want to work up slowly to my maximum level of pissoffedness over multiple columns. Think of this one as the very first minute or so of the “1812 Overture,” with the foreknowledge that eventually the cannons are going to boom.
In this warm-up, I’m thinking about what I’ve done with writing this semester, which has been something of a departure for me. Over the years, long before anyone worried about generative AI, I tried plenty of experiments with prompts, with assignment structures, and with types of assessment. I’ve done contract grading and other forms of ungrading. I’ve done live writing, open-book take-home finals, little assignments, one big assignment chunked down into pieces, projects and presentations. Nothing’s been foolproof; nothing’s been an obvious failure. To be clear, I’m not talking here about preventing cheating, which has never been a problem that worries me. I’m talking about feeling like the work I do with writing and expression has a meaningful impact on students’ skills development and helps them think about and process the course content.
Last fall, for the first time, I felt I was seeing essays written in whole or in part by generative AI. The writers were at least careful enough to get rid of non-existent references, but a few of the final research essays felt so disconnected from the discussions and readings in the course that they were a bit surreal, particularly when the writer was taking up a topic closely connected to what we did. I didn’t like that feeling because it seemed so pointless. Here you are in this class where we’ve talked a lot about this issue, and you can’t even make use of that when you’re writing about it?

I sleep easy at night thinking about the students who will forget one of my courses a month after it is over, just as I feel at peace knowing that some students will suddenly click into the right headspace ten years from now and realize, in a good way, what they actually learned from my class that wasn’t available to them until that moment. I am even ok with someone who is plainly just 100% checked out of a course as it unfolds. Life happens; sometimes you’re in a class that you don’t have the spare energy for, and that’s ok. But using generative AI to write a research paper that simulates attention from a student who has been seemingly present and accounted for in discussion feels unsettling—it’s a kind of erasure in real time, right before your eyes. I’d rather see a sneer or a weedhead’s blissed-out haziness than a seemingly present student who is just a duck blind for automated mediocrity created by a Big Tech company. I’ve never had that feeling before in teaching at my small, selective liberal arts college. I didn’t like it.
So here’s what I did this spring, using roughly the same instructions for both of my courses:
Our writing is in the form of short reflective papers. These are due at the beginning of the class session in the week where they are listed. Please try to submit them on time, as we will reference them in the second half of the discussion.
All of your writing should prioritize the following:

1. Your personal perspectives and views. You must use the first person and you must talk about how you see these issues, these histories.

2. Your understandings of the class material and our discussions. Your writing must reference the work we are doing in class, both the things we’ve read and viewed and, where pertinent, what we’ve said in our discussions. Your writing is a form of witnessing, a transcription of how you’ve experienced this course this semester.
Writing that is third-person, abstract, or vague, or that does not connect to the course, will be graded no better than a C regardless of the effort you put into it, the quality of your prose, or any additional research you did. You should be using your writing to think through the questions asked for the reflective essays and to express your thinking to me.
Do not use generative AI to write these reflective essays.
Along with these instructions, I committed to doing a live in-class final exam for each class that asks a simple question all the students have in advance: What did you learn from this class? What do you think you’re taking away from it? Do you care more or less about this topic after finishing this course? They can think about that question as much as they like beforehand, but they have to write about it live and in person at the end of the semester. The first of those exams is being written this afternoon, the next one tomorrow.
What do I think about this experience, here at the end of the semester? First, I’m extremely happy with the writing I received. Even in the few cases where I suspect the writer may have used generative AI a bit, what I asked for really did seem to hamstring any attempt to do so: if you have to talk about what we said in class, and you have to be personal, and you have to be expressive rather than descriptive, there isn’t much a chatbot can do for you. But it’s not so much checkmating ChatGPT that pleases me; it’s that the writing was almost uniformly much more interesting as a result. Not only more interesting as writing, but exciting as a transcript of thinking about the course content. I fully stopped worrying about the writing as formal product, as a thing in itself, and made it 100% about the experience and process of the course.
One consequence that I didn’t foresee is that my responses to the papers took considerably more time than normal. In the past, I’d say that I generally could evaluate a 4-5 page paper within 30 minutes. As all professors know, that adds up quickly—at that rate, a class with 15 students takes about seven and a half hours to grade. This time I’d say I edged towards 45-60 minutes for papers that were only 750-1000 words, and in many cases I wrote comments back to students that were almost as long as their papers. What I found was that my comments weren’t evaluative in the same way as I might normally have written. They weren’t about the form of the writing; they were a conversation about the thinking in the writing. Certainly my comments in the past addressed that too, but I was also often talking about formal concerns—split this paragraph here, write this topic sentence better, improve the flow of your analysis. Nobody loves grading, but I have found grading in this spirit this semester to be more engaging.
I don’t know that this will work for everything I teach. The courses I applied it to are more inviting to this approach than some of my African history courses might be; it is considerably harder to entice students into expressive and personal reflection on subject matter that is almost entirely unfamiliar to them.
Am I offering this as advice to my many anxious, angry and melancholy colleagues elsewhere who are also coming to grips with generative AI? Unfortunately it is not advice that many of them could plausibly take. My approach is predicated on being a professor at a small college that has small classes, considerable resources and students who are generally well prepared for higher education. It’s also a result of a high degree of pedagogical freedom.
Generative AI is in some ways only the last—and maybe most fatal—of a series of injuries and ailments stemming from decades of bad management in higher education, though faculty bear shared responsibility for some of those wounds. If you had invented a time machine in 2021 intending to bring ChatGPT into existence and traveled back to 1980, you couldn’t have done better to prepare universities for the massive adoption of generative AI as a method of cheating than what administrators proposed and faculty assented to in the years since.
Massive classes built around a lecture pedagogy. The conversion of most teaching to contingency, with the loss of mentorship and curricular governance that resulted from that change. The spread of over-specialized credentialism and the narrowing of the wage premium to a smaller and smaller number of professional and white-collar jobs linked to credentials. The willing acceptance of our role as gatekeepers and weed-outers, of being the alibi for the accelerating failure of the American political economy to provide a decent living standard for most of the country’s residents. Higher tuitions and the risk aversion they have helped to cultivate. The punishing accumulation of bullshit work processes within the academy and the disconnects between them and the core labor of faculty. Visions of austerity slamming into teaching and scholarship while administrative ranks grow seemingly without end. It has all led to many students at large universities feeling as if the university and its curriculum are little more than a credentials piñata to be whacked until it gives up the candy, and GPT is only the best and biggest stick ever provided for that purpose.
So whatever else we do now, one thing should be plain: in most of our institutions, just begging our students to love what we teach and to eschew cynicism is not going to work. The deck has been stacked against that message for a long time. Perhaps for the same reason, I think we can’t punish our way out of this.
Image credit: "pinata" by VeganHeart Always is licensed under CC BY-NC-ND 2.0.
Useful piece; thank you for writing with some detail about your pedagogy. Sharing the specifics of what we do in teaching and what results we see is vital.
As a teacher of writing (recently retired) and part of a professional community of teachers and professors, I have made the unfolding of generative AI since the debut of ChatGPT a core focus. Part of that is crafting assignments that mitigate the benefits of AI and reward student intellectual effort, along with, much as in your example, more in-class verbal presentation and debate. We understand that AI will be part of work going forward, but we very much want students to experience, internally, the difference between prompting and submitting results and thinking something through for themselves. They will often do the former, but they should know the difference. Our hope is that they will decide to work those muscles a bit more.
Generative AI certainly feels different from previous technological changes. The advent of Wikipedia may have made it easier for students to get basic information about a topic, but it couldn't replace critical thinking or learning to distinguish between rigorous and frivolous sources. Calculators could replace working out answers to equations by hand, but high-level math was never about getting to answers; it was about understanding concepts and knowing when and how to apply them.
Generative AI can, on some level, replace basic writing. But it also erases critical thought processes: being able to read, engage with, and understand a text; to dissect ideas and identify logical flaws; to think creatively and critically. The kids who show up to school and feed prompts into ChatGPT aren't really shortchanging the institution; they're shortchanging themselves and society. They're not developing any of the critical skills they'll need to be successful personally or professionally. They'll just check credential boxes.
That's not entirely unique. I had high school friends who went to the hardest-to-get-into colleges, hadn't read a book in their adult lives, and grew up to be spectacularly banal and uncritical thinkers. School to them was about collecting credentials that would open the door to the next elite school, and then the high-paying job. But they inevitably hit a wall, ending up, effectively, as cogs who could follow instructions but never figured out how to think critically or learn.
I think generative AI is putting us on that path, on steroids. Some of that you can mitigate with things like in-person exams. But it'll be an uphill battle.