Academia: The Questions Are Big! It's the Curricula That Got Small.
Thursday's Child Has Far to Go, But He At Least Read T.J. Kalaitzidis
In the spirit of “never waste a good crisis”, there’s yet another way to think about AI’s impact on the work of academia. You can be agnostic about whether generative AI has anything positive to offer higher education—or even outright certain that it doesn’t—but whichever view you hold, you need to attend urgently to the weaknesses it has exposed in our existing practices.
That’s not an easy thing to do at the moment, because the federal government’s brutally indiscriminate attack on all of higher education has exposed yet other weaknesses that we have to work on. It’s a bit like running around a house in the middle of a zombie apocalypse to nail more boards over the windows and doors while sending someone down to the basement to find out what that bumping noise might mean. That’s just the way it is, though: adjust the defenses while looking for a way out to some better and more secure situation.
I have read a number of analyses pointing out that some of the best responses to generative AI are things we ought to have been doing all along, and that unreflectively defending various pedagogies and curricular structures as necessary and functional was a mistake to begin with. Think back to most of the writing students did for your classes twenty years ago, if you’ve been in academia that long. (Or the writing you did as an undergraduate before AI.) Are you that confident that it was an important part of how students learned the subject matter? Or learned skills that they developed further in life? John Warner’s 2018 book Why They Can’t Write put an exclamation point on something I’d felt ever since I sat on a committee that saddled my institution with required writing-focused courses: that most of us didn’t use writing to stimulate thinking or as a medium of communication, but as a wheel of pain that forced students to read the books and learn the material. We should have been collectively a lot clearer about why and how to use writing in our teaching back then. Generative AI is to some extent a punishment for pushing that off, and not just with regard to writing—any computer science department whose only understanding of what you learn by learning to code was the practical art of making computers do things is now facing a pretty serious teaching problem.
Of the things I’ve been reading lately, I’ve found a lot of interesting wisdom in an essay by T.J. Kalaitzidis, who works at Brown University. (I’m grateful to him for bringing it to my attention.) Kalaitzidis’ essay, “How Generative AI Fixes What Higher Education Broke”,1 doesn’t convince me that AI is actually the fix, but it does convince me that AI exposes what was already broken about higher education, especially in institutions that claim they’re built around the idea of “liberal arts”, and that no response to generative AI that stands pat on the status quo version of higher education circa 2015 or so is going to pass muster.
Kalaitzidis starts off with exactly this point.
The ideas within [this essay] do not present GenAI as a savior or a scourge, but as a mirror, forcing us to confront what education has become and inviting us to rebuild something better.
The Cold War American university, he argues, became incoherent in its “systemic architecture” and fell back with increasing intensity on a “reductive concept of learning” that did not at all resemble the extravagant, utopian description of “liberal education” that had become commonplace in the imagination of faculty as well as wider publics. That process of falling back on a constrictive systematization produced universities that absolutely did not encourage exploration, creativity or synthesis, but instead relied on surface-level memorization and performance in service of external validation. “What students ‘remember’ in such systems is either context-bound or quickly forgotten,” he concludes. We’ve ended up trapped in what he calls “epistemological pathology”.
In this view, faculty acted as if students were engaged in open-ended exploratory learning. We told ourselves that students committed gradually to working in disciplines in depth through a process of conscious selection of the right tools for the right task. We imagined that a liberally educated student started off thinking about a big problem, an open-ended question, and that the student was then drawn to an established way of thinking about and working on that problem, committing to that kind of inquiry both in its skills and in an understanding of the best that has been said and done within that academic tradition.
Which is, as Kalaitzidis observes, absolute nonsense as a description of the actual system that almost all four-year universities and colleges use to organize undergraduate learning. Most of us force students to quickly commit to the course of study that a discipline offers and then, as he puts it, “enforce behaviorism”, i.e., to perform the signs of disciplinary commitment in advance of actually being able to reflectively consider or understand that discipline. Those signs turn out to be measurable repetitions of what the discipline knows and does, so that we can prove via tests, grades, metrics and assessments that the discipline has been learned step by step, in measured increments. Kalaitzidis writes, “Assessments measure retention, reproduction, and formal compliance. Rubrics reward correctness within predefined bounds. Curricula scaffold students towards compliant outcomes, not transformative ones…despite overtures to critical thinking, students find success in simulating insight, not generating it. Successful students understand the game and play it well.”
At a minimum, I think the essay hits with appropriate and well-deserved brutality on an especially sore point, which is the exaltation of “critical thinking” by professors and administrators alike. The problem with “critical thinking” conceptually is that it is strikingly difficult to concretize either in pedagogy or in our actual practice. It frequently functions as an unseemly kind of self-flattery, a belief that we are perpetually questioning our own authority, that we subject our own practice to reflection and skepticism, that we take nothing at face value. Which is not really a description of most of the professors I have known in my life. That’s understandable: the more comprehensive your commitment to that definition of critical thinking, the more exhausting it becomes—and often the more it cripples your ability to communicate with anybody. Every statement provokes an endless series of qualifiers and reflections, to the point of negating anything and everything that was said. But this means that “critical thinking” is a very bad hill to die on as a definition of what it is that we teach and what it is that we do. It’s either vaporously undefined or it’s narrowly constrained to the point of violating what the concept seems to promise.
More generally, the weakness of “critical thinking” as the distinctive outcome of liberal education underscores Kalaitzidis’ wider critique of the disjuncture between the valedictory word cloud most of us offer to talk about what liberal education means to do and what the actual curriculum and teaching of most universities entails. The famous Bloom’s Taxonomy is a beautiful illustration of this point. The top of that pyramid of learning, the exalted state of being we want our students to achieve, is what we say they’re doing from the first day they matriculate, but in practice it is at best something that most of them only get to do near the end of a four-year course of study. The actual curriculum is a reversed image of what many of us say it is, of what many of us claim it does. Which is why, even before AI, so many students struggled to connect what they were doing in a course of study with the messages surrounding liberal education and with the career outcomes being promised to them from the credential it provides.
For Kalaitzidis, this observation explains the intensity of many faculty members’ reaction to generative AI and the sense that it poses a deadly threat. In this analysis, generative AI is almost a Brechtian device that reveals the mismatch between our self-understanding and our practices. Generative AI, in “excelling at the rituals mistaken for learning: symbolic reproduction, surface compliance, and decontextualized recall”, forces a crisis among educators, and regrettably, in his view, many of them “double down” and commit with even more intensity and rage to “militarization”, to the maintenance of an “academic police state” that seeks with even more fervor to prevent AI-enabled cheating.
That thought is certainly a description of the mindset of many faculty in large research universities, who have already realized that they are more or less helpless to prevent AI usage in large introductory courses that fit Kalaitzidis’ critique. You can’t very well weed students out in a class with 750 students in it if all of them can just subcontract all the work you assign to ChatGPT. The problem is the class itself and its relationship to an overall curriculum, but it’s easier to blame the students and the AI.
In my own teaching and writing, the solutions I’ve advocated—change the nature of the writing, focus on what gets learned in the live experience of discussion, be more exploratory and more Socratic, open up the syllabus to be less about the mastery of a discipline and more about a broader understanding of a topic or theme—are only possible in a very small classroom and only possible in a relatively non-scaffolded curriculum that de-emphasizes the repeatability or standardization of a given learning experience. Strategies that are only readily at hand in institutions like my own.
Kalaitzidis’ essay imagines a more thorough reorientation of higher education, intended to align its deep aspirations with its practical operations. Before he specifically promotes the usefulness of generative AI, he makes what I recognize as a familiar constructivist argument about the overall problem, an argument with deep roots in American educational thought in particular—that a liberal education should be situated in the real lives of students and faculty, in the world as it is, and it should be experienced, it should be hands-on and real-time. In service to this advocacy, he envisions higher education in terms of four quadrants in a matrix: low-function/low-formalism; low-function/high-formalism; high-function/low-formalism; high-function/high-formalism.
The last quadrant is what he thinks a liberal education should strive to be, the place where its characteristic epistemologies and pedagogies match its declared aspirations and values, and it is also where he thinks generative AI can be a net positive, a “thought partner”. Some examples of high-function, high-formalism curricular design, in his reading: “Cognitive Apprenticeship; Case-Based Learning; Problem-Based Learning; Design Thinking; Socratic Method; Research Practicum; Simulation”.
Here we hit the bump in the road that I’ve discussed in my own writing about generative AI, an issue I’ve seen raised in a number of other essays and postings as well. Whether generative AI is involved or not, what Kalaitzidis characterizes as high-function, high-formalism learning is extremely hard to carry out if you don’t already know a fair amount about what you’re doing or learning, unless you have the luxury of many years and a tremendous amount of individual attention from a highly experienced facilitator or instructor.
Take formal apprenticeship, which he mentions both as high-function, high-formalism and, in some contexts, high-function, low-formalism. In the older crafting sense of apprenticeship, when I am apprenticed to a master craftsperson, I will have to spend years of my life doing the work in order to learn the craft. I won’t be starting with experimenting or exploring, either—I will do the tedious, basic tasks that have to become second nature for me later on. I can’t be allowed to just experiment with expensive materials or operate dangerous machines. You could learn brain surgery by apprenticeship, but nobody is going to crack open a cranium, even of a lab animal, and tell you to see what happens when you run a scalpel through the brain. If I don’t have the vocabulary to describe what a brain is or how it works, or to understand the kinds of brain dysfunctions that surgery might correct, I’m not going to be able to do that work unless I spend decades opening up brains and learning by trial and error.
Here it helps a bit not to treat some problems as entirely new—constructivist pedagogies typically run into the problem that there are a lot of things that people can’t learn just by experiencing them in an unhurried way. There are some smart theoretical and practical strategies for working around the issues, ranging from radical rejections of everything that can’t be learned experientially, Socratically, slowly, to using constructivism as part of a mixed-methods approach to learning. (That’s what labs are, whether in STEM or the humanities: a bit of hands-on experience as a backstop to other modes of learning. Kalaitzidis dismisses that as “scripted” and classes it as low-function, high-formalism.)
You can’t get away from the time problem in particular. Whether you’re using higher education as a credentialing device that is a gateway to skilled labor or as a way of preparing young people for the lives they’re going to live, nobody’s going to put up with a redesign that formally obliges a student to a decade or more of learning in the system. You can’t just redesign the university as an institution if you’re serious about a 100% high-function, high-formalism approach; you have to disperse ‘education’ over the entire life course while retaining some sense of intentional design over its processes and some sense of expert responsibility for its outcomes. It would be great for each of us to have an assigned mentor for life who operates like a professorial version of Jiminy Cricket, guiding us through experiences into more and more knowledgeable and imaginative manifestations of our learning, but that’s not practical if we’re talking about a human teacher and it’s not possible if we’re talking about an LLM-based generative AI. That’s not what those AIs are, not what they do, and not what they can do in any imaginable short-term future.
Barring a post-scarcity society that can pay for each of us to have our own Jedi Master, universities (or some successor institution) are going to have to perform some form of temporal compression with learning experiences. That means something that isn’t completely open-ended and 100% experiential. Somehow the young people of tomorrow have to be able to benefit from the accumulated wisdom, knowledge and skill of many generations of people. They can’t just recapitulate all of that history in their development as adults. Plus, a purely experiential approach just doesn’t work for everybody. Yes, if you throw people into the deep end of the pool without preparation, most of them learn to swim well enough to get to safety. If you do that enough times, most of them get pretty good at swimming. But afterwards, at least some of them will resent the trauma of that experience enough that they’ll never get in the water again.
In any event, I think Kalaitzidis oversells generative AI as the answer to those old problems. He points out that the critics of generative AI are desperately trying to use conventional evidence about negative educational outcomes within the “epistemological pathology” of higher education to justify banning the use of generative AI, which he regards as a defense of a model that is already indefensible. I would point out that some of the proponents of generative AI are also citing conventional evidence drawn from conventional research to say that generative AI is productive within that same “pathology”—many of the people trying to sell higher education on AI aren’t trying to sell a revolutionary redesign of education itself, just a way of making its dysfunctions cheaper and more efficient.
There’s also the problem that when he gets down to brass tacks, he ends up with the idea that incorporating generative AI into a less pathological form of liberal education ought to be primarily a matter of teaching “literacy”, of metacognition about AI. I have to say that this strategy feels like something of a conventional move on the gameboard of ed-tech—that to quell a fierce argument about whether a particular technology is necessary or helpful, we should shift and accept that it is good to acquire literacy in that technology, to be knowledgeable about what it is and whether or how to use it. Literacy in this sense is always seen as the necessary predicate to student agency—that you cannot decide as a free and informed subject whether to do a thing until you are literate in it—but this is a move that scoundrels can make as well as saints. Raskolnikov more or less talks himself into having literacy in murder in order to decide whether to do more murders. Generative AI does seem like something that more people need to understand so that they don’t mistake it for something it’s not, but the more you structure that process into a curriculum, the more you’re actually just cementing it into the old pathologies that Kalaitzidis is criticizing, and the more difficult it will be to disembed if and when LLMs are no longer what we mean when we talk about AI. There are still people teaching “media literacy” who are in effect talking about the Internet of the 1990s, for example.
Where I’d start with the revolutionary redesign that Kalaitzidis hopes for is not with generative AI, even if I might actually agree in the end with his proposition that it could be useful. What I think liberal education needs is for the first year of a four-year program to look structurally like what we claim is happening in liberal education. Big questions, exploration, the cultivation of student agency, rich conversation, open-ended experimentation and experience.
And not in the conventionalized Great Books format that St. John’s, Columbia and Reed offer, because those are just hiding the lack of agency, the lack of exploration, the lack of real “critical thinking” behind some axiomatic propositions that are not up for questioning. In different ways, they are all asserting that there are some ontologically privileged texts that need to be read and talked about, texts that will open up for all students the widest range of big questions and deep thoughts. But the canons being used are just as closed in their way as conventional disciplinary structures, and they are similarly not subject to “critical thinking” in a way that might negate or contravene their selection. Try telling a professor who thinks that Augustine’s Confessions speaks with perfect transhistorical clarity to a young man or woman today that this style of reading requires both disregarding all the material in it that is deeply opaque in its historicity and treating a young reader of today as an obviously universal subject. (I made this point directly to another scholar at a workshop, regarding his assumption that the young people discovering those universal lessons would all be men, suggesting that at the least a woman might read it differently, and I’m pretty sure that did not commend me to him as someone usefully engaged in ‘critical thinking’.) We could just say that you can’t read everything, and that there’s a benefit to students reading the same things together, so in practical terms, why not use the Great Books and gain the benefit of having many centuries of other readers and readings to draw upon. Ok. But that is not really a revolutionary opening up of liberal education or an open-ended development of agency.
How about this instead? What if the first year of a liberal education was just asking all the questions that arise out of being alive without immediately wrestling them into manageable, reduced, compartmentalized, organized, time-compressed pathways of study and skill development? What if some questions were asked but not answered, and all of them were treated more or less Socratically, by a professor who was not there as the expert in all the big questions but as someone who could (eventually) help point the way to where they are most commonly raised and worked on? What if a new taxonomy better than Bloom’s wasn’t a pyramid but an hourglass? Start big, go narrow, go big again?
Every time I get to talking in this vein—long-time readers know this is not the first time I’ve indulged these blue-sky fantasies—the few remaining people polite enough to pretend to take me seriously will bring me crashing to earth quickly by pointing out that this approach is nearly impossible to carry out for two reasons: a shortage of professors who could hold up their end of it, and the absurd luxuriousness and improvidence of such a course of study in a four-year institution that costs as much as colleges and universities do, in a society where neoliberalism shrank everything down to return-on-investment, austerity, and measurability.
Here is maybe where I hop on board with Kalaitzidis, with one wary eye on the emergency exit. Maybe in this time of multiple crises, this is how we stop just trying to defend business as usual—and maybe this is actually the kind of work that an often-wrong, often-misunderstood tool like generative AI really could help with. If we could appropriate generative AI, liberate it from the shackles of “efficiency” and “short-cutting”, and sell the idea of an exploratory first year via the perceived cachet and marketability of generative AI, that’s a deal with the devil that I’d be tempted to sign.
Apropos of this proposal, here’s Claude’s list of the ten most interesting questions to think about and learn about:
What is consciousness?
How should we live together?
What is our place in the universe?
How do we distinguish truth from falsehood?
What does it mean to be human?
How do we find meaning and purpose in existence?
What is the nature of time and change?
How do we balance freedom and responsibility?
What are the limits and possibilities of human knowledge?
How do we navigate the relationship between technology and humanity?
So ok, those are good questions, even if I can think of ways in which they are also malformed, banal, or less important than other questions. (I then asked Claude to give me an objection to each of these questions that says, “No, this is a bad question, badly asked”, and it came up with some good objections that could all push a conversation forward.) And they are questions that I think any 18-year-old, regardless of their lived experience to that point, could at least begin to say something about if a skilled teacher or conversationalist worked through it with them. They arise out of life even if they’re not answered simply through the act of living.
It doesn’t matter if an AI is “right” that those are exactly the right questions. They’re good enough. What matters is that the AI here is helping with the process of getting things in the right order: asking the question first, rather than reading a text or studying a discipline in order to adduce out of the material some buried, encoded, already-thought version of the answer and only then realizing that the question could have been asked in other ways. That always feels a bit like a pedagogical version of The Hitchhiker’s Guide—asking for the answer to the meaning of everything and realizing afterwards that you should have asked for the question first.
The real point, where I think I agree 100% with Kalaitzidis, is that the moments where you get to ask those questions in the company of peers and under the guidance of a knowledgeable and thoughtful person without being immediately pushed towards a particular way of domesticating that question into some specific philosophical, political, artistic or scientific way of handling it are extraordinarily rare in life, and perhaps even more so in most university educations.
But shouldn’t our students have a chance to explore at the beginning what those questions might be, what they could mean, where they could go? I hate the thought that the necessary precondition of asking “What is consciousness?” and having a serious conversation about that subject would be doing a major in cognitive science, psychology, neurobiology or philosophy. I’d rather a liberally educated student feel like they had a moment right there at the start where a question like that felt almost infinitely contingent, that the answers could include “Who cares?”, “I know it when I feel it”, “I think plants have consciousness”, “I think I understand it best when I read a novel”, and “It’s a historically specific product of particular kinds of social relations, not an individual psychological phenomenon”. I’d like to imagine that there would be students who could ask, without fear of feeling stupid, “What do you mean by the word ‘consciousness’?” And I’d like to imagine a design for liberal education where the end of a year of asking questions like that would then lead to students saying “The most interesting question we asked was X, and the most interesting approach to answering it is Y, so that’s what I am going to study more deeply.”
Maybe AI could be one way to make some faculty comfortable with a time (a month? a semester? a year?) of more open-ended exploration like that. And maybe then we’d have better outcomes, both motivationally and substantively, with the specialized majors and courses that are the main substance of our existing work.
Image credit: William Blake, Night VII, preface page (vii), illustration to Young’s ‘Night Thoughts’; Socrates, reclining in foreground, receiving cup of hemlock from bearded figure. https://www.britishmuseum.org/collection/image/17924001
1. Kalaitzidis, T. J. (2025). “How Generative AI Fixes What Higher Education Broke: A White Paper for Reimagining Pedagogy in a World of Thinking Machines”. SocArXiv. https://osf.io/preprints/socarxiv/yh6rs_v8



Your post today has me thinking of two things. On apprenticeship, Brad DeLong often describes how people being trained as early scribes or medieval monastic librarians were first taught to mix the clay and prepare the ink, and that the writing came later. To me, this feels like a corollary to your point on apprenticeship and not letting total novices handle expensive or dangerous machinery or do brain surgery. The novice mixes the clay and learns to achieve the perfect consistency for the journeyman or master to then make the mark. The initiate monk prays and prepares the ink for the senior brother who illuminates the page.
But the larger connection I made was with Annie Abrams's book about Advanced Placement, Shortchanged. She traces the history of AP and the Jeffersonian liberalism that underpinned its origins and growing mid-century influence. Today, AP mostly serves as a way for students to *avoid* learning any more about a subject. The STEM kid takes AP Language or AP World History because she doesn't want to devote any time or effort to those subjects in college. This is part of the systemic architecture that reduces the concept of learning. Concomitant with AP as learning avoidance, the standards by which the College Board judges AP tests, especially essays, have become narrower and more formulaic. Compliance with the rubric is paramount, and any quality writing or critical thinking is of secondary importance, at best. It is exactly as you quote from Kalaitzidis: "Successful students understand the game and play it well." This is what we are teaching successful high school students that college is all about. Hoops. Getting out of learning. Finding ways to get through school and university as efficiently (and cheaply?) as possible.
But another component that Abrams brings up is that the rise of AP is in part a political and legislative phenomenon with many states requiring all high schools to offer AP courses and requiring their public universities to give course credit for APs if the students score average or above. She reminds us that AP is a private curriculum developed by a company and that it's actually quite rare for governments to compel citizens to use and pay for private services. That these laws are more common in conservative states is also telling. Perhaps, though, we are about to see states mandating the use of AI products, too? Calls for AI literacy abound.
The issue of critical thinking is an interesting one. The problem is that genuinely new critical thinking is exceptionally difficult. Academics are, as a whole, pretty good and thoughtful about what they do. The vast majority of those who imagine that they have some piercing insight that an entire field hasn't grasped are pretty well distilled by Keynes: "Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually the slaves of some defunct economist. Madmen in authority, who hear voices in the air, are distilling their frenzy from some academic scribbler of a few years back."
So I think if you put students in a position to ostensibly challenge some professional orthodoxy, odds are that you'll get a mildly dressed-up version of the phlogiston theory of fire. Now, there are some genuinely innovative thinkers, and most of them tend to be young (because they aren't intellectually bound by the constraints of professional orthodoxy). But those young thinkers go on to win Nobel Prizes and such. And for every one of them, there are a few thousand of the former type.
So academia has, probably correctly, rewarded those who can understand and get comfortable with orthodox tools. And for decades those orthodox tools have more or less matched what workplaces require. But now the issue is that AI is likely to soon outstrip people's ability to use those orthodox tools. And we'll need to figure out what people will productively do going forward that a machine can't do just as well. As Paul Krugman put it, even if generative AI amounts to souped-up autocorrect (and Krugman seems to believe that's what it is), a lot of highly paid white-collar work also amounts to souped-up autocorrect.