Is generative AI just a bunch of meaningless puffery being pushed forward by Big Tech following a familiar blueprint?
My first answer is yes. (Later this week, I’ll have another answer.) Yes, right now. Yes, in the ways it is being deployed.
I’m going to focus in this column on generative AI’s use in research, writing, and visualization. I don’t have the expertise to discuss its use in coding and programming, though I do see people I trust saying that generative AI does a pretty good job of writing code quickly and accurately when given good prompts. I’m also not going to talk about AI usage in more specialized kinds of data analysis in the sciences and in industry, where I am likewise convinced that there are current and near-term uses that aren’t hype.
The world is mostly encountering generative AI in writing and image-making, however, and as a next-generation search engine. Right now, the boosters of AI, including academics, tout the current generation of AI tools first as “accurate” and only secondarily as a source of efficiency in research.
I find many of these appraisals to be at odds with the actual use cases of generative AI out there in the world. First, AI is mostly functioning as a kind of marketing hook: the quintessence of what we mean by “hype”. In this usage, it’s not that different from the “special ingredients” added to foods, cosmetics, toothpaste, and the like that did little to actually change the function or effectiveness of the product. AI is being bundled into applications and platforms whose users did not ask for it and whose purpose has nothing to do with it. I don’t need AI to conduct a Zoom meeting or to read a PDF of a student’s essay in one of my courses.
Second, there is growing evidence that AI is being used as a tool for cheating in education, as a cost-effective generator of disinformation, as a way to generate routine communications or handle routine tasks (often obviously so, and thus ineffectively), and in some cases as a way for people to save money on professional services (generating legal briefs, standing in for legal representation, and so on, again often ineffectively). Many of these uses remind me of the people who believed that “cruise control”, a feature on cars long before autonomous driving, was itself an autopilot, and who had accidents as a result. AI in these contexts “works” in some approximate sense because of a lack of scrutiny of the output (say, in a course where the instructors are grading five hundred essays responding to a highly routinized prompt). Its users are not particularly well served by AI in these contexts, but then again, in a fair number of these instances, the processes that are generating a need for AI are badly designed in the first place.

If we were more self-aware collectively, we might almost use generative AI in these circumstances the way you use a bloodhound to track a criminal: wherever current-generation AI can be used, we have either a process or procedure that functions as an unnecessary obstacle with little purpose, or a predatory political economy collecting rents of some kind. Identifying excess bureaucracy, mindless communication, or punitive make-work is not what generative AI is being touted for. But it is what it’s being used to handle.
Third, at least some of the promoters of generative AI inside academia are identifying an efficiency function that no one asked for or needed. Here I’ll focus in particular on claims I’m seeing about historical research, since that’s one thing I know well. There is no great crisis requiring historians to process a larger volume of archival data at increased speed. Nothing depends on us doing so. More importantly, historical research requires constant adjustment of interpretations and questions in the process of reading a document. We don’t need, and didn’t want, a single final meaning of a given document dumped on our doorstep as a service.
Later in this series of essays, I’ll have more to say about what I do think generative AI can deliver, and why I ultimately agree that it isn’t hype in the sense of its current and near-future capabilities. As you will see, however, the most useful deployment of current and near-future generative AI in research and expression absolutely requires that you already know a great deal. This is not a new problem in research or in creativity. You couldn’t use a card catalog without knowing what you were looking for, as well as what a card catalog is. You couldn’t use Google search at the height of its effectiveness without already knowing enough to iterate your keywords, refocus your searches, or mine the materials you found in one search to refine the next.
The problem with the hype about generative AI, and with its headlong insertion into many tools and platforms, is that it is brutally short-circuiting the processes by which people gain enough knowledge and expressive proficiency to be able to use the potential of generative AI correctly. Many of the boosters of generative AI inside academia seem to me to dismiss this problem altogether, just as they ignore the likely consequences of AI-generated slop filling up existing databases and archives. People who don’t know what they want to know, and who don’t know how to spot the difference between slop and knowledge, are being pushed to use AI as a substitute for processes of learning, acquisition, and agentive creation. By the time we collectively understand why that was a terrible thing to do, it will be too late to undo it.
That’s the hype. Companies making AI are desperate to have it seem needed, and they are working to create a simulation of that need through the indiscriminate deployment of their products and through the same kinds of networks of boosters, promoters, and institutional entrepreneurs that dutifully assembled during the 1990s to predict that digital technologies would, through their intrinsic capabilities, usher in political and economic utopias. Those networks served us poorly then, and they are serving us poorly now. The real use cases of generative AI are not as entry-level tools but as sophisticated extensions of human capabilities and skills that take years of intensive effort to develop.