Of Course Ganz Is Right About Polls
75% Of Readers Agree
John Ganz is a deservedly successful Substack author. I really like his recent book and I always enjoy his newsletter. That makes him a target for other public writers who want to establish their gunfighting credentials. And sometimes the arguments people are trying to start really are not as thoughtful or reflective as they could be.
Recently, Ganz pushed out an essay making what was to him a rather obvious, even banal, observation: that polls and polling are at best unreliable measures of what they purport to measure, that polling as a business is something of a sham, that polling data is frequently cited by political actors and mainstream pundits as reasons to favor particular positions or shun them in ways that are more or less bullshit, that “the statistical fixation of the early 21st century that’s made so many bad predictions and fathered so many puzzling defeats must be abandoned”.
When you are a writer who is typically known for logorrhea, for exploring complexities and ambiguities, for getting into the details, and then you say something plainly, bluntly and directly that hits hard on somebody else’s bread-and-butter, you get a big reaction that may surprise you, one you weren’t necessarily looking for. I once wrote about why I thought the Library of Congress subject headings had become useless and I said it in a fairly intemperate way at a point where faculty and librarians felt like they still had some measure of control over knowledge management and bibliographic authority. I was coming straight out of a long day of frustrating discovery work and I didn’t really think anybody would read it or care much. Since I didn’t track my blog statistics at all, it took me a while to realize that I had touched a nerve. Ganz also seems to have been surprised that what he wrote wasn’t received as obvious, and was instead treated as something to argue about.
I’m not surprised that he got pushed back on because in my long experience with blogging and online conversation, I’ve run into the same kind of response. The epistemological underpinnings of a fair amount of social science have some serious problems that become far more serious when they’re deployed in think-tanks, punditry, consultancy and policy advocacy. When you’re doing scholarship in many disciplines, you’re not exactly in a hurry to talk about serious philosophical, theoretical and methodological shortcomings in your discipline or your field of specialization that you can’t resolve or even really address without bringing your work to a screeching halt. But scholars usually know where the bodies are buried and usually will talk about them if they have to, and if the challenges to a particular paradigm are growing urgent enough, that discussion will often trickle up into the actual work they produce.
In advocacy and punditry, on the other hand, you’ve often got practitioners who are using those methods and paradigms a bit like a cargo cult uses whatever gets parachuted onto their island. They don’t necessarily understand the priors and they’re not prone to thinking about the epistemological hazards. The data they make and consume is not an end in and of itself; it’s a means to an end. It’s a rhetorical weapon, an ideology, a justification, and in light of all those purposes, someone with an epistemological objection is not just an enemy but something akin to an extraterrestrial doing something uncanny and freakishly menacing.
There is in this kind of struggle over polling a recurrent moment that I can’t help but find hilarious in its way: a defender of polling as a reliable method for measuring what people think or believe will say that pollsters have measured and accounted for the number of respondents who are answering randomly, the respondents who are answering what they think the pollster wants to hear, the respondents who are picking an answer that they think is funny or silly or shows their contempt for the poll itself, the respondents who are answering something noncommittal because the view they consciously hold is unrepresented by the poll or is expressed poorly from the respondent’s standpoint, the respondents who are afraid of what the pollster will think of them if they answer honestly, the respondents who have more than one thought about the question, and the respondents who are aware of the uses of polls and who are trying to answer in a way that instrumentally inclines toward the use of the poll with which they agree. The defender of polling’s straightforward empirical usefulness will say, “We have measured and accounted for all those motivations and thoughts.” Which is to say that when you see that response, all of these really challenging epistemological problems, problems that have no simple resolution, are being treated as concretely measurable errors. And guess what is usually used to make that measurement? A poll, or some method adjacent to it.
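To make concrete what that “accounting” usually amounts to, here is a minimal sketch, with entirely invented numbers, of the post-stratification weighting pollsters commonly have in mind: respondents are reweighted so the sample matches “known” population shares. Worth noticing is where those targets come from: the census, or another survey.

```python
# A minimal sketch (invented numbers throughout) of post-stratification
# weighting, the usual mechanical meaning of "we have accounted for that."

# Hypothetical raw sample: respondent counts by education level.
sample_counts = {"college": 600, "no_college": 400}

# Hypothetical "known" population shares -- themselves taken from the
# census or from another survey, i.e., from a poll-adjacent instrument.
population_shares = {"college": 0.38, "no_college": 0.62}

total = sum(sample_counts.values())

# Every respondent in a group gets the same weight: target share divided
# by the group's share of the sample.
weights = {
    group: population_shares[group] / (count / total)
    for group, count in sample_counts.items()
}

# Hypothetical observed support for some proposition, by group.
support = {"college": 0.55, "no_college": 0.40}

# Unweighted vs. weighted estimates of what "the public" thinks.
unweighted = sum(support[g] * sample_counts[g] for g in support) / total
weighted = sum(support[g] * sample_counts[g] * weights[g] for g in support) / total

print(f"unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")
# unweighted: 0.490, weighted: 0.457
```

The correction is straightforward arithmetic; the targets it corrects toward are not.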
If you move back into scholarly domains, you find more satisfying engagements with these issues: social scientists who use polling or related modes of survey research are quite aware of them and talk about them in ways that respect the difficulty of the underlying problems. There’s an excellent 2011 essay by Andrew Perrin and Katherine McFarland that not only reviews this terrain but brings it into dialogue with other ways of representing what “publics” think.1 But the pollster and the people who consume polling data to justify, authorize or structure decisions and advocacy that they’d be committed to anyway can’t afford to dwell on these kinds of issues, and will frequently dismiss them as “merely” scholarly. “We have measured and accounted for all these problems.”
Ganz goes right for the jugular in noting that if polling had even a small measure of the reliability imputed to it, we wouldn’t live in a moment where polling has been so consistently wrong about so many things (elections, sentiments, preferences) or where polling, when it is correct, is so easy to ignore. The most ardent consumers of polls-as-justifications casually wave off polls when the polls tell them something they don’t want to hear. They use the excuses that pollsters themselves use when the poll doesn’t match the outcome: there was a unique circumstance. There was an unmodeled variable. There was unforeseen noise in the data. There was a question unasked, just this one time. Actually, the poll was mostly right, this is what “margin of error” means. Except when the poll delivers a strong signal you don’t want to hear, you can also say: in this one case the public being measured is a public that doesn’t know what it’s talking about. And you get to say it without having to talk back at or confront that public, because this is an expert judgment, an annotation of the data.
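For the record, “margin of error” names something quite narrow: the textbook sampling-error formula for a simple random sample, which says nothing about any of the other failure modes above. A minimal sketch with typical numbers:

```python
import math

# Textbook 95% margin of error for a simple random sample. Real polls
# only approximate random sampling, and this covers sampling error only.

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for an observed proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll: 1,000 respondents, a candidate at 48%.
print(f"+/- {margin_of_error(0.48, 1000):.1%}")  # +/- 3.1%
```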
Looking at what publics “think” is an exercise where polling is going to be at best what Perrin and McFarland set out in their article: “We argue for a conceptualization of public opinion that relies upon polling techniques alongside other investigative modes but that understands public opinion as dynamic, reactive, and collective. Publics are shaped by techniques that represent them, including public opinion research”. That is, polls by themselves not only have limited usefulness in gauging what publics think, but they also conventionalize and groom “public thought” toward what polls imagine to be thinkable. Social scientists sometimes talk about the “Hawthorne effect”: a measurement that affects what is being measured in the act of producing the data. The natural sciences have their own versions of this problem, and maybe it is the fact that they can (sometimes) correct for it that gives social scientists unwarranted optimism about doing the same.
As a historian who also thinks a fair amount about ethnography and textual analysis as ways of knowing, it seems apparent to me that publics as a whole are in many ways fictions, or at least that they only become empirically real subjects who think together within very specific infrastructures and situations, and that the seams between what they think, what they say they think, and what they do that is supposedly shaped by thinking are at best jagged. At least sometimes, when pollsters imagine they are measuring publics who think together they are actually measuring people who do not think together at all, who are not operating within any kind of shared communicative, philosophical or practical frame. If I set out to poll what American farmers, African peasant cultivators and cattle-herders in Central Asia think about superhero movies from the first Iron Man film in 2008 to 2025 to measure whether there is “superhero fatigue” affecting the genre, it would be obvious to everyone that I was convening a public that did not think together (or perhaps at all) about the subject of the poll. But many polls assume publics that are almost as incoherent, frequently using the borders of a nation-state as if they describe a people who can safely be said to be “thinking together”. That was not even a safe assumption in the heyday of modern nation-making and at least with the United States, it plainly is not a safe assumption right now.
Since my own research practice is much more focused on individuals and on smaller “communities of meaning”, polling’s assumptions about opinion, belief, and preference make even less sense to me at that level. What pollsters want to know about are often opinions that I think exist in indeterminate, contingent and contradictory forms in most people, perhaps even in people who understand themselves to be highly opinionated in conventional frames or situations. What they think can change dramatically relative to who is asking, to their reading of why they’re being asked, to their understanding of the consequences of answering, to their acceptance of the framing of the question, and to the circumstantial priors at the moment that they have to produce an answer.
I get surveyed all the damn time by my own institution, by researchers at other institutions who are studying higher education, by researchers who aren’t studying higher education per se but for whom faculty are a useful survey population, and so on. A lot of the time, I don’t think the questions are well-formed relative to what they’re trying to know, but usually I’ll just sigh and answer the questions. Sometimes I say “fuck it” and stop a couple of questions in. Sometimes I know exactly what the researchers are really looking into, which is not what the survey pretends to be about, and I’m so annoyed by that bit of conventional social scientistic misdirection that I stop answering. Oftentimes, especially with local surveys, I take the time, when I’m allowed, to write long answers that explain why I think the questions are wrong or what the survey should be asking about, even though I’m 100% certain that I’m going to be ignored because I’m not saying what the surveying interests want to hear or are prepared to act on. They’re looking for confirmation of something they already think, or for an instrument that can be plugged into a pre-validated rhetorical frame, not for an actually-existing opinion that disrupts what they’re doing or challenges their conception of a “public” that they’re required to know about. Polling and surveying are only rarely looking to learn about something they don’t already know. But sometimes I’m also surprised by a question: I don’t know how I feel about it. I don’t know quite how to select the answer. I choose one way in a moment; I might choose another way later. Yes, I know this is yet another thing that pollsters believe they “account for”: you can see a simplistic version of how some surveys do it if you take a personality test like Myers-Briggs, where questions that are coded as being the same are asked in different ways throughout the test to see if the answers differ, which is taken as an indicator of unreliable responses. But many polled people aren’t stupid, and they intuit what’s up with this sort of thing: on a lot of surveys, you understand perfectly well what someone’s sniffing around for and can shape the version of “you” that’s answering.
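The consistency check just described is mechanically simple. A toy sketch, with invented items and an invented threshold: the same underlying question is asked twice, once reverse-coded, and pairs that disagree too much get flagged as unreliable.

```python
# Toy consistency check: paired items on a 1-5 agree/disagree scale,
# the second phrased in reverse. Items and threshold are invented.
PAIRS = [("I enjoy large parties", "I avoid large gatherings")]

def inconsistent_items(responses: dict[str, int], threshold: int = 2) -> list[str]:
    """Return items whose answer disagrees too much with its reversed twin."""
    flagged = []
    for item, reversed_item in PAIRS:
        recoded = 6 - responses[reversed_item]  # reverse-code onto the same scale
        if abs(responses[item] - recoded) >= threshold:
            flagged.append(item)
    return flagged

# Agreeing with both items is contradictory once recoded: 5 vs. 2.
print(inconsistent_items({"I enjoy large parties": 5, "I avoid large gatherings": 4}))
# ['I enjoy large parties']
```

And as the paragraph above suggests, a respondent who sees the pairing coming can simply answer consistently.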
Put me in a conversation with a human being I trust and who has standing in my life, and I guarantee you’ll hear opinions that are either more nuanced or more ambivalent or more contextualized than anything I could or would say in any form of “public”, whether that’s Bluesky, Substack, or to a pollster. If I really want to know both what people think and what they might do about what they think, if the answer really matters, I’ve got no choice. If the kind of knowledge I’m privileging is research, I need to do something like what Arlie Hochschild has done in her work on Appalachia and the Deep South. Alternatively, I need to work with my own intuitions across the most heterogeneous information flow I can handle; I need to read public culture and visible human subjectivity like a text. Like a human. Polling is in the in-between of all that. It’s not useless, but it’s among the most limited instruments I can think of for understanding the vast terrain it purports to measure. Like Ganz, I find it incredibly obvious that the heavy investment in polling in various practical domains is about suppressing a practical understanding of the epistemological messiness involved in gauging what people think and when thinking leads to acting, where that practical understanding would disable the ideological or professional priors that poll-consumers of various kinds take as necessary and non-negotiable.
Image credit: George Gallup, https://commons.wikimedia.org/wiki/File:George_Gallup.png
Perrin, Andrew J., and Katherine McFarland. “Social Theory and Public Opinion.” Annual Review of Sociology, vol. 37, 2011, pp. 87–107. JSTOR, http://www.jstor.org/stable/41288600. Accessed 26 Aug. 2025.



In thinking about how you might work at this understanding of "public" as compared to polling, one unstated element--is it huge?--is that polling is an elaborate industry, with economic contingencies galore. Your approach may be the best possible, but who would buy it if you were selling?
I think this quite overstates the case that Ganz was making. His critique really applies to issue polling. And that's not new-- we've long known that issue polling is prone to all kinds of errors, most of which Ganz identifies. It's not exceptionally difficult to massage poll questions to get the results that the poll takers want. And once people see policies in action, their preferences are certainly likely to change. But even issue polling has plenty of uses for those interested in gaining insight rather than pumping out propaganda. For instance, a year ago, ask people about the state of the economy and they would say that it was bad. But ask about their personal financial situation or the state of their local economies, and they would say that both were quite good. That says something very different from the same polling in, say, 2009, which would have uniformly told us that people thought the economy was bad, their personal finances were bad, and the local economy was bad. And that really matters for policymaking. That's one thing.
The other is that polling regarding people's intentions is quite good. Is it perfect? Of course not. There are sampling errors, people change their minds, people lie, etc. But it's generally directionally pretty good, and simple political polling has generally done quite well. Even its alleged failures are mostly a media creation. Take the archetypal "failure"-- the 2016 presidential election. People acted stunned that Trump won, and declared that polling was useless. Reality is... the national polling was pretty good, as it always is. Those who were stunned by the result were those who relied on vibes, not on an understanding of the polling. Nate Silver's aggregator is a good window into what the broad polling was saying. That aggregator put Clinton's chances of winning that election at 72% on election eve. Yes, 72% is a lot. But what that means is that no one should be stunned when something expected to happen 72% of the time doesn't happen. The remaining 28% is about five percentage points more likely than a given NBA player missing a free throw. And while NBA players make most of their free throws, they also miss a lot of free throws. The media acted as if that result was the equivalent of missing an open dunk. And high quality internal polling from the campaigns was apparently even closer than that.
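Spelled out, taking the NBA league free-throw average as roughly 77% (an approximation):

```python
# The free-throw comparison, spelled out. The 77% league average is an
# approximation; the 72% figure is the aggregator's election-eve forecast.
clinton_win_prob = 0.72
trump_win_prob = 1 - clinton_win_prob  # 0.28
ft_miss_prob = 1 - 0.77                # ~0.23

print(f"Trump win: {trump_win_prob:.0%}, missed free throw: {ft_miss_prob:.0%}")
print(f"difference: {trump_win_prob - ft_miss_prob:.0%}")  # about 5 points
```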
Now, the fact that elections are one-off events and basketball free throws are repeatable makes election polling much less useful-- but it's a huge stretch to say that the polling doesn't serve any purpose. Rather, what's broken isn't polling-- it's the rhetoric around it and how people think about it.