In thinking about how you might work at this understanding of "public" as compared to polling, one unstated element--is it huge?--is that polling is an elaborate industry, with economic contingencies galore. Your approach may be the best possible, but who would buy it if you were selling?
Ganz points out that you'd think people who want to win elections--and perhaps other things, like sell products--would buy it. My colleague Daniel Laurison makes a much more modest argument about the "winning elections" part of it, which is that the ways publics are imagined and measured are limited by the sociologically compressed world of campaign professionalism, and therefore there's not much interest in exercises (polling or otherwise) that might expose that compression as a weakness to overcome. (I think he's more charitable than that: it's more like it's a thought that doesn't even occur to them.) But if I were going to start at another end of the problem, what keeps people who want to sell something from imagining a public bigger (or even differently constituted) than the public they imagine selling to? It's because we sell things--even candidates--from a whole set of priors that then weigh heavily on the imagination of a public that might greet what is offered in a friendly or welcoming way.
So the buying here is a double-leveled thing: we would have to start by asking why the people who want to sell something imagine their publics along highly conventionalized lines--in terms of nations, zip codes, wealth brackets, perceived self-interest, gender, race, and so on--and why they only want to know how those publics think in ways that don't unsettle the predicates of that imagination. Then we would have to ask whether it is possible to convince those people to buy the possibility that there are other publics who would buy what they're selling.
I'm sorry to say that this has been a great success of Trumpism. Trumpism hasn't mostly used polling in the conventional sense, and Trump himself has scorned most advice from campaign professionals, because--much as I am loath to concede it--he does not imagine his publics in bounded terms. His publics are whoever is there for his message, whether it's the people in the T-shirts and hats at his rallies or the people who would never say it but actually want his vision or the people who show up and press the lever for him, not entirely sure why but sure that they don't want to pull any other lever. He gets people curious to see what will happen, he gets millenarians who just want an end to business as usual, he gets racists, he gets fascists, he gets people with grievances so inchoate that they could scarcely name them. He is a kind of semantic overflow, unmediated by polls, undesigned by a "victory lab" that wants to engineer a Frankenstein monster by zip-code-precise issue polling.
To sell what I'm selling here, I'd have to convince the customers that it's better to overflow a different kind of semantic field, one that invokes hope, that promises pervasive social change in favor of the living conditions of ordinary citizens, that offers a life of meaning that builds up rather than tears down--and to say: poll none of this, at least not until you've said it. Say it not just as if you mean it but because you mean it, and then see what happens. I think--I hope--that's a pretty fair description of how Mamdani has accomplished what he's accomplished, and a pretty fair summary of why most of the Democratic Party is not following his lead but instead watching their poll numbers drop in tandem with Trump's. Here at least the polls may be telling us something: they do indeed have their uses.
I think this quite overstates the case Ganz was making. His critique really applies to issue polling. And that's not new-- we've long known that issue polling is prone to all kinds of errors, most of which Ganz identifies. It's not exceptionally difficult to massage poll questions to get the results the poll takers want. And once people see policies in action, their preferences are certainly likely to change. But even issue polling has plenty of uses for those interested in gaining insight rather than pumping out propaganda. For instance, ask people about the state of the economy a year ago, and they would say it was bad. But ask about their personal financial situation or the state of their local economies, and they would say that both were quite good. That says something very different from the same polling in, say, 2009, which would have uniformly told us that people thought the economy was bad, their personal finances were bad, and the local economy was bad. And that really matters for policymaking. That's one thing.
The other is that polling regarding people's intentions is quite good. Is it perfect? Of course not. There are sampling errors, people change their minds, people lie, etc. But it's generally directionally pretty good, and simple political polling has generally done quite well. Even its alleged failures are mostly a media creation. Take the archetypal "failure"-- the 2016 presidential election. People acted stunned that Trump won and declared that polling was useless. The reality is that the national polling was pretty good, as it always is. Those who were stunned by the result were those who relied on vibes, not an understanding of the polling. Nate Silver's aggregator is a good window into what the broad polling was saying. That aggregator put Clinton's chances of winning at 72% on election eve. Yes, 72% is a lot. But no one should be stunned when something expected to happen 72% of the time doesn't happen. The upset was only about five percentage points more likely than a given NBA player missing a free throw. And while NBA players make most of their free throws, they also miss a lot of free throws. The media acted as if the result was the equivalent of missing an open dunk. And high-quality internal polling from the campaigns was apparently even closer than that.
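(To make the free-throw comparison concrete, here's a back-of-the-envelope sketch in Python; the 77% league-average free-throw rate is an illustrative assumption on my part, not a sourced statistic.)

```python
# Back-of-the-envelope check on the free-throw comparison.
# The 77% league-average free-throw rate is an assumed figure for illustration.

clinton_win_prob = 0.72               # Silver's election-eve aggregate, per the text
upset_prob = 1 - clinton_win_prob     # chance the favorite loses: 0.28

ft_make_prob = 0.77                   # assumed league-average free-throw rate
ft_miss_prob = 1 - ft_make_prob       # chance of a miss: 0.23

print(f"Favorite losing: {upset_prob:.0%}")                  # 28%
print(f"Free-throw miss: {ft_miss_prob:.0%}")                # 23%
print(f"Difference:      {upset_prob - ft_miss_prob:.0%}")   # ~5 points
```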
Now, the fact that elections are one-off events and basketball free throws are repeatable makes election polling much less useful-- but it's a huge stretch to say that the polling doesn't serve any purpose. Rather, what's broken isn't polling-- it's the rhetoric around it and how people think about it.
You get here at a whole different problem with polling around people's intentions, actually, which is that when it comes to elections, the dataset is not only strikingly small but is subject to significant changes in the basic conditions governing how those intentions express into action. Take Presidential elections: there have been (generously) only 20 or so since something like modern polling methods took shape, and across that period the environment in which the intention to vote for a candidate meets the opportunity to vote has shifted notably. Presidential primaries only began to have weight on outcomes in 1960, the Voting Rights Act only began to allow many disenfranchised voters to participate after 1965, significant private money only began to pour into campaigns in the 1980s, and so on. If we talk about House races, where there's a much larger dataset, in many parts of the U.S. the outcomes were highly manipulated in various ways until maybe the 1970s and then affected again a great deal by the new wave of gerrymandering engineered by Karl Rove in the early 2000s, etc. The question of whether what publics knew about House candidates translated into action in elections is throughout that period beset by questions of whether the thinking of voters was in fact what determined election outcomes at all. (One reason there was a short wave of work by political scientists just before 2016 arguing that the two political parties, rather than voters as such, were still what really shaped electoral outcomes.)
Broaden out from elections--which are, after all, an incredibly narrow and specific class of events that match a very particular kind of action with a very particular kind of choice within a very established 'script' for civic thought--and I don't think polls and surveys are established at all as good predictors of likely near-term future actions. Even in market research, there's a long record of researchers being surprised by consumer reactions in situations they would regard as "normally" predictable. That's part of what happens in applied social science with polls and surveys--every unpredicted result is said not to challenge the validity of these methods of prediction but to identify an abnormal circumstance. There's a body of research I do trust that looks systematically at expert political prediction (and some other nodes of predictive work), and what it suggests is that at best the significant majority of forecasters are only marginally distinguishable from a coin flip (and are sometimes worse, which is actually interesting, because it means there's something meaningful in their error), and that this has at least something to do with an overreliance on narrow or particular instruments (like polls) for making predictions. The few "superforecasters" who do better tend to use a really wide variety of methods (including just reading the vibes), have heterogeneous information sources, and aren't stuck in relationships where they've got to tell a particular client what the client wants to hear.
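(For what "marginally distinguishable from a coin flip" looks like in practice: forecasting research of the kind alluded to here typically scores predictions with the Brier score and compares them against a flat 0.5 baseline. A minimal sketch, with forecasts and outcomes invented purely for illustration:)

```python
# Minimal sketch: scoring a forecaster against a coin-flip baseline with the
# Brier score (mean squared error of probabilistic forecasts; lower is better).
# The forecasts and outcomes below are invented purely for illustration.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes  = [1, 0, 1, 1, 0, 0, 1, 0]                  # hypothetical event results
forecasts = [0.8, 0.3, 0.6, 0.9, 0.4, 0.2, 0.7, 0.5]  # hypothetical expert forecasts
coin_flip = [0.5] * len(outcomes)                     # the baseline

print(f"Expert:    {brier_score(forecasts, outcomes):.3f}")  # 0.105 here
print(f"Coin flip: {brier_score(coin_flip, outcomes):.3f}")  # always 0.250
# A forecaster "only marginally distinguishable from a coin flip" scores near 0.250.
```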
I'm not sure the change in who is allowed to vote would matter a huge amount as to the reliability of polling data. A consistent relevant variable, and perhaps the most significant one that pollsters grapple with, is what the composition of the electorate will end up being. They may be slow to capture shifts, but they do generally try (at least the reputable ones do). And that still almost certainly gives you a better sense than vibes. Of course, in a close race, polls may tell you very little. Nate Silver's last polls-plus forecast literally came out to a coin flip (50.015-49.985). His last aggregator had Harris +1 nationally. It ended Trump +1.5. A poll that tells you something is a coin flip isn't satisfying, but that doesn't make it inaccurate.
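(For scale on that 2.5-point miss: the sampling margin of error alone for a typical national poll is on the order of ±3 points on a single candidate's share, and roughly double that on the margin between candidates. A quick sketch, assuming a 1,000-person sample since none is specified above:)

```python
# Sampling margin of error for a simple random sample at 95% confidence.
# n = 1000 is an assumed, typical national-poll sample size.
import math

n = 1000
p = 0.5  # worst case, i.e., a race near a coin flip
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"95% margin of error on one candidate's share: +/-{moe:.1%}")  # ~3.1%
# The margin *between* two candidates carries roughly twice that uncertainty,
# so a 2.5-point miss on the margin sits within sampling error alone--before
# even counting turnout models, nonresponse, and other sources of survey error.
```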
And while unpredicted results CAN be dismissed, that doesn't mean they typically are, which is why aggregates are useful. To get back to the NBA free-throw analogy (because it's lopsided, unlike a coin flip), a single missed free throw or a pair of missed free throws wouldn't cause any consternation. But if you watch a game and players miss 10 free throws in a row, that's probably a good reason to look under the hood and wonder if something is amiss. Now, if it is the case that pollsters are no better than a coin flip, then I'd be curious to see those results, because that seems obviously wrong. You don't need polls to predict 80% of, say, House of Representatives races-- they're mostly safe seats. But to the extent a race is a coin flip, the valuable insight is that the race is close, not that polling can distinguish which side of a razor's edge it will land on. Going back again to the 2016 election, the conventional wisdom was that the race WASN'T close. But the polling said that it somewhat was.
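(The streak intuition is easy to quantify. Under the same assumed 77% make rate as above, and treating attempts as independent:)

```python
# Why ten straight misses is a reason to look under the hood: under an
# assumed 77% make rate with independent attempts (both assumptions are
# illustrative), a run of ten misses is astronomically unlikely.
miss_prob = 1 - 0.77
print(f"P(10 straight misses): {miss_prob ** 10:.1e}")  # ~4.1e-07
# When something that improbable happens, the sensible inference is usually
# that the model (independent 77% shooters) is wrong--the analogue of an
# aggregate of polls all missing badly in the same direction.
```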
At which point the relevant question becomes whether the pundit who was zagging and declaring that Trump was definitely going to win in 2016 was actually correct, or whether they were that archetypal maverick who declares victory when something unexpected happens (but is curiously quiet when they get it wrong). And I'm strongly inclined to suspect the latter. You see that in all kinds of places, notably among investors. There was a lot of backslapping in 2008 among a few contrarian hedge fund managers who made a lot of money betting against the housing market. What was almost never talked about was the fact that the vast majority of those people had spent years predicting all kinds of other booms and busts that had failed to come to pass. John Paulson was famously lauded for making a few billion dollars off of housing... but few talk about the fact that he proceeded to spend the following, oh, maybe better part of a decade finding new and exciting ways to lose money while the broader market boomed. And that's pretty reliably what happens to contrarians of all stripes-- there is the occasional one who figures it out and is systematically insightful in ways that others aren't. For every one of those, there are a thousand who are just dupes. In polling, as in anything else, I think you'll find pretty strongly that professionals have a significantly better track record than contrarians.
Jeane Dixon, the prominent psychic of the 1960s and '70s, would often laud her accurate guesses and, of course, rarely discuss her inaccurate ones, except the ones that were close enough that she could claim to have been partially right. What wasn't part of the narrative at all was: 1) how many predictions did she make? and 2) was there a probabilistic difference between the ones she got right and the ones she got wrong? If you do surprisingly well in predicting events that most people regard as highly improbable, that might be an indicator that you've got a method better than intuition, whether it's psychic powers or social science research. So, say, predicting that a Kennedy would get into legal and/or sexual trouble (as Dixon did several times) did not exactly take psychic powers, any more than saying in April 2025 that Jared Leto might be accused of sexual misconduct in the near future would have.
I'm just not convinced that even in the exceedingly narrow case of elections, polling by itself is of much predictive or analytic use, and it certainly doesn't help people running for office understand why people are voting the way they are, which is the major thing you need to know. That's partly because voting isn't entirely or maybe even mostly determined by what people think--it's constrained very significantly by the legal and institutional structure of voting, by access to voting opportunities, by the sociological priors around voting in a given place and time, etc.--some of which polling captures in a snapshot and some of which it doesn't. And even if polling has this narrow use, as a method for understanding how the public thinks more broadly--which is what Perrin and McFarland are evaluating--it's a limited tool that many non-scholars use far more assertively than they should.
To the first point, frankly, I think that prognostication is wildly overrated, period. I think it was Paul Krugman who noted that if you're rarely wrong, you're not taking any risks. What's valuable is understanding the logic and methodology behind the prediction. It's much more valuable to be wrong for sound reasons than to be right for arbitrary reasons.
As for polling-- I think there are at least three feasible uses, in descending order of usefulness. First, there is the way campaigns use it: to get a snapshot of where they think they are, which messages are resonating with voters and which aren't, etc. That should inform which voters they target, in which areas, and with which messages. The purpose there is to check priors, not confirm them. It's quite useful. Second, there is the way the media uses it, which is ostensibly to inform the public about the state of a race at a given moment. That is prone to horse-race narratives and is broadly rather than granularly useful; a poll can generally tell you if a race is close, but it can't tell you with any reliability who will win that close race. At best, it can give you a fairly good probability, which, in a one-off event like an election, isn't really of much use.
Finally, there's activist issue polling. That is used to try to convince politicians (well, really themselves) that the public agrees with them on a broad swath of issues. That's generally worse than useless. It's prone to change, easy to game, and often internally contradictory. Issue polling will reliably tell you that voters want a more generous welfare state that covers everything they could feasibly want (vote Bernie!), but that they want to pay less in taxes (vote Romney!). They want "Medicare For All," however you define that (vote Bernie!), but they want to keep their private health insurance (vote Romney!). Then activists cherry-pick the portion they like and yell at each other for a while.
But then I suppose there's a fourth category, mostly used by academics to tease apart people's perceptions, which I've alluded to before. It says something very different if people think the economy is bad but that their personal financial situation and the local economy are good, versus if people think the economy is bad and their personal financial situation and local economy are bad too. The latter points to a bad economy. The former points to a disconnect between perception and reality, and usually to disinformation. Which does get us to pretty significant insights.
In terms of your three feasible uses, I think it's not hard to demonstrate that some campaigns--the Democrats especially--don't use polls to check priors; they use them to confirm them (or even as blunt instruments to force others to accept the preferred priors). But it's the second use that is the real issue, and it's not just the media acting by themselves; it's a collaboration between media and party elites, who use polls to try to shape who is seen as a legitimate contender early on and therefore who draws the money that will let them continue. Trump should have upset that applecart for good in both parties; Mamdani right this moment should be helping Democrats get beyond it. Because it's that rather obvious collaboration, where polls are used as persuasive instruments designed to keep a race under party control, that has made the public so pervasively (and frankly accurately) mistrust polling altogether, which in turn is producing a strong Hawthorne effect (i.e., people who are polled are consciously thinking about the instrumental uses of the poll).
Tim, I've had doubts about the social "sciences" and their numbers since I spent two years in a sociology class in England as a teen, exchanging amusing notes with a future anthropologist of eminence, and then got a rare "A" at O Level when I answered a question about polling by desperately writing about Dewey and Truman, which I'd read about somewhere in my dabbles in American history. Later, I ended up doing a PhD at a US university known for its quantitative history (I didn't know that when I applied from a lowly state U), under the mentorship of a quantifier who reassured me that "quantification is dead." So yay to this post. Let the haters dismiss me as a numpty armed only with anecdotes; I no longer care.