I’m going to expand a bit here on a Note I just published this morning, which is in turn a reaction to this Substack essay by Henry Farrell, itself a reprise of an argument made by Farrell, Abe Newman, and Jeremy Wallace in Foreign Affairs in 2022.1
I really like Farrell’s point (and that of his co-authors; Wallace has an interesting 2022 book that expands on the point with a focus on contemporary China) as a rejoinder to Yuval Harari (who I heartily endorse rejoindering to at all times), which is that it is by no means clear that AI-mediated social media will improve the quality and quantity of accurate information about citizens in a way that strengthens authoritarian regimes. Farrell basically stresses that “garbage in, garbage out” remains an important principle: that what you can learn via social media, AI-mediated or otherwise, has a questionable relationship to what is actually true about national populations and their actions (real and potential). “Software eats the world, consuming the structures that produce more or less reliable human knowledge - and excreting [r]ubbish instead,” he points out.
But as I noted to Farrell at his Substack, I think there’s a problem with Harari on the other side of things, and it puts Farrell et al.’s argument in a different light. I’m not sure that any state apparatus, all the way back to the early modern period, has actually wanted accurate information to inform decision-making at the top of its hierarchies. For that matter, I’m not sure any large-scale institution in the modern era wants to know much of what it could know at the point at which it is supposedly making decisions.
That observation opens up further avenues of skepticism about how much our common understandings of how large-scale institutions actually decide to act resemble the reality, extending out to the question of whether they ever “decide” in some meaningful sense, with the cognitive-agentive capacity that implies. I’ve always been partial in some ways to the old argument that the late-19th Century version of the British Empire formed in a “fit of absent-mindedness”, for example. I don’t mean the version of that argument intended as an apologetic, but as a real empirical shorthand for the processes by which particular territorial claims, particular acts of violent conquest, particular administrative precedents, and so on were established: a process of emergence, through local agents and actors, rather than the command of a conscious, instrumentally programmatic central structure in the metropole. The seeming coherence of the outcome of all those emergent actions was a post-facto narrative crafted in the metropole, one which then led to dictates passing from the center to the periphery.
That point aside, let’s stick with what is ostensibly the top layer of any large organization’s hierarchy, where the final decisions are taken: the executive of a nation-state, the board and CEO of a corporation, the board and president of a university, the leadership of a large church denomination, and so on. Leaving aside Farrell et al.’s absolutely accurate point that information is not getting more accurate or more reliable in the present world information order, do the decision-making levels of large-scale organizations actually want more accurate information? And if they get it, will they act in ways that more closely match the purposes of the organization to the outcomes it seeks and the interests it favors?
I think the answer is very much no: in most cases, the decision-makers do not want all the information they’re entitled to have, and they may especially not want to know some of the accurate knowledge that might be available to them. On some level, the people who assume that decisions are made on the best possible information, and that it is only information asymmetries that produce bad outcomes, are people who believe organizations are “normally” rational, whereas I think people like me, who approach organizations as cultures, as sociological, don’t assume there is any particular optimal or rational character to the decisions they enact. (I also don’t assume there is some kind of fitness landscape that cuts down organizations that act non-optimally: there are all sorts of ways for organizations to exert power to protect themselves from the consequences of plainly non-optimal decisions.) I also think that even in cases where the decision-makers do want that information, there are others who do not want them to have it; indeed, the flow of information within large-scale organizations almost guarantees that there are, by intent and by accident, informational bottlenecks. I’ll leave that point for another day, though it’s a deep part of how large-scale organizations came to see information-collecting as important, as Jacob Soll demonstrated in his book The Information Master.2
Let’s focus on leadership, though. If you think about large-scale organizations in terms of their internal cultures and sociologies, it’s pretty plain that sometimes the people who occupy the top of a hierarchy and are vested with some form of sovereign authority over the organization use that authority capriciously, to satisfy personal interests, whims, and psychological dispositions. In those circumstances, authoritarians, executives who serve by appointment or contract, and even elected officials may make it quite clear that they only want to hear what they want to hear. Elon Musk plainly doesn’t want anybody disagreeing with his take on things inside the space of his decision-making. At Tesla and SpaceX, some suggest, his underlings have become expert at distracting him or performatively agreeing with his worst whims; at Twitter, he pretty much annihilated that stratum of the company. The common take now on why some intelligence suggested Saddam Hussein might have WMDs is that some of his underlings placated him by saying they’d built the weapons he’d erratically called for, whenever he asked to hear about them, but that nobody actually undertook the work. This is a pretty basic part of dictatorial power in particular: you tell the leader and his inner circle that whatever they want or believe is true, and you never tell them anything that contradicts it. It’s not just dictators and erratic billionaires who get told what they want to hear, though. Liberal democratic leadership and the leadership of civic organizations are just as prone at times to hearing what they want to hear, simply because of personal desire.
Perhaps more commonly, however, cannier kinds of leaders, whether authoritarian or democratic, are boxed into some kind of belief or ideology that they see as necessary to their command over the organization. They will make it clear that they need information that supports that view, that they will not welcome information that contradicts it, and that they will reward or punish people based on how well they accord with those wishes. The leadership may know that there are things it doesn’t want to know, but it will refuse to authenticate them as knowledge within its official deliberations or in any narratives about those deliberations.
There are a lot of famous examples tied to famous decisions, particularly those that turned out badly, where information in hand was not transferred upwards because the people minding the gateway to the decision-makers judged that it would not be welcome, precisely because it would disrupt or delay a decision that was going to have to go a particular way. In some of these cases, I’m not sure it even matters whether that information is laid out on the table: leadership in many situations involves cultivating deliberate informational blindness. You could have sat down with Paul Wolfowitz in 2002 and provided him with endless reams of unimpeachable evidence that his entire conception of the likely outcomes of invading Iraq was incorrect, and he would have been able to listen without ever being in danger of actually hearing it: he’d spent the previous decade inoculating himself against that information at the Project for the New American Century. That wasn’t so much dictatorial whimsy as a disciplined sense that the only good information was information that supported a necessary decision, one that had long since been committed to within his circle of political actors.
There are more subtle versions of this kind of informational censorship that I think are nearly ubiquitous in most organizations. Leadership hierarchies are deeply inoculated against knowing that they know something that could produce legal jeopardy at a later date, should it ever be verified that they knew it, though a fair amount of the time it’s impossible to avoid producing that evidence trail, as the legal proceedings around Theranos and Frank demonstrate. The defense for top leaders in such cases is often that they didn’t know what was known further down the hierarchy: they’d rather look like people too stupid to have been in charge than be guilty of knowing what they very likely did know.
Even more commonly, leadership adopts preferred forms of discourse around many decisions that make certain kinds of knowledges within the decision effectively impossible to represent, and that make it impossible to reverse-engineer how the knowledge and the decision interrelated. Penn State staff may well be demoralized by a current program of “compensation modernization”, but Penn State’s human resources team has all sorts of ways to alchemically transform any knowledge of that demoralization into its opposite, or to neutralize it by discounting its source. Even if “compensation modernization” is meant in the end as a project to reduce total compensation by eliminating staff positions and increasing the workload of the remaining staff accordingly (or just plain degrading the institution’s overall functionality), there are ways for that goal to remain unspoken in those terms. You can bring persuasively documented news to any president or provost of a university or college that faculty are feeling demoralized and marginalized, and they will have a host of ways of dodging that information without outright refusing to hear it, and especially without ever having to say that demoralization was the goal, mission accomplished.
A lot of large-scale organizations, including governments, approach high-level decision-making in the age of neoliberalism as a kind of cipher: you assemble a decision by putting together several cryptographically scrambled components held by different members of a leadership team, and you do it in such a way that no one is responsible for the assembled whole. Many decisions are like Monty Python’s Funniest Joke in the World: too dangerous to see in their entirety, with all their rationale and underlying knowledge put together as a whole.
I’m not arguing that this is something that has to be reformed about modern organizations, either, as if they ought to be beautifully designed conduits for raw truth pouring in from the world, alchemically turned into data and then knowledge, and then conveyed to a boardroom so that the most rational decision gets made. Whenever organizations do build systems of absolutely inescapable upward reporting, with harsh consequences at all levels for the withholding of information, they tend to go a little mad, because the world is full of contradictions and there often aren’t any unambiguously rational decisions to make on the basis of knowledge alone. The clear decisions rest more on values, ethics, and mission, and you don’t get clarity on those by collecting more data in and about the world. If you haven’t built a strong values-driven culture (for real, not just as p.r. fluffery), better data collection (AI-aided or not) and more transparent communication up and down the hierarchy about what is known isn’t going to help with decision-making.
Farrell, Henry, Abraham Newman, and Jeremy Wallace. “Spirals of Delusion: How AI Distorts Decision-Making and Makes Dictators More Dangerous.” Foreign Affairs 101.5 (2022): 168–181.
Soll, Jacob. The Information Master: Jean-Baptiste Colbert’s Secret State Intelligence System. Ann Arbor: University of Michigan Press, 2011.
A "microscopical" moment of perfunctory interrogation when empire concedes some element of predictable rule isn't right: "IMRAY had achieved the impossible. Without warning, for no conceivable motive, in his youth and at the threshold of his career he had chosen to disappear from the world—which is to say, the little Indian station where he lived. [. . .] He had stepped out of his place; he bad not appeared at his office at the proper time, and his dog-cart was not upon the public roads. For these reasons and because he was hampering in a microscopical degree the administration of the Indian Empire, the Indian Empire paused for one microscopical moment to make inquiry into the fate of Imray. Ponds were dragged, wells were plumbed, telegrams were dispatched down the lines of railways and to the nearest seaport town—1,200 miles away—but Imray was not at the end of the drag-ropes nor the telegrams. He was gone, and his place knew him no more. Then the work of the great Indian Empire swept forward, because it could not be delayed, and Imray, from being a man, became a mystery—such a thing as men talk over at their tables in the club for a month and then forget utterly. His guns, horses, and carts were sold to the highest bidder. His superior officer wrote an absurd letter to his mother, saying that Imray had unaccountably disappeared and his bungalow stood empty on the road". -Rudyard Kipling, “The Recrudescence of Imray,” Life’s Handicap (1891) <https://www.telelib.com/authors/K/KiplingRudyard/prose/LifesHandicap/imray.html >