Read this:
This press release may contain forward-looking statements including, without limitation, statements concerning plans, objectives, goals, projections, forecasts, strategies, future events or performance, and underlying assumptions and other statements, which are not statements of historical facts or guarantees or assurances of future performance. Forward-looking statements may be identified by the use of words like “expect,” “anticipate,” “believe,” “intend,” “forecast,” “outlook,” “will,” “may,” “might,” “see,” “tend,” “assume,” “potential,” “likely,” “target,” “plan,” “contemplate,” “seek,” “attempt,” “should,” “could,” “would” or expressions of similar meaning. Forward-looking statements reflect management’s evaluation of information currently available and are based on our current expectations and assumptions, our business, the economy and other future conditions. Because forward-looking statements relate to the future, they are subject to inherent uncertainties, risks and changes in circumstances that are difficult to predict. Factors that might cause future results to differ from those expressed by the forward-looking statements include, but are not limited to, our ability to successfully investigate and remediate chemical releases on or from our sites, make related capital expenditures, reimburse third-party cleanup costs or settle potential regulatory penalties or other claims; our ability to successfully execute our business and transformation strategy; increased costs or disruption in the supply of raw materials; increased energy costs; our ability to successfully generate cost savings and increase profitability through asset restructuring initiatives; compliance with laws and regulations impacting our business; conditions in the global economy and capital markets; and those discussed in our Annual Report on Form 10-K, under Part I, Item 1A, “Risk Factors,” and elsewhere in our other reports, filings and furnishings made with the U.S. Securities and Exchange Commission from time to time. As a result of these or other factors, our actual results, performance or achievements may differ materially from those contemplated by the forward-looking statements. Therefore, we caution you against relying on any of these forward-looking statements. The forward-looking statements included in this press release are made only as of the date hereof. We undertake no obligation to publicly update or revise any forward-looking statement as a result of new information, future events or otherwise, except as otherwise required by law.
That’s not written by an AI, not yet. It’s written by some chimeric fusion of a lawyer and a public relations executive, a human being (or several) working for a corporation called Trinseo. Formerly owned by Dow Chemical, then by Bain, now a publicly traded company owned mostly by the big hedge funds.
Why did they feel the need to caution readers that prediction is very difficult, especially if it’s about the future? It’s not a point that Trinseo makes elsewhere in its web presence: generally it’s very upbeat about its innovation! About its undying devotion to Responsible Care®! Its Thought Leadership!
They felt the need to caution us against relying on any of these forward-looking statements because a Trinseo plant dumped 8,000 gallons of a potentially toxic chemical into the Delaware River. Trinseo is willing to make one backward-looking statement not hedged by this kind of spectacular disclaimer: that an accident of this sort should not happen. There are other ways, if necessary, to walk that back, most of them involving waiting out the news cycle, or in a pinch, finding someone to throw under a bus.
“Language,” sings Laurie Anderson (quoting William Burroughs), “is a virus”. The uncertainty gripping us now is about just how much bigger the textual biomass is about to get as we unleash potentially autonomous large language model AIs on our text-making, text-consuming spaces. We may be thinking less about language infecting our minds and more about language as a metastatic cancer swelling to vast unreadability in every communication.
But in some fashion we’ve already done a fair job of infesting our written spaces, turning them into empty wastelands of prose designed to do everything but communicate or imagine or describe. In an engaging essay at the Sydney Review of Books, Andrew Dean looks at two versions of a mission statement from an Australian university, one written by human employees, the other written by an AI. Dean judges, and I would concur, that the AI-written version is more human, more communicative, more meaningful, more expressive.
We talk about the insidious reach of disinformation, as if we are here and now suffering from this scourge far more than ever before. But most of the people who talk of disinformation’s sudden ubiquity are forgetting just how much of it we have lived with for decades, centuries, for the entire history of writing and speaking. All advertising has had at least a touch—and often far more—of disinformation. Corporations and governments routinely misdirect and misrepresent. More potently and complicatedly, we enshrine a vast phylum of disinformation that we know by another name: fiction. Because often we can say things that are truer than the world itself by making things up.
We see only what is new in our information ecosystems—both in volume and type—because we are accustomed to reading through and around familiar systems of communicative untruth. Most of us know what Trinseo’s analytically fascinating bit of prose is saying, what its function is. It is the lawyer’s equivalent of festooning garlic all over a peasant’s hut in Transylvania: to ward off any attempt to take Trinseo’s assurances about the harmlessness of its spill at face value. “It’s harmless,” Trinseo assures the public of Philadelphia, so go ahead and drink your tap water. “But maybe not,” says the disclaimer about predicting the future. It’s Schrödinger’s Assurance: half-right and half-wrong until you open the box and find out whether anyone got sick. Or whether anyone found a lawyer willing to say that they got sick.
Literary scholars prefer to stick to fiction, to essays, to poetry, to branches of writing, true and otherwise, whose expressive character is more interesting, more supple, more full of possibilities, more richly appointed in its aesthetics. But there’s nothing more powerful than a lawyer’s prose. Not as persuasion, not aesthetically (though there are a few rare humdingers out there in judicial rulings). Words matter in law and yet they also don’t matter. They matter as fetishes. Trinseo can’t simply say, “The situation may change and we will change our guidance if it does”: it has to place every imaginable word that might be used elsewhere in its press releases into a kind of penalty box where its meaning is suspended if anyone tries to interpret it as a promise or a certainty.
Clarence Thomas can write a perverse majority opinion about gun rights that enshrines history in a completely novel way, kicking off a wild entrepreneurial scramble through archives, hunting for long-irrelevant and justly forgotten laws and codes which now ascend to binding law with potent impact on our contemporary lives, merely because those former rules are just old enough (and yet not too old to belong to some other history altogether).
Law is a domain where language matters most, but also where language habitually corrupts and demeans meaning and destroys any hope of using writing descriptively or dialogically. In institutions built around the resolution of disputes, real people become mute, unable to talk to one another or testify to the world they live in. In court or in discovery, we all become part of a ventriloquist act: we can only say what is allowable within the formalisms of law. A sonnet has fewer punitive constraints on what can be said than testimony in a court. But law is also a domain of brutal and violent interpretative will, where words are prowled around the way sharks circle a makeshift raft crowded with shipwreck survivors.
It is this kind of writing that is the most corrupting spirit of our neoliberal moment. The way law’s language rests on conceptions of liability, risk and managerial authority regularly works to empty other writing, other speech, of any hope of authentic connection to life as we live it. And life as we value it in ways other than economistic value: as experience and sensation, as morality and ethos, as mission and vision. As hope. As Dean notes in comparing AI and human-written institutional prose, “management-speak cannot survive reality because it is designed to shield its speakers from it. This is normally its strength.” Dean observes that the dictate that there is nothing outside the text works for forms of writing that never had any referent to the world as it is or might be—corporate writing, the writing of risk management, the writing of the law in its most powerful and yet perversely unreal forms. As he says, the surprise is that “poetry is real and testable, while university mission statements are not”.
No wonder Trinseo fears the moment when it has to actually speak to a real spill of real chemicals in real water that real people drink. It has to somehow push the writing that makes that address back into an unreal language about liability and risk, where the momentarily concrete dissolves into the stomach acids of a vast discursive system that swallows our lives every day and fouls our landscape with the excremental aftermaths.
It’s almost possible to imagine, as Dean notes, that generative AI might drag the unreality of those words towards something real simply because it has been trained on so much of what we write and say to one another. But the functions of that unreal writing are informed by a clear imperative: use writing to protect the dragons’ hoards, to ward off any obligation to the public good or to the everyday life of human beings. Every time an AI generates prose that shades accidentally towards plain speech or real life, there will be a master scribe to yank it back. People who are in charge of writing empty messages that say nothing at all may lose their jobs, but there will be work for those who ensure that nothing gets said and nothing is promised by man or machine.
Image credit: "Trinseo goes public" by badenton is licensed under CC BY-NC 2.0.
Timothy, I hear that generative AI can be prompted to 'figure' its reply in the "style of" or the persona of an authoritative, canonical writer. We can expect that the AI programs' ability to emulate mannerisms of many writerly styles will soon integrate complex interpretations of how any one canonical author's style serves & constrains [her] meanings and purposes.
-I agree with you that the jargons of legalese and institutional/bureaucratic procedure obscure key meanings defensively, both because their uncommon idioms are impossible for all but expert specialists to parse and because the institutions of societal hierarchy that propagate them trade on the 'sublime' authority of their words' Origins to cow "ignorant" laypeople. In that light, plain/common English is always the more democratic medium. Plain expressions are supposed to avoid complicating ornamentation and incidental confusions. Further, because it's assumed that plain English works with---not necessarily 'simple'---but conventionally understood and agreed on meanings, we shouldn't be tricked by what it says.
-But the old rhetorical studies have always been concerned that preference for simple authoritative statements is often moved by deference to familiar authoritative slogans and by aesthetic preference for the 'transparently plain', the 'minimal', the 'classically' unadorned expression over the effete, elaborate style. As both these 'preferences' (ethical and pathetically aesthetic) are recognized and cultivated in our contemporary technical-writing practices and, especially, I think, in principles of effective institutional communication and advertising, we can anticipate that generative AI will become adept at distracting and obfuscating meanings with plain-language tactics *more quickly* than it will take to adapt institutional subterfuges for other different specialist idioms. The people who use generative AI most will learn most quickly and precisely how they should communicate and think.
-Orwell was wrong about many things, but, truly, the pressure to bring all public meanings into clear, simple common language *could* quickly develop into kinds of censorship that condemn complicated explanations as inherently duplicitous and actively teach that we should only trust plain (recognizable, well publicized) truths. The idioms of expertise are double-unplus bad. The Big Demagogue is a Plain-talkin' Man, and his talk is always acceptably familiar, for those of us with the common sense to understand what everybody ought to know.
-Common language can obscure. We all experience that many key words of our language worlds are crucial, acceptable, but so complex as to be inexplicable unless, in teaching them, we simplify and say, "You'll understand better later." Contemporary rhetorical studies argue that over-used, conventional figures become inert "dead metaphors": no one needs to go to much trouble to explain that "the White House" is a metonym for the authority of the President and his cabinet and, with less force, the whole executive branch's administration. But many people will fiercely attest that "basic human rights" are "self-evident" and "universal." And they won't allow that "basic rights," that which is foundationally "self-evident," and that which everyone agrees is "universal" (except for the people who are profoundly, perversely wrong) are historical linguistic concatenations---extremely compressed and fragmentary legacy objects that were never, and cannot ever be, fully unpacked or made plain, except in those specialized, privileged conventions where using them as accepted absolutes is absolutely conceded.