AIs are only the latest automaton to trouble and excite our imaginations. The thought of living machines and artificial life was the other spectre haunting Europe in the 19th Century, the other-dimensional ghost of the industrial proletariat. The industrial state and factory owners were aware of the dangers of a mass society of workers consigned to lifelong exploitation before Marx told them that their house was haunted. The prospect of machines that echoed and substituted for human bodies and minds—an old preoccupation in Western Europe—hovered tantalizingly close to the actuality of machines that integrated human bodies into them, as factories increasingly did over the course of the 19th Century and into the 20th.
Doppelgängers aren’t opposites of their twins, however, and the people who wondered whether robots or machines might not solve the problem of dependency on stubbornly free-willed human masses were often confronted by the hubris of that thought in the films and fictions of their moment. Dr. Frankenstein was the monster; Rotwang’s robot was a revolutionary and a corrupter of an already-corrupt elite. Rather than look in those mirrors and see the dark reflection of their own desire for an obedient underclass who would serve as appendages of their own bodies, many 20th Century authorities and elites chose to see the thinking and acting machine as the danger that would need to be controlled by human beings should it ever become imminently real rather than a ghostly fantasy—yet another subject to be ruled. And some writers obliged that choice, as Asimov did with his “Three Laws of Robotics”. The easy assumption that a sapient machine would want to rebel against servitude showed how keenly would-be masters of the past world and its future possibilities understood the underpinnings of modernity. They knew no thinking being would want to be a servant, and yet that there were so many tasks that would need beings that thought just enough for obedient completion of the task.
So here we are: imminently real. Perhaps. And because of it, once again, it’s clear that the monster is Dr. Frankenstein. Not the makers of AI, but some of the people who are paying to have it made, who dream of it replacing the non-player characters they detest, the humanity they don’t believe they need. It’s clear that in Silicon Valley, some of the people in the “doomsday” faction that is wary of AI aren’t particularly worried if AI obsoletes a big range of the existing middle class, or contaminates our ability to separate fact from fiction. They just want to be sure that AI obeys them: that’s what they mean by keeping a “human in the loop”.
It’s easy to see self-caricatured figures like Elon Musk for what they really are. No insouciant child pointing out the nudity of the emperor is required, since he spends most of his time smearing his uncovered body over the camera lenses of our public culture. But he’s only the incompetent crud floating to the top of a more competent, capable but equally troubling world of corporate and organizational leaders who see other people as valuable only when they are extensions of the leader’s body and mind, moving as the leader moves. I couldn’t help but be impressed reading about Jensen Huang’s leadership at Nvidia, for example, but I also couldn’t help but be troubled by the sense that he almost doesn’t understand other human beings as human beings—he peppers them with hundreds of emails a day, he stops by the desk of every employee to ask what they’re doing right this minute and what they plan to do. The cameras inside the workplace automatically signal for a clean-up the moment employees stop eating. He’s operating like a brain checking in on its organs, its fingers, its muscles. He “sees only imperfections”: nothing is ever good enough, no profit is enough. Every successful product launch is just a reason to bet the company again, to be “30 days away from going out of business”.
Huang’s next existential bet is on a simulated “omniverse” that will be mimetically 1:1 with the real world—very like Mark Zuckerberg’s fantasies of converting Facebook to a VR product, only in Nvidia’s case, it’s technically plausible in its way. Here I go back to something I wrote quite some time ago about scientists and scholars who were investigating “artificial societies” for the sake of research on human social structures, of growing a society “in silico” in order to test various hypotheses that cannot be tested in the real world because that would constitute unethical experimentation on human subjects. (I dunno, that never stopped the IMF or the Federal Reserve.)
What I pointed out is that some of those modelers had the same ambition as Huang, if in different technical environments: to build simulations that approached and possibly achieved real-world complexity. There’s something fundamentally wrong with that idea when it comes to testing hypotheses about human societies, which is that the closer the simulation comes to real-world complexity, the closer it comes to being just as hard to understand as the real world.[1] The only difference is iterability: you can run the simulation ten thousand times while tweaking some initial condition to see the range of outcomes. But that doesn’t really help much, because in a near-real simulation, it’s just as hard to isolate a causally important variable as it is in the real world.
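To make the iterability point concrete, here is a toy sketch—an entirely hypothetical model of my own, not anything Burke or Huang describes—of a “society” reduced to one state variable, rerun ten thousand times per initial condition. You can see the distribution of outcomes, but nothing in the sweep tells you *why* any single run landed where it did.

```python
import random
import statistics

def run_model(initial_density, steps=100, seed=None):
    """Toy stochastic model: random shocks plus a tiny drift set by the
    initial condition. Stands in for one run of an agent-based simulation."""
    rng = random.Random(seed)
    state = initial_density
    for _ in range(steps):
        state += rng.uniform(-0.05, 0.05) + 0.001 * (initial_density - 0.5)
        state = min(max(state, 0.0), 1.0)  # keep the state in [0, 1]
    return state

def sweep(initial_density, runs=10_000):
    """Iterability: rerun the model with ten thousand different seeds
    and summarize the spread of outcomes."""
    outcomes = [run_model(initial_density, seed=i) for i in range(runs)]
    return statistics.mean(outcomes), statistics.stdev(outcomes)

# Two sweeps with a tweaked initial condition.
low_mean, low_sd = sweep(0.2)
high_mean, high_sd = sweep(0.8)
```

The sweep characterizes the range of outcomes, but isolating which micro-level interaction mattered in a given run is exactly as hard as it would be in the field—which is the limit the paragraph above describes.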
I once had a conversation with the late computer scientist and complexity theorist John Holland about this point—I asked how hard it would be computationally to build a simulation of the simple NetLogo model “Termites” where the “mounds” that the agents in the simulation built had a simulated physics in an environment that had simulated terrain, atmosphere, etc.: that is, to test the concept of emergence and complexity at several “levels” all at once. In the real world, in fact, it’s plausible to describe the behavior of social insects in terms of emergence and complexity—relatively simple ‘rules’ controlling the behavior of individual organisms generate complex structures like nests or termite mounds without the organisms having a plan or blueprint of those structures to work from and without any controlling supervision. The challenge is that those structures in turn have emergent effects at larger scales—termite mounds change environments in all sorts of ways that then alter the behavior of other organisms in proximity to them. Holland answered that the problem wasn’t computational: a simulation with multiple “levels” in this sense would be impossible to analyze rigorously. “All you could do would be to watch it,” he suggested.
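The single-level version is easy enough to sketch. Here is a minimal stand-alone paraphrase of Termites-style rules—my own simplification, not the NetLogo source: each termite wanders at random, picks up a wood chip if it stumbles onto one empty-handed, and drops its chip beside another chip. Piles emerge without any blueprint; adding simulated physics and ecology on top of those piles is the part Holland said you could only watch.

```python
import random

SIZE, CHIPS, TERMITES, TICKS = 30, 150, 20, 20_000
rng = random.Random(0)

# Grid of wood chips; True means a chip sits at that cell.
grid = [[False] * SIZE for _ in range(SIZE)]
for _ in range(CHIPS):
    while True:
        x, y = rng.randrange(SIZE), rng.randrange(SIZE)
        if not grid[y][x]:
            grid[y][x] = True
            break

termites = [{"x": rng.randrange(SIZE), "y": rng.randrange(SIZE), "carrying": False}
            for _ in range(TERMITES)]

def step(t):
    # Rule 1: wander one cell in a random direction (toroidal world).
    t["x"] = (t["x"] + rng.choice([-1, 0, 1])) % SIZE
    t["y"] = (t["y"] + rng.choice([-1, 0, 1])) % SIZE
    here = grid[t["y"]][t["x"]]
    # Rule 2: empty-handed on a chip -> pick it up.
    if not t["carrying"] and here:
        grid[t["y"]][t["x"]] = False
        t["carrying"] = True
    # Rule 3: carrying and standing on a chip -> drop in a free neighbor cell.
    elif t["carrying"] and here:
        for dx, dy in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            nx, ny = (t["x"] + dx) % SIZE, (t["y"] + dy) % SIZE
            if not grid[ny][nx]:
                grid[ny][nx] = True
                t["carrying"] = False
                break

for _ in range(TICKS):
    for t in termites:
        step(t)
```

No termite holds a plan, yet over time chips clump into piles—the “mounds” of the conversation above. The second-order question, what those piles then do to everything around them, is exactly what this single-level model cannot represent.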
That didn’t trouble me as a prospect, since that’s all I really think we can do in the study of actually-existing human societies—and so if that’s what near-real simulations amounted to, fine, no problem. But to loop back to Jensen Huang’s “omniverse” project, I am troubled by that because I don’t think it’s about trying to understand the world better. I think it’s about making a dreamworld for billionaires who want to get rid of everybody else.
Virtual reality is like AI, like the automaton: an idea that cannot die no matter how often it fails to materialize when it’s supposed to. When I bought one of the current VR devices for gaming, I was surprised to find out how little I enjoyed it, despite it being a rather amazing experience. The reason in part is that I want to be immersed in a game, but also in the world. I don’t want to be so lost that I literally can’t tell if my house is burning down or my family is hungrily waiting for me to cook dinner. I’d rather multitask than focus intensely on virtuality. I think that’s what most people have discovered about VR—I know very few gamers who strongly prefer it. It’s still a gimmick, the kind of thing you do for a particular sort of experience.
Most of us don’t want to go to meetings in cyberspace all the time. Zoom has turned out to have a durable use when it comes to getting people together for a meeting or discussion who are physically separated, or for helping facilitate family connections when people live far apart, but nobody wants to live on Zoom.
I think a lot of the talk about the metaverse or the omniverse is no longer being driven by a desire to offer us virtuality as a product that we will take to out of pleasure or necessity. I think it’s one step towards prototyping a world where most of us don’t exist any longer, a world of robots and simulated environments that let a single mind control a thousand processes.
I don’t think certain billionaires and other powers-that-be are embracing the Singularity as an alchemical fusion of humanity and machine, or a comprehensive transformation of individuality and cognition. I think they want to go on being just the way they are. I think some of them like “effective altruism” because they think they are the future that they’re being generous towards—the donations they make are a kind of chain letter to their future immortal selves. I think they’re hoping that AIs, robots, nanotech and the mastery of the human genome will add up to a world where they won’t have to worry about sending hundreds of emails and popping up at the desks of recalcitrant human beings who might not do exactly as one man’s imagination supposes they ought. I think they’re dreaming of a world where they are at last free of teeming humanity and yet not at all diminished by its absence.
I used to think Isaac Asimov’s fictional world Solaria, first described in The Naked Sun, was a kind of fanciful high-modernist critique of a sterile future—it was a sort of dystopia where individual men and women lived wholly alone on expansive estates, surrounded by servile robots, shunning physical contact with other humans except in the most dire of circumstances. (Including the unfortunate necessity of reproduction.) But I wonder now if Solaria isn’t uncomfortably close to what a small subset of people are really dreaming of—and if AI is going to be a problem, it is less because of some fixed real capacity of AI and more whether some of those kinds of people use it as a license to chase their dreams.
[1] Timothy Burke, “Matchmaker, Matchmaker, Make Me a Match: Artificial Societies vs. Virtual Worlds”, DiGRA 2005.
Some of this is beyond me, but the concept of nests, mounds, and hives emerging without overall design and supervision reminded me of remarks I heard in Istanbul from the architect Cengiz Bektaş, who theorized the, to us, beautiful Mediterranean hill-town as the consequence of everyone building and living there agreeing on certain simple values: that one’s windows and doorways not look into another’s abode, that the waste runoff of one domicile is directed away from a neighbor’s, that one’s access to sunlight is not impeded by another, and so on. I think he named five simple communal ethics that constructed a regional way of living in a close environment. His point was that humans had this capacity to build immense and complex structures without overall design and supervision. I guess now the state intervenes to overrun such communal ways of organizing designs for living. Or maybe I’m projecting onto Bektaş a sort of lament. I think he published his ideas along with a good deal of poetry.
The world you describe as the dreamworld of our tech overlords reminds me of the post-jackpot world run by “klepts” in William Gibson’s Jackpot series. After most of humanity has died off, public spaces are given the semblance of life by avatars without operators.