There are a lot of things weighing down people in academia right now. The ongoing casualization of academic jobs, the accelerating rot of corporate-style managerialism, the ongoing collapse of faculty governance, the fiscal uncertainties of higher education in the coming decade. Malicious partisan targeting, the entanglement of education in faux-meritocratic inequality, the pandemic’s intensification of detachment and disaffection among students, faculty and administrators.
So what I’m going to complain about now is a smaller matter and should be understood as such. But like everything that was just lightly wearying in 2010, this issue has graduated to being more like fingernails on a chalkboard. In general, one of the small but grating harms that the neoliberalization of academic life has produced is a kind of careless administrative encouragement of “entrepreneurial” activity by faculty, the chasing of “innovation” through initiatives, programs, centers, curricular sequences, workshops, and so on, generally designed to make an institution (and the faculty involved) seem responsive to outside trends. Frequently this is also about trying to attract donor money, about marketing to some desired group of students, and sometimes about pleasing a restive trustee or two who have a financial interest in the area of “innovation” in question.
I’ve done a fair amount of this kind of thing, so this is as much a self-criticism as anything else. Even more, I’ve shown up to encourage other people at many institutions who are touting such projects. If I’m weary, I suppose it’s because in retrospect, at least some of this sort of work now seems to me to have been chasing phantom goalposts. And I should be clear: there are also projects that faculty I know have pushed forward with intense commitment and energy that have been astonishing successes that now feel as if we couldn’t possibly have lived without them. The VCAM program and building at Haverford College, for example, is a wonderful thing. The Summer Scholars Program at Swarthmore College took tremendous effort to establish and sustain, but I think it’s a huge success. There are terrific centers and new departments and transformative initiatives all over American academia that I deeply love and admire, all of them the result of dedicated work by faculty.
It’s just the churn of innovation-for-innovation’s-sake that wears me out, the projects that are trying to look like they’re getting faculty and students ready for Tomorrow’s Culture or The Work of the Future. And of entrepreneurial initiatives of that kind, none are more frustrating than those attempting to get students to be digitally literate, informationally literate, and prepared to operate in a virtual reality future.
This is almost always exquisitely well-meaning stuff wherever and however it pops up. I don’t fault people—faculty, IT experts, librarians—for showing concern and interest in these issues. There are two problems here, however.
First is that people in academia taking this kind of interest have been generally bad at predicting the specific future evolution of online culture and digital technology and at forecasting the risks and consequences of perceived gaps in student preparedness in relationship to that evolution.
Virtual reality, for example, has been breathlessly forecast as a near-term inevitability all the way back to the late 1990s. Partly as a result of my interest in virtual worlds, which largely turned out to be a fizzle rather than a sizzle in part due to the destructive impact of World of Warcraft’s design dead ends, I found myself at one moment around 2008 fielding queries from librarians and faculty wanting to know how to migrate their operations into the platform Second Life and how to help students prepare to study and work successfully in that platform and its successors. Second Life still exists, but it’s only for a small handful of die-hard devotees (and people who find its particular form of sexing to their liking). The idea that everybody needed to learn how to operate in that environment (and use the rather clumsy content creation tools) was at that time purely insane and it’s a good thing that almost everybody ignored the entrepreneurial call on campuses to hurry up and get ready for that future.
Then we had the Wii, and breathless coverage by tech journalists convinced that getting rid of the complex control schemes of other gaming consoles would make virtual reality possible. Then we had the Oculus and the Vive and another wave of predictions that we’d all have to learn how to live in virtual reality.
Now we have Mark Zuckerberg determined to force us all into his vision of virtual reality, which is more or less just a repackaging of Second Life combined with a headset interface.
We do not have to get ready for any of this. It is absolutely not inevitable. Virtual reality technologies and platforms keep falling short of breathless predictions of their seamlessness and thus their inevitable displacement of all other user interfaces because they intrinsically can never be seamless, any more than language or reading or making art or performance are. Our bodies are well-adapted to speaking, for example, but we still spend lifetimes learning to work with language. There’s nothing natural about it. We still spend lifetimes learning to read, and no interface (books, computer screens, etc.) makes that seamless.

Virtual reality promises seamlessness on the strength of its promise to mimic physical environments, which we in fact do navigate with some degree of natural intuition. We are encouraged in our learning of walking, running, picking up objects, throwing things, gesturing, making faces, but a lot of that takes place without conscious thought. Virtual reality environments, however, no matter how sophisticated the representational technology involved, will still involve learning interfaces and mediating control schemes in order to actually use the distinctive affordances of virtual reality.

Suppose, for example, that I’m having a Futuristic Office Meeting in Future Virtual Reality and I say “I am now going to take you all on a walk through the building designs for our new headquarters as drawn up by our architects”. The designs don’t exist in the physical world yet, so how do I walk from wherever I actually am and take everyone else “with me”, even if many of them are also physically somewhere else? I’ll need a gesture, a control wand, a code, a portal screen. Virtual reality has to involve enhancement to be worth anything, and enhancement will always take mediating controls. I am not a Jedi Knight, but I might want in Future Virtual Reality to have my body movements in the real world amplify into Jedi-worthy body movements in the virtual world.
The two are not the same—the interface will have to do something to take my real world movement and transform it into unreal movement and represent that transformation to me visually. (That’s often why they create motion sickness.) That translation will often take a mediating control scheme as well. I can’t Force Choke someone in real life, so the virtual interface will need to know the difference between me reaching for a coffee cup and me meaning to Force Choke the person across the table from me.
All of that means this is not inevitable. We may never see VR as more than a novelty or a sideline. It is not seamless or natural, and it will not automagically displace physical presence, communicating via text, or any other representational medium. Arguing that right now it needs to be one of the literacies that young adults in college must acquire is just another of the many missed marks that entrepreneurial futurists in higher education have in their ledgers.
In the case of this particular call, the faculty involved have their eyes more on how students should learn to cultivate their identities online with an eye to the future—how to pick avatars, how to be aware of the future consequences of particular kinds of online engagement. They cite the University of Mary Washington’s genuinely useful idea of an e-portfolio that students develop during their studies which remains online and serves as a sort of “business card” that shapes their identities.
That’s a worthy proposition in its way, but as with many well-meaning things from educators at all levels, it also is two steps removed from the complexity of real life. For one, it’s not the technology or the interface or the tools that need learning. It’s not even what Ian Bogost has called “procedural literacy”. It’s mostly about the changing nature of tech capitalism, which takes avatars and identities away from us, inflicts other identities onto us, constricts what we can use to represent our interests and aspirations.
Think about it this way. I have always shaped my online identity with a view that I was an intellectual in a new kind of public sphere, which both called for me to cultivate the best possibilities of that public sphere through my own behavior and which meant I had a reputation at stake. And yet I have moved from platform to platform over a long online life where almost everything I said is now completely inaccessible. You can’t find what I wrote in the Science Fiction Roundtable on GEnie, you likely can’t find what I said on alt.society.generation-x, you can’t look at my posts in Howard Rheingold’s Brainstorms. You can’t find my early restaurant reviews that I posted as static webpages not long after starting at Swarthmore. You can’t find guest posts I wrote for a number of early blogs, you can’t find comments I made in a number of blogs, you can’t easily find my work at group blogs like Terra Nova and Cliopatria.
You can’t find my visual avatars easily except for some of my own photos and maybe my character from Star Wars: Galaxies, who pops up in one commenting service still.
Yes, sure, the way I’ve managed my online voice and persona has had consequences, mostly good. I think there are other long-online people who think fairly well of me because of my approach. They may not remember a specific conversation or instance, but there’s an accumulation of reputation over time. There are people who liked something I said and retained goodwill towards me even long after the words became inaccessible or at least punishingly difficult to find. There are people who think I’m a wanker or pretentious or politically wrong-headed based on some past engagement, I’m sure.
So you’d think I’d be in favor of teaching people how to manage their online presence. But I’m really not because there’s nothing particularly online about how I thought about it back then or now. What I used was an older conception of public. What I thought about was the question of ethics. What I mulled over was reputation. What I practiced was dialogue, discourse, conversation, writing. The rest I learned only by doing, as everybody else did, because that’s the only way to learn it in an environment where all the tools, interfaces, and platforms are subject to sudden transformations controlled by technologists and big tech corporations. Any effort spent on learning a particular digital tool in advance of using it as if it is the Key to the Future is a waste.
But much of what we write and say online (and offline, for that matter) will be forgotten. We worry too much and too broadly about the risks. It doesn’t go on your permanent record.
When we treat the virtual or the digital or the online as different and innovative and novel, we miss most of the point—but that is often what the entrepreneurial mode of institutional life requires, which is dressing something up as novel, futuristic, as-yet-undone, as requiring new resources and attention.
That opens up my second major problem with these kinds of digital literacy initiatives.
Let’s say that, perversely, my recounting of my own acquired literacy with digital culture, information technology, and virtual reality convinces you of the opposite: yes, that is exactly it, we want our students to learn all of that in advance rather than spend forty years learning it through experience. Join our initiative!
Faculty in higher education often mournfully tell students “you should be careful in social media, one wrong move and your reputation can be permanently ruined”.
I was overly sanguine earlier. It’s true they should be careful on social media. Even I’m scared of social media these days. One of the few things that is a material fact about digital culture is that things can happen very quickly and that they can happen at scales that feel unprecedented and unbearable.
Faculty and librarians in higher education often bitterly complain that students are informationally illiterate: that they choose the first hit in the worst possible search tool when gathering information, that they don’t know how to read for reliability, that they don’t know how to work on a topic methodically to create knowledge in depth. That social media is training them to be cruel and superficial.
It’s true that they should learn to be better users of information and better at producing knowledge and that they have to do that through digital tools.
The problem is that, for the most part, the faculty I know across higher education are in no position to teach these kinds of literacies with digital culture, information technology, social media, avatar management, or anything else you care to name.
Subtract first the substantial subset of faculty who are highly accomplished experts in their own fields and skilled teachers but who have little direct experience with any social media platform of any kind and who rarely do discovery-oriented searches in any information space because they already know so much in their fields of interest that they have other ways to leapfrog to any new topic of connected interest. I am not knocking either of these absences: staying off of social media entirely has a great deal to recommend it and scholarly ways of knowing are one of the great continuing virtues of scholarship in a world where some people think a single Google search is good enough and reading a Wikipedia entry is asking too much. But no one in that subset of faculty has any business trying to teach about specific affordances and dangers of online environments and digital tools. I have been in numerous meetings at conferences, associations, and committees where faculty (not always older, either) who are in this subset nevertheless anxiously call for the teaching of these specific literacies. They don’t really mean it, I think: this is instead a kind of indirect way of complaining about the state of the world and attributing the causality of that state to a failure to raise the young correctly. It is not “I will take this on”, it is “somebody ought to do something”. (Which is what creates the market opportunity for institutional entrepreneurs.) If faculty in this subset do think they’re teaching about the specifics of digital literacy, they generally aren’t: what they are more likely doing is offering vague and out-of-touch warnings that are likely as risible as older faculty complaining about long hair on men or about campuses going co-ed in the 1970s, or similar diatribes against television as a uniquely corrosive medium.
What’s worse in this regard is another subset of faculty, much smaller, who have some experience with social media and informational searching and who frequently demonstrate what not to do with both. I could not teach choreography or how to play a musical instrument, so if you demanded that all faculty do that, I would be like faculty in the first subset, trying to teach something I know little to nothing about. I could in the most aching and agonizing of semi-incompetent ways teach students how to edit film in Premiere Pro or how to work with Photoshop. I’ve stumbled my way in an entirely self-taught fashion through both, and I know that there are things that I do that are wrong or are based on an incorrect understanding of what is going on. If we decided that these specific kinds of tool-based visual literacies needed to be taught by everyone, I would likely do more damage than the person who knows absolutely nothing. I run into faculty on social media with some frequency who are absolute suckers for fake information (usually not the really icky kind, mind you) where the interface is doing all the work for them. I talk to faculty who have strong opinions about Wikipedia who don’t know anything about it (I had to show one professor the Talk tab, which stunned them). I talk to faculty who have deeply folkloric or weird ideas about how social media or Big Tech operates.
The number of faculty in American higher education who are in a position to talk from an experienced or knowledgeable position about informational literacy in digital search environments—including what’s going on under the hood—is pretty small. The number of faculty who have sufficient experience with social media or virtual worlds or digital culture to teach students some specific literacies and guidelines is smaller still. Teaching the former is important, but it takes being willing to teach both what is proper and what actually works, comparing various ideal practices with various real ones in a way that keeps the latter available. Teaching the latter? I’m not actually sure that it matters so much. There are certain bad Facebook or Instagram or Snapchat posts or online stories that may follow an 18-year-old around like a shadow for years, but I’m not even sure that some of those can be avoided even with maximum prudence. Sometimes they’re something that just happens to a person who is in the wrong place at the wrong time and it ends up getting lots of attention. Sometimes the prudence necessary is not digital but social, not technological but ethical. Sometimes what most experienced faculty would discourage turns out to be exactly what gives a young person a big break or catalyzes a career. And a lot of the time, people just forget about what seemed like an overwhelmingly consequential misstep. What seems like a big mistake, as the ratio piles up and nasty emails fill the inbox, can be something nearly impossible to recall even a month later.
Some of what would be taught as a mistake by even experienced faculty should instead be taught as a discussion, where nothing is taken for granted. That’s another problem with teaching anything as literacy: it encodes as skill what should be taught as a debatable set of choices. Who is behaving badly in the current Twitter conflict between Washington Post writers? I don’t know. Maybe nobody, maybe only one person. Maybe the editor is the one at fault.
That’s what my own experience and knowledge would suggest to me: this is not a skill, it’s not literacy, it’s not something technical. It’s just one domain of cultural and social life with no fixed answers. Which means the best thing for it is to live it out and apply a broader range of insights, aspirations and self-understandings to it as you go. Which is something I think you cannot build a special initiative to do in isolation within higher education. A student is learning as much about virtual reality or social media in a class on moral philosophy that never says anything about digital technology as they are in a class specifically devoted to the digital. They are learning as much about social media in reading Addison and Steele as they are in being patiently taught what a hashtag is and how to thread a series of tweets. It is happening all the time always. Whether students are getting better at living into their future is a diffuse thing that is everywhere and nowhere at once in what we teach and how we model the world through our own professional and intellectual lives.
Image credit: Photo by Lucrezia Carnelos on Unsplash