On Pretentious Rhetoric
Including discussion of the following topical issues and queries: how pretentious rhetoric has been used effectively as job security. How the internet has shaped its usage over time. The problem with the “motte and bailey” metaphor. How critical theory and pretentious language have contributed to “the successor ideology.” The problem with “The Heterodox Academy.” McLuhan and Ong vindicated. The Bronze Age Pervert addresses some malarkey. Why the coronavirus will hurt universities even as our verbalist elites get worse. The need for a better, more robust elitism (ackshually a good thing).
Hey, want to read a really long, annoying, pretentious sentence? Sure you do. Look:
“The move from a structuralist account in which capital is understood to structure social relations in relatively homologous ways to a view of hegemony in which power relations are subject to repetition, convergence, and rearticulation brought the question of temporality into the thinking of structure, and marked a shift from a form of Althusserian theory that takes structural totalities as theoretical objects to one in which the insights into the contingent possibility of structure inaugurate a renewed conception of hegemony as bound up with the contingent sites and strategies of the rearticulation of power.”
This sentence by the infamous gender theorist Judith Butler was first published in 1997, and it won the “Bad Writing Contest” held by the journal Philosophy and Literature in 1998. Because bad writing proved to triumph over good criticism in academia, that journal ended its contest before the next millennium began. Nonetheless, to this day the sentence gets passed around by various centrist figures who generally subscribe to the axioms of liberalism yet find themselves repulsed by the excesses of the left, in the forms of both radical identity politics and hardline socialism. I’ve seen it pop up in several places, and I myself, when teaching writing, have shared it as an example of what good writers should avoid doing. It is, after all, a crappy sentence. But what exactly is so offensive about it?
To be fair, it is at least coherent should one decide to parse it constituent by constituent. To paraphrase it roughly, Butler is saying that academic theorists (like herself) have started to see capital as operating less under a fixed structure and more as something that changes over time based on circumstance, and this shift has subsequently caused people to see power and control (hegemony) as similarly flexible. Whether or not that’s a good way to go can be debated some other time. The real question is why this example of bad writing has proven to be so emblematic.
Perhaps much of the reason is that Butler herself has proven to be something of a Trojan horse in academia. For conservatives and moral traditionalists, the situation looks like this: Butler snuck her way in using pretentious rhetoric, she had some quirky ideas about sex and gender, and she published them throughout the 90s to little fanfare from anyone outside of her immediate milieu. No one really expected that her theories would carry a marked influence on society at large. And yet, as we have seen from rapid onset gender dysphoria in preteens, drag queens performing for small children in public libraries and schools, small children in drag performing for gay men in nightclubs and public parades, a Texas court ruling that a divorced husband cannot legally prevent his preteen son from being administered hormone blockers, and (perhaps most disturbingly) all of this being covered favorably by the mainstream press, it’s fair to say that she has made her mark. When that particular sentence of hers won the “Bad Writing Contest” in 1998, it symbolically meant something very different from what it means today.
But the other part of the reason as to why it’s so offensive is somehow both less surprising and yet more interesting when seriously analyzed: she is excluding you from the conversation. And she is not doing it by speaking in a different language. She is doing it with English. Most detractors of this approach to writing sense its exclusionary intent, and they vaguely sense that for all the pretense in the diction, the ideas themselves are probably vacuous. But of course, most who reach that conclusion are only guessing, and the uncertainty cannot ameliorate the frustration that such a sentence might elicit. Indeed, it probably makes such frustration worse.
Now, the second reason was all there really was back in 1998, since no one could quite foresee just how much trouble Gender Trouble could cause. At that point in time, academia had spent about a quarter-century developing Butler’s approach to rhetoric through the critical theory of other, better thinkers. Pretentious rhetoric had been around for even longer than that, but the advent of poststructuralism crystallized its institutionalization sometime around the early-to-mid-70s, when available jobs stopped being so plentiful. By 1998, it had become abundantly clear to a savvy group of academics (all of whom are retiring and dying right about now) that the real function of this writing was none other than job security. The subjects under discussion must be so impenetrable as to invite only a small hardcore group of devotees who can convincingly talk the talk, write some books, and then get tenure.
Elite writing is nothing new. In many civilizations throughout history, advances in literacy have separated language registers to keep the plebeians discursively distant from the elites. In the popular imagination, people nowadays consider Egyptian hieroglyphics, for instance, a “universal language” that cell phone emojis may one day approximate. But it was nothing like that. Hieroglyphic writing was the most elite of what were at first two, and later three, Egyptian scripts. It could be spoken, but the characters were mostly non-phonetic. It required a highly disciplined memory, and only the priestly class could read it. In our Western civilization, the medieval period saw the emergence of a split between Latin and local vernacular languages all across Western Europe. At first, Latin often blended together with the vernacular Romance languages and lent itself to various gradual deformities until the so-called Carolingian Renaissance of the ninth century, which placed greater demands on priestly fluency and linguistic uniformity. By the eleventh century, when manuscript circulation had begun to accelerate across Europe, Latin was no longer an evolving, flowing, living language; it had been ossified into a tightly rule-governed abstraction rooted mostly in the style of Cicero. It was a dead language, essentially, but elites could write and speak to one another with it, and that’s what mattered. When a culture splits its modes of communication into two languages, we call this “diglossia,” i.e. “two tongues,” and it plays a wonderfully useful role in social class demarcation.
Perhaps halfway through the 20th century, academia should have looked inward, been more honest with itself, and just invented its own language. It should have understood the need for diglossia, with not just linguistic but perhaps broader cultural ramifications. Perhaps it could have invested in required lessons for professors in Esperanto, the failed universal language spoken by George Soros. It certainly would have solved plenty of problems. But alas, ours is not a society that would think highly of such a thing. The legacy of liberal ideology — itself rooted in Protestantism, which encourages everyone to engage intellectually with the Bible and thus read their way into heaven — would never allow it. Neither would the thought process behind democracy, which calls for the individual to strive to make informed decisions. So the rejection of pretentious rhetoric that characterizes so much of populism is not simply a reactionary judgment, but in fact a reaction to a phenomenon that stands contrary to much of the Western, especially American, sensibility. The academic world, aware of this problem, compensated by creating what we might call “pseudodiglossia.” Pseudodiglossia is a parthenogenetic subtype of diglossia: it’s what happens when a language, almost like a single-celled organism, splits off from itself to reproduce and make offspring. That is what our sentence from Judith Butler is.
One advantage of pseudodiglossia is that once a text written in the pretentious style has entrenched itself into the academic repertoire, it wins over converts who will stay converted. The reason is psychological. Confidence-game schemers, pick-up artists, and various other kinds of predators and manipulators have learned from trial and error that the way you earn someone’s loyalty isn’t by doing things for them. Instead, it is by getting them to do things for you. The effort one expends upon a project makes it appear, in the person’s mind, far more valuable than it normally would seem had it required less work. This feeling is a cognitive distortion that psychologists call “effort justification” (a close cousin of the better-known sunk cost fallacy), and it helps to explain the impressive loyalty that participants have for fraternal organizations with intense hazing rituals as well as religions with long and demanding conversion processes. Pretentious texts that belong to the “elite” pole in the academic pseudodiglossic split require a great deal of effort to understand. But most of the time, they do make sense and can be understood. Their actual value is often questionable, but for the man who has spent weeks or months trying to grasp their meaning on the most basic level, the effort he has expended makes him all the more ready to accept them as an influence and apply them as an interpretive lens to what he encounters.
We should give the devil its due. For a long while, pseudodiglossia was a useful makeshift way to draw lines of discrimination as the academic industry expanded from the 1970s up to the 00s, with greater numbers of PhDs than ever before, a thoroughly saturated job market, and far too many tenure-track applicants. With it, there was at least some way to sift out the people who couldn’t read or speak the language as well as others, or who could not effectively apply it to their work. Of course, networking and communication always played a big role in the hiring process, and after 2008, those things drastically rose in importance as humanities jobs dwindled due to the financial crisis. Though pseudodiglossia remains in place to some extent in the upper echelons, it has had to change, and I will later discuss the genesis of these changes. It is an imperfect system. But it is a system nonetheless, and it appears to have been generated spontaneously without any conscious planning or conspiring. Philosophical language in general has always been difficult, particularly with the arrival of post-Kantian continentalism, but the innovation of ostentatiously exploiting its knottiness to serve material, market-driven ends has proven to be a real adaptation — take it or leave it. So let’s give a big hand for pseudodiglossia, everybody. Hear, hear.
II. New Textual Communities
Now that we’re done celebrating, let’s return to the first reason as to why that Judith Butler sentence is so infuriating to critics of her and her style of expression. She wormed her way into academia, and her ideas have spread into the popular imagination, often rather awkwardly. We know that this proliferation has been awkward because so many of her arguments about sex and gender, and the interrelation thereof, remain obscure to most readers. And we know they remain obscure because it is not always easy to tell if her most vocal promoters correctly interpret her theory. For instance, in a Medium.com post criticizing the YouTube video creator ContraPoints, the trans writer Alyson Escalante is careful not to indicate whether or not ContraPoints cites Judith Butler accurately. ContraPoints, according to this article, merely “interprets Butler as saying” such-and-such. She “attributes” to Butler this or that idea, but we are never told if her attributions are correct. This reticence is interesting, considering that the article appears to be critical of both Butler and ContraPoints despite the refusal to directly engage Butler’s ontology of gender and sex. It’s as if a protective halo floats around Butler’s work itself.
Escalante is hedging her bets. But who can blame her, when misprision does seem to be a problem, not only with Butler (and to be sure, there are many trans women who do not care for Butler), but with so many of the writers within her academic poststructuralist milieu? It is easy to spot in the critics of these kinds of writers, but even among their non-specialist supporters, misreading appears to be a problem. For instance, in 2010, a YouTube video celebrating Michel Foucault’s History of Sexuality went viral with around 100,000 views. A young man in a voiceover claims that Foucault “developed the repressive hypothesis, which says that today’s sexual repression is due to the rise of the bourgeoisie in the seventeenth century” while music plays in the background and zany images of dildos, adult cartoons, and Elmo from Sesame Street dart about the screen. But as one commenter points out, this is precisely the opposite of what Foucault was arguing; his whole purpose was to undermine the repressive hypothesis, which had already been around for some time.
Even in published material, this tendency to support a writer while not quite understanding the work can be seen. A laudatory blurb in the back of Spivak’s translation of Jacques Derrida’s Of Grammatology (Johns Hopkins UP, 1997) states that Derrida’s book “is the tool-kit for anyone who wants to empty the ‘presence’ out of any text he has taken a dislike to” and moreover that it is as destructive as a bomb that you can disguise in a brown paper bag in order to blow up a bar! Wow! But anyone who grasps the upshot of Derrida’s theory will know better. Derrida himself would routinely deconstruct texts that he liked and wanted everyone to read, as have so many Derrida-influenced scholars who have affectionately deconstructed whatever texts in the literary canon their career specialty has stuck them with (although they always claim that the text has actually “deconstructed itself” so as to seem maximally intriguing and mysterious). I will not belabor this point, but there are probably quite a few examples for anyone who wishes to look. Certainly in the past decade, anyone who has taken a graduate seminar on critical theory will notice that it is common for politically conscious students to try to apply the theories they are reading to current events and activist causes before they are fully aware of what the theories are even saying. When the well-known male feminist community college professor and blogger Hugo Schwyzer had a public meltdown during a manic episode on social media in 2013, he admitted that while his academic background was actually in British medieval church history, he managed to teach two women’s studies courses without any expertise. In his words, “I read one book of [masculinity studies theorist Michael] Kimmel’s and made myself an expert on masculinity.” Not that Kimmel is a terribly difficult (or terribly bright) writer, but one gets the point.
Knowing the foundational work, the intricacies of the theoretical questions, and other subtleties that may arise is perfectly optional for many.
So, how does this situation happen exactly? Well, we’ve already discussed diglossia in the eleventh century, so let’s push further into that century for a more developed point of comparison. In The Implications of Literacy, Brian Stock describes the genesis and development of several heretical Christian groups that all emerged in that time, and what he shows they had in common is striking. These heretical groups were often led by a single charismatic individual, and they were more broadly developed by a core of literate people for whom the Bible acted as the bedrock of their day-to-day lives. Their heretical interpretations of the scriptures did not need to involve an endless reexamination of key chapters or verses, since the text was well known and its broader meaning generally agreed upon; they could then promote their interpretations of the Bible to a broader group of illiterate laymen, for whom the Bible was highly important even as its inner depths remained mysterious. A two-tiered structure of belief would emerge with literacy as its chief mark of distinction. But even during oral interactions between literates and illiterates, the Bible itself had been internalized as the primary source of meaning. The vocal discussions acted as a superstructure building upon a semiotic foundation accepted by all. For instance, in one episode, an outbreak of heresy at Monforte, the heretical group’s leader, when questioned by church authorities, proved to be educated in both the patristic writings and standard methods of biblical exegesis, though his statements were often intentionally vague and/or mystifying. But the other adherents in this group spanned a range of both literacy and erudition.
Stock’s explanation for why these heresies occurred differs from other explanations that typically focus on either the social dynamics of the heretics and their communities or the internal theological reasons for their disagreements. Because there are no convincing patterns that emerge from those areas, Stock instead argues from a media-focused standpoint that the growth in literacy, expansion of manuscript circulation, and unique communicative dynamics between the learned and the laymen created a series of “textual communities.” These textual communities centered upon the Bible as the primary source of meaning, but the unique directions that each heretical group went in depended upon the oral interactions between its educated and uneducated as they worked through the meaning and significance of the scriptures. In other words, the way the heresies grew had everything to do with the unique media environment that the local clerical authorities and their congregations inhabited. Stock’s focus on media is important here because it is not overly “social” in theory. That is, it doesn’t make a hard distinction between “the church” and some anti-establishment “sect.” Stock realizes that these heretical groups did not see themselves as radical revolutionaries. But his theory is also not overly “doctrinal” either, because it draws no binary between “orthodox” and “heterodox.” Instead, Stock demonstrates that the media environment created subtleties in the analysis of text that would emerge as part of a discursive process, and this process effectively eroded considerations of “orthodoxy” and “heterodoxy” for the local clerical authorities whom the church establishment would later question and condemn for heresy.
When making a comparison between media technology in the 11th century and ours today, there obviously can be no tight fit for every nuance within the big picture. It is not like throwing a silk cloth onto a cluttered table and watching as the fabric hugs every contour. But emphasizing media technology as a key component is, I think, correct for a few reasons. For one thing, our current moment does involve an interaction between learned authorities speaking with a “sort of” different language, an elevated language, creating a new diglossic split in what I call pseudodiglossia. This situation bears some resemblance to one in which clerical authorities would negotiate between the higher register of expression found in theology and the orally-derived “folk” beliefs of the lay congregation when establishing an agreed-upon meaning. For another, although the transmission of ideas across diglossic lines has changed its character as the media environment has changed, there is still the same top-down pattern of intellectual transmission, with the exegesis of important texts as its impetus. Now, the introduction of widely-available high-speed internet has had a marked impact on the interpretation of theoretical tracts written in this higher register, which might lead us to think it brings us further away from a solid point of comparison. But the internet has actually in some ways brought our era closer to how culture was before the invention of the printing press. The internet, because of its immediacy and widespread use of audio and video communication, represents what we might call a “new orality” layered upon a culture of literacy (more on this later), and it has allowed differing interpretations of difficult works to result not merely in a preponderance of “heresies,” but altogether new tribal affiliations based on such heresies.
In the pre-internet era of, say, Judith Butler in the 1990s, her ideas on transgenderism, whether correctly understood or misunderstood, could never have spread the way they do now.
Nonetheless, there are some key differences between then and now worth mentioning. For one thing, the Bible has lost its status as the all-important text — the foundation upon which all other knowledge can only be a supplement. Instead, it has been replaced with a school-taught version of liberalism. When university students first encounter texts that might shake their assumptions in one way or another, their assumptions are characteristically liberal in an ideological rather than strictly political sense — not biblical. In fact, it is common for American Christians to cherry-pick Bible verses and recite them in the service of liberalism (Galatians 3:28 is a favorite), rather than quote famous liberal theorists to show their compatibility with the Bible. I will not waste much time defending this point, as any cursory glance at a typical sermon by a megachurch preacher can do a better job demonstrating it than I could arguing it. The average American does not read the foundational texts of liberalism, nor does she need to, because the assumptions are deeply ingrained in her as part of an educative process. The true authority in America is obvious.
So when a supposedly dangerous text or idea written in a pretentious style is introduced to a student, it functionally operates as a gloss upon some liberal idea, whether or not the author (usually a postmodernist of some kind) identifies as part of the liberal tradition. The process of teaching the idea always involves frequent sympathetic referrals back to values that have been previously taught as part of liberalism, such as “freedom,” “equity,” “the pursuit of happiness,” and so on. And indeed, the text itself will often make a fairly modest point despite appearances to the contrary. Oftentimes, it really will be merely a gloss upon a previously accepted liberal idea.
University professors are often portrayed by conservatives as radical left-wing extremists, but generally the most successful and secure professors who teach pretentious material are fairly mainstream in the sum of their beliefs despite their eccentricities in one or two categories. A recent article in Jacobin pointed out that both Judith Butler and Donna Haraway, another postmodern feminist theorist, contributed financially to Kamala Harris’s failed campaign for the 2020 presidential election. Harris had been criticized by the left due to her past career as a prosecuting attorney, and although Harris dropped out early in the race, major liberal news outlets were covering her favorably when the contributions were made. To those who have spent some time in humanities departments, this sort of thing ought to come as no shock; it is common, though surprising to outsiders. In fact, the most radical and extreme college faculty on the left are not the tenured professors but rather the people least secure in their careers. They tend to be graduate students and adjunct instructors with no hope of tenure at all.
It isn’t hard to see why this is once you pay attention to the pedagogy behind critical theory ideas at all levels of education as well as how these ideas are disseminated in other channels outside the classroom. The transmission of the ideas is key, here. A graduate seminar in postmodern critical theory at an Ivy League institution will differ from a grad seminar on the same topic at a backwater institution, though the professor will often have attended an Ivy. If one of the students who attended that seminar at the backwater institution winds up teaching an advanced-level undergraduate course in theory at a small state-funded university with no graduate program, the teaching will differ even more. If a student who attended that course winds up teaching a basic composition class at a community college and feels obligated to enlighten her students with whatever she got from that theory course, then will her presentation be different there, too? You betcha. And if a student who attended that composition course goes on to instruct a master’s program for certification in K-12 teaching or perhaps social work, and she wants to tell her students about this neat idea she learned, then surely you can imagine how far that idea, with all of its complexity, initially cloaked in pretentious rhetoric, will have strayed from its original meaning. We are dealing with delicate material, here!
Essentially, the professors most securely ensconced within their profession are acutely aware of what their texts truly say versus what they appear to say. And although they may have been initially drawn to the material based on their own misconceptions, they will gradually come to appreciate the richness and complexity of their work for its own sake, on its own merits. This process, together with the process of navigating the often mazelike contours of academic bureaucracy while maintaining a steady commitment to professionalism, will have a deadening effect on the professor’s radicalism, even though the professor may have a zany haircut, or have some quirky views on gender, or whatever. But the same cannot be said for that professor’s students.
I’m comparing this situation to the 11th-century “textual community,” which Brian Stock identified as what facilitated the outgrowth of heresies against orthodox Christianity, and my comparison rests on the parallel transmissions of information across diglossic boundaries. In our situation, the heresy is not against orthodox Christianity, but against American liberalism. But our situation differs in another key way, which is that the spread of information does not take place in closed communities. The information continues to change its form and character along an indefinitely extended path outward, and at each point of mutation along the way, the environment lacks the strong social bonds necessary to be called a “community.” Instead, I would call the classroom that teaches pretentious texts an “exegesis hatchery,” in which our blessed education system allows a thousand eggs to spawn a thousand mutants, who in turn lay their own eggs, and so on.
What allows the exegesis hatchery to become an important feature of Western discourse is the advent of digital media. Without the internet, any clunky and excessively radical reading of something like, say, postcolonial theory would be dismissed as the innocent eccentricities of a slow-witted adjunct instructor. But with the internet, each misreading of an idea can bond to another misreading, and when those bonds become sufficiently extensive, a disembodied, digital community can form across geographical boundaries to reinforce that misconception. The Neoplatonist philosopher Plotinus held that the material closest to the Divine Mind, the nous, is light and subtle, while the material furthest from it is dense and rough — that is, the further a material strays from pure ideality, the more concrete, tangible, and altogether heavy it becomes. In the same way, each critical theory has its fruit and its chaff, and exegesis hatcheries will inevitably expand the chaff and obscure the fruit the further they stray from the source, leaving us with a heap of density and roughness. The internet is the mysterious glue that can bond the various types of chaff together, until each category thereof assumes its own formidable significance. In sum, the exegesis hatchery creates the liberal heresy. But the internet, with its tendency to create discrete digital “tribes,” provides the means by which these heresies might blossom into something akin to Stock’s concept of textual communities.
III. Exegesis Hatcheries, Liberal Heretics, and the Successor Ideology
At this point, I can sense that some pencil-necked dweeb will inevitably demand examples of a digital community steeped in the sort of confusion I’ve outlined. “D’aaahhhh, where are your accredited sources,” he protests, the index finger of one hand pointed upward, the other hand adjusting his thick black-rimmed glasses, his neon green pocket protector catching a fleeting shimmer from the fluorescent lights above. I can sense that I’m stepping on some toes, here. Well, since the art of keeping an audience engaged depends partly on never glutting them with too much of what they desire, allow me to prolong the suspense. For a minute, anyway.
Any reader who has been following the topic I’m assessing will be familiar with the idea of the “motte and bailey” technique. But I’ll explain it anyway. The motte and bailey was a useful architectural arrangement for land security in the 11th century (yes, another 11th-century comparison!). A bailey was a vulnerable field or courtyard surrounded by a barrier or ditch to slow down invading enemies, while a motte was a more secure nearby tower on an elevated hill. When an enemy army would invade the bailey, soldiers could occupy the motte and proceed to rain down their missiles and projectiles on the invaders. Let’s clarify the comparison and observe the technique through an example. The philosophy professor Nicholas Shackel coined the term in a critical toolkit he wrote back in 2005 to help people understand the disingenuous rhetorical techniques of the (mostly French) postmodernists, the kinds of people we’re examining. Analyzing an essay by Michel Foucault called “Truth and Power” (1977), he shows how Foucault can get away with making so many sweeping and irresponsible claims about truth: he redefines the word to make it far more convention-driven and less self-sufficient than what most people understand the word to mean. So, Foucault says “the political question … is truth itself,” which is quite a shocking statement taken on its own, but a mere page beforehand, he describes truth as “a system of ordered procedures for the production, regulation, distribution, circulation, and operation of statements,” rendering the shocking sentence much less interesting to the attentive reader. In this example, “truth” creates the bailey because of its redefinition set against its standard lexical usage. When Foucault discusses truth from that point onward, the connotative weight created by the standard definition will bear down on every subsequent use of the word. He creates an implied meaning that everyone wants to have and protect, much like a bailey.
But when someone says, “Come on, now, Foucault, that’s taking things a bit too far,” Foucault can (in theory) retreat from his bailey, go up to the motte, and say, “Ah! But you fail to grasp the meaning of truth as I define it!” and viciously fire arrows down on his enemies like Ted Nugent hunting wild game. And then when the critics go away, he and his friends can go back to hanging out in the bailey once more, frolicking around irresponsibly with the implied-but-not-literally-stated argument about truth.
Since Shackel’s article was written, people have profitably used the “motte and bailey” to analyze aspects of online discourse. The rationalist blogger Scott Alexander was the first, and then a bunch of people followed. It is a useful tool of analysis. But in looking at only the primary texts of Foucault and other postmodernists, Shackel himself fails to substantiate the point he makes, though others have succeeded when examining less prestigious forms of discourse. The concept of a “motte and bailey” is sociological in nature because it has to show that some deception has taken or will take place. It assumes the reaction of an audience that Shackel only theoretically constructs from internal textual evidence. If I find an example of someone angrily shouting, “Racist! Racist!” at someone else, and then I find the same person days later saying something like, “All white people are racist because racism is a form of privilege systemically intertwined with the levers of societal power,” or whatever, then I’ve come much closer to identifying a true motte and bailey, because the self-same individual is shown to hold two contrary definitions of “racist” at once.
This distinction is important because we are talking about texts written in a highly exclusive language with multiple clusters of audiences who each hold different levels of education, erudition, and raw intelligence. In order to understand how the pretentious rhetoric of postmodernists, critical theorists, and others affects society, one cannot simply extract an anticipated response based on the text alone, even if the author as an historical subject did just that (most of the time, you’ll never truly know anyway). Instead, we have to figure out the horizon of expectations for each of these various audiences that will interact with the author’s work. Not to put too fine a point on it, but not everyone will be hanging out in the bailey. For some audiences, the bailey will simply not exist. For other audiences, there will be vast, sprawling baileys, the likes of which we could never have dreamt up on our own. Exegesis hatcheries sometimes result in large groups of people clustered in these spacious baileys, completely unaware of any motte. And when one group (like, say, a teaching faculty at some random elementary school) has itself taken on influence from several exegesis hatcheries, the major thinkers that prompted exegesis themselves often become irrelevant. There is no longer any necessary connection between the theory and the group — just the vague connotations, anticipated misconceptions, and implied imaginative leaps that the theorists seem to have excited everyone with to get their attention in the first place.
With all this in mind, look at what happens when critical theory and pretentious rhetoric make their way onto the internet and create communities that impact the embodied world — this is the example that our impatient nerd at the beginning of this section was demanding. In practice, this process has led to a gigantic, formless blob of ideas that is influenced by a whole bunch of theories with their own pretentious jargon words and rhetoric. It is something that the journalist Wesley Yang has called “the successor ideology.” The successor ideology is, according to Yang, the successor to the meritocratic liberalism that has characterized much of the 20th century in the West. It is the triumphant conclusion and result of the “long march through the institutions” that the cultural left had famously undertaken following World War II. In effect, it has created a new language of power that has impacted every corner of the non-profit sphere, and its power is realized through a culture of fear and intimidation often elicited via the manipulative use of its talismanic jargon words, such as “harm,” “trauma,” “privilege,” “whiteness,” and so on. So what does it look like in practice? In one Twitter thread, Yang links to an article in the NY Post describing how the New York City Schools Chancellor Richard Carranza presented to NY public school administrators a PowerPoint presentation about the need to dismantle something called “white supremacy culture,” which Carranza associates with “individualism,” “power hoarding,” “defensiveness,” a “sense of urgency,” “worship of the written word,” “objectivity,” and other silly things. More research reveals that Carranza is himself a bit of a character. For instance, he has allegedly tried to demote or push veteran senior administrators into retirement simply because they are white (as of now, they are suing him).
And as Chancellor of the schools in San Francisco, he tried to get rid of 8th grade algebra as part of his opposition to white supremacy, ignoring the solid majority of Asian students taking the class.
Yang correctly states that Carranza is but one example of someone with the “successor ideology,” and it involves a mentality that blossoms and thrives first in universities and foundations, then expands throughout all institutional liberalism as well as major parts of the private sector, including the training and HR policies for corporations such as Google. But where did Carranza get his ideas on white supremacy? Well, in this instance, he didn’t get them directly from critical theorists, but rather from career activists who were themselves influenced by critical theory. Carranza was influenced by “Dismantling Racism: A Workbook for Social Change Groups,” a web site by Tema Okun and the late Kenneth Jones. These authors are only academia-adjacent (Okun has a PhD from UNC-Greensboro), and the site is designed for activist organizations. Now, on the site, their definition of “racism” comes straight from Critical Race Theory, and it appears to do a serviceable job of breaking down the theoretically-derived definition. But their discussion of white supremacy is only partially influenced by Critical Race Theory, with their own definition presented as an equal and complementary counterpart. The page on “White Supremacy Culture” by Tema Okun, the main source for Carranza’s PowerPoint presentation, has much less to do with theory, if anything at all. Let’s explore it, just to get a sense of how this stew is made.
The page is actually sort of cute, if you read it, because it is clearly geared toward activist groups and other nonprofit organizations. For instance, according to Okun, a “sense of urgency” is part of white supremacy culture because it fails to consider long-term goals over short-term ones, and it is “reinforced by funding proposals” that demand too much effort for too little in return. Antidotes to this pernicious white supremacy include “realistic workplans” and leaders who appreciate the length of time for a task to be completed. Obviously, in another context, one could imagine an activist deciding that low time preference and the view that “slow and steady wins the race” is white supremacist instead. But in this context, the ineffective thing is white supremacy. Whatever works is the good thing, you know? Under “worship of the written word,” which one might expect to be some sort of humble-brag about the erudition of whites, the discussion is actually about how organizations put too much emphasis on written memos as opposed to “alternative ways to document what is happening.” She even argues that if someone must write something down, it should be clear and understandable, without any “academic language” or “buzz” words. Okun may have a whole lot to say about such academic “buzz” concepts as “whiteness” and “white supremacy culture,” but when it comes to running organizations, she is apparently all business!
One can surmise that Okun isn’t exactly burdening herself with a high standard of evidence, here. But what are her sources for all this white supremacy culture talk? According to a partial bibliography that appears to order its entries by importance (either that or it’s random), the top two sources are notes on workshops held by activist communities that took place in 1999. That is, the discussion is informed not by strict theory but by the liberal heretics that dutifully try to capture at least the spirit, if not the letter, of the various sorts of theories and critical discourses that have been indirectly filtered to them through exegesis hatcheries and perhaps other tertiary information channels. Okun is trying to remain true to her belief that oral discussion is just as valuable as the written word of a single author, if not more so. But if the whole framework for the discussion on white supremacy is grounded in theoretical writing, and the participants in these discussions are influenced by such theoretical considerations, how can these oral workshops free themselves from the burden of textual authority? Simply put, they can’t. They remain subordinate to it, and the discussions they elicit take the form of notes, which are akin to the “glossaria” that a medieval cleric might jot down in the marginalia to a commentary on some verse in the Bible. The ideological essence of liberalism, critical theory, and other pretentious academic texts might be loose in their conception, and their interpretations might be complicated by more forms of communication technologies, but the top-down yet often recursive nature of the discourse as it moves along different media lines is surprisingly consonant with the “textual communities” that we’ve discussed.
Wesley Yang, for his part, is keenly aware of the cluttered nature of this kind of thinking. And about the whole successor ideology of which that web site represents just one example, he diagnoses the clash between the need for legitimacy and the undisciplined nature of the discourse as a key aspect of the problem. He says on Twitter, “From eco-feminism to Carlos Castaneda to Carol Gilligan to German Romanticism, there isn’t a single woolly-headed critique of Western philosophy that isn’t thrown willy-nilly into this stew and presented as authoritative, despite the obvious internal inconsistency.” And true though this observation may be, it raises the question as to whether the successor ideology is really an ideology at all. Rather than an ideology, which is usually understood as a more or less coherent multi-pronged system of ideas, this appears to be a generalized attitude, an accumulation of disparate ideological residues all thrown together. And the promiscuous mixture of these residues takes place on the social level, when various liberal heretics get together and form bonds with one another, however loose, based on a simplified exchange of different, often incompatible heresies. The social bonds, feelings, rituals, taboos, and emotions in these exchanges are more relevant here, whether in person or online, than the original writers and theorists from which these exchanges of ideas either directly or indirectly obtain their veneer of authority. The people can supplant their sources. So how did Carranza stumble upon this web site that he used as a source for his school administrators? He may have been linked by a university professor long ago, or he may have been linked by someone within his own digital “tribe” on a social media platform.
The point is that the internet is where the original theoretical texts spread, and what he came to find authoritative was not any single text but rather a vague collection of ideas based on some theory texts: the product of readerly communities who kept some of the jargon and dismissed much of the substance. And mind you, Carranza is just one man, here. The entirety of the “successor ideology” is even more diffuse, a loose arrangement of various bureaucrats who have graduated from exegesis hatcheries and inhabit various digital tribal groups, each with its own zany “pet” issue that itself warps some hallowed mainstay of liberalism.
I am using the expression “liberal heretic” as a way to describe the graduates of these exegesis hatcheries, but one could object with some justification that it is inherently contradictory since liberalism, whatever its value, is not an orthodox creed. Rather than resting upon a foundation of incontrovertible dogmas and doctrines, it is a loose collection of guidelines and rules-of-thumb, which makes it frustratingly hard to define, as anyone who has tried to explain to the average person the difference between “leftism” and “liberalism” can attest. Because of this relative looseness, there are some groups who seem to consider heterodoxy the essence of liberalism, or at least perhaps enlightenment epistemology, and fight for it on its own terms. Within college campuses, the “Heterodox Academy” is the clearest example. But the openness of liberalism as a concept does not make it altogether devoid of coherent substance. When bad-faith actors can so easily exploit the weakness of its mere guidelines with an excess of ideological malformations and what some call “concept creep,” can the solution really be the encouragement of more heterodoxy? Heterodoxy for its own sake? If the contents of the successor ideology are characterized more by contradictions than consistencies to the point where it hardly constitutes an ideology, where is the “orthodoxy” against which a “heterodoxy” might even contend? Could it be that a lack of orthodoxy is the much bigger problem? Is this situation not like the well-worn allegory of an old man stuck in a hole, trying nevertheless to dig his way out?
IV. From Pseudodiglossia to Paradiglossia
Earlier, I justified making such long comparisons to the medieval period because digital culture is, in a sense, a “new orality” that approximates certain aspects of pre-print manuscript culture. This idea was first brought up by Walter Ong and then Marshall McLuhan in the 1960s, though Ong called it “secondary orality” to compare what he saw in the culture of electronic media with the culture of “primary orality,” i.e. a culture in which no form of literacy has been invented. After this school of media ecology was introduced and made a splash in the Western world, its supposed technological determinism gradually fell out of favor with academics, along with its tendency to rely on an oversimplified understanding of oral cultures. However, since the widespread use of the internet, this kind of media-focused thinking has made something of a return, even among academics and amateur writers who claim no influence from either Ong or McLuhan. The mass usage of the internet has confirmed some aspects of “electronic media” identified by the early media ecology theorists, while calling for others to be rethought. Among those with a background in the study of ancient epics and other forms of oral literature, John Miles Foley has been one of the most prominent to discuss the connection between primary orality and digital internet culture specifically (see his now-defunct “Pathways Project” web site). When engaging with his work, one can see that there are indeed some compelling connections between those two environments.
The first thing one should appreciate is that the culture of primary orality is epistemically process-oriented rather than definitive. No work is ever really “finished.” Ideas are built from the ground up, and people add to them agglomeratively rather than by starting from first principles and breaking them down divisionally. In order to argue from first principles, one has to inscribe those principles for preservation, mull over them for a bit, and conceptually reorganize his various ideas until all the parts fit together into branches and subdivisions. This principle holds true for the transmission of basically all information. It applies not just to philosophy and mathematics (making deductive arguments stemming from sacrosanct laws) but also to narrative storytelling (e.g. assigning fixed, complex psychological or ideological qualities to a character rather than relying on what we might call “stereotypes,” “archetypes,” “folk wisdom,” or whatever). Unaided human memory can only achieve so much analytically, and thus it cannot do what literacy can, even if some rudimentary forms of complexity might be found among the tales or proverbs of hunter-gatherer, horticultural, or primitive agricultural tribes.
Secondly, an oral “text,” being the result of gradual adjustment, is often the product of many authors who have distributed their contributions over expanses of time. Since oral cultures have to memorize everything, important ideas are stated over and over in order to contend against the deteriorating effects of time’s passage. Since so much of the shared knowledge has become embedded within a body of people, new speakers can only add bit-by-bit to what comes before them and truly make it “stick.” Thirdly, no one will typically accuse a speaker of being excessively repetitive, because repetition isn’t understood as we understand it. That is to say, ideas don’t really “repeat.” Instead, they “reoccur,” almost as natural phenomena. In various ancient Indo-European poems, words are often conceived corporeally, as physical extrusions from the chest; things to exchange; things that a poet can keep inside a hoard for later use. Though that conception may not necessarily hold for all oral cultures (I haven’t checked), it suggests a perspective on language that treats its various syntactic and semantic configurations as concrete offerings or inputs for certain situations, not as unique expressions of the speaker’s distinct personality. Thus, a certain performance of language, like an old saying of an heroic ancestor, can be used time and time again provided its context is appropriate, and no one will complain about its staleness. To some extent, this is a feature of oral discourse that we have never lost (people still say “bless you” when someone sneezes), but the internet has expanded this usage of context-specific formulas, declarations, and so forth. Although internet subcultures will often complain about stale memes, slogans, or talking points once a critical mass of people feels that they’ve been overused, it is still worth acknowledging just how much mileage these memes and talking points can get before being discarded.
The importance of originality, while still emphasized, has been reduced as people increasingly understand the transmission of digital text to be an approximation of oral conversational discourse rather than an official “publication” of some kind. There is no way this conflation does not impact the way we write on all but the most elite levels of discourse.
The fourth aspect of primary orality worth discussing is its agonism, which was first postulated by Walter Ong. Since his time, the aforementioned criticisms against Ong and his ilk appear to have stymied the observation, even among academics who have adjusted and qualified their endorsement of his basic claims. I have not seen too many serious discussions on digital media and its agonistic qualities as a re-emergence of what one finds in primary orality. There may be some concerns about stereotyping primitive cultures. And to be fair, one could perceive the agonism of electronic media only to a limited extent when television and radio were its main representatives. But since the development of social media internet apps and the performative nature of online argument, the conclusion seems completely inescapable. To be sure, the various social media platforms each have their own unique technologies of organizing human communication, but the most agonistic platforms involve a speaker communicating not merely to one other person but also, simultaneously, to a large audience of onlookers. When video, sound, and live streaming are involved, these debate performances can often get downright ugly, even despite the low expectations of material rewards and sometimes Pyrrhic consequences for the victors. Even on primarily text-based platforms, where users often spend hours daily, many have recognized that communication is far more competitive than dialogic in nature. During the enlightenment, by comparison, it was common for both the bourgeoisie and the aristocracy to spend hours per day writing correspondence for one person at a time, immersing oneself in that person’s world, tailoring one’s writing to that person’s concerns, and so on. The difference could not be starker.
Still, ours is a literate culture, and such comparisons must remain limited and contextual. Ong originally felt that “secondary orality” would only reintroduce aspects of primary orality to a limited extent. But even that somewhat modest claim is much too straightforward, since it fails to take into consideration the total range of communication types that electronic media has impacted or might potentially impact later. That we still form affiliatory groups centered on certain written texts and the interpretations thereof suggests that “primary orality” may not even be the best point of comparison, since oral communication never recedes altogether from any media environment. Though book-reading, long-distance communication, and other methods of transmitting information have drastically changed over the last few decades, we still record important ideas for preservation and require the use of analytical thought processes to provide the complex technological infrastructure that enables digital culture to survive in the first place. There will never be a “return” to primary orality, or even a partial approximation. I’m suggesting instead that the internet has caused the reception of major texts to work in a manner more similar to that of manuscript culture, the time before the invention of print in which written texts occasioned and shaped the framing of an oral discourse surrounding them. Discussing things through the internet is not the same as person-to-person discussion, but the speed with which conversational discussion occurs helps to approximate it.
Now, going back to the problem of pretentious language, we’ve seen a process in which an elevated writing style designed for a highly engaged and attentive reading audience filters its way through various media to increasingly large audiences of a less sophisticated register. But the story does not quite end there. When this pretentious rhetoric gets transmitted onto the internet, people are typically grounding their speech in their own interpretations of the texts that supposedly inspire them, and these discussions prompt a further exchange of ideas and value judgments that results in a basic attitude that has been called “the successor ideology.” But in this transmission from a higher to lower register, a new problem has emerged: since this language is now spread far outside of academia and used in regular discussion by the verbalist wing of our ruling class, how do they keep the discussion relatively exclusive? Does it even need to be kept that way? The answer is simple: yes, it does. The whole allure of these jargon words in the first place was their exclusivity. Much like the word “shibboleth” in the Book of Judges whose correct pronunciation meant the difference between life and death, these jargon words, strained puns, and various syntactic maneuvers have done a fine job of separating insiders from outsiders. But by cheapening the language that was originally used in these formative pretentious texts, mostly written before the internet was common, today’s liberal heretics have made it so that now any ol’ bozo can show up and start talking like that. Which is completely unacceptable. Lines must be drawn. And the answer of how to draw them lies in the same impermanence that characterizes oral discourse itself.
In his debut article for the American Mind blog hosted by the Claremont Institute, the pseudonymous Bronze Age Pervert (henceforth BAP) argued that mainstream rhetoric against whites has escalated to a point that hasn’t been seen since the Hutu’s genocidal propaganda against the Tutsi in Rwanda during the 90s. One of his examples is a Huffington Post editorial entitled “Towards a Concept of White Wounding” that BAP takes as a straightforward call to wound whites. In a response, the movement conservative Aaron Sibarium argues that BAP is engaging in a bait-and-switch because his examples, which include that article, in fact all say something much softer than what BAP takes them to mean. BAP is thus engaging in scare tactics to whip his followers up into a frenzy.
But that editorial engages in a strategy that we’ve already seen, and which the internet has rendered all-too-typical. Its author, Jesse Benn, is a PhD student who has certainly read his fair share of pretentious academic works, so whether or not he understands the techniques, he has at least absorbed them instinctively in his writing. The word “wound” appears 21 times in his article, but we do not get to a definition of what “white wounding” really means until roughly two-thirds of the way through it. As you can imagine, it is a much softer redefinition than its standard English counterpart. Though Benn assures us that it doesn’t just mean “white guilt,” his definition suggests that it’s merely a more embellished and demanding version of what most understand white guilt to entail. Benn is inventing his own jargon on the spot, and it is most definitely pretentious and unnecessary. But he is motivated by the hope that his new alliterative phrase will go “viral” and become embedded as a regular feature of the discourse. He does this because the use of pretentious rhetoric is now accompanied by an awareness of the impermanence of its own usage. To signal your membership within the verbalist wing of our ruling class, you must not only conform to the standards of pretentious language and show fluency in it, but also demonstrate flexibility in knowing when some feature has dropped off and a new one has arisen. Obviously, some will last longer than others, and this varied level of impermanence is useful for keeping the discourse exclusive. One must stay constantly aware of frequent shifts in usage, because while this pretentious style may no longer be controlled by a central authority, it is nonetheless rule-governed.
This new approach to such rhetoric is fortified in two ways: first, by the creation of new jargon that one can use to signal in-group belonging; and second, by the creation of rules involving diction and syntax that render certain usages of pretentious rhetoric compulsory (we often treat these rules as operating under the broad category of “political correctness”). Anyone who has browsed through verified accounts on Twitter, the ones with a “blue check,” will have seen at least one tremendously popular thread or post in which some journalist, adjunct instructor, or non-profit organization worker has created a new rule or series of rules for how to speak in a certain way, usually with social sensitivity as the stated concern. The verbalist wing of our ruling class takes such rules seriously, and the ones that “stick” most certainly do become recognized (if only temporary) guidelines of verbal expression. All of this is possible due to the aspects of primary orality that become more prominent with widespread usage of the internet. People knowingly contribute new rules and jargon to a body of vaguely political ideas that doesn’t belong to any single person, growing bit-by-bit rather than branching off from one clear, logically consistent theory, and they constantly remind themselves of this body of ideas by letting it reoccur in their everyday speech, keeping it safe from the ravages of time and the erosion of memory.
Think of the budding young writer who wants her new pretentious jargon phrase to become popular and recognized. One way she might do it is by making it memorable. The internet, marking a return to a more orally prominent “manuscript culture,” allows for new jargon to appear regularly, bearing some resemblance to the mnemonic alliterative phrasal formulas used in old chivalric romances. University students may not go around discussing “white wounding,” but they do discuss “black bodies,” “slut shaming,” “fast fashion,” and other such phrases because they’re catchy and easy to remember. They may even change their precise definition over time, but such a change would actually strengthen the perception of the phrase’s value by demonstrating longevity. Therefore, alliteration or some form of assonance is a good idea. Another good way for our writer to ensure that her idea will go viral is by making it provocative and/or absurd — the more so, the better. Because the internet, and particularly social media, favors broad declarations made to just about everyone, a provocative or absurd idea will gain popularity by attracting a fair number of both cheers and jeers alike, and the polarization it elicits will compel its supporters to harden in their endorsement of the idea (and, of course, absurdity shows obedience to those in power). The agonistic qualities intrinsic to online discourse will benefit the more provocative statement. When Mr. Pervert decided to ignore the stated meaning of “white wounding” in that editorial, he was absolutely right to do so, because it was a sleazy attempt to coin a new popular phrase with catchiness and, more importantly, provocation tinged with a suggestion of violence. Such language, when available to the masses, can indeed become dangerous. Remember: for some audiences, there is no motte. None whatsoever. It’s all bailey.
As you should realize, this use of impermanence and constant modulation of the English tongue is different from the use of pretentious rhetoric that we examined at the beginning of the essay. Though it is often still used as a tool for job security by allowing administrators and gatekeepers in academia and academia-adjacent professions to identify outsiders, its overall use has both softened in its standards and broadened in scope. It suggests that we are phasing out of a “pseudodiglossic” paradigm into what I’m calling “paradiglossia.” In linguistics, a verbal expression is understood both diachronically and synchronically. That is, it is understood at the nexus between the rules of the language as they exist, frozen in time (“synchrony”; we can conceive of it as an x-axis), and the ever-evolving rules of the language, always in the process of changing (“diachrony,” the y-axis). In the diglossic split between Latin and vernacular languages that characterized Europe from the early medieval period all the way up to the enlightenment, great care was taken to keep Latin as fixed as possible. From the medieval period on, it was massaged into an ossified, somewhat anachronistic Ciceronian form and kept that way for centuries. Church authorities attempted to distill the language in an unchanging synchronic structure so that elites could learn the language, go back to studying what they needed to study, write texts that others could read far away, and then convene with one another with these fixed rules kept in place. Paradiglossia involves the same parthenogenetic language split as pseudodiglossia, but the “elite” language is characterized not by a complex synchronic structure but rather by frequent modulations across a diachronic expanse. In order to know the elite language, one must constantly stay abreast of new developments, because the rules one has internalized will undoubtedly change shortly enough. Engagement and constant attention is the name of the game. 
“Towards a Concept of White Wounding” was but a single misfire amid an ongoing bombardment of tactical jargon creation.
The tradeoff, in this gradual shift from pseudodiglossia to paradiglossia, is that while the rhetoric is still pretentious, it has gotten rather less impressive. In fact, it has simply gotten dumber, even when coming from professional academic theorists themselves. We do not see the extreme knottiness of a Judith Butler or Jacques Derrida nearly so often these days, but we do see plenty of career professors building their careers on pathos-laden excesses, tactical deployment of momentarily popular jargon, the occasional postmodern pun (e.g. using a hyphen to break up a word in order to draw attention to its morphological structure), and other low-energy staples. It is not so different from how the aforementioned chivalric romance literature would occasionally use simple oratorical techniques found in manuals such as the Rhetorica ad Herennium to lend the work an air of classical authority, even though the writing itself was quite primitive, composed for the benefit of minstrels who would memorize the work and recite it later at pubs. Paradiglossia could work no other way.
This approach was not conceptualized and deployed in a top-down fashion. Rather, it has been a spontaneous and intuitive way to make use of the unique properties of the internet as a medium. It also helps to explain why “political correctness,” which many thought to have died during the 1990s, has made such a roaring comeback. Political correctness in the 90s was to political correctness in the 10s as the English proto-Protestant Lollards were to the Lutherans. In order for Protestantism to “catch on,” it needed the invention of the printing press so that vernacular translations of the Bible could proliferate among the population. Similarly, in order to thrive, political correctness needed widespread usage of the internet in order to allow verbalist elites to solidify a form of exclusive discourse. Despite the lack of intelligent design driving our paradiglossic linguistic split, it is still a fairly robust system because of its potential for spontaneous autocorrection. Imagine, for instance, a situation in which the plebeians start to stir up some trouble, lose faith in the ruling class, denounce their legitimacy, and get rowdy. In such situations (or situations that seem like that), the verbalist wing of our ruling class tends to panic before finding comfort and solace in the creation of rules, if only for their own sake, because rules mean order. They also tend to become more prejudiced against outsiders, and this prejudice often manifests itself in the creation of new jargon, borne out of self-righteousness. By creating new rules and jargon, they have increased the rate of language modulations, fortifying their security, allowing them to suss out possible entryists with dangerous ideological motivations. Then, when the situation stabilizes, they can bring their language back to a slower rate of modulation.
Although these verbalists may think they are creating new rules and jargon words to last for all of eternity, and one could accuse them of only holding up the pretense of virtue while ignorantly engaging in yet another way of hierarchizing society, it doesn’t really matter. The passage of time blesses our elites, and the internal incoherence of the varied motivations behind their linguistic eccentricities can only bring them good fortune. When they say that their critics “don’t get it,” they are typically right. The outsiders are looking for logical consistency and reliable argumentation, failing to perceive a system that works in its own way — just not in the way intended or advertised. I cannot help but admire the accidental brilliance of this system.
And so off they go, these benevolent and kind-hearted elites — these noble and conscientious exemplars of morality — blindly stumbling into greener pastures, sleepwalking backwards into the cool oak shade, hopping one-legged and tumbling mirthfully into beds of daffodils, laughing and loving all the while. What’s that? What’s that whispering along the tepid and subtle breeze? Do you hear it? Do you hear it, riding the shoulders of the zephyr winds from afar as the dandelion seeds scatter and coat the dew-drenched greenery? Why, some are screaming about “heteronormativity” on their podcasts! Some are tweeting about “dead names”! Some — callooh! callay! — are ignoring the lexical categorization of the pronoun as “closed class” and inventing entirely new ones for the nonce! God bless these verbalist elites! God bless them all!
Oh dear, I’m sorry. It’s getting late and I’ve been droning on for a while now. I suppose I’m getting a bit wound up. Let me start this conclusion over.
While staying mostly within the question of pretentious rhetoric, this essay has offered an entry point into the larger question of elite gatekeeping, and it has done so from a media-focused stance: starting with an analysis of pseudodiglossia, ending with a discussion of paradiglossia, and making liberal use of comparison to pre-print cultures that maintained a diglossic split of elevated prominence in elite oral discourse. Whether or not the reader buys my analysis is, of course, up to them. But there are a couple more points I want to leave the reader with in closing.
As I’m writing this, a deadly coronavirus is tearing through the country at an exponential rate of infection, with the projected death rates shifting constantly. It seems as though the infection rate curve is now being “flattened,” but the whole situation has forced colleges to switch to online-only instruction, and the future of international student exchange programs looks dim, which will certainly result in less money for higher education. The future of athletic programs is also questionable, especially if schools need to reopen and close in yo-yo fashion as infection rates rise and fall. Small private liberal arts colleges all over the country will likely have no choice but to close, and even the Ivy League and state-funded schools will take a hit. University hiring departments are likely going to be very careful about whom they hire, they will probably rely more on non-tenure-track faculty than ever, and the job market will be somehow even more trashed than it has been for the last decade. The current situation has led some to ponder whether we will experience the end of “wokeness” — the political correctness, language control, and cultural extremism of the aforementioned successor ideology. After all, if non-tenured instructors feel as though their jobs are constantly in danger, maybe they will ease up on the “woke” stuff. Right?
I suspect the opposite: things will only get more intense even as the public loses faith in the legitimacy of these ideas, as it gradually has for decades. The “successor ideology” has ensconced itself in the system with far too much security to go away. It has, in fact, become embedded in the economic structure of higher education, with so many university administrators having paid fealty to its authority. Although this is not the place to go into great detail, I will nonetheless suggest that what people are calling wokeness, at least taken in a broad sense, has become an irrevocable part of how the university receives its tuition and endowments, and it has been an essential way for the university to retain a maximal number of students with minimal competence, which means more money for the university. The corporatization of higher education is real, and in times of crisis such as a global pandemic, robust trends tend to accelerate, not reverse.
Academia may wind up having to shut down departments, but I suspect that the departments to go will be a combination of those that the public sees as highly frivolous and those that should ideally be kept intact. From what I’ve seen, at any rate, the first departments that administrators cut in the humanities and social sciences are linguistics and foreign language programs. The reason: they are simply too demanding and require too much discipline and cognitive skill to turn a profit. And even though some nonsensical majors will likely be sacrificed, these theories have crept their way into the pedagogical approach to the canonical texts and major programs, which will remain fairly secure. All of the necessary conditions for the patterns I’ve described here will remain, and the verbalist wing of the ruling class’s use of pretentious language will continue to work along the same lines outlined above, perhaps with even more aggressiveness and perhaps more self-awareness and cynicism. Keep in mind that English literature tenure-track jobs, where pretentious language thrives, dropped by about half from 2008 to 2019, yet during this period wokeness accelerated. Whether or not the public accepted the morality behind these ideas didn’t really matter. But even if some adjustments are made and some excesses are restrained, the bifurcation of language as a form of social control will remain entrenched for the foreseeable future, resulting in other unwanted excesses. Max Weber observed that before revolutionary situations, the ruling ideology must be delegitimized in the minds of its subjects. But he underestimated just how long a ruling ideology can govern even while being illegitimate (see Richard Sennett’s exploratory treatment of this question in Authority). Though we shouldn’t overestimate the stability of this paradigm, I suspect things will get worse before they get better.
But a final key takeaway from the foregoing discussion, and probably the most important, is that the theories themselves are not really “the enemy.” The conservative critique of “tenured radicals” is entirely misplaced, as is the common tendency to strike out against some theorist’s body of work not by reading and engaging with it but by taking stock of what other academics and low-grade theoryheads get from it. Critics of academia often misguidedly focus on ideology while ignoring the structural and material conditions that allow its worst aspects to thrive. The problem with elite discourse is not to be found in the ideas of Jacques Derrida, Michel Foucault, Gilles Deleuze, or even the hackier, less intelligent post-structuralists such as Alain Badiou, Judith Butler, Jacques Lacan, or Bruno Latour (and there are many, many more hacks, to be sure). It is not difficult at all to imagine a stable culture in which reasonable, level-headed people discuss such philosophical ideas in a relatively isolated setting, taking care to let their best ideas make an impact while preventing misprision or emotion-laden distortion by their potential followers.
The real problems occur with popularization and the simplifications of the ideas it necessitates, which allow the passions to run wild. American conservatives are liberal in nature, and they find the notion of elite exclusivity ipso facto offensive. But as we have seen, exclusivity emerges spontaneously on a level playing field, despite the best wishes of some of its participants. Rather than double down on the ideal of liberal inclusivity, critics of academia would do better to appreciate that liberalism and even democracy are both gradually losing their meaning, with no reason to suspect a reversal in the near future. For critics far on the outside, finding ways to create new elite hierarchies rooted in expression ought to be a serious matter of consideration, because the current patterns and tendencies will likely remain. Look around, and you’ll find that the process has already begun.