All-American Derrida

Kerwin Fjøl
79 min read · Jan 21, 2021
The father of deconstruction was a 100% American original

Note: I have never read any of Derrida’s work in the original French, and I don’t feel even the slightest bit of shame about it for reasons that will become clear if you care to read on. Nevertheless, this is a tainted paper, because I have read his work in translation. The ideal version of this essay would have been written by someone who never read any of his work at all, only the secondary literature. Oh, well. One cannot be perfect.

Introduction: The Head Trickster for the Postmodernist Movement

People have been declaring the death of deconstruction since the late 80s. That is, after all, when the excitement for it started to decline and the cult of Foucauldian power analysis began to take over — the same time in which postmodernity stopped being about having fun and playing games and started to be about emotionally incontinent women of color screaming about heteronormativity at their hapless white male professors. Yet the shadow that deconstruction’s inventor Jacques Derrida has cast over the humanities simply won’t go away. Its obstinacy ought to have been apparent in 2004, the year he died. Numerous mainstream outlets put out obituaries that both explained and showcased the strong opinions his philosophy of deconstruction had provoked throughout his career, but each one demonstrated little understanding of what any of it actually meant. The New York Times mostly quoted from his detractors, while The Economist, amid various other screw-ups, attributed the pun “logical phallusies” to him in an attempt to discredit him intellectually (he never made such a pun).

The same year, Stephen R.C. Hicks put out the book Explaining Postmodernism. It names Michel Foucault, Jacques Derrida, Jean-François Lyotard, and Richard Rorty as its four figureheads, emphasizing their similarities and downplaying their differences, leaving an altogether mistaken perception as to how politically motivated each thinker was (Foucault was far more political than Derrida, and Rorty’s mellowed-out, quasi-unitarian left-liberalism is downright compatible with the liberal pluralism of some of postmodernism’s most aggressive critics). That book, whatever its informational value, made an impact on the psychologist, professor, and viral internet sensation Jordan Peterson, who took from it that Derrida was the “head trickster for the postmodernist movement” and seemed especially bothered by Derrida’s (admittedly lame) coinage of the word “phallogocentrism,” which disapprovingly identifies the deep cultural association of reason and logic with masculinity. Claire Lehmann, editor of the centrist outlet Quillette, recently stated on social media, “The tragedy about poststructuralist theory is that it is a complete waste of time studying it, but unless one has studied it, one doesn’t really have a grasp on what we’re up against,” after which she cited Derrida as one of its foremost representatives. His name has been kept alive and well in the American university system, and people have continued to publish on him regularly since his death. Just last year, a new sympathetic biography of Derrida came out from Verso Books.

Nearly everything that needs to be said about Derrida has been said, by his detractors and apologists alike. Attempts to refute this or that aspect of his theory have been made over and over, but because Derrida’s theory often took the form of an elaborate performance that metatextually illustrates itself (that is, he likes to “show” the theory instead of “telling” it), anyone who has tried to corner him on some statement or misreading has only exposed himself as a brooding fuddy-duddy, a real no-fun-nik. This is something the aforementioned Richard Rorty figured out over forty years ago, and evidently, most didn’t get the message. Yet still, if we want to understand critical theory and its role in the university — have a grasp on what we’re up against, as Lehmann put it — Derrida is absolutely inescapable, though not in the way the critics tend to think.

With Derrida, people place too much emphasis on the “what” and the “who,” but never so much the “why” or “how.” His detractors have always had the tendency to focus on the figure of either Derrida or his theory but rarely the system they came to inhabit. This “theory/theorist-first” tendency holds for most accounts of postmodernism in general, wherein the American university assumes the role of a virgin maiden whose innocence has been defiled by some shadowy forest beast. Rather than asking “what exactly about the American university made it uniquely vulnerable to these theorists?” such accounts always assume a total lack of agency on the part of university intellectual life, instead opting to emphasize the alien quality, the foreignness of the theorists whose ideas have apparently ruined it. Allan Bloom, one of the major critics of continental philosophy and existentialism during the late 1980s, made a point of highlighting the Germanic character of what he saw as the most destructive modern philosophers, Heidegger and Nietzsche. Noam Chomsky, a major critic of postmodernism, has speculated in various interviews that the peculiarities of Parisian culture explain why so many of the poststructuralists who ruined the humanities were French. Frustratingly, it never occurs to anyone to ask him, “Well, if the Parisians are so backwards and confused, then why does anyone listen to them outside of France?”

This notion of Derrida and poststructuralism as quintessentially French thought goes both ways, because their supporters love the air of sophistication associated with Paris. And Derrida, for his part, embraced the role of the French outsider facing the hostile climate of Anglo-American philosophy for his whole career. In one study of Derrida’s rivalry with the American analytic philosopher John Searle, the author Jesús Navarro underscores the national character of the disagreement, constantly referring to Derrida as “the Frenchman” throughout the book, and insisting that his style is French — sometimes “very French” — in character. This is because Derrida’s essay attacking Searle, “Limited Inc.,” itself draws direct attention to the national character of their disagreement. Moreover, the phrase “French intellectualism” has become synonymous with the poststructuralist theory that Derrida was largely responsible for popularizing within the American university system, even though this style amounts to a mere fraction of French thought. When we think of French intellectualism, we don’t typically think of Henri Poincaré, Henri Bergson, Jacques Ellul, Bertrand de Jouvenel, or (God forbid) Alain de Benoist. No, we think of the goofballs. Jacques Bouveresse, one of the few French analytic philosophers, wrote an essay entitled “Why I Am So Very UnFrench,” reluctantly reinforcing the perception. And right up until his death, Derrida evidently felt the need to keep reminding everyone that he was not American, that he did not see himself as American, and that he was So Very French.

But the less-discussed aspect of Derrida is that his style had become exhausted of its potential in France by the time he took America by storm. The French intellectual milieu, once warm and receptive to his philosophy, had grown weary of his contributions and no longer wanted much to do with him. America, by contrast, greeted him with open arms, ready and willing to turn him into a superstar. Of course, being an Algerian Jew from France (and thus, as some have noted, doubly foreign) added a certain, ah, oh dear, how do you say, je ne sais quoi for his American peers who had been on the lookout for the next big thing. Yet few want to consider the possibility that Derrida’s thought and career, and even a large amount of poststructuralism by extension, weren’t really all that French to begin with.

Be advised that this essay makes no contribution to the metaphysics of nationhood, or anything like that. It only offers this humble proposition, if only to help the conservatives and centrists of the Anglophone world disabuse themselves of the myth to which they so tenaciously cling, viz. the myth of their beloved university raped by foreign scoundrels. Derrida did four decades of work in the American university system, navigating its vicissitudes like the championship surfer Kelly Slater riding a big, beautiful wave off the Malibu coast — which means, of course, that he was more successful in America than he ever could have been in France. And that is because Derrida’s philosophy was American to the core. It was just as American as apple pie, monster trucks, skateboards, breast implants, and Coca-Cola. As soon as Jacques Derrida embraced structuralism and allowed its internal logic to suffuse his work, little did he know that he was on the path to becoming a quintessentially American philosopher. And though most see him as an instrumental force in converting Americans to the French mode of thinking, it was really the Americans who converted him, for his success as a philosopher was only possible when the Americans first looked at him, saw themselves, and immediately fell in love.

This argument has been in some sense anticipated twice: first by Camille Paglia’s essay “Junk Bonds and Corporate Raiders” (1991), which accurately describes the economic exigencies of the academic system as the impetus behind French poststructuralism’s warm reception. And second, by François Cusset in his French Theory (2008), which analyzes the unique American interpretation of French poststructuralism, arguing that it was by no means the necessary or even correct reading of the theory itself. Both studies have their problems, however. In Paglia’s case, she sets up the French poststructuralists in opposition to some fabled authentic America that came together amid the zeitgeist of the 1960s. It consists of black church women clapping their hands and singing; smelly upper-middle-class Jewish guys with stained T-shirts playing the bongo drums; gay men fabulously prancing around in leather riding chaps; otherwise boring Irish Catholic girls refusing to shave their legs and getting into Buddhism; and a bunch of other non-WASPy things that have nothing whatsoever to do with America’s founding as an Anglo-Saxon commonwealth. I frankly have no time to dignify this sort of nonsense with a well-considered response. Cusset, for his part, is nothing like Paglia. Though he has written an informative and competent work, his problem is simply that he is too attached to the French theory he writes about. He has gone native, the poor guy! As someone who feels he must always bolster the value of the theory he discusses, he finds himself largely unable to look beyond the conceptual frameworks he has been tasked with historically contextualizing in order to perceive the material dynamics driving them forward, something Paglia does far better. About two-thirds of the way through his book, he starts profiling theorist after theorist in a manner that resembles a mail-order catalogue rather than an objective work of research. So this essay can be taken as the logical culmination of what those two, and surely others, have groped toward yet ultimately failed to apprehend.

I. Why the French Got Sick of Derrida and His Nonsense

Though Derrida is remembered fondly by a bunch of French intellectuals today, it is a plain fact that by the late 1960s, he was losing favor in France. It is also not terribly difficult to understand why. Even his most fervent supporters acknowledge that Derrida’s philosophical approach is parasitic by nature. He takes a text, finds some minor slippage, and then pushes its implications until the entirety of the author’s work, if not his whole oeuvre, has become undone — the author having been exposed as some garden-variety metaphysician, as naïve as your average street corner mystic. Michel Foucault’s reaction to Derrida’s style is a telling example of how Derrida lost favor. In 1963, Derrida wrote an essay on Foucault’s History of Madness. Though “deconstruction” had not yet been officially invented, his critique was still essentially a deconstruction. In this case, he seized upon a citation of Descartes at the beginning of the second chapter, and then used what he saw as Foucault’s strained reading to infer the futility of his entire project of writing a history “of madness,” i.e. a history from the perspective of the mad. At the time, Foucault (Derrida’s former teacher) didn’t seem to mind. According to Benoît Peeters’s biography of Jacques Derrida, Foucault wrote Derrida a letter reassuring him of his excellent work. Among other things, he said:

“I was impressed […] by the rectitude of your remarks that went, unerringly, to the heart of what I wanted to do, and beyond it. This relationship between the Cogito and madness is something that, without the least doubt, I treated too cavalierly in my thesis: via Bataille and Nietzsche, I came back to it slowly and by way of many detours. You have magisterially showed the right road to take: and you can understand why I owe you a profound debt of gratitude” (qtd. in Peeters, p. 40).

A mere nine years later, though, Foucault’s attitude had changed entirely. To the second edition of his History of Madness, he attached a lengthy appendix entitled “My Body, This Paper, This Fire,” in which he thoroughly dressed down not just Derrida’s critique but his whole philosophical approach:

“I would not say that it is a metaphysics, metaphysics itself, or its closure, that is hiding behind [Derrida’s] ‘textualisation’ of discursive practices. I would go much further: I would say that it is a historically well-determined little pedagogy, which manifests itself here in a very visible manner. A pedagogy which teaches the student that there is nothing outside the text, but that in it, in its interstices, in its blanks and silences, the reserve of the origin reigns; that it is never necessary to look beyond it, but that here, not in the words of course, but in words as crossings-out, in their lattice, what is said is ‘the meaning of being.’ A pedagogy that inversely gives to the voice of the masters that unlimited sovereignty that allows it indefinitely to re-say the text” (p. 573, emphasis in original).

So essentially, Derrida, while posing as a radical who threatened to upend the entirety of western metaphysics, was doing little more than medieval hermeneutics, finding hints of the Christian logos within pagan or otherwise heterodox texts. There is probably no stronger way to insult the ego of these sorts of philosophers than to describe them as faithful continuators of the Western tradition, so we can infer that Foucault was really out for blood. Since he published this in 1972, and Derrida had already planted one foot in the United States with a few warmly greeted conference appearances and guest seminars, one might surmise that Foucault was merely jealous of Derrida’s success. But at the same time, one also gets the sense that he had been waiting to say that for a good while.

After all, Derrida had established his whole shtick by that point, having deconstructed so many of his other friends, acquaintances, and influences in lengthy essays similar to the one in which he deconstructed Foucault (you can read some of them in the essay collection Writing and Difference, which came out in 1967, the same year as his two other foundational works, Of Grammatology and Voice and Phenomenon). Even before he had fully refined his philosophical justifications, Derrida had decided that this frustrating sort of pedantry would be his main mode of working. Various sociologists have written essays analyzing the cultural milieu that allowed French poststructuralism to take off, focusing mostly on the conditions that allowed intellectuals to gain their star status. But when reading Derrida’s correspondence with his peers, one senses something so obvious that it is seldom remarked upon, yet important all the same: the Parisian intellectuals were a tight-knit bunch. They formed a real society of letters oriented around the university, and appear to have been mutually supportive for the most part, at least until the intra-left wing political clashes of 1968 changed things. Because they were located in the same area and their communication had to be so intimate, it was probably not hard to take advantage of their good faith, as Derrida realized.

But ultimately, one can only endure so much. Just imagine some pipsqueak sending you a letter saying, in so many smarmy words, “Oh, hello, my friend. I sure am glad we are friends. Nice friend. Good friend. And by the way, friend, I just published an essay tearing your magnum opus a new asshole. Hope you don’t mind! Toodle-oo!” And then you talk to five other people the next day and learn that he did the same exact thing with them. Who in the world would want to put up with someone like that? Yet that is what they did. It is hard to say exactly why, but it took a good stretch of time before the French intellectuals turned against Derrida with open hostility. Finally, when the riots and protests of 1968 broke out, fracture lines began to emerge within the left, and some of Derrida’s greatest detractors apparently saw it as a ripe opportunity to push him out of their circle.

Derrida’s conservative critics today see him as an apologist of Marx and thus something of a crypto-communist, but to his credit, Derrida was not terribly partisan or dogmatic as a figure within the left. He did not issue attention-whoring political proclamations or pump his fists in tandem with convulsed mobs. Most would be surprised to learn that his unease with the political developments of the late 1960s into the early 1970s was what earned him, at least nominally, so much censure in France. Benoît Peeters’s biography illustrates the scene well. In 1969, the Communist philosopher Jean-Pierre Faye wrote an attack on the avant-garde literary journal Tel Quel, which had been so instrumental in promoting poststructuralism and deconstruction, and much of his attack implicated Jacques Derrida’s work above all. As time went on, criticisms from other Communists gradually mounted. Then, in 1971, Tel Quel formally broke all of its ties to the Communist party, turning toward Maoism and the Cultural Revolution. But Derrida did not go with them in this direction, and instead, he broke away from Tel Quel altogether, wanting no formal affiliation with either Maoism or the Communist party. The decision had to have angered its founder Philippe Sollers, since Derrida’s work was so often the unstated target of criticism even while Sollers would be the one receiving the brunt of the attacks. In 1972, Derrida’s friend Jean Ristat, a Communist, published an issue of the review journal Les Lettres françaises in homage to Derrida’s work, despite Derrida’s steadfast unwillingness to join the party officially. But this gesture, along with Derrida’s compliance, was enough to cause Tel Quel to publish an issue dedicated to attacking Derrida, their former friend. It was soon after this point that Michel Foucault decided to bash Derrida’s entire intellectual project, as mentioned above.

Though Derrida remained on good terms with Roland Barthes and Maurice Blanchot (two major contributors to Tel Quel), and he maintained friendships with others such as the young Jean-Luc Nancy, most of the big figures associated with what we now call “poststructuralism” turned against him, including Jean-François Lyotard and Jacques Lacan (the latter had had a previous unpleasant encounter with Derrida at a conference and eventually bashed him on the record a few years later in the mid-70s). It also did not help that Derrida hated the work of Foucault’s good friend, the up-and-coming Gilles Deleuze, calling his Anti-Oedipus “a very bad book” and “confused” in private company. Around this point, hostile questions started to surface at Derrida’s conference presentations, and cryptic political associations also started to emerge regarding his philosophical indebtedness to the Nazi Martin Heidegger (back then, the left was just as bothered by Heidegger as think-tank-funded conservative intellectuals are today). Derrida decided he would need to start over somewhere else, and America, which had greeted him so warmly before, seemed the best bet.

While this dryly recounted sequence of events appears to place political tension as the major reason for Derrida’s waning influence, others have seen the problem in Derrida himself. For his biography of Derrida, Benoît Peeters interviewed Jean-Marie Apostolidès, a Greek-French scholar, novelist, and playwright. No fan of Derrida and unashamed to admit it, Apostolidès gave an account that did not find its way into Peeters’s work, which is certainly not fawning yet still leans toward measured admiration. And perhaps Apostolidès’s account is unfair. All the same, he was quoted extensively in a later biography of René Girard (for some reason), and according to Apostolidès, Derrida

“was humiliated — belittled and passed over in France, probably because he was arrogant. Derrida’s obsession was success. He tried to place his spoon everywhere in America […] He had no success in France. He was a humiliated little Jew and he wanted revenge. He took to America as his field for revenge” (p. 142).

Many of Derrida’s acquaintances have adamantly stated that in person, Derrida was not arrogant at all; quite the opposite. But given the nature of Derrida’s philosophical style and its development over time, I suspect nevertheless that Apostolidès is onto something. Derrida’s reluctance to “take sides” politically may have proven to be the precipitating event in his rejection from the Parisian intellectual scene, but there is good reason to believe that politics served also as a convenient pretext. The parasitic approach to philosophy may have some real limitations when one needs to maintain friendships. Yet this reluctance to take sides is striking. As minor as it may seem, his even-handedness and hesitance to take action in that political moment when the correct action was unclear already brought him closer in temperament to the original WASPs of the American Ivy League than to his teachers, Michel Foucault or Louis Althusser, or any of his rowdy Parisian colleagues. So while he may have been arrogant professionally, his approach to the political instead suggested something like the American statesman Henry Adams’s description of his Harvard University colleagues in the class of 1858. This duality evidently frustrated the French, but in America it would contribute to much of his mystique.

II. The American University and its Structural Exigencies

Derrida’s difficulties with the Parisian scene intensified in the early 1970s, but his first major breakthrough in America had already taken place in 1966. It was that appearance that really blew everyone’s mind: the presentation of “Structure, Sign, and Play in the Discourse of the Human Sciences” at Johns Hopkins University, in which Derrida deconstructed Claude Lévi-Strauss for the first but not the last time. Then in 1968 and 1971, he made two extended stays there as a visiting lecturer for some seminars he helped prepare. At that point, he was presenting all of his work in French. He took on a visiting professorship at Yale in 1975, well after he had alienated his friends in Paris, and returned there frequently. And then, in 1986, at the height of deconstruction’s popularity, he took on a teaching position at UC Irvine, made a point of lecturing entirely in English, and continued to teach there part-time until his death.

In 1987, right after Derrida took his new job at Irvine and right before the academic world was deeply scandalized by the revelation that his good friend and fellow deconstructionist Paul de Man had been (kinda, sorta, not really) a raving Nazi and unhinged anti-Semite once upon a time, the sociologist Michèle Lamont wrote an article entitled “How to Become a Dominant French Philosopher: The Case of Jacques Derrida,” analyzing the strategy Derrida used to conquer the American academic landscape. The basic difference between France and America, Lamont notes, is that in France, the intellectual-as-celebrity phenomenon was fueled by more traditional, relatively centralized cultural media, whereas in America, the ambitious intellectual would have to navigate through niche audiences via longstanding institutions and journals, and so Derrida made his mark in literary criticism via English and comparative literature departments. This is all basically correct, but the dryness and tepidity of Lamont’s argumentation fail, perhaps by necessity, to answer the essential questions: why the English department? And why was there such excitement for Derrida anyway? While Lamont is correct to assume there is nothing intrinsic to the theory of deconstruction itself that earned it its meteoric rise, she only posits the existence of structured, interrelated cultural and institutional systems that allowed for its explosion. She doesn’t go one step further and probe these systems to reveal the unique material and psychological dynamics that created not just a nice fit, but rather an intense thirst for this kind of thought.

Perhaps the half-formed nature of Lamont’s kind of analysis is why Benoît Peeters felt obligated to cite the essay and qualify it. He downplays the impression it gives of Derrida’s cutthroat careerist instincts, emphasizing instead Derrida’s good fortune — the unique circumstances, appearing as if by dumb luck, that enabled deconstruction to thrive. I agree with that qualification and will thus elaborate on those circumstances. But it must be pointed out: the dumb luck that enabled his great success in America does nothing to hinder our revisionist mythology of Derrida the All-American. Rather, it only bolsters it, and a mere glance at America’s iconography should illustrate why. The American confidence man, the grifter, the traveling carnie, the robber-baron-in-miniature selling you the snake oil upon which he founds a dynasty — is it ever owed entirely to his cunning that he achieves such success? Of course not. It is precisely the ambiguity that defines his symbolic significance, the lingering mystery as to whether he ever really had a clue as to what he was doing or simply toiled and tinkered before finally striking gold. In America, John D. Rockefeller is a great businessman, and so too is Forrest Gump. In the Major Arcana of the Tarot, the Magician is placed next to the Fool.

To assess the serendipity of the situation, we must begin at the beginning. In 1957, the year Jacques Derrida made his first professional visit to America, got married, and was starting to think about producing a translation of The Origin of Geometry by Edmund Husserl — the philosopher on whom he had cut his teeth in the academic world — the Soviet Union launched the satellite Sputnik into Earth orbit. This event, taking place at a time when the United States ought to have been the unquestionably supreme country following World War II, prompted national anxiety in America about the strength of its scientific research institutions. While America had already been expanding its research through funding initiatives following the war, the Sputnik launch eliminated all ambiguity about where the country’s finances ought to go. The baby boomers were about to become university students, which would mean a boom in enrollment, and the United States decided to take advantage of this by going all-in on research. Sputnik accelerated the demand for original scientific findings so that the United States could out-compete the Soviets, and the university system would be the way to get those findings. All parts of the university system grew as a result of this increased funding, and private institutions, for their part, accompanied the expansion. Roger L. Geiger, the pre-eminent authority on the history of the American university, has called the 1960s its “golden age.” When Derrida gave his presentation on Claude Lévi-Strauss at Johns Hopkins in 1966, he was doing so at the peak of this golden age.

Even at the crest of wealth in higher education, however, not everything was hunky-dory. In the humanities, questions began to emerge about what their output ought to look like. The post-Sputnik boom in higher education was primarily for the purpose of research, and when you consider how research works in the natural sciences, the process is fairly straightforward. A scientist reads up on the work in his field, finds an opening wherein he can build upon prior findings, comes up with a hypothesis, tests it through an experiment according to a rigorously defined standard method, and if it goes as predicted, he publishes the results. Since the natural world is an inexhaustible reserve of unsolved mysteries, and the study of each aspect therein is virtually self-justifying, this process can go on for quite some time without any problems. Edmund Husserl may have been right to argue back in the 30s that there is an epistemological crisis in the natural sciences, but if so, it wasn’t dire enough to impede their institutionalization and growth. But things are not so straightforward in the humanities. The concerns of the humanities do not resemble those of the natural sciences, and yet the humanities were swept along with them and held to comparable expectations for findings.

For those fields in the humanities concerned with man’s historical accomplishments (political, cultural, and so on), the accumulated record of those accomplishments would be treated as the equivalent of the natural world: an endless source of novel research discoveries. Yet in this case, not every research “finding” justifies itself so obviously. In various fields of cultural history, there are canons suggesting which texts and artifacts deserve special attention, and one soon realizes that it is possible simply to run out of interesting things to say about them. The canon that Westerners inherited largely gained its legitimacy from previous debates on aesthetics that grew out of the Enlightenment. But none of those debates between the bourgeois intellectuals who so enthusiastically participated in them were ever definitively resolved. Aesthetics was simply a friendly pastime for good-natured professionals in various societies of the 18th and 19th centuries who typically did other things to make a living. Some worked as merchants; others, as lawyers; others still, as government bureaucrats. So, when one considers that the United States was the first nation ever to achieve a system of mass, expansive higher education, and thus the professionalization of dabbling in poetry, or art history, or whatever, all of those unresolved problems pertaining to the humanities would attain a dire new importance. For each new scholarly claim about a work within some cultural canon, it would thus be fair to ask what’s at stake in order to evaluate its significance. And if one wished to break from the canon and examine something outside of it, there would need to be some additional justification.

Before the humanities began receiving more money for research than they apparently knew what to do with, these problems were not so immediately self-evident. But during the golden age of research, they quickly exposed themselves, and fierce debates erupted about how to approach the study of culture academically. These debates took place in the context of student rebellions from the New Left, which targeted research institutions for applying their scientific findings to the development of weapons used in the Vietnam War, leaving a profound impact on the culture of the college campus. The New Left failed miserably at preventing the business of the university system from continuing as usual, but it was quite successful in advancing a confrontational moral agenda that would impact the university’s internal self-understanding for decades to come. The humanities did not fail to change as a result, as more and more politically active professors climbed aboard their ranks. In 1969, members of the MLA (Modern Language Association) declared at its annual convention that the MLA would encourage an antagonistic stance toward capitalism as well as widespread examination of latent social biases in historical literature. As the English professor Margery Sabin put it, “Confrontation became established as advanced intellectual style in 1969.”

So when Derrida was giving deconstruction a test run in the mid-60s, the warm welcome was itself the precipitate of both the anxieties over how to professionalize and scale up the study of culture and the growing ideological unrest of the New Left. On the one hand, Derrida's assertion that even the quasi-scientific veneer of anthropological structuralism still rested upon pseudo-scientific metaphysics reassured American professors that their interests were no less valid than those of their colleagues in the hard sciences. Moreover, his proposed solution of using "play" to navigate the tensions within anthropological structuralism, and by extension the broader study of culture, stoked the professors' imaginations to consider what might be possible for publication. But, on the other hand, the warm response also reflected a shift in moral priorities away from those of the general population and toward a certain kind of confrontational personality that would become increasingly dominant in the decades to come.

Without a doubt, the chic nihilism implied in a word like "deconstruction" added to all the excitement surrounding Derrida. But Derrida's conservative critics go a step too far when they connect the program of deconstruction to any distinct left-wing ideology such as Marxism or anarchism. No one has to be reminded that the 1960s were a time of left-wing ideological exuberance, some of it whimsical and utopian, some of it violent and chaotic. But the post-war economic boom was an essential condition for the inflammation of these tendencies, and the ideas that won out (such as deconstruction) did so because they fit perfectly within the machinery of liberal society — boring, procedural, milquetoast, limp-wristed liberal society. In a recent short piece for First Things, Mark Bauerlein correctly states that deconstruction wasn't essentially political at all, recalling a claim by Derrida himself, made much later during a Q&A, that "there can be a deconstruction of the left and a deconstruction of the right." But the provocation of the word certainly made it seem politically left-wing in some vague sense, and that was enough to increase its notoriety and success.

There is also something to the native disposition of academics that lurks behind all of their radical posturing, and it precedes the political, or even the ideological. Nietzsche took great pains to perceive it in the late 19th century, while the uneducated perceive it effortlessly. Academically inclined individuals lead sedentary, often lonely lives, and with the waning influence of a church or religious institution to keep their emotional excesses at bay, some of the particularly alienated among their ranks will feel possessed by the urge to lash out against the happy and well-adjusted people around them. When institutional crises erupt, such people feel emboldened to seek out attention for themselves by expressing their hostility toward not just the beautiful and healthy, but beauty and health as such. Yet, knowing that their resentment would be rather easy to detect if expressed straightforwardly, such people camouflage it with fancy words and circuitous rhetoric. Their disposition may find its expression through the vehicle of ideology or politics, but its essence is better understood through mundane, physically grounded considerations: does this person get enough Vitamin D? Did she call her mother in the last couple of months? Does she have a husband? A boyfriend? A dog? Does she at least have a dog? And how frequently does she walk it? Does she drink enough water daily? Does she drink alcohol? How about sports? Does she play a sport? Or does she just "jazzercise" once every three months in a futile attempt to feel physically active?

There is, moreover, a paradox at the heart of these academically inclined radicals. While they do want to watch the world burn around them, they also simultaneously want everyone in that world to love them. It just can't be helped. Off they go, putting out one long treatise after another, releasing them into the competitive arena of ideas and abstractions like soldiers lobbing grenades onto the battlefield. Watch as they rail against a world whose indifference to them feels burdensome and oppressive. And yet, while they indirectly extend their scorn toward all of those indifferent bystanders and civilians who are unfortunate enough to enter their field of perception, they still yearn to direct such people's attention toward themselves, eliciting from them unending praise and flattery. Nothing would satisfy the embittered intellectual more than to be treated like a kitten: to have a crowd of attractive people gather around him, petting him gently and speaking to him in baby-language as he flops about the floor and purrs and purrs in mirthful gratitude, wiggling his little hands and feet.

So all of this is to say: in the 1960s, the ideal solution for the ideological crisis in the humanities was to contain the excrescences of these pampered would-be radicals through a simple bait-and-switch. Allow them to feel like they are blowing up the earth as they express themselves in a manner impenetrable to the rest of society, but then give them what they really want: a pat on the head for their efforts. The true purpose of the institutionalized humanities is often mystifying, because one can always ask, "Why did they need to be institutionalized in the first place? Are the humanities not what we humans just do spontaneously?" But the intense debates regarding what the humanities ought to be about wound up creating a new purpose for them — just not the purpose any of the participants in these debates had in mind. That purpose was not to spread knowledge for its own sake, or to encourage responsible behavior within society, or to think critically to make the world a better place, or anything silly like this. It was to nullify the potential harm of the perennially resentful intellectual.

The university would achieve this nullification by dividing the intellectuals' various potential fields of interest into discrete units, which would work effectively as little cages, each containing a hamster wheel connected to a rotor, which would spin a power generator. The institution would then reassure the intellectual that he is indeed quite radical and offer him a career in one of these cages as an incentive to run on the hamster wheel for the rest of his life, either not seeing the mechanism for what it is or being satisfied enough not to care. And the energy generated from the rotation of all these hamster wheels would be enough to fuel the system! Of course, not everyone drawn to the humanities wanted to "stick it" to "the man," but enough of them did that the pretense of radicalism had to be assimilated into the humanities' constitutive structure, and so deconstruction served this purpose starting in the mid-70s as more and more people became career-oriented.

III. The Literature Departments and the Appeal of Deconstruction

This bait-and-switch that characterized Derrida's work will be discussed more in a bit, but for now, it must be established that in order for it to thrive, the humanities first needed to take a beating. And that's what happened; the golden age of research funding did not last. By the beginning of the 1970s, it was already apparent that the high hopes of endless enrollment and expansion would have to be reconsidered, and by the mid-70s there was some uncertainty regarding the future of the research university system as a whole, because while it was still continuing to grow, it was doing so below prior expectations. The humanities were disproportionately impacted. The 1970s was a time of overall slowdown for university growth, which led in particular to reductions in students pursuing humanities majors as well as stagnancy in the job market for new faculty. In 1974, one year before Derrida took his first visiting professorship in the United States, new enrollments for bachelor's degrees started to plummet, and this relatively low number would persist for about a decade. The decline owed largely to lowered expectations for the career opportunities available to college graduates, prompting a gradual shift toward vocationalism in student attitudes, which meant choosing more profitable majors. As Geiger notes, the percentage of freshmen considering it "very important" to be "well off financially" was in the low 40s at the beginning of the 1970s, rose to 50% by 1975, then shot up to 71% by 1985 (2006, p. 66). But the decline surely also owed plenty to the new attitude of the humanities — its feminism, its pseudo-Marxism, and, yes, increasingly after some time, its deconstruction. Derrida's Voice and Phenomenon was first translated into English in 1973 (as Speech and Phenomena), while Of Grammatology was first translated into English in 1976, and these translations were how most American academics would read his work.

At this point, Derrida’s philosophy, for most of his followers, became synonymous with deconstruction itself, which would become “a literary exercise,” not a strictly philosophical discipline. At any rate, that is how it was treated by the American academic establishment. When Derrida was visiting American universities in the 1960s, he was doing so as a philosopher. But by the 1970s, it was increasingly clear that he would not be able to hack it as one. In philosophy, a field in which it does not suffice merely to work as an intellectual historian, the threat of a potential legitimacy crisis has always lingered in the background more prominently than in other humanities subjects. The philosophers must therefore routinely take care to ward off this threat. So as higher education expanded in the Anglophone world while the job market became increasingly competitive, in order for new philosophical claims to be made, the requirements for clarity in each statement (and in the definitions for the words therein) became rather stringent. Plus, the questions a philosopher could ask and answer for publication became increasingly rarefied to the point where, typically, no one with prior training would even think to ask them. Finally, expectations tightened for the philosopher to show competence in formal logic. In such a field, Derrida never could have succeeded. His disposition was too aloof; his attitude too playful; his contributions too negative; his terminology too imprecise. So his philosophy transformed, as if by magic, into literary theory, while philosophy actually wound up being the only department in the humanities actively hostile to his work.

The tightening of standards that impacted philosophy accompanied similar stringencies in other departments. Publishing one's scholarship became increasingly important both for getting a job and for securing tenure. To meet this expectation, each field in the humanities would drift off into its own niche territory as the work became more voluminous, each department with its own set of obscure standards to determine the quality of new scholarship. And even within each department, one's work would have to become increasingly compartmentalized. Look at the art historians of the early twentieth century, for instance, and you will notice that they could publish on a wide range of time periods. As the decades went on, that became increasingly difficult. Consequently, the departments would become increasingly noncommunicative with one another, the communication among the divisions within each department would grow increasingly tense, and the output would grow altogether more distant from the interests of the average person. The 1970s was the decade when scholarship on literature began to be produced from various new theoretical vantage points. All the while, Kenneth Burke's metaphor from 1941 depicting scholarship as an unending conversation at a parlor became increasingly favored, not merely as a description of how scholarly endeavors tend to work under undisturbed conditions but as the prescription for how the institutionalization of scholarship must proceed. Ubiquitous as the metaphor may be, it rarely dawns on anyone to wonder, "If the participants in this parlor conversation have to be paid to keep talking, how interesting could the subject matter possibly be in the first place?" The very absence of such a question is a testament to the success of America's mass institutionalization of intellectual life.

For English departments before the 1960s, the main working theory to meet the demands of America's institutionalization of the study of literature had been New Criticism, a sort of formalism that assessed the text alone, shorn of all other considerations, to determine its literary value. We will say more about New Criticism later. For now, it will suffice to say that it did not last as a basis for scholarship because it relied upon naïve aesthetic assumptions that could easily be questioned, and it eschewed important considerations such as historical context and authorial biography. It was also devised as a way to justify the pre-existing literary canon while selecting newer entries (largely the modernists, such as Pound, Eliot, and Joyce). It was inherently limited, and so it would suit neither the growth of the university system that was occurring throughout the 1960s nor the ideological demands of the New Left. Once researchers felt freer to dismiss New Criticism during that decade, another theoretical approach emerged in historicism, the study of the surrounding historical circumstances of a text. Then psychoanalytic criticism came along. And then, when the pressure increased during the 70s, a whole bunch of theories exploded onto the scene, with deconstruction among the most fashionable.

Derrida’s elevation could not have happened without the influence of Paul de Man and J. Hillis Miller, two Yale literature professors who became the most well-known American defenders and advocates of deconstruction. Their application of Derrida to literary criticism would later come under heavy fire from the small handful of career philosophers writing on Derrida, particularly Rodolphe Gasché, who felt that using deconstruction as a tool of literary criticism undermined its radical philosophical implications. But this critique only came along when the practice was already losing some steam. In its formative days, deconstruction was so attractive because it offered a way to produce new scholarship on just about any text the scholar could think of, so long as she used that text to buttress Derrida’s underlying philosophy in some way, however perfunctorily. With deconstruction, a literary scholar would never actually make a new philosophical argument. She would instead use the corpus of world literature as a store of evidence to show the validity of Derrida’s work, “proving” it over and over with a new text each time. When Jonathan Culler’s On Deconstruction came out in 1983, the approach had been fully proceduralized, as he provided a readable recapitulation of deconstruction as a reversal of binary hierarchies found within the text (like man vs. woman, speaking vs. writing, etc.), and this reversal would be done not to create a new hierarchy but rather to expose the contrivance of the distinction that supposedly separates the two poles. So, basically, you find some binary oppositions and screw around with them to prove that they’re phony. And all the while, the deconstructor would disclaim any personal agency in this sort of maneuvering, because according to Derrida’s philosophy, one does not singlehandedly deconstruct a text. 
The deconstruction is not done by any one writer — which is to say, there is no “deconstructor” (I’m still going to use the word anyway) — because the work deconstructs itself.

Now, of course, since literature does not propose to accomplish what philosophy does, one approach to deconstruction would involve deconstructing not the work of literature per se, but rather some ideology, tradition, or "discourse" to which it might be attached. The work of literature, in this arrangement, would act as the locus of contradictions that expose the artifice of its discursive background. This method of deconstruction could be relatively easy to pull off, especially for a grad student putting together some slapdash MA thesis. The literary critic wouldn't need to be particularly careful with the text as a standalone document, since one could play a bit fast and loose with defining its textual parameters. As long as you copied some of the techniques Derrida used to deconstruct stuff, then applied them to your stuff, this would be good enough.

Alternatively, for many literary critics, relying on Derrida meant making an argument that the author of some literary work was keenly aware of the same philosophical problems that Derrida had uncovered, even if that author predated him by hundreds of years. This sort of scholarship would usually involve establishing that the poet was secretly writing a poem about the process of writing a poem, or the novelist was secretly writing a novel about the process of writing a novel, or the playwright was secretly writing a play about the process of writing a play, and the scholar could “prove” it by searching for moments of self-reflexivity in the writing — that is, moments where the writing seems to acknowledge its own contrivance as a work of writing. As curious as this approach might sound, plenty of ink was spilled over a whole bunch of books by so many writers who had identified that basic strategy and decided to implement it.

My favorite example of this approach is a book by R. Howard Bloch called The Scandal of the Fabliaux (1986), about a style of humorous, vulgar, often sexually perverted French poems from the 12th–13th centuries. His argument is that the real "scandal" of the fabliaux wasn't that they're dirty and perverted, but rather that they're really about the instability of language in determining meaning. So, Bloch interprets various images, such as clothes or corpses, as linguistic signs that create confusion or other interpretive problems. In one passage, he discusses the anonymous poem "The Piece of Shit" (La Crote), about a man whose wife hands him a turd she produced and tells him to guess what it is. Bloch points out that feces are, technically speaking, food divested of its nutritional value, which corresponds to the notion of a signifier divested of any connection to its proper referent. The piece of shit is a "dead letter," so to speak, without any connection to its assumed meaning, i.e. the spirit which quickens the text (as in 2 Corinthians 3:6). Ah… yes. Of course. Now, one might ask: does this claim have any historical legitimacy at all? Was the author of "The Piece of Shit" aware of such a possible interpretive association? Would the audience have been? Could they have been? If a piece of shit deconstructs itself in the forest, and no one is there to read it, does it make a smell? Well, it really doesn't matter; what matters is that this argument fits with Derrida's theory, and the critic was clever to have made such a metaphoric association, and that's all that anybody cares about.

So despite its appearance of being the great obliteration of western philosophy, deconstruction was instead that bait-and-switch alluded to at the end of the last section. A young graduate student might get into deconstruction, thinking, "I'm going to do the yippies proud! Maybe Bernardine Dohrn from the Weather Underground will think I'm cool! Far out, man!" but as soon as the work began, she would find herself hooked onto an altogether different enterprise, even while thinking herself a real badass all the while. And of course, once it became clear that there is nothing even remotely subversive about this sort of work, the scholar's blood would have cooled down enough for it not to matter anymore. On the back of the 1997 paperback edition of Of Grammatology (trans. Gayatri Spivak), a blurb by one Roger Poole says of Derrida's work, "A handy arsenal of deconstructive tools are to be found in its pages, and the technique, once learnt, is as simple, and as destructive, as leaving a bomb in a brown paper bag outside (or inside) a pub." Radical, dude. This description is most definitely part of the bait-and-switch. It's pure fantasy — about as representative of the product as the pictures of McDonald's hamburgers in the advertisements. Just open the book, consult its pages, and one will find nothing so chaotic as what Poole describes.

Of course, Derrida was invoked at times for political reasons, particularly as the 1980s progressed. Jonathan Culler's aforementioned work was instrumental in drawing attention to the possible uses of deconstruction in the context of feminism (though the feminists, too, would eventually stop caring about Derrida). And Derrida himself would make various ambiguous statements as to the political utility of his work, allowing people to see in that possibility what they wanted. But little in Derrida's writing suggests the outright iconoclasm these ideologues saw in him. This statement requires some explanation. Derrida's critique is always just as open to the same criticisms it levels against all other works. At best, it might allow the reader to psychically neutralize a text's powers of persuasion — not a bad thing, until one realizes that it can do the same to just about any text, including the very text that once seemed like a weapon. This means the reader, should she pursue the implications of deconstruction, will eventually be brought back to square one, recognizing that "deconstruction" can be applied so diffusely as to be harmless. Nothing, ultimately, is at stake. In Explaining Deconstruction, Kathleen Wheeler concedes that Derrida did not ultimately escape "logocentric" writing himself, even though this was the kind of writing deconstruction is meant to target. Unsurprisingly, she doesn't seem to mind.

This aspect of deconstruction is actually quite obvious to the person hearing about it for the first time. The stoned third-year English student in his "Introduction to Literary Theory" class raises his hand and asks the professor, "Wait a minute, like… if you can deconstruct all this stuff, then can't you also just, like… deconstruct… the deconstruction?" And he finds himself quite confused when his teacher, rather than acknowledging the futility of the whole enterprise, instead widens her eyes enthusiastically and replies with a nod, "Yes, that's right! And you could deconstruct the deconstruction of the deconstruction as well!" The student then thinks to himself, "what the fuck," before looking out the window, distracted by a bird that just flew by. A day later, he has forgotten the exchange took place. In fact, just about everybody forgets this exchange, including the most attentive students, who think to themselves, "Gee, this is a question I will need to revisit later," only to be swept up in the cognitive demands of deconstruction as a literary discipline. The mental exertion that goes into doing it proves so distracting that the diehard deconstructionist can go her whole life without once feeling compelled to return to the question, "Why does any of this stuff matter, exactly?"

Now, if so many texts are equally susceptible to being deconstructed, can Derrida's technique erode the legitimacy of any single one above all others? Not really, because deconstruction is meant to be a never-ending process, and so any one text is just one text of indeterminate importance; it's whatever. But, according to the philosophy, this means deconstruction cannot radically equalize all things, either, as with dialectical monism or, I don't know, Communism or something like this. Even its reversals of binary hierarchies can only take place within a bounded context, as they are never allowed to produce a synthesis or negotiation and actually go somewhere, progressing toward something definitive. Deconstruction is thus not even allowed a final moment of victory in which it has successfully salted the earth, forcing all its inhabitants to take every book from that point onward with one grain each. This lack of finality one way or the other robs its critics of the ability to impute to deconstruction an apocalyptic agenda, and it robs its politically zealous followers of the ability to pooh-pooh some text or "discourse" in order to enact social change — at least with any real philosophical justification. When a text is deconstructed, it is both affirmed and denied, and there could be no other way.

But if it seems that deconstruction can make one text look uniquely bad, as it evidently did to many of Derrida's acolytes, that is because the true power of deconstruction is pragmatic in nature, and this pragmatic power lies at the caprice of the deconstructor. Derrida's philosophy holds that no single point of departure for analysis can ever be justified. But of course, one can't just accept that for what it is and find something else to do, like fly a kite, or learn to crochet. One must go ahead and choose somewhere all the same. Thus, wherever the choice is made, the object of analysis in that direction will seem to take on a new level of importance simply by dint of the fact that it was chosen. In 1991, Dinesh D'Souza, in his book Illiberal Education, attempted to show that deconstruction is used one-sidedly as a weapon to attack the legitimacy of a text when he said, "Deconstructionists treat some works with uncharacteristic respect, and their authority is left unchallenged. Marx, for instance, never seems to be deconstructed, neither does Foucault" (p. 182). Well, Derrida had already deconstructed Foucault's History of Madness long before 1991, and he did so in a way that effectively delegitimizes the book's stated purpose. Then, just two years after D'Souza said that, Derrida deconstructed Marx in Specters of Marx while extending to him the highest praise and compliments. So, rather than a radical punk rockin' anarchy bomb used to blow up the corporate narc pub run by fascists, it is perhaps better to say that deconstruction works like a magnifying glass. Being a device of contemplation, the glass reveals previously unnoticed or unseen details in whatever the viewer holds it up against. But if he so chooses, the viewer can tilt the glass just a bit and concentrate the sunlight upon the object so acutely that it slowly burns to a crisp.
Just so, in deconstruction, it is up to the verbal stylist if she wants to delegitimize the text or playfully redeem it throughout the course of writing about it, her words having the power to tilt the lens or keep it steady.

But here, the similarities end, and a qualification is in order: we are only playing with words. There is no real burn or "violence." Few people would bother to look through the glass anyway, and the only ones who can understand the vision are other low-risk academics — academics who fantasize about revolt and uprising but nevertheless prove unwilling to endure the burden of martyrdom, opting for cushy careers instead. So regardless of what the deconstructor wants, the deconstructed text remains in circulation and continues to live rent-free in the heads of intellectuals. Whether they choose to discuss it with nice words or mean words, they are still talking about the text. And, as the American business maxim goes, no publicity is bad publicity.

But for the diligent Derridean scholar, such an act of "violence" was usually not the goal anyway. Many of them would deconstruct texts within the literary canon because publishing on the literary canon was (and perhaps surprisingly still is) the easiest way for literature students to secure employment, and as a bonus, they actually would come to enjoy the material. Moreover, the scholarship wouldn't constitute mere mental taxation. Deconstruction, at its absolute best, meant the joy of thoughtfully untangling its puzzles and trickery as well as producing one's own for the next person to untangle. That deconstruction had to focus so closely on the expressive modes and strategies of its target while remaining constantly mindful of its own mode of expression meant that the actual deconstructions would often be riddled with paradoxes and ironies that practically summon themselves. So for plenty of Derridean scholars, the long process of combing through these tangled writings would have a soul-cleansing effect and prove to be its own reward. When a self-described "theory-head" — let's just say it's an aging ex-hippie professor in her 70s — tells her students that she "gets high on theory" as a way of enticing them to study it, undoubtedly she is being sincere, attempting to describe to them a sort of satisfaction that lies beyond the mundane realm of experience. In the 87th exercise (verse 111) of the Vijñāna-bhairava-tantra, an ancient book of spiritual practices for Hindu Shaivites, the adept is instructed to spin himself around and around in a circle and then fall to the ground while very dizzy. At that moment, if the exercise is performed with the right spiritual readiness and intent, he will achieve a state of awareness of the all-pervading divine consciousness. The very next verse says that overcoming an intellectual task of great difficulty can create the exact same effect.

And difficult it could be. During its heyday, if a scholar really wanted to deconstruct something well, she would need to be capable of A) demonstrating at least some erudition in the history of philosophy, B) flaunting an understanding of more than just one or two languages, C) engaging in counterintuitive lateral thinking, D) actually knowing something about the literature she's discussing, and perhaps E) making obnoxious postmodern puns by manipulating a word's morphemic structure with parentheses, or playing on its etymology to make a point relevant to the argument. All this, plus showing mastery over Derrida's notoriously difficult prose. So, before it meant revolutionary political chaos, deconstruction at its core meant job security, because it functioned as a rough surrogate for both verbal intelligence and conscientiousness. Deconstruction was primarily a tool of business in a newly competitive field. In the parlor of the never-ending scholarly conversation, some of the occupants had carved out for themselves not more contributive discussion but, rather, a parlor game.

Being influenced by Marx, and thus more sensitive to the material realities of the university system, Terry Eagleton, just when deconstruction was getting big, looked upon it with greater perception than Derrida's conservative critics have managed even up to now. Eagleton, in his Literary Theory, calls Anglo-American deconstruction "a power-game" with the same fundamental pattern as "orthodox academic competition," and he associates it with "liberal skepticism." He also acknowledges that Derrida's philosophy is not nihilistic and does not deny the existence of meaning or reality. He even attributes to Derrida the view that some of the American reception of his work effects an "institutional closure" that serves the dominant interests of the American political economy. But of course, Eagleton's conclusion from all this is that the rising embrace of more aggressively political poststructuralists like Michel Foucault is an encouraging development. That's sort of funny. Go watch a viral video of the affluent cult-like students of Evergreen State College screaming about white supremacy during their notorious meltdown, or that video of the crowd of Yale girls berating their professor while half of them snap their fingers in the background as if they're watching a beatnik poetry recital. Go do that, and then ask yourself: was Derrida really so bad when he was on top?

What Eagleton failed to see is that Foucault’s newfound dominance in academia did not ultimately amount to a strong-armed critique of American imperialism, capitalism, or liberalism — Foucault’s defenders will argue that most of today’s campus radicalism is just a bastardization of Foucault’s work — nor did it, for that matter, leave any kind of net positive impact on political affairs from practically any vantage point save that of a university professor. On the contrary, it reflected a newfound anger in the students and professors. But rather than put this anger to productive use, Foucault’s ideas were absorbed within liberalism’s own internal machinery, and the flames of radicalism that his writings would eventually play some role in stoking have since been met with enthusiastic approval by both the managerial class and the economic elites who run the world’s most profitable corporations.

Derrida’s work, too, wound up assisting in the business dimensions of the university system, as I’ve shown. But unlike with Foucault and others, the effect of Derrida’s writing was that it did comparatively little harm. While true that various political activists claimed (and to some extent still claim) to be influenced by Derrida, such influence always had to be buttressed by some other French theorist to make it stick. The main downside to Derrida was that the radical bait-and-switch he represented proved too obvious to the academic soldiers of ressentiment who really did want their books to be weapons and their words, violence. And so it was their hunger for vengeance that ultimately won out, prompting a shift in moral priorities for the humanities once again in the late 1980s as the university system’s overall financial situation worsened, administrations became increasingly bloated, tuition costs had to be raised, and — this is just a tad important — a new generation of affirmative action beneficiaries, particularly women, proved the determining factor in bailing out the universities, thus allowing for more and more expansion and bloat. The right-wing critic Niccolo Soldo calls his publication brand “Fisted by Foucault,” alluding to both Foucault’s status as the number one cited writer in modern scholarship, as well as Foucault’s real-life penchant for anal fisting. But if it’s true that the university system got fisted by Foucault in some spiritual sense, then Derrida was the one who supplied the lube. If deconstruction had somehow managed to stay on top, the scholarship would have continued to be a blissfully aimless pageant of cerebral eggheads performing daunting feats of abstraction for one another. And perhaps you’ve noticed this already, so I’ll just put it gently: the soldiers of wokeness are not exactly cerebral eggheads.

IV. The Underlying Americanness of Deconstruction

Back in 1984, Derrida said, "Were I not so frequently associated with this adventure of deconstruction, I would risk, with a smile, the following hypothesis: America is deconstruction" (emphasis in original). We should not fail to appreciate his effective use of apophasis here. It's not that he's really making that point. He would be making it, except that he, a Frenchman, is associated with deconstruction, and besides, such words and concepts are characterized by instability anyways, so it simply cannot be made. But this point would not go away. After some time had passed, he became the subject of the documentary Derrida (2002), and in it, he seems aware that the Americanization of his philosophy has become just about finalized. He also betrays some mounting anxiety about it, though it is not so obvious in the main feature itself. Sure, he comments on America here and there, and he jokes a bit about having become Americanized, but there is nothing too striking. The same, however, cannot be said of the bonus interview footage in the special features. At one point, he goes on at length criticizing one of the interviewers for interviewing him in an "American" style because she had asked him a few (admittedly stupid) open-ended questions about love earlier on. And elsewhere, he returns to the question of America and himself:

“This film is an American film, which means that some images in the states will be granted some privilege over others. And my life in the states is just a small part of my life — it came late, it’s just a few weeks a year — so I have a feeling that the American image will erase a lot of other things, quantitatively and qualitatively. And then finally, the virtual audience of this film will have the image of someone who is — not American, of course (it’s obvious that I’m not American) — but who spends a lot of his life in the States […] This film will in any case survive me.”

Well, yes, that’s exactly right, monsieur — and he would know! It is an observation perfectly in line with his own philosophy. For after all, what is this essay if not a loving homage to this undeniable point that the writer as an historical subject will inevitably be eclipsed by that which he leaves behind — a point most certainly found in deconstruction? In rearranging the red, white, and blue on his flag while Uncle Sam stomps some mudholes in the face of the Marianne, converting Derrida to the cult of America — and, in doing so, revealing his underlying arche-américanisme, that inner vegetable-oil-grease flame that guided his actions all along — do we not pay the greatest tribute to Jackie that one could possibly pay? But in order to achieve the final part of this tribute, we must go beyond simply pointing out that Derrida helped fill the structural lacuna left behind by the expansion of the university system and the urgent need for the humanities to produce scholarship with the same rapidity as the natural sciences. We must look closely at his career in America specifically and how it corresponds to the Americanism in Derrida’s actual writing.

Though Peeters and Cusset both downplay Derrida's strategic cunning in seducing the American universities, perhaps for good reason, the direction his career took after the 1960s nonetheless suggests an early awareness as to where his bread would soon be buttered. Derrida didn't really convert the Americans to a different way of thinking, but they certainly converted him to an approach more in line with what they wanted from him. He first conceived of Glas (1974), a highly experimental commentary on Hegel and the French novelist Jean Genet, in 1971, after he had taught two guest courses at Johns Hopkins and befriended Paul de Man and J. Hillis Miller. Though Glas was not wholly without an argument, it was still much more literary than his previous work, resembling the kind of experimentation one might find in the modernist novels of Joyce or Beckett. Without the friendship and encouragement of the American literature professors, it is hard to imagine that the book would have been written. In 1980, after Derrida's success in the American university was an established fact, The Post Card: From Socrates to Freud and Beyond came out, written in the style of an epistolary novel. When a conference entitled "Deconstruction is/in America" was arranged in 1988, inspired by his ironic comment mentioned above, Derrida contributed to it by presenting a paper that ignored the question of whether or not deconstruction really is America, and instead focused on a single line from Shakespeare's Hamlet, "the time is out of joint," which would later become a motif in his Specters of Marx. In other words, when asked to talk about America, he did some literary criticism.

But although Derrida didn’t care to address the question, and few of the other presenters had anything interesting to say about it, there is one major reason as to why Derrida was so revered in America that hasn’t been addressed up to now, and it had everything to do with how his career evolved once the Americans accepted him: he became an aspirational figure. When people pitch a multi-level marketing scheme to a fresh acquaintance, they’ve typically been instructed to try and entice that person to join by mentioning the fabulous wealth and success of someone near the top of the pyramid, maybe the company’s founder, or his nephew, or whoever. The same basic strategy can be seen when sex workers with OnlyFans accounts will mysteriously “go viral” when they make a social media post about how much money they have, despite the fact that exceedingly few camwhores will make a livable income. And the same thing happens in the academic system. While deconstruction was flourishing in America, Derrida was showing everyone that his theory, diffuse as it is, could allow him to write about whatever he wanted. In other words, he achieved the dream that so many humanities scholars have. After all, who, at the age of 23, wants to become some mopey Edmund Spenser expert, spending his whole life working on Edmund Spenser, adorning his office with Spenser-related memorabilia, occasionally doing an interview with NPR about something vaguely associated with The Faerie Queene, and so on? How could this be anyone’s dream? Derrida the Chad, by contrast, was winin’ and dinin’, wheelin’ and dealin’, delivering paper after paper on just about any cockamamie thing that came to his pea-pickin’ mind, and everybody was going positively apeshit for it. With his theory, he became the icon of the fully free, fully self-actualized, and — most importantly of all — paid in full intellectual, capable of doing whatever scholarship he wanted, however he wanted to do it.

Even when Derrida’s influence began to wane in the early 1990s, he had already been doing professional photoshoots, so his visage would loom enticingly over the heads of those who looked up and said, “Some day, my theory might allow me to be as free as that man.” After all, what could be better than reaching the status of the fully emancipated cosmopolitan public intellectual, free to globetrot about the international conference circuit, free to discuss whatever topic happens to suit your fancy? Could there be anything more American than this dream? When Derrida said, “This film will […] survive me,” his work had already “survived” him, and the phantasm of his presence in America was already helping to recruit new drones to the big scholarship factory of academia. So this broader notion of the “real person,” the historical subject, being obscured by his own works, or works about him, illustrates the crux of Derrida the American quite well, for it points to a deeply American question. It is the question of the individual — the very thing that Jordan Peterson and the other conservatives felt that Derrida was trying to erase — and it is to that question that the rest of this discussion must turn.

Now, you will perhaps have noticed that up to this point, I have refrained from explaining what deconstruction actually is, hoping that the implications of the word "deconstruction" itself would be enough to suit the context. But to explain why the technique, and Derrida himself, ought to be taken as American, that will not do. One major component of deconstruction must be explained, namely the critique of the metaphysics of presence and its closely related critique of logocentrism. This explanation will not exhaustively discuss every aspect of Derrida's deconstructive practice, such as the reversal of binaries and other neat tricks. Deconstruction involves a bunch of different tactics more or less irrelevant to its Americanness, and thus outside the scope of our discussion. So be advised that the following will be a reasonably technical, though incomplete, account of one aspect of deconstruction. I will try to make it as painless as possible.

In the simplest possible terms, Derrida’s philosophy aimed to demonstrate the total instability of meaning — not that there can be no meaning, or that meaning doesn’t exist, or that everything is meaningless. Just that meaning, i.e. the way we apprehend and make sense of things, doesn’t work the way we would like to assume; it is slippery and eludes our grasp. Though this point could be demonstrated in various ways, in his most widely read and most straightforwardly written works, Derrida focused on how we attribute meaning to signs. The question of “what constitutes a sign” is itself a bit complicated, so for our purposes, we’re just going to restrict the discussion to man-made signs, like cultural products or texts, which is mostly what the American deconstructionists did anyways. For Derrida, there can never be a situation in which a sign transmits information from its source to its interpreter while absolutely maintaining its integrity, from the moment of its conception to the moment of its reception. This point is worth dwelling on for just a moment. When we convey our ideas through language, we typically want our words to work like a teleportation platform, as in science fiction, so that the idea jumps instantaneously from the deliverer to the recipient, both parties understanding the message as though in perfect unison. Or, to put it another way: when we convey a message, we want the language or symbols we use to express it to function as a vehicle so light and compact as to be virtually weightless and invisible, so ideal in its role as a transmitter that it practically doesn’t even exist. We want it to contract the passage of time so that when we have finished taking in a message, it will have arrived unblemished and will remain perfectly intact in our minds from its point of origin, the very instant that it came out of whoever or whatever conveyed it. 
This sort of semantic unity is referred to as “presence,” and we like to think that we take in signs as if what they convey is, in fact, “present” to us.

But the problem is that when we assume this presence in what we encounter, we have already succumbed to an unsubstantiated notion of how the world presents itself to us, spatially and temporally. Martin Heidegger, one of Derrida's major influences, identified this assumption of presence as a consistent feature of metaphysics — i.e., explanatory systems that philosophers construct to explain the nature of objects and things in the world, including how they relate to one another and to the person who encounters them. Increasingly, metaphysics was taken to be an unscientific area of inquiry that depended upon baseless assertions, and so philosophers after the Enlightenment, particularly in the 20th century, put in a concerted effort to philosophize without metaphysics. The natural scientists, for their part, were thought not to rely on metaphysics at all. Derrida, following Heidegger, took the assumption of presence in philosophical systems as a symptom of metaphysical thought, and much of his early work would try to demonstrate that some writer who was trying not to be metaphysical had failed. For instance, one of the major works that cemented Derrida's status as a serious philosopher (Voice and Phenomenon) was a demonstration that even his greatest influence of all, Edmund Husserl, a philosopher who attempted to eschew all metaphysics from his work, was unable to free himself from its clutches (whether or not Derrida succeeded in his critique of Husserl is another question entirely, but never mind that).

To understand Derrida’s thinking, here, one has to be familiar with the concept of “logos,” a complicated and multi-faceted word from the ancient Greek. Logos, originally from “legein” (to gather), has come to carry many meanings, but in ancient metaphysics, it often refers to the immaterial binding force that holds things together in a unity, like a cosmic adhesive. Most accounts of logos state or at least imply that its essence can be elucidated through intricate symbolic systems including especially language but also mathematics, music, and formal logic. Though the concept of the logos had been discussed very early in Greek philosophy, at first rather elliptically in the fragments of Heraclitus, logos received its first comprehensive account by the Stoics, whose account of it indirectly influenced the Christians via Philo of Alexandria and Justin Martyr (see Hiller 2012). For them, the logos was an immanent principle contained within a universal living spirit (pneuma) pervading the world, distributed and accessible through all things in the world. The Stoics were the first to say that the logos shapes and creates everything from itself, and in fact is the seed of the universe, which gives to humans their own “seed powers” (spermatikoi logoi) that participate in it, meaning that each person has his own distinct “logos” that forms a part of the larger universal logos. When Christianity arrived, the logos became synonymous with Christ (as in John 1:1), but many of the Stoics’ previous metaphysical assertions about it remained, like the idea that the logos was there in the beginning alongside God even before finding its material instantiation on earth. Both accounts thus suggest that logos is the subtle, primary ordering principle of the material universe through which the latter’s essence can be grasped, and in Derrida’s account of logocentrism, the purity, proximity to language, and primariness of the logos was therefore key.

For Derrida, the metaphysics of presence is logocentric. The metaphysics of presence suggests that you can have immediate access to the underlying essence of what the sign merely conveys, and that essence belongs to the logos, nature’s great hidden binding force, which one might perceive in its totality when consulting the philosophical account that gets things just right. So in other words, if I believe that a message has given to me the idea it conveys so directly that this idea is “present” to me, then it must belong to the logos, unblemished and immaculate as it is. To show that a thinker had fallen into the metaphysics of presence, Derrida would find instances in which that person showed a preference for a maximally “immediate” type of sign or communication, something that would seem to convey the original idea in a more direct way than through some other method.

According to Derrida’s thinking, if you prefer a more seemingly immediate form of communication, then this preference indicates that you have fallen prey to the belief that there can be a type of sign that could possibly allow for the achievement of presence in one’s apprehension — because otherwise, why even care? And, furthermore, an “immediate” mode of expression will be something more supposedly proximal to the logos, its most subtle and distilled possible form. Many of Derrida’s critics were vexed when, in his seminal work Of Grammatology, he claimed that phonocentrism (the preference for the spoken word over the written word) was a clear indication of logocentrism. But given his own premises, it makes at least some degree of sense. In Derrida’s view, western philosophers had shown a preference for the spoken word over the written word time and time again — from Aristotle to Plato to Rousseau to Husserl to even the linguist Ferdinand de Saussure — and this was because they felt that the best mode of expression must be as close to the logos as possible. The written word was just a highly contaminated approximation of the spoken word, which was a contaminated approximation of the original thought within someone’s mind, which was an approximation of the most refined possible presentation of what this thought might be in all its purity, and that maximally pristine version, its logos, would belong to the universal logos that is coextensive with universal pneuma, or God, or whatever. Derrida believed this was the prejudice from which the philosophers were operating.

The association of the logos with a text’s voice of origin could perhaps lead some to suspect that Derrida was equating the logos with the individual, whereas in, say, classical Aristotelian rhetoric, ethos would be more closely associated with the individual. But this would be an overly hasty assertion. Although each individual has his own logos — at least, according to the Stoic treatment of the subject, which Derrida’s implied definition of logos most closely approximates — Derrida was more concerned with the bias toward the universal logos, the supposed origin of everything. So even if someone expressed the desire to read another’s thoughts directly, this would not be a straightforward demand to grasp the logos, though it would imply such a desire. Derrida’s approach was quite loose, you see, because it assumed that when given two options, the preference for something closer to the universal logos amounts to a yearning for it — whether or not the chooser shows any awareness of logos in the first place.

This implied but not direct association of the logos with the person who first thought up any sign or text would prove quite valuable to the literary critics. Although it is not quite appropriate to identify the individual qua the individual with the logos in an absolute sense, it is in any case clear that Derrida's philosophy did not treat a text as a straightforward creation of its author, the mind that supposedly thought it all up by itself. Derrida was interested in destabilizing the concept of the author, and this interest was enough for the American deconstructionists to follow suit in seeing a reader's concerns about the author's intentions and motivations as logocentric in nature. Derrida was not the first Frenchman to criticize concerns over the author, however. Roland Barthes, a friend of Derrida's and a decisive influence on his work, published an essay (in America) entitled "The Death of the Author" in 1967, the same year Derrida's three seminal works came out, but Barthes's basic idea had already been put into practice in his 1963 work On Racine. The "death of the author" means that we should see the text as a depersonalized collection of linguistic signs, part of a broader array of texts that contributed to its formation, as well as a product whose discursive meaning will inevitably change based on the circumstances surrounding those who read it for themselves. We should therefore not see the text as the product of "the author," some deity-like creator with perfect originality and precise control over each of his creative decisions, for in Barthes's view, "the author" is more a myth than a reality.

Derrida’s stance was more or less identical. In Of Grammatology (1967), he says,

“It would be frivolous to think that ‘Descartes,’ ‘Leibniz,’ ‘Rousseau,’ ‘Hegel,’ etc. are names of authors, or the authors of movements or displacements that we thus designate. The indicative value that I attribute to them is first of all the name of a problem” (p. 99).

And later, when Derrida responded in Limited Inc. (1988) to the American philosopher John Searle's criticisms of him, he refused to address him as "Searle," claiming that he is only responding to the writing Searle has left behind. Instead, he opts to call him "Sarl," an acronym for Société à responsabilité limitée, a kind of French limited liability company that occupies an ambiguous legal position between a privately owned company and a joint-stock corporation. The point is not just that Derrida condescendingly couches his criticism of an American in the language of business and finance (as if that's all the Americans can understand), but also that the author occupies a position in relation to his work as unstable and ambiguous as that of an LLC's owners in relation to its capital. So conservatives are not wrong to say that postmodernism, which some see as epitomized by Derrida's thinking, includes a critique of the individual, if we understand this critique to take place via the author. But how radical is it, really? And how new?

As mentioned above, before a whole bunch of literary theories exploded onto the American university scene in the 1970s, the main one was New Criticism, which aimed to analyze the literary text alone, shorn of all other considerations. New Criticism had been going on informally and organically for a while without a proper name, but it was codified into a system sometime in the late 1930s by Cleanth Brooks and Robert Penn Warren, who were building on the work of earlier critics and making room for others. One of the big critical mistakes that the New Critics sought to get rid of was named in the title of the essay "The Intentional Fallacy," first published by W.K. Wimsatt and Monroe Beardsley in 1946. When a critic had committed the intentional fallacy, he would read a work while trying to determine what the author was attempting to accomplish, and then judge the work based on whether the author had achieved his goal. The so-called intentional fallacy was a major component of the neo-Kantian approach to aesthetics and was perhaps epitomized by the critical work of Benedetto Croce (interestingly, Wimsatt and Beardsley also criticize the perennial traditionalist/weirdo occultist Ananda Coomaraswamy as a representative of this fallacy).

According to the New Critics, the intentional fallacy was a fallacy because it ignored the question as to whether or not the work was any good. Think of it this way: everyone agrees that murder is bad, so should we give a lighter sentence to murderers who carried out the deed precisely according to their own intent? No, because bad is bad, no matter how perfectly achieved a bad thing might be. According to the New Critics' line of thinking, the question of author psychology might be valid as a separate consideration for intellectual history, but it ought to be removed altogether from the question of aesthetics, which they saw as the heart of academic criticism. But this was not the only reason to ignore authorial intent. It was also criticized because it understates the extent to which the author is always influenced by other works that precede him, thus making a naïve assumption about personal agency. This contention of the New Critics certainly did not come from out of nowhere, for it was already understood by the English-speaking poets themselves. D.H. Lawrence, in his "Song of a Man Who Has Come Through," cried "Not I, not I, but the wind that blows through me!" And T.S. Eliot, in his "Tradition and the Individual Talent" (1919), argued that great poetry is achieved when the writer opens himself up to the tradition that informs the work he is trying to write, allowing it to pass through him and answer to the work's own unique demands, putting his own emotions and feelings aside. These, too, are critiques of the individual.

Critics of Derrida and the poststructuralists tend to oppose their ideas to the political philosophy of liberalism, and they typically characterize liberalism as being primarily individual-oriented, because liberalism defends inalienable individual rights. But placed under scrutiny, this is something of a gross vulgarization. Of course liberalism does show concern for the rights of the individual. But people forget all too quickly that liberalism always carried a tension between, on the one hand, the individual as the primary consideration of its political philosophy, and on the other hand, its actual practitioners: the tight-knit Christian communities, with their benevolent and selfless acts of Christian charity, without which liberalism could not have happened. When the New Critics began to draw criticism from the MLA in the 1960s, the left-wing radicals saw them as the self-appointed Brahmins of the bourgeois liberal establishment. And in many ways, that's correct; they were! But as we have seen, their attitude toward the individual was clearly not that of the many conservatives who nowadays label themselves "classical liberals," a largely meaningless phrase. The New Critics represented the other, hidden attitude toward the individual within liberalism that is so often swept under the rug by today's conservatives, centrists, and libertarians.

To grasp liberalism’s ambivalence toward the individual, it is necessary to understand the religious faith accompanying it in both England and America. In Calvinist theology, which played such a key role in the northern half of America’s early development, there is an understanding that God’s divine providence not just accounts for but predetermines life’s outcomes. It’s the idea that if you happen to be filthy rich, it’s not just because of your own excellence or “merit,” but also partly because God has incorporated your success into His own inscrutable vision, His great plan for how the story of the world will inexorably unravel. Essentially, God’s divine providence means that you cannot take credit for everything you have accomplished, because it was already preordained even before your parents conceived. This deeply ingrained attitude was partly why the so-called robber barons such as John D. Rockefeller and Andrew Carnegie felt morally obligated to set up big philanthropic foundations and put some of their resources back into the community with public libraries and other useful services. Whereas by contrast, the charity of today’s secularized tech oligarchs is largely given to NGOs and nonprofits, some of which prove highly divisive according to public opinion polling, and from whom the benefit provided to the average person is questionable at best.

Despite the secularization of America, however, there are still lingering vestiges of this aspect of liberalism in the university system even today. For instance, in "Freshman Composition as a Middle-Class Enterprise" (1996), Lynn Z. Bloom notes that when students in those classes use the first-person pronoun "I" in their final papers, it typically causes the teachers to lower their grade in proportion to the amount it is used (pp. 660–661). In Bloom's view, the first-person "I" is stigmatized by composition teachers (even while being hotly debated by rhetoric and comp specialists) because it violates the middle-class values of decorum and propriety, which were established by bourgeois liberal rhetoricians trying to approximate the rhetoric of the aristocracy. This is all correct, and perhaps surprisingly still true despite the article being twenty-five years old. And it was in the context of Anglophone liberal middle-class values that the "I" initially became an obscenity. Just recently, the bow-tie-wearing former libertarian Anglo-Saxon uber-liberal himself, Tucker Carlson, said, "Since this show began … I've really tried not to talk about myself on the air or even use the first-person pronoun. The last thing this country needs is more narcissism" (7/20/20). When the religion of Rastafarianism was formed on the formerly slave-holding island of Jamaica with more than a tinge of (justified) resentment toward the British colonists, its adherents made a point of emphasizing the first-person pronoun as integral to their mode of expression, removing the object form "me" altogether and creating a bunch of words with "I" shoehorned into them (see the dialect Iyaric). This was, in part, an act of rebellion against the Anglophone bourgeois sensibility.
And elsewhere, the libertine occultist Aleister Crowley, for his part, was right to observe that English is uniquely suited to making the graphic representation of the first-person pronoun a phallic symbol, obtrusive and shocking as phallic symbols tend to be. So what is all of this university-ordained pushback against the "I" if not a way of tempering the excesses of individualism within a liberal discourse?

Given that qualification, the "intentional fallacy" of New Criticism begins to make sense. But if it negated the individual on some level, it did not do so altogether. After all, it belonged to a middle-class liberal way of thinking. One of the major distinctions that separated the New Critics from the poststructuralists was that the former were humanists, and their criticism was done in the name of fortifying the literary canon. When the New Critics encouraged readers to forget about the motivations of the author, the exercise was just meant to be a temporary suspension. Ultimately, the reader would be expected to come back to the author and praise his brilliance, having gone through the preliminary discipline of forgetting about him and focusing solely on his work as a fully exteriorized product. Because New Criticism, and American scholarship more broadly, took the idea of a unified, standalone "text" as a given, it follows that the text's authorship would be held in a similarly unquestioned position. For this reason, when the young Milman Parry sought to prove during the 1920s that Homer's epic poems were the product of a deeply ingrained oral tradition rather than the result of Homer's individual genius, he had to publish those works in France alongside the early structuralists despite obtaining his M.A. at UC Berkeley. The classics departments in America were much less tolerant of an argument that sought to decenter both the standalone text and the author who produced it.

And yet, though it was Barthes's "Death of the Author" that made structuralism's critique of the individual crystal clear — and Barthes did indeed identify the preoccupation with the individual as part of bourgeois capitalism — he was not the one who would go on to become the great American superstar. Part of the reason is that his essay could only make sense in France. As Clara Claiborne Park notes in her essay "Author! Author!: Reconstructing Roland Barthes," when Barthes "killed" the author, he was writing in revolt against a French literary and educational culture that allowed little innovation to the French language (still the case today) and maintained a canon of some forty authors (still the case today), each of whom wrote with a relatively limited vocabulary, less experimental and polyglossic than that of Chaucer, less rich and worldly than that of Shakespeare. As noted before, the school of French structuralism had already been questioning the sanctity of authorship outside French literature, but Barthes had used those techniques on Racine, a French author. That usage was much more shocking, and the shock was inseparable from the French nation and its own self-conception. Much of what Barthes announced with revolutionary flair had already been said quietly and modestly by the New Critics about their own literature, within an Anglophone literary culture that was already less linguistically stable. And so it was Derrida, whose stance regarding the individual would prove more complex and ambivalent, who would become the American icon.

Even as they denied the importance of the individual as the creator of the text, both Derrida and the New Critics shared a common tension in dealing with the matter of individual psychology, since both camps, perhaps unexpectedly, engaged in such speculation to make their points. Consider the case of New Criticism. When you hear about something like "the intentional fallacy," you might be inclined to think that these critics would want to remove all psychological considerations from their analyses, since the author's own psyche would be off-limits (and the subjective reaction of the reader was, too, according to what they called "the affective fallacy"). But psychology came into play nonetheless. For the New Critics, analyzing literature could involve as much psychological speculation as you wanted, so long as you made sure to say that some sort of narrative persona rested between the author himself and the actual work. You didn't need to cite Freud to execute this maneuver. For a novelist like Nabokov, known for his unreliable narrators with their own carefully constructed motivations and complexities, such an approach makes sense; it is downright necessary. Nabokov was working with the novel in the age of print, a genre and era particularly hospitable to characters and narrative personae so believable as to be almost three-dimensional. But this approach could get quite thorny when dealing with earlier literature that bore traces of the oral tradition, for which the idea of an altogether separate narrator was still hazy and undeveloped.

Under the influence of New Criticism, scholars would approach medieval literature this way, particularly Geoffrey Chaucer. The idea was that because Chaucer wrote The Canterbury Tales, a frame narrative in which each pilgrim tells a tale, every tale must represent an ethos that is distinct from Chaucer's. This approach makes plenty of sense for stories like The Wife of Bath's Tale, but makes very little sense for stories like The Knight's Tale, which was likely composed before Chaucer had even thought of the idea for the Canterbury Tales. Most of the early tales, in fact, were almost certainly not written with any particular narrator in mind. Nevertheless, a psychological portrait of Chaucer would emerge from these critics via negativa, wherein he would become the embodiment of everything his pilgrims were not. This same unreliable-narrator-oriented critical approach would later spread to other medieval writings without frame narratives, justified only by the narration sometimes making conversational "asides" to the audience, interrupting the plot (typical among stories designed for oral delivery). And over time, these interpretations were increasingly deployed to affirm 20th-century liberal prejudices. As one medievalist notes, under this critical approach in which the "narrator" would belie the work's true meaning, "texts that apparently celebrate a warrior ethic turn out to be really pacifistic, those that apparently express misogyny turn out to convey feminist values, those that apparently satirize 'unnatural' sexual behavior turn out to sympathize with it, those that apparently admire powerful rulers turn out to condemn them as tyrants, and so on" (Spearing, p. 4). The point is, one cannot cleanly avoid "the intentional fallacy" when analyzing narrative voice, since the analysis will always imply at least some degree of authorial intention.
And as soon as one identifies a usage of literary irony that reveals an altogether different message than what the text seems to suggest, the implication becomes unmistakable. So even when attempting to sidestep the problem of authorial psychology, the New Critics and their descendants eventually wandered right back in.

Though there is asymmetry between the approach of the New Critics and that of Derrida, this same ambivalence toward authorial psychology can be found in the latter's work. Recall that one of Derrida's key assumptions in Of Grammatology is that a preference for a more "immediate" mode of expression is itself indicative of logocentrism. Now, when you were reading about that a few paragraphs above, did you buy it? After all, why should it be so? Let's say you talk to someone who thinks that communication is done better through interpersonal conversation rather than written letters, and then she explains that this is because oral dialogue allows for clarifications to be made more easily. Is her claim logocentric? And how could it be, if it bears directly on the pragmatic differences between the two media, differences that Derrida would never have denied? Well, it really doesn't matter in the end, because if he wanted to, Derrida could simply "read" between the lines of that person's statement to conjure up "evidence" that she is logocentric. And though Ferdinand de Saussure's reasoning for preferring the spoken word was a bit murkier, this is still essentially the treatment that Derrida gave him.

The linguist, semiologist, and father of structuralism Ferdinand de Saussure, in his Course in General Linguistics, made it clear that linguistics must always privilege oral over written communication. Saussure, quite wrongly, claimed that the sole purpose of writing is to represent spoken language, and that the only true bond between a sign and what it signifies should be understood as one of sound. He also famously discussed examples of the "tyranny of writing," in which academics and grammarians with no substantive linguistic knowledge will pedantically overvalue the importance of how words are written, failing to observe the long history of indifference toward punctuation and standardized spelling in even the highest intellectual institutions. He even, as Derrida notes, described as "pathological" the shifts in word pronunciation stemming from the establishment of spelling conventions (spelling pronunciations, such as the now commonly sounded "t" in "often," are good historical examples). Now, let us put these prejudices in context for a moment. It is worth bearing in mind that Saussure spent most of his career tracking shifts in languages throughout ancient history by determining their phonetic articulation. He did this by examining how words were rendered orthographically, then drew his conclusions delicately through a sophisticated and logically precise comparative method. He even conjectured that the Proto-Indo-European language probably had laryngeal consonants that could not be reconstructed through the comparative method alone because they were never assigned a letter or character, such as word-initial aspirates preceding what only appeared to be vowel-initial words in the early Indo-European languages. When Hittite was discovered and deciphered, he was proven right, albeit posthumously.
So this was a man who surely spent many nights in solitude seriously thinking about how ancient words would have been pronounced, trying to look past the mere letter of the text to make stronger contextual inferences and deductions. For him, lack of standardization in spelling was useful, and as a scientist and historian, the idea of obscuring the pronunciation of his own day with a written record that had an over-refined spelling system was probably unsettling, given the difficulties it might create for later historians and linguists (and during his time, the possibilities of the newly invented phonograph were still unclear). So if his stance on writing was a bit over-the-top, his feelings probably came from pragmatic concerns rather than metaphysical ones. At least that would be my quaint, unsophisticated guess.

Now, when turning to Derrida's account of Saussure and his stance on writing, we find a somewhat different picture emerge. According to Derrida (comic-book-style emphases mine), "Saussure's vehement argumentation aims at a more than theoretical error, more than a moral fault: at a sort of stain and primarily at a sin" (p. 34). Elsewhere, we learn that for Saussure, writing is a "garment of perversion and debauchery, a dress of corruption and disguise, a festival mask that must be exorcised." His science of language must "restore its absolute youth, and the purity of its origin, short of a history and a fall which would have perverted the relationships between inside and outside" (p. 35). Saussure wants to "punish writing for a monstrous crime" and place it within an "intralinguistic leper colony" in order to "contain and concentrate the problems of deformation through writing" (pp. 41–42, and cf. Freundlieb, who uses the same quotations to make a different point). Even without all the emphases, his interpretation, I hate to say it, is utterly unhinged. But it must be so in order to show that Saussure is concerned with purity and cleanliness; that he yearns for a prelapsarian state of affairs when Adam had just named the beasts and God still walked in the garden. Derrida must show that Saussure's emotions are high because he is enthralled by the logocentrism and metaphysics of presence that have come to him from the philosophical tradition, whether directly or perhaps through a strange osmosis. Otherwise, we are simply looking at a gifted language expert showing a bit too much passion when emphasizing the validity of his robust research methods. And that's really not too interesting.

Clearly, in order to demonstrate the logocentrism of a preference for some supposedly “prior” category, like oral communication over writing, the person identifying it must engage in psychological speculation regarding whoever has the preference. The reason for this preference is everything. Otherwise, with no way of showing that the person is invested in the deepest origins of whatever he prefers, the deconstructor would have to conclude that the human voice, or the human mind, or whatever, is itself the logos, which makes no sense — or he’d have to stop the charade altogether. Derrida uses the language of impurity, contamination, and disease when describing Saussure’s motives, but one can see clearly that he himself must identify Saussure’s motives by treating the latter’s preference symptomatically. So where is the real logocentrism? Is it there in Saussure, or is it merely a figment of Derrida’s imagination, a prejudice that he has projected upon the author, whose psychological availability he takes for granted? And, moreover, isn’t it curious that at the very same moment that Derrida pounces upon the logocentrism of another writer, he himself uses language that most conspicuously seems to suggest a literary voice, a distinct and unique style underlying his own writing?

Of course, Derrida's many defenders will respond to these questions with something to the effect of, "No, don't you see? That's the whole point. The indeterminacy of the sign's ontological status is precisely Derrida's concern — of course you can deconstruct him just as easily as he did Saussure, because the 'Saussure' he describes is merely the instability of the deferred moment of textual semiosis, the 'trace' as it were, which he himself acknowledges later on when blah blah blah, BLAH blah blah, blah-BLAH-blahblah," yes, yes, OK, I get it. I've seen the eyes of his aficionados and enthusiasts light up this way before. I also, incidentally, have seen the part of Pee-Wee's Big Adventure in which Pee-Wee Herman attempts a trick with his bike, falls off of it, and tells a group of onlookers, "I meant to do that." But the point here is not to refute the philosophy, nor is it to defend Saussure. It is merely to highlight the ambiguity pertaining to the status of the individual in Derrida's work, which in turn resembles the same ambiguity that pervaded the approach of New Criticism. When Derrida, much later in Limited Inc., responds to John Searle by renaming him "Sarl," he reassures us,

“I hope that the bear­ers of proper names will not be wounded by this […] device. For it will have the […] advantage of enabling me to avoid offending individuals […] in the course of an argument that they might […] consider, wrongly, to be polemical” (p. 36).

How nice. But everyone knows, of course, that he is waging a polemic against one person: John Searle. There is a wink and a nudge implied throughout the whole essay. This is why, if you look up the reviews of Limited Inc. on Goodreads.com, the reviewers are quite excited to see how catty their intellectual hero could really be. And perhaps the cleverness of his approach is enough to excuse the bloviating quality of his prose style (hence all my ellipses; sorry if they were a distraction), to extend a warm compliment on account of his self-awareness, and to avoid asking the uncomfortable question as to whether such a degree of self-awareness in writing is advisable to begin with. Perhaps.

In any case, this same indecisiveness pertaining to the individual would resurface in different forms throughout Derrida's later career. Consider his aforementioned decision to pose for professional photographs. As Peeters notes, some of Derrida's longstanding French acquaintances such as Bernard Pautrat were disappointed to find that he had allowed himself to be photographed in America, whereas in the 1960s and 1970s, he refused it as a matter of principle (pp. 442–443). This decision is significant because Americans are uniquely preoccupied with the human face; it is a peculiarity essential to understanding the culture in both its highbrow and lowbrow forms. In professional wrestling, for instance, Mexico and Japan are perfectly fine with their champions wearing masks. In Mexican wrestling (lucha libre), it is actually preferred: El Santo, its most famous wrestler, donned a mask that tapped into a deep Roman Catholic mythological current in which the conflict between Christianity and the forces of Satan is the central ongoing struggle. As a wrestler, El Santo could play the allegorical role of the good saint, and his mask was so important to this end that he was buried wearing it. And even today in lucha libre, to lose one's mask is still considered a major humiliation; one can no longer belong to the distal realm of perennial archetypes. In America, by contrast, one's naked face is absolutely paramount; it really is the money-maker, as they say in show business. I have long suspected that its heightened importance in America, with its ability to both suggest yet also obscure a distinct personality depending on the circumstances, may have corresponded to America's instant enthusiasm for psychoanalysis and the therapy industry in some way. In the WWF/WWE, only two masked wrestlers have won the company's highest championship title: Kane (who only held it for one night) and Mankind.
And when Mankind won the title, the audience understood that the dark and brooding character, with his stage name befitting a warped medieval morality play, was just an alter-ego — a narrative persona, one might say — of the real Mick Foley (his actual name), whose facial appearance the fans knew well. So whereas masks in non-Anglophone countries amount to a transference into a depersonalized world of symbols, in this case the mask was appreciated as a supplement augmenting the complexity of Mick Foley's psychological profile. Even in day-to-day life, the importance of the naked face is significantly greater in America than in most other countries. When the coronavirus pandemic occurred, the Eastern hemisphere was very comfortable with masks, as always; the Catholic countries in Europe (like France) were the most compliant with their mask regulations, save for Germany; Africa, wild as it may be, was surprisingly accepting of mask mandates where they occurred; and yet the United States and Canada took unique umbrage at the idea that masks would have to be worn in public. It was the opposite of lucha libre: putting on the mask was taken by many to be a form of humiliation.

So, when Derrida started posing in photo sessions with professional photographers, this decision could not have been lightly made, nor would the photographs fail to take on their own symbolic importance in light of his philosophy. While he claimed that the first public photos were taken of him by the press at a conference at the Sorbonne in 1969, and that he only gradually became more accepting of professional photography as part of the profession, it is clear that by the 90s, he had become an absolute pro at bearing the countenance of the contemplative intellectual. He didn't just stand there uncomfortably, as in those daguerreotypes from the 19th century. He was playing for an American audience. He really took some photos, you know. This was Hollywood-grade stuff. Some of those photos, by the way, were made into posters that would adorn the walls of so many English professors' offices. You can bet that they were all aware of the irony and were quick to acknowledge it if prodded, perhaps even with a tinge of enthusiasm in pointing out yet another matter of controversy — but they still looked up to Derrida as their aspirational figure of freedom all the same. So although Derrida's success in the United States may not have been the fruit of any methodical shrewdness or cunning, it is undeniable that he played his success to the hilt once he got it.

As time wore on, a handful of deconstruction apologists began to argue that the notion of the naked face was actually an integral component to understanding Derrida's whole philosophical project (see summary in Hägglund 2008). Recognizing that Derrida turned his attention to political theorists such as Karl Marx and Carl Schmitt with a newfound interest in ethics toward the end of his career, these commentators started to claim that all of deconstruction contained an ethical bent from the very beginning. They made this argument by linking Derrida to Emmanuel Levinas, whom he discussed on several occasions, most notably in his long essay "Violence and Metaphysics." Derrida's work on Levinas, though critical in key ways, never quite shot him down with the same eagerness as in his treatment of Foucault, Levi-Strauss, Husserl, Saussure, and others. Levinas held in high regard the practice of the face-to-face encounter, because the naked face, in his view, compels one to confront and hold oneself accountable to the Other [sic] as truly "other," not merely as an extension of one's own self. This claim from Levinas amounts to a rejection of the common metaphysical notion that one must strive to see all things as participating in an essentially unified self (e.g. this is the meaning of the Chandogya Upanishad 6.8.7 from the Sama Veda, which tells us "Tat Tvam Asi," or literally "That thou art"). For those who say Derrida endorses the "face-to-face encounter" of Levinas, the indeterminate nature of deconstruction constitutes a hesitance that affirms alterity itself, or "the otherness of the Other [sic]." Those who disagree argue that the face-to-face encounter is, for Derrida, always discriminatory by its nature, since it must exclude the Third [sic], and therefore the face-to-face encounter with the Other [sic] cannot be understood as fundamentally ethical.
Given that the answer to any question regarding deconstruction is usually something like “yes and no,” or “it’s both this and that,” or “it’s complicated,” one senses the debate will never reach a sound resolution.

But whether or not this scholarly conundrum ever gets resolved, the mere fact of the controversy is instructive. As Derrida began to sit for professional photographs, allowing a cult of personality to spring up around him (and surely aware of it), he began writing more on ethics and politics. So it is quite fitting that the academics observed this shift in his writing, read it back all the way into the origins of deconstruction, and said that this whole time deconstruction was a celebration of the face-to-face encounter as per Levinas. The truth is that one cannot really know the extent to which Levinas influenced Derrida in that area, for the question amounts to another point of ambiguity regarding the latter's stance on what the face really entails — its status as vehicle of ethos, the preference for it as a possible symptom of logocentrism, its ontological meaning as either an extension of the self or the uncompromising Other [sic], and so on. And this ambiguity regarding the status of the face corresponds to the same ambiguity regarding the status of the author — and by extension the status of the individual — that was always there in New Criticism, only now repackaged with maximum potential for additional research, coupled with verbal impenetrability, which is to say, job security.

More importantly for our conservative and centrist friends who worry that the French invasion is what sullied the virginal institutions of higher learning, those same ambiguities were always there in Anglophone middle-class liberalism. Some will say, “But Derrida himself repeatedly criticized liberalism. And he hated how his philosophy was turned into a formula. And no one was more critical of the university’s appropriation of his work than he!” But when Derrida was brought in as the mascot of these essentially liberal tensions within the literature departments, his own “authentic” opinions — philosophical, political, or otherwise — ceased to matter. The author was being represented on different soil to a different crowd as a supplement to himself, just as one would expect from his own theory. After all, it was the English translations that proved more influential than Derrida’s original French writing, not only within America, but all over the world as he became popular because of his American reception, and what is a translation if not a means to supplant one’s words? America was the vortex of swirling energies that summoned Derrida with its irresistible lure — beckoning him like a bitch in heat to form a single beast with two backs — and if Derrida hadn’t existed, America would have simply invented him.

We will conclude by contrasting two quotations characterizing not just Derrida but the whole school of poststructuralism that dominated American academia in the wake of his success. Jacques Ellul, in his Humiliation of the Word (1981), tells us from France,

“In our day, in this place, a sort of social discourse flows endlessly and is repeated twenty hours out of every twenty-four, expressed by individual mouths. The discourse is completely anonymous, even though it may sometimes be affirmed with force and conviction by a particular individual. (On the intellectual level, of course, I consider the books of Jacques Lacan, Foucault, Jacques Derrida, etc., utterly typical of this anonymous social discourse. These writers constitute in themselves a demonstration of what they say about all individuals who speak.)”

While Camille Paglia, in her Sexual Personae (1990), tells us from America,

“The English language was created by poets, a five-hundred year enterprise of emotion and metaphor, the richest dialogue in world literature. French rhetorical models are too narrow for the English tradition. Most pernicious of French imports is the notion that there is no person behind a text. Is there anything more affected, aggressive, and relentlessly concrete than a Parisian intellectual behind his/her turgid text? The Parisian is a provincial when he pretends to speak for the universe.”

Two entirely different characterizations from two entirely different vantage points — one French, one American; one male, one female. But can either statement be judged incorrect, exactly? While the French intellectual is aggressively and iconoclastically tearing down the idol of the “I,” he is stylistically anonymous and absent — he is a shade, or maybe even a trace. And yet, by the same token, as he assiduously, even mind-numbingly reassures us that there is no author behind the text, there he is nevertheless, with all of his narcissism, all of his gratuitous self-awareness enshrined into his needlessly wordy prose. The only error here is in claiming that this paradox of the individual was a “French import,” when in fact it was both anticipated and aggravated by the uniquely Anglo-Saxon, and especially American value system all along. Old Jackie, the great destroyer of the humanities, the head trickster for the postmodernist movement, was never French at all, not for one second, I’m sorry to say. It’s a shame that the burden falls upon my shoulders to play the role of the 911 operator in that old urban legend who tells the blonde, bronzed, and big-breasted cheerleader, “The phone call is coming from inside the house,” but the muddle-headed conservative establishment has left me with no other choice. Jacques Derrida was an American export. And that is that.

Some books and papers mentioned:

A.C. Spearing, Medieval Autographies: The “I” of the Text. University of Notre Dame Press, 2012.

Anselm Haverkamp ed., Deconstruction is/in America: A New Sense of the Political. Conference proceedings. NYU Press, 1995.

Benoit Peeters, Derrida: A Biography (trans. Andrew Brown). Polity Press, 2013 (first published 2010).

Camille Paglia, “Junk Bonds and Corporate Raiders: Academe in the Hour of the Wolf,” Arion: A Journal of Humanities and the Classics 1.2 (1991): 139–212.

Camille Paglia, Sexual Personae: Art and Decadence from Nefertiti to Emily Dickinson. Yale UP, 1990.

Clara Claiborne Park, “Author! Author!: Reconstructing Roland Barthes” pp. 318–329 in Theory’s Empire: An Anthology of Dissent eds. Daphne Patai and Will H. Corral. Columbia UP, 2005.

Cynthia L. Haven, Evolution of Desire: A Life of Rene Girard. Michigan State UP, 2018.

Dieter Freundlieb, “Deconstructionist Metaphysics and the Interpretation of Saussure” The Journal of Speculative Philosophy 4.2 (1990): 105–131.

Dinesh D’Souza, Illiberal Education: The Politics of Race & Sex on Campus. The Free Press, 1991.

Francois Cusset, French Theory: How Foucault, Derrida, Deleuze, & Co. Transformed the Intellectual Life of the United States (trans. Jeff Fort). University of Minnesota Press, 2008 (first published 2003).

Jacques Derrida, Limited Inc. trans. Samuel Weber and Geoffrey Mehlmann. Northwestern UP, 1988.

Jacques Derrida, Of Grammatology trans. Gayatri Spivak. Johns Hopkins UP, 1997 (first American ed. 1976, first published 1967).

Jacques Derrida, Writing and Difference trans. Alan Bass. University of Chicago Press, 1980 (first published 1967).

Jacques Ellul, The Humiliation of the Word trans. Joyce Main Hanks. William B. Eerdmans Publishing Company, 2001 (first published 1981).

Jaideva Singh, trans., Vijñānabhairava or Divine Consciousness: A Treasury of 112 Types of Yoga. Motilal Banarsidass Publishers, 2006 (first published 1979).

Jesus Navarro, How to Do Philosophy With Words: Reflections on the Searle-Derrida Debate. John Benjamins Publishing Company, 2017.

Jonathan Culler, On Deconstruction: Theory and Criticism after Structuralism. Cornell UP, 1982.

Kathleen Wheeler & C.T. Indra, Explaining Deconstruction. Macmillan India Limited, 1997.

Lynn Z. Bloom, “Freshman Composition as a Middle-Class Enterprise” College English 58.6 (1996): 654–675.

Margery Sabin, “Evolution and Revolution: Change in the Literary Humanities, 1968–1995” pp. 84–106, in What’s Happened to the Humanities? ed. Alvin Kernan, Princeton UP, 1997.

Marian Hillar, From Logos to Trinity: The Evolution of Religious Beliefs from Pythagoras to Tertullian. Cambridge UP, 2012.

Martin Hägglund, Radical Atheism: Derrida and the Time of Life. Stanford UP, 2008.

Michèle Lamont, “How to Become a Dominant French Philosopher: The Case of Jacques Derrida,” American Journal of Sociology 93.3 (1987): 584–622.

R. Howard Bloch, The Scandal of the Fabliaux. University of Chicago Press, 1986.

Richard Rorty, “Philosophy as a Kind of Writing: An Essay on Derrida” New Literary History 10.1 (1978): 141–160.

Roger L. Geiger, “Demography and Curriculum: The Humanities in American Higher Education from the 1950s through the 1980s” pp. 50–72, in The Humanities and the Dynamics of Inclusion Since World War II ed. David A. Hollinger, Johns Hopkins UP, 2006.

Roger L. Geiger, Research and Relevant Knowledge: American Research Universities Since World War II. Oxford University Press, 1993.

Stephen R.C. Hicks, Explaining Postmodernism: Skepticism and Socialism from Rousseau to Foucault. Scholargy Publishing, 2004.

Terry Eagleton, Literary Theory (2nd ed.), Blackwell Publishing, 1996 (first ed. published 1983).
