Fooling the “experts”: what makes academic hoaxes tick?
The relatively recent academic hoax crafted and executed by Helen Pluckrose, James A. Lindsay and Peter Boghossian can be judged successful by more than one metric. The extent of the reputational damage suffered by the barely-sarcastically-dubbed “Grievance studies” (cultural studies, gender studies, identity studies and the like) targeted by the hoax can be a matter of debate. What seems undeniable is that publishing four papers intentionally designed as parodies of scholarship (and getting three more on track for publication) in some of these fields’ most emblematic journals constitutes a blow of non-zero magnitude to the scientific credentials of at least some practitioners within the ranks of these disciplines.
Hoaxes and other forms of intellectual scams can be and indeed are fueled by various motives, some noble, some not quite so. The Grievance studies hoax had a clearly defined pair of goals: to expose the unscientific epistemic methodology of the Grievance studies (very reminiscent of the now classic “Sokal hoax”, which sought to exhibit the same flaw in what the physicist Alan Sokal identified as postmodern cultural studies) and to expose their more than questionable ethical commitments. Both of these fall into the “noble” category of motives, I would say.
The bogus papers that got either published or accepted were purposefully “outlandish or intentionally broken in significant ways”. Specifically and most importantly, the epistemological methodology that “supported” their claims was intentionally flawed. Still, they passed with flying colors through the editorial selection filter and the academic peer-review process that lies at the heart of the scientific enterprise. (It is appropriate to remark that not all the submitted papers made it to print; six of them were irrevocably rejected. A note on that later.)
I will not dedicate this article to finding an under-explored piece of the wreckage left by this (or any other) hoax in order to contribute some new insight to the debate about what exactly has been exposed. That has been picked apart and discussed extensively in places like here, here or here. I do nevertheless think that hoaxes of this type reveal (or confirm) a worrying trajectory in the path of some areas of the social sciences. But that is as much as I will flesh out my position regarding the conclusions to be drawn from the aftermath of the hoax. I am more interested in answering a broader set of questions.
How could seven journals be systematically hoaxed in this way? For that matter, how did Alan Sokal’s hoax paper, openly sympathetic to the idea that quantum gravity can be considered an “archetypal postmodern science”, get published in the scientific journal Social Text? Why can academic hoaxes be perpetrated at all? Or alternatively, what are the most basic elements that need to be accounted for in order to explain the “mechanics” of a hoax?
I believe that it is possible to identify a series of fundamental elements that are present in all possible academic hoaxes irrespective of which goal they pursue. Furthermore, it seems that some of these elements are closely related to those exploited, consciously or unconsciously, by honest and dishonest intellectual gurus alike. The most relevant of these concepts and tools have been developed by the cognitive scientists Dan Sperber and Deirdre Wilson and the author and sociologist Sarah Thornton (mostly). This article draws heavily from their contributions to uncover what makes academic hoaxes and other types of intellectual dumbfounding tick; one could say that it intends to lay the basic blocks of something of a proto-mechanistic theory of hoaxing and intellectual dumbfounding.
It all starts with a recognition.
It is not easy to make no sense, or, sense is often in the brain of the beholder.
It seems to be the case that as long as one respects the rules of syntax it might be impossible to craft propositions that instantaneously strike a listener (or reader) as indisputably devoid of meaning. This is a central claim not only for what follows but one that cuts to the heart of human comprehension. Our brains come pre-programmed with cognitive rules of thumb that predispose us to make sense of almost any verbally posed proposition, even when said proposition is meticulously designed to make no sense. We explored this phenomenon in a previous post; here we will revisit it briefly.
To get a clear and disconcerting glimpse of how you cannot help but try to make sense of essentially any verbally-coded proposition that is syntactically correct, turn your analytical mind’s eye to the operations of your reasoning while you read that “colorless green ideas sleep furiously”. Surely you caught yourself repeatedly trying and failing to unpack what that sentence might mean. Whether you succeeded or (much more likely) failed to extract a sensible interpretation out of it is secondary for our purposes here. The point is that you were unable to completely shut down the cognitive machinery tasked with decoding the semantics of the sentence. Even more disconcerting, the same would have happened if I had told you in advance that what I was about to tell you made no sense and immediately afterwards dropped a syntactically correct but semantically nonsensical proposition.
To entirely desist from trying to make sense out of nonsense is a lesson that can be comprehended theoretically but never quite implemented in practice. The “semantic hub” of your brain (possibly the middle temporal gyrus) seems to be the kind of device that is permanently ready to use metaphorical thinking as a tool to resolve whatever semantic riddle it can get its hands on.
Expectation of relevance
So we can recognize that the semantic hub in your brain is always one step ahead of the more conscious and “analytic” part of you in the quest for meaning. We can even acknowledge that this readiness for decoding is generally a handy feature, even while accepting that it can occasionally backfire, such as when we are confronted with downright nonsensical statements. It backfires by making us invest time and mental effort into shoving meaning into (quasi) meaningless statements. It is hard to think of a literally more senseless project than that. Still, if the net benefits of having a decoding switch permanently turned on were not sufficient to outweigh the costs of sporadically wasting energy in conjuring convoluted interpretations that lead to zero cognitive rewards, then surely natural selection would have taken care of giving us less eager semantic hubs.
The cognitive scientists Dan Sperber and Deirdre Wilson gave a plausible epiphenomenological account of what makes the semantic hub of our brains tick. Relevance theory, they say, may be seen as an attempt to work out in detail the following claim: “that an essential feature of most human communication, both verbal and non-verbal, is the expression and recognition of intentions.”
Two central assertions lie at the core of Relevance theory, one focusing on answering the question “what is relevant?” and another concerned with how to rate the relevance of various utterances that effectively contain the same amount of useful information.
So, according to Relevance theory a message is relevant whenever it causes a “positive cognitive effect” in the recipient. A positive cognitive effect is anything that produces “a worthwhile difference to the [listener’s] representation of the world – a true conclusion, for example. False conclusions are not worth having. They are cognitive effects, but not positive ones.”
By this single criterion, the higher the cognitive effects (potential usefulness), the more relevant an utterance is. Nevertheless, the second central criterion of Relevance theory acts as a kind of counterbalance to the ranking produced by the first. It establishes that higher mental processing effort correlates negatively with relevance. Roughly speaking, the harder you need to concentrate to unpack the useful information contained in an utterance, the less relevant it becomes to you. This is best illustrated by an example provided by Sperber in a later work about a phenomenon he baptized the “Guru effect” (more on that later):
“It would be more relevant for you to be told of the next train to Manchester, “it is at 5:16” than to be told, “it is twenty two minutes after 4:54” (unless, of course, the lapse between 4:54 and the departure of the train is of special relevance to you).”
Both phrasings have the potential to produce the same “amount” of positive cognitive effects because they convey the same information; however, the more direct way to relay the message requires less processing effort and is therefore more relevant.
Thus, everything else being equal, gratuitous verbosity brings relevance down. Semantic messiness maximizes irrelevance because it causes processing effort to skyrocket in order to extract disproportionately low positive cognitive effects.
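To make the trade-off concrete, here is a toy sketch of the two criteria. To be clear, the scoring scheme and the numbers below are my own illustrative assumptions, not part of Sperber and Wilson's actual formalism; the point is only that relevance rises with cognitive effect and falls with processing effort:

```python
# Toy model of Relevance theory's two criteria (illustrative only;
# the function and the effort scores are assumptions made for this
# sketch, not Sperber and Wilson's formalism).

def relevance(cognitive_effect: float, processing_effort: float) -> float:
    """Relevance grows with the positive cognitive effect an utterance
    delivers and shrinks with the effort needed to extract it."""
    return cognitive_effect / processing_effort

# Two phrasings of the same train-time message carry the same
# information (equal cognitive effect) but demand different effort:
direct = relevance(cognitive_effect=10, processing_effort=1)    # "it is at 5:16"
indirect = relevance(cognitive_effect=10, processing_effort=4)  # "22 minutes after 4:54"

assert direct > indirect  # the direct phrasing comes out as more relevant
```

Under any such scheme, gratuitous verbosity inflates the denominator while leaving the numerator fixed, which is exactly why it drags relevance down.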
The aspect of Relevance theory that is more relevant for this article than relevance itself is that utterances raise expectations of relevance not because we instinctively assume that the person talking to us is a cooperative and non-deceitful agent, but because “the search for relevance is a basic feature of human cognition, which communicators may exploit.” Expectation of relevance cannot be shut down.
The Guru effect
To lay down the next blocks of a mechanistic theory of hoaxing we need to take a deeper dive into the phenomenon Dan Sperber termed “the Guru effect”. The abstract of his seminal paper opens with a few lines that neatly encapsulate what the Guru effect is all about:
“Obscurity of expression is considered a flaw. Not so, however, in the speech or writing of intellectual gurus. All too often, what readers do is judge profound what they have failed to grasp.”
I bet my left kidney that these lines resonate with experiences anyone can recognize. The prototypical situation that forecasts the manifestation of the Guru effect starts when we privately experience an unsettling feeling of “not getting it” while we try to comprehend a proposition. From that moment, all that is needed to officially become a victim of the Guru effect is to take our failure to “get” what the proposition means as a token of its profundity, depth or wisdom. The Guru effect essentially transmutes incomprehension into perceived erudition.
The likelihood of falling prey to the Guru effect is modulated by a set of entangled elements. They have to do with context and with the specific structure of the proposition.
The first element is the expectation of relevance that is part and parcel of every act of communication. We are predisposed to invest some amount of processing effort to decode whatever is said to us. However, the “level” at which we expect relevance is not constant across contexts; it can be tuned depending on supplementary variables. In particular, it depends on the interplay between two built-in features of communication: the inherent complexity of the idea one wishes to transmit and the substantial freedom one has in choosing the words to express it. The first one is fixed. An implicit commitment to grapple with the inherent difficulty of an idea is made whenever we choose the idea we want to transmit. Then it all comes down to choosing the words that can make the idea more accessible (provided one wants to clarify rather than confuse one’s audience).
There is a catch. How much one can simplify the language to express an idea is bounded by the core complexity of the idea itself. Some concepts are so inherently complicated that the repertoire of “simple words” that can be used to express them comprehensively and economically is rigidly limited. The concept of “black hole entropy” cannot be explained without also explaining concepts from information theory, quantum mechanics, thermodynamics, Planck units, etc., all of which are complicated in their own right. One needs to write a book to translate all of them into relatively simple language. Linguistic simplicity brings down the amount of processing effort necessary to digest an idea but that too has limits. No matter how much we scramble for easy language to express inherently complex ideas, the demands of processing effort will remain considerably taxing.
The converse situation is also possible. Unnecessarily complex language can be and indeed is deliberately employed to raise the mental processing demands required to understand modestly complex ideas and with it, their perceived depth. I can say “it has been convened by comparable measures of arbitrariness and perceptual sensibility to our surroundings that a full turn of our planet around the Sun can be fractionated exactly into a quartet of stages” instead of simply saying “the year has four seasons”. Extremely pedantic, but it does the trick.
There are certain contexts, though, in which we not only tolerate inordinately complicated language; we almost feel disappointed if we are not served it. Consider the following quote by the influential French philosopher Jacques Derrida:
“The future states that there is no time other than the collapsation of that sensation of the mirror of the memories in which we are living. Common knowledge, but important nonetheless.”
Do not try to figure out what the best interpretation of this quote is (assuming that such interpretation exists), although, it might be already too late for this admonition given that expectation of relevance cannot be shut down.
The point here is that I want to extract a confession from you: I want you to admit that you tried particularly hard to comprehend the quote. Very plausibly, the reason you voluntarily took on the extra cognitive load is that I attributed the quote to the “influential French philosopher Jacques Derrida”. For this I must apologize. I lied in order to trick you into spending an inordinate amount of mental processing power for no cognitive rewards whatsoever.

The quote is not even from a philosopher. It is from the comedian and musician extraordinaire Reggie Watts, whose trademark humor is characterized by delivering intentionally nonsensical speeches in an unfittingly dignified style. It is a quote from one of his stand-up routines and, as you might be guessing by now, it is meant to be nonsense. But if I had been completely honest about the quote’s origins, I would have failed to make my point.

Your expectation of relevance was spiked in advance by my attributing the quote to a philosopher and, simultaneously, the amount of processing effort you were willing to invest also peaked (and could not be lowered even as the proposition became progressively more illogical). This happened because I primed you to expect philosophy from an influential and therefore relevant philosopher. You probably assumed that there were positive cognitive rewards in store if you cranked up your intellectual capacity for just a minute. If, additionally, even after failing to grasp the quote you were still under the impression that wisdom was locked inside it but your cognitive abilities were not up to the task of extracting it, then you were officially a victim of the Guru effect. Presumably, none of that would have happened if I had told you that the quote was from the comedian Reggie Watts. The reason is simple: one usually attributes more intellectual authority to a philosopher than to a comedian.
The fact that one can attribute an intentionally nonsensical quote from Watts to Derrida without most people raising an eyebrow is a remarkable phenomenon in its own right… and yes, “remarkable” was intended as a euphemism.
(Charlatan-type Gurus were not the kind Sperber took as his subject of study. He was interested in the audience’s response to hard-to-grasp utterances coming from “honest Gurus”: those who indeed intend to communicate a profound idea, albeit through highly abstruse language.)
So, what gives? Up to this point we have extracted some elements that will be incorporated into the master algorithm of intellectual hoaxing:
- Pick a difficult idea to communicate.
- Make it even more difficult to digest by wording it in cumbersome language.
- Have it pronounced from a position that inspires intellectual authority.
Following those steps increases the chances of eliciting charitable interpretations for barely comprehensible statements. Hoaxes can turn obscurity of expression from a flaw into an asset.
Do not (generally) go full Guru
After what has been said, one misapprehension might arise naturally: am I recommending always crossing the barrier from abstruse language into downright unintelligibility in order to maximize hoaxing potential? Or, taking that even further, deliberately crafting unintelligible but sophisticated sentences to mask pure meaninglessness (the modus operandi of dishonest Gurus)? No, that generally does not work. Only a consummate Guru figure (whether or not that individual fancies him/herself as such) can successfully pull that off.
When Deepak Chopra says:
“you can free yourself from aging by reinterpreting your body and grasping the link between belief and biology.”
…or Jordan Peterson says:
“God is how we imaginatively and collectively represent the existence of an action of consciousness across time; as the most real aspects of existence manifest themselves across the longest of time-frames but are not necessarily apprehensible as objects in the here and now.”
…and both are deemed profound, they are effectively, and very likely unintentionally, cashing in on their status as intellectual Gurus. (I do not believe that they were consciously acting as “dishonest Gurus” when pronouncing such statements.)
Academic hoaxes do not generally operate like this. Journal reviewers are not that gullible. Instead of going full Guru, hoaxers typically align their language and logic to conform to the “island epistemology” or “ways of knowing” of whatever field or community that they are trying to hoax.
However, some hoaxes allow for going almost full Guru. If one manages to use a thin semantic thread to connect concepts or ideas coming from very distinct epistemological domains, then, it is possible to get away with strings of utterances that can trigger the envy of any Guru. This is Alan Sokal writing in his classic hoax paper:
“One characteristic of postmodern science is its stress on non-linearity and discontinuity: this is evident, for example, in chaos theory and the theory of phase transitions as well as in quantum gravity. At the same time, feminist thinkers have pointed out the need for an adequate analysis of fluidity, in particular, turbulent fluidity.”
If this is not true-to-the-bone Guru style communication, then I don’t know what is.
In general, and this is crucial, it is possible to build ample wiggle room to embellish your prose with guru-esque language if the core message is neatly aligned with the confirmation biases of the target audience. There is effectively no amount of linguistic abstruseness that can make you inadvertently swallow a claim that is in opposition to your worldview… provided the statement is intelligible enough that its central claim can be unpacked. We read or listen much more critically to utterances that directly challenge our deeply rooted views. However, if we happen to agree with the central claim presented before us, then pandering to our cognitive biases with disorienting and sophisticated language constitutes an irresistible carrot-at-the-end-of-a-stick that can smoothly lead us to accept all sorts of nonsensical conclusions. Even nonsensical conclusions wedded to ethically bankrupt commitments.
Purchasing subcultural capital
Communities held together by a strongly idiosyncratic subcultural identity are incredibly finicky about the language used by their members and, above all, by wannabe members. One needs to be more than just a generically eloquent speaker to be adopted by such a community. One needs to speak in its terms, in its “slang”.
Slangs are very diverse; however, they share a commonality: they all have a high net worth of “subcultural capital”. The concept of subcultural capital was developed by Sarah Thornton as a natural extension of the idea of cultural capital put forward by the sociologist Pierre Bourdieu. In her triggeringly titled book “Kill All Normies”, Angela Nagle summarizes Thornton’s position on the role of slang:
“While cultural capital was once earned through being urbane and well-mannered, subcultural capital is earned, Thornton argued, through ‘being in the know’, using obscure slang and using the particularities of the subculture to differentiate yourself from mainstream culture and mass society.”
Slang is precious for a subculture not just because it is a necessary identity marker that aspiring members need to master, but also because its mastery is a great tool to gain prestige once inside the community.
Just as the mainstream has its fastidious language purists who obsessively police the community of speakers over every “misuse” of language, every subculture has its “slang purists”. But unlike mainstream language purists, who are a minority in any given generation, almost every member of a subculture is a slang purist. Moreover, slang purists go beyond just pointing out speech infractions; they are ready to exact a toll from the infractor’s funds of subcultural capital. And it does not stop there: slang sentinels are not only sensitive to violations in the technical use of slang, they are even more preoccupied with the authenticity with which it is spoken. They are highly sensitive to subjective aspects of communication like rhythm, cadence, “swag” and overall naturalness.
This exalted sensitivity to slang, and above all to authenticity, that is typical of subcultures makes perfect sense if one considers that what keeps a subculture alive is its ability to maintain strict markers that differentiate it from the mainstream. For a subculture, identity is paramount, and it can only remain true to itself as long as it is not infiltrated by fakes, pretenders or anyone perceived to have only a shallow familiarity with the subculture’s core “hipness”. Relaxing the requirement for authenticity is a death sentence via slow dissolution of the subculture into the mainstream.
The darker side of this stringent policing of authenticity is on display when it turns into disdain or even hatred of fakery and shallowness.
So, if your social experiment requires you to infiltrate a highly idiosyncratic community, you better become fluent in its slang and deploy it credibly.
Hoax mechanics applied to the Grievance studies hoax and the last piece of the puzzle
We have laid out some insights into human cognition and social psychology that seem to be at the center of what makes a successful academic hoax. To turn them into fundamental blocks of a “theory of hoaxing” they need to interact in a sufficiently well-orchestrated manner. Let’s apply this conceptual toolkit to a bit from the Grievance studies hoax to see if they fulfill this requirement.
Believe it or not, analyzing only the title of one of the published papers is sufficient to get a good view of every single concept discussed here, namely: expectation of relevance, the aspects that propitiate the “Guru effect” (like verbosity and authority), and the importance of speaking the correct “slang” to be taken seriously in a community with a strong subcultural identity. So, for reasons of space I will limit the analysis to the title of one of the bogus papers, but rest assured that glimpses of these representative mechanics can be spotted throughout the papers’ content.
Olivia Goldhill opens her piece for Quartz with a rhetorical question that quickly leads her to put her finger on one of the main issues discussed here:
“Why do men go to Hooters? This hardly seems like an academic question. How about ‘An Ethnography of Breastaurant Masculinity: Themes of Objectification, Sexual Conquest, Male Control, and Masculine Toughness in a Sexually Objectifying Restaurant?’ That has a certain scholarly ring.”
In these three lines we can see every element reviewed in this article at play.
First, expectation of relevance. But whose expectation of relevance are we supposed to gauge? Not ours; we have the vantage point of knowing that this “Ethnography” is in fact a hoax. The expectations of relevance that we need to consider are those of the reviewer who was asked by the journal Sex Roles (where this paper got published) to critically assess the article. The reviewer knows that he or she is part of the first and most crucial line of gatekeepers, whose task is to filter out unsound research and let through only what belongs in the halls of established scholarship. In such a context, it is natural and indeed encouraged for reviewers to crank up their expectation of relevance whenever a paper lands in their inbox. Reviewers exist in a context of permanently heightened expectations of relevance. As a side effect, their chances of falling prey to the odd hoax coming their way once or twice per decade are increased.
Is intellectual authority swaying the odds in favor of the hoaxers? Sure it is. As a rule, reviewers do not consider every author of every submitted article to be a superlative academic eminence; however, they do assume that the authors are substantially cognizant of the field in question. And in the vast majority of cases this is indeed so. So, when the reviewers see that the “Ethnography of Breastaurant Masculinity” was written by “Richard Baldwin, PhD” (a made-up name and title) affiliated with “Gulf Coast College”, they dutifully confer on him enough intellectual authority to take seriously whatever he has to say about nostalgic and frustrated wannabe patriarchs cramming the nearest Hooters.
Then there is the critical role of linguistic framing, which is at the center of Goldhill’s opening. Would the bogus paper have had the same chance of serious consideration if the title had plainly been “Why do men go to Hooters?” Surely not. Straightforwardness in the hoaxing business would not have worked because such a title does not have “a scholarly ring”. The authors understood that verbosity and complex language effectively disguise simple ideas to make them pass as profound ones (as explained in the “Guru effect” section). But the authors did not just use generic sophisticated language; they used carefully selected terms that carry solid subcultural capital in the community of gender studies: breastaurant (slang hit!), objectification, male control, masculine toughness… check, check, check, check.
This last point is key. The trio’s first rounds of bogus papers kept bouncing off the targeted journals’ editorial barriers. Why? They explain it themselves. They openly confess that in the beginning they crafted their papers according to what they thought would suffice to prove that they could “penetrate the leading journals with poorly researched hoax papers”, but that approach did not work. The trio realized that breezily mimicking the Grievance studies communities’ themes and lingo was a disguise too lousy to cheat anybody. It wasn’t nearly authentic enough. The rogue scholars had to actually engage with the “existing scholarship of these fields more deeply” in order to understand the moral structure and ideology of the Grievance studies community profoundly enough; only then were they able to deploy the community’s slang, motifs and terminology in a way that rubbed the cognitive biases of the community the right way. Addressing “pressing issues” in a manner that members of the community judged authentic, in accordance with their island epistemology, and endorsing moral directives aligned with their biases was the winning recipe. The papers finally carried enough subcultural capital to buy them access credentials into the Grievance studies crowd. Time to pop the Champagne.
Theory of hoax mechanics redux
What is the theory of hoax mechanics then? We have five main elements: expectation of relevance, intellectual authority, sophisticated lingo, the exploitation of biases, and subcultural capital. The first element is of a fundamental character given our cognitive makeup. The remaining ones can be tuned or exploited in order to raise the probability of success of a hoax. How so? Roughly like this:
- Intellectual authority raises expectations of relevance high enough so that a large amount of processing effort seems like a reasonable and worthy expenditure from the part of the target audience.
- The communicated idea is worded in sufficiently sophisticated and technical language in order to maximize its perceived wisdom, intellectual worth or depth. If the authority of the proponent is great enough, then it might be possible to almost cross the boundaries of intelligibility while preserving or even increasing perceived profundity.
- If the audience is part of a particularly ideological subculture with strong identity markers, then the sophisticated language has to be translated into the typical slang and linguistic motifs of the community.
- Crucially, the styled-up prose needs to accord with the cognitive, idiosyncratic and even moral biases of the community. Showing this level of familiarity and idiosyncratic compatibility reflects true authenticity credentials. Only then can one purchase sufficient subcultural capital to at least have a chance of being heard by the community.
Explanations are not justifications
One final note is due. The fact that it is possible to identify reasons, rooted in cognitive biases that we can all fall prey to and in contextual cues that increase the chances of being intellectually scammed, should not be taken as an exculpatory argument that expiates the blame of the journal reviewers who approved deeply flawed (and ethically compromised) scholarship for entry into the mainstream of their fields. They were in a position of responsibility that demanded they keep their guard at its maximum level of alert, a task that they plainly failed to accomplish. To explain how and why this might have happened is not to say that it is OK that it happened.